Article

Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection

State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2022, 14(7), 1473; https://doi.org/10.3390/sym14071473
Submission received: 20 June 2022 / Revised: 15 July 2022 / Accepted: 18 July 2022 / Published: 19 July 2022

Abstract: Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address the above issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Lastly, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, which are used to compare the defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.

1. Introduction

Surface defect detection is of great significance to product quality and has been widely applied in many important industrial fields such as automobiles, railroad tracks, and aerospace engines. Traditional surface defect detection is performed by human inspectors, which is time-consuming and imprecise. In recent years, deep learning methods have been widely used in the field of surface defect detection. However, due to the influence of high surface curvature or inconsistent surface reflection characteristics, surface defect images often exhibit uneven illumination, which seriously affects the accuracy of subsequent surface defect detection tasks.
To solve this problem, a simple preprocessing method based on the histogram transform is often used to correct uneven illumination, such as contrast limited adaptive histogram equalization (CLAHE) [1], gamma correction (GC) [2], logarithmic transformation, contrast stretching transformation, and normalization. The advantage of these methods is that they have low computational complexity and can directly improve the uneven grayscale distribution of the image. However, these methods can only alleviate the influence of uneven illumination on the defective image and may also introduce noise interference after preprocessing, which is not conducive to downstream defect detection tasks.
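To make this concrete, the following minimal sketch applies CLAHE and gamma correction to a grayscale defect image using OpenCV; the file path and parameter values are illustrative assumptions, not settings taken from this paper.

```python
import cv2
import numpy as np

def clahe_correct(gray: np.ndarray, clip_limit: float = 2.0,
                  tile_grid: tuple = (8, 8)) -> np.ndarray:
    """Contrast-limited adaptive histogram equalization on an 8-bit image."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(gray)

def gamma_correct(gray: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Gamma correction: normalize to [0, 1], apply the power law, rescale."""
    normalized = gray.astype(np.float32) / 255.0
    return np.clip((normalized ** gamma) * 255.0, 0, 255).astype(np.uint8)

# "defect.png" is a hypothetical input path.
img = cv2.imread("defect.png", cv2.IMREAD_GRAYSCALE)
enhanced = clahe_correct(img)
brightened = gamma_correct(img, gamma=0.6)
```

Both operations act purely on the gray-level histogram, which is why they can brighten dark regions but cannot separate illumination from surface reflectance.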
Recently, convolutional neural networks (CNNs) have been widely applied in image processing, including illumination correction [3,4]. Such models learn the relationship between image pairs with uneven and normal illumination via an end-to-end approach. Popular network structures, such as fully convolutional networks (FCNs) and encoder–decoder networks, have achieved good results when applied for uneven illumination correction. However, it is difficult to obtain uneven/normal illumination image pairs in an industrial context. Furthermore, deep-learning-based methods exhibit a strong dependence on the training dataset, and therefore, the difficulty of data acquisition has greatly restricted their application in complex scenarios.
The retinex model is the current mainstream illumination model [5]. According to retinex theory, an image can be essentially regarded as the product of an illumination component and a reflection component. Finding the solution to this model is an ill-conditioned inverse problem, and prior knowledge of certain constraints needs to be introduced. Scholars have designed many physical priors about the illumination and reflection components to constrain the solution space of the retinex model, which can effectively realize uneven illumination image enhancement.
Notably, previous studies on retinex-based defect image enhancement methods still have the following disadvantages in industrial scenarios:
(1) The current methods cannot effectively retain important defect information while eliminating uneven background illumination.
(2) Existing methods require multiple iterations to complete the uneven illumination enhancement of images and consequently cannot meet industrial real-time requirements.
This paper proposes a joint-prior-based image enhancement algorithm for uneven illumination correction that can quickly and effectively enhance images with uneven illumination. First, we design a simplified semi-coupled retinex model to transform the uneven illumination enhancement problem into the accurate estimation of the illumination component. Then, a multiscale Gaussian-difference-based background prior (BP) is proposed to avoid defect information loss by introducing semantic information. A deep denoised prior (DDP) is also designed to replace the physical prior knowledge in existing models, such as the L2-norm, to enable an efficient and fast solution of the retinex model. Finally, the effectiveness of the proposed algorithm is verified on public and private datasets. By comparing the defect images and enhanced results in a symmetric way, it can be found that our method is more conducive to downstream visual inspection tasks than state-of-the-art uneven illumination enhancement methods. In summary, our main contributions can be described as follows:
(1) We develop a novel joint prior retinex model to accurately remove uneven illumination in surface defect images. This method can effectively retain defect information while accurately eliminating uneven illumination.
(2) Considering the multiscale characteristics and low semantics of industrial defect images, we propose a formulation of background prior knowledge based on multiscale Gaussian differences to suppress the loss of defect information in the enhanced image.
(3) Taking full advantage of the powerful feature expression ability of deep learning, we propose an illumination constraint based on a depth prior to realize a fast iterative solution process for the illumination model.
(4) Experiments on public and private defect datasets demonstrate that our JPUIE method achieves better performance than previous competitive methods for uneven illumination enhancement.
The remainder of the article is organized as follows. Related work on the retinex model is discussed in Section 2. Section 3 describes the proposed method in detail. Section 4 presents the experimental results in comparison with those of different state-of-the-art methods. Finally, the conclusions and future work are summarized in Section 5.

2. Related Works

The retinex model [6] is mainly used to solve the problems of uneven illumination and color deviation in digital images. It is also widely applied in other image processing tasks, such as hazy and underwater image enhancement, to obtain high-contrast images. The retinex model regards an image $S \in \mathbb{R}^{n \times m}$ as the elementwise product of a reflectance component and an illumination component:
$$S = R \circ L,$$
where $R \in \mathbb{R}^{n \times m}$ denotes the reflectance of the imaged object surface, $L \in \mathbb{R}^{n \times m}$ denotes the scene illumination component, and ∘ denotes elementwise multiplication. Uneven illumination image enhancement methods based on the retinex model can be divided into two categories: model-driven methods and data-driven methods.
The model-driven methods exploit the local smoothness property of the illumination component and the piecewise constant property of the reflection component, and many prior-knowledge-guided uneven illumination enhancement algorithms have been proposed. Kimmel et al. [7] proposed a pyramid-based retinex variational model that uses the L2-norm to constrain the illumination and reflection components and applied the alternating direction method of multipliers (ADMM) to optimize the solution. Subsequently, Fu et al. [8] proposed the WVM, a probabilistic method for simultaneously estimating the components of the retinex model, which adds exponentially weighted coefficients to the regularization term to enhance the estimation of the illumination and reflection components in the logarithmic domain. To accelerate the solution of the model, Guo et al. [9] proposed an illumination estimation model based on maximum illumination initialization and used gradient descent as the approximation method to optimize the solution. The above methods solve the retinex model in the logarithmic domain. However, Gu et al. [10] argued that estimating the reflection component in the logarithmic domain causes the loss of image details, so they proposed a method for solving the retinex model in the image domain. Subsequently, the same authors [11] proposed a retinex model with a fractional-order regularization term, which preserves the original details of the image better than the traditional first-order and second-order regularization terms. On this basis, Dai et al. [12] introduced illumination initialization constraints and added multiexposure image fusion technology to preserve details after illumination enhancement. Similarly, Yue et al. [13] introduced a local smoothness constraint on the reflection component into the original model to achieve local contrast enhancement in the decomposed image. To further remove noise interference from the reflection component, Li et al. [14] first proposed a retinex model with an "illumination + reflection + noise" structure, which improves the decomposition through an additional noise constraint term. Ren et al. [15] proposed a low-rank regularized retinex model named LR3M, which incorporates the low-rank characteristics of the reflection component into the optimization model, together with a corresponding optimization-based solution method.
The data-driven methods mainly learn the complex relationship between high- and low-quality images so as to enhance the low-quality images. Wei et al. [16] proposed the first illumination optimization network based on the retinex model; the network adopts a two-stage approach to realize end-to-end image enhancement. Zhang et al. [17] proposed a human–computer interactive illumination enhancement network, also inspired by the retinex model, which consists of three modules: layer decomposition, reflectivity recovery, and illumination adjustment. By training on images with different illumination levels, the characteristic information of low-quality images can be recovered. Wang et al. [18] then proposed an underexposed image enhancement network that differs from previous methods: it enhances low-light images by introducing an intermediate illumination component to correlate the input images with the enhancement results.

3. Proposed Method

3.1. Motivation

Most existing illumination models can be expressed in the following form [13,14]:
$$\min_{R,L}\ \left\| S - R \circ L \right\|_F^2 + \mathcal{R}_1(R) + \mathcal{R}_2(L),$$
where $\mathcal{R}_1$ and $\mathcal{R}_2$ represent the regularization priors for the reflectance R and the illumination L, respectively. In recent years, many effective priors have been proposed to constrain the solution space to a more accurate region, such as the L2-norm, nonlocal similarity, and low-rank priors. However, these methods still have two shortcomings for the illumination correction of industrial images:
(1) Defect information loss: The decomposed illumination component will contain some residual defect information, especially for a large-area defect image. This will result in the loss of important defect information from the reflection component in the final enhancement result.
(2) Long time consumption: The existing priors are designed based on physical statistical information, and their constraint ability is limited. Therefore, multiple iterations are required during the model solution process, which increases the time consumption.
To overcome the shortcomings of the existing algorithms, we propose an uneven illumination enhancement method based on joint priors. First, the key to illumination correction lies in how well the illumination component is estimated. Inspired by [9], we adopt a semi-decoupled decomposition model that requires only the estimation of the illumination L, regarding R = S / L (computed elementwise) as the uneven illumination enhancement result. In this way, our method can not only eliminate the uneven illumination in industrial images more accurately but also reduce the solution time by nearly half. Second, we exploit the low semantic complexity and defect area diversity of industrial defect images and propose a background prior based on multiscale Gaussian differences to suppress the residual defect information during illumination estimation, thereby effectively retaining the defect information in the reflection component. Third, we exploit the powerful prior modeling ability of deep neural networks and design a deep denoising network as a regularization constraint for the illumination component. Compared with the physical priors in existing models, the proposed deep denoised prior has a better constraint effect, greatly shortening the running time by reducing the number of iterations.
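Concretely, once the illumination component has been estimated, the enhancement step of the semi-decoupled model is a single elementwise division. A minimal sketch (our illustration; the epsilon guard against division by zero is our own safeguard, not specified in the paper):

```python
import numpy as np

def semi_decoupled_enhance(S: np.ndarray, L: np.ndarray,
                           eps: float = 1e-6) -> np.ndarray:
    """Enhanced reflectance R = S / L (elementwise), for S, L in [0, 1]."""
    R = S / np.maximum(L, eps)  # eps avoids division by zero in dark regions
    return np.clip(R, 0.0, 1.0)
```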

3.2. Proposed Model and Optimization

As shown in Figure 1, the proposed retinex model is formally given by
$$\min_L\ \frac{1}{2} \left\| B \circ (S - L) \right\|_2^2 + \lambda D(L),$$
where the first term is the data fidelity term, the second term is the regularization prior term, and λ is the regularization parameter. B denotes the background prior, and D denotes the deep regularization prior. For simplicity, the proposed retinex model can be rewritten as
$$\min_L\ \frac{1}{2} \left\| S_0 - B \circ L \right\|_2^2 + \lambda D(L),$$
where $S_0 = B \circ S$. This is a nonconvex function and cannot be solved directly. Therefore, we adopt the alternating direction method of multipliers (ADMM) [19] to solve the optimization problem. First, an auxiliary variable U is introduced to split the retinex energy functional into tractable convex subproblems. Thus, the formula is rewritten as
$$L = \arg\min_{L}\ \frac{1}{2} \left\| S_0 - B \circ L \right\|_2^2 + \lambda D(U), \quad \text{s.t.}\ U = L.$$
By introducing the Lagrangian multiplier V, the formula can then be converted into the form of an augmented Lagrangian function:
$$\mathcal{L}(L, U, V) = \arg\min_{L,U,V}\ \frac{1}{2} \left\| S_0 - B \circ L \right\|_2^2 + \lambda D(U) + \frac{\theta}{2} \left\| U + V - L \right\|_2^2,$$
where θ is the Lagrangian parameter, which is empirically set to 1. According to the ADMM, the optimization problem can be solved as a sequence of subproblems:
$$L^{k+1} = \arg\min_L\ \frac{1}{2} \left\| S_0 - B \circ L \right\|_2^2 + \frac{\theta}{2} \left\| L - \tilde{L}^k \right\|_2^2,$$
$$U^{k+1} = \arg\min_U\ \frac{\theta}{2} \left\| U - \tilde{U}^k \right\|_2^2 + \lambda D(U),$$
$$V^{k+1} = V^k - (L^{k+1} - U^{k+1}),$$
where $\tilde{L}^k = U^k + V^k$ and $\tilde{U}^k = L^{k+1} - V^k$. In the solution process, one variable is updated at a time while the other two are fixed; in this way, all the variables are solved gradually in an alternating manner. The specific subproblems for optimization are as follows:
(1) L subproblem:
The L subproblem is a quadratic convex function, which has a closed-form solution as follows:
$$L^{k+1} = \left( B^T B + \theta I \right)^{-1} \left( B^T S_0 + \theta (U^k + V^k) \right).$$
However, solving Equation (10) incurs a high computational cost because it involves the inversion of a large matrix, B T B + θ I . Therefore, we adopt an approximate solution approach using the iterative conjugate gradient (CG) in place of the direct closed-form solution to reduce the time consumed for the whole iterative solution process.
$$L^{k+1} = \bar{B} L^k + \delta B^T S_0 + \delta \theta (U^k + V^k),$$
where $\bar{B} = (1 - \delta\theta) I - \delta B^T B$ and δ is the step size, which is empirically set to 0.1. $\bar{B}$ can be computed in advance to shorten the calculation time.
(2) U subproblem:
The subproblem for updating U can be regarded as the process of denoising the image U ˜ . To explain, we assume that the degraded model of the denoising problem can be expressed as
$$\tilde{U} = U + n,$$
where n denotes the noise. Based on the maximum a posteriori (MAP) derivation, we have the following:
$$p(U \mid \tilde{U}) \propto p(\tilde{U} \mid U)\, p(U) \propto e^{-\frac{\left\| U - \tilde{U} \right\|_2^2}{2\sigma^2}} \times e^{-\varphi(U)},$$
$$\max\ p(U \mid \tilde{U}) = \min\ \frac{1}{2\sigma^2} \left\| U - \tilde{U} \right\|_2^2 + \varphi(U).$$
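Comparing this MAP form with the U subproblem in Equation (8) makes the denoising interpretation explicit. The following one-line correspondence is our addition, following the standard plug-and-play reading:

```latex
% Matching the quadratic coefficients of Equation (8) and the MAP form:
\frac{\theta}{2}\|U - \tilde{U}^k\|_2^2 + \lambda D(U)
  \;=\; \lambda \left( \frac{1}{2(\lambda/\theta)} \|U - \tilde{U}^k\|_2^2 + D(U) \right)
  \;\Longrightarrow\; \sigma^2 = \frac{\lambda}{\theta}
```

That is, solving the U subproblem amounts to denoising $\tilde{U}^k$ at noise level $\sigma = \sqrt{\lambda/\theta}$, which is exactly the role played by the deep denoising network below.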
Existing physical priors, such as the L1-norm and TV, cannot strongly constrain the smoothness of the illumination component. To address this problem, we solve the U subproblem by building a deep denoising network:
$$U^{k+1} = D(\tilde{U}^k),$$
where D represents the deep denoising network.
As the number of iterations increases, the iteration process is terminated once the relative error of the illumination component falls below a threshold, i.e., $\frac{\left\| L^{k+1} - L^k \right\|_2}{\left\| L^{k+1} \right\|_2} \leq \varepsilon$. The threshold ε was empirically set to 0.0001.
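Putting Equations (7)–(9) and (11) together, the overall optimization can be sketched as the following loop. Treating the background prior B as an elementwise weight map, the illumination initialization, and the generic `denoiser` callable are our assumptions for illustration:

```python
import numpy as np

def jpuie_admm(S, B, denoiser, theta=1.0, delta=0.1, eps=1e-4, max_iter=50):
    """Sketch of the ADMM iteration for the joint-prior retinex model.

    S: input image in [0, 1]; B: background prior weight map (elementwise);
    denoiser: callable standing in for the deep denoised prior D(.).
    Returns the estimated illumination L; the enhanced result is R = S / L.
    """
    S0 = B * S                    # reweighted observation, S0 = B ∘ S
    L = S.copy()                  # illumination initialization (our assumption)
    U = L.copy()
    V = np.zeros_like(L)
    B_bar = (1.0 - delta * theta) - delta * B * B   # elementwise form of B-bar
    for _ in range(max_iter):
        L_prev = L
        # L subproblem: one gradient-style step, Equation (11)
        L = B_bar * L + delta * B * S0 + delta * theta * (U + V)
        # U subproblem: plug-and-play denoising of U~ = L - V
        U = denoiser(L - V)
        # dual update, Equation (9)
        V = V - (L - U)
        # stopping criterion on the relative change of L
        if np.linalg.norm(L - L_prev) / (np.linalg.norm(L) + 1e-12) < eps:
            break
    return L
```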

3.3. Background Prior

To effectively preserve the defect information in the enhanced image, we propose a background prior to decrease the weight of data consistency in defective areas. Industrial defect images have the following two characteristics:
(1) Low semantics: Unlike natural images, defect images consist of only two components: the defect and the background.
(2) Defect area diversity: The proportion of the defect area in the whole image can differ greatly from one image to another.
Therefore, we exploit the illumination robustness of the difference of Gaussians (DoG) [20] to design multiscale background prior knowledge. The DoG can reflect the local salient information of an image at the current scale:
$$DoG_{\sigma_i, \sigma_{i+1}}(x, y) = G_{\sigma_i}(x, y) - G_{\sigma_{i+1}}(x, y),$$
where $G_{\sigma}(x, y) = \frac{1}{2\pi\sigma^2} e^{-(x^2 + y^2)/2\sigma^2}$ is a 2D Gaussian function and σ is the standard deviation. The standard deviation at scale i is given by
$$\sigma_i = t^i \sigma_0, \quad i \in [0, n],$$
where $\sigma_0$ is the initial standard deviation, t is a positive constant coefficient, and i denotes the scale. The DoG-filtered image at scale i is expressed as follows:
$$L_i(x, y) = I(x, y) * DoG_{\sigma_i, \sigma_{i+1}}(x, y),$$
where $*$ denotes convolution.
To adapt to the defect area diversity, we employ minimum filtering to obtain the background prior as follows:
$$B(x, y) = \min_i\ L_i(x, y),$$
where B represents the background prior, which is used to guide the optimization of the retinex model and to suppress the loss of important defect information. The parameters $\sigma_0$ and t are set according to [21]. The number of scales n is set to 3 to balance the accuracy and efficiency of the background prior.
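A minimal sketch of this construction (our illustration; the values of sigma0 and t are placeholders standing in for the settings taken from [21]):

```python
import cv2
import numpy as np

def background_prior(img: np.ndarray, sigma0: float = 1.0,
                     t: float = 2.0, n: int = 3) -> np.ndarray:
    """Multiscale DoG background prior: pixelwise minimum over n scales.

    L_i = I * (G_{sigma_i} - G_{sigma_{i+1}}) is computed as the difference
    of two Gaussian-blurred copies of the image.
    """
    img = img.astype(np.float32)
    responses = []
    for i in range(n):
        s1 = (t ** i) * sigma0
        s2 = (t ** (i + 1)) * sigma0
        dog = cv2.GaussianBlur(img, (0, 0), s1) - cv2.GaussianBlur(img, (0, 0), s2)
        responses.append(dog)
    return np.min(np.stack(responses, axis=0), axis=0)
```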

3.4. Deep Denoised Prior

The regularization term is mainly used to constrain the smoothness of the illumination component. As analyzed in Section 3.2, the process of solving the subproblem focused on the regularization term can be regarded as an image denoising process. In this way, existing denoisers, such as the L2-norm, BM3D [22], and DnCNN [23], can serve as plug-and-play regularization priors. However, these denoisers have limited smoothing capabilities and may introduce artificial artifacts after the denoising process. In this paper, we apply a simple yet effective deep denoising prior network, which is shown in Figure 2. Inspired by the huge success of UNet in the field of image-to-image translation [24,25], we adopt an encoder–decoder structure as the backbone of our denoising network. The network contains two downsampling steps and two upsampling steps. We add three ResNet blocks between the encoder and decoder to increase the network depth, which enlarges the representation capacity of the network and stabilizes the training process. The details of this network are illustrated in Table 1.
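A PyTorch sketch of this backbone, following the layer specifications in Table 1 (the internal layout of each ResNet block and the upsampling padding are our assumptions):

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with two 3x3 convolutions (internal layout assumed)."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, 1, 1), nn.BatchNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)

def conv(cin, cout, stride):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

def deconv(cin, cout):
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 3, stride=2, padding=1, output_padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class DeepDenoiser(nn.Module):
    """Encoder-decoder denoiser per Table 1: two downsampling steps, three
    ResNet blocks, two upsampling steps, and a 1x1 Conv+Tanh output layer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv(1, 64, 1), conv(64, 128, 2), conv(128, 256, 2),
            ResBlock(256), ResBlock(256), ResBlock(256),
            deconv(256, 128), deconv(128, 64),
            nn.Conv2d(64, 1, 1), nn.Tanh())
    def forward(self, x):
        return self.net(x)
```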
The training loss function of the denoising network consists of two components: the reconstruction loss and the smoothness loss. We chose the mean-squared error (MSE) as the reconstruction loss, which is defined as:
$$\mathcal{L}_{Denoise}(\Theta) = \left\| D(\tilde{U}, \Theta) - U \right\|_2^2,$$
where $(\tilde{U}, U)$ is a noisy/clean image pair and Θ denotes the training parameters of the deep denoised network. Furthermore, we used a total variation (TV) regularizer to constrain the smoothness:
$$\mathcal{L}_{Smoothness} = \left\| \nabla D(\tilde{U}, \Theta) - \nabla U \right\|_2^2,$$
where ∇ represents the first-order difference operation.
Finally, the total loss can be expressed as:
$$\mathcal{L} = \mathcal{L}_{Denoise} + \eta\, \mathcal{L}_{Smoothness},$$
where η denotes the tradeoff parameter.
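A sketch of this combined loss in PyTorch, matching our reading of the reconstruction and smoothness terms above; the explicit finite-difference operators are our implementation choice:

```python
import torch
import torch.nn.functional as F

def finite_diff(x: torch.Tensor):
    """First-order differences along height and width of an NCHW tensor."""
    return x[..., 1:, :] - x[..., :-1, :], x[..., :, 1:] - x[..., :, :-1]

def denoiser_loss(model, noisy, clean, eta: float = 0.1) -> torch.Tensor:
    """Reconstruction (MSE) plus gradient-domain smoothness loss."""
    pred = model(noisy)
    l_denoise = F.mse_loss(pred, clean)
    gy_p, gx_p = finite_diff(pred)
    gy_c, gx_c = finite_diff(clean)
    l_smooth = F.mse_loss(gy_p, gy_c) + F.mse_loss(gx_p, gx_c)
    return l_denoise + eta * l_smooth
```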

4. Experiments and Analysis

To verify the effectiveness of the proposed algorithm, a series of experiments is presented. First, the experimental details are introduced. Second, we compare the proposed method with six state-of-the-art illumination correction methods on both public and private datasets. Third, an ablation study is carried out to investigate the effectiveness of the proposed method. All the experiments were conducted on a high-performance server equipped with dual NVIDIA Tesla P100 GPUs, a 40-core 2.4 GHz CPU, and 256 GB of memory.

4.1. Experiment Details

We evaluated the performance of our proposed method on two surface defect datasets with uneven illumination, the Rail Surface Discrete Defect Dataset (RSDD) and the Motor Commutator Surface Defect Dataset (MCSD), and the details are as follows:
(1) RSDD: The RSDD Dataset is a public high-speed rail dataset. Due to the high curvature of rail surfaces, the gray distributions of rail images are uneven. We cropped the images in the original dataset to a size of 224 × 224 and adopted data augmentation methods to increase the number of training samples. The dataset contains 1206 defect-free samples and 885 defective samples. We randomly split it into training and test sets at a ratio of 0.7:0.3.
(2) MCSD: The MCSD Dataset was collected on real production lines, as shown in Figure 3. This dataset includes 1420 motor commutator images with a size of 256 × 256. To verify the segmentation accuracy, the corresponding ground-truth images were generated with the open-source annotation tool LabelMe. We divided the dataset into 994 training images and 426 test images.
We trained the proposed denoiser on the above defect image datasets with PyTorch. To obtain noisy/clean image pairs, we added Gaussian noise to the defect images; the noise standard deviation σ was empirically set to 50, which yielded better performance. The denoiser was trained using the Adam optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.999$; the number of epochs and the batch size were set to 300 and 24, respectively, and the learning rate was set to $10^{-3}$. We trained a separate denoiser on each defect dataset. The parameter η balances the reconstruction loss and the total variation loss; since the reconstruction loss matters more than the total variation loss when training the denoising network, we set η to 0.1. The regularization parameter λ balances the data fidelity term and the regularization term; when λ is too large, the uniformity of the enhanced image cannot be guaranteed. In this experiment, λ was set to 0.1.
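The training procedure can be sketched as follows, assuming the `DeepDenoiser` and `denoiser_loss` sketches above; the dummy tensor dataset is a hypothetical stand-in for the real defect image crops:

```python
import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DeepDenoiser().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))

# Hypothetical stand-in for the clean defect image crops, scaled to [0, 1].
train_set = torch.rand(96, 1, 256, 256)
loader = DataLoader(train_set, batch_size=24, shuffle=True)

sigma = 50.0 / 255.0  # Gaussian noise level sigma = 50 on the 8-bit scale
for epoch in range(300):
    for clean in loader:
        clean = clean.to(device)
        noisy = clean + sigma * torch.randn_like(clean)  # synthesize the pair
        loss = denoiser_loss(model, noisy, clean, eta=0.1)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```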

4.2. Comparisons with State-of-the-Art Methods

We chose six popular algorithms for comparison, namely CLAHE [1], GC [2], JieP [26], GTV [27], LD [28], and STAR [29]. For fairness, we tested the compared methods using the source code published by the authors and set the parameters to their default values.

4.2.1. Qualitative Analysis

Figure 4 and Figure 5 show the enhancement results for images in the RSDD and MCSD Datasets, where Figure 4a and Figure 5a show the sample images and Figure 4b–h and Figure 5b–h show the enhancement results of the different methods. It can be seen that the compared methods either fail to accurately eliminate uneven illumination or lose defect information. The CLAHE and GC methods aim to adjust the gray distribution of uneven illumination images and can only partially alleviate the influence of uneven illumination. In particular, the CLAHE enhancement results contain many artifacts, which interfere with downstream defect detection tasks. JieP, GTV, LD, and STAR can effectively eliminate uneven illumination because these methods are based on retinex theory. However, the results of LD still contain a certain degree of uneven illumination in the background, while JieP, GTV, and STAR cause serious defect information loss after image enhancement, especially for large-area and high-contrast defect images. In comparison, our method generates the best image enhancement results, with more consistent backgrounds and more defect information retained than in the results of the other methods.

4.2.2. Quantitative Analysis

To evaluate the effectiveness of our image enhancement method on the downstream defect detection task, we employed two popular semantic segmentation models, UNet [30] and PSPNet [31], which are widely used in industrial scenarios for defect detection. The defect images with uneven illumination were directly fed into the segmentation models, serving as the baseline. For comparison, the original defect images were first enhanced by the different enhancement methods, and the enhanced results were then fed into the segmentation models. Due to space limitations, we show only the defect detection results before and after image enhancement with the UNet model in Figure 6 and Figure 7. It can be seen that the defect images without the enhancement operation failed to achieve high performance. In contrast, the defect images enhanced by our method are more recognizable and yield finer segmentation results.
In addition, to quantitatively analyze the segmentation performance, we employed the IoU metric to evaluate the defect detection accuracy [32]. Table 2 summarizes the detection results. On the whole, our method achieved the best detection performance compared with the other enhancement methods. Specifically, on the RSDD Dataset, the IoU of our method was 3.1% and 4.3% higher than that of the second-best method with UNet and PSPNet, respectively. On the MCSD Dataset, the IoU of our method was 1.2% and 2.1% higher than that of the second-best method, respectively. Two aspects are worth noting. First, compared with the original defect images, the results enhanced by our method had better detection performance, which proves that the proposed enhancement method is beneficial to the downstream defect detection task. Second, not all enhancement algorithms improve the downstream detection accuracy, because some methods cannot accurately eliminate the uneven illumination or lose important defect information during the enhancement process, which hampers the defect detection performance.
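For reference, the IoU of a predicted binary mask against its ground truth is computed as intersection over union; a minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define IoU as 1
        return 1.0
    return float(np.logical_and(pred, gt).sum()) / float(union)
```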

4.2.3. Running Time

To evaluate the computational complexity of the proposed algorithm, we tested the inference time on MCSD sample images of size 256 × 256. The average inference times of CLAHE, GC, JieP, GTV, LD, STAR, and the proposed method are shown in Table 3. Although the inference time of the proposed method is not the shortest, being longer than those of CLAHE and GC, it reaches 0.112 s per image, meeting the real-time requirements of industrial scenarios. Compared with the other retinex-based image enhancement methods, the proposed method greatly shortens the time consumption. There are two main reasons for the runtime superiority of our method: (1) we adopted a semi-decoupled retinex model, which shortens the algorithm runtime by nearly half, and (2) we used the deep denoised prior, which has a stronger regularization constraint effect, leading to faster iterative convergence.

4.3. Ablation Study

We performed several ablation studies on MCSD to demonstrate the effectiveness of the deep denoised prior and background prior in our method:
(1) The effect of the deep denoised prior: The deep denoised prior aims to efficiently and effectively impose the smoothness constraint on the estimated illumination images. To verify its effectiveness, we took the simplified semi-coupled retinex model with an L1-norm regularization term (SCR) as the baseline and then replaced the L1-norm regularization term with other denoising priors, namely BM3D, DnCNN, and our deep denoised prior (DDP). The corresponding retinex decomposition results are shown in Figure 8b–e. It can be seen that the illumination images obtained with the compared denoising priors contain a large amount of texture information or artificial artifacts, which leads to the loss of fine details in the estimated reflectance images. In contrast, our estimated illumination images are more piecewise smooth, and the detail information is effectively preserved after image enhancement. The defect detection results of the different denoising priors are displayed in Table 4; the deep denoised prior achieves better performance than the other denoising priors. This shows that the proposed deep denoised prior is more suitable for uneven illumination image enhancement.
Furthermore, we analyzed the convergence properties of the different denoising priors. The iterative curves of the estimated illumination images are shown in Figure 9; it can be seen that the iterative processes of all the denoising priors are monotonically convergent. The deep denoised prior has the fastest convergence speed, needing only six iterations to converge and obtain the decomposition results.
(2) The effect of the background prior: The background prior (BP) is used to prevent the loss of defect information after image enhancement. Figure 8f shows the retinex decomposition results with the background prior. It can be seen that there is no residual defect information in the estimated illumination image, which effectively retains the defect information in the reflectance image. As shown in Table 4, after adding the background prior, the IoU increased from 0.742 to 0.767. This proves that the background prior is conducive to subsequent defect detection tasks.

5. Conclusions

In this paper, we proposed a novel uneven illumination image enhancement method, JPUIE, for surface defect detection. In JPUIE, we transformed the uneven illumination enhancement problem into a problem of accurate illumination estimation and established a simplified and effective semi-coupled retinex illumination model. Then, semantic information was introduced to establish the background prior, so as to avoid the loss of defect information after image enhancement. A deep denoised prior was designed to improve the optimization efficiency of the proposed retinex model. Finally, we presented adequate quantitative and qualitative experiments to compare our method with state-of-the-art uneven illumination enhancement approaches. To verify the generalization of our method, all the experiments were carried out on a public defect image dataset, RSDD, and a real defect image dataset, MCSD. The experimental results showed that the defect images enhanced by our method achieved the highest defect detection accuracy compared with other enhancement methods, demonstrating that our method is superior in improving image quality.
In the future, we will consider a variety of image distortion types, such as defocus blur and noise, and establish a unified image enhancement method to improve image quality in complex industrial scenes.

Author Contributions

Conceptualization, Y.Q.; methodology, B.L.; software, S.N.; validation, T.N.; formal analysis, Y.Q.; investigation, W.L.; resources, Y.Q.; data curation, B.L.; writing—original draft preparation, S.N.; writing—review and editing, T.N.; visualization, Y.Q.; supervision, Y.Q.; project administration, B.L.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China, Grant Number 2018YFB1700500.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Inquiries regarding experimental data should be made by contacting the first author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Reza, A.M. Realization of the Contrast Limited Adaptive Histogram Equalization (CLAHE) for Real-Time Image Enhancement. J. VLSI Signal Process. Syst. Signal Image Video Technol. 2004, 38, 35–44.
2. Shih, H.; Fan, C.; Chiu, Y.S. Efficient Contrast Enhancement Using Adaptive Gamma Correction with Weighting Distribution. IEEE Trans. Image Process. 2013, 22, 1032–1041.
3. Han, Y.; Huang, L.; Hong, Z.; Cao, S.; Zhang, Y.; Wang, J. Deep Supervised Residual Dense Network for Underwater Image Enhancement. Sensors 2021, 21, 3289.
4. Liu, X.; Yang, Y.; Zhong, Y.; Xiong, D.; Huang, Z. Super-Pixel Guided Low-Light Images Enhancement with Features Restoration. Sensors 2022, 22, 3667.
5. Xia, W.; Chen, E.C.S.; Pautler, S.E.; Peters, T.M. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing 2022, 483, 460–473.
6. Jung, C.; Sun, T.; Jiao, L. Eye detection under varying illumination using the retinex theory. Neurocomputing 2013, 113, 130–137.
7. Kimmel, R.; Elad, M.; Shaked, D.; Keshet, R.; Sobel, I. A variational framework for Retinex. Int. J. Comput. Vis. 2003, 52, 7–23.
8. Fu, X.; Liao, Y.; Zeng, D.; Huang, Y.; Zhang, X.P.; Ding, X. A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation. IEEE Trans. Image Process. 2015, 24, 4965–4977.
9. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
10. Gu, Z.; Li, F.; Lv, X.G. A detail preserving variational model for image Retinex. Appl. Math. Model. 2019, 68, 643–661.
11. Gu, Z.; Li, F.; Fang, F.; Zhang, G. A Novel Retinex-Based Fractional-Order Variational Model for Images With Severely Low Light. IEEE Trans. Image Process. 2020, 29, 3239–3253.
12. Dai, Q.; Pu, Y.F.; Rahman, Z.; Aamir, M. Fractional-Order Fusion Model for Low-Light Image Enhancement. Symmetry 2019, 11, 512–521.
13. Yue, H.; Yang, J.; Sun, X.; Wu, F.; Hou, C. Contrast Enhancement Based on Intrinsic Image Decomposition. IEEE Trans. Image Process. 2017, 26, 3981–3994.
14. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
15. Ren, X.; Yang, W.; Cheng, W.H.; Liu, J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE Trans. Image Process. 2020, 29, 5862–5876.
16. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, 3–6 September 2018; p. 155.
17. Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, MM 2019, Nice, France, 21–25 October 2019; pp. 1632–1640.
18. Wang, R.; Zhang, Q.; Fu, C.; Shen, X.; Zheng, W.; Jia, J. Underexposed Photo Enhancement Using Deep Illumination Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, 16–20 June 2019; pp. 6849–6857.
19. Mohamadi, N.; Dong, M.; ShahbazPanahi, S. Low-Complexity ADMM-Based Algorithm for Robust Multi-Group Multicast Beamforming in Large-Scale Systems. IEEE Trans. Signal Process. 2022, 70, 2046–2061.
20. Sandic-Stankovic, D.; Kukolj, D.; Callet, P.L. Quality Assessment of DIBR-Synthesized Views Based on Sparsity of Difference of Closings and Difference of Gaussians. IEEE Trans. Image Process. 2022, 31, 1161–1175.
21. Yang, Y.X.; Li, Q.; Chen, P.; Zhang, X.Y. Strip surface defect detection algorithm based on background difference. In Proceedings of the 2010 Second Pacific-Asia Conference on Circuits, Communications and System, Beijing, China, 1–2 August 2010; Volume 2, pp. 23–26.
22. Katkovnik, V.; Egiazarian, K.O. Sparse phase imaging based on complex domain nonlocal BM3D techniques. Digit. Signal Process. 2017, 63, 72–85.
23. Zeng, T.; Li, J.; Hu, M.; Hou, S.; Zhang, Q. Toward Higher Performance for Channel Estimation With Complex DnCNN. IEEE Commun. Lett. 2020, 24, 198–201.
24. Gao, F.; Xu, X.; Yu, J.; Shang, M.; Li, X.; Tao, D. Complementary, Heterogeneous and Adversarial Networks for Image-to-Image Translation. IEEE Trans. Image Process. 2021, 30, 3487–3498.
25. Wang, Y.; Zhang, Z.; Hao, W.; Song, C. Multi-Domain Image-to-Image Translation via a Unified Circular Framework. IEEE Trans. Image Process. 2021, 30, 670–684.
26. Cai, B.; Xu, X.; Guo, K.; Jia, K.; Hu, B.; Tao, D. A Joint Intrinsic-Extrinsic Prior Model for Retinex. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4020–4029.
27. Liu, X.; Zhai, D.; Bai, Y.; Ji, X.; Gao, W. Contrast Enhancement via Dual Graph Total Variation-Based Image Decomposition. IEEE Trans. Circuits Syst. Video Technol. 2020, 30, 2463–2476.
28. Tang, M.; Xie, F.; Zhang, R.; Jiang, Z.; Bovik, A.C. A Local Flatness Based Variational Approach to Retinex. IEEE Trans. Image Process. 2020, 29, 7217–7232.
29. Xu, J.; Hou, Y.; Ren, D.; Liu, L.; Zhu, F.; Yu, M.; Wang, H.; Shao, L. STAR: A Structure and Texture Aware Retinex Model. IEEE Trans. Image Process. 2020, 29, 5022–5037.
30. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015—18th International Conference, Munich, Germany, 5–9 October 2015; Volume 9351, pp. 234–241.
31. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239.
32. Azizinasab, B.; Hasanzadeh, R.P.R.; Hedayatrasa, S.; Kersemans, M. Defect Detection and Depth Estimation in CFRP Through Phase of Transient Response of Flash Thermography. IEEE Trans. Ind. Inform. 2022, 18, 2364–2373.
Figure 1. The flowchart of the proposed JPUIE method.
Figure 2. The deep denoised prior network.
Figure 3. Motor commutator defect detection equipment. (a) Motor commutator production line. (b) Motor commutator vision system.
Figure 4. Comparisons of enhanced results by different methods on the RSDD Dataset. (a) Input images. (b–h) Enhanced results by CLAHE [1], GC [2], JieP [26], GTV [27], LD [28], STAR [29], and our method, respectively.
Figure 5. Comparisons of enhanced results by different methods on the MCSD Dataset. (a) Input images. (b–h) Enhanced results by CLAHE [1], GC [2], JieP [26], GTV [27], LD [28], STAR [29], and our method, respectively.
Figure 6. Comparisons of defect detection results on the RSDD Dataset. (a) Input images. (b) Label. (c) Baseline. (d–j) Defect detection results of the enhanced images after applying CLAHE [1], GC [2], JieP [26], GTV [27], LD [28], STAR [29], and our method, respectively.
Figure 7. Comparisons of defect detection results on the MCSD Dataset. (a) Input images. (b) Label. (c) Baseline. (d–j) Defect detection results of the enhanced images after applying CLAHE [1], GC [2], JieP [26], GTV [27], LD [28], STAR [29], and our method, respectively.
Figure 8. Qualitative analysis of ablation experiments. (a) Input image. (b) SCR. (c) SCR + BM3D. (d) SCR + DnCNN. (e) SCR + DDP. (f) SCR + DDP + BP (ours).
Figure 9. The convergence curves of different denoised priors.
Table 1. Architecture of the deep denoised prior network.

Layer        | Operator       | Kernel Size | Stride | Output Channels
Conv1        | Conv&BN&ReLU   | 3           | 1      | 64
Conv2        | Conv&BN&ReLU   | 3           | 2      | 128
Conv3        | Conv&BN&ReLU   | 3           | 2      | 256
ResnetBlock1 | -              | 3           | 1      | 256
ResnetBlock2 | -              | 3           | 1      | 256
ResnetBlock3 | -              | 3           | 1      | 256
Deconv1      | Deconv&BN&ReLU | 3           | 2      | 128
Deconv2      | Deconv&BN&ReLU | 3           | 2      | 64
Conv4        | Conv&Tanh      | 1           | 1      | 1
Table 2. Defect segmentation results (IoU) of different enhancement methods.

Method   | RSDD (UNet) | RSDD (PSPNet) | MCSD (UNet) | MCSD (PSPNet)
Baseline | 0.671       | 0.693         | 0.682       | 0.713
CLAHE    | 0.678       | 0.712         | 0.679       | 0.704
GC       | 0.659       | 0.718         | 0.701       | 0.725
JieP     | 0.731       | 0.758         | 0.735       | 0.746
GTV      | 0.714       | 0.728         | 0.742       | 0.744
LD       | 0.715       | 0.731         | 0.724       | 0.738
STAR     | 0.721       | 0.736         | 0.630       | 0.673
Our      | 0.762       | 0.801         | 0.754       | 0.767
Table 3. Average inference time of different methods.

Method   | CLAHE | GC    | JieP  | GTV   | LD    | STAR  | Our
Time (s) | 0.007 | 0.002 | 0.781 | 0.861 | 5.542 | 0.340 | 0.112
Table 4. Quantitative analysis of ablation experiments.

Method | SCR   | SCR + BM3D | SCR + DnCNN | SCR + DDP | SCR + DDP + BP
IoU    | 0.682 | 0.731      | 0.735       | 0.742     | 0.767

