Article

Total Variation Based Neural Network Regression for Nonuniformity Correction of Infrared Images

Department of Microelectronics, Xidian University, Xi’an 710071, China
*
Author to whom correspondence should be addressed.
Symmetry 2018, 10(5), 157; https://doi.org/10.3390/sym10050157
Submission received: 26 March 2018 / Revised: 3 May 2018 / Accepted: 12 May 2018 / Published: 14 May 2018

Abstract
Many existing scene-adaptive nonuniformity correction (NUC) methods suffer from a slow convergence rate and ghosting effects. In this paper, an improved NUC algorithm based on total variation penalized neural network regression is presented. Our work mainly focuses on solving the overfitting problem in the least mean square (LMS) regression of traditional neural network NUC methods, which is realized by employing a total variation penalty in the cost function and redesigning the processing architecture. Moreover, an adaptive gated learning rate is presented to further reduce the ghosting artifacts and guarantee fast convergence. The performance of the proposed algorithm is comprehensively investigated with both artificially corrupted test sequences and real infrared image sequences. Experimental results show that the proposed algorithm can effectively accelerate convergence, suppress ghosting artifacts, and improve correction precision.

1. Introduction

Infrared imaging systems generally suffer from the detector’s nonuniform response, so-called nonuniformity (NU) or fixed pattern noise (FPN), which strongly degrades the resolving capability of the infrared image. In order to eliminate the NU or FPN, the fundamental solution is to develop more sophisticated techniques and new materials in the manufacture of infrared sensors. However, the abovementioned development is a slow and difficult process. Therefore, nonuniformity correction (NUC), being an alternative solution based on signal processing methods, is now applied to most infrared imaging applications.
NUC methods are mainly classified into two categories: reference-based NUC (RBNUC) and scene-based NUC (SBNUC). RBNUC employs the response of blackbody radiation at different temperatures to calculate the gain and offset parameter by using linear or high-order fitting. RBNUC is not applicable to many applications because its parameter calibration process inevitably interrupts the normal imaging operation. Even more serious, spatial nonuniformity tends to drift slowly over time, which leads to the failure of RBNUC methods with fixed parameters. To address this problem, many SBNUC approaches have been developed to minimize the correction error resulting from the drifting response of infrared focal plane array (IRFPA) sensors [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16].
SBNUC methods are motion-dependent and easily affected by extreme scenes, which makes most of them face the problem of ghosting artifacts. These artifacts are generally generated when global scene movement happens or partial content of the scene slows down or abruptly halts [8]. To suppress the ghosting effect, Harris et al. designed a deghosting module in the constant-statistics method [2,3]. Qian et al. introduced spatial filtering to improve the temporal high-pass filtering method [4], which was then ameliorated by Bai et al. [5] and Zuo et al. [6]. Moreover, adaptive learning rate [7,8,9], edge-directed parameter estimation [10], edge-preserving spatial filter [11,12,13,14], and temporal statistics thresholding [15] strategies have been proposed for the neural network NUC (NN-NUC) algorithm to prevent ghosting artifacts and detail loss. The abovementioned efforts have been confirmed to be effective but remain insufficient: serious ghosting artifacts accompanied by slow convergence are still a challenging problem to be solved.
In seeking a small residual correction error and a fast convergence rate, we present a novel scene-adaptive NUC algorithm in this paper. Specifically, the proposed adaptive NUC method relies on the employment of a total variation (TV) penalty to solve the overfitting problem in the linear regression estimation of the correction parameters, which helps to eliminate FPN as far as possible and preserve details in the iterative correction process. Moreover, an adaptive gated learning rate is presented to further suppress the ghosting artifacts and retain a higher convergence rate.
The rest of this paper is organized as follows. Section 2 starts with a detailed description of the typical NN-NUC method; the proposed TV-regularized NN-NUC algorithm, along with its main improvements, is then presented and discussed in Section 3. Following that, the performance of the proposed algorithm is verified with artificially-corrupted sequences and real infrared sequences in Section 4. The paper ends with a summary in Section 5.

2. Typical NN-NUC Method

Based on vision bionics research, Scribner et al. presented a retina-like neural network structure and its corresponding NUC method [1]. The NN-NUC method links every nerve cell in a hidden layer to a pixel in an input layer and implements the calculation of the error function in the hidden layer. Following that, it iteratively minimizes the squared error function to obtain the correction parameters in the output layer based on a least mean square (LMS) algorithm. Finally, the NUC for the IRFPA is implemented according to those parameters.
The response of each detector in an IRFPA is generally considered to be linear over most of its dynamic range, so the relationship between the scene irradiance $x_{ij}$ and the detector response $y_{ij}$ can be expressed as

$y_{ij} = a_{ij} x_{ij} + b_{ij}$,  (1)

where $a_{ij}$ and $b_{ij}$ are the gain and offset parameters associated with the $(i, j)$ detector, respectively.
According to the linear response model defined in Equation (1), the calibrated output of the $(i, j)$ detector in the IRFPA can be represented as

$\hat{X}_{ij} = \hat{W}_{ij}^{T} Y_{ij}$,  (2)

where $\hat{X}_{ij}$ and $Y_{ij} = (y_{ij}, 1)^{T}$ denote the corrected value and the observation value, respectively. $\hat{W}_{ij} = (\hat{g}_{ij}, \hat{o}_{ij})^{T}$ stands for the correction parameter matrix, in which $\hat{g}_{ij} = 1/a_{ij}$ and $\hat{o}_{ij} = -b_{ij}/a_{ij}$ are the correction parameters for gain and offset, respectively.
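Concretely, the correction of Equation (2) is an element-wise affine map per detector. A minimal sketch (assuming NumPy; the function name and the single-pixel example are illustrative, not from the paper):

```python
import numpy as np

def correct_frame(y, g_hat, o_hat):
    """Apply Equation (2) element-wise: X_hat = g_hat * y + o_hat.

    y, g_hat, and o_hat are 2-D arrays of identical shape
    (one gain/offset pair per detector).
    """
    return g_hat * y + o_hat

# A detector with gain a = 2.0 and offset b = 4.0 maps irradiance x = 3.0
# to the response y = 10.0; the ideal parameters g = 1/a = 0.5 and
# o = -b/a = -2.0 recover x exactly.
y = np.array([[10.0]])
x_hat = correct_frame(y, np.full_like(y, 0.5), np.full_like(y, -2.0))
```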
Since LMS techniques are utilized to iteratively estimate the correction parameters, the needed error function can be defined as

$E_{ij} = (\hat{X}_{ij} - D_{ij})^2$,  (3)

where the desired target value $D_{ij}$, being the output of the $(i, j)$ neuron located at the hidden layer, is calculated by

$D_{ij} = \dfrac{1}{(2r+1)^2} \sum_{m=-r}^{r} \sum_{n=-r}^{r} Y_{i+m,\,j+n}$,  (4)

where $r$ denotes the radius of the averaging kernel.
Thereafter, a stochastic gradient descent algorithm is applied to minimize the error function $E_{ij}$, which yields the recursive update of the correction parameter matrix $\hat{W}_{ij}$. Following that, the adaptive correction results can be obtained from the output layer according to Equation (2).
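One iteration of this scheme can be sketched as follows (an illustrative NumPy sketch, not the authors' implementation; the constant factor from differentiating the squared error is absorbed into the learning rate `mu`):

```python
import numpy as np

def nn_nuc_step(y, g, o, r=1, mu=0.05):
    """One LMS iteration of the classic NN-NUC (illustrative sketch).

    The desired target D is the (2r+1) x (2r+1) local mean of the
    observation (Equation (4)); the squared error (X_hat - D)^2 is
    reduced by a stochastic gradient step on the gain g and offset o.
    """
    h, w = y.shape
    x_hat = g * y + o                                  # Equation (2)
    yp = np.pad(y, r, mode='edge')                     # edge-padded box filter
    d = sum(yp[i:i + h, j:j + w]
            for i in range(2 * r + 1)
            for j in range(2 * r + 1)) / (2 * r + 1) ** 2
    err = x_hat - d                                    # error w.r.t. target
    return g - mu * err * y, o - mu * err, x_hat       # updated g, o, output

# On a flat observation the corrected output approaches the local mean:
y = np.full((6, 6), 5.0)
g, o, xh = nn_nuc_step(y, np.ones_like(y), np.ones_like(y))
_, _, xh2 = nn_nuc_step(y, g, o)
```

After the second iteration the residual error has shrunk, illustrating the convergence behavior the LMS recursion is designed for.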

3. TV-Regularized NN-NUC Algorithm

3.1. Scheme Description

In order to solve the overfitting problem in the LMS regression of the traditional NN-NUC method, a total variation penalty is introduced into the cost function. In the newly presented algorithm, the TV-regularized cost function can be expressed as
$J_{ij}(n) = E_{ij}(n) + \delta\, \Upsilon_{TV}(\hat{X}_{ij}(n))$,  (5)

where $n$ stands for the frame index, the error function $E_{ij}(n)$ acts as the fidelity term to preserve the characteristics of the original image, and the parameter $\delta$ is a scalar for properly weighting the similarity cost against the regularization term $\Upsilon_{TV}(\hat{X}_{ij}(n))$.
In Equation (5), the TV criterion penalizes the total change in the corrected image as quantified by the $L_1$ norm of the gradient magnitude

$\Upsilon_{TV}(\hat{X}_{ij}(n)) = \|\nabla \hat{X}_{ij}(n)\|_1$,  (6)

where $\nabla$ is the gradient operator. The advantage of the TV criterion is that it tends to preserve edges in the denoising without severely penalizing steep local gradients [17,18].
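As a quick illustration of Equation (6), the L1 gradient norm can be computed with finite differences (an illustrative NumPy sketch using the anisotropic, per-axis form; the paper does not specify a discretization):

```python
import numpy as np

def tv_norm(img):
    """Anisotropic discrete TV: L1 norm of forward differences, Equation (6)."""
    return (np.abs(np.diff(img, axis=1)).sum()
            + np.abs(np.diff(img, axis=0)).sum())

flat = np.full((8, 8), 3.0)              # constant image: zero TV
step = flat.copy()
step[:, 4:] = 7.0                        # a single clean vertical edge
```

A constant image has zero TV, while a single sharp edge contributes only a small, finite value; this is why the penalty smooths noise without heavily punishing edges.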
According to the steepest descent method, we minimize the cost function and deduce the iterative formula of the correction parameters as

$\hat{W}_{ij}(n+1) = \hat{W}_{ij}(n) - \mu_{ij}(n)\, G_p(J_{ij}(n))$,  (7)

where $G_p(\cdot)$ represents the partial derivative operator and $\mu_{ij}(n)$ is a learning rate that governs the convergence of the algorithm. The vector $\hat{W}_{ij}(n)$ that minimizes Equation (5) will be the solution to

$G_p(J_{ij}(n)): \begin{cases} \partial J_{ij}(n)/\partial \hat{g}_{ij} = 0 \\ \partial J_{ij}(n)/\partial \hat{o}_{ij} = 0 \end{cases}$  (8)
Thereafter, incorporating the solution of Equation (8) into Equation (7), we can respectively obtain the iterative formulas of the estimated gain and offset correction parameters

$\hat{g}_{ij}(n+1) = \hat{g}_{ij}(n) - \mu_{ij}(n) \{ F_{ij}(n) + \delta R_{ij}(n) \}\, y_{ij}(n)$,  (9)

$\hat{o}_{ij}(n+1) = \hat{o}_{ij}(n) - \mu_{ij}(n) \{ F_{ij}(n) + \delta R_{ij}(n) \}$,  (10)

where the components representing the derivatives of the fidelity and penalty terms can be respectively expressed as

$F_{ij}(n) = \hat{X}_{ij}(n) - D_{ij}(n)$  (11)

and

$R_{ij}(n) = \operatorname{div}\!\left( \dfrac{\nabla \hat{X}_{ij}(n)}{\sqrt{|\nabla \hat{X}_{ij}(n)|^2 + \varepsilon}} \right)$,  (12)

where $\operatorname{div}(\cdot)$ denotes the divergence operator, and $\varepsilon = 1 \times 10^{-6}$ is a small constant added to avoid the non-differentiability of the TV penalty.
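Equation (12) can be discretized with forward differences for the gradient and backward differences (their adjoint) for the divergence. The following NumPy sketch illustrates one such discretization; the paper does not fix the scheme, so this choice is an assumption:

```python
import numpy as np

def tv_gradient(x, eps=1e-6):
    """Sketch of R = div( grad(x) / sqrt(|grad(x)|^2 + eps) ), Equation (12)."""
    gx = np.diff(x, axis=1, append=x[:, -1:])     # forward diff, replicated edge
    gy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)        # eps keeps this differentiable
    px, py = gx / mag, gy / mag                   # normalized gradient field
    return (np.diff(px, axis=1, prepend=px[:, :1])    # backward-difference
            + np.diff(py, axis=0, prepend=py[:1, :])) # divergence

spike = np.zeros((5, 5))
spike[2, 2] = 1.0                                 # an isolated FPN-like outlier
```

On a flat region the response is zero, while at an isolated outlier the divergence is strongly negative, so the penalty term in Equations (9) and (10) drives the update toward smoothing that pixel.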
Finally, the calibrated output of the $(n+1)$th iteration can be given by

$\hat{X}_{ij}(n+1) = \hat{W}_{ij}^{T}(n+1)\, Y_{ij}(n+1)$.  (13)
According to the previously described theory, the complete scheme representing the whole process of the proposed TV-regularized neural-network NUC (TVRNN-NUC) method is presented in Figure 1.

3.2. Gated Adaptive Learning Rate

In order to eliminate burn-in ghosting caused by a lack of scene motion, we present a gated adaptive learning rate $\mu_{ij}(n)$, which can be defined as

$\mu_{ij}(n) = \begin{cases} \dfrac{\eta_{ij}(n)}{1 + \sigma_{Y_{ij}}(n)}, & |D_{ij}(n) - B_{ij}(n-1)| > K \\ 0, & \text{else} \end{cases}$  (14)

and

$B_{ij}(n) = \begin{cases} D_{ij}(n), & |D_{ij}(n) - B_{ij}(n-1)| > K \\ B_{ij}(n-1), & \text{else,} \end{cases}$  (15)

where $K$ is the threshold value that determines the updating of the learning rate, and $\sigma_{Y_{ij}}(n)$ stands for the local standard deviation of the current observation $Y_{ij}(n)$, which makes $\mu_{ij}(n)$ increase in flat regions and decrease in textured regions. $B_{ij}(0)$ is initialized so that $|D_{ij}(1) - B_{ij}(0)| > K$ holds for all $i, j$. The variable step size coefficient, which further counteracts the decrease in convergence rate caused by the gated strategy, is given by

$\eta_{ij}(n) = \begin{cases} \eta_{\max}, & \text{if } \eta'_{ij}(n) \geq \eta_{\max} \\ \eta_{\min}, & \text{if } \eta'_{ij}(n) \leq \eta_{\min} \\ \eta'_{ij}(n), & \text{else} \end{cases}$  (16)

and

$\eta'_{ij}(n) = \alpha\, \eta_{ij}(n-1) + \beta\, E_{ij}(n)$,  (17)

where $\eta'_{ij}(n)$ denotes the unclipped step size update and $0 < \eta_{\min} < \eta_{\max}$. The constant $\eta_{\max}$ denotes the upper limit of the step size, which is introduced to guarantee that the mean square error (MSE) of the algorithm is bounded, and $\eta_{\min}$ is chosen to provide a lower limit on the tracking capability. The initial step size $\eta_{ij}(0)$ is generally set to $\eta_{\max}$. The step size $\eta_{ij}(n)$ is determined by the error function $E_{ij}(n)$ together with the parameters $\alpha$ and $\beta$: $\alpha$ should be taken in the range (0, 1) to provide exponential forgetting, and $\beta$ is a small constant selected in conjunction with $\alpha$ to satisfy the misadjustment requirements [19]. Obviously, a large prediction error yields faster tracking; conversely, a small prediction error restrains the misadjustment.
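The gated rule above can be sketched per frame as follows (illustrative NumPy code; the array shapes and the default constants, apart from the alpha and beta values quoted in the experiments, are assumptions made for the example):

```python
import numpy as np

def gated_mu(d, b_prev, eta_prev, e, sigma, K=5.0, alpha=0.97,
             beta=2e-9, eta_min=1e-5, eta_max=1.5e-4):
    """Gated adaptive learning rate, Equations (14)-(17), per-pixel arrays.

    d: desired target D(n); b_prev: motion reference B(n-1);
    eta_prev: previous step size; e: error function E(n);
    sigma: local standard deviation of the observation.
    """
    eta = np.clip(alpha * eta_prev + beta * e, eta_min, eta_max)  # (16), (17)
    gate = np.abs(d - b_prev) > K                   # has this pixel seen motion?
    mu = np.where(gate, eta / (1.0 + sigma), 0.0)   # (14): freeze static pixels
    b = np.where(gate, d, b_prev)                   # (15): track the reference
    return mu, b, eta

one = np.ones((1, 1))
# Moving pixel (|D - B| > K): learning proceeds and B is refreshed.
mu_m, b_m, _ = gated_mu(10 * one, 0 * one, 1e-4 * one, 0 * one, 0 * one)
# Static pixel (|D - B| <= K): the update is gated off entirely.
mu_s, b_s, _ = gated_mu(10 * one, 10 * one, 1e-4 * one, 0 * one, 0 * one)
```

The gate zeroes the learning rate at pixels without detected motion, which is exactly how burn-in ghosting is prevented.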

4. Experiments and Results Analysis

In this section, we employ artificially-corrupted sequences and real infrared sequences to verify the effectiveness of the newly-developed TVRNN-NUC method, and then compare the performance of the proposed TVRNN-NUC method to that of a typical NN-NUC method [1] and a TV-NUC method [20] with objective and subjective evaluation assessments. For each NUC method, we initialized all elements of the estimated gain matrix $\hat{g}$ with ones and all elements of the estimated offset matrix $\hat{o}$ with zeros, set the parameters $\delta = 10$, $\alpha = 0.97$, and $\beta = 2 \times 10^{-9}$, and then fine-tuned the learning rate to pursue the best performance with a trade-off between convergence speed and stability.

4.1. Simulation with Artificially Corrupted Data

In this simulation, 4000 frames of 471 × 358-sized images collected by an A615 camera (FLIR Systems, Inc., Wilsonville, OR, USA) and 500 frames of 532 × 478-sized images acquired by an IRT102 imager (JOHO Technology, Wuhan, China) were first extracted from two different infrared sequences and named sequence 1 and sequence 2, respectively. Following that, both of them were artificially corrupted by FPN according to the model in Equation (1). In order to test the correction effect under different noise modes, stripe gain parameters and random gain parameters with mean 1 and standard deviation 0.15 were respectively applied to sequence 1 and sequence 2, while offset parameters with mean 0 and standard deviation 11.55 were generated as realizations of i.i.d. Gaussian random variables and applied to both sequences. Thereafter, the NN-NUC, TV-NUC, TVRNN-NUC with different gated thresholds ($K = 1$ and $K = 10$), and gated TV (GTV, a simplified TVRNN-NUC with gated threshold $K = 10$ and fixed step size $\eta_{ij}(n) = 1.5 \times 10^{-4}$) were respectively applied to correct the artificially-corrupted data frame-by-frame.
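The corruption step can be reproduced with a short sketch (illustrative NumPy code following the description above; the function name and generator seed are assumptions):

```python
import numpy as np

def corrupt_with_fpn(clean, gain_std=0.15, offset_std=11.55,
                     stripe=False, seed=0):
    """Corrupt a clean frame with simulated FPN following Equation (1).

    Gain ~ N(1, gain_std^2), drawn per column when stripe=True (stripe
    noise, as for sequence 1) or per pixel otherwise (random noise, as
    for sequence 2); offset ~ N(0, offset_std^2), i.i.d. per pixel.
    """
    rng = np.random.default_rng(seed)
    h, w = clean.shape
    if stripe:
        a = 1.0 + gain_std * np.tile(rng.standard_normal(w), (h, 1))
    else:
        a = 1.0 + gain_std * rng.standard_normal((h, w))
    b = offset_std * rng.standard_normal((h, w))
    return a * clean + b, a, b

corrupted, a, b = corrupt_with_fpn(np.full((32, 32), 100.0))
striped, a_s, _ = corrupt_with_fpn(np.full((32, 32), 100.0), stripe=True)
```

Keeping the true gain and offset maps alongside the corrupted frame is what makes ground-truth metrics such as the MSE of the gain estimates computable.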

4.1.1. Comparison of Ghost Suppression Performance

In this section, the ghost suppression performance of the various NUC methods is compared using both an objective performance index and the subjective visual effect. Firstly, the mean square error (MSE) of the gain estimates [11] is introduced to quantitatively evaluate the ghost suppression performance, which is defined as

$MSE_{gain}(n) = \dfrac{1}{N_r N_c} \sum_{i=1}^{N_r} \sum_{j=1}^{N_c} \left[ g_{ij}(n) - \hat{g}_{ij}(n) \right]^2$,  (18)

where $N_r$ and $N_c$ are the numbers of rows and columns in the simulated gain $g$ and the estimated gain $\hat{g}$, respectively.
The curves of the MSE gain estimation obtained by applying the different NUC methods to sequence 1 are plotted on a logarithmic scale in Figure 2. The NN-NUC showed an obvious non-convergence trend after the 800th frame due to its lack of deghosting ability. On the contrary, the TV-NUC descended relatively slowly and only began to achieve a lower MSE gain estimation than the NN-NUC after the 2500th frame. For the proposed TVRNN-NUC with $K = 1$, the MSE dropped fastest and stayed at a lower level throughout most of the iterative correction process. When the scene motion slowed down and global motion became negligible (from about the 1000th to the 1200th frame), the TVRNN-NUC with the larger gated threshold $K = 10$ yielded a smaller MSE, indicating a better deghosting capability. Moreover, the TVRNN-NUC with $K = 10$ gained a distinct advantage from the adaptive step size when compared to the fixed-step GTV-NUC with $K = 10$. This experiment confirms that the TV-regularized neural network regression model and the gated adaptive learning strategy of the proposed TVRNN-NUC result in a stable and swift deghosting effect compared to the existing NN-NUC and TV-NUC.
Figure 3 shows the corrected results of the various NUC methods for the corrupted 1210th frame (immediately after the pause of object movement in the scene) in sequence 1. Figure 3a shows the simulated FPN-disturbed image. The outputs of the NN-NUC, TV-NUC, GTV-NUC ($K = 10$), TVRNN-NUC ($K = 1$), and TVRNN-NUC ($K = 10$) are shown in Figure 3b–f, respectively. From Figure 3b, we can observe strong ghosting artifacts in the NN-NUC output, and most of the ghosting artifacts tended to be suppressed by the TV-NUC except for a small amount of residual. By contrast, the TVRNN-NUC further reduced ghosting and presented a better correction result. Compared to the TVRNN-NUC with $K = 1$, the TVRNN-NUC and GTV-NUC with $K = 10$ eliminated the ghosting artifacts more completely. This phenomenon validates that a higher gated threshold is beneficial for ghosting suppression.

4.1.2. Comparison of Correction Precision and Convergence Rate

The correction results of the different NUC methods were quantitatively assessed with both the peak signal-to-noise ratio (PSNR) criterion

$PSNR(n) = 10 \log_{10} \dfrac{255^2}{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ \hat{X}_{ij}(n) - x_{ij}(n) \right]^2}$,  (19)

and the roughness index [21], which measures the roughness of the image and can be defined as

$\rho(n) = \dfrac{\| h_1 * \hat{X}(n) \|_1 + \| h_2 * \hat{X}(n) \|_1}{\| \hat{X}(n) \|_1}$,  (20)

where $h_1$ is a horizontal mask $[1, -1]$, $h_2 = h_1^{T}$ is a vertical mask, $*$ represents the convolution operator, and $\| \cdot \|_1$ refers to the $L_1$ norm. A lower roughness and a higher PSNR indicate a better correction result.
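Both metrics are easy to compute directly; a minimal NumPy sketch, assuming an 8-bit dynamic range (peak value 255) for the PSNR as in Equation (19):

```python
import numpy as np

def psnr(x_hat, x):
    """PSNR of Equation (19) against an 8-bit peak value of 255."""
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def roughness(x):
    """Roughness index of Equation (20): L1 norm of the [1, -1] horizontal
    and vertical difference responses, normalized by the L1 norm of x."""
    diffs = np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()
    return diffs / np.abs(x).sum()

x = np.full((16, 16), 50.0)
p = psnr(x + 1.0, x)            # uniform error of one gray level
rho_flat = roughness(x)         # perfectly smooth image: roughness 0
```

A perfectly smooth image scores zero roughness, and any residual FPN raises the index, which is why lower curves in the plots indicate better correction.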
Figure 4 shows the performance of the different NUC methods on sequence 1. It can be seen clearly from Figure 4a that the PSNR for the TVRNN-NUC with $K = 1$ and $K = 10$ grew faster and finally exceeded that of the NN-NUC and TV-NUC by nearly 2 dB. Focusing on the gated threshold, we can find that the convergence rate of the TVRNN-NUC with $K = 10$ was obviously slower than that with $K = 1$ but faster than the GTV-NUC with $K = 10$. Essentially, for the TVRNN-NUC method, the increased gated threshold led to a better deghosting effect at the price of a decreased convergence rate; for the GTV-NUC method, the absence of the adaptive variable step size used in the TVRNN-NUC method resulted in slower convergence. For the TV-NUC method, the PSNR curve kept increasing slowly during the first 900 frames; thereafter, it dropped suddenly and fluctuated fiercely for the rest of sequence 1.
As can be seen clearly from the roughness curves plotted on a logarithmic scale in Figure 4b, the proposed TVRNN-NUC method obtained a lower roughness value with faster and more stable convergence. Performance assessments averaged over the whole iterative correction process of each NUC method are listed in Table 1, where the TVRNN-NUC methods presented an outstanding increment of PSNR over both the NN-NUC and TV-NUC. In addition, the TVRNN-NUC methods obtained the smallest reported mean roughness index.
It is worth noting that the performance improvement of the TV-NUC over the NN-NUC was not distinct over the entire correction process. The reason is that the convergence process of the TV-NUC is easily interrupted and restarted by frequent scene switching. In order to further confirm this view, we repeated the above experiment using sequence 2. Different from sequence 1, the scene in sequence 2 is continuous without sudden scene changes. In this case, the TV-NUC method was not disturbed by scene switching and achieved stable convergence. From the assessment shown in Figure 5 and Table 2, we can find that the TV-NUC showed an obvious advantage in PSNR and held a slender lead in roughness after about 110 frames of iterative correction when compared to the NN-NUC.
Moreover, it is clear that neither the TVRNN-NUC nor the GTV-NUC with $K = 5$ was superior to the TVRNN-NUC with $K = 1$. Since the strong-edged object in sequence 2 keeps moving without perceptible pause, the NUC methods without the gated strategy were not bothered by ghosting artifacts; in this situation, the additional gated strategy led to slow convergence and residual noise rather than suppressing ghosting.
The aforementioned experimental results indicate that our proposed TVRNN-NUC method presents outstanding correction precision and a faster convergence speed. More importantly, the stability of our proposed TVRNN-NUC is outstanding regardless of whether the object moves or the scene switches.

4.2. Applications to Real Infrared Image Sequences

To test the practical processing effect, 1970 frames of real infrared data (named sequence 3, with gentle scene motion) collected by a 384 × 288 ULIS Pico384 camera (ULIS, Veurey-Voroize, France) and 200 frames of 320 × 256-sized real infrared data (named sequence 4, with more dramatic scene motion) acquired by an RTD3172C imager (IRay, Yantai, China) were adopted to contrast the performance of the fine-tuned TVRNN-NUC with $K = 5$ against the NN-NUC and TV-NUC. For each method, we carried out a quantitative performance test with the roughness index (ρ) as the metric.
Figure 6 shows the performance of the different NUC methods on sequence 3. It can be seen clearly that the TVRNN-NUC method converged more rapidly and reached stable convergence with the smallest roughness index. Note that at the motion pause stage (from frame 940 to frame 1110), the NN-NUC method still updated the correction parameters and continued to implement the correction, which made the output become over-smoothed (the roughness curve reached the concave point of the pause stage). After the resumption of scene motion, the NN-NUC method began to recover quickly, but noticeable ghosting persisted for at least the following 50 frames (this phenomenon can be observed in Figure 7). The TV-NUC method slowed down the correction as the TV norm changed only faintly in the pause stage, which resulted in a comparatively better deghosting effect. In contrast, the TVRNN-NUC method adopted the gated strategy and TV regularization to simultaneously suppress the ghosting and avoid over-smoothing effects. The reason is that the edge-preserving total variation regularizer helps to smooth out the FPN without the need to increase the averaging window size of the target value $D$ in the fidelity term. This is why the proposed TVRNN-NUC can avoid the over-smoothing effect caused by the averaging target.
In order to intuitively observe the corrected results and deghosting effects of the three methods after the motion pause, take the correction results of the 1155th frame shown in Figure 7 as an example. Figure 7a shows the input image. The corrected results of the NN-NUC, TV-NUC, and TVRNN-NUC are shown in Figure 7b–d, respectively. By comparing the visual effects with the naked eye, we can easily find that the proposed TVRNN-NUC method suppressed the FPN better without perceptible ghosting artifacts and preserved more high-frequency details, which makes the corrected scene appear sharper and more distinguishable.
Figure 8 shows the curves of roughness obtained by the different NUC methods for sequence 4. The proposed TVRNN-NUC acquired the lowest roughness and required only 20 iterations to reach convergence. From the visual comparisons shown in Figure 9, we can see clearly that the correction result of the TVRNN-NUC provides distinct details while simultaneously suppressing the annoying fixed pattern noise and ghosting artifacts. Meanwhile, the NN-NUC produced obvious ghosting artifacts in the region near the roof and lost much of the detailed spatial information, and the TV-NUC left partial grid-shaped fixed pattern noise interspersed in the background.
It is worth noting that the TVRNN-NUC method needed about 200 iterative corrections to reach convergence in sequence 3 but only 20 iterations in sequence 4. The reason is that the global scene motion of sequence 4 was more violent than that of sequence 3.
The above testing results validate that our proposed TVRNN-NUC method can effectively eliminate nonuniformity and suppress ghosting artifacts more completely for real corrupted sequences. At the same time, abundant scene motion is beneficial for promoting the convergence rate of the TVRNN-NUC. In other words, increasing scene motion can be regarded as a reasonable way to overcome the limitation of slow convergence introduced by the gated strategy.

4.3. Comparison of Real-Time Performance

In order to verify the real-time performance, we corrected the abovementioned four testing sequences of different lengths and frame sizes and then evaluated the processing efficiency on a 2.4 GHz Dell desktop PC. Table 3 gives the actually achievable processing frame frequency for different frame sizes. The results reported in Table 3 indicate that the proposed TVRNN-NUC demanded more computational load than the others, which is the cost of its outstanding correction performance. Even so, the TVRNN-NUC can process considerably large frames in excess of the real-time requirement in serial computing mode. Once implemented on specially-designed parallel hardware (e.g., a Graphics Processing Unit (GPU) or Field Programmable Gate Array (FPGA)), the TVRNN-NUC will be further accelerated by parallel convolution computation.

5. Conclusions

While many scene-adaptive NUC approaches have been presented to suppress FPN in infrared images, they often fail to resolve the ghosting artifacts that accompany a slow convergence speed. To this end, we introduced a total variation penalty into the objective function to solve the overfitting problem existing in the LMS regression of the NN-NUC method. Moreover, an adaptive gated learning rate was presented to further suppress the ghosting effect while retaining a higher convergence rate. Experiments with both simulated data and real scenes successfully demonstrated that the proposed TVRNN-NUC method achieves a higher correction precision with a faster convergence rate. Moreover, the correction results of our proposed method have a sharper visual effect without perceptible ghosting artifacts.

Author Contributions

R.L. performed the theoretical derivation; G.Y. and G.Z. conducted the test data collection and designed the experiment; R.L. and G.Y. wrote the paper.

Acknowledgments

This work was supported by the Natural Science Foundation of China under grant No. 61674120 and the Fundamental Research Funds for the Central Universities of China under grant No. JBG161113.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Scribner, D.A.; Sarkady, K.A.; Kruer, M.R.; Caulfield, J.T.; Hunt, J.; Colbert, M.; Descour, M. Adaptive Retina-like Preprocessing for Imaging Detector Arrays. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; Volume 3, pp. 1955–1960. [Google Scholar]
  2. Harris, J.G.; Chiang, Y.-M. Minimizing the Ghosting Artifact in Scene-Based Nonuniformity Correction. In Proceedings of the SPIE Conference on Infrared Imaging Systems: Design Analysis, Modeling, and Testing IX, Orlando, FL, USA, 15–16 April 1998; Volume 3377, pp. 106–113. [Google Scholar]
  3. Harris, J.; Chiang, Y. Nonuniformity correction of infrared image sequences using the constant-statistics constraint. IEEE Trans. Image Process. 1999, 8, 1148–1151. [Google Scholar] [CrossRef] [PubMed]
  4. Qian, W.; Chen, Q.; Gu, G. Space low-pass and temporal high-pass nonuniformity correction algorithm. Opt. Rev. 2010, 17, 24–29. [Google Scholar] [CrossRef]
  5. Bai, J.; Chen, Q.; Qian, W.; Wang, X. Ghosting reduction in scene-based nonuniformity correction of infrared image sequences. Chin. Opt. Lett. 2010, 8, 1113–1116. [Google Scholar]
  6. Zuo, C.; Chen, Q.; Gu, G.; Qian, W. New temporal high-pass filter nonuniformity correction based on bilateral filter. Opt. Rev. 2011, 18, 197–202. [Google Scholar] [CrossRef]
  7. Vera, E.; Torres, S. Fast adaptive nonuniformity correction for infrared focal plane array detectors. EURASIP J. Appl. Signal Process. 2005, 13, 1994–2004. [Google Scholar] [CrossRef]
  8. Hardie, R.C.; Baxley, F.; Brys, B.; Hytla, P. Scene-Based Nonuniformity Correction with Reduced Ghosting Using a Gated LMS Algorithm. Opt. Express 2009, 17, 14918–14933. [Google Scholar] [CrossRef] [PubMed]
  9. Rui, L.; Yin-Tang, Y.; Duan, Z.; Yue-Jin, L. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays. Appl. Opt. 2008, 47, 4331–4335. [Google Scholar] [CrossRef]
  10. Zhang, T.; Shi, Y. Edge-directed adaptive nonuniformity correction for staring infrared focal plane arrays. Opt. Eng. 2006, 45, 016402-1-11. [Google Scholar] [CrossRef]
  11. Rossi, A.; Diani, M. Bilateral filter-based adaptive nonuniformity correction for infrared focal-plane array systems. Opt. Eng. 2010, 49, 057003-1-13. [Google Scholar] [CrossRef]
  12. Sheng-Hui, R.; Hui-Xin, Z.; Han-Lin, Q.; Rui, L.; Kun, Q. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array. Infrared Phys. Technol. 2016, 76, 691–697. [Google Scholar] [CrossRef]
  13. Zhaolong, L.; Tongsheng, S.; Shuli, L. Scene-based nonuniformity correction based on bilateral filter with reduced ghosting. Infrared Phys. Technol. 2016, 77, 360–365. [Google Scholar]
  14. Yu, H.; Zhang, Z.; Wang, C. An improved retina-like nonuniformity correction for infrared focal-plane array. Infrared Phys. Technol. 2015, 73, 62–72. [Google Scholar] [CrossRef]
  15. Zhang, B.H.; Zhang, J.J.; Xu, H. A Nonuniformity Correction Enhancement Method Based on Temporal Statistical for Infrared System. In Proceedings of the IEEE International Symposium on Photonics and Optoelectronics, Shanghai, China, 21–23 May 2012; pp. 1–4. [Google Scholar]
  16. Boutemedjet, A.; Deng, C.; Zhao, B. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array. Sensors. 2016, 16, 1890. [Google Scholar] [CrossRef] [PubMed]
  17. Tang, L.; Fang, Z. Edge and contrast preserving in total variation image denoising. EURASIP J. Adv. Signal Process. 2016, 13, 2–21. [Google Scholar] [CrossRef]
  18. David, S.; Chan, T. Edge-preserving and scale-dependent properties of total variation regularization. Inverse Probl. 2003, 19, 165–187. [Google Scholar]
  19. Kwong, R.; Johnston, E.W. A Variable Step Size LMS Algorithm. IEEE Trans. Signal Process. 1992, 40, 1633–1642. [Google Scholar] [CrossRef]
  20. Vera, E.; Meza, P.; Torres, S. Total variation approach for adaptive nonuniformity correction in focal-plane arrays. Opt. Lett. 2011, 36, 172–174. [Google Scholar] [CrossRef] [PubMed]
  21. Hayat, M.M.; Torres, S.N.; Armstrong, E.; Cain, S.C.; Yasuda, B. Statistical algorithm for nonuniformity correction in focal-plane arrays. Appl. Opt. 1999, 38, 772–780. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Scheme of the proposed TV-regularized neural-network nonuniformity correction (TVRNN-NUC) method.
Figure 2. Comparison of the mean square error (MSE) gain estimation obtained by operating various methods upon sequence 1. GTV: gated total variation; NN: neural network; TV: total variation.
Figure 3. Comparison of correction results for artificially corrupted data in sequence 1. (a) Corrupted 1210th frame; (b) Corrected with NN-NUC; (c) Corrected with TV-NUC; (d) Corrected with GTV-NUC ( K = 10 ); (e) Corrected with TVRNN-NUC ( K = 1 ); (f) Corrected with TVRNN-NUC ( K = 10 ).
Figure 4. NUC performance of various methods for artificially corrupted sequence 1. (a) Peak signal-to-noise ratio (PSNR, dB) and (b) Roughness index (ρ).
Figure 5. NUC performance of various methods for artificially-corrupted sequence 2. (a) PSNR (dB); (b) Roughness index (ρ).
Figure 6. The roughness of sequence 3 and its correction results.
Figure 7. Correction results of different NUC methods for the 1155th frame in sequence 3. (a) Real scene with nonuniformity; (b) Corrected with NN-NUC; (c) Corrected with TV-NUC; (d) Corrected with TVRNN-NUC. The red boxes mark out the partial obvious ghosting artifacts in the images.
Figure 8. The roughness of sequence 4 and its correction results.
Figure 9. Correction results of different NUC methods for the 148th frame in sequence 4. (a) Real scene with nonuniformity; (b) Corrected with NN-NUC; (c) Corrected with TV-NUC; (d) Corrected with TVRNN-NUC. The red box marks out the partial obvious ghosting artifacts.
Table 1. Mean PSNR (dB) and roughness index (ρ) for artificially-corrupted sequence 1. FPN: fixed pattern noise.

| Performance Metric | FPN-Corrupted Image | NN-NUC | TV-NUC | GTV-NUC | TVRNN-NUC (K = 1) | TVRNN-NUC (K = 10) |
|---|---|---|---|---|---|---|
| PSNR (dB) | 20.29 | 28.10 | 28.31 | 27.73 | 29.63 | 28.27 |
| ρ | 0.2760 | 0.1093 | 0.1083 | 0.1191 | 0.0999 | 0.1122 |
Table 2. Mean PSNR (dB) and roughness index (ρ) for artificially-corrupted sequence 2.

| Performance Metric | FPN-Corrupted Image | NN-NUC | TV-NUC | GTV-NUC | TVRNN-NUC (K = 1) | TVRNN-NUC (K = 5) |
|---|---|---|---|---|---|---|
| PSNR (dB) | 22.17 | 32.55 | 33.34 | 29.84 | 36.65 | 33.34 |
| ρ | 0.3708 | 0.0927 | 0.1041 | 0.1354 | 0.0719 | 0.1029 |
Table 3. Processing frame frequency (frames/second) for different testing sequences.

| Sequence (frame size) | NN-NUC | TV-NUC | GTV-NUC | TVRNN-NUC |
|---|---|---|---|---|
| Sequence 1 (471 × 358) | 272 | 210 | 46 | 43 |
| Sequence 2 (532 × 478) | 199 | 115 | 31 | 28 |
| Sequence 3 (384 × 288) | 580 | 442 | 86 | 81 |
| Sequence 4 (320 × 256) | 628 | 549 | 112 | 99 |
