Article

Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

1 School of Electronics Engineering, Kyungpook National University, 80 Daehakro, Bukgu, Daegu 41566, Korea
2 Hanwha Systems Corporation, 244, 1 Gongdanro, Gumi, Gyeongsangbukdo 39376, Korea
* Author to whom correspondence should be addressed.
Sensors 2018, 18(1), 60; https://doi.org/10.3390/s18010060
Submission received: 26 September 2017 / Revised: 15 December 2017 / Accepted: 25 December 2017 / Published: 27 December 2017
(This article belongs to the Section Physical Sensors)

Abstract
Unmanned aerial vehicles (UAVs) are equipped with optical systems that include an infrared (IR) camera, such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods such as two-point correction (TPC) and scene-based NUC (SBNUC) have been proposed to overcome these limitations, but they still suffer from non-fixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, the image quality improvement rate of BRANF is found to be 30% higher than that of conventional NUC.

1. Introduction

Electro-optical infrared (EO/IR), target acquisition and designation sights (TADS), and forward looking infrared (FLIR) sensors are mounted on weapon systems such as unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), combat planes, and helicopters. Weapon systems acquire a tactical map of the battlefield or critical data for tracking and attacking targets using electro-optical sensors. Electro-optical sensor systems used in weapon systems are required to provide continuous high-quality images in order to detect targets even in low-visibility and foggy conditions. However, images from general daylight cameras cannot clearly show a target in dark or foggy conditions, as illustrated in Figure 1. Therefore, IR cameras are used in addition to daylight cameras to improve visibility for detecting and tracking targets in severe environments [1]. However, various types of noise are generated in IR images due to the complex semiconductor structures of IR sensors, which require high-level manufacturing processes. Additional noise is generated in IR images when such cameras are employed in severe environments, such as battlefields. Such noise significantly degrades the performance of target detection and tracking.
There are many methods to remove noise from images, and the methods applied to remote sensing images are similar to those applied to thermal images. Liu et al. proposed a method to remove stripe noise from remote sensing images [3], and Yuan et al. proposed a method to remove dot- and line-type noise from hyperspectral images [4]. However, images obtained by practical military optical systems contain additional types of noise, such as flickering noise, oblique line noise, long term variant (LTV) noise, and wavefront aberration. Therefore, noise filtering for military optical systems requires different approaches.
In images from military optical systems, general noise can be corrected using nonuniformity correction (NUC) and the neighbor pixels duplication (NPD) technique, but several noise problems remained unresolved [5,6,7]. The first type of noise consists of dynamic flickering dot and line noise, generated due to the semiconductor design characteristics of the focal plane array (FPA) and readout integrated circuit (ROIC) of IR sensors [8]. The second type of noise is generated by optical wavefront aberrations occurring because of the atmosphere and heat sources between the IR sensor and the target. The third type of noise consists of striped noise generated because of the use of the scan mirror method or power disturbances [9,10]. General noise that has been resolved and unresolved noise that requires solutions are summarized in Table 1.
In the case of thermal imaging, several studies have been conducted on compensating for nonuniformity characteristics. The NUC method is representative: a lookup table of noise is created using a black body at a constant temperature, and the noise is compensated using the lookup table [11]. The process of the NUC method is illustrated in Figure 2. However, the NUC method is incapable of removing noise such as moving dots and lines and wavefront aberration, listed in Table 1, because such noise can change rapidly. The LTV fixed pattern noise can be temporarily compensated by repeating NUC, but the compensation is not consistent. Also, the NUC method cannot be implemented in real time, because it requires an external black body. The two-point correction (TPC) method solves this problem by using a built-in thermal electric cooler (TEC) [12]. However, the TPC method requires complex hardware designs for optical structures. There have been studies on compensating noise without using a reference object, such as the scene-based NUC (SBNUC) method and feedback-integrated scene cancellation (FiSC) [13,14]. Weighted nuclear norm minimization (WNNM) is a state-of-the-art low-rank-based image noise reduction method [15]. The advantages and disadvantages of these methods are summarized in Table 2.
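The lookup-table correction underlying NUC and TPC can be sketched as a per-pixel gain/offset calibration from two blackbody views. The affine sensor model, array size, and flux levels below are illustrative assumptions, not measured calibration data from any specific system.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 4, 4

# Hypothetical per-pixel nonuniformity: each detector element has its
# own gain and offset (illustrative values, not sensor measurements).
gain_true = 1.0 + 0.1 * rng.standard_normal((H, W))
offset_true = 5.0 * rng.standard_normal((H, W))

def observe(flux):
    """Simulated raw FPA output for a spatially uniform scene flux."""
    return gain_true * flux + offset_true

# Calibration: image a blackbody at two known flux levels.
flux_cold, flux_hot = 100.0, 200.0
raw_cold, raw_hot = observe(flux_cold), observe(flux_hot)

# Per-pixel correction lookup table in gain/offset form.
gain_lut = (flux_hot - flux_cold) / (raw_hot - raw_cold)
offset_lut = flux_cold - gain_lut * raw_cold

def nuc_correct(raw):
    """Apply the two-point correction to a raw frame."""
    return gain_lut * raw + offset_lut

# A uniform scene between the calibration points maps back to a
# uniform output, since the assumed sensor model is affine.
corrected = nuc_correct(observe(150.0))
```

Because the correction is built from only two reference points, it cannot track noise that changes during operation, which is the limitation the text describes.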
In this paper, the forms and causes of three types of unresolved noise in IR images obtained by MWIR and LWIR sensors are analyzed. Thermal imaging camera systems are generally designed as shown in Figure 3a. The noise characteristics of output images depend on the properties of the semiconductor materials (InSb, MCT-HgCdTe, InGaAs, etc.), the characteristics of peripheral circuits such as the ROIC and digital signal processing, and thermal detector elements such as the FPA. The light energy of an observed scene passes through the object lens, moves through the optical path, and is projected onto the detector FPA. In addition, the temperature of a cooled MWIR/LWIR system drops to approximately −200 °C, improving the signal-to-noise ratio (SNR) to obtain high-resolution images. The representative types of cooler used in these systems are rotary (type-I: crank rotational driving refrigerant compression) and linear (type-II: piston reciprocating refrigerant compression). Noise can be generated by the motorized structures of cooler systems. The internal structure of the Dewar detector is illustrated in Figure 3b. The inside of the Dewar is maintained in a vacuum in order to prevent image quality degradation from scattering in air and incoming dust. A cold shield prevents image disturbances resulting from unwanted heating, and the inner materials are covered with getter in order to minimize reflection. As shown in Figure 3c, the IR radiation projected onto the FPA is converted into the output signal on the ROIC. During this process, dot-shaped noise occurs because of the lack of uniformity of the unit cell preamplifiers, and vertical line-shaped noise occurs when the multiplexer impedance amplifiers are not uniform.
In Section 2, a background registration-based adaptive noise filtering (BRANF) method is proposed to deal with these unresolved noise types. BRANF performs background registration and principal component analysis (PCA) over continuous frames, which makes it possible to resolve the problematic noise listed in Table 1. Moving dots and lines and wavefront aberration are resolved based on PCA, and LTV fixed pattern noise is compensated by converting it to short-term variant noise during the background registration process. In Section 3, quantitative image quality evaluation methods are proposed, which are capable of evaluating the improvement of images without using reference images. In Section 4, experiments using MWIR and LWIR images obtained from practical military optical systems are presented, and the performance is compared with the most commonly employed method in practice, NUC, and the state-of-the-art method WNNM.

2. Proposed Background Registration-Based Adaptive Noise Filtering Method

In this section, we describe the idea and method used to effectively remove noise, and explain the concept of an image-improvement technique using background registration-based adaptive noise filtering. The moving line- and dot-shaped noise in the image is not fixed; it changes and moves rapidly over a short time. The proposed BRANF method can compensate for problematic noise such as moving dots and lines, wavefront aberrations, and LTV fixed pattern noise. The concept of the image improvement technique using BRANF is illustrated in Figure 4. First, noisy video frames ($F_n$) of LWIR/MWIR are given as the input of the image processing module. Feature points are extracted from two consecutive frame images using the Shi–Tomasi corner detector, which exhibits good repeatability when the number of corners per frame is low [17]. Then, the optical flow is calculated based on the feature points, and the optical movement (OM) matrix is derived. It is important to calculate the correct OM in order to generate accurate background registration frame images [18]. Next, robust principal component analysis (RPCA) is performed [19,20,21,22,23,24]. This separates images into a low-rank part and a sparse part. After background registration, fixed pattern noise is converted into a dynamic pattern, so the low-rank part consists of images free of both dynamic and fixed pattern noise, whereas the sparse part contains all of the dynamic and fixed pattern noise. If the OM value is larger than a certain threshold value (µ), the low-rank component becomes the output as the final corrected image; however, if the OM value is smaller than µ, as in a static line of sight (LOS), the fixed pattern noise remains. Therefore, a negative feedback process using the stored sparse component is designed to remove fixed pattern noise when the OM value is less than µ. The process of the algorithm is summarized in Algorithm 1.
In Step 1, feature points of sequential frames ($F_n$ and $F_{n+1}$) are extracted. In Step 2, the background registration vector ($V$) is calculated using the Lucas–Kanade method [25]. Here, $i$ is the number of pixels that a scanning window contains. If the window is too small, feature extraction becomes poor; if it is too big, the resolution of the optical flow decreases. Stabilized frame sequences are derived based on the background registration vector. In Step 3, the total image $T$ is divided into a low-rank part and a sparse part. The number of images used for the PCA calculation should be carefully determined: using more images separates the low-rank and sparse parts better, but increases the computational complexity. Finally, in Step 4, if the OM is larger than µ, the low-rank part becomes the output image. If the OM is smaller than µ, the sparse part is used to update the noise map, noise filtering is performed on the input image, and the noise-filtered image becomes the output image.
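The Step 2 optical movement estimate amounts to solving the Lucas–Kanade normal equations over a window. A minimal NumPy sketch, assuming central-difference gradients for $D_x$ and $D_y$ and a synthetic 32 × 32 window (both illustrative choices, not the paper's implementation), is:

```python
import numpy as np

def optical_movement(f0, f1):
    """Estimate a single background motion vector OM = (Vx, Vy) between
    frames f0 and f1 from the Lucas-Kanade normal equations, treating
    every pixel of the window as one constraint."""
    f0 = np.asarray(f0, dtype=float)
    f1 = np.asarray(f1, dtype=float)
    Dy, Dx = np.gradient(f0)   # spatial derivatives (rows = y, cols = x)
    Dt = f1 - f0               # temporal derivative
    # 2x2 system from summing the per-pixel constraints.
    A = np.array([[np.sum(Dx * Dx), np.sum(Dx * Dy)],
                  [np.sum(Dy * Dx), np.sum(Dy * Dy)]])
    b = -np.array([np.sum(Dx * Dt), np.sum(Dy * Dt)])
    return np.linalg.solve(A, b)   # (Vx, Vy)

# Synthetic check: a smooth pattern shifted by one pixel along x
# should produce Vx near 1 and Vy near 0.
y, x = np.mgrid[0:32, 0:32].astype(float)
f0 = np.sin(x / 5.0) + np.cos(y / 7.0)
f1 = np.sin((x - 1.0) / 5.0) + np.cos(y / 7.0)
Vx, Vy = optical_movement(f0, f1)
```

In practice this window-level estimate would be computed around the Shi–Tomasi feature points rather than over the whole frame.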
Algorithm 1. Background Registration-Based Adaptive Noise Filtering
Step 1.
Background feature extraction between $F_n$ and $F_{n+1}$, where $F$ denotes frames, as shown in Figure 4.
Step 2.
Measurement of OM for background registration:
$$\mathrm{OM} = \begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum_{j=1}^{i} D_x(m_j)^2 & \sum_{j=1}^{i} D_x(m_j) D_y(m_j) \\ \sum_{j=1}^{i} D_y(m_j) D_x(m_j) & \sum_{j=1}^{i} D_y(m_j)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_{j=1}^{i} D_x(m_j) D_t(m_j) \\ -\sum_{j=1}^{i} D_y(m_j) D_t(m_j) \end{bmatrix}$$
where $D_x$, $D_y$, and $D_t$ are the partial derivatives of the images with respect to the variables $x$, $y$, and $t$, respectively; OM is the optical movement vector; and $V_x$ and $V_y$ are local flow vectors. [Inline figure: green arrows mark the optical flows; the red arrow marks the OM.] The system solved above stacks one constraint per pixel:
$$D_x(m_1) V_x + D_y(m_1) V_y = -D_t(m_1)$$
$$D_x(m_2) V_x + D_y(m_2) V_y = -D_t(m_2)$$
$$\vdots$$
$$D_x(m_i) V_x + D_y(m_i) V_y = -D_t(m_i)$$
where $m_1, m_2, \ldots, m_i$ are the pixels in the window area.
Step 3.
Separate into sparse and low-rank parts using robust principal component analysis:
$$\min_{L, S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad T = L + S$$
($L$: low-rank part, $S$: sparse part, $\lambda$: coefficient controlling the amount of sparsity in $S$.)
[Inline figure: $T$ (total image) $= L$ (low-rank component) $+ S$ (sparse component).]
Step 4.
If $\mathrm{OM} \geq \mu$, the low-rank images become the output images. Otherwise, if $\mathrm{OM} < \mu$, output images are created by filtering noise using the updated noise map ($S$). ($\mu$: threshold value in pixels.)
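The RPCA decomposition in Step 3 can be sketched with the widely used inexact augmented Lagrange multiplier (ALM) scheme: singular value thresholding for the low-rank part and soft thresholding for the sparse part. This is a generic illustration rather than the authors' solver; the default $\lambda = 1/\sqrt{\max(m, n)}$, the penalty schedule, and the synthetic test matrix are all assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise shrinkage operator for the l1 term."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(T, lam=None, iters=500, tol=1e-7):
    """Approximately solve min ||L||_* + lam*||S||_1 s.t. T = L + S
    with the standard inexact ALM iteration."""
    m, n = T.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))      # common default
    norm_T = np.linalg.norm(T)
    mu = 1.25 / np.linalg.norm(T, 2)        # initial penalty
    rho, mu_max = 1.5, mu * 1e7
    L = np.zeros_like(T); S = np.zeros_like(T); Y = np.zeros_like(T)
    for _ in range(iters):
        # L-step: singular value thresholding.
        U, sig, Vt = np.linalg.svd(T - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: soft thresholding.
        S = soft_threshold(T - L + Y / mu, lam / mu)
        R = T - L - S
        Y += mu * R
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R) <= tol * norm_T:
            break
    return L, S

# Synthetic check: rank-2 "background" plus sparse "noise" spikes,
# mimicking the low-rank scene / sparse pattern-noise split.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
S0 = np.zeros((40, 40))
idx = rng.choice(40 * 40, size=30, replace=False)
S0.flat[idx] = 5.0
L_hat, S_hat = rpca(L0 + S0)
```

In BRANF the columns of $T$ would be the registered frames, so that fixed pattern noise, made dynamic by registration, lands in $S$.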

3. Quantitative Evaluation Method

Whether an image has improved can be judged by eye, but evaluating the improvement objectively is difficult. Therefore, a quantitative evaluation method is required for correct image quality evaluation. The signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) are popular evaluation methods. However, they require a ground truth image as a reference in order to verify the extent to which an image is improved, and ground truth images are usually unavailable for practical images. Therefore, quantitative evaluation methods that operate without a ground truth image are proposed in this paper.

3.1. SSNR Index

Subtraction signal-to-noise ratio (SSNR) is based on a subtraction method to find moving noise and noise patterns. The mean subtraction signal error (MSSE), which is an index to derive SSNR, is defined in Equation (1), and SSNR is defined in Equation (2).
$$\mathrm{MSSE} = \frac{1}{m \times n} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ K_t(i, j) - K_{t+1}(i, j) \right]^2, \qquad (1)$$
$$\mathrm{SSNR} = 10 \log_{10}\!\left( \mathrm{MAX}_I / \mathrm{MSSE} \right), \qquad (2)$$
where $(m, n)$ is the image size, $K_t$ and $K_{t+1}$ are consecutive noisy images, and $\mathrm{MAX}_I$ is the maximum pixel intensity of an image. An example of an image quality evaluation using SSNR is shown in Figure 5.
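Equations (1) and (2) translate directly into code. The small guard against identical frames (which would make MSSE zero) is an implementation choice not stated in the paper.

```python
import numpy as np

def ssnr(k_t, k_t1, max_i=255.0):
    """SSNR between consecutive noisy frames K_t and K_{t+1}:
    MSSE is the mean squared frame difference (Equation (1)), and
    SSNR = 10*log10(MAX_I / MSSE) (Equation (2))."""
    diff = k_t.astype(float) - k_t1.astype(float)
    msse = max(np.mean(diff ** 2), 1e-12)  # guard for identical frames
    return 10.0 * np.log10(max_i / msse)

# Two frames of an 8-bit image differing by a constant 5 levels:
# MSSE = 25, so SSNR = 10*log10(255/25).
a = np.full((8, 8), 100.0)
b = np.full((8, 8), 105.0)
value = ssnr(a, b)
```

Less frame-to-frame flicker gives a smaller MSSE and hence a higher SSNR, which is why the index rises as dynamic noise is filtered.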

3.2. SNM Index

The quantity of image noise can be measured using a sum of noise map (SNM) index. A block diagram of this method is presented in Figure 6.
By evaluating the quantity of edges using the aforementioned procedure, it is possible to confirm the distributions of noise in an image. Figure 7 presents the results for images containing simulated noise [16].
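The SNM pipeline is specified only by the block diagram in Figure 6, so the sketch below rests on an assumption: it takes the difference of consecutive frames as the noise map, measures its edge content with central-difference gradients, and sums the magnitudes. The exact edge operator and normalization used in the paper are not stated.

```python
import numpy as np

def snm(k_t, k_t1):
    """Sum-of-noise-map index (sketch): sum of gradient magnitudes of
    the frame-difference noise map. The edge operator here is an
    illustrative assumption, not the paper's exact pipeline."""
    noise_map = k_t.astype(float) - k_t1.astype(float)
    gy, gx = np.gradient(noise_map)
    return float(np.sum(np.hypot(gx, gy)))

# A clean pair of frames scores 0; a flickering vertical line in the
# second frame produces edges in the noise map and raises SNM.
clean = np.full((16, 16), 80.0)
noisy = clean.copy()
noisy[:, 8] += 20.0  # simulated flickering line (hypothetical noise)
```

Note the blur caveat from the text: heavy smoothing also suppresses edges in the noise map, so a low SNM alone does not prove good filtering.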

3.3. VSSIM Index

In general, structural similarity (SSIM) is used to evaluate the quality of an image based on a ground truth (GT) image. However, a GT image does not exist in thermal-imaging systems because the noise is not fixed. The proposed variant SSIM (VSSIM) focuses on the fact that humans recognize noise from blurring, flickering, wavefront aberration, and streaming patterns that are not within the background range. Therefore, the previous frame image is considered as the GT image, and the noise values are calculated as follows [26]:
$$\mathrm{VSSIM}(x_0, x_1) = [l(x_0, x_1)]^{\alpha} \cdot [c(x_0, x_1)]^{\beta} \cdot [s(x_0, x_1)]^{\gamma},$$
where $l$, $c$, and $s$ are the luminance, contrast, and structural terms, respectively:
$$l(x_0, x_1) = \frac{2\mu_{x_0}\mu_{x_1} + C_1}{\mu_{x_0}^2 + \mu_{x_1}^2 + C_1}$$
$$c(x_0, x_1) = \frac{2\sigma_{x_0}\sigma_{x_1} + C_2}{\sigma_{x_0}^2 + \sigma_{x_1}^2 + C_2}$$
$$s(x_0, x_1) = \frac{\sigma_{x_0 x_1} + C_3}{\sigma_{x_0}\sigma_{x_1} + C_3}$$
With $\alpha = \beta = \gamma = 1$ and $C_3 = C_2/2$, this reduces to
$$\mathrm{VSSIM}(x_0, x_1) = \frac{(2\mu_{x_0}\mu_{x_1} + C_1)(2\sigma_{x_0 x_1} + C_2)}{(\mu_{x_0}^2 + \mu_{x_1}^2 + C_1)(\sigma_{x_0}^2 + \sigma_{x_1}^2 + C_2)}$$
Here, $\mu_{x_0}$ and $\mu_{x_1}$ are local means, $\sigma_{x_0}$ and $\sigma_{x_1}$ are standard deviations, and $\sigma_{x_0 x_1}$ is the cross covariance for the images $x_0$ and $x_1$. In addition, $C_1 = (0.01 \times L)^2$ and $C_2 = (0.03 \times L)^2$, where $L$ is the specified dynamic range value, and $C_3 = C_2/2$. Based on the changes in luminance, contrast, and structure of the noise, VSSIM evaluates the image quality. The VSSIM evaluation and the noise map are illustrated in Figure 8. Here, the denoised image is obtained by noise filtering using the proposed BRANF algorithm. The VSSIM index is a decimal value between −1 and 1, and reaches 1 only when the two images are identical.

4. Experimental Results and Analysis

We conducted experiments to verify the noise reduction performance. NUC is the most popular method applied in practical military optical systems, and WNNM is one of the latest noise filtering methods. Therefore, these two methods are compared with the proposed BRANF method. The images used in the experiments were obtained from practical military optical systems (MWIR and LWIR) and contain various types of noise. The experimental configuration was installed as shown in Figure 9. Figure 9a shows the configuration for observing a four-bar target of a collimator with an infinite optical distance. This is used to measure the minimum resolvable temperature difference (MRTD), in order to quantitatively determine the detection and recognition capabilities of the thermal camera. Figure 9b shows the configuration for observing outdoor scenes, such as buildings and mountains, without optical targets.
Figure 10 and Figure 11 show the results of NUC, WNNM, and the proposed BRANF on various types of noise. Figure 10a,b show the results for MWIR indoor images with flickering line and dot noise. Figure 10c shows an outdoor LWIR image with horizontal/vertical line noise occurring as a result of the mirror scanning method of the LWIR. Figure 10d,e are outdoor MWIR images with wavefront aberration, and Figure 10f is an LWIR outdoor image with LTV line noise. The subwindows in the figures are zoomed to show the detailed differences for each method. In general, BRANF removes more noise than NUC and preserves sharpness better than WNNM.
Figure 11g,h show the results for MWIR indoor images with oblique line noise occurring because of disturbed currents in the ROIC, the digital signal processor, or the power source. Figure 11i–k show LWIR images with vertical line noise. Figure 11l is an LWIR image with little noise, but demonstrates the sharpness of the resulting images. Figure 11m is an indoor LWIR image with jitter noise.
Figure 12 shows the results of the three proposed quantitative evaluation methods applied to the images from Figure 10a to Figure 11m. The values of SSNR and VSSIM are higher when more noise is filtered, whereas the value of SNM is higher when more noise remains in the resultant image. In Figure 12a,c, the proposed BRANF shows the highest index values for SSNR and VSSIM. However, in the case of SNM in Figure 12b, BRANF is higher than WNNM, which at first appears to indicate that WNNM leaves less noise than BRANF. Here, the characteristics of the three indices should be considered. SSNR is effective in evaluating dynamic flickering noise. SNM is effective in evaluating fixed pattern noise, but has the problem that the result looks good when the image is blurred. VSSIM is effective in evaluating structural noise, such as wavefront aberration. Therefore, the three evaluation methods are considered together to evaluate the noise reduction performance. As shown in Figure 10 and Figure 11, the resultant images for WNNM are more blurred than those for BRANF. Therefore, fewer edges remain in the resulting images, and the SNM value is lower than that of BRANF. Even though the SNM value of BRANF is higher than that of WNNM, BRANF is more effective in applications such as target detection and tracking, because the resulting image remains sharp. For example, the SNM result on Figure 11k shows the best performance for WNNM, and the WNNM result looks cleaner to the eye. However, as shown in Figure 11k-1, it is hard to recognize the car because the image is blurred. The values of the indices in Figure 12 are listed in Table 3, and the statistical analysis is shown in Figure 13.

5. Conclusions

In this paper, various types of noise and their causes in images from practical military optical systems have been analyzed. The advantages and disadvantages of conventional methods for removing such noise were also analyzed. BRANF was proposed to overcome the limitations of conventional methods. Furthermore, the quantitative evaluation methods SSNR, SNM, and VSSIM were proposed, which can analyze the improvement of images quantitatively without ground truth images. The proposed BRANF method was compared with a widely employed method, NUC, and a recent method, WNNM, whose principle is similar to that of the proposed method. BRANF exhibited better values for SSNR and VSSIM, although its SNM value was higher than that of WNNM. However, this is because the resulting images for WNNM were more blurred than those of BRANF. The resulting images for WNNM are more suitable for human viewing, but BRANF is better suited to applications such as detection and tracking. The computational load of BRANF was also lower than that of WNNM. Real-time processing is left for future work: dividing images into patches and applying parallel processing will improve the computational speed of BRANF. In addition, improving the background registration using other feature detection methods and background modeling can further improve the performance of BRANF. It can be expected that BRANF will improve the detection and recognition performance of defense systems using MWIR and LWIR.

Supplementary Materials

Supplementary File 1

Acknowledgments

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2016R1D1A3B03930798). This study was supported by the BK21 Plus project funded by the Ministry of Education, Korea (21A20131600011).

Author Contributions

Byeong Hak Kim, Min Young Kim and You Seong Chae conceived and designed the experiments and wrote the paper. Byeong Hak Kim performed the experiments, analyzed the data, and contributed to the derivation of the results.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109. [Google Scholar] [CrossRef]
  2. Wescam. Available online: http://www.wescam.com/index.php (accessed on 15 July 2016).
  3. Liu, X.; Lu, X.; Shen, H.; Yuan, Q.; Jiao, Y.; Zhang, L. Stripe Noise Separation and Removal in Remote Sensing Images by Consideration of the Global Sparsity and Local Variational Properties. IEEE Trans. Geosci. Remote Sens. 2016, 54, 3049–3060. [Google Scholar] [CrossRef]
  4. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677. [Google Scholar] [CrossRef]
  5. Mudau, A.E.; Willers, C.J.; Griffith, D.; Roux, F.P. Non-uniformity correction and bad pixel replacement on LWIR and MWIR images. In Proceedings of the IEEE Saudi International Electronics, Communications and Photonics Conference (SIECPC), Riyadh, Saudi Arabia, 24–26 April 2011; pp. 1–5. [Google Scholar]
  6. Ratliff, B.M.; Kaufman, J.R. Scene-based correction of fixed pattern noise in hyperspectral image data using temporal reordering. Opt. Eng. 2015, 54, 093102. [Google Scholar] [CrossRef]
  7. Boutemedjet, A.; Deng, C.; Zhao, B. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array. Sensors 2016, 16, 1890. [Google Scholar] [CrossRef] [PubMed]
  8. Cao, Y.; Yang, M.Y.; Tisse, C.L. Effective Strip Noise Removal for Low-Textured Infrared Images Based on 1-D Guided Filtering. IEEE Trans. Circuits Syst. Video Technol. 2016, 26, 2176–2188. [Google Scholar] [CrossRef]
  9. Kim, M.Y.; Velubolu, K.C.; Lee, S.G. Lateral scanning linnik interferometry for large field of view and fast scanning: Wafer bump inspection. Int. J. Optomech. 2011, 5, 271–285. [Google Scholar] [CrossRef]
  10. Chang-Yan, C.; Ji-Xian, Z.; Zheng-Jun, L. Study on Methods of Noise Reduction in a Stripped Image. ISPRS 2008, XXXVII, 213–216. [Google Scholar]
  11. Tendero, T.; Landeau, S.; Gilles, J. Non-uniformity correction of infrared images by midway equalization. Image Process. Line 2012, 2, 134–146. [Google Scholar] [CrossRef]
  12. Kim, S. Two-point correction and minimum filter-based nonuniformity correction for scan-based aerial infrared cameras. Opt. Eng. 2012, 51, 106401. [Google Scholar] [CrossRef]
  13. Hardie, R.C.; Hayat, M.M.; Armstrong, E.; Yasuda, B. Scene-based nonuniformity correction with video sequences and registration. Appl. Opt. 2000, 39, 1241–1250. [Google Scholar] [CrossRef] [PubMed]
  14. Black, W.T.; Tyo, J.S. Feedback-integrated scene cancellation scene-based nonuniformity correction algorithm. J. Electron. Imaging 2014, 23, 023005. [Google Scholar] [CrossRef]
  15. Gu, S.; Xie, Q.; Meng, D.; Zuo, W.; Feng, X.; Zhang, L. Weighted nuclear norm minimization and its applications to low level vision. Int. J. Comput. Vis. 2017, 121, 183–208. [Google Scholar] [CrossRef]
  16. D’Andrès, L.; Salvador, J.; Kochale, A.; Süsstrunk, S. Non-Parametric Blur Map Regression for Depth of Field Extension. IEEE Trans. Image Process. 2016, 25, 1660–1673. [Google Scholar] [CrossRef] [PubMed]
  17. Tomasi, C.; Kanade, T. Detection and Tracking of Point Features. Pattern Recognit. 2004, 37, 165–168. [Google Scholar]
  18. Bouwmans, T.; Sobral, A.; Javed, S.; Jung, S.K.; Zahzah, E.H. Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset. Comput. Sci. Rev. 2017, 23, 1–71. [Google Scholar] [CrossRef]
  19. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  20. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis. J. ACM 2011, 58, 11. [Google Scholar] [CrossRef]
  21. Gao, Z.; Cheong, L.F.; Shan, M. Block-sparse RPCA for consistent foreground detection. In Proceedings of the European Conference on Computer Vision, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 690–703. [Google Scholar]
  22. Bishop, C.M. Pattern Recognition and Machine Learning, 8th ed.; Springer: Cambridge, UK, 2007; pp. 561–586. [Google Scholar]
  23. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  24. Tsai, F.; Chen, W.W. Striping noise detection and correction of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 4122–4131. [Google Scholar] [CrossRef]
  25. Liu, X.; Shen, H.; Yuan, Q.; Lu, X.; Zhou, C. A Universal Destriping Framework Combining 1-D and 2-D Variational Optimization Methods. IEEE Trans. Geosci. Remote Sens. 2017. [Google Scholar] [CrossRef]
  26. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Advantages of using IR images in a general electro-optical sensor system [2].
Figure 2. Block diagram of the conventional NUC method.
Figure 3. Design example of a middle wave IR (MWIR) camera system: (a) optical path structures; (b) example of inner Dewar structures; (c) focal plane array (FPA) and readout integrated circuit (ROIC) schematic and electrical diagrams.
Figure 4. Block diagram of the proposed BRANF algorithm.
Figure 5. An example of evaluating the noise in MWIR images using subtraction signal-to-noise ratio (SSNR). Here, (a,c) are original images; (b) shows noise components containing horizontal line flickering noise; and (d) shows noise components containing horizontal/vertical line and dot flickering noise.
Figure 6. Measurement principle for a sum of noise map (SNM) index.
Figure 7. Measurement of noise map of an MWIR image using SNM: (a) noisy image; (c) noise-filtered image; (b,d) SNM measurement results.
Figure 8. Noise-measurement variant structural similarity (VSSIM) index: (a) original noisy image; (b) denoised image; (c) noise map of the original noisy image (VSSIM = 0.8814); (d) noise map of denoised image (VSSIM = 0.9947).
Figure 9. Experiment configurations: (a) An observation environment for simulation targets on a collimator with forward looking IR (FLIR) and electro-optical IR (EOIR); (b) an observation environment for actual targets.
Figure 10. NUC, WNNM, and the proposed BRANF applied to MWIR and LWIR images with various types of noise: (a,b) flickering lines and dots; (c) vertical oblique lines; (d,e) wavefront aberration; (f) LTV line noise. (Experimental data set 1).
Figure 11. NUC, WNNM, and the proposed BRANF applied to MWIR and LWIR images with various types of noise: (g,h) moving oblique line noise; (i–k) LTV vertical line noise; (l) little noise case; (m) nonfixed jitter noise. (Experimental data set 2).
Figure 12. (a) SSNR value; (b) SNM value; (c) VSSIM value; (d) computation time of WNNM and BRANF.
Figure 13. Boxplots and mean values of experimental results. (a) SSNR result; (b) SNM result; (c) VSSIM result.
Table 1. Problematic noise types and methods for addressing them: nonuniformity correction (NUC), two-point correction (TPC), scene-based NUC (SBNUC), feedback-integrated scene cancellation (FiSC), weighted nuclear norm minimization (WNNM), and the proposed method. (O: perfect, △: partial, X: incomplete solution.)

| Category | Noise Type | NUC [2] | TPC [4] | SBNUC [5] | FiSC [6] | WNNM [16] | Proposed |
|---|---|---|---|---|---|---|---|
| Basic types of noise | Fixed pattern (grid, cloud, etc.) | O | O | O | O | O | O |
| Basic types of noise | Shading phenomenon | O | O | O | O | O | O |
| Basic types of noise | Dead pixels | O | O | O | O | O | O |
| Problematic noise | ① Moving dots and lines | X | O | X | △ | O | O |
| Problematic noise | ② Wavefront aberration | X | X | X | X | X | O |
| Problematic noise | ③ Long-term variant (LTV) fixed pattern noise | X | △ | △ | △ | △ | O |
Table 2. IR image quality improvement methods: nonuniformity correction (NUC), two-point correction (TPC), scene-based NUC (SBNUC), feedback-integrated scene cancellation (FiSC), weighted nuclear norm minimization (WNNM), and the proposed background registration-based adaptive noise filtering (BRANF) method.

| Method | Summary | Advantages | Disadvantages |
|---|---|---|---|
| NUC [2] | One-reference method using a black body | Simple software design | Long process time with thermal chamber facilities; time/thermal-variant noise remains |
| TPC [4] | Two-reference method using built-in thermoelectric coolers (TECs) | Short process time | Complex hardware design; periodic operation required |
| SBNUC [5] | Background-motion- and sequence-based nonuniformity correction | Simple hardware design; real-time compensation | Correction limited for stationary backgrounds; blurring, ghosting, and fading problems |
| FiSC [6] | Uniformity correction based on a time-shift estimate between two frames | Short-term variant noise cancellation | Background motion is essential; blurring and ghosting problems |
| WNNM [16] | Low-rank-based image noise reduction | Bad pixel (moving dot) replacement; high peak signal-to-noise ratio (PSNR) performance | Limited for morphological and time-variant noise |
| Proposed BRANF | Dynamic, aberration, and long-term variant noise reduction using adaptive filtering algorithms | Reduces both dynamic and static noise; minimizes loss of the original image | Computational complexity |
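Table 2 characterizes WNNM as a low-rank method: a matrix built from similar image patches is nearly low rank, so shrinking its small singular values suppresses noise while preserving structure. The sketch below illustrates this idea with plain singular value thresholding (SVT) in NumPy; it is an illustrative baseline only, since WNNM itself replaces the uniform threshold `tau` with singular-value-dependent weights, and the function name and parameter values here are not from the paper.

```python
import numpy as np

def svt_denoise(patch_matrix, tau):
    """Soft-threshold the singular values of a patch matrix.

    Plain SVT step (illustrative): WNNM would instead apply a weight
    per singular value rather than one uniform threshold `tau`.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s = np.maximum(s - tau, 0.0)      # shrink singular values toward zero
    return (U * s) @ Vt               # low-rank reconstruction

# Toy example: a rank-1 "clean" matrix corrupted by dense Gaussian noise.
rng = np.random.default_rng(0)
clean = np.outer(np.ones(32), np.linspace(0.0, 1.0, 64))
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = svt_denoise(noisy, tau=0.5)

err_noisy = np.linalg.norm(noisy - clean)     # error before filtering
err_den = np.linalg.norm(denoised - clean)    # error after SVT
```

Because the noise contributes only small singular values while the clean signal concentrates in one large singular value, the thresholded reconstruction lands closer to the clean matrix than the noisy input does.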
Table 3. Quantitative measurement values comparing the proposed BRANF with the NUC and WNNM methods. SSNR and VSSIM: higher is better (↑); SNM: lower is better (↓).

| LWIR/MWIR ¹ (Targets) | Image (Figures 10 and 11) | Noise Type | SSNR NUC [2] | SSNR WNNM [16] | SSNR Proposed | SNM NUC | SNM WNNM | SNM Proposed | VSSIM NUC | VSSIM WNNM | VSSIM Proposed |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MWIR (Room) | (a) | Lines, dots | 37.65 | 43.31 | **48.98** | 31.44 | **13.48** | 19.39 | 0.95 | 0.98 | **1.00** |
| MWIR (Room) | (b) | Lines | 45.10 | 40.30 | **59.00** | 36.30 | **20.00** | 30.54 | 0.89 | 0.98 | **0.99** |
| LWIR (Field) | (c) | Oblique lines | 38.68 | 45.44 | **49.75** | 109.63 | **48.05** | 77.34 | 0.88 | 0.97 | **0.99** |
| MWIR (Field) | (d) | Wavefront aberration | 28.12 | 30.29 | **32.08** | 82.82 | **69.91** | 82.14 | 0.82 | 0.91 | **0.99** |
| MWIR (Field) | (e) | Wavefront aberration | 40.62 | 44.66 | **51.49** | 98.79 | **48.65** | 93.85 | 0.84 | 0.98 | **0.99** |
| LWIR (Field) | (f) | LTV lines | 34.51 | 38.60 | **41.15** | 115.37 | **16.50** | 19.38 | 0.84 | 0.97 | **0.99** |
| MWIR (Wide FOV ²) | (g) | Oblique lines | 28.40 | 31.56 | **35.20** | 87.49 | **53.54** | 56.83 | 0.89 | 0.95 | **0.96** |
| MWIR (Narrow FOV) | (h) | Oblique lines | 37.08 | 43.42 | **47.82** | 179.17 | 40.34 | **17.94** | 0.95 | 0.98 | **0.99** |
| LWIR (Field) | (i) | LTV lines | 38.36 | 41.64 | **53.04** | 97.90 | **52.89** | 80.24 | 0.94 | 0.98 | **0.99** |
| LWIR (Field) | (j) | LTV lines | 23.26 | 23.53 | **24.75** | 141.67 | **68.17** | 109.14 | 0.78 | 0.89 | **0.91** |
| LWIR (Field) | (k) | Severe lines | 24.31 | 24.73 | **27.53** | 174.48 | **59.83** | 86.58 | 0.66 | 0.82 | **0.84** |
| LWIR (Field) | (l) | Less lines | 38.05 | 41.86 | **48.13** | 115.50 | **91.03** | 105.91 | 0.93 | 0.98 | **0.99** |
| LWIR (Room) | (m) | Jitters | 42.27 | 46.86 | **58.51** | 46.06 | **36.08** | 44.77 | 0.94 | 0.97 | **0.99** |

¹ Middle/long wavelength infrared (MWIR/LWIR); ² field of view (FOV). Bold: the best performance of each measurement.
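The VSSIM scores in Table 3 are the paper's noise-measurement variant of the structural similarity (SSIM) index. For orientation, the sketch below computes a minimal single-window SSIM between two images using the standard constants; it is only a baseline under those standard assumptions, since the paper's variant applies the idea to noise maps without a ground-truth image and its exact weighting differs.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    # Standard single-window SSIM: luminance, contrast, and structure
    # terms with the usual stabilizing constants. The paper's VSSIM is
    # a variant of this index, not reproduced exactly here.
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.tile(np.arange(64, dtype=float), (64, 1))  # simple gradient image
s_same = ssim_global(img, img)        # identical images score 1.0
s_shift = ssim_global(img, img + 25.0)  # brightness shift scores below 1.0
```

A perfect match yields a score of 1.0, and any mismatch in mean, variance, or structure pulls the score down, which matches the pattern in Table 3 where cleaner outputs approach 1.0.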

Share and Cite

Kim, B.H.; Kim, M.Y.; Chae, Y.S. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications. Sensors 2018, 18, 60. https://doi.org/10.3390/s18010060