Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization
Abstract
1. Introduction
- An effective two-scale decomposition algorithm based on truncated Huber penalty function (THPF) smoothing is proposed to decompose the source images into approximate and residual layers. The THPF-based decomposition efficiently extracts feature information (e.g., edges and contours) and keeps the edges and structures of the fused results free of halo artifacts.
- A visual saliency based threshold optimization (VSTO) fusion rule is proposed to merge the approximate layers. The VSTO rule suppresses contrast loss and highlights the significant targets in the IR images and the high-intensity regions in the visible images. The fused images are more natural and consistent with human visual perception, which facilitates both human scene understanding and subsequent computer processing.
- Unlike most fusion methods that use sparse representation (SR) to decompose an image or to merge the low-frequency sub-band images, we apply an SR-based fusion rule to the residual layers to obtain rich feature information (e.g., detail, edge, and contrast). Subjective and objective evaluations demonstrate that considerable feature information from the IR and visible images is integrated into the fused image.
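The THPF at the heart of the first contribution behaves quadratically near zero (smoothing weak textures), linearly for moderate values (preserving edges), and is constant beyond a truncation point (preserving strong structures, which is what suppresses halos). The following is a minimal illustrative sketch only; the parameter names `a` and `b` and the exact piecewise form are assumptions for this example, not the paper's definition:

```python
import numpy as np

def truncated_huber(x, a=0.5, b=2.0):
    """Illustrative truncated Huber penalty.

    Quadratic for |x| <= a, linear for a < |x| <= b,
    and truncated to a constant for |x| > b (with b > a).
    """
    x = np.abs(np.asarray(x, dtype=float))
    quad = x ** 2 / (2.0 * a)          # smooth region around zero
    lin = x - a / 2.0                  # linear region, continuous at |x| = a
    out = np.where(x <= a, quad, lin)
    return np.minimum(out, b - a / 2.0)  # truncation beyond |x| = b
```

The truncation is the key difference from the classical Huber penalty: large gradients incur a bounded cost, so the smoother does not over-penalize (and thus blur or ring around) strong edges.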
2. Related Works
3. Methodology
3.1. Image Decomposition
3.1.1. Truncated Huber Penalty Function
3.1.2. Edge- and Structure-Preserving Image Smoothing Using Truncated Huber Penalty Function
3.1.3. THPF Smoothing-Based Two-Scale Image Decomposition
3.2. Approximate Layer Fusion Using Visual Saliency-Based Threshold Optimization
3.3. Residual Layer Fusion Using Sparse Representation
3.3.1. Sparse Representation
3.3.2. Residual Layer Fusion
3.4. Reconstruction
Algorithm 1: Pseudo code of the proposed fusion method.
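The overall pipeline described in Sections 3.1 to 3.4 can be sketched at a high level as follows. This is a structural sketch only: `smooth`, `fuse_approx`, and `fuse_residual` are placeholder callables standing in for THPF smoothing, the VSTO rule, and the SR rule respectively, which the paper defines in detail:

```python
import numpy as np

def fuse(ir, vis, smooth, fuse_approx, fuse_residual):
    """Two-scale fusion skeleton: decompose, fuse per layer, reconstruct."""
    # Two-scale decomposition: approximate layer from smoothing,
    # residual layer as the remainder (Section 3.1.3).
    approx_ir, approx_vis = smooth(ir), smooth(vis)
    resid_ir, resid_vis = ir - approx_ir, vis - approx_vis
    # Layer-wise fusion (Sections 3.2 and 3.3).
    fused_approx = fuse_approx(approx_ir, approx_vis)
    fused_resid = fuse_residual(resid_ir, resid_vis)
    # Reconstruction (Section 3.4): sum of the fused layers.
    return fused_approx + fused_resid
```

With trivial stand-ins (zero smoothing and element-wise maximum rules), the skeleton reduces to a plain max fusion, which shows that all of the method's behavior lives in the three plugged-in components.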
4. Experimental Setting
4.1. Image Data Set and Setting
4.2. Other Fusion Methods for Comparison
4.3. Objective Assessment Metrics
4.4. Parameter Analysis and Setting
5. Results and Analysis
5.1. Qualitative Analysis via Subjective Evaluations
5.2. Quantitative Analysis via Objective Assessments
5.3. Running Time Comparison
6. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Yang, Y.; Zhang, Y.; Huang, S.; Zuo, Y.; Sun, J. Infrared and Visible Image Fusion Using Visual Saliency Sparse Representation and Detail Injection Model. IEEE Trans. Instrum. Meas. 2021, 70, 1–15.
- Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
- Hou, J.; Zhang, D.; Wu, W.; Ma, J.; Zhou, H. A Generative Adversarial Network for Infrared and Visible Image Fusion Based on Semantic Segmentation. Entropy 2021, 23, 376.
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112.
- Zhang, S.; Liu, F. Infrared and visible image fusion based on non-subsampled shearlet transform, regional energy, and co-occurrence filtering. Electron. Lett. 2020, 56, 761–764.
- Jin, X.; Qian, J.; Yao, S.; Zhou, D.; He, K. A survey of infrared and visual image fusion methods. Infrared Phys. Technol. 2017, 85, 478–501.
- Liu, L.; Chen, M.; Xu, M.; Li, X. Two-stream network for infrared and visible images fusion. Neurocomputing 2021, 460, 50–58.
- Bavirisetti, D.P.; Dhuli, R. Two-scale image fusion of visible and infrared images using saliency detection. Infrared Phys. Technol. 2016, 76, 52–64.
- Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173.
- Zhang, Q.; Liu, Y.; Blum, R.S.; Han, J.; Tao, D. Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: A review. Inf. Fusion 2018, 40, 57–75.
- Sun, C.; Zhang, C.; Xiong, N. Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review. Electronics 2020, 9, 2162.
- Zhang, Z.; Blum, R.S. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proc. IEEE 1999, 87, 1315–1326.
- Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region based image fusion methods. Inf. Fusion 2019, 48, 119–132.
- Patel, A.; Chaudhary, J. A Review on Infrared and Visible Image Fusion Techniques. In Intelligent Communication Technologies and Virtual Mobile Networks; Balaji, S., Rocha, A., Chung, Y., Eds.; Springer: Cham, Switzerland, 2020; pp. 127–144.
- Ji, L.; Yang, F.; Guo, X. Image Fusion Algorithm Selection Based on Fusion Validity Distribution Combination of Difference Features. Electronics 2021, 10, 1752.
- Burt, P.J.; Adelson, E.H. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540.
- Li, H.; Manjunath, B.; Mitra, S. Multisensor image fusion using the wavelet transform. Graph. Models Image Process. 1995, 57, 235–245.
- Lewis, J.J.; O’Callaghan, R.J.; Nikolov, S.G.; Bull, D.R.; Canagarajah, N. Pixel- and region-based image fusion with complex wavelets. Inf. Fusion 2007, 8, 119–130.
- Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
- Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multiresolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
- Zhang, Q.; Guo, B. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346.
- Yazdi, M.; Ghasrodashti, E.K. Image fusion based on Non-Subsampled Contourlet Transform and phase congruency. In Proceedings of the 2012 19th International Conference on Systems, Signals and Image Processing (IWSSIP), Vienna, Austria, 11–13 April 2012; pp. 616–620.
- Kong, W.; Wang, B.; Lei, Y. Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model. Infrared Phys. Technol. 2015, 71, 87–98.
- Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875.
- Kumar, B.K.S. Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process. 2015, 9, 1193–1204.
- Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multiscale decomposition with Gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26.
- Ma, J.; Zhou, Z.; Wang, B.; Zong, H. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys. Technol. 2017, 82, 8–17.
- Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multiscale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z. Image Fusion with Convolutional Sparse Representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
- Liu, C.; Qi, Y.; Ding, W. Infrared and visible image fusion method based on saliency detection in sparse domain. Infrared Phys. Technol. 2017, 83, 94–102.
- Ma, T.; Jie, M.; Fang, B.; Hu, F.; Quan, S.; Du, H. Multi-scale decomposition based fusion of infrared and visible image via total variation and saliency analysis. Infrared Phys. Technol. 2018, 92, 154–162.
- Li, H.; Wu, X.J.; Durrani, T.S. Infrared and Visible Image Fusion with ResNet and zero-phase component analysis. Infrared Phys. Technol. 2019, 102, 103039.
- Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolution Inf. Process. 2018, 16, 353–389.
- Liu, Y.; Dong, L.; Ji, Y.; Xu, W. Infrared and Visible Image Fusion through Details Preservation. Sensors 2019, 19, 4556.
- An, W.; Wang, H. Infrared and visible image fusion with supervised convolutional neural network. Optik 2020, 219, 165120.
- Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118.
- Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X. DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
- Liu, W.; Zhang, P.; Lei, Y.; Huang, X.; Reid, I. A generalized framework for edge-preserving and structure-preserving image smoothing. IEEE Trans. Pattern Anal. Mach. Intell. 2021.
- Huber, P.J. Robust estimation of a location parameter. Ann. Math. Stat. 1964, 35, 73–101.
- Ghasrodashti, E.K.; Karami, A.; Heylen, R.; Scheunders, P. Spatial Resolution Enhancement of Hyperspectral Images Using Spectral Unmixing and Bayesian Sparse Representation. Remote Sens. 2017, 9, 541.
- Ghasrodashti, E.K.; Helfroush, M.S.; Danyali, H. Sparse-Based Classification of Hyperspectral Images Using Extended Hidden Markov Random Fields. IEEE J.-STARS 2018, 11, 4101–4112.
- Bruckstein, A.M.; Donoho, D.L.; Elad, M. From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev. 2009, 51, 34–81.
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
- Chen, J.; Li, X.; Luo, L.; Mei, X.; Ma, J. Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Inf. Sci. 2020, 508, 64–78.
- Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 313–315.
- Haghighat, M.B.A.; Aghagolzadeh, A. A non-reference image fusion metric based on mutual information of image features. Comput. Electr. Eng. 2011, 37, 744–756.
- Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
- Wang, Q.; Shen, Y.; Zhang, J.Q. A nonlinear correlation measure for multivariable data set. Phys. D Nonlinear Phenom. 2005, 200, 287–295.
- Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109.
- Wang, Q.; Shen, Y.; Jin, J. Performance evaluation of image fusion techniques. In Image Fusion: Algorithms and Applications; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 469–492.
| Method | SD | MI | FMI | Q | NCIE | NCC |
|---|---|---|---|---|---|---|
| DWT [17] | 31.826 | 1.7204 | 0.3965 | 0.5154 | 0.8040 | 0.2152 |
| CSR [29] | 28.553 | 2.1155 | 0.3888 | 0.5629 | 0.8049 | 0.2645 |
| TSVM [8] | 33.724 | 1.8689 | 0.3497 | 0.5281 | 0.8044 | 0.2337 |
| JSRSD [30] | 43.762 | 2.4944 | 0.2171 | 0.4003 | 0.8060 | 0.3121 |
| VSMWLS [27] | 37.449 | 2.0104 | 0.3610 | 0.4265 | 0.8046 | 0.2514 |
| ResNet [32] | 27.470 | 2.2058 | 0.4203 | 0.3858 | 0.8052 | 0.2758 |
| IFCNN [36] | 36.630 | 2.5294 | 0.4018 | 0.5282 | 0.8060 | 0.3163 |
| TE [44] | 41.362 | 2.2826 | 0.3763 | 0.5466 | 0.8061 | 0.2856 |
| DDcGAN [37] | 51.008 | 1.9325 | 0.4132 | 0.4015 | 0.8043 | 0.2417 |
| Ours | 42.486 | 5.2834 | 0.4263 | 0.5762 | 0.8201 | 0.6610 |
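Among the metrics above, MI measures how much information from the sources is retained in the fused image (the fusion MI is typically the sum of MI(IR, fused) and MI(visible, fused)). As a hedged sketch of the histogram-based ingredient, not the paper's exact implementation (the bin count is an assumption here):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information (in bits) between two images."""
    # Joint histogram -> joint probability estimate.
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    # Marginal probabilities, kept 2-D so the outer product broadcasts.
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0) on empty bins
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())
```

A sanity check on the metric: two perfectly dependent images attain MI equal to the entropy of one marginal, while independent images score near zero.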
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Duan, C.; Liu, Y.; Xing, C.; Wang, Z. Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization. Electronics 2022, 11, 33. https://doi.org/10.3390/electronics11010033
Duan C, Liu Y, Xing C, Wang Z. Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization. Electronics. 2022; 11(1):33. https://doi.org/10.3390/electronics11010033
Chicago/Turabian Style
Duan, Chaowei, Yiliu Liu, Changda Xing, and Zhisheng Wang. 2022. "Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization" Electronics 11, no. 1: 33. https://doi.org/10.3390/electronics11010033

APA Style
Duan, C., Liu, Y., Xing, C., & Wang, Z. (2022). Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization. Electronics, 11(1), 33. https://doi.org/10.3390/electronics11010033