Advanced Driving Assistance Based on the Fusion of Infrared and Visible Images
Abstract
1. Introduction
2. Related Work
2.1. Advanced Driving Assistance
2.2. Infrared and Visible Image Fusion Based on Deep Learning
3. Proposed Method
3.1. Fusion Formulation
3.2. Network Architecture
3.3. Loss Functions
3.3.1. Fuse-Generator Loss
3.3.2. Discriminator Loss
4. Experimental Results and Analysis
4.1. Experimental Settings
Algorithm 1: Training details of FusionADA

Parameter definitions:

- $s_G$, $s_D$: the numbers of steps for training $G$ and $D$.
- $a$ and $b$: thresholds applied to determine a target range for the discriminator loss during training.
- $\mathcal{L}_G^{adv}$ and $\mathcal{L}_D^{adv}$: the adversarial losses of $G$ and $D$.
- $\mathcal{L}_G$: the total loss of $G$.

We set $s_G$, $s_D$, $a$, and $b$ empirically on the first batch in our work.
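The alternating scheme of Algorithm 1 can be summarised in a short PyTorch-style sketch. This is a minimal illustration only: the symbol names follow the placeholders above, the loss helpers and default values are hypothetical stand-ins (the paper's exact formulations are given in Section 3.3), and none of this is the authors' released code.

```python
import torch
import torch.nn.functional as F

# Hypothetical loss helpers; see Section 3.3 for the paper's actual terms.
def adv_loss_D(d_vis, d_fused):
    # D should score visible images as real (1) and fused images as fake (0).
    return F.mse_loss(d_vis, torch.ones_like(d_vis)) + \
           F.mse_loss(d_fused, torch.zeros_like(d_fused))

def adv_loss_G(d_fused):
    # G tries to make fused images look real to D.
    return F.mse_loss(d_fused, torch.ones_like(d_fused))

def content_loss(fused, ir, vis):
    # Placeholder content term: keep infrared intensities and visible gradients.
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]
    return F.mse_loss(fused, ir) + F.l1_loss(dx(fused), dx(vis))

def train_one_batch(G, D, opt_G, opt_D, ir, vis, s_G=2, s_D=2, a=0.3, b=0.7):
    # Train D for at most s_D steps; stop once its loss falls into [a, b],
    # which keeps the generator and discriminator roughly balanced.
    for _ in range(s_D):
        fused = G(ir, vis).detach()            # no gradients into G here
        loss_D = adv_loss_D(D(vis), D(fused))
        if a <= loss_D.item() <= b:
            break
        opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train G for s_G steps on its total loss (content + adversarial terms).
    for _ in range(s_G):
        fused = G(ir, vis)
        loss_G = content_loss(fused, ir, vis) + adv_loss_G(D(fused))
        opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

The early-stopping check on the discriminator loss is what the thresholds $a$ and $b$ buy: $D$ is updated only while it is too strong or too weak, rather than for a fixed number of steps.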
4.2. Comparison Algorithms and Evaluation Metrics
4.3. Qualitative Comparisons
4.4. Quantitative Comparisons
4.5. Ablation Experiment of Adversarial Learning
4.6. Generalization Performance
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ziebinski, A.; Cupek, R.; Erdogan, H.; Waechter, S. A survey of ADAS technologies for the future perspective of sensor fusion. In Proceedings of the International Conference on Computational Collective Intelligence, Halkidiki, Greece, 28–30 September 2016; pp. 135–146.
- Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
- Huang, X.; Qi, G.; Wei, H.; Chai, Y.; Sim, J. A novel infrared and visible image information fusion method based on phase congruency and image entropy. Entropy 2019, 21, 1135.
- Ma, J.; Xu, H.; Jiang, J.; Mei, X.; Zhang, X.P. DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans. Image Process. 2020, 29, 4980–4995.
- Yin, S.; Wang, Y.; Yang, Y.H. A Novel Residual Dense Pyramid Network for Image Dehazing. Entropy 2019, 21, 1123.
- Li, X.; Zhang, X.; Ding, M. A sum-modified-Laplacian and sparse representation based multimodal medical image fusion in Laplacian pyramid domain. Med. Biol. Eng. Comput. 2019, 57, 2265–2275.
- Teng, J.; Wang, S.; Zhang, J.; Wang, X. Neuro-fuzzy logic based fusion algorithm of medical images. In Proceedings of the International Congress on Image and Signal Processing, Yantai, China, 16–18 October 2010; pp. 1552–1556.
- Zhao, F.; Xu, G.; Zhao, W. CT and MR Image Fusion Based on Adaptive Structure Decomposition. IEEE Access 2019, 7, 44002–44009.
- Liu, Y.; Yang, X.; Zhang, R.; Albertini, M.K.; Celik, T.; Jeon, G. Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter. Entropy 2020, 22, 118.
- Jiang, W.; Yang, X.; Wu, W.; Liu, K.; Ahmad, A.; Sangaiah, A.K.; Jeon, G. Medical images fusion by using weighted least squares filter and sparse representation. Comput. Electr. Eng. 2018, 67, 252–266.
- Xu, Z. Medical image fusion using multi-level local extrema. Inf. Fusion 2014, 19, 38–48.
- Jiang, F.; Kong, B.; Li, J.; Dashtipour, K.; Gogate, M. Robust visual saliency optimization based on bidirectional Markov chains. Cogn. Comput. 2020, 1–12.
- Tian, X.; Chen, Y.; Yang, C.; Ma, J. Variational Pansharpening by Exploiting Cartoon-Texture Similarities. IEEE Trans. Geosci. Remote Sens. 2021.
- Ma, J.; Zhang, H.; Yi, P.; Wang, Z. SCSCN: A Separated Channel-Spatial Convolution Net with Attention for Single-view Reconstruction. IEEE Trans. Ind. Electron. 2020, 67, 8649–8658.
- Ma, J.; Wang, X.; Jiang, J. Image Super-Resolution via Dense Discriminative Network. IEEE Trans. Ind. Electron. 2020, 67, 5687–5695.
- Shopovska, I.; Jovanov, L.; Philips, W. Deep visible and thermal image fusion for enhanced pedestrian visibility. Sensors 2019, 19, 3727.
- Huang, J.; Le, Z.; Ma, Y.; Mei, X.; Fan, F. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput. Appl. 2020, 32, 15119–15129.
- Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolution Inf. Process. 2018, 16, 1850018.
- Li, H.; Wu, X.J. DenseFuse: A fusion approach to infrared and visible images. IEEE Trans. Image Process. 2018, 28, 2614–2623.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
- Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
- Ma, J.; Liang, P.; Yu, W.; Chen, C.; Guo, X.; Wu, J.; Jiang, J. Infrared and visible image fusion via detail preserving adversarial learning. Inf. Fusion 2020, 54, 85–98.
- Xu, H.; Liang, P.; Yu, W.; Jiang, J.; Ma, J. Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators. In Proceedings of the International Joint Conference on Artificial Intelligence, Macao, China, 10–16 August 2019; pp. 3954–3960.
- Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109.
- Bavirisetti, D.P.; Xiao, G.; Liu, G. Multi-sensor image fusion based on fourth order partial differential equations. In Proceedings of the International Conference on Information Fusion, Xi’an, China, 10–13 July 2017; pp. 1–9.
- Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26.
- Zhang, H.; Xu, H.; Xiao, Y.; Guo, X.; Ma, J. Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity. In Proceedings of the AAAI Conference on Artificial Intelligence, Hilton New York Midtown, NY, USA, 7–12 February 2020; pp. 12797–12804.
- Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
- Ma, J.; Zhang, H.; Shao, Z.; Liang, P.; Xu, H. GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion. IEEE Trans. Instrum. Meas. 2020, 70, 1–14.
- Eskicioglu, A.M.; Fisher, P.S. Image quality measures and their performance. IEEE Trans. Commun. 1995, 43, 2959–2965.
- Roberts, J.W.; van Aardt, J.A.; Ahmed, F.B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 023522.
- Yang, Z.; Chen, Y.; Le, Z.; Fan, F.; Pan, E. Multi-source medical image fusion based on Wasserstein generative adversarial networks. IEEE Access 2019, 7, 175947–175958.
- Aslantas, V.; Bendes, E. A new image quality metric for image fusion: The sum of the correlations of differences. AEU-Int. J. Electron. Commun. 2015, 69, 1890–1896.
| Module | Layer | Number of Input Channels | Number of Output Channels |
|---|---|---|---|
| Encoder | Convolutional layer 1 of branch 1/2 | 1 | 16 |
| Encoder | Convolutional layer 2 of branch 1/2 | 16 | 16 |
| Encoder | Convolutional layer 3 of branch 1/2 | 32 | 16 |
| Encoder | Convolutional layer 4 of branch 1/2 | 48 | 16 |
| Encoder | Concatenation | 2 × 64 | 128 |
| Decoder | Convolutional layer 1 | 128 | 64 |
| Decoder | Convolutional layer 2 | 64 | 32 |
| Decoder | Convolutional layer 3 | 32 | 1 |
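The channel counts describe a densely connected encoder: in each branch, every convolution outputs 16 channels and takes the concatenation of all earlier outputs as input (hence the 1, 16, 32, 48 input widths), each branch emits 4 × 16 = 64 channels, the two branches concatenate to 128, and the decoder reduces 128 → 64 → 32 → 1. A minimal PyTorch sketch of that wiring follows; the kernel sizes, padding, and activations are assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn

class DenseBranch(nn.Module):
    # One encoder branch: four 16-channel convolutions with dense
    # concatenation, matching the table (inputs 1, 16, 32, 48 channels).
    def __init__(self):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(c_in, 16, kernel_size=3, padding=1)  # kernel size assumed
            for c_in in (1, 16, 32, 48)
        )
        self.act = nn.LeakyReLU(0.2)  # activation assumed

    def forward(self, x):
        outs = []
        for conv in self.convs:
            inp = x if not outs else torch.cat(outs, dim=1)  # reuse earlier maps
            outs.append(self.act(conv(inp)))
        return torch.cat(outs, dim=1)  # 4 x 16 = 64 channels per branch

class FuseGenerator(nn.Module):
    # Two-branch encoder (infrared / visible) and three-layer decoder.
    def __init__(self):
        super().__init__()
        self.branch_ir, self.branch_vis = DenseBranch(), DenseBranch()
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),  # output activation assumed
        )

    def forward(self, ir, vis):
        z = torch.cat([self.branch_ir(ir), self.branch_vis(vis)], dim=1)  # 128 ch
        return self.decoder(z)
```

A quick shape check: `FuseGenerator()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))` returns a (1, 1, 64, 64) fused image, since every convolution preserves spatial size.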
| Method | SD | SF | EN | MG | EI | FMI | SCD | VIF |
|---|---|---|---|---|---|---|---|---|
| GTF |  |  |  |  |  |  |  |  |
| FPDE |  |  |  |  |  |  |  |  |
| HMSD |  |  |  |  |  |  |  |  |
| DenseFuse |  |  |  |  |  |  |  |  |
| PMGI |  |  |  |  |  |  |  |  |
| U2Fusion |  |  |  |  |  |  |  |  |
| GANMcC |  |  |  |  |  |  |  |  |
| FusionADA |  |  |  |  |  |  |  |  |
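Several of these metrics are simple image statistics. As a concrete example, EN is the Shannon entropy of the grey-level histogram and SD is the standard deviation of pixel intensities; for both, larger values indicate a more informative, higher-contrast fused result. A small NumPy sketch (an 8-bit greyscale image is assumed):

```python
import numpy as np

def entropy_en(img: np.ndarray) -> float:
    # EN: Shannon entropy of the grey-level histogram (8-bit image assumed).
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                          # skip empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def std_sd(img: np.ndarray) -> float:
    # SD: standard deviation of pixel intensities, a rough contrast measure.
    return float(img.std())
```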
| | GTF | FPDE | HMSD | DenseFuse | PMGI | U2Fusion | GANMcC | FusionADA |
|---|---|---|---|---|---|---|---|---|
| Running Time |  |  |  |  |  |  |  |  |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).