Conditional Random Field-Guided Multi-Focus Image Fusion
Abstract
1. Introduction
The main contributions of this work are:
- the development of the novel Edge Aware Centering (EAC) method, which preserves the strong edges of the input images, in contrast to the typical centering method.
- the design of a novel framework, based on a CRF model, that is suitable for transform-domain image fusion.
- the design of a novel transform-domain fusion method that produces fused images of high visual quality, preserves, via CRF optimization, the boundary between well-focused and out-of-focus pixels, and does not introduce artifacts during fusion.
- the introduction of a novel transform-domain fusion rule, based on the labels extracted from the CRF model, that produces fused images of higher quality without transform-domain artifacts.
- the robustness of the proposed method against Gaussian noise and its support for denoising during fusion, achieved by applying the transform-domain coefficient shrinkage method of [10]; a minimal end-to-end sketch of such a pipeline follows this list.
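The sketch below illustrates the shape of such a label-guided fusion pipeline in NumPy. It is a simplified stand-in, not the proposed method: a Laplacian-energy focus measure replaces the paper's unary potential, a median filter replaces the CRF/α-expansion label optimization (Sections 2.2 and 2.3), and fusion is performed per pixel instead of on transform-domain coefficients.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter, median_filter

def fuse_multifocus(a: np.ndarray, b: np.ndarray, win: int = 9) -> np.ndarray:
    """Fuse two registered grayscale multi-focus images a, b (float in [0, 1])."""
    # 1. Focus measure: local energy of the Laplacian (proxy for the unary potential).
    fm_a = uniform_filter(laplace(a) ** 2, size=win)
    fm_b = uniform_filter(laplace(b) ** 2, size=win)
    # 2. Initial binary decision map: 1 where image a appears better focused.
    labels = (fm_a >= fm_b).astype(float)
    # 3. Spatial regularization (crude stand-in for the CRF smoothness term).
    labels = median_filter(labels, size=win)
    # 4. Fusion rule: per-pixel selection/blending guided by the label map.
    return labels * a + (1.0 - labels) * b
```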
2. Proposed Method Description
2.1. Edge Aware Centering
2.2. Energy Minimization
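Although this section is summarized here only by its heading, a pairwise CRF energy of the kind minimized in this framework has the standard form below; this is the generic formulation, with the paper's specific unary and smoothness potentials being the subjects of Sections 2.4 and 2.5:

$$
E(\mathbf{x}) = \sum_{i \in \mathcal{V}} \psi_u(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{E}} \psi_p(x_i, x_j), \qquad x_i \in \{0, 1\},
$$

where $\mathcal{V}$ is the set of pixels, $\mathcal{E}$ the set of neighboring pixel pairs, $\psi_u$ penalizes assigning a pixel to the less-focused input, $\psi_p$ penalizes label discontinuities, and $\lambda$ balances the two terms. The labeling $\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} E(\mathbf{x})$ is computed with the α-expansion method of Section 2.3 [34].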
2.3. Inference: α-Expansion Method
2.4. Unary Potential Estimation
2.5. Smoothness Term
2.6. Transform-Domain CRF Fusion Rule
3. Fusion and Denoising
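For intuition, the following is a minimal sketch of coefficient shrinkage applied during transform-domain fusion. The paper applies the shrinkage method of [10]; the soft-thresholding rule and the k·sigma threshold below are common stand-ins, not the authors' exact choices.

```python
import numpy as np

def soft_shrink(coeffs: np.ndarray, sigma: float, k: float = 1.5) -> np.ndarray:
    """Soft-threshold transform-domain coefficients to suppress Gaussian noise.

    sigma is the (estimated) noise standard deviation in the transform domain;
    the threshold k*sigma is a common heuristic, not the paper's setting.
    """
    t = k * sigma
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Because shrinkage operates on the same transform coefficients that the fusion rule selects, denoising is obtained during fusion at little extra cost.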
4. Experimental Results
4.1. Quantitative Evaluation
4.1.1. Mutual Information—MI
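One common formulation of the MI fusion metric, discussed in [45], sums the mutual information between the fused image $F$ and each input:

$$
MI = I(A;F) + I(B;F), \qquad I(X;F) = \sum_{x,\,f} p_{XF}(x,f)\,\log_2 \frac{p_{XF}(x,f)}{p_X(x)\,p_F(f)},
$$

where the marginal and joint probabilities are estimated from grayscale histograms; larger values indicate that more information from the inputs is transferred to the fused image.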
4.1.2. Yang’s Metric Qy
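Yang's metric [48] combines local structural similarity values, switching between a saliency-weighted sum and a maximum depending on how similar the two inputs are inside each local window $w$ (this is the standard definition given in [48]):

$$
Q_Y(w) =
\begin{cases}
\lambda(w)\,\mathrm{SSIM}(A,F \mid w) + \bigl(1 - \lambda(w)\bigr)\,\mathrm{SSIM}(B,F \mid w), & \mathrm{SSIM}(A,B \mid w) \ge 0.75,\\
\max\bigl\{\mathrm{SSIM}(A,F \mid w),\; \mathrm{SSIM}(B,F \mid w)\bigr\}, & \mathrm{SSIM}(A,B \mid w) < 0.75,
\end{cases}
$$

where $\lambda(w) = s(A \mid w) / \bigl(s(A \mid w) + s(B \mid w)\bigr)$ is a local saliency weight (e.g., local variance), and the final score averages $Q_Y(w)$ over all windows.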
4.1.3. Chen-Blum Metric—Q_CB
The Chen-Blum metric $Q_{CB}$ [49] is computed in the following steps (a compact sketch of the final aggregation follows the list):
- Contrast sensitivity filtering: the filtered image is obtained in the frequency domain as $\tilde{I}_A(u,v) = I_A(u,v)\,S(r)$, where $S(r)$ is the CSF filter in polar form and $r = \sqrt{u^2 + v^2}$.
- Local contrast computation: local contrast is computed with Peli's band-limited definition, $C(m,n) = \dfrac{\phi_k(m,n) \ast I(m,n)}{\phi_{k+1}(m,n) \ast I(m,n)} - 1$, where $\phi_k$ denotes a Gaussian low-pass filter at scale $k$.
- Contrast preservation calculation: for input image $I_A$, the masked contrast map is estimated as $C'_A(m,n) = \dfrac{t\,\bigl(C_A(m,n)\bigr)^p}{h\,\bigl(C_A(m,n)\bigr)^q + Z}$, where $t$, $h$, $p$, $q$, and $Z$ are constants of the contrast-masking model.
- Generation of saliency map: the saliency map for image $I_A$ is $\lambda_A(m,n) = \dfrac{C'^2_A(m,n)}{C'^2_A(m,n) + C'^2_B(m,n)}$. The value of information preservation is $Q_{AF}(m,n) = C'_F(m,n)/C'_A(m,n)$ if $C'_F < C'_A$, and $C'_A(m,n)/C'_F(m,n)$ otherwise (similarly for $Q_{BF}$).
- The global quality map is defined as $Q_{GQM}(m,n) = \lambda_A(m,n)\,Q_{AF}(m,n) + \lambda_B(m,n)\,Q_{BF}(m,n)$. The value of the metric is the average of the global quality map: $Q_{CB} = \overline{Q_{GQM}(m,n)}$.
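A compact NumPy sketch of the final aggregation steps, assuming the masked contrast maps $C'_A$, $C'_B$, $C'_F$ have already been computed as above (the small epsilon guards against division by zero and is not part of the metric):

```python
import numpy as np

def qcb_aggregate(cA: np.ndarray, cB: np.ndarray, cF: np.ndarray) -> float:
    """Aggregate masked contrast maps C'_A, C'_B, C'_F into the Q_CB score."""
    eps = 1e-12
    lam_a = cA**2 / (cA**2 + cB**2 + eps)                   # saliency of input A
    lam_b = 1.0 - lam_a                                     # saliency of input B
    q_af = np.minimum(cA, cF) / (np.maximum(cA, cF) + eps)  # preservation A -> F
    q_bf = np.minimum(cB, cF) / (np.maximum(cB, cF) + eps)  # preservation B -> F
    gqm = lam_a * q_af + lam_b * q_bf                       # global quality map
    return float(gqm.mean())                                # Q_CB
```

The min/max ratio is equivalent to the piecewise definition of $Q_{AF}$ above, since it always divides the smaller masked contrast by the larger one.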
4.1.4. Gradient-Based Methods [46,47]
4.1.5. Structural Similarity Index—SSIM [50]
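SSIM [50] compares the local means, variances, and cross-covariance of two images:

$$
\mathrm{SSIM}(x,y) = \frac{\bigl(2\mu_x\mu_y + C_1\bigr)\bigl(2\sigma_{xy} + C_2\bigr)}{\bigl(\mu_x^2 + \mu_y^2 + C_1\bigr)\bigl(\sigma_x^2 + \sigma_y^2 + C_2\bigr)},
$$

computed over local windows and averaged over the image; $C_1$ and $C_2$ are small constants that stabilize the division.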
4.1.6. NIQE [51]
4.1.7. Entropy
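The entropy of the fused image $F$ is the standard Shannon entropy of its normalized grayscale histogram $p_F$:

$$
EN(F) = -\sum_{l=0}^{255} p_F(l)\,\log_2 p_F(l),
$$

so for 8-bit images $EN(F) \le 8$ bits, with higher values indicating that the fused image carries more information.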
4.2. Qualitative Evaluation
4.3. Complexity
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion 2018, 42, 158–173.
2. Bai, X.; Zhang, Y.; Zhou, F.; Xue, B. Quadtree-based multi-focus image fusion using a weighted focus-measure. Inf. Fusion 2015, 22, 105–118.
3. Zhang, Y.; Bai, X.; Wang, T. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fusion 2017, 35, 81–101.
4. Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion 2015, 23, 139–155.
5. Qiu, X.; Li, M.; Zhang, L.; Yuan, X. Guided filter-based multi-focus image fusion through focus region detection. Signal Process. Image Commun. 2019, 72, 35–46.
6. Li, M.; Cai, W.; Tan, Z. A region-based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recognit. Lett. 2006, 27, 1948–1956.
7. Li, S.; Kang, X.; Hu, J.; Yang, B. Image matting for fusion of multi-focus images in dynamic scenes. Inf. Fusion 2013, 14, 147–162.
8. Singh, S.; Singh, H.; Mittal, N.; Hussien, A.G.; Sroubek, F. A feature level image fusion for night-vision context enhancement using arithmetic optimization algorithm based image segmentation. Expert Syst. Appl. 2022, 209, 118272.
9. Singh, S.; Mittal, N.; Singh, H. A feature level image fusion for IR and visible image using mNMRA based segmentation. Neural Comput. Appl. 2022, 34, 8137–8154.
10. Hyvärinen, A.; Hurri, J.; Hoyer, P.O. Independent Component Analysis. In Natural Image Statistics: A Probabilistic Approach to Early Computational Vision; Springer: London, UK, 2009; pp. 151–175.
11. Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion 2007, 8, 131–142.
12. Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. 2015, 9, 347–357.
13. Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
14. Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346.
15. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
16. Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient-based fusion for multi-focus images. Inf. Fusion 2014, 20, 60–72.
17. Shreyamsha Kumar, B.K. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process. 2013, 7, 1125–1143.
18. Qin, X.; Ban, Y.; Wu, P.; Yang, B.; Liu, S.; Yin, L.; Liu, M.; Zheng, W. Improved image fusion method based on sparse decomposition. Electronics 2022, 11, 2321.
19. Jagtap, N.S.; Thepade, S.D. High-quality image multi-focus fusion to address ringing and blurring artifacts without loss of information. Vis. Comput. 2021, 37, 1–9.
20. Singh, H.; Cristobal, G.; Bueno, G.; Blanco, S.; Singh, S.; Hrisheekesha, P.N.; Mittal, N. Multi-exposure microscopic image fusion-based detail enhancement algorithm. Ultramicroscopy 2022, 236, 113499.
21. Bouzos, O.; Andreadis, I.; Mitianoudis, N. Conditional random field model for robust multi-focus image fusion. IEEE Trans. Image Process. 2019, 28, 5636–5648.
22. Chai, Y.; Li, H.; Li, Z. Multifocus image fusion scheme using focused region detection and multiresolution. Opt. Commun. 2011, 284, 4376–4389.
23. He, K.; Zhou, D.; Zhang, X.; Nie, R. Multi-focus: Focused region finding and multi-scale transform for image fusion. Neurocomputing 2018, 320, 157–170.
24. Singh, S.; Singh, H.; Gehlot, A.; Kaur, J.; Gagandeep, A. IR and visible image fusion using DWT and bilateral filter. Microsyst. Technol. 2022, 28, 1–11.
25. Singh, S.; Mittal, N.; Singh, H. Multifocus image fusion based on multiresolution pyramid and bilateral filter. IETE J. Res. 2020, 68, 2476–2487.
26. Zhang, X. Deep learning-based multi-focus image fusion: A survey and a comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 4819–4838.
27. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207.
28. Amin-Naji, M.; Aghagolzadeh, A.; Ezoji, M. Ensemble of CNN for multi-focus image fusion. Inf. Fusion 2019, 51, 201–214.
29. Tang, H.; Xiao, B.; Li, W.; Wang, G. Pixel convolutional neural network for multi-focus image fusion. Inf. Sci. 2018, 433–434, 125–141.
30. Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118.
31. Li, H.; Wu, X.J. DenseFuse: A fusion approach to infrared and visible images. IEEE Trans. Image Process. 2019, 28, 2614–2623.
32. Ma, X.; Wang, Z.; Hu, S.; Kan, S. Multi-focus image fusion based on multi-scale generative adversarial network. Entropy 2022, 24, 582.
33. Wei, B.; Feng, X.; Wang, K.; Gao, B. The multi-focus-image-fusion method based on convolutional neural network and sparse representation. Entropy 2021, 23, 827.
34. Boykov, Y.; Veksler, O.; Zabih, R. Fast approximate energy minimization via graph cuts. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 1222–1239.
35. Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84.
36. Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-exposure and multi-focus image fusion in gradient domain. J. Circuits Syst. Comput. 2016, 25, 1650123.
37. Zhu, R.; Li, X.; Huang, S.; Zhang, X. Multimodal medical image fusion using adaptive co-occurrence filter-based decomposition optimization model. Bioinformatics 2021, 38, 818–826.
38. Veshki, F.G.; Ouzir, N.; Vorobyov, S.A.; Ollila, E. Multimodal image fusion via coupled feature learning. Signal Process. 2022, 200, 108637.
39. Veshki, F.G.; Vorobyov, S.A. Coupled feature learning via structured convolutional sparse coding for multimodal image fusion. In Proceedings of the ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Singapore, 22–27 May 2022; pp. 2500–2504.
40. Li, B.; Peng, H.; Wang, J. A novel fusion method based on dynamic threshold neural P systems and nonsubsampled contourlet transform for multi-modality medical images. Signal Process. 2021, 178, 107793.
41. Tan, W.; Thitøn, W.; Xiang, P.; Zhou, H. Multi-modal brain image fusion based on multi-level edge-preserving filtering. Biomed. Signal Process. Control 2021, 64, 102280.
42. Li, X.; Zhou, F.; Tan, H. Joint image fusion and denoising via three-layer decomposition and sparse representation. Knowl.-Based Syst. 2021, 224, 107087.
43. Singh, S.; Mittal, N.; Singh, H. Review of various image fusion algorithms and image fusion performance metric. Arch. Comput. Methods Eng. 2021, 28, 3645–3659.
44. Singh, S.; Mittal, N.; Singh, H. Classification of various image fusion algorithms and their performance evaluation metrics. In Computational Intelligence for Machine Learning and Healthcare Informatics; De Gruyter: Berlin, Germany, 2020; pp. 179–198.
45. Hossny, M.; Nahavandi, S.; Creighton, D. Comments on ‘Information measure for performance of image fusion’. Electron. Lett. 2008, 44, 1066–1067.
46. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 308–309.
47. Xydeas, C.S.; Petrovic, V.S. Objective pixel-level image fusion performance measure. In Proceedings of the Sensor Fusion: Architectures, Algorithms, and Applications IV, Orlando, FL, USA, 24–28 April 2000; SPIE: Bellingham, WA, USA, 2000; Volume 4051, pp. 89–98.
48. Yang, C.; Zhang, J.Q.; Wang, X.R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion 2008, 9, 156–160.
49. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432.
50. Wang, Z.; Bovik, A.; Sheikh, H.; Simoncelli, E. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
51. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
Methods | MI [45] | [47] | [46] | Q_Y [48] | Q_CB [49] | SSIM [50] | NIQE [51] | Entropy
---|---|---|---|---|---|---|---|---
ASR [12] | 7.1310 | 0.7510 | 0.7013 | 0.9691 | 0.7264 | 0.8437 | 3.4591 | 7.5217 |
NSCT [14] | 7.1986 | 0.7502 | 0.6960 | 0.9649 | 0.7527 | 0.8432 | 3.4479 | 7.5309 |
GBM [36] | 3.8813 | 0.7172 | 0.6202 | 0.8554 | 0.6159 | 0.7932 | 3.0434 | 7.5684 |
ICA [11] | 6.8769 | 0.7393 | 0.6741 | 0.9512 | 0.7088 | 0.8534 | 3.3915 | 7.5267 |
IFCNN [30] | 7.0400 | 0.7337 | 0.6628 | 0.9522 | 0.7292 | 0.8440 | 3.4623 | 7.5319 |
DenseFuse [31] | 6.2048 | 0.5532 | 0.4694 | 0.8141 | 0.6037 | 0.8651 | 3.3953 | 7.4681 |
dchwt [17] | 6.7298 | 0.7184 | 0.6078 | 0.9202 | 0.6924 | 0.8526 | 3.2976 | 7.5205 |
acof [37] | 7.2675 | 0.5287 | 0.5112 | 0.9475 | 0.6387 | 0.8260 | 4.6501 | 7.4901 |
cfl [38] | 5.6254 | 0.6576 | 0.5746 | 0.8827 | 0.6323 | 0.8158 | 3.4033 | 7.5734 |
ConvCFL [39] | 5.9742 | 0.6916 | 0.5864 | 0.8869 | 0.6643 | 0.8396 | 3.7099 | 7.5581 |
DTNP [40] | 6.7854 | 0.7431 | 0.6779 | 0.9566 | 0.7347 | 0.8390 | 3.4198 | 7.5298 |
mlcf [41] | 6.4414 | 0.5377 | 0.5147 | 0.8593 | 0.6259 | 0.8564 | 3.8699 | 7.4906 |
joint [42] | 6.9991 | 0.7435 | 0.6970 | 0.9621 | 0.7176 | 0.8426 | 3.3935 | 7.5200 |
CRFGuided | 7.3639 | 0.7534 | 0.7143 | 0.9851 | 0.7557 | 0.8601 | 3.0336 | 7.5697 |
Methods | MI [45] | [47] | [46] | Q_Y [48] | Q_CB [49] | SSIM [50] | NIQE [51] | Entropy
---|---|---|---|---|---|---|---|---
ASR [12] | 6.3790 | 0.7192 | 0.6721 | 0.9541 | 0.7057 | 0.8150 | 5.5111 | 7.3262 |
NSCT [14] | 6.2947 | 0.7074 | 0.6593 | 0.9439 | 0.7284 | 0.8161 | 5.3080 | 7.3451 |
GBM [36] | 3.5292 | 0.6729 | 0.5826 | 0.8275 | 0.6005 | 0.7503 | 5.0053 | 7.5298 |
ICA [11] | 6.0174 | 0.6945 | 0.6507 | 0.9313 | 0.6996 | 0.8302 | 5.2144 | 7.3449 |
IFCNN [30] | 5.9641 | 0.6743 | 0.6074 | 0.9118 | 0.6725 | 0.8230 | 5.4436 | 7.3435 |
DenseFuse [31] | 6.0467 | 0.6139 | 0.5798 | 0.8517 | 0.6275 | 0.8351 | 5.2584 | 7.3739 |
dchwt [17] | 5.9965 | 0.6781 | 0.5810 | 0.8997 | 0.6752 | 0.8244 | 4.9713 | 7.3396 |
acof [37] | 6.5748 | 0.5594 | 0.5543 | 0.8691 | 0.6183 | 0.8098 | 5.1625 | 7.3088 |
cfl [38] | 4.8158 | 0.5985 | 0.5327 | 0.8548 | 0.6138 | 0.7966 | 5.5156 | 7.4403 |
ConvCFL [39] | 5.3014 | 0.6510 | 0.5619 | 0.8640 | 0.6558 | 0.8234 | 5.5023 | 7.3895 |
DTNP [40] | 6.0911 | 0.6966 | 0.6357 | 0.9296 | 0.7056 | 0.8119 | 5.2817 | 7.3496 |
mlcf [41] | 6.3294 | 0.5912 | 0.5890 | 0.9274 | 0.6594 | 0.8040 | 5.2670 | 7.3176 |
joint [42] | 6.6541 | 0.7212 | 0.6775 | 0.9553 | 0.7234 | 0.8102 | 5.4543 | 7.3239 |
CRFGuided | 6.6740 | 0.7290 | 0.6903 | 0.9798 | 0.7337 | 0.8356 | 5.0001 | 7.3928 |