Fire Segmentation with an Optimized Weighted Image Fusion Method
Abstract
1. Introduction
2. Overview of the Proposed Segmentation Framework
2.1. Introduction
2.2. Presentation of the Image Fusion Methods Used
2.2.1. GFCE Fusion-Based Method
2.2.2. LatLRR Fusion-Based Method
2.3. Review of the Fusion Evaluation Criteria Used
2.4. Data Presentation
2.5. Fusion Experimental Results and Discussions
3. Improvement of LatLRR Fusion Method with Optimal Weighting of the Visible Source Image
3.1. Introduction
3.2. Estimation of the Optimal Weight α with the Least Mean Squares Method
4. Segmentation of Fire Images from the Obtained Fused Images Using a Majority-Voting Approach
4.1. Introduction
4.2. Review of the Segmentation Method Using Majority Voting
4.3. Training Data
4.4. Segmentation Results
4.4.1. Presentation of the Segmentation Evaluation Criteria: Accuracy, Precision, Specificity, Recall, F1 Score, and IoU
- TP (True Positive): cases where the assigned class is positive and the actual (ground-truth) value is also positive.
- TN (True Negative): cases where the assigned class is negative and the actual value is also negative.
- FP (False Positive): cases where the assigned class is positive but the actual value is negative.
- FN (False Negative): cases where the assigned class is negative but the actual value is positive.
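To make these definitions concrete, the following minimal sketch (in Python with NumPy; the function name `segmentation_metrics` is ours, not from the paper) computes the criteria of this section from a predicted binary mask and its ground truth. Degenerate cases, such as an empty mask that makes a denominator zero, are deliberately left unhandled.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Evaluation criteria of Section 4.4.1 from two boolean fire masks."""
    tp = np.sum(pred & truth)    # fire pixels correctly labeled fire
    tn = np.sum(~pred & ~truth)  # background correctly labeled background
    fp = np.sum(pred & ~truth)   # background wrongly labeled fire
    fn = np.sum(~pred & truth)   # fire wrongly labeled background

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "Accuracy": (tp + tn) / (tp + tn + fp + fn),
        "Precision": precision,
        "Specificity": tn / (tn + fp),
        "Recall": recall,
        "F1 score": 2 * precision * recall / (precision + recall),
        "IoU": tp / (tp + fp + fn),
    }
```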
4.4.2. Details and Discussion of the Obtained Segmentation Results
5. Conclusions and Perspectives
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- European Science Technology Advisory Group (E-STAG). Evolving Risk of Wildfires in Europe: The Changing Nature of Wildfire Risk Calls for a Shift in Policy Focus from Suppression to Prevention; Rossi, J.-L., Komac, B., Migliorin, M., Schwarze, R., Sigmund, Z., Awad, C., Chatelon, F., Goldammer, J.G., Marcelli, T., Morvan, D., et al., Eds.; United Nations Office for Disaster Risk Reduction: Brussels, Belgium, 2020.
- Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video Flame and Smoke Based Fire Detection Algorithms: A Literature Review. Fire Technol. 2020, 56, 1943–1980.
- Perez, J. Causes et Consequences of Forest Fires. Available online: https://www.ompe.org/en/causes-et-consequences-of-forest-fires/ (accessed on 16 November 2023).
- National Interagency Fire Center. Statistics. Available online: https://www.nifc.gov/fire-information/statistics (accessed on 15 November 2023).
- Alkhatib, A.A.A. A Review on Forest Fire Detection Techniques. Int. J. Distrib. Sens. Netw. 2014, 10, 597368.
- Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625.
- Enis, A.Ç.; Dimitropoulos, K.; Gouverneur, B.; Grammalidis, N.; Günay, O.; Habiboğlu, Y.H.; Töreyin, B.U.; Verstockt, S. Video fire detection—Review. Digit. Signal Process. 2013, 23, 1827–1843.
- Cao, Y.; Tang, Q.; Xu, S.; Li, F.; Lu, X. QuasiVSD: Efficient dual-frame smoke detection. Neural Comput. Appl. 2022, 34, 8539–8550.
- Cao, Y.; Tang, Q.; Wu, X.; Lu, X. EFFNet: Enhanced Feature Foreground Network for Video Smoke Source Prediction and Detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1820–1833.
- Yang, C.; Pan, Y.; Cao, Y.; Lu, X. CNN-Transformer Hybrid Architecture for Early Fire Detection. In Proceedings of the Artificial Neural Networks and Machine Learning—ICANN 2022, 31st International Conference on Artificial Neural Networks, Bristol, UK, 6–9 September 2022; Part IV; Springer: Berlin/Heidelberg, Germany, 2022; pp. 570–581.
- Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A Review on Early Wildfire Detection from Unmanned Aerial Vehicles Using Deep Learning-Based Computer Vision Algorithms. Signal Process. 2022, 190, 108309.
- Wang, G.; Bai, D.; Lin, H.; Zhou, H.; Qian, J. FireViTNet: A Hybrid Model Integrating ViT and CNNs for Forest Fire Segmentation. Comput. Electron. Agric. 2024, 218, 108722.
- Simes, T.; Pádua, L.; Moutinho, A. Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery. Remote Sens. 2024, 16, 30.
- Ciprián-Sánchez, J.F.; Ochoa-Ruiz, G.; Rossi, L.; Morandini, F. Assessing the Impact of the Loss Function, Architecture and Image Type for Deep Learning-Based Wildfire Segmentation. Appl. Sci. 2021, 11, 7046.
- Vorwerk, P.; Kelleter, J.; Müller, S.; Krause, U. Classification in Early Fire Detection Using Transfer Learning Based on Multi-Sensor Nodes. Proceedings 2024, 97, 20.
- Yuan, C.; Zhang, Y.; Liu, Z. A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Can. J. For. Res. 2015, 45, 783–792.
- Yuan, C.; Liu, Z.; Zhang, Y. Fire detection using infrared images for UAV-based forest fire surveillance. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 567–572.
- Bosch, I.; Gomez, S.; Vergara, L.; Moragues, J. Infrared image processing and its application to forest fire surveillance. In Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK, 5–7 September 2007; pp. 283–288.
- Nemalidinne, S.M.; Gupta, D. Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering. Fire Saf. J. 2018, 101, 84–101.
- Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2016, 33, 100–112.
- Jin, B.; Cruz, L.; Gonçalves, N. Pseudo RGB-D Face Recognition. IEEE Sens. J. 2022, 22, 21780–21794.
- Metwalli, M.R.; Nasr, A.H.; Allah, O.S.F.; El-Rabaie, S. Image fusion based on principal component analysis and high-pass filter. In Proceedings of the International Conference on Computer Engineering Systems, Cairo, Egypt, 14–16 December 2009; pp. 63–70.
- Al-Wassai, F.A.; Kalyankar, N.V.; Al-Zuky, A.A. The IHS transformations-based image fusion. arXiv 2011, arXiv:1107.4396.
- Zhao, M.; Jha, A.; Liu, Q.; Millis, B.A.; Mahadevan-Jansen, A.; Lu, L.; Landman, B.A.; Tyska, M.J.; Huo, Y. Faster Mean-shift: GPU-accelerated clustering for cosine embedding-based cell segmentation and tracking. Med. Image Anal. 2021, 71, 102048.
- Yao, T.; Qu, C.; Liu, Q.; Deng, R.; Tian, Y.; Xu, J.; Jha, A.; Bao, S.; Zhao, M.; Fogo, A.B.; et al. Compound Figure Separation of Biomedical Images with Side Loss. In Deep Generative Models, and Data Augmentation, Labelling, and Imperfections; Springer: Cham, Switzerland, 2021.
- Zheng, Q.; Yang, M.; Yang, J.; Zhang, Q.; Zhang, X. Improvement of Generalization Ability of Deep CNN via Implicit Regularization in Two-Stage Training Process. IEEE Access 2018, 6, 15844–15869.
- Zhou, Z.; Dong, M.; Xie, X.; Gao, Z. Fusion of infrared and visible images for night-vision context enhancement. Appl. Opt. 2016, 55, 6480–6490.
- Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26.
- Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194.
- Ren, K.; Xu, F. Super-resolution images fusion via compressed sensing and low-rank matrix decomposition. Infrared Phys. Technol. 2015, 68, 61–68.
- Lu, X.; Zhang, B.; Zhao, Y.; Liu, H.; Pei, H. The infrared and visible image fusion algorithm based on target separation and sparse representation. Infrared Phys. Technol. 2014, 67, 397–407.
- Zhao, C.; Guo, Y.; Wang, Y. A fast fusion scheme for infrared and visible light images in NSCT domain. Infrared Phys. Technol. 2015, 72, 266–275.
- Guo, K.; Li, X.; Zang, H.; Fan, T. Multi-modal medical image fusion based on fusionnet in yiq color space. Entropy 2020, 22, 1423.
- Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356.
- Zhao, Y.; Fu, G.; Wang, H.; Zhang, S. The fusion of unmatched infrared and visible images based on generative adversarial networks. Math. Probl. Eng. 2020, 2020, 3739040.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Xiang, T.; Yan, L.; Gao, R. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. Infrared Phys. Technol. 2015, 69, 53–61.
- Zhan, L.; Zhuang, Y.; Huang, L. Infrared and visible images fusion method based on discrete wavelet transform. J. Comput. 2017, 28, 57–71.
- Sun, C.; Zhang, C.; Xiong, N. Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review. Electronics 2020, 9, 2162.
- Kogan, F.; Fan, A.P.; Gold, G.E. Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease. Quant. Imaging Med. Surg. 2016, 6, 756.
- Gao, S.; Cheng, Y.; Zhao, Y. Method of visual and infrared fusion for moving object detection. Opt. Lett. 2013, 38, 1981–1983.
- Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region-based image fusion methods. Inf. Fusion 2019, 48, 119–132.
- Aslantas, V.; Bendes, E. A new image quality metric for image fusion: The sum of the correlations of differences. AEU—Int. J. Electron. Commun. 2015, 69, 1890–1896.
- He, K.; Zhou, D.; Zhang, X.; Nie, R.; Wang, Q.; Jin, X. Infrared and visible image fusion based on target extraction in the nonsubsampled contourlet transform domain. J. Appl. Remote Sens. 2017, 11, 015011.
- Li, S.; Yin, H.; Fang, L. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. Biomed. Eng. 2012, 59, 3450–3459.
- Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A Non-Reference Image Fusion Metric Based on Mutual Information of Image Features. Comput. Electr. Eng. 2011, 37, 744–756.
- Wang, W.; He, J.; Liu, H.; Yuan, W. MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer. Sensors 2024, 24, 4056.
- Petrovic, V.S.; Xydeas, C.S. Objective evaluation of signal-level image fusion performance. Opt. Eng. 2005, 44, 087003.
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
- Tlig, L.; Bouchouicha, M.; Tlig, M.; Sayadi, M.; Moreau, E. A Fast Segmentation Method for Fire Forest Images Based on Multiscale Transform and PCA. Sensors 2020, 20, 6429.
- Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics 2021, 10, 566.
Used Evaluation Criteria | Definition and Formula |
---|---|
SSIM: mean of the structural similarity index measure between the fused and source images [31,32,33] | $\mathrm{SSIM}(A,F)=\frac{(2\mu_A\mu_F+C_1)(2\sigma_{AF}+C_2)}{(\mu_A^2+\mu_F^2+C_1)(\sigma_A^2+\sigma_F^2+C_2)}$, where $A$ is the reference (source) image and $F$ the fused image; $\mu_A$, $\mu_F$, $\sigma_A$, $\sigma_F$, and $\sigma_{AF}$ are the local means, standard deviations, and covariance, and the constants $C_1$ and $C_2$ are used to stabilize the division. The values obtained with each source image are averaged. |
FMI: feature mutual information between the fused image and the source images [34,35] | $\mathrm{FMI}=\mathrm{MI}_{AF}+\mathrm{MI}_{BF}$, the sum of the mutual information of image features computed between the fused image $F$ and the source images $A$ and $B$, respectively. |
MCC: mean of the correlation coefficients between the fused and source images [36,37,38,39] | $\mathrm{MCC}=\tfrac{1}{2}(r_{AF}+r_{BF})$, where $r_{XF}$ is the correlation coefficient between source image $X$ and the fused image $F$. |
PSNR: peak signal-to-noise ratio between the fused and source images [40,41,42] | $\mathrm{PSNR}=10\log_{10}\!\left(255^{2}/\mathrm{MSE}\right)$, where MSE is the mean squared error between the fused and source images. |
MSS: multi-scale structural similarity between the fused and source images [43,44] | The SSIM computed at several image scales and combined into a single score. |
PM: Petrovic metric (or edge preservation measure) between the fused and source images [45,46,47] | Gradient-based measure $Q^{AB/F}$ quantifying how much of the edge information of the sources $A$ and $B$ is transferred to the fused image $F$. |
SCD: sum of the correlations of differences between the fused and source images [48,49] | $\mathrm{SCD}=r(F-B,\,A)+r(F-A,\,B)$, with $r(\cdot,\cdot)$ the correlation coefficient. |
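Since the SCD criterion drives the weight optimization of Section 3, a minimal sketch of its computation may be useful. It assumes grayscale images stored as same-shaped NumPy arrays (constant images, for which the correlation is undefined, are not handled); the helper names `corr` and `scd` are ours.

```python
import numpy as np

def corr(x: np.ndarray, y: np.ndarray) -> float:
    """Pearson correlation coefficient between two images."""
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

def scd(fused: np.ndarray, src_a: np.ndarray, src_b: np.ndarray) -> float:
    """Sum of the correlations of differences (SCD) [48,49].

    (fused - src_b) should carry the information contributed by src_a,
    and (fused - src_a) that contributed by src_b; SCD adds the two
    correlations, so a larger value indicates a better fusion.
    """
    f = fused.astype(np.float64)
    a = src_a.astype(np.float64)
    b = src_b.astype(np.float64)
    return corr(f - b, a) + corr(f - a, b)
```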
Sample images from the dataset: the visible image, the corresponding IR image, and the corresponding ground truth are shown in the original article for each capture condition; only the textual information is reproduced here.

Capture Condition | Mean of the Visible Image |
---|---|
Bright daylight period (Image 33) | 126 |
Low-light period (Image 24) | 66 |
Night period (Image 42) | 21 |
For each input pair, the visible source (a), the IR source (b), and the fused results of the GFCE (c) and LatLRR (d) methods are shown in the original article; the fusion evaluation metrics are reproduced below.

Input Pair | Fusion Method | SSIM | FMI | MCC | PSNR | MSS | PM | SCD |
---|---|---|---|---|---|---|---|---|
Image 1 | GFCE | 0.47 | 0.89 | 0.68 | 11.29 | 0.76 | 0.48 | 0.66 |
Image 1 | LatLRR | 0.87 | 0.93 | 0.79 | 19.40 | 0.96 | 0.40 | 0.71 |
Image 2 | GFCE | 0.47 | 0.88 | 0.70 | 12.60 | 0.89 | 0.49 | 0.67 |
Image 2 | LatLRR | 0.86 | 0.91 | 0.78 | 16.05 | 0.94 | 0.48 | 0.68 |
Image 3 | GFCE | 0.63 | 0.88 | 0.72 | 12.85 | 0.86 | 0.49 | 0.45 |
Image 3 | LatLRR | 0.82 | 0.92 | 0.81 | 16.44 | 0.91 | 0.44 | 0.68 |
Image 4 | GFCE | 0.63 | 0.88 | 0.70 | 12.52 | 0.87 | 0.50 | 0.44 |
Image 4 | LatLRR | 0.82 | 0.91 | 0.78 | 15.55 | 0.92 | 0.44 | 0.64 |
Fusion Methods | SSIM | FMI | MCC | PSNR | MSS | PM | SCD |
---|---|---|---|---|---|---|---|
GFCE | 0.64 | 0.90 | 0.71 | 13.23 | 0.85 | 0.44 | 0.70 |
LatLRR | 0.85 | 0.92 | 0.79 | 17.96 | 0.93 | 0.46 | 0.88 |
The SCD criterion was evaluated as a function of the weight α applied to the visible source image; the SCD-versus-α curves are shown in the original article, and the weight retained for each sample image is reported below.

Captured Visible Image | Retained Weight α |
---|---|
Bright daylight period (Image 33), mean of the image = 126 | 0.7 |
Low-light period (Image 24), mean of the image = 66 | 1.0 |
Night period (Image 42), mean of the image = 21 | 1.5 |
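The paper estimates the optimal weight α with a least mean squares method (Section 3.2). As a simpler stand-in, the sketch below finds α by exhaustive grid search, maximizing the SCD of the fused result; the fusion routine is treated as a black-box callable `fuse` (standing in for the LatLRR-based method), and evaluating SCD against the weighted visible image rather than the original one is our assumption.

```python
import numpy as np

def best_alpha(visible, ir, fuse, alphas=np.arange(0.1, 2.01, 0.1)):
    """Grid-search the weight alpha that maximizes the SCD criterion.

    fuse(a, b) is any two-image fusion routine; the visible source is
    scaled by alpha (and clipped to the 8-bit range) before fusion.
    """
    best_a, best_score = None, -np.inf
    for alpha in alphas:
        weighted = np.clip(alpha * visible.astype(np.float64), 0, 255)
        fused = fuse(weighted, ir)
        score = scd(fused, weighted, ir)  # scd() as sketched in Section 2.3
        if score > best_score:
            best_a, best_score = alpha, score
    return best_a, best_score
```

The retained weights in the table above follow the intuition such a search would encode: the darker the visible image, the larger the weight it receives before fusion.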
| Without Optimal Weighting of the Visible Image | With Optimal Weighting of the Visible Image |
---|---|---|
Mean SCD fusion criterion value | 0.88 | 0.93 |
The visible test images (a) and the corresponding segmentation masks are shown in the original article. For each of the four test images, the table below compares the segmentation of (b) the visible image only, (c) the IR image only, (d) the image fused with classical LatLRR, and (e) the image fused with LatLRR after optimal weighting of the visible image.

Test Image | Criterion | (b) Visible Only | (c) IR Only | (d) Classical LatLRR | (e) Weighted LatLRR |
---|---|---|---|---|---|
Image 1 | IoU | 89.34% | 87.43% | 92.22% | 96.88% |
| Accuracy | 99.78% | 99.74% | 99.83% | 99.94% |
| Precision | 96.85% | 97.25% | 92.22% | 96.88% |
| Specificity | 99.94% | 99.95% | 99.83% | 99.93% |
| Recall | 92.02% | 89.64% | 94.05% | 97.64% |
| F1 score | 94.37% | 93.29% | 95.95% | 98.41% |
Image 2 | IoU | 88.42% | 84.21% | 93.02% | 95.66% |
| Accuracy | 99.36% | 98.98% | 99.62% | 99.76% |
| Precision | 99.38% | 84.40% | 99.14% | 99.08% |
| Specificity | 99.97% | 98.93% | 99.95% | 99.95% |
| Recall | 88.92% | 99.72% | 93.77% | 96.51% |
| F1 score | 93.86% | 91.43% | 96.38% | 97.78% |
Image 3 | IoU | 86.45% | 83.40% | 89.51% | 90.68% |
| Accuracy | 99.11% | 98.93% | 99.21% | 99.40% |
| Precision | 88.32% | 89.51% | 89.41% | 90.94% |
| Specificity | 99.20% | 99.33% | 99.28% | 99.39% |
| Recall | 97.61% | 92.43% | 98.06% | 99.68% |
| F1 score | 92.73% | 90.95% | 93.53% | 95.11% |
Image 4 | IoU | 82.93% | 76.93% | 90.86% | 94.16% |
| Accuracy | 98.97% | 98.55% | 99.49% | 99.69% |
| Precision | 86.69% | 82.99% | 94.56% | 99.04% |
| Specificity | 99.19% | 98.96% | 99.69% | 99.95% |
| Recall | 95.03% | 91.33% | 95.87% | 95.03% |
| F1 score | 90.67% | 86.96% | 95.21% | 96.99% |
Segmentation Criteria | Segmentation of Visible Images Only | Segmentation of IR Images Only | Segmentation of the Fused Images with Classical LatLRR | Segmentation of the Fused Images with LatLRR after Optimal Weighting of the Visible Image |
---|---|---|---|---|
IoU | 88.81% | 86.42% | 92.05% | 94.52% |
Accuracy | 99.53% | 99.44% | 99.61% | 99.84% |
Precision | 93.55% | 92.30% | 95.12% | 96.62% |
Specificity | 99.70% | 99.63% | 99.77% | 99.88% |
Recall | 94.41% | 93.06% | 95.84% | 97.44% |
F1 score | 94.06% | 92.74% | 95.75% | 97.09% |
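For completeness, the combination principle recalled in Section 4.2 can be illustrated with a per-pixel majority vote over several candidate masks. This is a generic sketch, not the paper's exact voting scheme, and the function name `majority_vote` is ours.

```python
import numpy as np

def majority_vote(masks):
    """Combine several boolean segmentation masks by per-pixel majority.

    A pixel is labeled fire when strictly more than half of the
    individual masks mark it as fire.
    """
    stack = np.stack([m.astype(np.uint8) for m in masks], axis=0)
    votes = stack.sum(axis=0)      # number of masks voting "fire"
    return votes * 2 > len(masks)  # strict majority, boolean mask
```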