Article

Fire Segmentation with an Optimized Weighted Image Fusion Method

1 Laboratory SIME, Université de Tunis, ENSIT, Av. Taha Hussein, Tunis 1008, Tunisia
2 LIS-CNRS, Université de Toulon, Université Aix-Marseille, 83130 Toulon, France
* Author to whom correspondence should be addressed.
Electronics 2024, 13(16), 3175; https://doi.org/10.3390/electronics13163175
Submission received: 6 July 2024 / Revised: 27 July 2024 / Accepted: 6 August 2024 / Published: 11 August 2024
(This article belongs to the Special Issue Applications of Artificial Intelligence in Image and Video Processing)

Abstract

In recent decades, early fire detection has become a research priority. Since neither visible nor infrared images alone provide clear and complete information, we propose in this work to combine the two modalities with an appropriate fusion technique to improve the quality of fire detection, segmentation, and localization. The visible image is first weighted before being used in the fusion process. The value of the optimal weight is estimated from the mean of the visible image with a second-order polynomial model. The parameters of this model are optimized with the least squares method from the curve of optimal weights according to the mean of visible images. Finally, a majority voting method based on deep learning models is used for segmentation. Experiments assess the framework's performance not only in terms of visual appearance but also across a spectrum of predefined evaluation criteria. They show that the proposed model, which includes an optimized weighted image fusion stage before segmentation, achieves a high Intersection over Union (IoU) score of more than 94%.

1. Introduction

Forest fires are among the most catastrophic natural disasters of recent years. The burning of forests not only contributes to air pollution but also puts people's lives and the lives of wild animals in jeopardy [1]. Each year, a staggering 60,000 to 80,000 forest fires ravage the landscape, damaging between 3 and 10 million hectares. The environmental impact of these fires is highly contingent on their scale and frequency, but what remains consistent is the diverse array of causes behind these destructive blazes. In the past, forest fires were predominantly a natural occurrence, primarily ignited by uncommon events like volcanic eruptions or earthquakes, and typically confined to highly specific geographic regions [2]. Today, natural causes are far rarer than human activity, whether intentional or not. Human imprudence causes 43% of forest fires (garbage deposits, discarded cigarette butts). Power surges, damage to electricity lines, or military incidents can also start them, as in 2016 and 2017 at the Captieux military base in Gironde (1300 hectares of pines destroyed by fires of military origin) [3]. Drawing from comprehensive research conducted by the National Interagency Fire Center (NIFC) in the United States of America, a concerning trend emerges: over the decade from 2010 to 2019, an alarming annual average of 51,296 fires raged across the nation's landscapes. These data underscore the urgent need for proactive measures and strategies to mitigate the escalating impact of forest fires on both the natural environment and the economy [4]. As a result of these destructive impacts, various fire detection methods have been developed. In earlier times, the detection of forest fires relied on the watchful eyes of human observers or the strategic placement of watchtowers [5]; however, these methods involved considerable risk and physical effort. Low temporal and spatial resolution remain an issue for satellite-based remote sensing technologies [6,7]. These worrying figures encourage researchers to seek innovative strategies for the early detection and control of fires.
Video surveillance has become the primary method for capturing images of forest fires. Forest fire detection methods have evolved into three primary categories: ground-based, aviation-based, and satellite-based detection. Recent advancements in technology have significantly improved both hardware and software. Deep learning techniques, including Convolutional Neural Networks (CNNs) and hybrid models [8,9,10], are now widely used for precise segmentation, often enhanced by transfer learning [11,12]. Multi-spectral and hyperspectral imaging has increased accuracy in challenging conditions [13], while Generative Adversarial Networks (GANs) create robust training datasets through data augmentation and synthesis [14]. Sensor fusion, which integrates data from optical and thermal cameras, enhances detection capabilities [15]. A notable trend is the fusion of visible and infrared images as a pre-segmentation step, combining information from both spectra to improve the accuracy and robustness of flame detection. Infrared images are particularly useful in low-visibility conditions like smoke or night-time, while visible images provide detailed spatial information. This combined approach allows segmentation algorithms to more accurately identify flames in diverse conditions [16]. The choice between visible and infrared systems depends on the specific requirements of the fire detection task and environmental conditions. Visible systems, which capture wavelengths perceivable by the human eye, are effective during daylight hours and for detecting open flames. In contrast, infrared systems detect heat emissions and variations, excelling in low-light conditions, during the night, or when smoke obscures visibility. Infrared cameras can identify fires by their heat signatures even before visible flames emerge, enhancing their effectiveness in swiftly identifying potential fire outbreaks [17,18]. In some situations, a combination of both systems, known as multi-spectral or fused imaging, may be employed to harness the strengths of each, providing a comprehensive and robust fire detection solution. This adaptability underscores the dynamic nature of vision-based fire detection and its capacity to meet the diverse challenges posed by different scenarios, lighting conditions, and environmental factors [19]. An advanced and highly effective technique in the field of fire detection involves the fusion of infrared and visible images, representing a method that goes beyond mere image combination. This sophisticated approach leverages the unique strengths of both infrared and visible imaging to create a synergistic process that yields images of exceptional robustness and informativeness. Image fusion has several uses in pattern recognition, remote sensing, medical image processing, and current military technology. The capacity of the human visual system to detect and identify targets can be considerably enhanced by the fusion of visible and infrared images. Visible images have rich appearance information; however, infrared images lack texture and detail. Infrared images reflect the heat radiation produced by objects and are less influenced by illumination fluctuations or artifacts, allowing for night-time target recognition. Infrared images have poorer spatial resolution than visible ones. As a result, the fusion of thermal radiation data with texture detail information within a single image yields a powerful capability for automated target detection and precise target localization. 
In the contemporary landscape of image fusion techniques that combine visible and infrared imagery, there are four broad categories that provide diverse approaches to this process. These categories encompass sparse representation, multi-scale transformation, saliency-based methods, and subspace methods. Each of these approaches contributes uniquely to the field, offering a range of tools and methodologies to enhance the accuracy and efficacy of image fusion across various applications, including the vital domain of fire detection and surveillance [20,21,22]. In the latent low-rank representation (LatLRR) technique, decomposing source images at multiple scales enhances resistance to noise and anomalies. The GFCE approach allows for the incorporation of the most pertinent IR spectral characteristics into the visible image.
LatLRR is a powerful method for image fusion because it can handle multiple sources, different types of data, noise, blur, and missing data, and it has been found to outperform traditional methods in image fusion tasks. Traditional methods such as IHS and PCA [23,24] may not be suitable for different types of images, may fail to effectively fuse images with different characteristics, and do not fully exploit the spectral information of the sources; the resulting fused image may therefore lack information and may not adapt to changing conditions, such as changes in lighting or image resolution. Image segmentation has many applications in various fields, including medical imaging and computer vision [25,26,27]. In the second stage of our pipeline, the goal of fire image segmentation is to accurately identify and locate fire in images and videos, which can be used for applications such as firefighting and wildfire management. We developed a fire detection model that improves the identification of fire zones by fusing visible and infrared images using the LatLRR-based approach; this fusion strategically combines the unique strengths of the two image types, effectively using thermal data and visual information to significantly improve fire detection accuracy. Following the fusion step, the resulting images undergo a segmentation phase using a majority voting method based on deep learning models, specifically the UNet-ResNet50, UNet-VGG16, and UNet-EfficientNet models. Furthermore, since visible images vary in brightness depending on the acquisition period, we propose to weight the grayscale visible image with an optimal weight, which varies with the brightness mean of the image, before using it in the fusion process with the infrared image. The value of this optimal weight is estimated from the mean of the visible image with a second-order polynomial model, whose parameters are optimized with the least-squares method from the curve of optimal weights versus the means of the visible images. Finally, the segmentation procedure further refines the detection process by precisely delimiting the boundaries of the fire zones. The combination of these techniques represents a significant advancement in fire detection technology, where the fusion of multiple imaging modalities and the application of advanced segmentation models jointly contribute to increased accuracy and efficiency in the early detection and mitigation of potential fires.
This paper is structured as follows: in Section 2, we introduce our proposed framework for fire segmentation; in Section 3, we give a detailed description of our proposed approach to improve the LatLRR fusion; in Section 4, we show how the majority voting mechanism refines the segmentation process. The paper concludes with Section 5.

2. Overview of the Proposed Segmentation Framework

2.1. Introduction

In this section, we begin by introducing the two methods chosen for our study. First, we explore the LatLRR technique developed by Hui et al. [28], which leverages multi-scale decomposition to enhance robustness against noise and anomalies in source images. The second approach is the infrared and visible image fusion method (GFCE) [29]. This technique adeptly integrates crucial IR spectral features into the visible spectrum, ensuring that essential perceptual cues from the background scenery and visible image details are either retained or suitably enhanced. Following this, we detail the evaluation metrics used in our assessment. We conclude with an overview of the Corsican Fire Database, as documented by Toulouse et al. [30]. Figure 1 provides an outline of our proposed fire semantic segmentation structure.

2.2. Presentation of the Used Image Fusion Methods

2.2.1. GFCE Fusion-Based Method

Zhiqiang Zhou and his team [29] introduced a unique algorithm for enhancing the night-vision context by fusing infrared and visible images through a guided filter (see Figure 2). The process begins with the development of an adaptive enhancement technique. Following that, a hybrid multi-scale decomposition method, rooted in the guided filter, is utilized to integrate infrared data into the visible image using a multi-scale fusion strategy. In the concluding step, a perceptual-based regularization parameter is employed to ascertain the proportion of infrared spectral features to be incorporated, which is achieved by assessing the perceptual saliency of both infrared and visible image data. One advantage of a guided filter-based context enhancement method is that it can effectively preserve fine details and edges in an image while also enhancing the overall contrast and brightness.
This can be useful for improving the visibility and interpretability of an image, particularly in situations where the original image is of low quality or has poor lighting conditions. Additionally, it is simple to implement and computationally efficient. One disadvantage of a guided filter-based context enhancement method is that it can introduce artifacts or distortions in the enhanced image, particularly in areas with a high texture or fine details. Additionally, it may not perform as well on images with low contrast or under-exposed areas.
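To make the decomposition step concrete, the sketch below shows a self-guided base/detail split of the kind that underlies GFCE-style hybrid multi-scale decomposition, together with a naive recombination rule. It is only an illustration under stated assumptions, not the authors' implementation: the file names are hypothetical, the single-level split and the max-absolute detail rule are simplifications, and cv2.ximgproc.guidedFilter requires the opencv-contrib-python package.

```python
# Illustrative single-level base/detail decomposition with a guided filter,
# in the spirit of (but much simpler than) the GFCE hybrid multi-scale scheme.
import cv2
import numpy as np

def base_detail_split(img_gray, radius=8, eps=0.04):
    """Split a grayscale image into a smooth base layer and a detail layer."""
    img = img_gray.astype(np.float32) / 255.0
    # Self-guided filtering smooths textures while preserving strong edges.
    base = cv2.ximgproc.guidedFilter(img, img, radius, eps)
    return base, img - base

# Hypothetical file names; any registered visible/IR pair would do.
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)
ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
vis_base, vis_detail = base_detail_split(vis)
ir_base, ir_detail = base_detail_split(ir)
# Naive recombination: average the base layers, keep the stronger detail.
fused = 0.5 * (vis_base + ir_base) + np.where(
    np.abs(vis_detail) > np.abs(ir_detail), vis_detail, ir_detail)
fused = np.clip(fused * 255.0, 0, 255).astype(np.uint8)
```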

2.2.2. LatLRR Fusion-Based Method

In their work, Hui et al. [28] introduced a straightforward yet robust image fusion technique based on LatLRR (see Figure 3). Initially, the input images undergo decomposition into low-rank parts, denoted $I_{C\_lrr}$ (representing the global structure), and saliency parts, denoted $I_{C\_s}$ (representing the local structure), using LatLRR, with $C \in \{1, 2\}$ indexing the two source images. The researchers utilized a weighted-average fusion approach for the low-rank parts and a sum strategy for the saliency parts, which results in the fused low-rank part $F_{lrr}$ and the fused saliency part $F_{s}$. The ultimate fused image is the sum of $F_{lrr}$ and $F_{s}$.
LatLRR has several advantages. One advantage is that LatLRR can effectively capture the underlying low-dimensional structure of the data. This allows the fusion of infrared and visible images to be performed in a compact and efficient manner without the need for excessive computational resources or large amounts of training data. Additionally, LatLRR can be used to perform feature extraction and dimensionality reduction, which can be useful for image classification and detection tasks. Another advantage of LatLRR is that it can preserve the spectral and spatial information of the original images, which can be useful for applications such as target recognition and scene analysis. One disadvantage of fusion using LatLRR is that it can be sensitive to the initial estimates of the low-rank and sparse matrices, which can affect the final fused image. Another disadvantage is that it requires many images to produce a good result, and it may not be suitable for small datasets. Finally, it may be sensitive to the choice of parameters, such as the rank of the low-rank matrix and the sparsity constraint, and finding the optimal values for these parameters can be challenging.
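The combination stage described above can be summarized in a few lines. The sketch below assumes the LatLRR decomposition of each source image has already been computed (the low-rank part capturing global structure and the saliency part capturing local structure); the decomposition itself, which requires solving a low-rank optimization problem, is deliberately left out.

```python
# Combination stage of the LatLRR-based fusion: weighted average of the
# low-rank parts, sum of the saliency parts, then the sum of both results.
import numpy as np

def fuse_latlrr_parts(vis_lrr, vis_s, ir_lrr, ir_s, w_vis=0.5):
    f_lrr = w_vis * vis_lrr + (1.0 - w_vis) * ir_lrr   # global structure
    f_s = vis_s + ir_s                                  # local, salient structure
    return np.clip(f_lrr + f_s, 0.0, 1.0)               # final fused image
```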

2.3. Reminder of the Fusion Evaluation Criteria Used

The quality of the fused images reflects the success of the fusion methods; consequently, evaluating this quality is a crucial issue in this area. It can be assessed using either subjective or objective evaluations. Most subjective evaluation techniques rely on human intervention, which is slow, not feasible in real time, costly, and difficult to replicate. As a result, objective and automatic evaluation metrics are necessary. Although these metrics take a variety of approaches, they all aim to imitate human assessment as precisely as possible, and various quality metrics have been developed in the literature. Indicators such as the standard deviation and the spatial frequency estimate the quality of the fused image alone, typically by evaluating its intensity dispersion. However, the source images also affect the quality of the fused image, so such indicators are typically unable to offer sufficient information about the fusion quality. Table 1 outlines the objective and commonly used fusion evaluation metrics; a higher value of each criterion indicates superior fusion performance.
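Because the SCD criterion plays a central role later when the visible-image weight is tuned, a small sketch of it is given below. It follows the usual definition of the sum of the correlations of differences (the difference between the fused image and one source should correlate with the other source); the helper names are ours, not from the original paper.

```python
# Sketch of the SCD (sum of the correlations of differences) fusion criterion.
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient between two same-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum() + 1e-12))

def scd(fused, src1, src2):
    d1 = fused - src2   # contribution attributed to src1
    d2 = fused - src1   # contribution attributed to src2
    return corr2(d1, src1) + corr2(d2, src2)
```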

2.4. Data Presentation

For this study, we made use of the visible-infrared image pairs sourced from the Corsican Fire Database. We carried out a series of experiments to evaluate the efficacy of our fusion technique, analyzing 640 reference image pairs from this database. Each set in the collection comprises three related images: a visible-domain image, its ground truth manually curated by a specialist, and a corresponding infrared image acquired concurrently. Table 2 displays a few such image trios from the dataset. These images were captured by the UMR CNRS 6134 SPE team of the University of Corsica and their collaborators [30], including researchers, foresters, civil security personnel, and firefighters, during their studies on fire spread, controlled burns, and actual forest fires.
These images were captured across diverse locations, at varying times of the day, from multiple perspectives, and under different environmental conditions and lighting intensities. The multi-modal images sourced directly from the JAI AD-080GE camera (JAI A/S, Skovlunde, Denmark) are not in alignment, primarily because the visible and NIR sensors do not perfectly overlap. To bring these multi-modal images into alignment within the database, an image registration process using a homography matrix transform was employed. All images in the database are stored in a lossless PNG format.
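For readers who want to reproduce a similar alignment, the sketch below warps a visible frame into the geometry of the NIR sensor with a homography, which is the kind of registration applied to the database; the 3x3 matrix and file names here are purely hypothetical, since the actual transform parameters are not published in this paper.

```python
# Illustrative homography-based registration of a visible image onto the
# infrared image grid (hypothetical homography and file names).
import cv2
import numpy as np

H = np.array([[1.01, 0.00, -4.2],   # hypothetical visible -> NIR homography
              [0.00, 1.01, -3.1],
              [0.00, 0.00, 1.0]], dtype=np.float64)

vis = cv2.imread("visible.png")
ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
h, w = ir.shape[:2]
vis_registered = cv2.warpPerspective(vis, H, (w, h))   # dsize is (width, height)
```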

2.5. Fusion Experimental Results and Discussions

In this section, we assess the two chosen fusion methods. For this evaluation, we used 640 image pairs, with Table 3a,b showcasing four examples. When VIS/IR images are fused, the resulting fire images closely resemble the VIS image. To evaluate the effectiveness of the fusion methods, we analyzed the performance metrics presented in Table 1. The fused images produced by both methods are illustrated in Table 3c,d.
The results presented in Table 4 show that LatLRR consistently outperforms GFCE across all evaluated criteria. This indicates that LatLRR is more effective at maintaining structural integrity, preserving feature information, and producing higher-quality fused images. These advantages make LatLRR the better choice for applications requiring precise and reliable image fusion, such as forest fire detection and segmentation.

3. Improvement of LatLRR Fusion Method with Optimal Weighting of the Visible Source Image

3.1. Introduction

Since visible images vary in brightness depending on the acquisition period, we propose weighting the grayscale visible image with a weight, denoted α, before using it in the fusion process with the infrared image. The value of this weight affects the fusion result, and we chose the SCD criterion to quantify this effect. For each of the 640 visible images, the SCD criterion was plotted versus the applied weight, and the optimal value of the weight was determined. This operation is summarized in the framework of Figure 4. Some examples of visible images captured in different periods of the day, together with the plots of their SCD criterion versus the applied weight α and the resulting optimal value of α, are presented in Table 5. Three periods of the day are considered: the bright daylight period, the low-light period, and the night period. We observe that the optimal value of α depends on the brightness of the visible image.
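One plausible reading of this search, sketched below, sweeps candidate weights, fuses the weighted visible image with the infrared image, and keeps the weight maximizing the SCD criterion. The sweep range and step are assumptions, `fuse_latlrr` stands for the LatLRR fusion of Section 2, and `scd` is the criterion sketched in Section 2.3.

```python
# Per-image search for the optimal visible-image weight alpha based on SCD.
import numpy as np

def optimal_alpha(vis_gray, ir_gray, fuse_latlrr, alphas=np.arange(0.1, 2.01, 0.1)):
    scores = []
    for a in alphas:
        weighted_vis = np.clip(a * vis_gray.astype(np.float64), 0, 255)
        fused = fuse_latlrr(weighted_vis, ir_gray)        # LatLRR fusion (Section 2)
        scores.append(scd(fused, weighted_vis, ir_gray))  # SCD sketched in Section 2.3
    best = int(np.argmax(scores))
    return float(alphas[best]), scores
```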

3.2. Estimation of the Optimal Weight α with the Least Squares Method

In Figure 5, we plot all the obtained optimal weights versus the mean of the corresponding visible image. The shape of this curve is modeled by a polynomial function of order two. We therefore propose estimating the optimal value of the weight α with a second-order polynomial model whose parameters are optimized with the least squares method from the curve of optimal weights with respect to the mean values of the visible images. The second-order polynomial model can be expressed as $\alpha_{opt} = a\mu^{2} + b\mu + c$, where $\mu$ is the mean value of the visible image. We denote by $x_i$ the mean of the $i$-th visible image, by $y_i$ the corresponding optimal value of α, and by $n$ the number of images used (640); the image index $i$ varies from 1 to $n$. The optimal estimated values of the coefficients $a$, $b$, and $c$ are obtained by solving the following linear system (the normal equations of the least squares fit):
$$
\begin{cases}
a\sum_{i=1}^{n} x_i^{2} + b\sum_{i=1}^{n} x_i + c\,n = \sum_{i=1}^{n} y_i \\
a\sum_{i=1}^{n} x_i^{3} + b\sum_{i=1}^{n} x_i^{2} + c\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} x_i y_i \\
a\sum_{i=1}^{n} x_i^{4} + b\sum_{i=1}^{n} x_i^{3} + c\sum_{i=1}^{n} x_i^{2} = \sum_{i=1}^{n} x_i^{2} y_i
\end{cases}
$$
The obtained results show that the estimated values of the coefficients a, b, and c are, respectively, a = 24, b = −0.12, and c = 0.02. The use of the obtained optimal weight $\alpha_{opt}$, which depends on the mean gray level of the visible image, to improve the LatLRR fusion technique is described in Figure 6. The improvement of the SCD criterion for all tested images is illustrated in Figure 7; the mean improvement of the SCD criterion is 0.05, as shown in Table 6.
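As a practical note, the normal equations above are exactly what a standard polynomial least-squares routine solves; the sketch below fits the second-order model from the collected (mean, optimal weight) pairs. The function names are ours.

```python
# Least-squares fit of alpha_opt = a*mu^2 + b*mu + c from the database samples.
import numpy as np

def fit_alpha_model(image_means, optimal_alphas):
    # np.polyfit returns the coefficients from the highest degree downwards.
    a, b, c = np.polyfit(np.asarray(image_means, dtype=float),
                         np.asarray(optimal_alphas, dtype=float), deg=2)
    return a, b, c

def predict_alpha(mean_gray, a, b, c):
    return a * mean_gray ** 2 + b * mean_gray + c
```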

4. Segmentation of Fire Images from the Obtained Fused Images Using a Majority Voting Approach

4.1. Introduction

In this study, we combined three UNet backbones using a majority voting scheme for the segmentation task, as illustrated in Figure 8. Among various architectures, the UNet model combined with powerful backbones such as ResNet50, VGG16, and EfficientNet showed promising results. This study explores these combinations, utilizing a majority voting mechanism to assess their effectiveness in fire segmentation tasks.

4.2. Reminder of the Segmentation Method Using Majority Voting

The UNet-ResNet50 model integrates the UNet architecture with a ResNet50 backbone. Known for its deep residual learning framework, ResNet50 serves as the encoder, enhancing the model's ability to learn complex features from fire images. Specific adaptations for fire segmentation include the use of weights pre-trained on ImageNet for faster convergence and the introduction of dropout layers to prevent overfitting. The UNet-VGG16 architecture utilizes VGG16 as its backbone. VGG16's architecture is appreciated for its simplicity and depth, providing robust feature extraction capabilities. The model incorporates batch normalization after each convolutional layer to stabilize learning and improve generalization. Combining UNet with EfficientNet, the UNet-EfficientNet model benefits from EfficientNet's scalable architecture, which balances network depth, width, and resolution, making it computationally efficient. EfficientNet's compound scaling method is fine-tuned to optimize performance on the fire dataset, ensuring effective feature extraction at various scales. The majority voting scheme is a decision-making process that aggregates the predictions of the three models to improve the reliability and accuracy of fire segmentation. For each pixel in the output mask, if at least two models predict the pixel as 'fire', the pixel is classified as 'fire'. If the models do not all agree, the pixel classification is deferred to the model with the highest confidence score for that pixel. This scheme leverages the strengths of each model, mitigating individual weaknesses and improving the overall segmentation performance.
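A pixel-wise implementation of this vote could look like the sketch below. Since the wording of the tie-break rule leaves some room for interpretation, we assume here that a non-unanimous vote is resolved by the model whose predicted fire probability is farthest from the decision threshold; `probs` is an assumed (3, H, W) array of per-model fire probabilities.

```python
# Pixel-wise majority vote over three model outputs, with a confidence-based
# tie-break where the three models do not all agree.
import numpy as np

def majority_vote(probs, threshold=0.5):
    votes = probs >= threshold                 # (3, H, W) per-model fire masks
    n_fire = votes.sum(axis=0)
    mask = n_fire >= 2                         # at least two models say 'fire'
    split = (n_fire == 1) | (n_fire == 2)      # non-unanimous pixels
    confidence = np.abs(probs - threshold)     # distance from the threshold
    best = np.argmax(confidence, axis=0)       # most confident model per pixel
    best_vote = np.take_along_axis(votes, best[None, ...], axis=0)[0]
    return np.where(split, best_vote, mask).astype(bool)
```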

4.3. Data Training

Within this section, we provide a detailed assessment of our selected segmentation strategy, informed by a series of systematically designed experiments. To quantify the effectiveness of this segmentation technique, we employed three models, UNet-ResNet50, UNet-VGG16, and UNet-EfficientNet, followed by a majority voting scheme for the final segmentation. Implemented using the Keras library with TensorFlow as the backend, the models were trained on Google Colab Pro with a powerful GPU. Our dataset was partitioned into 80% for training, 10% for validation, and 10% for testing.
The training process used a batch size of 16 and 150 epochs, optimized with the Adam optimizer and a learning rate of 0.0001. After training each model individually, we applied the majority voting scheme described in Section 4.2 to enhance segmentation accuracy: for each pixel in the output mask, if at least two models predict 'fire', the pixel is classified as 'fire'; if the models do not all agree, the pixel classification is deferred to the model with the highest confidence score. This scheme leverages the strengths of each model, mitigating individual weaknesses and improving overall segmentation performance. The inference time for a single image was about 100 ms.
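For concreteness, the sketch below reproduces this training configuration with one convenient way of building UNet variants on pre-trained backbones, the qubvel segmentation_models package; the exact implementation used by the authors is not specified, so the package choice, the loss function, and the training-array names are assumptions.

```python
# Hedged sketch of the training setup: Keras/TensorFlow, batch size 16,
# 150 epochs, Adam with a learning rate of 1e-4, one UNet per backbone.
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"        # use the tf.keras backend
import segmentation_models as sm
from tensorflow.keras.optimizers import Adam

def build_unet(backbone):
    model = sm.Unet(backbone, encoder_weights="imagenet",
                    classes=1, activation="sigmoid")
    model.compile(optimizer=Adam(learning_rate=1e-4),
                  loss=sm.losses.bce_jaccard_loss,
                  metrics=[sm.metrics.iou_score])
    return model

models = {name: build_unet(name)
          for name in ("resnet50", "vgg16", "efficientnetb0")}
# for model in models.values():                 # train_* arrays are assumed
#     model.fit(train_images, train_masks, batch_size=16, epochs=150,
#               validation_data=(val_images, val_masks))
```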

4.4. Segmentation Results

After devising a semantic segmentation method, the predominant question was, “How can we determine if our segmentation is accurately executed?” In the realm of image segmentation, evaluating performance using metrics is vital to quantifiably assess the classification outcomes [50,51,52].

4.4.1. Presentation of the Segmentation Evaluation Criteria: Accuracy, Precision, Specificity, Recall, F1 Score, and IoU

We classified data into two classes, labeled as positive class and negative class (ground truth). A classification method assigned a label to each data point, either estimated positive (predicted as positive) or estimated negative. The evaluation of the performance of the classification method was linked to the following metrics:
  • TP (True Positive): cases where the assigned class is positive, knowing that the actual value (ground truth) is indeed positive.
  • TN (True Negative): cases where the assigned class is negative, and the actual value is indeed negative.
  • FP (False Positive): cases where the assigned class is positive, but the actual value is negative.
  • FN (False Negative): cases where the assigned class is negative, but the actual value is positive.
Based on these four quantities, we defined the values of the most used classification performance criteria, as shown in Table 7.
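As a sanity check on the scores reported below, the formulas of Table 7 can be computed directly from a predicted binary mask and its ground truth, as in the sketch below (the function name is ours).

```python
# Direct implementation of the Table 7 metrics from binary masks.
import numpy as np

def segmentation_scores(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    tn = np.sum(~pred & ~gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": precision,
        "specificity": tn / (fp + tn),
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "iou": tp / (tp + fp + fn),
    }
```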

4.4.2. Details and Discussion of the Obtained Segmentation Results

The results in Table 8 and Table 9 clearly demonstrate that the segmentation of images fused with LatLRR after optimized image weighting outperforms the other approaches across all evaluated criteria. This method achieves higher IoU, accuracy, precision, specificity, recall, and F1 scores, indicating its effectiveness in accurately identifying fire regions and minimizing false positives and negatives. The experiments show that the proposed model, which includes an optimized weighted image fusion stage before segmentation, provides an Intersection over Union (IoU) score of more than 94% and an F1 score and recall of more than 97%.
The optimized fusion method exploits the strengths of both visible and IR images, providing a robust solution for fire detection. By combining the high-resolution details from visible images with the thermal information from IR images, it offers a comprehensive and reliable approach to fire segmentation.

5. Conclusions and Perspectives

We propose in this work to combine visible and infrared images with an appropriate fusion technique to improve the quality of fire detection, segmentation, and localization. The visible image is weighted before being used in the fusion process. The value of the optimal weight is estimated from the mean of the visible image with a second-order polynomial model, whose parameters we optimized using the least-squares method based on the relationship between the optimal weights and the means of the visible images. Finally, a majority voting method based on deep learning models was used for segmentation. By combining the rich details and sharp contrasts of visible images with the precise thermal information provided by IR images, this optimized fusion method offers a comprehensive and reliable approach for detecting and segmenting forest fires. The results highlight the significant potential of advanced image fusion techniques, together with optimization processes, to enhance the performance of automated forest fire segmentation systems. The substantial improvements observed across all evaluation metrics underscore the importance of integrating multiple image sources and applying suitable optimization techniques to achieve superior segmentation results. This method could thus improve forest fire monitoring and management systems, providing increased accuracy and reliability, which is essential in critical applications such as fire safety and disaster management. More precisely, when a forest fire spreads rapidly, our optimized fusion method allows emergency responders to quickly identify the fire's location and intensity, enabling faster and more effective action.

Author Contributions

Conceptualization, M.T.; methodology, M.T. and M.B.; software, M.T.; validation, M.B. and M.T.; formal analysis, M.T.; investigation, M.T.; resources, M.T. and M.B.; data curation, M.T. and M.B.; writing—original draft preparation, M.T.; writing—review & editing, M.T.; visualization, M.T.; supervision, M.S. and E.M.; project administration, M.S. and E.M.; funding acquisition, M.S. and E.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The Corsican Fire Database is available upon request to the University of Corsica at http://cfdb.univ-corse.fr/ (accessed on 10 March 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. European Science Technology Advisory Group (E-STAG). Evolving Risk of Wildfires in Europe: The Changing Nature of Wildfire Risk Calls for a Shift in Policy Focus from Suppression to Prevention; Rossi, J.-L., Komac, B., Migliorin, M., Schwarze, R., Sigmund, Z., Awad, C., Chatelon, F., Goldammer, J.G., Marcelli, T., Morvan, D., et al., Eds.; United Nations Office for Disaster Risk Reduction: Brussels, Belgium, 2020. [Google Scholar]
  2. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video Flame and Smoke Based Fire Detection Algorithms: A Literature Review. Fire Technol. 2020, 56, 1943–1980. [Google Scholar] [CrossRef]
  3. Perez, J. Causes et Consequences of Forest Fires. Available online: https://www.ompe.org/en/causes-et-consequences-of-forest-fires/ (accessed on 16 November 2023).
  4. National Interagency Fire Center. Statistics. Available online: https://www.nifc.gov/fire-information/statistics (accessed on 15 November 2023).
  5. Alkhatib, A.A.A. A Review on Forest Fire Detection Techniques. Int. J. Distrib. Sens. Netw. 2014, 10, 597368. [Google Scholar] [CrossRef]
  6. Li, P.; Zhao, W. Image fire detection algorithms based on convolutional neural networks. Case Stud. Therm. Eng. 2020, 19, 100625. [Google Scholar] [CrossRef]
  7. Enis, A.Ç.; Dimitropoulos, K.; Gouverneur, B.; Grammalidis, N.; Günay, O.; Habiboğlu, Y.H.; Töreyin, B.U.; Verstockt, S. Video fire detection—Review. Digit. Signal Process. 2013, 23, 1827–1843. [Google Scholar] [CrossRef]
  8. Cao, Y.; Tang, Q.; Xu, S.; Li, F.; Lu, X. QuasiVSD: Efficient dual-frame smoke detection. Neural Comput. Appl. 2022, 34, 8539–8550. [Google Scholar] [CrossRef]
  9. Cao, Y.; Tang, Q.; Wu, X.; Lu, X. EFFNet: Enhanced Feature Foreground Network for Video Smoke Source Prediction and Detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1820–1833. [Google Scholar] [CrossRef]
  10. Yang, C.; Pan, Y.; Cao, Y.; Lu, X. CNN-Transformer Hybrid Architecture for Early Fire Detection. In Proceedings of the Artificial Neural Networks and Machine Learning—ICANN 2022, 31st International Conference on Artificial Neural Networks, Bristol, UK, 6–9 September 2022; Part IV. Springer: Berlin/Heidelberg, Germany, 2022; pp. 570–581. [Google Scholar]
  11. Bouguettaya, A.; Zarzour, H.; Taberkit, A.M.; Kechida, A. A Review on Early Wildfire Detection from Unmanned Aerial Vehicles Using Deep Learning-Based Computer Vision Algorithms. Signal Process. 2022, 190, 108309. [Google Scholar] [CrossRef]
  12. Wang, G.; Bai, D.; Lin, H.; Zhou, H.; Qian, J. FireViTNet: A Hybrid Model Integrating ViT and CNNs for Forest Fire Segmentation. Comput. Electron. Agric. 2024, 218, 108722. [Google Scholar] [CrossRef]
  13. Simes, T.; Pádua, L.; Moutinho, A. Wildfire Burnt Area Severity Classification from UAV-Based RGB and Multispectral Imagery. Remote Sens. 2024, 16, 30. [Google Scholar] [CrossRef]
  14. Ciprián-Sánchez, J.F.; Ochoa-Ruiz, G.; Rossi, L.; Morandini, F. Assessing the Impact of the Loss Function, Architecture and Image Type for Deep Learning-Based Wildfire Segmentation. Appl. Sci. 2021, 11, 7046. [Google Scholar] [CrossRef]
  15. Vorwerk, P.; Kelleter, J.; Müller, S.; Krause, U. Classification in Early Fire Detection Using Transfer Learning Based on Multi-Sensor Nodes. Proceedings 2024, 97, 20. [Google Scholar] [CrossRef]
  16. Yuan, C.; Zhang, Y.; Liu, Z. A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Can. J. For. Res. 2015, 45, 783–792. [Google Scholar] [CrossRef]
  17. Yuan, C.; Liu, Z.; Zhang, Y. Fire detection using infrared images for UAV-based forest fire surveillance. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 567–572. [Google Scholar]
  18. Bosch, I.; Gomez, S.; Vergara, L.; Moragues, J. Infrared image processing and its application to forest fire surveillance. In Proceedings of the 2007 IEEE Conference on Advanced Video and Signal Based Surveillance, London, UK, 5–7 September 2007; pp. 283–288. [Google Scholar]
  19. Nemalidinne, S.M.; Gupta, D. Nonsubsampled contourlet domain visible and infrared image fusion framework for fire detection using pulse coupled neural network and spatial fuzzy clustering. Fire Saf. J. 2018, 101, 84–101. [Google Scholar] [CrossRef]
  20. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178. [Google Scholar] [CrossRef]
  21. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2016, 33, 100–112. [Google Scholar] [CrossRef]
  22. Jin, B.; Cruz, L.; Gonçalves, N. Pseudo RGB-D Face Recognition. IEEE Sens. J. 2022, 22, 21780–21794. [Google Scholar] [CrossRef]
  23. Metwalli, M.R.; Nasr, A.H.; Allah, O.S.F.; El-Rabaie, S. Image fusion based on principal component analysis and high-pass filter. In Proceedings of the International Conference on Computer Engineering Systems, Cairo, Egypt, 14–16 December 2009; pp. 63–70. [Google Scholar]
  24. Al-Wassai, F.A.; Kalyankar, N.V.; Al-Zuky, A.A. The IHS transformations-based image fusion. arXiv 2011, arXiv:1107.4396. [Google Scholar]
  25. Zhao, M.; Jha, A.; Liu, Q.; Millis, B.A.; Mahadevan-Jansen, A.; Lu, L.; Landman, B.A.; Tyska, M.J.; Huo, Y. Faster Mean-shift: GPU-accelerated clustering for cosine embedding-based cell segmentation and tracking. Med. Image Anal. 2021, 71, 102048. [Google Scholar] [CrossRef] [PubMed]
  26. Yao, T.; Qu, C.; Liu, Q.; Deng, R.; Tian, Y.; Xu, J.; Jha, A.; Bao, S.; Zhao, M.; Fogo, A.B.; et al. Compound Figure Separation of Biomedical Images with Side Loss. In Deep Generative Models, and Data Augmentation, Labelling, and Imperfections; Springer: Cham, Switzerland, 2021. [Google Scholar]
  27. Zheng, Q.; Yang, M.; Yang, J.; Zhang, Q.; Zhang, X. Improvement of Generalization Ability of Deep CNN via Implicit Regularization in Two-Stage Training Process. IEEE Access 2018, 6, 15844–15869. [Google Scholar] [CrossRef]
  28. Zhou, Z.; Dong, M.; Xie, X.; Gao, Z. Fusion of infrared and visible images for night-vision context enhancement. Appl. Opt. 2016, 55, 6480–6490. [Google Scholar] [CrossRef] [PubMed]
  29. Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26. [Google Scholar] [CrossRef]
  30. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [Google Scholar] [CrossRef]
  31. Ren, K.; Xu, F. Super-resolution images fusion via compressed sensing and low-rank matrix decomposition. Infrared Phys. Technol. 2015, 68, 61–68. [Google Scholar] [CrossRef]
  32. Lu, X.; Zhang, B.; Zhao, Y.; Liu, H.; Pei, H. The infrared and visible image fusion algorithm based on target separation and sparse representation. Infrared Phys. Technol. 2014, 67, 397–407. [Google Scholar] [CrossRef]
  33. Zhao, C.; Guo, Y.; Wang, Y. A fast fusion scheme for infrared and visible light images in NSCT domain. Infrared Phys. Technol. 2015, 72, 266–275. [Google Scholar] [CrossRef]
  34. Guo, K.; Li, X.; Zang, H.; Fan, T. Multi-modal medical image fusion based on fusionnet in yiq color space. Entropy 2020, 22, 1423. [Google Scholar] [CrossRef] [PubMed]
  35. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef] [PubMed]
  36. Zhao, Y.; Fu, G.; Wang, H.; Zhang, S. The fusion of unmatched infrared and visible images based on generative adversarial networks. Math. Probl. Eng. 2020, 2020, 3739040. [Google Scholar] [CrossRef]
  37. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  38. Xiang, T.; Yan, L.; Gao, R. A fusion algorithm for infrared and visible images based on adaptive dual-channel unit-linking PCNN in NSCT domain. Infrared Phys. Technol. 2015, 69, 53–61. [Google Scholar] [CrossRef]
  39. Zhan, L.; Zhuang, Y.; Huang, L. Infrared and visible images fusion method based on discrete wavelet transform. J. Comput. 2017, 28, 57–71. [Google Scholar] [CrossRef]
  40. Sun, C.; Zhang, C.; Xiong, N. Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review. Electronics 2020, 9, 2162. [Google Scholar] [CrossRef]
  41. Kogan, F.; Fan, A.P.; Gold, G.E. Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease. Quant. Imaging Med. Surg. 2016, 6, 756. [Google Scholar] [CrossRef] [PubMed]
  42. Gao, S.; Cheng, Y.; Zhao, Y. Method of visual and infrared fusion for moving object detection. Opt. Lett. 2013, 38, 1981–1983. [Google Scholar] [CrossRef] [PubMed]
  43. Meher, B.; Agrawal, S.; Panda, R.; Abraham, A. A survey on region-based image fusion methods. Inf. Fusion 2019, 48, 119–132. [Google Scholar] [CrossRef]
  44. Aslantas, V.; Bendes, E. A new image quality metric for image fusion: The sum of the correlations of differences. AEU—Int. J. Electron. Commun. 2015, 69, 1890–1896. [Google Scholar] [CrossRef]
  45. He, K.; Zhou, D.; Zhang, X.; Nie, R.; Wang, Q.; Jin, X. Infrared and visible image fusion based on target extraction in the nonsubsampled contourlet transform domain. J. Appl. Remote Sens. 2017, 11, 015011. [Google Scholar] [CrossRef]
  46. Li, S.; Yin, H.; Fang, L. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. Biomed. Eng. 2012, 59, 3450–3459. [Google Scholar] [CrossRef]
  47. Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A Non-Reference Image Fusion Metric Based on Mutual Information of Image Features. Comput. Electr. Eng. 2011, 37, 744–756. [Google Scholar] [CrossRef]
  48. Wang, W.; He, J.; Liu, H.; Yuan, W. MDC-RHT: Multi-Modal Medical Image Fusion via Multi-Dimensional Dynamic Convolution and Residual Hybrid Transformer. Sensors 2024, 24, 4056. [Google Scholar] [CrossRef] [PubMed]
  49. Petrovic, V.S.; Xydeas, C.S. Objective evaluation of signal-level image fusion performance. Opt. Eng. 2005, 44, 087003. [Google Scholar]
  50. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. Imagenet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
  51. Tlig, L.; Bouchouicha, M.; Tlig, M.; Sayadi, M.; Moreau, E. A Fast Segmentation Method for Fire Forest Images Based on Multiscale Transform and PCA. Sensors 2020, 20, 6429. [Google Scholar] [CrossRef] [PubMed]
  52. Zhao, E.; Liu, Y.; Zhang, J.; Tian, Y. Forest Fire Smoke Recognition Based on Anchor Box Adaptive Generation Method. Electronics 2021, 10, 566. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed segmentation framework.
Figure 2. Overview of the GFCE fusion method.
Figure 3. Overview of the LatLRR fusion method.
Figure 4. Framework of the estimation of the optimal weight depending on the brightness of the visible image, based on the SCD fusion criterion.
Figure 5. Plot of the optimal weight versus the mean of the visible image.
Figure 6. The scheme using the obtained optimized model to improve the LatLRR fusion technique depending on the mean of the gray-level visible image.
Figure 7. Improvement of the SCD criterion for all tested images.
Figure 8. The proposed architecture used for segmentation.
Table 1. Commonly used fusion evaluation criteria, their references, and formulas.

SSIM: mean of the structural similarity index measure between the fused and source images [31,32,33].
$SSIM(x,y) = \left( \frac{2\mu_x \mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1} \right)^{\alpha} \times \left( \frac{2\sigma_x \sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2} \right)^{\beta} \times \left( \frac{\sigma_{xy} + c_3}{\sigma_x \sigma_y + c_3} \right)^{\gamma}$
where $x$ is the reference image, $y$ is the fused image, $\mu_x$ and $\mu_y$ are the mean values of $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are their variances, and $\sigma_{xy}$ is their covariance. Small constants $c_1$, $c_2$, and $c_3$ prevent division by zero, and the parameters $\alpha$, $\beta$, $\gamma$ adjust the proportions of the three terms.

FMI: feature mutual information including both the fused and source images [34,35].
$FMI = \frac{MI(A,F) + MI(B,F)}{2}$
where $MI(A,F)$ and $MI(B,F)$ are the mutual information between the fused image $F$ and the source images $A$ and $B$, respectively.

MCC: mean of the correlation coefficients between the fused and source images [36,37,38,39].
$MCC = \frac{r_{I,F} + r_{V,F}}{2}$, with
$r_{X,F} = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} (X(i,j) - \bar{X}) (F(i,j) - \bar{F})}{\sqrt{\sum_{i=1}^{M} \sum_{j=1}^{N} (X(i,j) - \bar{X})^2 \, \sum_{i=1}^{M} \sum_{j=1}^{N} (F(i,j) - \bar{F})^2}}$

PSNR: peak signal-to-noise ratio including both the fused and source images [40,41,42].
$PSNR = 10 \log_{10} \left( \frac{r^2}{MSE} \right)$
where $r$ is the correlation coefficient between the fused and source images and MSE is the mean squared error between the fused and source images.

MSS: multi-scale structural similarity between the fused and source images [43,44].
$MSSIM(X,Y) = \frac{1}{M} \sum_{j=1}^{M} SSIM(x_j, y_j)$

PM: Petrovic metric (edge preservation measure) including both the fused and source images [45,46,47].
$Q^{AB/F} = \frac{\sum_{n=1}^{N} \sum_{m=1}^{M} \left( Q^{AF}(n,m) w^{A}(n,m) + Q^{BF}(n,m) w^{B}(n,m) \right)}{\sum_{i=1}^{N} \sum_{j=1}^{M} \left( w^{A}(i,j) + w^{B}(i,j) \right)}$

SCD: sum of the correlations of differences between the fused and source images [48,49].
$SCD = r(D_1, S_1) + r(D_2, S_2)$, with
$r(D_k, S_k) = \frac{\sum_{i} \sum_{j} (D_k(i,j) - \bar{D_k}) (S_k(i,j) - \bar{S_k})}{\sqrt{\sum_{i} \sum_{j} (D_k(i,j) - \bar{D_k})^2 \, \sum_{i} \sum_{j} (S_k(i,j) - \bar{S_k})^2}}$
Table 2. Examples of visible images captured in different periods of the day and their corresponding IR and ground truth images.

Bright daylight period (Image 33): mean of the visible image = 126.
Low-light period (Image 24): mean of the visible image = 66.
Night period (Image 42): mean of the visible image = 21.
Table 3. Examples of some fusion results and criteria. (a) Visible images, (b) IR images, (c) result of the GFCE method, and (d) result of the LatLRR method.

Image 1. GFCE: SSIM = 0.47, FMI = 0.89, MCC = 0.68, PSNR = 11.29, MSS = 0.76, PM = 0.48, SCD = 0.66. LatLRR: SSIM = 0.87, FMI = 0.93, MCC = 0.79, PSNR = 19.40, MSS = 0.96, PM = 0.40, SCD = 0.71.
Image 2. GFCE: SSIM = 0.47, FMI = 0.88, MCC = 0.70, PSNR = 12.6, MSS = 0.89, PM = 0.49, SCD = 0.67. LatLRR: SSIM = 0.86, FMI = 0.91, MCC = 0.78, PSNR = 16.05, MSS = 0.94, PM = 0.48, SCD = 0.68.
Image 3. GFCE: SSIM = 0.63, FMI = 0.88, MCC = 0.72, PSNR = 12.85, MSS = 0.86, PM = 0.49, SCD = 0.45. LatLRR: SSIM = 0.82, FMI = 0.92, MCC = 0.81, PSNR = 16.44, MSS = 0.91, PM = 0.44, SCD = 0.68.
Image 4. GFCE: SSIM = 0.63, FMI = 0.88, MCC = 0.7, PSNR = 12.52, MSS = 0.87, PM = 0.5, SCD = 0.44. LatLRR: SSIM = 0.82, FMI = 0.91, MCC = 0.78, PSNR = 15.55, MSS = 0.92, PM = 0.44, SCD = 0.64.
Table 4. Averaged values of each criterion for each fusion approach over the whole database.

Fusion Method   SSIM   FMI    MCC    PSNR    MSS    PM     SCD
GFCE            0.64   0.90   0.71   13.23   0.85   0.44   0.70
LatLRR          0.85   0.92   0.79   17.96   0.93   0.46   0.88
Table 5. Examples of visible images captured in different day periods and the plot of their SCD criterion versus the weight α.

Bright daylight period (Image 33): mean of the image = 126, optimal α = 0.7.
Low-light period (Image 24): mean of the image = 66, optimal α = 1.
Night period (Image 42): mean of the image = 21, optimal α = 1.5.
Table 6. Mean of the SCD fusion criterion value over all fused images.

Without visible image optimization weighting: 0.88
With visible image optimization weighting: 0.93
Table 7. Formulas of the used classification performance metrics.

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Specificity = TN / (FP + TN)
Recall = TP / (TP + FN)
F1 Score = 2 × Precision × Recall / (Precision + Recall)
IoU = TP / (TP + FP + FN)
Table 8. Some examples of segmentation results and performances for four images: (a) visible image, (b) segmentation of the visible image only, (c) segmentation of the IR image only, (d) segmentation of the fused image with classical LatLRR, (e) segmentation of the fused image with LatLRR after image optimization weighting. Values are listed in the order (b), (c), (d), (e).

Image 1:
IoU: 89.34%, 87.43%, 92.22%, 96.88%
Accuracy: 99.78%, 99.74%, 99.83%, 99.94%
Precision: 96.85%, 97.25%, 92.22%, 96.88%
Specificity: 99.94%, 99.95%, 99.83%, 99.93%
Recall: 92.02%, 89.64%, 94.05%, 97.64%
F1 score: 94.37%, 93.29%, 95.95%, 98.41%

Image 2:
IoU: 88.42%, 84.21%, 93.02%, 95.66%
Accuracy: 99.36%, 98.98%, 99.62%, 99.76%
Precision: 99.38%, 84.40%, 99.14%, 99.08%
Specificity: 99.97%, 98.93%, 99.95%, 99.95%
Recall: 88.92%, 99.72%, 93.77%, 96.51%
F1 score: 93.86%, 91.43%, 96.38%, 97.78%

Image 3:
IoU: 86.45%, 83.40%, 89.51%, 90.68%
Accuracy: 99.11%, 98.93%, 99.21%, 99.40%
Precision: 88.32%, 89.51%, 89.41%, 90.94%
Specificity: 99.20%, 99.33%, 99.28%, 99.39%
Recall: 97.61%, 92.43%, 98.06%, 99.68%
F1 score: 92.73%, 90.95%, 93.53%, 95.11%

Image 4:
IoU: 82.93%, 76.93%, 90.86%, 94.16%
Accuracy: 98.97%, 98.55%, 99.49%, 99.69%
Precision: 86.69%, 82.99%, 94.56%, 99.04%
Specificity: 99.19%, 98.96%, 99.69%, 99.95%
Recall: 95.03%, 91.33%, 95.87%, 95.03%
F1 score: 90.67%, 86.96%, 95.21%, 96.99%

In the original table, the value displayed in bold denotes the method with the highest score.
Table 9. Mean of the segmentation criteria values computed for all test database images. Values are listed in the order: segmentation of the visible images only / segmentation of the IR images only / segmentation of the fused images with classical LatLRR / segmentation of the fused images with LatLRR after image optimization weighting.

IoU: 88.81%, 86.42%, 92.05%, 94.52%
Accuracy: 99.53%, 99.44%, 99.61%, 99.84%
Precision: 93.55%, 92.30%, 95.12%, 96.62%
Specificity: 99.70%, 99.63%, 99.77%, 99.88%
Recall: 94.41%, 93.06%, 95.84%, 97.44%
F1 score: 94.06%, 92.74%, 95.75%, 97.09%