Automatic Recognition of Indoor Fire and Combustible Material with Material-Auxiliary Fire Dataset
Abstract
1. Introduction
- (1) We present an efficient deep learning semantic segmentation framework based on a dual attention mechanism that combines position attention and channel attention, assigning each pixel both an object class label and a material attribute label.
- (2) For the first time, we simultaneously estimate the fire object and the fire load in indoor scenes, exploring a multi-task learning strategy that captures the correlations between fire burning degree and combustible material statistics. This significantly enhances the segmentation accuracy of fire and combustible material for detailed scene analysis.
- (3) We collect and introduce a new dataset, the Material-Auxiliary Fire Dataset (MAFD), annotated with attribute labels for combustible materials and class labels for fire objects; it provides a benchmark to encourage automatic applications in indoor fire scenes.
2. Literature Review
3. Dual Attention Fire Recognition Methodology
3.1. Architecture of Semantic Segmentation Model
3.2. Attention Modules for Feature Representation
3.2.1. Position Attention Module
- (1) Calculate the similarity matrix between pixels: a similarity matrix of size (N × N), where N = H × W, is obtained through QᵀK, that is, the (N × C) matrix Qᵀ multiplied by the (C × N) matrix K;
- (2) Perform a softmax operation on the similarity matrix to obtain the relative factor with which each pixel affects every other pixel;
- (3) Multiply the similarity matrix S after the softmax with the V matrix, that is, multiply the (C × N) matrix V by the (N × N) matrix S, to obtain the recoded feature representation, whose size is also (C × N); its generation formula is shown in Equation (1). The purpose of multiplying the original matrix by the similarity matrix is to amplify the influence of the pixels similar to a given pixel and suppress the influence of the dissimilar ones, which can also be called a re-encoding operation;
- (4) Perform the reshape operation on the resulting feature matrix to obtain a recoded feature map of size (C × H × W);
- (5) Add the feature map to the features extracted by the upper network to obtain the final output of the position attention module, whose size is still (C × H × W); its generation formula is shown in Equation (2). The scaling factor α is initialized to 0 and gradually adjusts to attain a higher weight. A minimal code sketch of these five steps is given after this list.
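To make the five steps concrete, the following is a minimal PyTorch sketch of a DANet-style position attention module. The 1 × 1 convolutions producing Q, K, and V and the C/8 channel reduction for Q and K follow the public DANet design and are assumptions here, not details confirmed by this excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionAttention(nn.Module):
    """Sketch of a DANet-style position attention module."""
    def __init__(self, in_channels: int):
        super().__init__()
        reduced = max(in_channels // 8, 1)  # channel reduction for Q/K, as in DANet
        self.query_conv = nn.Conv2d(in_channels, reduced, kernel_size=1)
        self.key_conv = nn.Conv2d(in_channels, reduced, kernel_size=1)
        self.value_conv = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.zeros(1))  # scaling factor, initialized to 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.size()
        n = h * w
        # Step (1): similarity matrix S (N x N) = Q^T (N x C') times K (C' x N)
        q = self.query_conv(x).view(b, -1, n).permute(0, 2, 1)  # B x N x C'
        k = self.key_conv(x).view(b, -1, n)                     # B x C' x N
        s = torch.bmm(q, k)                                     # B x N x N
        # Step (2): softmax over the similarity scores
        s = F.softmax(s, dim=-1)
        # Step (3): re-encode V (C x N) with the attention map (Equation (1))
        v = self.value_conv(x).view(b, c, n)                    # B x C x N
        out = torch.bmm(v, s.permute(0, 2, 1))                  # B x C x N
        # Step (4): reshape back to C x H x W
        out = out.view(b, c, h, w)
        # Step (5): scaled residual connection (Equation (2))
        return self.alpha * out + x

# Example: PositionAttention(512)(torch.randn(2, 512, 64, 64)) returns the same shape.
```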
3.2.2. Channel Attention Module
- (1) Calculate the similarity matrix between channels: a similarity matrix of size (C × C) is obtained by multiplying the reshaped (C × N) feature matrix, where N = H × W, by its (N × C) transpose;
- (2) Perform a softmax operation on the similarity matrix to obtain the relative factor with which each channel affects every other channel;
- (3) Multiply the similarity matrix after the softmax with the reshaped feature matrix, that is, the (C × C) matrix multiplied by the (C × N) matrix, to obtain the recoded feature representation, whose size is also (C × N); its generation formula is shown in Equation (3). The purpose of multiplying the original matrix by the similarity matrix is to amplify the influence of similar channels and suppress the influence of dissimilar channels;
- (4) Perform the reshape operation on the resulting feature matrix to obtain a recoded feature map of size (C × H × W);
- (5) Add the feature map to the features extracted by the upper network to obtain the final output of the channel attention module, whose size is still (C × H × W); its generation formula is shown in Equation (4). The scaling factor β is initialized to 0 and incrementally adapts to gain a higher weight. A corresponding code sketch follows this list.
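A matching sketch of the channel attention module, under the same assumptions. Note that the released DANet implementation subtracts the row maximum before the softmax for numerical stability; this sketch follows the plain softmax described in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Sketch of a DANet-style channel attention module."""
    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.zeros(1))  # scaling factor, initialized to 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.size()
        n = h * w
        # Step (1): channel similarity (C x C) = A (C x N) times A^T (N x C)
        a = x.view(b, c, n)                       # B x C x N
        sim = torch.bmm(a, a.permute(0, 2, 1))    # B x C x C
        # Step (2): softmax over the channel similarity scores
        sim = F.softmax(sim, dim=-1)
        # Step (3): re-encode the C x N feature with the channel map (Equation (3))
        out = torch.bmm(sim, a)                   # B x C x N
        # Step (4): reshape back to C x H x W
        out = out.view(b, c, h, w)
        # Step (5): scaled residual connection (Equation (4))
        return self.beta * out + x
```

In DANet, from which this dual attention design is derived, the outputs of the two modules are fused by element-wise summation to produce the final feature representation.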
3.3. BaseNet Selection
4. Experiments and Results
4.1. Experimental Settings and Evaluation Metrics
4.2. Material-Auxiliary Fire Dataset
4.3. Experiment 1: Selection of Optimal Model
4.4. Experiment 2: Visualization Results of the Proposed Model
4.5. Experiment 3: Comparison with State-of-the-Art Methods
5. Conclusions and Discussion
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Zhang, L.; Wang, G.X.; Yuan, T.; Peng, K.M. Research on Indoor Map. Geom. Spat. Inf. Technol. 2013, 43–47.
- Kuti, R.; Zólyomi, G.; László, G.; Hajdu, C.; Környei, L.; Hajdu, F. Examination of Effects of Indoor Fires on Building Structures and People. Heliyon 2023, 9, e12720.
- Kodur, V.; Kumar, P.; Rafi, M.M. Fire Hazard in Buildings: Review, Assessment and Strategies for Improving Fire Safety. PSU Res. Rev. 2020, 4, 1–23.
- Li, S.; Yun, J.; Feng, C.; Gao, Y.; Yang, J.; Sun, G.; Zhang, D. An Indoor Autonomous Inspection and Firefighting Robot Based on SLAM and Flame Image Recognition. Fire 2023, 6, 93.
- Xie, Y.; Zhu, J.; Guo, Y.; You, J.; Feng, D.; Cao, Y. Early Indoor Occluded Fire Detection Based on Firelight Reflection Characteristics. Fire Saf. J. 2022, 128, 103542.
- Wu, X.; Lu, X.; Leung, H. A Video Based Fire Smoke Detection Using Robust AdaBoost. Sensors 2018, 18, 3780.
- Russo, A.U.; Deb, K.; Tista, S.C.; Islam, A. Smoke Detection Method Based on LBP and SVM from Surveillance Camera. In Proceedings of the 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2), Rajshahi, Bangladesh, 8–9 February 2018.
- Wang, H.; Zhang, Y.; Fan, X. Rapid Early Fire Smoke Detection System Using Slope Fitting in Video Image Histogram. Fire Technol. 2020, 56, 695–714.
- Wu, X.; Cao, Y.; Lu, X.; Leung, H. Patchwise Dictionary Learning for Video Forest Fire Smoke Detection in Wavelet Domain. Neural Comput. Appl. 2021, 33, 7965–7977.
- Gagliardi, A.; Saponara, S. AdViSED: Advanced Video SmokE Detection for Real-Time Measurements in Antifire Indoor and Outdoor Systems. Energies 2020, 13, 2098.
- Hossain, F.M.A.; Zhang, Y.M.; Tonima, M.A. Forest Fire Flame and Smoke Detection from UAV-Captured Images Using Fire-Specific Color Features and Multi-Color Space Local Binary Pattern. J. Unmanned Veh. Syst. 2020, 8, 285–309.
- Jia, Y.; Chen, W.; Yang, M.; Wang, L.; Liu, D.; Zhang, Q. Video Smoke Detection with Domain Knowledge and Transfer Learning from Deep Convolutional Neural Networks. Optik 2021, 240, 166947.
- Peng, Y.; Wang, Y. Real-Time Forest Smoke Detection Using Hand-Designed Features and Deep Learning. Comput. Electron. Agric. 2019, 167, 105029.
- Cheng, S.; Ma, J. Smoke Detection and Trend Prediction Method Based on Deeplabv3+ and Generative Adversarial Network. J. Electron. Imaging 2019, 28, 1.
- Yuan, F.; Zhang, L.; Xia, X.; Wan, B.; Huang, Q.; Li, X. Deep Smoke Segmentation. Neurocomputing 2019, 357, 248–260.
- Lin, G.; Zhang, Y.; Xu, G.; Zhang, Q. Smoke Detection on Video Sequences Using 3D Convolutional Neural Networks. Fire Technol. 2019, 55, 1827–1847.
- Li, J.; Zhou, G.; Chen, A.; Wang, Y.; Jiang, J.; Hu, Y.; Lu, C. Adaptive Linear Feature-Reuse Network for Rapid Forest Fire Smoke Detection Model. Ecol. Inform. 2022, 68, 101584.
- Liu, H.; Lei, F.; Tong, C.; Cui, C.; Wu, L. Visual Smoke Detection Based on Ensemble Deep CNNs. Displays 2021, 69, 102020.
- Zhan, J.; Hu, Y.; Zhou, G.; Wang, Y.; Cai, W.; Li, L. A High-Precision Forest Fire Smoke Detection Approach Based on ARGNet. Comput. Electron. Agric. 2022, 196, 106874.
- Hu, Y.; Zhan, J.; Zhou, G.; Chen, A.; Cai, W.; Guo, K.; Hu, Y.; Li, L. Fast Forest Fire Smoke Detection Using MVMNet. Knowl.-Based Syst. 2022, 241, 108219.
- Hosseini, A.; Hashemzadeh, M.; Farajzadeh, N. UFS-Net: A Unified Flame and Smoke Detection Method for Early Detection of Fire in Video Surveillance Applications Using CNNs. J. Comput. Sci. 2022, 61, 101638.
- Khan, S.; Muhammad, K.; Mumtaz, S.; Baik, S.W.; de Albuquerque, V.H.C. Energy-Efficient Deep CNN for Smoke Detection in Foggy IoT Environment. IEEE Internet Things J. 2019, 6, 9237–9245.
- He, L.; Gong, X.; Zhang, S.; Wang, L.; Li, F. Efficient Attention Based Deep Fusion CNN for Smoke Detection in Fog Environment. Neurocomputing 2021, 434, 224–238.
- Muhammad, K.; Khan, S.; Palade, V.; Mehmood, I.; de Albuquerque, V.H.C. Edge Intelligence-Assisted Smoke Detection in Foggy Surveillance Environments. IEEE Trans. Industr. Inform. 2020, 16, 1067–1075.
- Strese, M.; Schuwerk, C.; Iepure, A.; Steinbach, E. Multimodal Feature-Based Surface Material Classification. IEEE Trans. Haptics 2017, 10, 226–239.
- Zhang, H.; Jiang, Z.; Xiong, Q.; Wu, J.; Yuan, T.; Li, G.; Huang, Y.; Ji, D. Gathering Effective Information for Real-Time Material Recognition. IEEE Access 2020, 8, 159511–159529.
- Lee, S.; Lee, D.; Kim, H.-C.; Lee, S. Material Type Recognition of Indoor Scenes via Surface Reflectance Estimation. IEEE Access 2022, 10, 134–143.
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual Attention Network for Scene Segmentation. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Yu, F.; Koltun, V.; Funkhouser, T. Dilated Residual Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Zhuang, F.; Qi, Z.; Duan, K.; Xi, D.; Zhu, Y.; Zhu, H.; Xiong, H.; He, Q. A Comprehensive Survey on Transfer Learning. Proc. IEEE Inst. Electr. Electron. Eng. 2021, 109, 43–76.
- OpenMMLab. MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark. Available online: https://github.com/open-mmlab/mmsegmentation (accessed on 14 March 2023).
- Wilson, A.C.; Roelofs, R.; Stern, M.; Srebro, N.; Recht, B. The Marginal Value of Adaptive Gradient Methods in Machine Learning. arXiv 2017, arXiv:1705.08292.
- Zhou, Y.-C.; Hu, Z.-Z.; Yan, K.-X.; Lin, J.-R. Deep Learning-Based Instance Segmentation for Indoor Fire Load Recognition. IEEE Access 2021, 9, 148771–148782.
- Torralba, A.; Russell, B.C.; Yuen, J. LabelMe: Online Image Annotation and Applications. Proc. IEEE Inst. Electr. Electron. Eng. 2010, 98, 1467–1484.
- Everingham, M.; Van Gool, L.; Williams, C.K.I.; Winn, J.; Zisserman, A. The Pascal Visual Object Classes (VOC) Challenge. Int. J. Comput. Vis. 2010, 88, 303–338.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
- Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. CCNet: Criss-Cross Attention for Semantic Segmentation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019.
- Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Huang, L.; Yuan, Y.; Guo, J.; Zhang, C.; Chen, X.; Wang, J. Interlaced Sparse Self-Attention for Semantic Segmentation. arXiv 2019, arXiv:1907.12273.
- Yuan, Y.; Chen, X.; Wang, J. Object-Contextual Representations for Semantic Segmentation. In Computer Vision—ECCV 2020; Lecture Notes in Computer Science; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 173–190. ISBN 9783030585389.
Setting | Value |
---|---|
Batch size | 1 |
Crop size | 512 |
Momentum | 0.975 |
Initial learning rate | 0.0005 |
Weight decay | 0.0004 |
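For illustration, the settings in the table above map onto an optimizer configuration as follows. This is a sketch under the assumption of a momentum-based SGD optimizer (suggested by the momentum entry and the mmsegmentation toolbox cited in the references); the model here is a placeholder, not the paper's network.

```python
import torch
import torch.nn as nn

# Placeholder module standing in for the DANet-based segmentation network.
model = nn.Conv2d(3, 4, kernel_size=3, padding=1)

# Assumed SGD optimizer configured with the values from the table above.
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.0005,           # initial learning rate
    momentum=0.975,      # momentum
    weight_decay=0.0004  # weight decay
)
```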
Category | Number of Instances |
---|---|
Fire | 3071 |
Fabric | 4055 |
Wood | 4195 |
Method | Total Number of Training Iterations | aAcc | mIoU | mAcc |
---|---|---|---|---|
DANet-50 | 20 k | 82.15 | 59.46 | 71.27 |
DANet-50 | 40 k | 81.84 | 60.63 | 73.98 |
DANet-101 | 20 k | 82.99 | 60.95 | 72.25 |
DANet-101 | 40 k | 83.19 | 61.73 | 73.33 |
Method | Total Number of Training Iterations | aAcc | mIoU | mAcc |
---|---|---|---|---|
DANet-101 | 20 k | 82.99 | 60.95 | 72.25 |
DANet-101 | 40 k | 83.19 | 61.73 | 73.33 |
DANet-101 | 60 k | 82.50 | 61.11 | 73.73 |
DANet-101 | 80 k | 82.71 | 61.59 | 73.74 |
DANet-101 | 100 k | 84.26 | 64.85 | 77.05 |
DANet-101 | 120 k | 83.04 | 60.03 | 70.53 |
Model | mIoU | IoU.background | IoU.fire | IoU.fabric |
---|---|---|---|---|
PSPNet | 60.20 | 78.07 | 65.73 | 62.54 |
CCNet | 60.42 | 77.86 | 67.30 | 62.90 |
FCN | 61.45 | 77.35 | 65.02 | 61.70
ISANet | 61.37 | 78.18 | 65.42 | 65.33 |
OCRNet | 53.48 | 73.48 | 63.96 | 39.14 |
The proposed method | 64.85 | 79.43 | 70.61 | 64.53 |
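The result tables report aAcc (overall pixel accuracy), mAcc (mean per-class accuracy), and mIoU (mean intersection over union). The excerpt does not spell out these metrics, so the following NumPy sketch uses their standard definitions computed from a pixel-level confusion matrix.

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray):
    """Compute aAcc, mAcc, and mIoU from a confusion matrix.

    conf[i, j] = number of pixels with ground-truth class i predicted as class j,
    e.g., over the classes background, fire, fabric, and wood.
    """
    conf = conf.astype(float)
    tp = np.diag(conf)                                   # correctly classified pixels
    aacc = tp.sum() / conf.sum()                         # overall pixel accuracy (aAcc)
    macc = np.mean(tp / conf.sum(axis=1))                # mean per-class accuracy (mAcc)
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)
    miou = np.mean(iou)                                  # mean intersection over union (mIoU)
    return aacc, macc, miou
```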