A Fire Detection Method for Aircraft Cargo Compartments Utilizing Radio Frequency Identification Technology and an Improved YOLO Model
Abstract
1. Introduction
- By incorporating the Faster Block from the FasterNet backbone into the C3 module, this paper reduces the number of parameters in the YOLOv5 framework and significantly decreases the model’s inference time (a minimal sketch of the Faster Block is given after this list).
- To improve the detection of small-scale smoke and flames during the initial stages of a fire, this paper replaces the upsampling module with DySample, which better preserves feature details and enhances detection accuracy.
- This paper employs the MPDIoU loss function to improve the localization accuracy of small-scale flame and smoke targets during the initial stages of a fire and to strengthen multi-scale target detection (a minimal sketch of this loss follows this list).
- Multiple cameras are typically used for image acquisition in aircraft cargo holds. This paper proposes managing these surveillance cameras with RFID (radio frequency identification) technology so that the location of the camera that triggers the fire alarm can be identified in the event of a fire.
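To make two of these contributions concrete, the following are minimal PyTorch sketches written for illustration; they are assumptions about how the components could be implemented, not the authors’ code. The first block follows the FasterNet design of a partial convolution (PConv) followed by two pointwise convolutions with a residual shortcut, the kind of Faster Block that can be dropped into the C3 module; the class names and hyperparameters (e.g., `n_div`, `expansion`) are illustrative.

```python
import torch
import torch.nn as nn

class PConv(nn.Module):
    """Partial convolution (FasterNet): convolve only a fraction of the channels."""
    def __init__(self, channels, n_div=4, kernel_size=3):
        super().__init__()
        self.conv_channels = channels // n_div
        self.untouched = channels - self.conv_channels
        self.conv = nn.Conv2d(self.conv_channels, self.conv_channels,
                              kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.conv_channels, self.untouched], dim=1)
        return torch.cat((self.conv(x1), x2), dim=1)  # untouched channels pass through unchanged

class FasterBlock(nn.Module):
    """Faster Block: PConv followed by two 1x1 convolutions and a residual shortcut."""
    def __init__(self, channels, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.pconv = PConv(channels)
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1, bias=False),
        )

    def forward(self, x):
        return x + self.mlp(self.pconv(x))
```

The second block sketches the MPDIoU loss, which augments the IoU term with the squared distances between the predicted and ground-truth top-left and bottom-right corners, normalized by the squared diagonal of the input image; the function signature and the (x1, y1, x2, y2) box layout are assumptions for illustration.

```python
import torch

def mpdiou_loss(pred, target, img_w, img_h, eps=1e-7):
    """Sketch of the MPDIoU loss for (N, 4) boxes in (x1, y1, x2, y2) format."""
    # Intersection and union for the plain IoU term
    inter_w = (torch.min(pred[:, 2], target[:, 2]) - torch.max(pred[:, 0], target[:, 0])).clamp(min=0)
    inter_h = (torch.min(pred[:, 3], target[:, 3]) - torch.max(pred[:, 1], target[:, 1])).clamp(min=0)
    inter = inter_w * inter_h
    area_pred = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_gt = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_pred + area_gt - inter + eps)

    # Squared distances between the top-left and bottom-right corner pairs
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2

    # Normalize by the squared image diagonal and return the loss 1 - MPDIoU
    diag_sq = img_w ** 2 + img_h ** 2
    mpdiou = iou - d1 / diag_sq - d2 / diag_sq
    return (1.0 - mpdiou).mean()
```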
2. Aviation Fire Detection and Image-Based Fire Detection Methods
3. Improved Network
3.1. FDY-YOLO
3.2. FaB-C3 Network
3.3. DySample
3.4. MPDIoU
3.5. RFID
3.6. RFID System Application Process
- The crew install the corresponding equipment tags on the fire detectors.
- The crew use handheld mobile devices to scan and record the tags and bind each tag to its corresponding detector in software, ensuring consistency between them; the detector data are synchronized in a timely manner, aggregated, and uploaded to the onboard management platform for crew review (a minimal sketch of such a binding is given after this list).
- Data that pass review are synchronized to the onboard management system, and the detectors are inventoried and recorded regularly. Any malfunctioning detectors are repaired or replaced promptly so that every detector remains operational and can raise an alarm in the event of a fire, avoiding casualties caused by detector failures.
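The following is a minimal, hypothetical Python sketch of the tag-detector binding and alarm-location lookup outlined above; the registry structure, identifiers, and field names are assumptions for illustration and do not come from the onboard system described in this paper.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class DetectorBinding:
    """One RFID tag bound to a cargo-compartment camera/fire detector (illustrative fields)."""
    tag_id: str        # EPC read from the RFID tag
    detector_id: str   # camera or fire-detector identifier
    location: str      # installation position in the cargo compartment
    status: str = "operational"  # updated during periodic inspections

# Hypothetical registry kept by the onboard management platform: tag EPC -> binding
registry: Dict[str, DetectorBinding] = {}

def bind_tag(tag_id: str, detector_id: str, location: str) -> None:
    """Record a tag scanned with the handheld device and bind it to its detector."""
    registry[tag_id] = DetectorBinding(tag_id, detector_id, location)

def locate_alarm_source(tag_id: str) -> Optional[str]:
    """Resolve the cargo-compartment location of the camera that raised a fire alarm."""
    binding = registry.get(tag_id)
    return binding.location if binding else None

# Example: bind a tag during installation, then resolve an alarm raised by that camera.
bind_tag("E200-3412-0001", "CAM-FWD-02", "forward cargo compartment, frame 24")
print(locate_alarm_source("E200-3412-0001"))  # -> forward cargo compartment, frame 24
```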
4. Results and Discussion
4.1. Experimental Setup and Data Collection
4.2. Comparative Analysis of Heatmaps
4.3. Evaluation Index
4.4. Ablation Experiments
4.5. Comparative Analysis of Different Models
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Ai, H.-z.; Han, D.; Wang, X.-z.; Liu, Q.-y.; Wang, Y.; Li, M.-y.; Zhu, P. Early fire detection technology based on improved transformers in aircraft cargo compartments. J. Saf. Sci. Resil. 2024, 5, 194–203.
- Federal Aviation Regulations, Part 25: Airworthiness Standards: Transport Category Airplanes; Federal Aviation Administration: Washington, DC, USA, 2002; Volume 7.
- Zhou, Y.; Shi, L.; Li, C.; Zhang, H.; Zheng, R. Scattering Characteristics of Fire Smoke and Dust Aerosol in Aircraft Cargo Compartment. Fire Technol. 2023, 59, 2543–2565.
- Zhang, Q.; Wang, Y.C.; Soutis, C.; Gresil, M. Development of a fire detection and suppression system for a smart air cargo container. Aeronaut. J. 2021, 125, 205–222.
- Krüll, W.; Willms, I.; Zakrzewski, R.R.; Sadok, M.; Shirer, J.; Zeliff, B. Design and test methods for a video-based cargo fire verification system for commercial aircraft. Fire Saf. J. 2006, 41, 290–300.
- Bai, Y.; Wang, D.; Li, Q.; Liu, T.; Ji, Y. Advanced Multi-Label Fire Scene Image Classification via BiFormer, Domain-Adversarial Network and GCN. Fire 2024, 7, 322.
- Buriboev, A.S.; Rakhmanov, K.; Soqiyev, T.; Choi, A.J. Improving Fire Detection Accuracy through Enhanced Convolutional Neural Networks and Contour Techniques. Sensors 2024, 24, 5184.
- Qu, N.; Li, Z.; Li, X.; Zhang, S.; Zheng, T. Multi-parameter fire detection method based on feature depth extraction and stacking ensemble learning model. Fire Saf. J. 2022, 128, 103541.
- Yongbo, H.; Wenjie, Z.; Wei, Y.; Yongqing, L. Research on multi-sensor smoke detection method for aircraft cargo compartment. China Saf. Sci. J. 2019, 29, 43.
- Kaiyuan, L.; Hongyong, Y.; Tao, C.; Huang, L. Tunable diode laser absorption spectroscopy (TDLAS)-based optical probe initial fire detection system. J. Tsinghua Univ. (Sci. Technol.) 2023, 63, 910–916.
- Wu, A.; Li, M.; Chen, Y. Research for image fire detection technology in large space. Comput. Meas. Control 2006, 14, 869–871.
- Fang, S.; Qi, L.; Yu, L. Video smoke detection with multi-feature analysis. Comput. Eng. Appl. 2016, 52, 222–227.
- Wang, Y.; Hua, C.; Ding, W.; Wu, R. Real-time detection of flame and smoke using an improved YOLOv4 network. Signal Image Video Process. 2022, 16, 1109–1116.
- Zheng, H.; Duan, J.; Dong, Y.; Liu, Y. Real-time fire detection algorithms running on small embedded devices based on MobileNetV3 and YOLOv4. Fire Ecol. 2023, 19, 31.
- Chen, D.; Xing, W.; Zengshou, D.; Yilei, W.; Zhonghao, J. Improved YOLOv5s Flame and Smoke Detection Method for Underground Garage. J. Comput. Eng. Appl. 2024, 60, 298.
- Wang, A.; Liang, G.; Wang, X.; Song, Y. Application of the YOLOv6 combining CBAM and CIoU in forest fire and smoke detection. Forests 2023, 14, 2261.
- Chen, X.; Xue, Y.; Hou, Q.; Fu, Y.; Zhu, Y. RepVGG-YOLOv7: A modified YOLOv7 for fire smoke detection. Fire 2023, 6, 383.
- Titu, M.F.S.; Pavel, M.A.; Michael, G.K.O.; Babar, H.; Aman, U.; Khan, R. Real-Time Fire Detection: Integrating Lightweight Deep Learning Models on Drones with Edge Computing. Drones 2024, 8, 483.
- Chen, J.; Kao, S.-h.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, don’t walk: Chasing higher FLOPS for faster neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031.
- Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to upsample by learning to sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 6027–6037.
- Zheng, Z.; Wang, P.; Liu, W.; Li, J.; Ye, R.; Ren, D. Distance-IoU loss: Faster and better learning for bounding box regression. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20), New York, NY, USA, 12 February 2020; Volume 4, pp. 12993–13000.
- Ma, S.; Xu, Y. MPDIoU: A loss for efficient and accurate bounding box regression. arXiv 2023, arXiv:2307.07662.
- Shen, E.; Duan, S.; Guo, S.; Yang, W. Object Localization and Sensing in Non-Line-of-Sight Using RFID Tag Matrices. Electronics 2024, 13, 341.
- Ali, M.; Hendriks, P.; Popping, N.; Levi, S.; Naveed, A. A Comparison of Machine Learning Algorithms for Wi-Fi Sensing Using CSI Data. Electronics 2023, 12, 3935.
- Wang, L.; Luo, Z.; Guo, R.; Li, Y. A Review of Tags Anti-Collision Identification Methods Used in RFID Technology. Electronics 2023, 12, 3644.
- Xie, S.; Ma, C.; Feng, R.; Xiang, X.; Jiang, P. Wireless glucose sensing system based on dual-tag RFID technology. IEEE Sens. J. 2022, 22, 13632–13639.
- Feng, R.H.; Li, J.H.; Xie, S.; Mao, X.R. Efficient Training Method for Memristor-Based Array Using 1T1M Synapse. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 2410–2414.
- Feng, R.; Xiang, X.; Xie, S.; Jiang, P. Sensing System for Mixed Inorganic Salt Solution Based on Improved Double Label Coupling RFID. IEEE Sens. J. 2023, 23, 13565–13574.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
- de Venâncio, P.V.A.; Lisboa, A.C.; Barbosa, A.V. An automatic fire detection system based on deep convolutional neural networks for low-power, resource-constrained devices. Neural Comput. Appl. 2022, 34, 15349–15368.
Parameter Name | Configuration |
---|---|
CPU | Intel(R) Core(TM) i5-4590 CPU @ 3.30 GHz |
RAM | 48 GB |
GPU | NVIDIA Quadro RTX 8000 |
Operating system | Windows 10, 64-bit |
Language | Python 3.8 |
Deep learning framework | PyTorch 1.13 |
CUDA | 11.7 |
Data Type | Quantity |
---|---|
Only flame | 4396 |
Only smoke | 3784 |
Both flame and smoke | 2787 |
YOLOv5s | FaB-C3 | DySample | MPDIoU | P/% | R/% | mAP@0.5/% | GFLOPs |
---|---|---|---|---|---|---|---|
√ | × | × | × | 89.9 | 90.1 | 91.8 | 16.0 |
√ | √ | × | × | 87.4 | 85.4 | 90.4 | 9.6↓ |
√ | × | √ | × | 88.9 | 88.1 | 92.1 | 16.5 |
√ | × | × | √ | 89.9 | 90.2↑ | 92.2 | 16.0 |
√ | √ | √ | × | 88.4 | 89.5 | 91.6 | 10.2 |
√ | √ | × | √ | 88.7 | 89.8 | 91.7 | 9.6↓ |
√ | × | √ | √ | 92.1 | 86.7 | 92.9↑ | 16.5 |
√ | √ | √ | √ | 90.1 | 88.9 | 92.6 | 10.6 |
Method | P/% | R/% | mAP@0.5/% | GFLOPs |
---|---|---|---|---|
FDY-YOLO | 90.1 | 88.9 | 92.6 | 10.6 |
YOLOv4 | 91.4 | 87.4 | 91.2 | 154.9 |
YOLOv5s | 91.5 | 87.5 | 91.6 | 16.0 |
YOLOv6s | 90.6 | 89.1 | 92.2 | 13.9 |
YOLOv7-tiny | 90.4 | 89.2 | 92.3 | 13.1 |
YOLOv8s | 88.8 | 88.1 | 92.5 | 28.8 |
Faster R-CNN | 71.4 | 90.4 | 93.4 | 370.2 |