ADFireNet: An Anchor-Free Smoke and Fire Detection Network Based on Deformable Convolution
Abstract
1. Introduction
- (1) Compared with traditional convolution, deformable convolution (DCN) is better suited to extracting features of irregularly shaped instances such as flames and smoke.
- (2) A simple metric, pseudo-IoU, is introduced to mitigate the poor training caused by the lack of an explicit label-assignment strategy in common anchor-free detection networks, without adding any extra network structure.
- (3) Compared with other fire detection algorithms, ADFireNet has fewer parameters and a stronger ability to sample complex shapes, so it meets real-time detection requirements while maintaining high accuracy.
2. Related Work
2.1. Deformable Convolution Network
2.2. Label Assignment and Pseudo-IoU
3. Approach
3.1. Architecture of ADFireNet
3.2. Loss Function
4. Experimental Section
4.1. Experimental Dataset
4.2. Implementation Details
4.3. Evaluation Methodology
4.4. Flame and Smoke Detection Results
- (1) The anchor-based method, Faster R-CNN, lags behind the anchor-free methods in detection speed, cannot meet real-time requirements, and also performs worse in accuracy.
- (2) FCOS and ADFireNet are both anchor-free networks. ADFireNet omits the center-ness branch, and its classification head adopts a single-branch multi-class design, whereas FCOS adopts parallel binary-classification branches. ADFireNet is therefore faster than FCOS, and its detection accuracy (mAP) is 2.3 percentage points higher.
- (3) The depth of the backbone has a significant impact on overall performance. When the backbone is changed from ResNet-101 to ResNet-50, the mAP of ADFireNet decreases by 1.8 percentage points, but the frame rate increases by 8 FPS. In general, a shallower backbone improves detection speed but weakens feature extraction and thus detection accuracy, so the backbone can be chosen according to the actual requirements of the detection task.
4.5. Ablation Studies
4.6. Visual Effects
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Yang, Z.; Bu, L.; Wang, T.; Yuan, P.; Jineng, O. Indoor Video Flame Detection Based on Lightweight Convolutional Neural Network. Pattern Recognit. Image Anal. 2020, 30, 551–564.
- Ye, S.; Bai, Z.; Chen, H.; Bohush, R.; Ablameyko, S. An effective algorithm to detect both smoke and flame using color and wavelet analysis. Pattern Recognit. Image Anal. 2017, 27, 131–138.
- Yuan, J.; Wang, L.; Wu, P.; Gao, C.; Sun, L. Detection of wildfires along transmission lines using deep time and space features. Pattern Recognit. Image Anal. 2018, 28, 805–812.
- Xiong, S.; Li, B.; Zhu, S. DCGNN: A single-stage 3D object detection network based on density clustering and graph neural network. Complex Intell. Syst. 2023, 9, 3399–3408.
- Zeng, J.; Lin, Z.; Qi, C.; Zhao, X.; Wang, F. An improved object detection method based on deep convolution neural network for smoke detection. In Proceedings of the 2018 International Conference on Machine Learning and Cybernetics (ICMLC), Chengdu, China, 15–18 July 2018; pp. 184–189.
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In Proceedings of the 29th Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 11–12 December 2015.
- Yu, L.; Liu, J. Flame image recognition algorithm based on improved Mask R-CNN. Comput. Eng. Appl. 2020, 56, 194–198.
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
- Li, B.; Lu, Y.; Pang, W.; Xu, H. Image Colorization using CycleGAN with semantic and spatial rationality. Multimed. Tools Appl. 2023, 82, 21641–21655.
- Barmpoutis, P.; Dimitropoulos, K.; Kaza, K.; Nikos, G. Fire Detection from Images using Faster R-CNN and Multidimensional Texture Analysis. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8301–8305.
- Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, A. Deep learning based forest fire detection approach using UAV and YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–27 July 2019; pp. 1–5.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Yue, C.; Ye, J. Research on Improved YOLOv3 Fire Detection Based on Enlarged Feature Map Resolution and Cluster Analysis. In Proceedings of the International Conference on Computer Big Data and Artificial Intelligence (ICCBDAI 2020), Changsha, China, 24–25 October 2020; p. 012094.
- Qin, Y.-Y.; Cao, J.-T.; Ji, X.-F. Fire detection method based on depthwise separable convolution and YOLOv3. Int. J. Autom. Comput. 2021, 18, 300–310.
- Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: A simple and strong anchor-free object detector. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1922–1933.
- Law, H.; Deng, J. CornerNet: Detecting objects as paired keypoints. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 734–750.
- Zhu, C.; He, Y.; Savvides, M. Feature selective anchor-free module for single-shot object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–21 June 2019; pp. 840–849.
- Li, J.; Cheng, B.; Feris, R.; Xiong, J.; Huang, T.S.; Hwu, W.-M.; Shi, H. Pseudo-IoU: Improving label assignment in anchor-free object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 2378–2387.
- Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Yu, J.; Jiang, Y.; Wang, Z.; Cao, Z.; Huang, T. UnitBox: An Advanced Object Detection Network. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 516–520.
| Method | Backbone | mAP@0.5 (%) | FPS |
|---|---|---|---|
| Faster R-CNN | ResNet-101 | 78.6 | 12 |
| FCOS_101 | ResNet-101 | 83.1 | 17 |
| FCOS_50 | ResNet-50 | 80.0 | 39 |
| ADFireNet_101 (ours) | ResNet-101 | 85.4 | 24 |
| ADFireNet_50 (ours) | ResNet-50 | 83.6 | 32 |
| Method | mAP@0.5 (%) |
|---|---|
| ADFireNet_101 without DCN | 83.1 |
| ADFireNet_50 without DCN | 79.3 |
| ADFireNet_101 without pseudo-IoU | 83.4 |
| ADFireNet_50 without pseudo-IoU | 80.5 |
| ADFireNet_101 | 85.4 |
| ADFireNet_50 | 83.6 |
Share and Cite
Li, B.; Liu, P. ADFireNet: An Anchor-Free Smoke and Fire Detection Network Based on Deformable Convolution. Sensors 2023, 23, 7086. https://doi.org/10.3390/s23167086