Fast Object Detection Leveraging Global Feature Fusion in Boundary-Aware Convolutional Networks
Abstract
1. Introduction
- We introduce a boundary-aware convolution (BAC) technique designed for efficient object detection in endoscopy.
- We propose a strategy to enhance shallow-layer features and balance the feature hierarchy, optimizing multi-level features by weighing the contributions of shallow and deep layers.
- We conduct comprehensive evaluations of the proposed framework on three datasets, showing consistent improvements over state-of-the-art single-stage and two-stage detectors.
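The shallow/deep feature balancing described in the second contribution can be sketched as a weighted fusion of aligned feature maps. This is an illustrative sketch only: `fuse_features` and the fixed `alpha` weight are hypothetical stand-ins for the paper's learned fusion, and the maps are assumed to be already upsampled to a common resolution.

```python
import numpy as np

def fuse_features(shallow, deep, alpha=0.5):
    """Weighted sum of a shallow feature map and an (already upsampled)
    deep feature map; alpha balances shallow vs. deep contributions.
    Illustrative only: the paper's actual fusion weights are learned."""
    assert shallow.shape == deep.shape, "maps must be spatially aligned"
    return alpha * shallow + (1.0 - alpha) * deep

# Example: 64-channel, 32x32 feature maps (C x H x W)
shallow = np.ones((64, 32, 32), dtype=np.float32)
deep = np.zeros((64, 32, 32), dtype=np.float32)
fused = fuse_features(shallow, deep, alpha=0.6)
```

With `alpha = 0.6`, each fused activation is 60% shallow and 40% deep; a learned per-channel weight would replace the scalar in practice.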
2. Related Work
2.1. YOLO
2.2. Object Detection
2.3. Bounding Boxes in Clinical Endoscopy
3. Materials and Methods
3.1. Context-Enhanced Feature Fusion
3.2. Boundary-Aware Convolution
Algorithm 1. Boundary-aware convolution
1: Extract object proposals (bounding boxes)
2: for each bounding_box in bounding_boxes:
3:     Extract features from the bounding box region
4:     Predict side boundaries using the features
5:     Refine the bounding box based on the side boundaries
6:     Replace the original bounding box with the refined one
7: Output the refined bounding boxes
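The refinement loop of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the paper's implementation: `predict_side_boundaries` is a caller-supplied placeholder for the feature-based boundary predictor, returning per-side offsets `(dl, dt, dr, db)` for a box in `(x1, y1, x2, y2)` form.

```python
def refine_boxes(boxes, predict_side_boundaries):
    """Sketch of Algorithm 1: refine each box with predicted side boundaries.

    boxes: list of (x1, y1, x2, y2) proposals.
    predict_side_boundaries: placeholder for the feature-based predictor;
    maps a box to per-side offsets (dl, dt, dr, db).
    """
    refined = []
    for (x1, y1, x2, y2) in boxes:
        # Steps 3-4: extract region features and predict side boundaries
        dl, dt, dr, db = predict_side_boundaries((x1, y1, x2, y2))
        # Steps 5-6: shift each side and replace the original box
        refined.append((x1 + dl, y1 + dt, x2 + dr, y2 + db))
    return refined  # Step 7: output the refined boxes

# Usage with a toy predictor that tightens every side by one pixel
tightened = refine_boxes([(0, 0, 10, 10)], lambda box: (1, 1, -1, -1))
```

A zero-offset predictor leaves the proposals unchanged, which makes the refinement easy to unit-test in isolation from the feature extractor.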
3.3. Loss Design
4. Results
4.1. Dataset
4.2. Performance Metrics
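The precision (P), recall (R), and F1 values reported in the result tables follow the standard definitions over matched detections. A minimal sketch, assuming detections have already been matched to ground truth (e.g., at IoU ≥ 0.5 for the AP50/mAP50 columns) to yield TP/FP/FN counts:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts; guards against division by zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Example: 8 correct detections, 2 spurious, 2 missed
p, r, f1 = detection_metrics(tp=8, fp=2, fn=2)
```

AP50 additionally averages precision over the precision-recall curve at the 0.5 IoU threshold; mAP averages AP across classes.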
4.3. Implementation Details
4.4. Comparison with State-of-the-Art Methods
4.5. Ablation Study
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Min, J.K.; Kwak, M.S.; Cha, J.M. Overview of deep learning in gastrointestinal endoscopy. Gut Liver 2019, 13, 388. [Google Scholar] [CrossRef] [PubMed]
- Jain, S.; Seal, A.; Ojha, A.; Yazidi, A.; Bures, J.; Tacheci, I.; Krejcar, O. A deep CNN model for anomaly detection and localization in wireless capsule endoscopy images. Comput. Biol. Med. 2021, 137, 104789. [Google Scholar] [CrossRef] [PubMed]
- Hashimoto, R.; Requa, J.; Dao, T.; Ninh, A.; Tran, E.; Mai, D.; Lugo, M.; Chehade, N.E.H.; Chang, K.J.; Karnes, W.E.; et al. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett’s esophagus (with video). Gastrointest. Endosc. 2020, 91, 1264–1271.e1. [Google Scholar] [CrossRef]
- Li, K.; Boyd, P.; Zhou, Y.; Ju, Z.; Liu, H. Electrotactile feedback in a virtual hand rehabilitation platform: Evaluation and implementation. IEEE Trans. Autom. Sci. Eng. 2018, 16, 1556–1565. [Google Scholar] [CrossRef]
- Liu, H.; Ju, Z.; Ji, X.; Chan, C.S.; Khoury, M. Human Motion Sensing and Recognition; Springer: Berlin, Germany, 2017. [Google Scholar]
- Yu, J.; Gao, H.; Chen, Y.; Zhou, D.; Liu, J.; Ju, Z. Deep object detector with attentional spatiotemporal LSTM for space human–robot interaction. IEEE Trans. Hum. Mach. Syst. 2022, 52, 784–793. [Google Scholar] [CrossRef]
- Montero-Valverde, J.A.; Organista-Vázquez, V.D.; Martínez-Arroyo, M.; de la Cruz-Gámez, E.; Hernández-Hernández, J.L.; Hernández-Bravo, J.M.; Hernández-Hernández, M. Automatic Detection of Melanoma in Human Skin Lesions. In Proceedings of the International Conference on Technologies and Innovation, Guayaquil, Ecuador, 13–16 November 2023; Springer Nature: Cham, Switzerland, 2023; pp. 220–234. [Google Scholar]
- Sarda, A.; Dixit, S.; Bhan, A. Object detection for autonomous driving using YOLO (You Only Look Once) algorithm. In Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India, 4–6 February 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1370–1374. [Google Scholar]
- George, J.; Skaria, S.; Varun, V.V. Using YOLO based deep learning network for real time detection and localization of lung nodules from low dose CT scans. In Medical Imaging 2018: Computer-Aided Diagnosis; SPIE: Bellingham, WA, USA, 2018; Volume 10575, pp. 347–355. [Google Scholar]
- Mirzaei, B.; Nezamabadi-Pour, H.; Raoof, A.; Derakhshani, R. Small Object Detection and Tracking: A Comprehensive Review. Sensors 2023, 23, 6887. [Google Scholar] [CrossRef]
- Simon, M.; Milz, S.; Amende, K.; Gross, H.M. Complex-YOLO: An Euler-region-proposal for real-time 3D object detection on point clouds. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany, 8–14 September 2018. [Google Scholar]
- Poon, Y.S.; Lin, C.C.; Liu, Y.H.; Fan, C.P. YOLO-based deep learning design for in-cabin monitoring system with fisheye-lens camera. In Proceedings of the 2022 IEEE International Conference on Consumer Electronics (ICCE), Virtual, 7–9 January 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–4. [Google Scholar]
- Pathak, A.R.; Pandey, M.; Rautaray, S. Application of deep learning for object detection. Procedia Comput. Sci. 2018, 132, 1706–1717. [Google Scholar] [CrossRef]
- Bharati, S.P.; Wu, Y.; Sui, Y.; Padgett, C.; Wang, G. Real-time obstacle detection and tracking for sense-and-avoid mechanism in UAVs. IEEE Trans. Intell. Veh. 2018, 3, 185–197. [Google Scholar] [CrossRef]
- Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
- Fu, C.Y.; Liu, W.; Ranga, A.; Tyagi, A.; Berg, A.C. Dssd: Deconvolutional single shot detector. arXiv 2017, arXiv:1701.06659. [Google Scholar]
- Noh, H.; Hong, S.; Han, B. Learning deconvolution network for semantic segmentation. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 11–18 December 2015; pp. 1520–1528. [Google Scholar]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125. [Google Scholar]
- Cai, Z.; Fan, Q.; Feris, R.S.; Vasconcelos, N. A unified multi-scale deep convolutional neural network for fast object detection. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 354–370. [Google Scholar]
- Kong, T.; Yao, A.; Chen, Y.; Sun, F. Hypernet: Towards accurate region proposal generation and joint object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 845–853. [Google Scholar]
- Ghiasi, G.; Fowlkes, C.C. Laplacian pyramid reconstruction and refinement for semantic segmentation. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; pp. 519–534. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv 2022, arXiv:2209.02976. [Google Scholar]
- Simon, M.; Amende, K.; Kraus, A.; Honer, J.; Samann, T.; Kaulbersch, H.; Milz, S.; Michael Gross, H. Complexer-yolo: Real-time 3d object detection and tracking on semantic point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
- Han, X.; Chang, J.; Wang, K. Real-time object detection based on YOLO-v2 for tiny vehicle object. Procedia Comput. Sci. 2021, 183, 61–72. [Google Scholar] [CrossRef]
- Chen, W.; Huang, H.; Peng, S.; Zhou, C.; Zhang, C. YOLO-face: A real-time face detector. Vis. Comput. 2021, 37, 805–813. [Google Scholar] [CrossRef]
- Jang, J.Y. The past, present, and future of image-enhanced endoscopy. Clin. Endosc. 2015, 48, 466–475. [Google Scholar] [CrossRef] [PubMed]
- Banerjee, S.; Cash, B.D.; Dominitz, J.A.; Baron, T.H.; Anderson, M.A.; Ben-Menachem, T.; Fisher, L.; Fukami, N.; Harrison, M.E.; Ikenberry, S.O.; et al. The role of endoscopy in the management of patients with peptic ulcer disease. Gastrointest. Endosc. 2010, 71, 663–668. [Google Scholar] [CrossRef] [PubMed]
- Zaidi, S.S.A.; Ansari, M.S.; Aslam, A.; Kanwal, N.; Asghar, M.; Lee, B. A survey of modern deep learning based object detection models. Digit. Signal Process. 2022, 126, 103514. [Google Scholar] [CrossRef]
- Yu, J.; Ma, T.; Chen, H.; Lai, M.; Ju, Z.; Xu, Y. Marrying Global–Local Spatial Context for Image Patches in Computer-Aided Assessment. IEEE Trans. Syst. Man Cybern. Syst. 2023, 53, 7099–7111. [Google Scholar] [CrossRef]
- Zou, Z.; Chen, K.; Shi, Z.; Guo, Y.; Ye, J. Object detection in 20 years: A survey. Proc. IEEE 2023, 111. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- Lazebnik, S.; Schmid, C.; Ponce, J. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 2, pp. 2169–2178. [Google Scholar]
- Perronnin, F.; Sánchez, J.; Mensink, T. Improving the fisher kernel for large-scale image classification. In Proceedings of the Computer Vision–ECCV 2010: 11th European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; pp. 143–156. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. Available online: https://proceedings.neurips.cc/paper_files/paper/2015/hash/14bfa6bb14875e45bba028a21ed38046-Abstract.html (accessed on 15 November 2023). [CrossRef] [PubMed]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969. [Google Scholar]
- Chen, S.; Urban, G.; Baldi, P. Weakly Supervised Polyp Segmentation in Colonoscopy Images Using Deep Neural Networks. J. Imaging 2022, 8, 121. [Google Scholar] [CrossRef]
- Fan, W.; Ma, T.; Gao, H.; Yu, J.; Ju, Z. Deep Learning-Powered Multiple-Object Segmentation for Computer-Aided Diagnosis. In Proceedings of the 2023 42nd Chinese Control Conference (CCC), Tianjin, China, 24–26 July 2023; pp. 7895–7900. [Google Scholar]
- Yu, J.; Ma, T.; Fu, Y.; Chen, H.; Lai, M.; Zhuo, C.; Xu, Y. Local-to-global spatial learning for whole-slide image representation and classification. Comput. Med. Imaging Graph. 2023, 107, 102230. [Google Scholar] [CrossRef] [PubMed]
- Cai, Z.; Vasconcelos, N. Cascade r-cnn: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6154–6162. [Google Scholar]
- Jiang, B.; Luo, R.; Mao, J.; Xiao, T.; Jiang, Y. Acquisition of localization confidence for accurate object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 784–799. [Google Scholar]
- Wang, J.; Chen, K.; Yang, S.; Loy, C.C.; Lin, D. Region proposal by guided anchoring. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2965–2974. [Google Scholar]
- Yu, J.; Gao, H.; Zhou, D.; Liu, J.; Gao, Q.; Ju, Z. Deep temporal model-based identity-aware hand detection for space human–robot interaction. IEEE Trans. Cybern. 2021, 52, 13738–13751. [Google Scholar] [CrossRef] [PubMed]
- Bernal, J.; Sánchez, F.J.; Fernández-Esparrach, G.; Gil, D.; Rodríguez, C.; Vilariño, F. WM-DOVA maps for accurate polyp highlighting in colonoscopy: Validation vs. saliency maps from physicians. Comput. Med. Imaging Graph. 2015, 43, 99–111. [Google Scholar] [CrossRef]
- Jha, D.; Smedsrud, P.H.; Riegler, M.A.; Halvorsen, P.; de Lange, T.; Johansen, D.; Johansen, H.D. Kvasir-seg: A segmented polyp dataset. In Proceedings of the MultiMedia Modeling: 26th International Conference, MMM 2020, Daejeon, Republic of Korea, 5–8 January 2020; pp. 451–462. [Google Scholar]
- Ali, S.; Ghatwary, N.; Braden, B.; Lamarque, D.; Bailey, A.; Realdon, S.; Cannizzaro, R.; Rittscher, J.; Daul, C.; East, J. Endoscopy disease detection challenge 2020. arXiv 2020, arXiv:2003.03376. [Google Scholar]
- Carrinho, P.; Falcao, G. Highly Accurate and Fast YOLOv4-Based Polyp Detection. Available at SSRN 4227573. 2022. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4227573 (accessed on 15 November 2023).
- Ma, C.; Jiang, H.; Ma, L.; Chang, Y. A Real-Time Polyp Detection Framework for Colonoscopy Video. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Shenzhen, China, 4–7 November 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 267–278. [Google Scholar]
- Yu, T.; Lin, N.; Zhang, X.; Pan, Y.; Hu, H.; Zheng, W.; Liu, J.; Hu, W.; Duan, H.; Si, J. An end-to-end tracking method for polyp detectors in colonoscopy videos. Artif. Intell. Med. 2022, 131, 102363. [Google Scholar] [CrossRef] [PubMed]
- Lima, A.C.D.M.; De Paiva, L.F.; Bráz, G.; De Almeida, J.D.S.; Silva, A.C.; Coimbra, M.T.; De Paiva, A.C. A two-stage method for polyp detection in colonoscopy images based on saliency object extraction and transformers. IEEE Access 2023, 11, 2169–3536. [Google Scholar] [CrossRef]
- Souaidi, M.; Lafraxo, S.; Kerkaou, Z.; El Ansari, M.; Koutti, L. A Multiscale Polyp Detection Approach for GI Tract Images Based on Improved DenseNet and Single-Shot Multibox Detector. Diagnostics 2023, 13, 733. [Google Scholar] [CrossRef]
- Neto, A.; Couto, D.; Coimbra, M.; Cunha, A. Colonoscopic Polyp Detection with Deep Learning Assist. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2023), Virtual, 8–10 February 2023. [Google Scholar]
- Ali, S.; Dmitrieva, M.; Ghatwary, N.; Bano, S.; Polat, G.; Temizel, A.; Krenzer, A.; Hekalo, A.; Guo, Y.B.; Matuszewski, B.; et al. Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy. Med. Image Anal. 2021, 70, 102002. [Google Scholar] [CrossRef]
Model | Dataset | mAP50 | AP50 | P | R | F1 |
---|---|---|---|---|---|---|
YOLOv4 [50] | CVC-ClinicDB | - | - | 80.5 ± 0.3 | 73.6 ± 0.1 | 76.9 ± 0.1 |
STYOLOv5 [51] | CVC-ClinicDB | - | - | 83.6 ± 0.3 | 73.1 ± 0.2 | 78.0 ± 0.1 |
ITH [52] | CVC-ClinicDB | - | - | 92.6 ± 0.2 | 80.7 ± 0.1 | 86.2 ± 0.1 |
soet [53] | CVC-ClinicDB | 89.5 ± 0.1 | 89.5 ± 0.1 | 88.3 ± 0.1 | 92.3 ± 0.1 | 89.8 ± 0.2 |
DC-SSDNet [54] | CVC-ClinicDB | 92.2 ± 0.3 | 92.2 ± 0.3 | 91.0 ± 0.1 | 92.2 ± 0.1 | 88.4 ± 0.2 |
Ours | CVC-ClinicDB | 94.8 ± 0.1 | 94.8 ± 0.1 | 93.5 ± 0.2 | 92.7 ± 0.1 | 90.9 ± 0.1 |
Model | Dataset | mAP50 | AP50 | P | R | F1 |
---|---|---|---|---|---|---|
YOLOv4 [55] | Kvasir-SEG | 71.0 ± 0.1 | 71.0 ± 0.1 | 65.0 ± 0.2 | 66.0 ± 0.2 | 63.0 ± 0.1 |
YOLOv5l [55] | Kvasir-SEG | 81.0 ± 0.1 | 68.0 ± 0.1 | 65.0 ± 0.2 | 65.0 ± 0.1 | 64.0 ± 0.1 |
YOLOv5m [55] | Kvasir-SEG | 81.0 ± 0.2 | 80.0 ± 0.2 | 65.0 ± 0.0 | 65.0 ± 0.1 | 64.0 ± 0.3 |
YOLOv5n [55] | Kvasir-SEG | 75.0 ± 0.0 | 75.0 ± 0.0 | 64.0 ± 0.1 | 64.0 ± 0.1 | 62.0 ± 0.2 |
YOLOv5s [55] | Kvasir-SEG | 74.0 ± 0.1 | 74.0 ± 0.1 | 63.0 ± 0.2 | 62.0 ± 0.1 | 61.0 ± 0.1 |
DETR [55] | Kvasir-SEG | 80.0 ± 0.3 | 80.0 ± 0.3 | 65.0 ± 0.1 | 69.0 ± 0.2 | 66.0 ± 0.1 |
soet | Kvasir-SEG | 92.6 ± 0.1 | 92.6 ± 0.1 | 95.1 ± 0.1 | 93.1 ± 0.1 | 94.0 ± 0.2 |
Ours | Kvasir-SEG | 93.5 ± 0.1 | 93.5 ± 0.1 | 95.5 ± 0.2 | 93.2 ± 0.1 | 94.7 ± 0.1 |
Team Names | Dataset | mAP25 | mAP50 | mAP75 | Overall mAP |
---|---|---|---|---|---|
sahadate [56] | EDD2020 | 37.6 ± 0.1 | 23.3 ± 0.1 | 15.8 ± 0.1 | 26.8 ± 0.1 |
VinBDI [56] | EDD2020 | 43.2 ± 0.1 | 27.0 ± 0.1 | 17.0 ± 0.1 | 30.2 ± 0.1 |
adrian [56] | EDD2020 | 48.3 ± 0.1 | 33.6 ± 0.1 | 27.1 ± 0.2 | 37.6 ± 0.1 |
YOLOv4 | EDD2020 | 53.1 ± 0.1 | 41.2 ± 0.1 | 32.3 ± 0.2 | 42.2 ± 0.2 |
YOLOv5 | EDD2020 | 54.7 ± 0.1 | 42.7 ± 0.1 | 32.9 ± 0.1 | 43.4 ± 0.3 |
Ours | EDD2020 | 59.7 ± 0.1 | 44.1 ± 0.1 | 35.6 ± 0.2 | 46.5 ± 0.1 |
Class | Dataset | mAP50 | mAP50-95 | Precision | Recall |
---|---|---|---|---|---|
BE | EDD2020 | 66.3 ± 0.2 | 48.4 ± 0.1 | 59.4 ± 0.2 | 68.2 ± 0.1 |
suspicious | EDD2020 | 25.5 ± 0.1 | 16.9 ± 0.2 | 35.6 ± 0.3 | 21.9 ± 0.1 |
HGD | EDD2020 | 35.3 ± 0.2 | 22.9 ± 0.2 | 47.4 ± 0.2 | 27.8 ± 0.2 |
cancer | EDD2020 | 34.8 ± 0.2 | 16.3 ± 0.2 | 64.0 ± 0.1 | 25.0 ± 0.1 |
polyp | EDD2020 | 58.4 ± 0.3 | 40.2 ± 0.1 | 62.9 ± 0.2 | 57.7 ± 0.2 |
BAC | BSF | PA | mAP50 | AP50 | P | R | F1 |
---|---|---|---|---|---|---|---|
 | | | 89.7 ± 0.2 | 89.7 ± 0.2 | 90.5 ± 0.2 | 86.3 ± 0.2 | 87.2 ± 0.2 |
√ | | | 90.9 ± 0.2 | 90.9 ± 0.2 | 91.4 ± 0.2 | 87.2 ± 0.2 | 87.9 ± 0.2 |
 | √ | | 91.4 ± 0.2 | 91.4 ± 0.2 | 90.8 ± 0.2 | 87.4 ± 0.2 | 87.6 ± 0.2 |
√ | √ | | 92.4 ± 0.2 | 92.4 ± 0.2 | 91.7 ± 0.2 | 88.2 ± 0.2 | 88.6 ± 0.2 |
 | | √ | 94.1 ± 0.2 | 94.1 ± 0.2 | 92.6 ± 0.2 | 92.2 ± 0.2 | 89.4 ± 0.2 |
√ | | √ | 94.3 ± 0.2 | 94.3 ± 0.2 | 93.4 ± 0.2 | 92.7 ± 0.2 | 90.2 ± 0.2 |
√ | √ | √ | 94.8 ± 0.1 | 94.8 ± 0.1 | 93.5 ± 0.2 | 92.7 ± 0.1 | 90.9 ± 0.1 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fan, W.; Yu, J.; Ju, Z. Fast Object Detection Leveraging Global Feature Fusion in Boundary-Aware Convolutional Networks. Information 2024, 15, 53. https://doi.org/10.3390/info15010053