Automated Detection of Atypical Aviation Obstacles from UAV Images Using a YOLO Algorithm
Abstract
1. Introduction
2. Related Works
2.1. Object Classification Based on RGB Imagery
2.2. Research Purpose
3. Methods
3.1. Detection of Atypical Aviation Obstacles Using YOLOv3
3.2. Determination of the Centroid of an Atypical Aviation Obstacle
3.3. Estimation of Height of Temporary Aviation Obstacle
3.4. Classification of Point Cloud
4. Materials and Experimental Results
4.1. Study Area
4.2. Description of Data Sets
4.2.1. EPLK
4.2.2. EPRA
4.3. Atypical Aviation Obstacles
4.4. Surfaces for Obtaining Obstacle Data
4.5. Experimental Results
4.6. Detecting Atypical Aviation Obstacles in the Orthophotomap
4.7. Accuracy Evaluation of YOLOv3 Algorithm
4.8. Detection of Atypical Aviation Obstacles Based on Point Cloud
4.8.1. Generating a Dense Point Cloud
4.8.2. Classification of Point Cloud
4.9. Analysis of the Matching Accuracy of the Point Cloud
5. Discussion
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Type of Aviation Obstacle |
---|
Mast |
Wind turbine |
Chimney |
Tower |
Energy pole |
Construction crane |
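For orientation, the obstacle categories above could be encoded as a Darknet/YOLO-style class list along the lines of the sketch below; the file name and label spellings are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch only: writing the obstacle categories listed above as a
# Darknet/YOLO-style class list. The file name "obstacles.names" and the label
# spellings are assumptions, not the authors' configuration.
CLASSES = [
    "mast",
    "wind_turbine",
    "chimney",
    "tower",
    "energy_pole",
    "construction_crane",
]

with open("obstacles.names", "w", encoding="utf-8") as f:
    f.write("\n".join(CLASSES) + "\n")
```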
Data Set | Category | Number of Images | Number of Objects |
---|---|---|---|
Training set | Construction crane | 143 | 184 |
Training set | Energy pole | 141 | 196 |
Training set | Wind turbine | 140 | 188 |
Training set | Mast | 136 | 148 |
Validation set | Construction crane | 44 | 52 |
Validation set | Energy pole | 39 | 63 |
Validation set | Wind turbine | 41 | 49 |
Validation set | Mast | 36 | 41 |
Test set | Construction crane | 22 | 27 |
Test set | Energy pole | 19 | 30 |
Test set | Wind turbine | 21 | 26 |
Test set | Mast | 18 | 19 |
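The counts above correspond to roughly a 70/20/10 split into training, validation, and test images per category. A minimal sketch of such a split, assuming images are simply shuffled and partitioned by fraction, is given below; it is not the authors' pipeline.

```python
# Minimal sketch (an assumption, not the authors' pipeline): shuffling a list of
# annotated UAV images and partitioning it into training/validation/test subsets
# in roughly the 70/20/10 proportions implied by the table above.
import random


def split_dataset(image_paths, train_frac=0.7, val_frac=0.2, seed=42):
    """Return (train, val, test) lists of image paths."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test


if __name__ == "__main__":
    images = [f"img_{i:04d}.jpg" for i in range(200)]  # placeholder file names
    train, val, test = split_dataset(images)
    print(len(train), len(val), len(test))  # 140 40 20
```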
Category | IoU (%) | AP (%) | mAP (%, All Categories) |
---|---|---|---|
Construction crane | 69.4 | 74.8 | 70.7 |
Energy pole | 78.2 | 67.6 | |
Wind turbine | 74.6 | 65.2 | |
Mast | 64.9 | 75.3 |
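For reference, the sketch below illustrates the two quantities reported above in a generic form: intersection over union (IoU) for axis-aligned bounding boxes, and mAP taken as the arithmetic mean of the per-class AP values, which reproduces the 70.7% in the table. It is not the authors' evaluation code.

```python
# Generic illustration (not the authors' evaluation code): IoU for axis-aligned
# boxes given as (x_min, y_min, x_max, y_max), and mAP as the arithmetic mean of
# the per-class AP values reported in the table above.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0


def mean_average_precision(ap_per_class):
    """mAP as the mean of per-class AP values (fractions in [0, 1])."""
    return sum(ap_per_class.values()) / len(ap_per_class)


if __name__ == "__main__":
    print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # 0.143
    ap = {"construction_crane": 0.748, "energy_pole": 0.676,
          "wind_turbine": 0.652, "mast": 0.753}
    print(round(100 * mean_average_precision(ap), 1))  # 70.7
```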
Obstacle | Average Difference in Coordinate X (m) | Average Difference in Coordinate Y (m) | Average Difference in Height H (m) | Mean Error (m) | Standard Deviation (m) |
---|---|---|---|---|---|
Construction crane | 0.6 | 0.7 | 0.4 | 0.7 | 0.5 |
Energy pole | 0.4 | 0.3 | 0.5 | 0.5 | 0.4 |
Wind turbine | 0.3 | 0.4 | 0.5 | 0.5 | 0.6 |
Mast | 0.6 | 0.5 | 0.6 | 0.6 | 0.5 |
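One simple way to summarise such coordinate differences is sketched below: mean absolute differences per coordinate, together with the mean and standard deviation of a per-obstacle position error. The exact definition of "mean error" in the table is not given in this summary, so the sketch uses the 3D Euclidean error as one plausible choice; the data and names are illustrative only.

```python
# Minimal sketch, under stated assumptions: summarising differences between
# obstacle coordinates derived from the UAV data and reference (surveyed)
# values. "Mean error" here is the mean 3D Euclidean position error, which is
# only one plausible reading of the table; variable names are illustrative.
import statistics


def accuracy_summary(detected, reference):
    """detected/reference: equal-length lists of (X, Y, H) tuples in metres."""
    dx = [abs(d[0] - r[0]) for d, r in zip(detected, reference)]
    dy = [abs(d[1] - r[1]) for d, r in zip(detected, reference)]
    dh = [abs(d[2] - r[2]) for d, r in zip(detected, reference)]
    err = [(x * x + y * y + h * h) ** 0.5 for x, y, h in zip(dx, dy, dh)]
    return {
        "mean_dX": statistics.mean(dx),
        "mean_dY": statistics.mean(dy),
        "mean_dH": statistics.mean(dh),
        "mean_error": statistics.mean(err),
        "std_dev": statistics.stdev(err) if len(err) > 1 else 0.0,
    }


if __name__ == "__main__":
    detected = [(100.6, 200.7, 50.4), (310.4, 420.3, 75.5)]  # toy values
    reference = [(100.0, 200.0, 50.0), (310.0, 420.0, 75.0)]
    print(accuracy_summary(detected, reference))  # mean_error ≈ 0.86 m
```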