Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System
Abstract
1. Introduction
- An automated approach to postfire mapping, using DL algorithms and UAV images;
- A dual patch-level segmentation network that achieves higher accuracy and more precise damage information than a single DL image-segmentation model.
2. Study Area
3. Proposed Approach
3.1. Unet and Unet++ for Image Segmentation
3.2. Loss Functions and Evaluation Metrics
3.2.1. Loss Functions
3.2.2. Evaluation Metrics
3.3. Proposed Approach
- First, the images are collected using a drone, cropped to 912 × 912 × 3 pixels, and labeled with the LabelMe image-annotation tool [26]. UNet++ is used for the patch-level 1 network;
- The patch-level 2 network serves as a refinement model. Based on the patch-level 1 prediction results, it re-predicts only the areas containing burnt pixels, using patch-level inputs of 128 × 128 × 3 pixels. Its output is taken as the final prediction;
- Finally, the final prediction mask is resized, converted to RGB, merged with the original image information, and uploaded to the DroneDeploy platform for orthophoto generation and further processing.
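The two-stage inference described above can be sketched as follows. This is a minimal illustration, not the authors' released code: the patch sizes (912 and 128) come from the paper, while the function names and the stub model interfaces (`net1`, `net2` as callables returning binary masks) are assumptions for illustration.

```python
import numpy as np

PATCH1 = 912   # patch-level 1 input size (from the paper)
PATCH2 = 128   # patch-level 2 refinement input size (from the paper)

def to_patches(img, size):
    """Split an H x W x 3 image into non-overlapping size x size patches,
    zero-padding the edges so every patch is full-sized."""
    h, w, c = img.shape
    img = np.pad(img, ((0, -h % size), (0, -w % size), (0, 0)))
    H, W = img.shape[0] // size, img.shape[1] // size
    return img.reshape(H, size, W, size, c).swapaxes(1, 2).reshape(-1, size, size, c)

def dual_patch_predict(img, net1, net2):
    """Two-stage inference: net1 screens each patch for burnt pixels;
    net2 re-predicts only the flagged patches, giving the final mask."""
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    size = PATCH2
    h, w, _ = img.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patch = img[y:y + size, x:x + size]
            if net1(patch).any():                         # stage 1: coarse screen
                mask[y:y + size, x:x + size] = net2(patch)  # stage 2: refinement
    return mask
```

In practice `net1` and `net2` would wrap the trained UNet++ and refinement models; here they are placeholders so the control flow of the dual patch-level scheme is visible.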
4. Validation Results
5. Conclusions
- The dual patch-level models outperformed the single-image-segmentation models, with Dice coefficients of 0.6924 and 0.7639 when testing on locations 2 and 1, respectively;
- Focal loss (FL) proved effective in optimizing the model and improving its performance on the test set;
- A step-by-step pipeline for pre- and postprocessing UAV images is introduced and made publicly available.
- The dual patch-level models need to be trained on additional locations with different weather conditions to improve their performance;
- The approach currently runs locally. It needs to be converted into an online platform to increase its practicality and reduce its processing time.
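The conclusions above lean on two quantities: the Dice coefficient used for evaluation and the focal loss (FL) of Lin et al. [25] used for training. A minimal NumPy sketch of both, under the standard binary-segmentation definitions (the function names and default `gamma`/`alpha` values are conventional, not taken from the paper's code):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) on flat binary masks."""
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_loss(y_true, p, gamma=2.0, alpha=0.25):
    """Binary focal loss: down-weights easy pixels via (1 - pt)^gamma so
    training focuses on hard, misclassified pixels."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    pt = np.where(y_true == 1, p, 1 - p)          # prob. of the true class
    a = np.where(y_true == 1, alpha, 1 - alpha)   # class-balance weight
    return float(np.mean(-a * (1 - pt) ** gamma * np.log(pt)))
```

With `gamma = 0` and `alpha = 0.5` the focal loss reduces (up to a constant factor) to binary cross-entropy, which is why the tables below can compare BCE and FL as drop-in alternatives.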
Author Contributions
Funding
Conflicts of Interest
References
- Bowman, D.M.J.S.; Balch, J.K.; Artaxo, P.; Bond, W.J.; Carlson, J.M.; Cochrane, M.A.; D’Antonio, C.M.; DeFries, R.S.; Doyle, J.C.; Harrison, S.P.; et al. Fire in the Earth System. Science 2009, 324, 481–484. [Google Scholar] [CrossRef] [PubMed]
- Forest fire damage status: Detailed indicator screen. Available online: https://www.index.go.kr/potal/stts/idxMain/selectPoSttsIdxMainPrint.do?idx_cd=1309&board_cd=INDX_001 (accessed on 18 December 2020).
- Leblon, B.; Bourgeau-Chavez, L.; San-Miguel-Ayanz, J. Use of remote sensing in wildfire management. In Sustainable Development-Authoritative and Leading Edge Content for Environmental Management; IntechOpen: Rijeka, Croatia, 2012; pp. 55–81. [Google Scholar]
- Di Biase, V.; Laneve, G. Geostationary Sensor Based Forest Fire Detection and Monitoring: An Improved Version of the SFIDE Algorithm. Remote Sens. 2018, 10, 741. [Google Scholar] [CrossRef] [Green Version]
- Jang, E.; Kang, Y.; Im, J.; Lee, D.W.; Yoon, J.; Kim, S.K. Detection and Monitoring of Forest Fires Using Himawari-8 Geostationary Satellite Data in South Korea. Remote Sens. 2019, 11, 271. [Google Scholar] [CrossRef] [Green Version]
- Khodaee, M.; Hwang, T.; Kim, J.; Norman, S.P.; Robeson, S.M.; Song, C. Monitoring Forest Infestation and Fire Disturbance in the Southern Appalachian Using a Time Series Analysis of Landsat Imagery. Remote Sens. 2020, 12, 2412. [Google Scholar] [CrossRef]
- Fraser, R.H.; Van der Sluijs, J.; Hall, R.J. Calibrating Satellite-Based Indices of Burn Severity from UAV-Derived Metrics of a Burned Boreal Forest in NWT, Canada. Remote Sens. 2017, 9, 279. [Google Scholar] [CrossRef] [Green Version]
- Fernández-Guisuraga, J.; Sanz-Ablanedo, E.; Suárez-Seoane, S.; Calvo, L. Using Unmanned Aerial Vehicles in Postfire Vegetation Survey Campaigns through Large and Heterogeneous Areas: Opportunities and Challenges. Sensors 2018, 18, 586. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Shin, J.I.; Seo, W.W.; Kim, T.; Park, J.; Woo, C.S. Using UAV Multispectral Images for Classification of Forest Burn Severity—A Case Study of the 2019 Gangneung Forest Fire. Forests 2019, 10, 1025. [Google Scholar] [CrossRef] [Green Version]
- Carvajal-Ramírez, F.; Marques da Silva, J.R.; Agüera-Vega, F.; Martínez-Carricondo, P.; Serrano, J.; Moral, F.J. Evaluation of Fire Severity Indices Based on Pre- and Post-Fire Multispectral Imagery Sensed from UAV. Remote Sens. 2019, 11, 993. [Google Scholar] [CrossRef] [Green Version]
- Samiappan, S.; Hathcock, L.; Turnage, G.; McCraine, C.; Pitchford, J.; Moorhead, R. Remote Sensing of Wildfire Using a Small Unmanned Aerial System: Post-Fire Mapping, Vegetation Recovery and Damage Analysis in Grand Bay, Mississippi/Alabama, USA. Drones 2019, 3, 43. [Google Scholar] [CrossRef] [Green Version]
- Pérez-Rodríguez, L.A.; Quintano, C.; Marcos, E.; Suarez-Seoane, S.; Calvo, L.; Fernández-Manso, A. Evaluation of Prescribed Fires from Unmanned Aerial Vehicles (UAVs) Imagery and Machine Learning Algorithms. Remote Sens. 2020, 12, 1295. [Google Scholar] [CrossRef] [Green Version]
- Park, M.; Tran, D.Q.; Jung, D.; Park, S. Wildfire-Detection Method Using DenseNet and CycleGAN Data Augmentation-Based Remote Camera Imagery. Remote Sens. 2020, 12, 3715. [Google Scholar] [CrossRef]
- Jung, D.; Tran Tuan, V.; Dai Tran, Q.; Park, M.; Park, S. Conceptual Framework of an Intelligent Decision Support System for Smart City Disaster Management. Appl. Sci. 2020, 10, 666. [Google Scholar] [CrossRef] [Green Version]
- Xiang, T.; Xia, G.; Zhang, L. Mini-Unmanned Aerial Vehicle-Based Remote Sensing: Techniques, applications, and prospects. IEEE Geosci. Remote Sens. Mag. 2019, 7, 29–63. [Google Scholar] [CrossRef] [Green Version]
- Matese, A.; Toscano, P.; Di Gennaro, S.F.; Genesio, L.; Vaccari, F.P.; Primicerio, J.; Belli, C.; Zaldei, A.; Bianconi, R.; Gioli, B. Intercomparison of UAV, aircraft and satellite remote sensing platforms for precision viticulture. Remote Sens. 2015, 7, 2971–2990. [Google Scholar] [CrossRef] [Green Version]
- Bhatnagar, S.; Gill, L.; Ghosh, B. Drone Image Segmentation Using Machine and Deep Learning for Mapping Raised Bog Vegetation Communities. Remote Sens. 2020, 12, 2602. [Google Scholar] [CrossRef]
- Yang, M.D.; Tseng, H.H.; Hsu, Y.C.; Tsai, H.P. Semantic Segmentation Using Deep Learning with Vegetation Indices for Rice Lodging Identification in Multi-date UAV Visible Images. Remote Sens. 2020, 12, 633. [Google Scholar] [CrossRef] [Green Version]
- Drone & UAV Mapping Platform | DroneDeploy. Available online: https://www.dronedeploy.com/ (accessed on 18 December 2020).
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
- Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: New York, NY, USA, 2018; pp. 3–11. [Google Scholar]
- Jadon, S.; Leary, O.P.; Pan, I.; Harder, T.J.; Wright, D.W.; Merck, L.H.; Merck, D.L. A comparative study of 2D image segmentation algorithms for traumatic brain lesions using CT data from the ProTECTIII multicenter clinical trial. In Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications; International Society for Optics and Photonics: Bellingham, WA, USA, 2020; Volume 11318, p. 113180Q. [Google Scholar]
- Yan, Z.; Han, X.; Wang, C.; Qiu, Y.; Xiong, Z.; Cui, S. Learning mutually local-global u-nets for high-resolution retinal lesion segmentation in fundus images. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 597–600. [Google Scholar]
- Pham, Q.; Ahn, S.; Song, S.J.; Shin, J. Automatic Drusen Segmentation for Age-Related Macular Degeneration in Fundus Images Using Deep Learning. Electronics 2020, 9, 1617. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988. [Google Scholar]
- Russell, B.C.; Torralba, A.; Murphy, K.P.; Freeman, W.T. LabelMe: A database and web-based tool for image annotation. Int. J. Comput. Vis. 2008, 77, 157–173. [Google Scholar] [CrossRef]
- Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M. Tensorflow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
Location | Patch-Level 1 Network Only | Patch-Level 2 Network Only | Proposed Method |
---|---|---|---|
1 | 1032 | 16,512 | 7660 |
2 | 1056 | 16,896 | 4140 |
Evaluation Metrics | Patch-Level 1 Network Only | Patch-Level 2 Network Only | Proposed Method |
---|---|---|---|
Dice coefficient | 0.1712 | 0.5697 | 0.6924 |
Sensitivity | 0.1750 | 0.2643 | 0.2340 |
Specificity | 0.9274 | 0.8384 | 0.9177 |
Evaluation Metrics | Patch-Level 1 Network Only | Patch-Level 2 Network Only | Proposed Method |
---|---|---|---|
Dice coefficient | 0.3346 | 0.5590 | 0.7639 |
Sensitivity | 0.2313 | 0.2038 | 0.3808 |
Specificity | 0.8571 | 0.8545 | 0.8311 |
Evaluation Metrics | Net 1 BCE-Net 2 BCE | Net 1 BCE-Net 2 FL | Net 1 FL-Net 2 BCE | Net 1 FL-Net 2 FL |
---|---|---|---|---|
Dice coefficient | 0.6568 | 0.5658 | 0.6924 | 0.6170 |
Sensitivity | 0.2348 | 0.2589 | 0.2340 | 0.2574 |
Specificity | 0.9109 | 0.8770 | 0.9177 | 0.8901 |
Evaluation Metrics | Net 1 BCE-Net 2 BCE | Net 1 BCE-Net 2 FL | Net 1 FL-Net 2 BCE | Net 1 FL-Net 2 FL |
---|---|---|---|---|
Dice coefficient | 0.7381 | 0.7043 | 0.7639 | 0.7322 |
Sensitivity | 0.3815 | 0.2994 | 0.3808 | 0.3010 |
Specificity | 0.8244 | 0.8405 | 0.8311 | 0.8436 |
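The tables above report sensitivity (recall on burnt pixels) and specificity (true-negative rate on unburnt pixels) alongside the Dice coefficient. A minimal sketch of how these two rates are computed per pixel from binary masks (the function name is illustrative, not from the paper's code):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Pixel-wise sensitivity = TP/(TP+FN) and specificity = TN/(TN+FP)
    from two binary masks of the same shape."""
    y_true = np.asarray(y_true).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    tp = np.sum(y_true & y_pred)    # burnt pixels correctly predicted
    fn = np.sum(y_true & ~y_pred)   # burnt pixels missed
    tn = np.sum(~y_true & ~y_pred)  # unburnt pixels correctly predicted
    fp = np.sum(~y_true & y_pred)   # unburnt pixels flagged as burnt
    return tp / (tp + fn), tn / (tn + fp)
```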
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tran, D.Q.; Park, M.; Jung, D.; Park, S. Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System. Remote Sens. 2020, 12, 4169. https://doi.org/10.3390/rs12244169
Tran DQ, Park M, Jung D, Park S. Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System. Remote Sensing. 2020; 12(24):4169. https://doi.org/10.3390/rs12244169
Chicago/Turabian Style: Tran, Dai Quoc, Minsoo Park, Daekyo Jung, and Seunghee Park. 2020. "Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System" Remote Sensing 12, no. 24: 4169. https://doi.org/10.3390/rs12244169
APA Style: Tran, D. Q., Park, M., Jung, D., & Park, S. (2020). Damage-Map Estimation Using UAV Images and Deep Learning Algorithms for Disaster Management System. Remote Sensing, 12(24), 4169. https://doi.org/10.3390/rs12244169