Detection of Windthrown Tree Stems on UAV-Orthomosaics Using U-Net Convolutional Networks
Abstract
1. Introduction
Related Research
2. Materials and Methods
2.1. Investigation Area
2.2. Workflow
2.3. UAV-Orthophotos and Extraction of the Raw Input Data
2.4. Data Preparation
- In training stage 1, datasets were used to learn the general spectral and morphological features of single windthrown stems. To this end, a data augmentation strategy was applied that creates slightly different copies of each stem and combines them with a random background; a minimal sketch of this strategy is given after this list. The resulting datasets are hereinafter called generic training datasets.
- In training stage 2, training samples showing a particular windthrow situation were used to learn the arrangement of the stems and their dimensions; this is hereinafter called the specific training dataset. No data augmentation was applied when creating this dataset.
- For the evaluation, independently collected test samples were used, hereafter called the test dataset.
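The copy-and-combine strategy used for the generic datasets can be illustrated with a short, hypothetical sketch (the function name and the specific transformations below are not taken from the paper). It assumes square RGB patches, a binary stem mask, and stem-free background patches of identical size, and it uses flips, 90° rotations, and a slight brightness shift as simple stand-ins for the morphological and spectral manipulations described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment_stem(stem_rgb, stem_mask, backgrounds, n_copies=10):
    """Create n_copies of a single-stem sample by randomly transforming the
    stem crop and pasting its pixels onto a randomly chosen stem-free
    background patch. All patches are assumed square and of identical size."""
    samples = []
    for _ in range(n_copies):
        img, msk = stem_rgb.copy(), stem_mask.copy()
        # random horizontal and vertical flips (morphological variation)
        if rng.random() < 0.5:
            img, msk = np.flip(img, axis=1), np.flip(msk, axis=1)
        if rng.random() < 0.5:
            img, msk = np.flip(img, axis=0), np.flip(msk, axis=0)
        # random rotation in 90-degree steps (shape preserved for square patches)
        k = int(rng.integers(0, 4))
        img, msk = np.rot90(img, k), np.rot90(msk, k)
        # slight random brightness change (spectral variation)
        img = np.clip(img.astype(np.float32) * rng.uniform(0.9, 1.1), 0, 255).astype(np.uint8)
        # combine with a random background: keep background pixels, paste stem pixels
        out = backgrounds[int(rng.integers(0, len(backgrounds)))].copy()
        out[msk > 0] = img[msk > 0]
        samples.append((out, msk.astype(np.uint8)))
    return samples
```

With n_copies set to 10, 50, or 100 per stem, this reproduces the sample sizes of GenDS10, GenDS50, and GenDS100 listed in the dataset overview below (e.g., 4540 = 454 stems × 10 copies for GenDS10).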
2.5. Network Architecture of the Adapted U-Net
2.6. Training Strategy
- The models were pre-trained with extensive generic training datasets (GenDS10, GenDS50, and GenDS100) to learn low-level features of single stems.
- Afterwards, the pre-trained models were trained with the SpecDS to learn high-level features, such as the pattern of several windthrown stems arranged by the storm, and to fine-tune the weights to the particular appearance of the stems, since the SpecDS was created without any data augmentation that morphologically or spectrally manipulates the data. A minimal sketch of this two-stage procedure follows this list.
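The two training stages can be read as one generic training loop that is first run on a generic dataset and then continued on the SpecDS. The sketch below is only an illustration: PyTorch, the Adam optimizer, the binary cross-entropy loss, the batch size, and the learning rates are assumptions, while the epoch counts are taken from the training tables below; `unet`, `gen_ds`, and `spec_ds` are placeholders for the adapted U-Net model and the datasets described above.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs, lr):
    """Generic training loop for binary stem/background segmentation."""
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks.float())
            loss.backward()
            optimizer.step()
    return model

# Stage 1: pre-train low-level stem features on a generic dataset
# (epoch count taken from the training-stage-1 table, e.g., 22 for GenDS100).
unet = train(unet, gen_ds, epochs=22, lr=1e-3)

# Stage 2: fine-tune on the specific dataset (SpecDS) to learn the arrangement
# of the stems; the smaller learning rate is an assumption, the 27 epochs are
# taken from the training-stage-2 table.
unet = train(unet, spec_ds, epochs=27, lr=1e-4)
```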
2.7. Evaluation
2.8. Inference
2.9. Hardware and Software Environment
3. Results
3.1. Training Stage 1
3.2. Training Stage 2
3.3. Evaluation
3.4. Inference of Orthomosaics
4. Discussion
4.1. Training Samples
4.2. Augmentation Strategy
4.3. Training Strategy
4.4. Network Architecture
4.5. Calculative Costs
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Training Stage 1
References
| Dataset | Sample Size | Type | Number of Stems | Data Augmentation | Utilization |
|---|---|---|---|---|---|
| GenDS10 | 4540 | generic training dataset | single | 10 per stem | training stage 1 |
| GenDS50 | 22,700 | generic training dataset | single | 50 per stem | training stage 1 |
| GenDS100 | 45,400 | generic training dataset | single | 100 per stem | training stage 1 |
| SpecDS | 454 | specific training dataset | multiple | no | training stage 2 |
| TestDS | 106 | test dataset | multiple | no | evaluation |
| Model | Dataset | Sample Size | Epochs | Time per Epoch | Training Time |
|---|---|---|---|---|---|
| S1Mod10 | GenDS10 | 4540 | 24 | 0:02:31 h | 1:00:43 h |
| S1Mod50 | GenDS50 | 9080 | 20 | 0:12:12 h | 4:03:52 h |
| S1Mod100 | GenDS100 | 45,400 | 22 | 0:19:39 h | 7:12:23 h |
| Model | Dataset | Sample Size | Epochs | Time per Epoch | Training Time |
|---|---|---|---|---|---|
| Baseline | SpecDS | 454 | 31 | 0:00:17 h | 0:08:59 h |
| S1Mod10 | SpecDS | 454 | 27 | 0:00:15 h | 0:06:54 h |
| S1Mod50 | SpecDS | 454 | 32 | 0:00:16 h | 0:08:24 h |
| S1Mod100 | SpecDS | 454 | 27 | 0:00:15 h | 0:06:55 h |
| Model | Dataset | Sample Size | F1-Score | Precision | Recall |
|---|---|---|---|---|---|
| Baseline | TestDS | 106 | 72.6% | 71.8% | 74.2% |
| S1Mod10 | TestDS | 106 | 73.9% | 73.3% | 74.5% |
| S1Mod50 | TestDS | 106 | 74.3% | 73.9% | 74.7% |
| S1Mod100 | TestDS | 106 | 75.6% | 74.5% | 76.7% |
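The F1-score, precision, and recall reported above are the standard metrics derived from true positives, false positives, and false negatives; a minimal pixel-wise sketch (assuming a simple binary comparison of predicted and reference masks, which may differ from the authors' exact evaluation protocol) is shown below.

```python
import numpy as np

def precision_recall_f1(pred_mask, ref_mask):
    """Pixel-wise precision, recall, and F1-score for binary stem masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    tp = np.logical_and(pred, ref).sum()    # stem pixels correctly detected
    fp = np.logical_and(pred, ~ref).sum()   # background predicted as stem
    fn = np.logical_and(~pred, ref).sum()   # stem pixels missed
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0
    return precision, recall, f1
```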
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).