Extraction of Olive Crown Based on UAV Visible Images and the U2-Net Deep Learning Model
Abstract
1. Introduction
2. Methods
2.1. Study Area
2.2. UAV Image Acquisition and OTC Data Set Construction
2.3. Super-Resolution Reconstruction
2.4. Data Augmentation
2.5. Selection of Test Subareas and Measurement of Crown
2.6. U2-Net Deep Learning Model
2.6.1. RSU Structure
- (1) The outermost convolution layer first transforms the input feature map x (of height H, width W, and Cin channels) into an intermediate map F1(x) with Cout output channels; the role of this layer is to extract local features of the image.
- (2) The intermediate map F1(x) is then fed into a symmetric U-shaped encoder–decoder of height L, denoted U(F1(x)), which extracts multi-scale information and reduces the loss of contextual information caused by upsampling. The larger L is, the deeper the RSU becomes: more pooling operations are performed, the receptive field grows, and richer local and global contextual features are obtained.
- (3) The local features and the multi-scale features are fused by a residual connection: H_RSU(x) = U(F1(x)) + F1(x). A minimal PyTorch sketch of this structure is given after this list.
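To make the three steps concrete, below is a minimal PyTorch sketch of an RSU block of height L = 4. It is a simplified illustration of the structure described above, not the authors' implementation; see Qin et al. [76] for the official U2-Net code. The names ConvBNReLU and RSU4 and the channel arguments are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBNReLU(nn.Module):
    """3x3 convolution + batch norm + ReLU, the basic unit of an RSU."""
    def __init__(self, in_ch, out_ch, dilation=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))

class RSU4(nn.Module):
    """Residual U-block of height L = 4 (simplified sketch)."""
    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.conv_in = ConvBNReLU(in_ch, out_ch)               # step (1): F1(x), local features
        # Step (2): U-shaped encoder-decoder on F1(x)
        self.enc1 = ConvBNReLU(out_ch, mid_ch)
        self.enc2 = ConvBNReLU(mid_ch, mid_ch)
        self.enc3 = ConvBNReLU(mid_ch, mid_ch)
        self.bottom = ConvBNReLU(mid_ch, mid_ch, dilation=2)   # dilated bottom layer
        self.dec3 = ConvBNReLU(mid_ch * 2, mid_ch)             # decoder stages take skip + upsampled
        self.dec2 = ConvBNReLU(mid_ch * 2, mid_ch)
        self.dec1 = ConvBNReLU(mid_ch * 2, out_ch)
        self.pool = nn.MaxPool2d(2, stride=2, ceil_mode=True)

    def forward(self, x):
        f1 = self.conv_in(x)                                   # F1(x)
        e1 = self.enc1(f1)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottom(e3)
        d3 = self.dec3(torch.cat([b, e3], dim=1))
        d3_up = F.interpolate(d3, size=e2.shape[2:], mode='bilinear', align_corners=False)
        d2 = self.dec2(torch.cat([d3_up, e2], dim=1))
        d2_up = F.interpolate(d2, size=e1.shape[2:], mode='bilinear', align_corners=False)
        d1 = self.dec1(torch.cat([d2_up, e1], dim=1))
        return d1 + f1                                         # step (3): U(F1(x)) + F1(x)
```

In the full U2-Net, eleven such RSU blocks are arranged in an outer U-structure, and side-output saliency maps from the decoder stages are fused into the final prediction [76].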
2.6.2. Loss
2.7. Accuracy Assessment
3. Results
3.1. ESRGAN Model Performance Deviation
3.2. Evaluation of Tree Crown Number Extraction Results
3.3. Accuracy Evaluation of Different Deep Learning Models
3.4. Evaluation of Tree Crown Area Extraction Accuracy
4. Discussion
- (1) Although the proposed model proved accurate and efficient, it has limitations. Analysis of the extraction results from the four experimental subareas (A–D) shows that even when aerial photography is conducted at noon, UAV images cannot completely avoid light-induced shadows. Crowns obscured by shadow are difficult for the model to extract, so the predicted values fall below the measured values. Furthermore, because the UAV images contain only visible-light bands, their spectral information is limited, and the phenomenon of "different objects with similar spectra" is pronounced: other low vegetation often shares spectral characteristics with tree crowns and is easily misclassified as crown by the model, reducing the accuracy of crown area extraction. Moreover, some large olive trees in the study area have several subcrowns and multiple vertices, which also causes recognition errors; since the model cannot tell whether adjacent crowns belong to the same tree, one tree with several subcrowns is often over-segmented into multiple trees. Further studies should therefore explore introducing a canopy height model (CHM) with elevation information, as well as visible-band vegetation indices, to improve recognition accuracy (a sketch of one such index follows this list).
- (2) When using UAV imagery for crown extraction, factors such as the topography, vegetation type, and crown density of the study area significantly affect extraction accuracy. This study achieved good crown extraction results with the U2-Net model largely because the terrain is relatively flat, the olive trees are regularly distributed, the species is homogeneous, and the overall crown density is low, conditions similar to those in the banana plantation experiments of Kuikel et al. [45]. However, other studies have shown that extraction accuracy decreases dramatically once the terrain has an appreciable slope, and Tagle Casapia et al. [46] reported a related finding: crown extraction can be hampered by complex vegetation types, densely overlapping canopies, and small tree spacing. Future studies should therefore focus on achieving high-precision crown extraction under more complex environmental conditions.
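As a concrete illustration of the visible-band vegetation indices mentioned in point (1), the sketch below computes the excess green index (ExG = 2g − r − b on brightness-normalized channels). ExG is one standard choice for RGB-only imagery, not necessarily the index the authors would adopt, and the threshold in the usage comment is hypothetical.

```python
import numpy as np

def excess_green(rgb):
    """Compute the ExG index from an RGB image array of shape (H, W, 3).

    ExG = 2g - r - b, where r, g, b are the channel values normalized by
    the per-pixel brightness sum. Higher values indicate green vegetation.
    """
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=2, keepdims=True)
    total[total == 0] = 1.0                    # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, 2, 0)   # chromatic coordinates
    return 2.0 * g - r - b

# Hypothetical usage: screen out non-vegetation before crown extraction.
# The 0.1 threshold is illustrative and would need tuning per scene.
# image = imageio.imread("subarea_A.png")[:, :, :3]
# veg_mask = excess_green(image) > 0.1
```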
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Arampatzis, G.; Hatzigiannakis, E.; Pisinaras, V.; Kourgialas, N.; Psarras, G.; Kinigopoulou, V.; Panagopoulos, A.; Koubouris, G. Soil water content and olive tree yield responses to soil management, irrigation, and precipitation in a hilly Mediterranean area. J. Water Clim. Chang. 2018, 9, 672–678.
- Montealegre, C.; Esteve, C.; Garcia, M.C.; Garcia-Ruiz, C.; Marina, M.L. Proteins in olive fruit and oil. Crit. Rev. Food Sci. Nutr. 2014, 54, 611–624.
- Carletto, C.; Jolliffe, D.; Banerjee, R. From tragedy to renaissance: Improving agricultural data for better policies. J. Dev. Stud. 2015, 51, 133–148.
- Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS-J. Photogramm. Remote Sens. 2017, 126, 225–244.
- Bokalo, M.; Stadt, K.J.; Comeau, P.G.; Titus, S.J. The validation of the Mixedwood Growth Model (MGM) for use in forest management decision making. Forests 2013, 4, 1–27.
- Li, Y.; Wang, W.; Zeng, W.S.; Wang, J.J.; Meng, J.H. Development of crown ratio and height to crown base models for Masson pine in southern China. Forests 2020, 11, 1216.
- Narvaez, F.Y.; Reina, G.; Torres-Torriti, M.; Kantor, G.; Cheein, F.A. A survey of ranging and imaging techniques for precision agriculture phenotyping. IEEE-ASME Trans. Mechatron. 2017, 22, 2428–2439.
- Gongal, A.; Amatya, S.; Karkee, M.; Zhang, Q.; Lewis, K. Sensors and systems for fruit detection and localization: A review. Comput. Electron. Agric. 2015, 116, 8–19.
- Bargoti, S.; Underwood, J.P. Image segmentation for fruit detection and yield estimation in apple orchards. J. Field Robot. 2017, 34, 1039–1060.
- Aubry-Kientz, M.; Laybros, A.; Weinstein, B.; Ball, J.G.C.; Jackson, T.; Coomes, D.; Vincent, G. Multisensor data fusion for improved segmentation of individual tree crowns in dense tropical forests. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3927–3936.
- Bagheri, R.; Jouibary, S.S.; Erfanifard, Y. Canopy based aboveground biomass and carbon stock estimation of wild pistachio trees in arid woodlands using GeoEye-1 images. J. Agric. Sci. Technol. 2021, 23, 107–123.
- Cho, M.A.; Malahlela, O.; Ramoelo, A. Assessing the utility of WorldView-2 imagery for tree species mapping in South African subtropical humid forest and the conservation implications: Dukuduku forest patch as case study. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 349–357.
- Ferreira, M.P.; Wagner, F.H.; Aragao, L.; Shimabukuro, Y.E.; de Souza, C.R. Tree species classification in tropical forests using visible to shortwave infrared WorldView-3 images and texture analysis. ISPRS-J. Photogramm. Remote Sens. 2019, 149, 119–131.
- Wu, S.; Wang, J.; Yan, Z.; Song, G.; Chen, Y.; Ma, Q.; Deng, M.; Wu, Y.; Zhao, Y.; Guo, Z.; et al. Monitoring tree-crown scale autumn leaf phenology in a temperate forest with an integration of PlanetScope and drone remote sensing observations. ISPRS-J. Photogramm. Remote Sens. 2021, 171, 36–48.
- Yan, S.; Jing, L.; Wang, H. A new individual tree species recognition method based on a convolutional neural network and high-spatial resolution remote sensing imagery. Remote Sens. 2021, 13, 479.
- Gong, C.; Li, L.; Hu, Y.; Wang, X.; He, Z.; Wang, X. Urban river water quality monitoring with unmanned plane hyperspectral remote sensing data. In Proceedings of the 7th Symposium on Novel Photoelectronic Detection Technology and Applications, Kunming, China, 5–7 November 2020.
- Gumma, M.K.; Kadiyala, M.D.M.; Panjala, P.; Ray, S.S.; Akuraju, V.R.; Dubey, S.; Smith, A.P.; Das, R.; Whitbread, A.M. Assimilation of remote sensing data into crop growth model for yield estimation: A case study from India. J. Indian Soc. Remote Sens. 2021.
- Gale, M.G.; Cary, G.J.; Van Dijk, A.I.J.M.; Yebra, M. Forest fire fuel through the lens of remote sensing: Review of approaches, challenges and future directions in the remote sensing of biotic determinants of fire behaviour. Remote Sens. Environ. 2021, 255, 112282.
- Guimaraes, N.; Padua, L.; Marques, P.; Silva, N.; Peres, E.; Sousa, J.J. Forestry remote sensing from unmanned aerial vehicles: A review focusing on the data, processing and potentialities. Remote Sens. 2020, 12, 1046.
- He, H.Q.; Yan, Y.; Chen, T.; Cheng, P.G. Tree height estimation of forest plantation in mountainous terrain from bare-earth points using a DoG-coupled radial basis function neural network. Remote Sens. 2019, 11, 1271.
- Jurado, J.M.; Ortega, L.; Cubillas, J.J.; Feito, F.R. Multispectral mapping on 3D models and multi-temporal monitoring for individual characterization of olive trees. Remote Sens. 2020, 12, 1106.
- Torresan, C.; Berton, A.; Carotenuto, F.; Di Gennaro, S.F.; Gioli, B.; Matese, A.; Miglietta, F.; Vagnoli, C.; Zaldei, A.; Wallace, L. Forestry applications of UAVs in Europe: A review. Int. J. Remote Sens. 2017, 38, 2427–2447.
- Egli, S.; Hoepke, M. CNN-based tree species classification using high resolution RGB image data from automated UAV observations. Remote Sens. 2020, 12, 3892.
- Hao, Z.; Lin, L.; Post, C.J.; Mikhailova, E.A.; Li, M.; Yu, K.; Liu, J.; Chen, Y. Automated tree-crown and height detection in a young forest plantation using mask region-based convolutional neural network (Mask R-CNN). ISPRS-J. Photogramm. Remote Sens. 2021, 178, 112–123.
- Onishi, M.; Ise, T. Explainable identification and mapping of trees using UAV RGB image and deep learning. Sci. Rep. 2021, 11, 903.
- Wu, J.; Yang, G.; Yang, H.; Zhu, Y.; Li, Z.; Lei, L.; Zhao, C. Extracting apple tree crown information from remote imagery using deep learning. Comput. Electron. Agric. 2020, 174, 105504.
- Dong, S.; Wang, P.; Abbas, K. A survey on deep learning and its applications. Comput. Sci. Rev. 2021, 40, 100379.
- Xie, S.; Yu, Z.; Lv, Z. Multi-disease prediction based on deep learning: A survey. CMES-Comput. Model. Eng. Sci. 2021, 128, 489–522.
- Osco, L.P.; de Arruda, M.d.S.; Marcato Junior, J.; da Silva, N.B.; Marques Ramos, A.P.; Saito Moryia, E.A.; Imai, N.N.; Pereira, D.R.; Creste, J.E.; Matsubara, E.T.; et al. A convolutional neural network approach for counting and geolocating citrus-trees in UAV multispectral imagery. ISPRS-J. Photogramm. Remote Sens. 2020, 160, 97–106.
- Ferreira, M.P.; Almeida, D.R.A.d.; Papa, D.d.A.; Minervino, J.B.S.; Veras, H.F.P.; Formighieri, A.; Santos, C.A.N.; Ferreira, M.A.D.; Figueiredo, E.O.; Ferreira, E.J.L. Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. For. Ecol. Manag. 2020, 475, 118397.
- Lou, X.W.; Huang, Y.X.; Fang, L.M.; Huang, S.Q.; Gao, H.L.; Yang, L.B.; Weng, Y.H.; Hung, I.K.U. Measuring loblolly pine crowns with drone imagery through deep learning. J. For. Res. 2022, 33, 227–238.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
- Xia, G.-S.; Yang, W.; Delon, J.; Gousseau, Y.; Sun, H.; Maitre, H. Structural high-resolution satellite image indexing. In Proceedings of the ISPRS Technical Commission VII Symposium—100 Years ISPRS—Advancing Remote Sensing Science, Vienna, Austria, 5–7 July 2010; pp. 298–303.
- Xia, G.-S.; Hu, J.; Hu, F.; Shi, B.; Bai, X.; Zhong, Y.; Zhang, L.; Lu, X. AID: A benchmark data set for performance evaluation of aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3965–3981.
- Cheng, G.; Han, J.W.; Zhou, P.C.; Guo, L. Multi-class geospatial object detection and geographic image classification based on collection of part detectors. ISPRS-J. Photogramm. Remote Sens. 2014, 98, 119–132.
- Qin, X.; Zhang, Z.; Huang, C.; Dehghan, M.; Zaiane, O.R.; Jagersand, M. U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognit. 2020, 106, 107404.
- Wang, C.; Zhu, R.; Bai, Y.; Zhang, P.; Fan, H. Single-frame super-resolution for high resolution optical remote-sensing data products. Int. J. Remote Sens. 2021, 42, 8099–8123.
- Xiong, Y.; Guo, S.; Chen, J.; Deng, X.; Sun, L.; Zheng, X.; Xu, W. Improved SRGAN for remote sensing image super-resolution across locations and sensors. Remote Sens. 2020, 12, 1263.
- Su, D.; Kong, H.; Qiao, Y.; Sukkarieh, S. Data augmentation for deep learning based semantic segmentation and crop-weed classification in agricultural robotics. Comput. Electron. Agric. 2021, 190, 106418.
- Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403.
- Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Kuikel, S.; Upadhyay, B.; Aryal, D.; Bista, S.; Awasthi, B.; Shrestha, S. Individual banana tree crown delineation using unmanned aerial vehicle (UAV) images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, 43, 581–585.
- Tagle Casapia, X.; Falen, L.; Bartholomeus, H.; Cardenas, R.; Flores, G.; Herold, M.; Honorio Coronado, E.N.; Baker, T.R. Identifying and quantifying the abundance of economically important palms in tropical moist forest using UAV imagery. Remote Sens. 2020, 12, 9.
| Items | Parameters and Versions |
|---|---|
| CPU | Intel® Core™ i7-8700 @ 3.20 GHz |
| RAM | 32 GB DDR4 2666 MHz |
| SSD | HS-SSD-C2000Pro 2 TB |
| GPU | NVIDIA GeForce RTX 3060 (12 GB) |
| OS | Windows 10 Professional (DirectX 12) |
| Environment | PyTorch 1.9.0 + Python 3.8 |
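Given the PyTorch 1.9.0 + Python 3.8 environment listed above, a quick sanity check that the GPU is visible to the framework might look like the following (a generic check, not part of the paper's method):

```python
import torch

# Confirm the framework version and GPU from the environment table are usable.
print(f"PyTorch {torch.__version__}")                         # expect 1.9.0
if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")    # expect GeForce RTX 3060
else:
    print("CUDA not available; training would fall back to CPU")
```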
| Subarea | Number of Trees | RMSE | MRE/% |
|---|---|---|---|
| A | 25 | 3.95 | 16.37 |
| B | 38 | 3.85 | 12.27 |
| C | 34 | 6.72 | 16.38 |
| D | 20 | 3.01 | 11.87 |
| Mean | 29 | 4.78 | 14.27 |
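For reference, below is a minimal sketch of how the RMSE and MRE values in the table above could be computed, assuming the conventional definitions (root-mean-square error between predicted and field-measured values, and mean relative error as a percentage of the measured values); the paper's own formulas in Section 2.7 may differ in detail.

```python
import numpy as np

def rmse(measured, predicted):
    """Root-mean-square error between measured and predicted values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

def mre_percent(measured, predicted):
    """Mean relative error as a percentage of the measured values."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - measured) / measured) * 100.0)

# Hypothetical usage with per-tree crown areas for one subarea:
# measured = [12.4, 9.8, 15.1]; predicted = [11.9, 10.6, 13.8]
# print(rmse(measured, predicted), mre_percent(measured, predicted))
```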
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).