Article

Study on Individual Tree Segmentation of Different Tree Species Using Different Segmentation Algorithms Based on 3D UAV Data

1
College of Geomatics and Geoinformation, Guilin University of Technology, No. 12 Jian’gan Road, Guilin 541006, China
2
Guangxi Key Laboratory of Spatial Information and Geomatics, Guilin University of Technology, No. 12 Jian’gan Road, Guilin 541004, China
*
Author to whom correspondence should be addressed.
Forests 2023, 14(7), 1327; https://doi.org/10.3390/f14071327
Submission received: 16 May 2023 / Revised: 22 June 2023 / Accepted: 26 June 2023 / Published: 28 June 2023
(This article belongs to the Special Issue Application of Close-Range Sensing in Forestry)

Abstract
Individual tree structural parameters, such as tree height and biomass, are the foundation for monitoring dynamic changes in forest resources, and their extraction is closely related to individual tree crown segmentation. Although three-dimensional (3D) data have been successfully used for individual tree crown segmentation, the results are influenced by various factors, such as (i) the source of the 3D data, (ii) the segmentation algorithm, and (iii) the tree species. To further quantify the effect of these factors on individual tree crown segmentation, light detection and ranging (LiDAR) data and image-derived points were obtained by unmanned aerial vehicles (UAVs). Three different segmentation algorithms (PointNet++, Li2012, and layer-stacking segmentation (LSS)) were used to segment the individual tree crowns of four different tree species. The results show that, between the two types of 3D data, the crown segmentation accuracy of LiDAR data was generally better than that of image-derived 3D data, with a maximum difference of 0.13 in F values. Among the three segmentation algorithms, the PointNet++ algorithm achieved the best individual tree crown segmentation, with an F value of 0.91, whereas the LSS algorithm yielded the worst result, with an F value of 0.86. Among the four tested tree species, the individual tree crown segmentation of Liriodendron chinense was the best, followed by Magnolia grandiflora and Osmanthus fragrans, whereas that of Ficus microcarpa was the worst. The crown segmentation of individual Liriodendron chinense and Magnolia grandiflora trees was similar for LiDAR data and image-derived 3D data, whereas for Osmanthus fragrans and Ficus microcarpa, segmentation based on LiDAR data was superior to that based on image-derived 3D data.
These results demonstrate that the source of 3D data, the segmentation algorithm, and the tree species all have an impact on the crown segmentation of individual trees. The effect of the tree species is the greatest, followed by that of the segmentation algorithm and then the 3D data source. Consequently, in future research on individual tree crown segmentation, 3D data acquisition methods should be selected based on the tree species, and deep learning segmentation algorithms should be adopted to improve the crown segmentation of individual trees.

1. Introduction

Forest ecosystems, which are a key part of terrestrial ecosystems, not only protect biodiversity [1], prevent soil erosion [2], and maintain the global carbon balance [3,4] but also supply renewable resources for human production and life [5]. Trees are the fundamental component of forest ecosystems, and the accurate acquisition of individual tree structural parameters is the foundation for forest resource inventory and dynamic change monitoring [6,7]. However, the traditional approach to determining individual tree structural parameters usually involves field measurements. In addition to being time-consuming, laborious, and inefficient, this approach makes it difficult to gather forest structure data in places with complicated topography [8]. The application of high-resolution images and airborne light detection and ranging (LiDAR) data has successfully addressed this problem, providing a new technical means for the rapid and accurate acquisition of individual tree structural parameters in forests. However, the extraction of individual tree structural parameters is limited to some extent by the difficulty of directly obtaining three-dimensional (3D) information from high-resolution images. Compared to high-resolution images, airborne LiDAR can obtain not only horizontal structural information about forests but also vertical structural information [9]. Therefore, LiDAR has been widely used for the extraction of forest structural parameters [10,11]. However, the acquisition cost of airborne LiDAR data is relatively high. A high flight altitude is generally used to reduce the cost of data acquisition, resulting in a relatively low point density of LiDAR data. Low-density LiDAR data result in the loss of structural information about the forest to some extent, further affecting the accuracy of individual tree structural parameters [12].
Unmanned aerial vehicles (UAVs) offer the advantages of simple operation, flexibility, and low cost, making it possible to obtain low-cost and high-density 3D data [13,14,15]. At present, there are two ways to obtain high-density 3D data using UAVs. One involves using LiDAR sensors to directly obtain 3D data by emitting laser pulses. Because laser pulses can penetrate the canopy and reach the ground, the obtained 3D data include ground points, points on the surface of the canopy, and points inside the canopy. The other method involves using RGB cameras to capture high-resolution stereo images; 3D data are obtained from the stereo images using the Structure from Motion (SFM) algorithm. Because stereo images can only be used to obtain information about the surface of objects, the 3D data derived from images contain many points on the canopy surface and few ground points. Both techniques have been proven capable of obtaining high-density 3D data [16]. However, owing to differences in the principles of data acquisition, the obtained 3D data differ, and the segmentation results of individual trees may differ depending on the 3D data acquisition technique.
In addition to the way 3D data are obtained, the individual tree segmentation algorithm and the tree species are also important factors affecting individual tree segmentation. Two main categories of algorithms are available for individual tree segmentation based on 3D data. The first category comprises traditional tree crown segmentation algorithms, which include segmentation algorithms based on region growth, hierarchical clustering, and voxels. The Li2012 and layer-stacking segmentation (LSS) crown segmentation algorithms are commonly used in individual tree segmentation research. For example, Iqbal et al. [17] used the point cloud individual tree detection (PCITD) and Li2012 segmentation algorithms to segment individual trees in a plantation based on UAV LiDAR data. The Li2012 algorithm produced more reliable individual tree segmentation results than the PCITD algorithm, with segmentation accuracy about 13% higher. Ayrey et al. [18] used the LSS algorithm for individual tree segmentation at different canopy densities based on LiDAR data. The results showed that the accuracy of individual tree segmentation in plots with low canopy density was 79%, whereas the accuracy in plots with high canopy density was 72%. The second category of tree crown segmentation algorithms is based on deep learning. Such algorithms are gradually emerging with the development of artificial intelligence, demonstrating strong generalization abilities in processing 3D data and a superior ability to achieve individual tree segmentation in complex situations. Widely used deep learning algorithms for individual tree segmentation include PointNet and its improved variants, PointCNN (point convolutional neural network), Faster R-CNN, etc. For example, Windrim et al. [19] used a 3D CNN to segment individual trees in two plots based on LiDAR data.
The results showed that the highest accuracy of individual tree segmentation was 93%. Chen et al. [20] used the PointNet deep learning algorithm to segment individual trees of four forest tree species based on UAV LiDAR data. The results showed that the correct detection rate of individual trees was highest in the nursery base, with an accuracy of 90%. Shen et al. [21] proposed a new algorithm based on energy segmentation and PointCNN segmentation for individual tree crown segmentation based on LiDAR data and image-derived 3D data. The results showed that the accuracy of individual tree segmentation for the two data types reached 90% and 94%, respectively. Li et al. [22] used multiple deep learning methods (PointNet, PointNet++, SGPN, and ASIS) to segment the stems and leaves of various plants (tobacco, tomato, and sorghum) based on 3D data. The authors reported that the segmentation results of the PointNet++ algorithm were the best, with a segmentation accuracy of 93% for stems and 99% for leaves. Reza et al. [23] studied the effect of data preparation on semantic segmentation of 3D LiDAR point cloud data based on deep neural networks using PointNet++ and KPConv. They found that two proposed data preparation methods could improve the performance of both deep neural networks compared to the baseline method based on point cloud partitioning in PointNet++. Although both traditional and deep learning segmentation algorithms have been successfully used for individual tree segmentation, the differences between such algorithms are unknown with respect to LiDAR data and image-derived 3D data.
Different tree species often have different growth structures, and structural differences among tree species can impact the extraction of individual tree structural parameters. For example, Kwak et al. [24] used a watershed segmentation algorithm to segment 135 individual trees of different species, including 47 Pinus koraiensis, 45 Larix leptolepis, and 43 Quercus spp., and extracted tree height based on LiDAR data. The maximum segmentation accuracies for Pinus koraiensis, Larix leptolepis, and Quercus spp. were 68.1%, 86.7%, and 67.4%, with R2 values of tree height estimation of 0.77, 0.80, and 0.74, respectively. García et al. [25] estimated the biomass of different tree species in central Spain based on LiDAR data and found that the R2 values for black pine, Spanish juniper, and Holm oak were greater than 0.85, 0.70, and 0.90, respectively. Tang et al. [14] monitored the growth changes of different tree species based on time-series UAV images. The annual growth of Liriodendron chinense was the highest, at 58.64 cm, whereas the annual growth of Osmanthus fragrans was the lowest, at only 34.00 cm. Significant differences were also observed in the growth months among tree species. The growing season of Liriodendron chinense was mainly concentrated from April to July, with a total growth of 56.92 cm, accounting for 97.08% of the total annual growth. In contrast, Ficus microcarpa grew in every month of the year, with the greatest growth from May to August, corresponding to 44.24 cm, or approximately 77.09% of the annual growth. These results indicate that the extraction of individual tree structural parameters differs among tree species and that such differences are mainly related to the crown size and shape of individual trees.
The effect of the tree species on segmentation results for individual trees based on 3D data requires further verification.
Accordingly, in this study, we used UAVs to collect LiDAR data and high-resolution stereo images. Three different segmentation algorithms were used to segment individual tree crowns of four tree species. The specific objectives of this research were as follows: (1) to explore the differences in segmentation results of individual tree crowns between LiDAR data and image-derived points; (2) to explore the differences in segmentation results of individual tree crowns with three different segmentation algorithms; and (3) to explore the differences in segmentation results of individual tree crowns among different tree species. The results can provide not only methodological support to improve the segmentation of individual tree crowns but also basic high-precision data for subsequent extraction of individual structural parameters of trees in forests.

2. Materials and Methods

2.1. Study Area

The study area is located on the Guilin University of Technology campus in the Guangxi Zhuang Autonomous Region, China (109°45′–104°40′ E, 24°18′–25°41′ N), as shown in Figure 1. The study area has a subtropical monsoon climate, with an annual average temperature of 17.4–21 °C, annual precipitation of 1814 to 1941 mm, and an annual average sunshine duration of 1447.1 h, making it suitable for vegetation growth. The study area is a broadleaf mixed plantation, with main tree species including Cinnamomum camphora (Cinnamomum camphora (L.) Presl), Osmanthus fragrans (Osmanthus fragrans (Thunb.) Lour.), Liriodendron chinense (Liriodendron chinense (Hemsl.) Sarg.), Magnolia grandiflora (Magnolia grandiflora L.), Ficus microcarpa (Ficus microcarpa Linn. f.), etc. Each tree species mostly grows in the same subregion, so the tree structures within a subregion are similar, although there are relatively significant differences in tree structure among the growth subregions of different tree species.

2.2. Data Introduction and Processing

2.2.1. Field Data

Field data were collected on 4 July 2022. The obtained parameters consisted of the position, quantity, crown width, and species of individual trees. The position of each tree was obtained using a real-time kinematic (RTK) receiver in combination with a total station. The crown diameters in the east–west and north–south directions were measured using a tape measure, and the mean value was used as the crown width. The tree height was measured using a measuring pole. A total of 144 trees were measured, including 54 Osmanthus fragrans, 21 Liriodendron chinense, 37 Magnolia grandiflora, 8 Cinnamomum camphora, and 24 Ficus microcarpa. The statistical results for tree height and crown width are shown in Table 1.

2.2.2. UAV LiDAR Data

On 24 October 2022, an AU20 long-distance high-precision LiDAR sensor mounted on a BB4 UAV was used to collect LiDAR data. The sensor can receive 16 echoes and output 2 million scanning points per second, with a ranging accuracy of 5 mm. It can scan from 0° to 360°, with an angular resolution of 0.001°. The flight speed, flight height, and scan overlap were set to 6 m/s, 100 m, and 140–180°, respectively. A total of 1.4 million points cover the study area of 13,317 m2, with an average point spacing of 8.89 cm. The obtained 3D data were processed using CoPre V2.4.2 software (Huace Navigation Technology Co., Ltd., Shanghai, China), and unclassified LiDAR points were obtained in *.las format, with an average point density of 105 points/m2. Each LiDAR point was assigned three-channel RGB color values based on the high-resolution aerial imagery. The specific LiDAR data for different tree species are shown in Figure 2.

2.2.3. 3D Data Derived from UAV High-Resolution Stereo Images

High-resolution stereo images were obtained by a DJI Phantom 4 RTK in five-directional flight mode on 24 October 2022. Images were collected in clear weather with a light breeze. The flight altitude was 60 m, and the longitudinal and side overlaps were 80% and 70%, respectively. The obtained high-resolution stereo images were processed to produce sparse 3D data using Pix4D V4.7.5 software (Pix4D SA, Prilly, Switzerland). The sparse points were then densified using a multiview stereo vision algorithm to provide more detailed information. A total of 16 million points cover the study area of 13,317 m2, with an average point spacing of 8.45 cm. The 3D data were obtained as unclassified RGB points in *.las format, with an average point density of 1184 points/m2. The specific image-derived points for different tree species are shown in Figure 3.

2.3. Production of Individual Tree Segmentation Dataset

The LiDAR data and image-derived points were denoised using a Gaussian filtering algorithm, and a DEM was generated from the LiDAR data using the Kriging algorithm. Because the LiDAR data and image-derived points were acquired on the same day, the underlying topography was identical for both datasets. Therefore, both the LiDAR data and the image-derived points were height-normalized using the DEM derived from the LiDAR data. The height-normalized 3D data were used to produce an individual tree segmentation dataset for the deep neural network algorithm PointNet++.
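The height-normalization step can be sketched as follows. This is a minimal numpy illustration, not the processing software actually used in the study; it assumes the DEM is a regular grid and takes each point's ground elevation from the DEM cell the point falls in (the function and parameter names are hypothetical):

```python
import numpy as np

def height_normalize(points, dem, dem_origin, cell_size):
    """Subtract the ground elevation under each point from its Z value.

    points: (N, 3) array of XYZ coordinates
    dem: 2D array of ground elevations (e.g., interpolated by Kriging)
    dem_origin: (x0, y0) of the DEM's lower-left corner
    cell_size: DEM cell size in metres
    """
    # Map each point's XY position to a DEM row/column index.
    cols = ((points[:, 0] - dem_origin[0]) / cell_size).astype(int)
    rows = ((points[:, 1] - dem_origin[1]) / cell_size).astype(int)
    cols = np.clip(cols, 0, dem.shape[1] - 1)
    rows = np.clip(rows, 0, dem.shape[0] - 1)
    normalized = points.copy()
    normalized[:, 2] -= dem[rows, cols]  # Z becomes height above ground
    return normalized
```

After this step, a point's Z coordinate is its height above ground rather than its absolute elevation, which is what the height thresholds used later (e.g., the 2 m cutoff) operate on.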

2.3.1. Data Preparation

Before input into the deep neural network, four subregions were selected from the study area, ensuring that each subregion contained only one tree species and that all subregions had a similar area. For each subregion, training and test datasets were manually generated at a ratio of 7:3 for both the LiDAR data and the image-derived 3D data, comprising (1) individual tree points belonging to various tree species and growth stages and (2) other objects, including bare ground and understory vegetation, with a small portion of the points overlapping nearby trees, as shown in Figure 4. After obtaining the points of individual trees and other objects, the XYZ coordinates and RGB color information were extracted and converted into NPY files to accelerate the model's file-reading process. Then, each point was labeled and assigned to either the training set or the test set according to the specified area. Finally, the dataset was initialized, and the quantity and category distributions of points were calculated. The dataset was sampled at random by splitting the sampling subregion.
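The labeling and splitting described above can be sketched as follows. Note that the study splits by designated area, whereas this simplified illustration uses a per-point random split; the function name and the 7:3 default are the only details taken from the text, and everything else is an assumption:

```python
import numpy as np

def split_labeled_points(xyz, rgb, labels, train_ratio=0.7, seed=0):
    """Combine XYZ + RGB + class label per point and split into train/test sets.

    xyz: (N, 3) coordinates; rgb: (N, 3) colors; labels: (N,) class ids
    (e.g., 0 = other objects, 1 = individual tree points).
    Returns (train, test) arrays of shape (*, 7), ready to save with np.save().
    """
    data = np.hstack([xyz, rgb, np.asarray(labels).reshape(-1, 1)]).astype(np.float32)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))          # shuffle before splitting
    n_train = int(round(train_ratio * len(data)))
    return data[idx[:n_train]], data[idx[n_train:]]
```

Saving the returned arrays with `np.save()` yields the NPY files that accelerate the model's file reading.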

2.3.2. Data Augmentation

Data augmentation is usually applied to enlarge limited datasets and prevent model overfitting by randomly rotating, scaling, shifting, discarding, and scrambling data [26]. Thus, for deep learning (such as the PointNet++ algorithm), after obtaining a limited dataset of points, data augmentation techniques were used to expand the dataset. There were 1165 original data samples. The number of data samples was increased to 10,485 by executing operations such as Z-axis rotation, random shifting, random scaling, and random point elimination.
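One augmentation pass over a sample can be sketched as below. The shift, scale, and dropout ranges are assumptions for illustration, as the text does not state the exact parameters:

```python
import numpy as np

def augment_sample(points, rng):
    """One random augmentation pass: Z-axis rotation, shift, scale, point dropout.

    points: (N, 6) array of XYZRGB values; geometry is in columns 0:3.
    rng: a numpy random Generator.
    """
    out = points.copy()
    # Rotate XY coordinates about the Z axis by a random angle.
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    out[:, :2] = out[:, :2] @ rot.T
    out[:, :3] += rng.uniform(-0.2, 0.2, size=3)  # random shift (assumed range)
    out[:, :3] *= rng.uniform(0.8, 1.2)           # random scale (assumed range)
    keep = rng.random(len(out)) > 0.05            # randomly drop ~5% of points
    return out[keep]
```

Applying several such passes to each of the 1165 original samples is how a dataset can be expanded roughly ninefold, as was done here (1165 to 10,485 samples).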

2.4. Individual Tree Segmentation Algorithms

To explore the differences in the results of individual tree segmentation obtained by different segmentation algorithms, two traditional tree crown segmentation algorithms and one deep learning algorithm were selected. The two traditional tree crown segmentation algorithms were the Li2012 algorithm based on region growth and the LSS algorithm based on hierarchical clustering, which have been widely used in segmentation research. The PointNet++ algorithm, which is also widely used, was selected as the deep learning algorithm. A detailed introduction to each algorithm is provided below.

2.4.1. PointNet++

PointNet++ is an improved version of PointNet. PointNet employs an end-to-end learning methodology. Instead of first transforming the 3D data into a more manageable format, such as voxels, PointNet directly processes the scattered and unordered points, maximizing the preservation of the spatial features of 3D data [27]. However, the segmentation results are affected by the fact that PointNet only learns global features and lacks local contextual information. PointNet++ mimics multilevel convolutional neural networks, adding multilevel feature learning on the basis of PointNet and proposing solutions for uneven sampling density [28]. Multilevel feature learning consists of three primary components: sampling, grouping, and PointNet. The PointNet++ model uses farthest point sampling (FPS) to select the center point (centroid) and ball query grouping to find all points within the radius range of the center point, dividing the point set into overlapping local regions. The features of each local region are extracted by iteration until all point features are obtained, ensuring coverage of the entire point sampling space. The architecture of PointNet++ is shown in Figure 5. After the successful identification of the tree points by PointNet++, the classified tree points are further segmented as tree crowns using a point-based clustering segmentation algorithm. According to the actual distribution of trees, the minimum horizontal spacing between trees was set to 0.2 m. To minimize the training time for PointNet++ models, NVIDIA RTX 2080Ti GPUs (NVIDIA Inc., Santa Clara, CA, USA) were used instead of CPUs. The learning rate was set to 0.001, the number of epochs was set to 200, and the batch size was set to 16.
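The sampling and grouping stages described above can be illustrated with a minimal numpy sketch of farthest point sampling and ball query grouping. This is a didactic illustration of the two operations, not the CUDA implementation used in PointNet++ itself:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Select k centroids, each as far as possible from those already chosen,
    so the centroids cover the point sampling space evenly."""
    n = len(points)
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)   # distance to the nearest chosen centroid
    farthest = 0                # start from an arbitrary point
    for i in range(k):
        chosen[i] = farthest
        d = np.sum((points - points[farthest]) ** 2, axis=1)
        dist = np.minimum(dist, d)
        farthest = int(np.argmax(dist))  # next centroid: most distant point
    return chosen

def ball_query(points, centroid, radius):
    """Grouping step: indices of all points within `radius` of a centroid,
    forming one overlapping local region."""
    d2 = np.sum((points - centroid) ** 2, axis=1)
    return np.nonzero(d2 <= radius ** 2)[0]
```

Each local region returned by `ball_query` is then passed through a small PointNet to extract its features, and the process is repeated hierarchically.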

2.4.2. Li2012

The Li2012 algorithm was originally developed by Li et al. [29]; it classifies points and assigns an ID to each point through iterative operations. Individual tree segmentation can be achieved by adjusting the horizontal spacing according to the height threshold. First, the local highest point is extracted from the elevation-normalized points by a local maximum filter and used as the candidate vertex of an individual tree. Second, the target tree is segmented by growing from the candidate tree vertex in accordance with an adaptive horizontal distance threshold and is removed from the points once it is completely segmented. Finally, the algorithm continues to search for the next candidate tree vertex in the remaining points to segment a new target tree. In this study, a minimum height threshold was applied to exclude points below a specific height (set to 2 m).
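The top-down growing loop can be caricatured as follows. This is a greatly simplified sketch: the real Li2012 algorithm adapts the horizontal distance threshold and compares each point against competing trees, whereas this version simply claims a fixed radius around each successive treetop:

```python
import numpy as np

def li2012_like_segment(points, spacing=1.5, min_height=2.0):
    """Simplified top-down segmentation in the spirit of Li et al. (2012).

    points: (N, 3) height-normalized XYZ array.
    Returns (kept_points, labels) where labels[i] is a tree ID.
    """
    pts = points[points[:, 2] >= min_height]   # drop points below the threshold
    labels = np.full(len(pts), -1)
    tree_id = 0
    while np.any(labels == -1):
        rest = np.where(labels == -1)[0]
        top = rest[np.argmax(pts[rest, 2])]    # highest unassigned point = treetop
        dx = pts[rest, :2] - pts[top, :2]
        near = rest[np.hypot(dx[:, 0], dx[:, 1]) <= spacing]
        labels[near] = tree_id                 # grow the tree around its top
        tree_id += 1
    return pts, labels
```

Replacing the fixed `spacing` with an adaptive, per-point threshold is what distinguishes the actual algorithm from this sketch.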

2.4.3. Layer-Stacking Segmentation (LSS)

The layer-stacking method is an algorithm used to segment individual trees based on LiDAR data [18]. First, the LiDAR data are sliced into height intervals. In this study, the minimum height from the ground was set to 2 m, and a layered slice was set every 1 m until the highest point of the tree was reached. Then, the points of each layer were identified and segmented. In this study, the local maximum filter of the adaptive window was configured to identify each layer’s seed points. The points of each sliced layer were clustered according to the distance to the seed point through K-means clustering. The process was repeated iteratively until the seed point’s location did not shift. Finally, a Thiessen polygon was built for each sliced layer using the seed points, and the polygons from all sliced layers were merged to create the segmentation result, resulting in a representative tree outline.
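The slicing and per-layer clustering steps above can be sketched as follows. The assignment function shows only the K-means assignment step (each point joins its nearest seed in the horizontal plane); the seed-update iteration and the final Thiessen polygon merging are omitted, and all names are hypothetical:

```python
import numpy as np

def layer_stack(points, z_min=2.0, layer_height=1.0):
    """Slice a height-normalized point cloud into horizontal layers,
    starting 2 m above ground with 1 m intervals, as in the LSS setup here."""
    z = points[:, 2]
    layers = []
    lo, top = z_min, z.max()
    while lo < top:
        mask = (z >= lo) & (z < lo + layer_height)
        layers.append(points[mask])
        lo += layer_height
    return layers

def assign_to_seeds(layer_pts, seeds):
    """K-means assignment step for one layer: each point joins the nearest
    seed point, using horizontal (XY) distance only."""
    d2 = ((layer_pts[:, None, :2] - seeds[None, :, :2]) ** 2).sum(-1)
    return np.argmin(d2, axis=1)
```

In the full algorithm, the seed positions are recomputed from the assigned clusters and the assignment repeats until the seeds stop moving, after which the per-layer polygons are stacked into tree outlines.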

2.5. Accuracy Evaluation

A positional relationship between the reference tree crown and the segmented tree crown occurs in three situations: true positive (TP), when an individual tree is correctly segmented as a tree; false positive (FP), when an individual tree is incorrectly detected as several trees; and false negative (FN), when several individual trees or non-individual tree portions are incorrectly detected as a tree. The recall (R), precision (P), and F score (F) were calculated according to Formulas (1)–(3), respectively, to evaluate the segmentation accuracy of individual trees. F represents the overall accuracy of individual tree segmentation, R represents the proportion of correctly detected trees relative to the actual number of trees, and P represents the proportion of correctly detected trees relative to the entire detection result [30].
R = TP / (TP + FN)		(1)
P = TP / (TP + FP)		(2)
F = 2 × P × R / (P + R)		(3)
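Formulas (1)–(3) can be computed directly from the counted trees; a minimal helper (the function name is ours):

```python
def segmentation_scores(tp, fp, fn):
    """Recall, precision, and F score from counts of correctly segmented (TP),
    oversegmented (FP), and undersegmented/missed (FN) trees."""
    r = tp / (tp + fn)          # Formula (1): detected fraction of real trees
    p = tp / (tp + fp)          # Formula (2): correct fraction of detections
    f = 2 * p * r / (p + r)     # Formula (3): harmonic mean of P and R
    return r, p, f
```

For example, a plot with 8 correctly segmented trees, 1 oversegmentation, and 1 missed tree gives R = P = F ≈ 0.89.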

3. Results

3.1. Comparative Analysis of the Overall Results of Individual Tree Segmentation

Three different segmentation algorithms were used to perform individual tree segmentation based on two types of 3D data; the overall results are shown in Figure 6.
As evidenced by the results shown in Figure 6, all three algorithms can accurately depict the crowns of individual trees based on both types of 3D data, especially for Liriodendron chinense, which has a relatively high tree height. Specifically, most individual trees could be detected by the PointNet++ algorithm, and the segmentation accuracy based on LiDAR data was generally better than that based on image-derived points, as shown in Figure 6a,b. The segmentation accuracy obtained by the Li2012 algorithm based on image-derived points was slightly better than that based on LiDAR data, as shown in Figure 6c,d. The segmentation accuracy obtained by the LSS algorithm based on LiDAR data and image-derived points was similar, with both oversegmentation and undersegmentation, as shown in Figure 6e,f.

3.2. Comparison and Analysis of Detailed Results of Individual Tree Segmentation

To further explore the differences in individual tree segmentation accuracy among algorithms based on two types of 3D data and different tree species, four different tree species were selected for further analysis: Liriodendron chinense, Magnolia grandiflora, Osmanthus fragrans, and Ficus microcarpa.
(1)
Individual tree segmentation accuracy of Liriodendron chinense
The individual tree segmentation accuracy of Liriodendron chinense based on LiDAR data and image-derived points using three different algorithms is shown in Figure 7. For the convenience of visual discrimination, ground points are removed when displaying the segmentation accuracy of individual trees.
The segmentation results obtained by the PointNet++ and Li2012 algorithms based on LiDAR data are relatively satisfactory, and most individual trees could be accurately segmented, as shown in Figure 7b,d, respectively. Oversegmentation was observed in the results obtained by the LSS algorithm, as shown in Figure 7c. The segmentation results obtained by the Li2012 algorithm based on image-derived points are relatively satisfactory, whereas oversegmentation was observed in the results obtained using the PointNet++ and LSS algorithms, as shown in Figure 7f–h, respectively.
(2)
Individual tree segmentation accuracy of Magnolia grandiflora
The individual tree segmentation accuracy of Magnolia grandiflora obtained using three different algorithms based on LiDAR data and image-derived points is shown in Figure 8.
For the LiDAR data, all three algorithms achieved accurate segmentation of most individual trees. Some undersegmentation was observed in the results obtained by the PointNet++ and Li2012 algorithms, whereas oversegmentation was observed in the results obtained by the LSS algorithm, as shown in Figure 8b–d, respectively. For the image-derived points, all individual trees were correctly segmented by the PointNet++ algorithm, whereas undersegmentation was observed in the results obtained by the LSS and Li2012 algorithms, as shown in Figure 8f–h, respectively.
(3)
Individual tree segmentation accuracy of Osmanthus fragrans
The individual tree segmentation accuracy of Osmanthus fragrans obtained using three different algorithms based on LiDAR data and image-derived points is shown in Figure 9.
The individual tree segmentation accuracy of Osmanthus fragrans is significantly lower than that of Liriodendron chinense and Magnolia grandiflora, with obvious oversegmentation and undersegmentation. Specifically, for the LiDAR data, most individual trees were correctly segmented by the PointNet++ algorithm, whereas undersegmentation was observed in the results obtained by the LSS and Li2012 algorithms, as shown in Figure 9b–d, respectively. For the image-derived points, most individual trees were also correctly segmented by the PointNet++ algorithm; however, the LSS algorithm suffered from severe undersegmentation, whereas the Li2012 algorithm suffered from both undersegmentation and oversegmentation, as shown in Figure 9f–h, respectively.
(4)
Individual tree segmentation accuracy of Ficus microcarpa
The individual tree segmentation accuracy of Ficus microcarpa obtained using three different algorithms based on LiDAR data and image-derived points is shown in Figure 10.
As shown in Figure 10, for both the LiDAR data and the image-derived points, most individual trees were not accurately segmented by any of the three algorithms. For the LiDAR data, the segmentation results obtained by the PointNet++ algorithm are better than those obtained by the LSS and Li2012 algorithms, which suffered from severe oversegmentation, as shown in Figure 10b–d, respectively. For the image-derived points, most individual trees were correctly segmented by the PointNet++ algorithm, whereas oversegmentation was observed in the results obtained by the LSS and Li2012 algorithms, as shown in Figure 10f–h, respectively.

3.3. Accuracy Evaluation of Individual Tree Segmentation Results

(1)
Accuracy evaluation of individual tree segmentation accuracy obtained using three different algorithms based on two types of 3D data
Quantitative analysis was conducted on the segmentation accuracy obtained by three algorithms based on LiDAR data and image-derived points, as shown in Table 2.
Table 2 shows that the segmentation accuracy based on LiDAR data is better than that based on image-derived points, with F values 0.02–0.14 higher, although the differences are not large.
Among the three tested segmentation algorithms, i.e., PointNet++, LSS, and Li2012, the segmentation accuracy obtained by the PointNet++ algorithm based on both LiDAR data and image-derived points was better than that obtained by Li2012 and LSS, with a maximum F value of 0.91. The segmentation accuracy obtained by the Li2012 algorithm was slightly lower, with a maximum F value of 0.90. In comparison, the segmentation accuracy obtained by the LSS algorithm was the worst; its accuracy based on LiDAR data (F = 0.86) was superior to that based on image-derived points (F = 0.73), with an F value difference of 0.13.
(2)
Accuracy evaluation of individual tree segmentation results for four tree species
Quantitative evaluation was conducted on the individual tree segmentation results for the four tree species, and the numbers of correctly segmented, undersegmented, and oversegmented individual trees in the sample plot were counted and plotted as a column chart. The results are shown in Figure 11.
As shown in Figure 11, among the four tree species, the individual tree segmentation accuracy of Liriodendron chinense is the best, followed by that of Magnolia grandiflora and Osmanthus fragrans, with Ficus microcarpa the worst.
In summary, the source of 3D data, the segmentation algorithm, and the tree species all influence the segmentation accuracy of individual trees. Tree species exert the greatest effect on the segmentation accuracy of individual trees, with a maximum F difference of 0.67; followed by the effect of the segmentation algorithm, with a maximum F difference of 0.44; and the 3D data source, with an F difference of 0.23.

4. Discussion

4.1. Analysis of Differences in Individual Tree Segmentation Accuracy Based on Different Types of 3D Data

LiDAR data and image-derived points, currently the main means of obtaining 3D data, play a crucial role in estimating forest tree structures. However, because the two types of data are acquired on different principles, tree segmentation and structural parameter results differ to some extent. To further investigate the differences in individual tree segmentation between the two 3D data sources, LiDAR data and high-resolution stereo images were obtained in this study using UAVs, and individual trees were segmented based on LiDAR data and image-derived points. The individual tree segmentation accuracy based on LiDAR data is better than that based on image-derived points. For tree species with relatively high tree heights and clear boundaries, the segmentation accuracy based on the two data sources is similar. For tree species with relatively low tree heights and blurred boundaries, the segmentation accuracy based on LiDAR data is better than that based on image-derived points, which is consistent with previous research findings. For example, Cao et al. [31] used LiDAR data and high-resolution stereo-image-derived points to segment individual trees and estimate the structural parameters of a plantation with different tree species, heights, and stand densities. The performances of LiDAR data and image-derived points varied depending on the forest structure of the plantation. Specifically, for coniferous forest plots with high tree height and low canopy density, the results based on the two types of 3D data were similar. However, in broadleaf forest plots with a relatively dense canopy, owing to the limited ability of high-resolution image-derived points to capture information under the forest canopy, the results based on image-derived points were worse than those based on LiDAR data.
Analysis of the differences in individual tree segmentation accuracy between the two 3D data sources showed that the main cause is the difference in the principles by which the 3D data are acquired. In the study area, gaps were observed between the crowns of most tall trees (Liriodendron chinense and Magnolia grandiflora), which therefore obstruct one another less than the low, interconnected crowns of Osmanthus fragrans and Ficus microcarpa, which easily obstruct one another. LiDAR sensors obtain 3D data by receiving laser pulse echoes, capturing relatively detailed canopy and ground information, whereas image-derived points are computed jointly from stereo images, so only points on the unobstructed canopy surface can be obtained. This difference in data acquisition principles between LiDAR data and image-derived points results in different individual tree segmentation accuracies depending on the forest conditions.

4.2. Analysis of Differences in Individual Tree Segmentation Accuracy Obtained by Different Algorithms

The segmentation algorithm is an important factor that affects the results of individual tree segmentation. To explore the segmentation accuracy of individual trees obtained by different segmentation algorithms, PointNet++, Li2012, and LSS were used to segment individual tree crowns. The segmentation accuracy obtained by the deep learning algorithm PointNet++ is better than that obtained by the Li2012 and LSS algorithms, with satisfactory applicability across tree species. These results are in agreement with those reported in previous studies. For example, Hu and Li [32] used deep learning segmentation algorithms and four traditional segmentation algorithms to segment individual trees based on high-resolution stereo-image-derived 3D data. The results showed that the accuracy of individual tree segmentation based on the deep learning algorithm was over 90%, which was far superior to the results of traditional algorithms. Chen et al. [20] used the PointNet++ deep learning algorithm and two traditional segmentation algorithms to segment individual trees based on LiDAR data. The results demonstrated that the segmentation accuracy of the deep learning algorithm was 1%–6% higher than that of the conventional segmentation algorithms. Analysis showed that in contrast to traditional segmentation algorithms, the PointNet++ deep learning algorithm can use multilevel feature learning to model abstract features in 3D data, extract effective features from many samples, and iteratively train to improve model performance. PointNet++ also starts with the original 3D data during the training process, preserving the spatial features of the points to the greatest extent possible. Low understory vegetation and partially incomplete individual tree points were removed during the point classification using the PointNet++ algorithm, eliminating interference from other ground object points during subsequent individual tree segmentation. 
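The sampling-and-grouping step of PointNet++ described above begins by selecting well-spread centroids (the red dots in Figure 5) via farthest point sampling. The following is a minimal NumPy sketch of that idea only, not the network itself and not the authors' implementation:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Pick k well-spread centroid indices from an (N, 3) point array,
    as done in PointNet++'s set-abstraction (sampling) step."""
    n = points.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)             # distance to nearest chosen centroid
    chosen[0] = 0                         # start from an arbitrary point
    for i in range(1, k):
        d = np.linalg.norm(points - points[chosen[i - 1]], axis=1)
        dist = np.minimum(dist, d)        # update nearest-centroid distances
        chosen[i] = int(np.argmax(dist))  # point farthest from all chosen so far
    return chosen
```

Each centroid then gathers its local neighbourhood (grouping), and a small shared network abstracts features per group; repeating this hierarchy is what lets PointNet++ learn multilevel features directly from the raw 3D points.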
However, compared with traditional algorithms, deep learning algorithms are more time-consuming, mainly because they are relatively complex and must learn from massive amounts of data to achieve satisfactory segmentation accuracy. With the development of computer hardware and cloud computing, computing speed is no longer a limiting factor for the widespread application of deep learning. Therefore, in the future, deep learning segmentation algorithms should be used as much as possible to improve the accuracy and applicability of individual tree segmentation in complex forest scenes.

4.3. Analysis of Differences in Individual Tree Segmentation Accuracy of Different Tree Species

In addition to the 3D data source and the segmentation algorithm, the tree species influences individual tree segmentation accuracy. To explore these differences, four tree species were segmented based on LiDAR data and image-derived points. The segmentation accuracy of Liriodendron chinense was the best, followed by that of Magnolia grandiflora, Osmanthus fragrans, and Ficus microcarpa. For Liriodendron chinense and Magnolia grandiflora, the segmentation accuracy based on LiDAR data and that based on image-derived points were almost identical, whereas for Osmanthus fragrans and Ficus microcarpa, the segmentation accuracy based on LiDAR data was superior to that based on image-derived points, as in previous similar studies. For example, Maschler et al. [33] performed individual tree segmentation and tree species classification for 13 tree species (8 broadleaf and 5 coniferous) based on airborne hyperspectral data. The results showed that the segmentation accuracy varied from 75% to 100% among tree species. Qin et al. [21,34] proposed a segmentation algorithm based on LiDAR data, as well as hyperspectral and high-resolution RGB data, to segment individual trees of 13 tree species in a subtropical broadleaf forest, and the results also indicated that the segmentation accuracy may vary among tree species. Yang et al. [35] studied the influence of forest type on individual tree segmentation in coniferous, broadleaf, and mixed forests. The results show that tree species had a significant impact on the performance of segmentation algorithms. The difference in individual tree segmentation accuracy among tree species is mainly caused by differences in canopy structure.
Specifically, Liriodendron chinense and Magnolia grandiflora are relatively tall, with a clear height difference defining the canopy boundary. The crown of Liriodendron chinense is tower-shaped, with a long crown and a marked height difference between the top and the edge of the crown. The crowns of Magnolia grandiflora and Osmanthus fragrans are elliptical, with multiple adjacent crowns connected and a relatively small height difference between the crown top and the crown edge. The crown of Ficus microcarpa is relatively large, with a sparse structure and multiple vertices, making it prone to oversegmentation. Therefore, the effect of tree species on segmentation accuracy should be considered in future work.
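The link between crown shape and oversegmentation can be illustrated with a simple local-maximum tree-top detector on a rasterized canopy height model (CHM). This is a hypothetical sketch, not one of the three algorithms tested in this study: a broad, multi-vertex crown such as that of Ficus microcarpa produces several local maxima within a single crown, each of which such a detector would report as a separate tree top.

```python
import numpy as np
from scipy import ndimage

def detect_tree_tops(chm: np.ndarray, window: int = 3, min_height: float = 2.0):
    """Local-maximum tree-top detection on a canopy height model raster.

    A cell is a candidate tree top if it equals the maximum of its
    window x window neighbourhood and exceeds min_height (metres).
    Returns an (k, 2) array of (row, col) indices of detected tops.
    """
    local_max = ndimage.maximum_filter(chm, size=window) == chm
    tops = local_max & (chm >= min_height)
    return np.argwhere(tops)
```

Note that this naive version flags every cell of a flat plateau and every vertex of a multi-apex crown; practical detectors add smoothing and height-dependent window sizes, which is exactly where tower-shaped crowns (one dominant apex) fare better than large sparse crowns.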

5. Conclusions

In this study, LiDAR data and high-resolution stereo images were collected with UAVs, and three different segmentation algorithms were used to segment individual tree crowns of four tree species to explore the impact of the 3D data source, the individual tree segmentation algorithm, and the tree species on the segmentation accuracy of individual trees. The main conclusions are as follows:
(1)
For LiDAR data and image-derived point data, the segmentation accuracy based on LiDAR data is generally better than that based on image-derived point data. In particular, for tree species with relatively high tree heights and clear boundaries, the segmentation accuracy based on LiDAR data and image-derived points is similar, with a difference in F value of 0.017. For tree species with relatively low tree heights and blurred boundaries, the segmentation accuracy based on LiDAR data is better than that based on image-derived points, with a difference in F value of 0.136.
(2)
Among the three tested segmentation algorithms, the results obtained by the PointNet++ algorithm are the best, with a maximum F value of 0.91, whereas the LSS algorithm yielded the lowest segmentation accuracy, with a maximum F value of 0.86.
(3)
Among the four investigated tree species, the segmentation accuracy of Liriodendron chinense is the best, followed by that of Magnolia grandiflora and Osmanthus fragrans, whereas the segmentation accuracy of Ficus microcarpa is the worst. For Liriodendron chinense and Magnolia grandiflora, the segmentation accuracy of individual tree crowns based on LiDAR data and image-derived points is similar, whereas for Osmanthus fragrans and Ficus microcarpa, the segmentation accuracy of individual tree crowns based on LiDAR data is superior to that based on image-derived points.
(4)
The source of 3D data, the segmentation algorithm, and the tree species all have an impact on the individual tree crown segmentation accuracy. The effect of the tree species is the greatest, followed by the effects of the segmentation algorithm and the 3D data source.
In this study, we systematically investigated the factors that affect individual tree segmentation: the 3D data source, the segmentation algorithm, and the tree species. The reported results not only provide methodological support for improving the segmentation accuracy of individual trees but also represent high-precision basic data for subsequent extraction of individual tree structural parameters in forests. However, this study tested only a few segmentation algorithms, not all of them state-of-the-art, in a relatively small study area with relatively simple terrain and tree structures. The latest deep learning algorithms should be applied in larger areas with complex terrain, abundant tree species, diverse forest structures, and both natural and plantation forests to verify the conclusions drawn in the present study. Therefore, in future research, we plan to obtain data under different conditions and continue to deepen our research in this field.

Author Contributions

Conceptualization, H.Y.; data curation, H.Y. and Y.L.; formal analysis, H.Y. and Y.L.; methodology, H.Y., J.C. and X.T.; supervision, Q.Y.; validation, Y.H.; writing—original draft preparation, Y.L. and H.Y.; writing—review and editing, Y.L. and X.T. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by grants from the National Natural Science Foundation of China (41901370, 42261063), the Guangxi Natural Science Foundation (2018GXNSFBA281075), the Guangxi Science and Technology Base and Talent Project (GuikeAD19110064), and the BaGuiScholars program of the provincial government of Guangxi (Hongchang He).

Data Availability Statement

Not applicable.

Acknowledgments

The authors sincerely thank the editors and the anonymous reviewers for their constructive feedback.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Law, B.E.; Moomaw, W.R.; Hudiburg, T.W.; Schlesinger, W.H.; Sterman, J.D.; Woodwell, G.M. Creating strategic reserves to protect forest carbon and reduce biodiversity losses in the United States. Land 2022, 11, 721.
2. Krankina, O.N.; Harmon, M.E.; Schnekenburger, F.; Sierra, C.A. Carbon balance on federal forest lands of Western Oregon and Washington: The impact of the Northwest Forest Plan. For. Ecol. Manag. 2012, 286, 171–182.
3. Houghton, R. Aboveground forest biomass and the global carbon balance. Glob. Chang. Biol. 2005, 11, 945–958.
4. Aydin, M.B.S.; Çukur, D. Maintaining the carbon–oxygen balance in residential areas: A method proposal for land use planning. Urban For. Urban Green. 2012, 11, 87–94.
5. Wulder, M.A.; Bater, C.W.; Coops, N.C.; Hilker, T.; White, J.C. The role of LiDAR in sustainable forest management. For. Chron. 2008, 84, 807–826.
6. Wolf, J.A.; Fricker, G.A.; Meyer, V.; Hubbell, S.P.; Gillespie, T.W.; Saatchi, S.S. Plant species richness is associated with canopy height and topography in a neotropical forest. Remote Sens. 2012, 4, 4010–4021.
7. Fan, Y.; Feng, H.; Jin, X.; Yue, J.; Liu, Y.; Li, Z.; Feng, Z.; Song, X.; Yang, G. Estimation of the nitrogen content of potato plants based on morphological parameters and visible light vegetation indices. Front. Plant Sci. 2022, 13, 1012070.
8. Lee, H.; Slatton, K.C.; Roth, B.E.; Cropper, W., Jr. Adaptive clustering of airborne LiDAR data to segment individual tree crowns in managed pine forests. Int. J. Remote Sens. 2010, 31, 117–139.
9. Jaakkola, A.; Hyyppä, J.; Kukko, A.; Yu, X.; Kaartinen, H.; Lehtomäki, M.; Lin, Y. A low-cost multi-sensoral mobile mapping system and its feasibility for tree measurements. ISPRS J. Photogramm. Remote Sens. 2010, 65, 514–522.
10. Lim, K.; Treitz, P.; Wulder, M.; St-Onge, B.; Flood, M. LiDAR remote sensing of forest structure. Prog. Phys. Geogr. 2003, 27, 88–106.
11. Leblanc, S.G.; Chen, J.M.; Fernandes, R.; Deering, D.W.; Conley, A. Methodology comparison for canopy structure parameters extraction from digital hemispherical photography in boreal forests. Agric. For. Meteorol. 2005, 129, 187–207.
12. Iqbal, I.; Osborn, J.; Stone, C.; Lucieer, A.; Dell, M.; McCoull, C. Evaluating the robustness of point clouds from small format aerial photography over a Pinus radiata plantation. Aust. For. 2018, 81, 162–176.
13. Yang, J.; Kang, Z.; Cheng, S.; Yang, Z.; Akwensi, P.H. An individual tree segmentation method based on watershed algorithm and three-dimensional spatial distribution analysis from airborne LiDAR point clouds. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1055–1067.
14. Tang, X.; You, H.; Liu, Y.; You, Q.; Chen, J. Monitoring of monthly height growth of individual trees in a subtropical mixed plantation using UAV data. Remote Sens. 2023, 15, 326.
15. Chen, J.; Chen, Z.; Huang, R.; You, H.; Han, X.; Yue, T.; Zhou, G. The effects of spatial resolution and resampling on the classification accuracy of wetland vegetation species and ground objects: A study based on high spatial resolution UAV images. Drones 2023, 7, 61.
16. Mielcarek, M.; Kamińska, A.; Stereńczak, K. Digital aerial photogrammetry (DAP) and airborne laser scanning (ALS) as sources of information about tree height: Comparisons of the accuracy of remote sensing methods for tree height estimation. Remote Sens. 2020, 12, 1808.
17. Iqbal, I.A.; Osborn, J.; Stone, C.; Lucieer, A. A comparison of ALS and dense photogrammetric point clouds for individual tree detection in radiata pine plantations. Remote Sens. 2021, 13, 3536.
18. Ayrey, E.; Fraver, S.; Kershaw, J.A., Jr.; Kenefic, L.S.; Hayes, D.; Weiskittel, A.R.; Roth, B.E. Layer stacking: A novel algorithm for individual forest tree segmentation from LiDAR point clouds. Can. J. Remote Sens. 2017, 43, 16–27.
19. Windrim, L.; Bryson, M. Detection, segmentation, and model fitting of individual tree stems from airborne laser scanning of forests using deep learning. Remote Sens. 2020, 12, 1469.
20. Chen, X.; Jiang, K.; Zhu, Y.; Wang, X.; Yun, T. Individual tree crown segmentation directly from UAV-borne LiDAR data using the PointNet of deep learning. Forests 2021, 12, 131.
21. Shen, X.; Huang, Q.; Wang, X.; Li, J.; Xi, B. A deep learning-based method for extracting standing wood feature parameters from terrestrial laser scanning point clouds of artificially planted forest. Remote Sens. 2022, 14, 3842.
22. Li, D.; Shi, G.; Li, J.; Chen, Y.; Zhang, S.; Xiang, S.; Jin, S. PlantNet: A dual-function point cloud segmentation network for multiple plant species. ISPRS J. Photogramm. Remote Sens. 2022, 184, 243–263.
23. Mahmoudi Kouhi, R.; Daniel, S.; Giguère, P. Data preparation impact on semantic segmentation of 3D mobile LiDAR point clouds using deep neural networks. Remote Sens. 2023, 15, 982.
24. Kwak, D.-A.; Lee, W.-K.; Lee, J.-H.; Biging, G.S.; Gong, P. Detection of individual trees and estimation of tree height using LiDAR data. J. For. Res. 2007, 12, 425–434.
25. García, M.; Riaño, D.; Chuvieco, E.; Danson, F.M. Estimating biomass carbon stocks for a Mediterranean forest in central Spain using LiDAR height and intensity data. Remote Sens. Environ. 2010, 114, 816–830.
26. Shawky, O.A.; Hagag, A.; El-Dahshan, E.-S.A.; Ismail, M.A. Remote sensing image scene classification using CNN-MLP with data augmentation. Optik 2020, 221, 165356.
27. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep learning on point sets for 3D classification and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660.
28. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114.
29. Li, W.; Guo, Q.; Jakubowski, M.K.; Kelly, M. A new method for segmenting individual trees from the lidar point cloud. Photogramm. Eng. Remote Sens. 2012, 78, 75–84.
30. Xu, X.; Zhou, Z.; Tang, Y.; Qu, Y. Individual tree crown detection from high spatial resolution imagery using a revised local maximum filtering. Remote Sens. Environ. 2021, 258, 112397.
31. Cao, L.; Liu, H.; Fu, X.; Zhang, Z.; Shen, X.; Ruan, H. Comparison of UAV LiDAR and digital aerial photogrammetry point clouds for estimating forest structural attributes in subtropical planted forests. Forests 2019, 10, 145.
32. Hu, X.; Li, D. Research on a single-tree point cloud segmentation method based on UAV tilt photography and deep learning algorithm. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4111–4120.
33. Maschler, J.; Atzberger, C.; Immitzer, M. Individual tree crown segmentation and classification of 13 tree species using airborne hyperspectral data. Remote Sens. 2018, 10, 1218.
34. Qin, H.; Zhou, W.; Yao, Y.; Wang, W. Individual tree segmentation and tree species classification in subtropical broadleaf forests using UAV-based LiDAR, hyperspectral, and ultrahigh-resolution RGB data. Remote Sens. Environ. 2022, 280, 113143.
35. Yang, Q.; Su, Y.; Jin, S.; Kelly, M.; Hu, T.; Ma, Q.; Li, Y.; Song, S.; Zhang, J.; Xu, G. The influence of vegetation characteristics on individual tree segmentation methods with airborne LiDAR data. Remote Sens. 2019, 11, 2880.
Figure 1. Location of the study area.
Figure 2. UAV LiDAR data (including RGB) in the study area: (a) top view of LiDAR data; front-view LiDAR data of (b) Magnolia grandiflora, (c) Osmanthus fragrans, (d) Liriodendron chinense, and (e) Ficus microcarpa.
Figure 3. Three-dimensional (3D) data derived from UAV high-resolution images in the study area: (a) top view of image-derived 3D data; front-view image-derived 3D data of (b) Magnolia grandiflora, (c) Osmanthus fragrans, (d) Liriodendron chinense, and (e) Ficus microcarpa.
Figure 4. Illustration of 3D data. Rows (a–e) show individual tree points of different species: (a) Liriodendron chinense, (b) Osmanthus fragrans, (c) Cinnamomum camphora, (d) Magnolia grandiflora, and (e) Ficus microcarpa. The last two rows (f,g) are points of bare ground, understory vegetation, and small portions of intersections with neighboring trees. Different colors indicate elevation information.
Figure 5. PointNet++ network architecture. The white dots represent the tree points, and the red dots represent the center points (centroids) after sampling and grouping.
Figure 6. Segmentation results of LiDAR data and image-derived points: (a,b) show the segmentation results of PointNet++; (c,d) show the segmentation results of Li2012; (e,f) show the segmentation results of LSS. The red circles represented the reference tree crowns. The color of the visualized point cloud is matched according to the ID of the segmented individual tree.
Figure 7. Individual tree segmentation results of Liriodendron chinense. (a) shows the original point morphology of LiDAR data, and (bd) shows the segmentation results using different segmentation algorithms based on LiDAR data. (e) shows the original point morphology of the image points, and (fh) shows the segmentation results using different segmentation algorithms based on the image points. Red circles represent reference tree crowns. The color of the visualized point cloud is matched according to the ID of the segmented single tree.
Figure 8. Individual tree segmentation results of Magnolia grandiflora. (a) shows the original point morphology of LiDAR data, and (bd) shows the segmentation results using different segmentation algorithms based on LiDAR data. (e) shows the original point morphology of the image points, and (fh) shows the segmentation results using different segmentation algorithms based on the image points. Red circles represent reference tree crowns. The color of the visualized point cloud is matched according to the ID of the segmented single tree.
Figure 9. Individual tree segmentation results of Osmanthus fragrans. (a) shows the original point morphology of LiDAR data, and (bd) shows the segmentation results using different segmentation algorithms based on LiDAR data. (e) shows the original point morphology of the image points, and (fh) shows the segmentation results using different segmentation algorithms based on the image points. Red circles represent reference tree crowns. The color of the visualized point cloud is matched according to the ID of the segmented single tree.
Figure 10. Individual tree segmentation results of Ficus microcarpa. (a) shows the original point morphology of LiDAR data, and (bd) shows the segmentation results using different segmentation algorithms based on LiDAR data. (e) shows the original point morphology of the image points, and (fh) shows the segmentation results using different segmentation algorithms based on the image points. Red circles represent reference tree crowns. The color of the visualized point cloud is matched according to the ID of the segmented single tree.
Figure 11. Percentage stacked plot of the individual tree segmentation accuracy of four different tree species.
Table 1. Statistical results of tree height (TH) and crown width (CW) measured in the field.

| Tree Species | Number | Min. TH (m) | Max. TH (m) | Ave. TH (m) | Min. CW (m) | Max. CW (m) | Ave. CW (m) |
|---|---|---|---|---|---|---|---|
| Osmanthus fragrans | 54 | 2.6 | 5.9 | 4.2 | 1.3 | 5.4 | 4.8 |
| Liriodendron chinense | 21 | 7.8 | 17.0 | 14.7 | 3.3 | 6.1 | 5.0 |
| Magnolia grandiflora | 37 | 5.3 | 10.5 | 8.1 | 2.6 | 5.1 | 4.2 |
| Cinnamomum camphora | 8 | 7.6 | 10.3 | 8.8 | 3.4 | 6.9 | 5.6 |
| Ficus microcarpa | 24 | 6.3 | 9.8 | 8.4 | 5.3 | 7.4 | 6.2 |
Table 2. Accuracy evaluation of individual tree segmentation results based on LiDAR data and image-derived points.

| 3D Data Source | Algorithm | TP | FN | FP | R | P | F |
|---|---|---|---|---|---|---|---|
| LiDAR data | PointNet++ | 121 | 18 | 5 | 0.87 | 0.96 | 0.91 |
| LiDAR data | Li2012 | 117 | 20 | 7 | 0.85 | 0.94 | 0.90 |
| LiDAR data | LSS | 109 | 30 | 5 | 0.78 | 0.96 | 0.86 |
| Image-derived points | PointNet++ | 117 | 15 | 12 | 0.89 | 0.91 | 0.90 |
| Image-derived points | Li2012 | 108 | 30 | 6 | 0.78 | 0.95 | 0.86 |
| Image-derived points | LSS | 82 | 54 | 8 | 0.60 | 0.91 | 0.73 |

Note: TP, true positive; FN, false negative; FP, false positive; R, recall; P, precision; F, F score.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
