Article

A Novel Tree Height Extraction Approach for Individual Trees by Combining TLS and UAV Image-Based Point Cloud Integration

Jiarong Tian, Tingting Dai, Haidong Li, Chengrui Liao, Wenxiu Teng, Qingwu Hu, Weibo Ma and Yannan Xu
1 Centre of Co-Innovation for Sustainable Forestry in Southern China, Nanjing Forestry University, Nanjing 210037, China
2 Nanjing Institute of Environmental Sciences, Ministry of Ecology and Environment, Nanjing 210042, China
3 School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Forests 2019, 10(7), 537; https://doi.org/10.3390/f10070537
Submission received: 19 May 2019 / Revised: 17 June 2019 / Accepted: 26 June 2019 / Published: 27 June 2019
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract
Research Highlights: This study carried out a feasibility analysis of tree height extraction in a high-canopy-density planted coniferous forest by combining terrestrial laser scanner (TLS) and unmanned aerial vehicle (UAV) image-based point cloud data at small and midsize tree farms. Background and Objectives: Tree height is an important factor in forest resource surveys and plays an important role in forest structure evaluation and forest stock estimation. The objectives of this study were to solve the problem of underestimating tree height and to guarantee the precision of tree height extraction in medium- and high-density planted coniferous forests. Materials and Methods: This study developed a novel individual tree localization (ITL)-based tree height extraction method to obtain preliminary results in a planted coniferous forest plot with 107 trees (Metasequoia). The final, more accurate results were then obtained based on the canopy height model (CHM) and CHM seed points (CSP). Results: The registration accuracy of the TLS and UAV image-based point cloud data reached 6 cm. The precision of ITL-based tree height extraction was improved by refining the CHM resolution from 0.2 m to 0.1 m. Due to overlapping forest canopies, the CSP method failed to delineate all individual tree crowns in medium- to high-density forest stands, with a matching rate of about 75%. However, CSP-based tree height extraction was clearly more accurate than the ITL-based method. Conclusions: The proposed method provides a solid foundation for dynamically monitoring forest resources in a high-accuracy and low-cost way, especially in planted tree farms.

1. Introduction

Most small and midsize tree farms experience technical problems when trying to accurately and quantitatively evaluate their forest asset value. Without an accurate valuation, these farms are unable to sell or jointly develop their forest assets, generate corresponding income, or maintain and increase the asset value. For sustainable forest resource management, tree height is an important parameter reflecting volume and site quality [1]. This information is necessary to quantitatively estimate forest stock volume and aboveground biomass and to evaluate forest resource assets. Traditional tree height measurements rely mainly on foresters measuring in the field with an altimeter, which involves a heavy workload, slow progress, and substantial labor and material costs. Since traditional optical remote sensing (such as Landsat imagery) cannot directly capture the vertical structure of a forest canopy, its ability to accurately estimate forest height is limited, and it is therefore difficult to meet the demands of actual forest management practice. In addition, in high-density planted forests, both of these methods produce large errors in tree height measurement because of extensive canopy occlusion. In recent years, light detection and ranging (LiDAR) and unmanned aerial vehicle (UAV) techniques have developed rapidly for the extraction of forest vegetation structural parameters. These techniques offer advantages in acquiring vertical vegetation structure parameters at sub-meter levels [2,3].
LiDAR is an active remote-sensing technology that measures distance by recording the time difference between the emitted and returned signals. LiDAR emits laser pulses to obtain three-dimensional (3D) observations of the object surface [4]. A terrestrial laser scanner (TLS) is mounted on a remote-sensing platform that is fixed or moving on the ground. While the scanner rotates in the two-dimensional (2D) plane, the laser prism rotates in the vertical direction to observe 3D information [5]. Compared with other ground-based laser scanning methods (e.g., wearable, personal, and mobile laser scanning), TLS does not offer advantages in portability and efficiency [6,7,8,9], but it is widely used in forest resource surveys because of its accuracy and stability. In addition, TLS does not require other special equipment (such as a vehicle), and its scanning operation is relatively simple. TLS captures the fine vertical structure of the forest, especially in the understory of the canopy [10], which offers unique advantages in acquiring a high-precision digital elevation model (DEM) [11]. Due to topographic variations and the limited vertical field of view of terrestrial scanners (Figure 1), as well as occlusion by other objects (such as trees, branches, and shrubs) along the laser beam path, it is difficult for TLS to capture the upper canopy and treetops [12]. Therefore, the precision of tree height extraction using TLS alone is insufficient. Moskal and Zheng [13] measured tree height using single-scan TLS data in urban forests, and the root mean square error (RMSE) of the tree height estimation was 0.75 m at the tree level. Seidel et al. [14] used TLS data to observe forestland with sparse planting density, and tree height was generally underestimated.
Airborne laser scanners (ALS) offer advantages over TLS in acquiring a relatively complete forest canopy and retrieving relevant structural parameters at a regional scale. Thus, ALS is widely used to dynamically monitor regional forests [15,16,17], assess forest carbon reserves [18,19], and monitor forest fires [20]. Naesset [21] used ALS data to estimate average stand height, and the results showed a significant correlation (approximately 91%) between the ALS and field-based tree height results. The final estimated height difference was between 0.4 m and 1.9 m, and this approach has since been used extensively to obtain forest structural parameters at landscape or regional scales. Pang et al. [22] used ALS to estimate tree height with an accuracy higher than 87% and an overall accuracy of 90.59%. Liang et al. [4] proposed a tree height measurement method combining ALS-based treetop and TLS-based tree stem location information. Due to its relatively high cost, operational safety issues, and high technical requirements for operators, however, ALS has not been widely adopted by small and midsize tree farms.
In this paper, relatively complete forest canopy information was obtained by replacing ALS with low-cost UAV image-based photogrammetry [23,24,25]. UAVs have become an effective supplement to aerial photogrammetry, with the obvious advantages of high spatial and temporal resolution and great mobility [26,27]. UAV image-based photogrammetry can generate forest canopy point cloud data through dense matching of multi-view images based on the structure from motion (SFM) algorithm and 3D reconstruction techniques [28,29,30]. Generally, the photogrammetric point cloud cannot compete with the TLS point cloud in terms of point coordinate accuracy, but submeter accuracy can be achieved through geometric correction using ground control points (GCPs), reference images, and topographic features [31]. Zhang et al. [30] used a global navigation satellite system (GNSS) to collect GCPs and conduct geometric correction for a photogrammetry-based point cloud, and the localization accuracy reached 0.32 m to 0.69 m. Dandois et al. [31] used GCPs and an air-route matching technique to carry out geometric correction for a reconstructed point cloud, and the localization accuracy reached 0.4 m to 1.4 m.
The objective of this study was to address the low precision of tree height extraction in high-canopy stands when ground-based and aerial remote-sensing measurements are used individually. It presents a new method to extract tree height in a high-canopy stand by combining TLS and UAV image-based point clouds. First, a combination of profile features of obvious landmarks and selected GCPs was investigated, and the registration accuracy of the UAV image-based point cloud and TLS point cloud was evaluated. Second, this study proposed a new tree height extraction method based on the registered UAV and TLS data (hereafter referred to as mixed data) and evaluated the accuracy and efficiency of the extracted tree height results. Third, effective recommendations for forest resource assessment and management planning for small and midsize tree farms were made.

2. Materials

2.1. Study Area

The sample plot of the study area is near the east gate of Nanjing Forestry University in Nanjing City, Jiangsu Province (31°14′–32°37′ N, 118°22′–119°14′ E), in the middle and lower reaches of the Yangtze River, China (Figure 2). Nanjing has a northern subtropical humid climate, with an annual average temperature of approximately 15.4 °C and mean annual precipitation of approximately 1106 mm. The species investigated in the sample plot was Metasequoia (Metasequoia glyptostroboides Hu & W. C. Cheng), which has a straight stem form and belongs to the Cupressaceae family.

2.2. Equipment Introduction and Data Collection

In this study, aerial images of the study area were obtained using a DJI Phantom 4 Pro (Figure 3a). The maximum horizontal flight speed of the Phantom 4 Pro is 20 m/s, and the maximum ascent and descent speeds are 6 m/s and 4 m/s, respectively. The vertical hovering accuracy can reach ±0.1 m, the horizontal hovering accuracy can reach ±0.3 m, the maximum remote-control flight distance is 7000 m, and a single battery lasts 30 min. The aircraft provides five-direction environment sensing and four-direction obstacle avoidance. It is equipped with an integrated gimbal camera with a pitch range of −90° to about +30°. The camera lens has a field of view of 94° (20 mm equivalent focal length), and the effective resolution is 20 megapixels.
The authors collected all of the data at the end of June 2018, when weather conditions were good and the wind speed was 1.6–3.3 m/s. TLS data were collected using the Riegl VZ-400i scanning system (Figure 3b), which includes a scanner host and a camera. The technical specifications are given in Table 1. The scanner host was used to collect the 3D point cloud, and the camera acquired panoramic photos by continuous shooting to provide color (RGB) information for the point cloud. To ensure sufficient overlap between scanning spots, two to three scanning spots were set at the four corners of the sample plot, and the remaining scanning spots were evenly spaced along all four sides. A total of 22 spots were set (Figure 2).
The UAV was controlled using the Pix4D Capture flight control software to obtain UAV aerial images of the study area. This software designs the UAV flight path and reconstructs the 3D model from the captured 2D aerial images using the built-in SFM algorithm. The self-setting flight route function can effectively reduce errors caused by manual operation and can enhance the UAV's stability [32,33]. Although UAV image acquisition usually adopts a single-track route, in this paper a dual-track flight mode was adopted (Figure 4). This improved image quality and point cloud density by increasing the number of images and the degree of image overlap in the sample area [34]. The UAV images were acquired at approximately midday; the flight altitude of the first route was 90 m, the overlap was 90%, the flight area was 100 m × 100 m, and the flight duration was 14 min 26 s. The second route maintained the same flight altitude and overlap rate, and the flight area coverage was changed to 50 m × 50 m with a flight time of 4 min 4 s. The total flight duration was 18 min 30 s, and 125 images were acquired in total. Flight altitudes of 70 m and 120 m were also tested. At 70 m, the volume of image data was too large to process efficiently and yielded no obvious improvement over the point cloud generated at 90 m; at 120 m, the quality of the generated point cloud did not meet our expectations.
Forest inventory work was conducted to obtain the stem location of each tree, the total number of trees, and the GCPs within the sampled forest plot, using a HITARGET iRTK2 receiver with horizontal and vertical accuracies of ±8 mm and ±15 mm, respectively. In addition, for registration purposes, the coordinates of four GCPs visible in both the TLS and UAV data were recorded.

3. Methods

The proposed method (Figure 5) can be summarized in the following three steps: (1) the UAV image-based point cloud and TLS point cloud were registered according to the profile features of obvious landmarks and selected GCPs; (2) the mixed data were preprocessed; and (3) tree height information was extracted at the individual tree level using the novel method proposed in this study, which is suitable for planted coniferous forests.

3.1. Data Registration

The registration process for low-altitude and high-resolution UAV-based aerial images included the following steps:
1. Align photos: All UAV aerial photos were imported into the PhotoScan software, which matched all images using two key parameters, the key point limit and the tie point limit, with default values of 40,000 and 4000, respectively. The key point limit is the maximum number of points detected for each image, and the tie point limit is the maximum number of points used for image matching. To achieve a better result, the software was set to reference pair preselection mode with the highest image-matching accuracy.
2. Optimize camera alignment: Specific optimization parameters were selected (Figure 4) to correct errors caused by coordinate deviations and other factors during fixed-point shooting, including f (focal length); cx, cy (principal point coordinates, i.e., the coordinates where the lens optical axis intersects the sensor plane); k1, k2, k3, k4 (radial distortion coefficients); and p1, p2, p3, p4 (tangential distortion coefficients).
3. Build dense point clouds: According to different research needs, this study used the SFM algorithm and a multi-view stereo algorithm to generate 3D point clouds at the required density (Figure 5a). This process combined the feature matching points and camera optimization parameters obtained in the first two steps. First, the relative positions between photos were calculated from the geometrically matched points, and the coordinates of the matching points between images were computed to generate a sparse 3D point cloud. Second, bundle adjustment and the camera optimization parameters were used to reduce the reconstruction error [35,36]. Finally, a multi-view stereo algorithm was applied to the sparse point cloud, searching each pixel grid in the images to obtain more matching points and generate a dense point cloud [37]. The absolute coordinates of the points were obtained using the GCPs, and the entire process was completed in Agisoft PhotoScan Professional.
The supporting RiSCAN PRO software was used for TLS data registration (Figure 5b). First, coarse registration was performed by manually selecting corresponding (namesake) points between different scanning spots. Then, automatic fine registration was performed using the iterative closest point (ICP) algorithm [38]. The algorithm minimizes an error function by finding the nearest points between two point sets and computing the parameters of the point cloud transformation. Finally, the registered data were obtained using multi-station adjustment.
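As an illustration only (the study itself used RiSCAN PRO), the following minimal sketch shows how an equivalent ICP fine registration between two scan positions could be run with the open-source Open3D library; the file names and parameter values are assumptions, not values from the study.

```python
# Illustrative ICP fine registration between two TLS scan positions using the
# open-source Open3D library; the study itself used RiSCAN PRO.
import numpy as np
import open3d as o3d

def icp_fine_registration(source_file, target_file, init_transform, max_corr_dist=0.05):
    """Refine a coarse alignment (init_transform, 4x4) with point-to-point ICP."""
    source = o3d.io.read_point_cloud(source_file)  # e.g., "scan_pos_02.ply" (hypothetical)
    target = o3d.io.read_point_cloud(target_file)  # e.g., "scan_pos_01.ply" (hypothetical)
    # Optional: downsample to speed up the nearest-neighbour search.
    source = source.voxel_down_sample(voxel_size=0.02)
    target = target.voxel_down_sample(voxel_size=0.02)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, init_transform,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("fitness:", result.fitness, "inlier RMSE (m):", result.inlier_rmse)
    return result.transformation  # 4x4 rigid transform mapping source into target

# Usage: start from the coarse alignment obtained from manually picked tie points.
# coarse = np.eye(4)
# T = icp_fine_registration("scan_pos_02.ply", "scan_pos_01.ply", coarse)
```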
To register the TLS point cloud and UAV image-based point cloud, two separate steps were taken to achieve coarse and fine registration, respectively. The overall UAV point cloud was rotated to achieve coarse registration against the profiles of landmarks (e.g., buildings and road signs). The GCPs were then used for high-precision registration, with the TLS data selected as the reference coordinate system: the 3D coordinates of the GCP positions were extracted from the TLS data and used to transform the UAV point cloud. The final registration results are shown in Figure 5c.
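The following minimal sketch (not the authors' implementation) shows one standard way to estimate the GCP-based transform: a rigid rotation and translation fitted to paired GCP coordinates with the SVD-based Kabsch/Horn solution. The GCP arrays are hypothetical; if the photogrammetric cloud also needed rescaling, a similarity transform with a scale factor would be estimated analogously.

```python
# Minimal sketch (not the authors' code): estimating the rigid transform that maps
# the UAV point cloud into the TLS coordinate system from paired GCP coordinates,
# using the SVD-based Kabsch/Horn solution. The GCP arrays below are hypothetical.
import numpy as np

def rigid_transform_from_gcps(uav_gcps, tls_gcps):
    """uav_gcps, tls_gcps: (N, 3) arrays of corresponding GCP coordinates, N >= 3."""
    uav_c = uav_gcps.mean(axis=0)
    tls_c = tls_gcps.mean(axis=0)
    H = (uav_gcps - uav_c).T @ (tls_gcps - tls_c)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                        # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tls_c - R @ uav_c
    return R, t                                     # x_tls = R @ x_uav + t

def apply_transform(points, R, t):
    return points @ R.T + t

# Residuals at the GCPs can then be checked (the paper reports an RMSE of about 6 cm):
# R, t = rigid_transform_from_gcps(uav_gcps, tls_gcps)
# rmse = np.sqrt(np.mean(np.sum((apply_transform(uav_gcps, R, t) - tls_gcps) ** 2, axis=1)))
```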

3.2. Mixed Data Preprocessing and CHM Acquirement

When the raw mixed data were obtained, several important preprocessing steps were completed. First, noise and outlier points were removed using the statistical outlier removal filter from the Point Cloud Library. This filter performs a statistical analysis for every point, calculating the average distance to its neighbors; if the distance is not within a certain range, the point is treated as noise and removed. Then, ground and non-ground points were separated using the progressive triangulated irregular network densification filter [39,40] in the LiDAR360 software. The filter obtains initial ground seed points through a morphological operation and then removes seed points with large residuals by plane fitting. The remaining ground seed points were used to construct a triangulated irregular network, which was progressively densified to obtain the final ground points. A DEM and a digital surface model (DSM) were generated from the ground points and the denoised points, respectively, through triangulated irregular network interpolation at spatial resolutions of 0.1 m and 0.2 m. A canopy height model (CHM) was then calculated at the same resolutions by subtracting the DEM from the DSM (Figure 5d).
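The study performed the DEM/DSM generation with TIN interpolation in LiDAR360; as a simplified illustration of the CHM = DSM − DEM step only, the sketch below grids already-classified point arrays directly with numpy. All function and variable names are hypothetical, and gap filling by TIN interpolation is omitted.

```python
# Minimal sketch (assumes point arrays are already loaded and classified):
# build DSM and DEM rasters on a common grid and derive the CHM.
import numpy as np

def rasterize_max(points, x_min, y_min, res, n_rows, n_cols):
    """Return a raster holding the maximum z of the points falling in each cell."""
    cols = ((points[:, 0] - x_min) / res).astype(int)
    rows = ((points[:, 1] - y_min) / res).astype(int)
    grid = np.full((n_rows, n_cols), np.nan)
    for r, c, z in zip(rows, cols, points[:, 2]):
        if 0 <= r < n_rows and 0 <= c < n_cols:
            if np.isnan(grid[r, c]) or z > grid[r, c]:
                grid[r, c] = z
    return grid

def build_chm(all_points, ground_points, res=0.1):
    """CHM = DSM - DEM; empty cells would need TIN interpolation in practice."""
    x_min, y_min = all_points[:, 0].min(), all_points[:, 1].min()
    n_cols = int(np.ceil((all_points[:, 0].max() - x_min) / res))
    n_rows = int(np.ceil((all_points[:, 1].max() - y_min) / res))
    dsm = rasterize_max(all_points, x_min, y_min, res, n_rows, n_cols)
    dem = rasterize_max(ground_points, x_min, y_min, res, n_rows, n_cols)
    chm = dsm - dem
    return np.clip(chm, 0, None)   # negative values are treated as ground
```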
It was necessary to locate every tree in the sample plot. The average diameter at breast height (DBH) of the trunks was approximately 0.2 m, so the localization accuracy of a single tree was limited to about 0.2 m. Therefore, the CHM raster with a resolution of 0.2 m was selected for data processing and for the subsequent extraction of tree height. Because the 3D point cloud was highly accurate and contained a very large number of points within each grid cell, the CHM raster with a resolution of 0.1 m was additionally used for precision verification.

3.3. Tree Height Extraction

For LiDAR point clouds, individual tree segmentation is normally required before extracting tree-level forest parameters such as tree height, DBH, and crown width. There are two main ways to separate individual trees: a CHM-based algorithm, namely marker-controlled watershed segmentation [41], and a point-cloud-based algorithm, namely point cloud segmentation (PCS) [42]. The precision of individual tree segmentation, however, directly affects the extraction of forest structural parameters, which are generally underestimated.
In this study, a new extraction method suitable for planted coniferous forests was proposed (Figure 5e). The method follows a coarse-to-fine principle: coarse tree heights were obtained by individual tree localization (ITL), and CHM seed points (CSP) were then used to obtain accurate results.

3.3.1. Tree Height Extraction Based on Individual Tree Localization

When collecting the localization coordinates of individual trees from the mixed data, terrain variation can lead to errors. Therefore, it is necessary to normalize the mixed data to eliminate the impact of terrain on the point cloud and the localization coordinates.
By reading the coordinates and attribute information of the mixed point cloud and matching them to the DEM, this study determined the DEM raster cell corresponding to each point. The DEM value was subtracted from the z value of each point to obtain the normalized data. Points with normalized heights between 1.2 m and 1.4 m were then selected, and a 0.2 m × 0.2 m grid was used to automatically identify occupied cells and mark their centers (Figure 6), which were taken as the localization coordinates of the tree trunks. Two cases occurred during automatic marking: the profile of tree number 1 was fully included in the marked grid cell, whereas the profile of tree number 72 was not completely within the marked cell. This discrepancy could lead to a deviation in tree localization and could affect the accuracy of height extraction. Therefore, data validation and a comparative analysis at a higher resolution of 0.1 m were needed.
After obtaining the ITL coordinates, each tree localization coordinate was matched to the corresponding CHM cell to ensure coordinate system consistency. The value of the grid cell containing the localization point was then automatically extracted as the height of that individual tree. This process was completed using ArcGIS 10.2 software.
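A minimal numpy sketch of this ITL workflow (not the authors' ArcGIS 10.2 procedure) is shown below: slice the normalized cloud at 1.2–1.4 m, mark occupied 0.2 m cells as stem locations, and read the tree height from the CHM cell at each stem. All names are hypothetical; in practice, adjacent occupied cells belonging to the same stem would need to be merged.

```python
# Minimal sketch of ITL-based tree height extraction from normalized points and a CHM.
import numpy as np

def locate_stems(norm_points, cell=0.2, z_low=1.2, z_high=1.4):
    """norm_points: (N, 3) normalized points; returns (M, 2) stem-cell centers.
    The grid is anchored at the coordinate origin for simplicity."""
    sl = norm_points[(norm_points[:, 2] >= z_low) & (norm_points[:, 2] <= z_high)]
    keys = np.unique(np.floor(sl[:, :2] / cell).astype(int), axis=0)  # occupied cells
    return (keys + 0.5) * cell          # center coordinate of each occupied cell

def itl_tree_heights(stems, chm, x_min, y_min, res):
    """Read the CHM value at each stem location; the row/column convention
    depends on how the CHM raster is stored (here row index grows with y)."""
    cols = ((stems[:, 0] - x_min) / res).astype(int)
    rows = ((stems[:, 1] - y_min) / res).astype(int)
    return chm[rows, cols]
```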

3.3.2. Tree Height Extraction Based on CHM Seed Points

This study used a local maximum algorithm to extract tree height and used the projection distance D between the coordinates of the highest point and the ITL coordinates to determine whether the tree height value was valid. Search windows of different sizes were used to filter the local maxima of the CHM as seed points. The specific algorithm was as follows (a code sketch implementing this search is given after the equations below):
Title: CSP algorithm
Input:
1. CHM raster data at a given resolution extracted from the mixed point cloud (TIF format);
2. The ITL coordinate information: Li = {(Xi, Yi) | i = 1, 2, 3, …, n}, where n is the number of trees; and
3. The distance threshold d, the height threshold H, and the search window size SIZE.
Output:
1. An image of the seed points Si and the ITL points Li on the CHM; and
2. If Di ≤ d, output Si, (Xsi, Ysi), and Vh; if Di > d, output False.
Steps:
1. Read the CHM raster information, including the origin coordinate (X0, Y0) of the raster data, the resolution R (m), and the raster values. From the origin coordinate, the resolution, and the raster row and column numbers (A, B), the localization coordinate (Xp, Yp) of any raster cell can be obtained (see the equations below).
2. Apply a Gaussian smoothing filter and a local maximum filter to the CHM raster data. According to the search window size SIZE and the height threshold H, the point with the largest value within each search window is marked as a seed point Si; obtain the coordinates (Xsi, Ysi) of Si and its corresponding tree height value Vh.
3. Read the ITL coordinate information and load the coordinates of the seed points Si. Project both sets of points onto the XOY plane, take the projection distance between each seed point and its nearest ITL point as Di, and compare Di with d.
The projection distance between the tree trunk localization and the seed point indicates the degree of deviation of trunk growth (Figure 7a). The planting spacing of the sample plot was approximately 3 m, and the canopies overlapped substantially. Therefore, a projection distance threshold d was set so that only seed points whose projected locations were close to an ITL point were accepted, which kept the precision of the extracted tree heights high. If a seed point exceeded the threshold, it was regarded as a recognition error; otherwise, the raster value at the seed point was taken as the height of that individual tree, and the precise height of the seed point was obtained by raster interpolation. Because the data also contained many low shrub and vegetation points, a height threshold H was set to prevent such false seed points from being included. The raster cell coordinates were computed as follows:
Xp = X0 ± R × A
Yp = Y0 ± R × B
where (X0, Y0) is the origin coordinate of the raster, (Xp, Yp) is the localization coordinate of any raster cell, R is the resolution of the CHM (m), and (A, B) are the raster row and column numbers.
The seed point matching results were comprehensively evaluated under different window sizes according to the correctness rate and the matching rate. The correctness rate was defined as the ratio of the number of matched seed points to the total number of identified seed points, and the matching rate was defined as the ratio of the number of matched seed points to the total number of trees. For example, with the 1 m × 1 m window, 79 of 118 identified seed points were matched (correctness rate 66.94%), and those 79 points corresponded to 75.24% of the 105 live trees (Table 3).
The number of individual trees detected by CSP was less than the total number of trees. Because of extensive crown overlap, the seed points of many relatively low trees could not be identified (Figure 7b). Thus, the completeness of tree height extraction by CSP was relatively poor.

4. Results

4.1. Validation of Mixed Data Registration Accuracy

In this study, the coordinate system of the TLS data was used as the reference coordinate system, and the UAV data were registered into it by combining the profile features of the landmarks and the GCPs. In addition to the matching accuracy of the obtained mixed data, the coordinates of corresponding points in the UAV data and the mixed point cloud also needed to be verified.
Five corresponding ground objects in the UAV data and the mixed data were selected as test points to verify the registration accuracy. P01 is a corner point of the green belt, P02 and P03 are obvious road landmark points, and P04 and P05 are street lamp points. The error column in Table 2 shows the average error of each point in the three directions, with a minimum error of 0.049 m, a maximum error of 0.083 m, and an RMSE of 0.060 m.

4.2. Height Extraction

4.2.1. ITL Extraction Results

In the process of extracting the ITL coordinates, the localization accuracy of the real-time kinematic (RTK) survey for the GCPs reached the millimeter level, and the mixed data below the canopy were obtained essentially from the TLS, whose point cloud accuracy also reached the millimeter level; thus, the localization accuracy of the sample trees was reliable.
Localization coordinates of 107 trees (Metasequoia) were obtained from the mixed point cloud data. According to the field survey, there were 107 trees in the sample plot, so the ITL detection rate reached 100%. Trees number 6 and number 36 were dead and were excluded from tree height extraction; thus, a total of 105 tree height values were obtained.
Figure 8 shows that the individual tree localizations generally fell within the canopy area, which verified the feasibility of quickly extracting tree height through ITL. As the grid size decreased, the CHM image became more detailed and the canopy edges were closer to the actual situation in the plot. The maximum tree heights extracted at the two resolutions were 26.98 m and 26.99 m, respectively, while the z value of the highest point in the normalized mixed point cloud was 27.01 m, which indicated that the precision of tree height extraction from the high-resolution CHM fully met the requirements.

4.2.2. CSP Extraction Results

The CSP method was used to match the highest point of each individual tree with its localization point; the height threshold H, the distance threshold d, and the search window size had to be determined to obtain accurate matching results. According to the actual survey data of the sample plot, H was set to 10 m and d to 1 m, and four window sizes were selected according to the degree of canopy overlap and tree spacing: 1 m × 1 m, 1.5 m × 1.5 m, 2 m × 2 m, and 3 m × 3 m. When the window was smaller than 1 m × 1 m, the number of identified seed points increased sharply and the correctness rate decreased significantly, so the window size was not reduced further. Distance matching was performed between the extracted seed points and the localization points, and the results are shown in Table 3.
This study found that more trees were matched as the window size decreased. The number of matched seed points increased with each reduction in window size, but the increments became smaller. Although the matching rate with ITL increased from 51.43% to 75.24%, the correctness rate of seed point identification decreased from 87.10% to 66.94%. Because some trees were relatively low and their canopies were covered by those of taller trees, their seed points could not be extracted.

4.2.3. Tree Height Extraction Results and Precision Evaluation

Tree heights were extracted by ITL from the CHM at resolutions of 0.2 m and 0.1 m, compared with the field-measured tree heights in scatter plots, and the precision of the regression models was analyzed (Figure 9a,b). The R2 values of the tree heights extracted from the CHM at resolutions of 0.2 m and 0.1 m were 0.849 and 0.895, respectively, indicating that improving the CHM resolution can increase the precision of tree height extraction to a certain extent.
After distance matching of the seed points according to the CSP method, a total of 79 seed points matched with ITL points was obtained, and the agreement between CSP-derived and field-measured tree heights was analyzed (Figure 9c). The R2 value was 0.981, indicating that the accuracy of tree height extraction by CSP was very high.
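The statistics shown in Figure 9 were computed by the authors; purely as a reproducibility aid, the hedged sketch below shows one way such an R2 and an RMSE can be computed for extracted versus field-measured heights with SciPy. The arrays in the usage comment are made up.

```python
# Minimal sketch (hypothetical arrays): evaluating extracted vs. field-measured
# tree heights with a linear-regression R^2 and an RMSE, as in Figure 9.
import numpy as np
from scipy.stats import linregress

def evaluate_heights(extracted, measured):
    """extracted, measured: 1D arrays of tree heights (m) for the same trees."""
    fit = linregress(measured, extracted)
    r_squared = fit.rvalue ** 2
    rmse = np.sqrt(np.mean((extracted - measured) ** 2))
    return r_squared, rmse

# Example with made-up values:
# r2, rmse = evaluate_heights(np.array([25.1, 26.3, 24.8]), np.array([25.0, 26.5, 24.6]))
```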
When ITL was used to extract tree height, several trees with large errors were found to be seriously overestimated. The canopies of these lower trees were covered by the canopies of taller trees, leading to higher height values being extracted from the CHM; this also explains the low matching rate when identifying seed points through CSP.

5. Discussion

5.1. Data Registration Accuracy

To achieve accurate registration, this study registered the TLS point cloud and the UAV image-based point cloud based on the 3D coordinates of the GCPs and the profile features of the landmarks. The registration accuracy of the mixed point cloud directly affects the precision of the subsequent preprocessing products and parameter extraction. The coordinate system of the TLS point cloud was selected as the reference coordinate system because of its high accuracy, and the UAV image-based point cloud, generated by 3D reconstruction from 2D aerial images, was registered into it.
According to the 3D coordinates of the test points, the RMSE was 6 cm; thus, the registered data obtained by this method can be widely used in forestry research. Due to the limitations of the TLS equipment, however, the method can be applied only to small areas and cannot provide 3D data at the regional scale, which is a deficiency of this approach. Nevertheless, the combination of UAV and TLS equipment is well suited to the needs of small and midsize tree farms.
Currently, there are many methods for integrating aerial photographic data with ground data. Aicardi et al. [43] showed that the SFM algorithm could be used to obtain a forest digital model with an accuracy of 3 cm, which was applied to extract forest parameters. In addition, this study used the inherent geometrical constraint method [44] to conduct coarse registration of the data. Deep-learning algorithms, which have been used widely in image and data classification, could also be applied to data registration.

5.2. Parameters Extraction

The extraction of vegetation structure parameters from 3D point cloud data is generally based on the results of individual tree segmentation. Owing to the completeness of the mixed data, the underestimation of tree height caused by locally missing data in TLS or ALS point clouds was compensated for. The completeness and accuracy of individual tree segmentation are key, and many related algorithms have been developed. Li et al. [42] used the PCS algorithm, combining a region-growing model and a threshold method, to separate individual trees in mixed forests. Tao et al. [45] used metabolic ecology theory and a comparative shortest-path algorithm to extract individual crown widths. This study also used the PCS algorithm to conduct individual tree segmentation and extract tree height; due to misclassification and over-segmentation caused by overlapping crowns, however, the precision of the tree heights extracted in this way was not satisfactory.
The method proposed in this study met the precision requirements of forestry investigation. ITL emphasized localization accuracy as well as CHM resolution. A 0.2 m × 0.2 m marking grid was used to extract the ITL coordinates because the average DBH of the trees was approximately 0.2 m, and the marking error was further reduced by optimizing the CHM resolution; thus, tree height was extracted more accurately using the finer 0.1 m CHM. CSP emphasized the matching rate of seed points. As the search window size became smaller, the number of identified seed points increased, but the number of matches accepted by the distance threshold approached a saturation value, which was determined by stand density and growth conditions. Generally, if a stand is relatively uniform and free of strong competition, such as a tall canopy blocking most of the sunlight from its neighbors, the number of matched points will be closer to the actual number of trees. For high-density coniferous forests, however, the canopy still exhibits extensive coverage and crown overlap. Therefore, the inability to identify every tree is a weakness of this method, but its high accuracy shows great potential for precision forestry.

5.3. Suggestions for Tree Farm Resource Surveys

For small and midsize tree farms, the combination of UAV and TLS equipment provides a convenient means of resource surveying. Apart from large-scale industrial operations and natural disasters, the topography of a tree farm does not change for a long time. Therefore, TLS can be used to scan the topography of the tree farm once to obtain a high-precision DEM and ITL coordinates for the whole farm, and a resource information bank can be established in the order of area, sub-compartment, and tree ID number. The UAV can then be used for annual data collection. The CSP method is preferred for correctly extracting the heights of matched trees, and the ITL method can be used to extract the heights of the remaining trees. In this way, tree farm resources can be dynamically monitored year by year.

6. Conclusions

Our results showed that it is feasible to use UAV and TLS mixed data to extract tree heights in medium- and high-density planted coniferous forests. The methods proposed in this study improved the accuracy and efficiency of tree height extraction. Stand volume and carbon sequestration can be quantitatively estimated from measured canopy structural parameters, allowing the economic and ecological benefits of tree farms to be evaluated. The results showed that the ITL-based approach estimated tree height more accurately when the CHM resolution was increased, and that differences in stand density and canopy structure affected the ability to identify seed points with CSP. The best option is to combine ITL and CSP to extract tree height information in forest resource surveys. For broad-leaved forests, most tree height values could likely be obtained by CSP. In addition, the accuracy of the registered data and the ability to identify seed points could be further improved by introducing deep-learning algorithms, which would improve the quality of both the registered data and the parameter extraction results.

Author Contributions

Conceptualization, J.T. and Y.X.; methodology, J.T. and T.D.; software, J.T. and W.T.; formal analysis, H.L.; investigation, C.L. and T.D.; resources, Y.X. and H.L.; data curation, W.M.; writing—original draft preparation, J.T.; writing—review and editing, Q.H. and H.L.; visualization, J.T.; supervision, Y.X.; project administration, Y.X.; funding acquisition, H.L.

Funding

The research was supported by a basic special business fund for research and development for the central level scientific research institutes, Nanjing Institute of Environmental Sciences, Ministry of Ecology and Environment (GYZX190101), and the National Key R&D Program of China (2018YFD1100104).

Acknowledgments

We would like to thank Guang Zheng for the help in the revision of manuscript, and we are particularly grateful to the following lab members for their help and assistance with field work: Jing Zhang, Longbin Song, Yangyang Sun and Weicheng Hua.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Miłosz, M.; Krzysztof, S.; Anahita, K. Testing and evaluating different LiDAR-derived canopy height model generation methods for tree height estimation. Int. J. Appl. Earth Obs. 2018, 71, 132–143. [Google Scholar]
  2. Maas, H.G.; Bienert, A.; Scheller, S.; Keane, E. Automatic forest inventory parameter determination from terrestrial laser scanner data. Int. J. Remote Sens. 2008, 29, 1579–1593. [Google Scholar] [CrossRef]
  3. Li, H.D.; Gao, J.X.; Hu, Q.W.; Li, Y.K.; Tian, J.R.; Liao, C.R.; Ma, W.B.; Xu, Y.N. Assessing revegetation effectiveness on an extremely degraded grassland with terrestrial LiDAR, southern Qinghai-Tibetan Plateau. Agr. Ecosyst. Environ. 2019, 282, 13–22. [Google Scholar] [CrossRef]
  4. Liang, X.L.; Kankare, V.; Hyyppä, J.; Wang, Y.; Kukko, A.; Haggrén, H.; Yu, X.; Kaartinen, H.; Jaakkola, A.; Guan, F.; et al. Terrestrial laser scanning in forest inventories. ISPRS J. Photogram. Remote Sens. 2016, 115, 63–77. [Google Scholar] [CrossRef]
  5. Liang, X.L.; Hyyppä, J.; Kaartinen, H.; Lehtomaki, M.; Pyorala, J.; Pfeifer, N.; Holopainen, M.; Brolly, G.; Francesco, P.; Hackenberg, J. International benchmarking of terrestrial laser scanning approaches for forest inventories. ISPRS J. Photogram. Remote Sens. 2018, 144, 137–179. [Google Scholar] [CrossRef]
  6. Bauwens, S.; Bartholomeus, H.; Calders, K.; Lejeune, P. Forest inventory with terrestrial LiDAR: A comparison of static and hand-held mobile laser scanning. Forests 2016, 7, 127. [Google Scholar] [CrossRef]
  7. Cabo, C.; Del Pozo, S.; Rodríguez-Gonzálvez, P.; Ordóñez, C.; González-Aguilera, D. Comparing terrestrial laser scanning (TLS) and wearable laser scanning (WLS) for individual tree modeling at plot level. Remote Sens. 2018, 10, 540. [Google Scholar] [CrossRef]
  8. Liang, X.; Kukko, A.; Kaartinen, H.; Hyyppä, J.; Yu, X.; Jaakkola, A.; Wang, Y. Possibilities of a personal laser scanning system for forest mapping and ecosystem services. Sensors 2013, 14, 1228–1248. [Google Scholar] [CrossRef]
  9. Oveland, I.; Hauglin, M.; Giannetti, F.; Schipper Kjørsvik, N.; Gobakken, T. Comparing three different ground based laser scanning methods for tree stem detection. Remote Sens. 2018, 10, 538. [Google Scholar] [CrossRef]
  10. Pang, Y.; Li, Z.Y.; Chen, E.X.; Sun, G.Q. Lidar Remote Sensing Technology and Its Application in Forestry. Sci. Silvae Sin. 2005, 41, 129–136. [Google Scholar]
  11. Zhao, Y.Y.; Hu, Q.W.; Li, H.D.; Wang, S.H.; Ai, M.Y. Evaluating Carbon Sequestration and PM2.5 Removal of Urban Street Trees Using Mobile Laser Scanning Data. Remote Sens. 2018, 10, 1759. [Google Scholar] [CrossRef]
  12. Van Leeuwen, M.; Nieuwenhuis, M. Retrieval of forest structural parameters using LiDAR remote sensing. Eur. J. For. Res. 2010, 129, 749–770. [Google Scholar]
  13. Moskal, L.M.; Zheng, G. Retrieving forest inventory variables with terrestrial laser scanning (TLS) in urban heterogeneous forest. Remote Sens. 2011, 4, 1–20. [Google Scholar] [CrossRef]
  14. Seidel, D.; Fleck, S.; Leuschne, C. Analyzing forest canopies with ground-based laser scanning: A comparison with hemispherical photography. Agr. For. Meteorol. 2012, 154-155, 1–8. [Google Scholar] [CrossRef]
  15. Cao, L.; Coops, N.C.; Innes, J.L.; Sheppard, S.R.J.; Fu, L.Y.; Ruan, H.H.; She, G.H. Estimation of forest biomass dynamics in subtropical forests using multi-temporal airborne LiDAR data. Remote Sens. Environ. 2016, 178, 158–171. [Google Scholar] [CrossRef]
  16. Zhang, Z.N.; Cao, L.; She, G.H. Estimating Forest Structural Parameters Using Canopy Metrics Derived from Airborne LiDAR Data in Subtropical Forests. Remote Sens. 2017, 9, 940. [Google Scholar] [CrossRef]
  17. Shen, X.; Cao, L.; Chen, D.; Sun, Y.; Wang, G.; Ruan, H. Prediction of forest structural parameters using airborne full-waveform LiDAR and hyperspectral data in subtropical forests. Remote Sens. 2018, 10, 1729. [Google Scholar] [CrossRef]
  18. Weiskittel, A.R.; Hann, D.W.; Kershaw, J.A.; Vanclay, J.K. Forest Growth and Yield Modeling. In Bibliography; John Wiley & Sons: Hoboken, NJ, USA, 2011; pp. 327–395. [Google Scholar]
  19. Bettinger, P.; Lennette, M.; Johnson, K.N.; Spies, T.A. A hierarchical spatial framework for forest landscape planning. Ecol. Model. 2005, 182, 25–48. [Google Scholar] [CrossRef]
  20. Kane, V.R.; North, M.P.; Lutz, J.A.; Churchill, D.J.; Roberts, S.L.; Smith, D.F.; McGaughey, R.J.; Kane, J.T.; Brooks, M.L. Assessing fire effects on forest spatial structure using a fusion of Landsat and airborne LiDAR data in yosemite national park. Remote Sens. Environ. 2014, 151, 89–101. [Google Scholar] [CrossRef]
  21. Naesset, E. Determination of mean tree height of forest stands using airborne laser scanner data. ISPRS J. Photogram. Remote Sens. 1997, 52, 49–56. [Google Scholar] [CrossRef]
  22. Pang, Y.; Zhao, F.; Li, Z.Y.; Zhou, S.F.; Deng, G. Forest Height Inversion using Airborne Lidar Technology. J. Remote Sens. 2008, 12, 152–158. [Google Scholar]
  23. Gomez, C.; Purdie, H. UAV- based photogrammetry and geocomputing for Hazards and Disaster Risk Monitoring—A Review. Geoenviron. Disasters 2016, 3, 23. [Google Scholar] [CrossRef]
  24. Ai, M.Y.; Hu, Q.W.; Li, J.Y.; Wang, M.; Yuan, H.; Wang, S.H. A robust photogrammetric processing method of low-altitude UAV images. Remote Sens. 2015, 7, 2302–2333. [Google Scholar] [CrossRef]
  25. Liu, Q.W.; Li, S.M.; Li, Z.Y.; Fu, L.Y.; Hu, K.L. Review on the applications of UAV-based LiDAR and photogrammetry in forestry. Sci. Silvae Sin. 2017, 7, 134–148. [Google Scholar]
  26. Patricio, M.C.; Francisco, A.V.; Fernando, C.R.; Francisco-Javier, M.C.; Alfonso, G.F.; Fernando-Juan, P.P. Assessment of UAV-photogrammetric mapping accuracy based on variation of ground control points. Int. J. Appl. Earth Obs. 2018, 72, 1–10. [Google Scholar]
  27. Francisco, A.V.; Fernando, C.R.; Patricio, M.C. Assessment of photogrammetric mapping accuracy based on variation ground control points number using unmanned aerial vehicle. Measurement 2017, 98, 221–227. [Google Scholar]
  28. Dandois, J.P.; Ellis, E.C. Remote sensing of vegetation structure using computer vision. Remote Sens. 2010, 2, 1157–1176. [Google Scholar] [CrossRef]
  29. Jonathan, L.; Marc, P.D.; Stephanie, B. A photogrammetric workflow for the creation of a forest canopy height model from small unmanned aerial system imagery. Forests 2013, 4, 922–944. [Google Scholar]
  30. Zhang, J.; Hu, J.; Lian, J.; Fan, Z.; Ouyang, X.; Ye, W. Seeing the forest from drones: Testing the potential of lightweight drones as a tool for long-term forest monitoring. Biol. Conserv. 2016, 198, 60–69. [Google Scholar] [CrossRef]
  31. Dandois, J.P.; Ellis, E.C. High spatial resolution three-dimensional mapping of vegetation spectral dynamics using computer vision. Remote Sens. Environ. 2013, 136, 259–276. [Google Scholar] [CrossRef] [Green Version]
  32. Ahmadabadian, A.H.; Robson, S.; Boehm, J.; Shortis, M.; Wenzel, K.; Fritsch, D. A comparison of dense matching algorithms for scaled surface reconstruction using stereo camera rigs. ISPRS J. Photogramm. 2013, 78, 157–167. [Google Scholar] [CrossRef]
  33. Frirsch, D.; Khosravani, A.M.; Cefalu, A.; Wenzel, K. Multi-Sensors and Multiray Reconstruction for Digital Preservation. In Photogrammetric Week 2011; Wichmann: Berlin, Germany, 2011; pp. 305–323. [Google Scholar]
  34. Jun, M. Application of UAV Remote Sensing in Tree Parameters Extraction in Plantation Forest. Master’s Thesis, Nanjing Forestry University, Nanjing, China, June 2018. [Google Scholar]
  35. Zaragoza, L.M.E.; Caroti, G.; Piemonte, A.; Riedel, B.; Tengen, D.; Niemeier, W. Structure from motion (SfM) processing of UAV images and combination with terrestrial laser scanning, applied for a 3D-documentation in a hazardous situation. J. Assoc. Inf. Syst. 2017, 18, 1492–1505. [Google Scholar]
  36. Cao, M.W.; Li, S.J.; Jia, W.; Li, S.L.; Liu, X.P. Robust bundle adjustment for large-scale structure from motion. Multimed Tools Appl. 2017, 76, 21843–21867. [Google Scholar] [CrossRef]
  37. Robleda, P.G.; Caroti, G.; Zaragoza, I.M.; Piemonte, A. Computational vision in UV-mapping of textured meshes coming from photogrammetric recovery: unwrapping frescoed vault. ISPRS Int. Arch. Photogramm. Remote Sens. 2016, XLI–B5, 391–398. [Google Scholar] [CrossRef]
  38. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
  39. Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 111–118. [Google Scholar]
  40. Zhao, X.Q.; Guo, Q.H.; Su, Y.J.; Xue, B.L. Improved progressive TIN densification filtering algorithm for airborne LiDAR data in forested areas. ISPRS J. Photogramm. Remote Sens. 2016, 117, 79–91. [Google Scholar] [CrossRef] [Green Version]
  41. Chen, Q.; Baldocchi, D.; Gong, P.; Kelly, M. Isolating individual trees in a savanna woodland using small footprint lidar data. Photogramm. Eng. Remote Sens. 2006, 72, 923–932. [Google Scholar] [CrossRef]
  42. Li, W.K.; Guo, Q.H.; Jakubowski, M.K.; Kelly, M. A new method for segmenting individual trees from the lidar point cloud. Photogramm. Eng. Remote Sens. 2012, 78, 75–84. [Google Scholar] [CrossRef]
  43. Aicardi, I.; Dabove, P.; Lingua, A.M.; Piras, M. Integration between TLS and UAV photogrammetry techniques for forestry applications. iForest 2016, 10, 41–47. [Google Scholar] [CrossRef]
  44. Zhang, W.M.; Zhao, J.; Chen, M.; Chen, Y.M.; Yan, K.; Li, L.Y.; Qi, J.B.; Wang, X.Y.; Luo, J.H.; Chu, Q. Registration of optical imagery and LiDAR data using an inherent geometrical constraint. Opt Express 2015, 23, 7694–7702. [Google Scholar] [CrossRef] [PubMed]
  45. Tao, S.L.; Wu, F.F.; Guo, Q.H.; Wang, Y.C.; Li, W.K.; Xue, B.L.; Hu, X.Y.; Li, P.; Tian, D.; Li, C.; et al. Segmenting tree crowns from terrestrial and mobile lidar data by exploring ecological theories. ISPRS J. Photogramm. Remote Sens. 2015, 110, 67–76. [Google Scholar] [CrossRef]
Figure 1. Missing information in the upper canopy of the TLS scanning observations because of the vertical scanning angle.
Figure 2. The study area and locations of scanning plots; the location of the sample spot is shown in the lower-right subfigure.
Figure 3. (a) DJI Phantom 4 Pro and (b) Riegl VZ-400i terrestrial laser scanner.
Figure 4. UAV flight routes and error estimates. The Z error is represented by the ellipse color, and the X and Y errors are represented by the ellipse shape. The estimated camera locations are marked with black dots, and the flight routes connect these dots.
Figure 5. Flowchart of tree height extraction for planted coniferous forests using TLS and UAV point cloud data: (a) UAV image-based point cloud; (b) TLS point cloud; (c) mixed point cloud; (d) CHM obtained by point cloud preprocessing; and (e) tree height extraction method.
Figure 6. The localization of individual trees. A 0.2 m × 0.2 m grid was used to automatically identify occupied cells and mark their centers.
Figure 7. The distance threshold method in CSP: (a) the projection distance D between each seed point and the nearest ITL point; and (b) ITL points for which no matching seed point could be identified.
Figure 8. The ITL points in the CHM of the sample plot: (a) CHM resolution of 0.1 m; (b) CHM resolution of 0.2 m. The black box highlights the difference between the two CHM resolutions.
Figure 9. Comparison of ITL-derived, CSP-derived, and field-measured tree heights: (a) ITL, 0.2 m resolution; (b) ITL, 0.1 m resolution; and (c) CSP, 0.2 m resolution.
Table 1. Riegl VZ-400i terrestrial scanning system technical specifications.

Parameter                      Specification
Measurement distance range     1.5 m (min a) to 800 m (max)
Laser transmitting frequency   500,000 points per second
Ranging precision              5 mm @ 100 m
Field of view                  360° (horizontal); 100° total (+60°/−40°) (vertical)
a The minimum measurement distance (scanning blindness) comes from the scanning angle in the vertical direction of the LiDAR system and the length of the support (Figure 1).
Table 2. Three-dimensional coordinates of five corresponding ground object points in the UAV and mixed data (U represents UAV data, M represents mixed data; the unit is m).

ID    U-X      U-Y       U-Z       M-X       M-Y       M-Z      Error
P01   54.927   −62.316   198.687   9.745     −51.615   0.801    0.083
P02   65.978   −50.230   197.551   20.742    −38.953   −0.182   0.048
P03   68.679   −36.707   196.332   23.060    −25.588   −1.303   0.068
P04   58.841   −25.852   195.569   12.922    −14.875   −2.002   0.054
P05   27.621   −28.334   196.313   −18.200   −18.058   −1.534   0.049
Table 3. Seed point matching results.

Window Size     Identified Seed Points   Matched Seed Points   Correctness Rate   Matching Rate
1 m × 1 m       118                      79                    66.94%             75.24%
1.5 m × 1.5 m   93                       75                    80.65%             71.43%
2 m × 2 m       84                       72                    85.71%             68.57%
3 m × 3 m       62                       54                    87.10%             51.43%
