Article

Automatic Extraction of Forest Inventory Variables at the Tree Level by Using Smartphone Images to Construct a Three-Dimensional Model

College of Computer and Control Engineering, Northeast Forestry University, Harbin 150040, China
* Author to whom correspondence should be addressed.
Forests 2023, 14(6), 1081; https://doi.org/10.3390/f14061081
Submission received: 17 April 2023 / Revised: 17 May 2023 / Accepted: 17 May 2023 / Published: 24 May 2023
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

Abstract

This paper addresses the urgent demand for accurate measurement of forest inventory variables in forestry carbon sink measurement, ecosystem research, and forest resource conservation, and proposes a new method that constructs a three-dimensional measurement model of forest inventory variables from images to realize their automatic extraction. The method obtains sample plot information from high-definition images captured in the forest with a smartphone, which significantly improves field efficiency, simplifies operation, and effectively alleviates the long field operation times, complicated procedures, and expensive equipment associated with current methods for obtaining forest inventory variables. We propose optimizing the Eps parameter of the DBSCAN point cloud clustering algorithm with the multi-verse optimizer (MVO) to obtain single-tree point clouds, which improves the accuracy of the model and effectively reduces interference from human factors. The scale coefficients between the image-based model and the real scene are obtained from field measurements of tree height and diameter at breast height to complete the construction of the three-dimensional measurement model of the stand, which is then combined with the AdQSM algorithm to realize the automatic extraction of forest inventory variables, providing a new interdisciplinary method for their comprehensive extraction. The accuracy of the model in an experimental sample plot of Fraxinus mandshurica Rupr. was as follows: the absolute error of tree height measurement ranged from 0.05 to 0.37 m, with a maximum relative error of 2.03% and an average relative error of 1.53%; the absolute error of diameter at breast height measurement ranged from 0.007 to 0.057 m, with a maximum relative error of 7.358% and an average relative error of 3.616%. The proposed method can be applied directly to the acquisition and visualization of forest inventory variables in ecological research, offers good flexibility, and can meet individual research needs.

1. Introduction

Forests are the largest carbon reservoir in terrestrial ecosystems [1]. They are capable of absorbing atmospheric carbon dioxide and fixing it in vegetation or soil, reducing atmospheric carbon dioxide concentrations [2]. In order to reduce atmospheric CO2, it is necessary to have knowledge of forest inventory variables (e.g., stand species, diameter at breast height, tree height, etc.) so that forests can be managed more rationally [3,4]. In addition to their importance for carbon sinks, forest inventory variables are also essential for forest inventories and the assessment of forest components [5,6]. However, among forest inventory variables, only diameter at breast height can be measured directly; other variables are difficult to measure and are prone to large measurement errors. The standing tree, as the basic component of a forest stand, is a spatially irregularly distributed object, which makes it difficult to measure forest inventory variables accurately, so it is imperative to construct a three-dimensional measurement model of the stand [7]. A three-dimensional model of a forest stand enables the visual analysis of forest inventory variables, improves their measurement accuracy, and reduces the measurement difficulty. The construction of 3D forest stand measurement models has fundamentally changed tree measurement and monitoring. There are two approaches to the three-dimensional construction of forest stands: one is modeling based on geometric parameters [8,9] and the other is modeling based on stand images [10]. The geometric parameter-based approach obtains the stand 3D point cloud directly through a 3D scanning device, whereas the stand image-based approach requires the alignment of images to generate the stand 3D point cloud. If the acquisition of geometric parameters and stand images in the 3D modeling process is collectively referred to as information acquisition, then information acquisition can be divided into two types, on-forest acquisition and in-forest acquisition, depending on the location of the acquisition device [11,12].
On-forest acquisition includes satellite data, airborne LiDAR scanning, UAV-borne LiDAR, and UAV-borne RGB cameras [13,14], of which the two methods using satellite data and airborne LiDAR scanning are suitable for the large-scale measurement of forest inventory variables. Yue Pan et al. [15] proposed a three-dimensional reconstruction of ground crops based on airborne LiDAR technology, using airborne LiDAR to obtain three-dimensional information on ground crops directly. The method greatly improves the efficiency of cultivation statistics and can also be used to monitor crops on the ground. However, the crops in that experimental sample site are much lower than forest stands. When airborne LiDAR or RGB cameras are applied to the three-dimensional reconstruction of forest stands, although detection of the top of the canopy is not affected by occlusion, the accurate detection of tree tops and the mapping of the ground are affected by various factors, including forest structure, LiDAR pulse density, scanning angle, platform height, and beam size, so the model's accuracy is not high [16]. At the same time, the complex internal structure of forest stands restricts the flight of UAVs. UAV data collection is also time-consuming and costly, the use of UAVs requires professional staff, and it raises many safety issues. In forest measurement, there is no guarantee of space for UAVs to take off and land, so the use of UAVs places high demands on the actual environment.
In-forest acquisition includes both LiDAR scanning and RGB cameras [17]. Indu Indirabai et al. [18] proposed a terrestrial laser scanner (TLS)-based 3D reconstruction of trees with leaf area index retrieval in a forest environment [19]. In 2019, Gaia Vaglio Laurin et al. [20] divided a sample plot into 70 sub-samples of 10 m × 10 m and set 88 scanning positions using two Riegl VZ-400i terrestrial laser scanners, each equipped with 6 reflectors for information acquisition; this represents a large and time-consuming workload for information acquisition and data processing, although it can guarantee point cloud measurement accuracy better than 1 cm. However, using such high-precision LiDAR equipment to obtain information is not only very expensive but also complicated to operate. In the actual measurement process, due to factors such as the angular scanning resolution of the TLS device, beam divergence, and occlusion, the number of laser pulses entering stands of different structures is limited, resulting in large differences in point cloud density, which affects the correct determination of the size of the stand components. The main reason is that the accuracy of estimating forest inventory variables using quantitative structure modeling (QSM) algorithms depends on a uniformly high density of the point cloud [21], yet obtaining uniform high-density point clouds with TLS is still challenging [22].
Low-cost photogrammetric methods have significant advantages over high-cost ground-based LiDAR for the acquisition of 3D data [23,24]. In 2014, Liang et al. used a handheld camera for the 3D mapping of individual trees in a forest sample plot, successfully segmenting the main trunk of a single standing tree and measuring its diameter at breast height [25]. To improve the measurement accuracy of diameter at breast height, in 2019, Livia Piermattei et al. proposed a structure-from-motion-based terrestrial photogrammetry technique to obtain clear 3D structures within 3 m of trunk height [26], with a diameter measurement error of approximately 1 cm. To further improve measurement efficiency, in 2021, Martin Mokros et al. proposed the use of a multi-camera prototype (MultiCam, 2021, Brno, Czech Republic) to acquire images of sample plots, enabling the method to be applied to forest inventories [27]. To explore 3D modeling methods for single standing trees containing a canopy, in 2015, Jordan Miller et al. demonstrated the potential of using a low-cost handheld camera together with structure-from-motion and multi-view stereo-photogrammetry (SfM-MVS) to accurately measure trees [28]. This method has the advantages of low cost and efficient measurement of diameter at breast height and can achieve the same measurement accuracy as TLS. However, the method has only been tested on potted single dwarf trees and is not yet ready for application in forests, mainly because images of isolated single trees cannot be obtained in a forest. Images acquired in a forest sample plot inevitably contain multiple trees, so single-tree segmentation after generating the 3D point cloud becomes a difficult problem.
To solve this problem, Sebastian Dersch et al. proposed a tree segmentation method for high-density airborne LiDAR point clouds that combines graph-cut clustering with object-based trunk detection; it supports automatic trunk detection but has a requirement for laser point density, with a trunk LiDAR point density of at least 5 points/m needed for the automatic trunk detection technique to successfully locate the trunk [29]. Compared with this method, the DBSCAN clustering algorithm firstly does not require pre-training to complete the clustering [30]. Secondly, it can cluster point clouds with a density of less than 5 points/m. Most importantly, the DBSCAN algorithm does not require a preset number of clusters, can detect the number of clusters naturally, and can handle noise in the data. However, DBSCAN suffers from the problem that its parameters must be set manually, which can cause the clustering result to be disturbed by subjective factors [31]. For this reason, this paper proposes using an improved DBSCAN to automatically perform individual tree segmentation on the point cloud, which solves the human interference problem and improves the accuracy of clustering so that the automatic measurement of forest inventory variables can be achieved.
In summary, to address the high time cost of data acquisition and the expensive equipment required by current 3D reconstruction methods, this study proposes using smartphone images to construct a forest stand 3D measurement model, introduces the multi-verse optimizer (MVO) to improve DBSCAN so that its parameters no longer rely on manual settings and the accuracy of the clustering results is improved, and uses the improved tree quantitative structure model (AdQSM) to ensure the accuracy of the 3D measurement model.

2. Materials and Methods

2.1. Experimental Sample Sites and Data Acquisition

2.1.1. Experimental Sample Site Overview and Equipment

In this paper, we take a stand of Fraxinus mandshurica Rupr. (Manchurian ash) on the campus of Northeast Forestry University as the research object. The campus is located in Harbin City, Heilongjiang Province, at a longitude of 125°42′~130°10′ E and a latitude of 44°4′~46°40′ N, as shown in Figure 1. The area has a temperate monsoon climate with long winters and short summers, an average annual precipitation of 569.1 mm, an average annual temperature of 3.6 °C, and an altitude of 141 m.
The main reason this study proposes using smartphones to acquire image data of forest areas is the rapid upgrading and high performance of smartphone digital image sensors. The method places no special requirement on the smartphone model. In this study, the smartphone used was an iPhone 12, whose video recording performance is 1080p (1920 × 1080, 60 fps).

2.1.2. Image Generation of Dense Point Clouds

A video containing the texture of tree trunks and branches in the forest stand was acquired with a smartphone, and images were extracted from the video at an interval of 25 frames. The extracted images were then used to generate a dense point cloud with the 3D modeling software Pix4Dmapper (Pix4D, Lausanne, Switzerland), as shown in Figure 2.
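The frame-extraction step can be illustrated with a short script. The sketch below samples every 25th frame from a recorded video using OpenCV; the 25-frame interval follows the description above, while the file names and output directory are hypothetical placeholders (the extracted images would then be imported into Pix4Dmapper).

```python
import os
import cv2  # OpenCV for video decoding

def extract_frames(video_path: str, out_dir: str, interval: int = 25) -> int:
    """Save every `interval`-th frame of the video as a JPEG image."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if index % interval == 0:       # keep one frame every `interval` frames
            cv2.imwrite(os.path.join(out_dir, f"frame_{index:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical paths; the saved images are then fed to the photogrammetry software.
n = extract_frames("stand_video.mp4", "frames", interval=25)
print(f"Extracted {n} images")
```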

2.1.3. Point Cloud Data Pre-Processing

Noise and ground points are filtered from the forest point cloud. In this paper, point cloud filtering needs to remove both ground points and noise. First, pass-through filtering is used: by taking values along the specified Z-axis direction and setting the desired elevation threshold, ground points can be separated from non-ground points. Second, statistical filtering is used to remove sparse outlier noise points: the neighborhood of each point is analyzed statistically, and points that do not satisfy the set threshold are removed. To meet both requirements, this paper combines pass-through filtering with statistical filtering to filter the point cloud. The result of filtering ground points from the forest subsample shown in Figure 2 is shown in Figure 3a, and the result after filtering noise points is shown in Figure 3b.
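A minimal sketch of this two-stage filtering is shown below, assuming the dense point cloud has been exported as a PLY file and using the open-source Open3D library (the paper does not specify the filtering software); the elevation threshold and neighborhood parameters are placeholder values that would need tuning for a real plot.

```python
import numpy as np
import open3d as o3d

# Load the image-based dense point cloud (hypothetical file name).
pcd = o3d.io.read_point_cloud("stand_dense_cloud.ply")

# 1) Pass-through filter along Z: keep points above an elevation threshold
#    to separate non-ground points from ground points.
z_threshold = 0.3  # placeholder elevation threshold in model units
pts = np.asarray(pcd.points)
pcd_above = pcd.select_by_index(np.where(pts[:, 2] > z_threshold)[0].tolist())

# 2) Statistical outlier removal: analyze each point's neighborhood and
#    drop points whose mean neighbor distance exceeds the set threshold.
pcd_clean, kept_idx = pcd_above.remove_statistical_outlier(
    nb_neighbors=20, std_ratio=2.0)

o3d.io.write_point_cloud("stand_filtered.ply", pcd_clean)
```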

2.2. Optimized DBSCAN Clustering Algorithm to Obtain Single-Standing Trees

2.2.1. Improved DBSCAN Algorithm

DBSCAN is a representative density-based clustering algorithm. Its basic idea is as follows: a data point P_i(x_i, y_i, z_i) is selected arbitrarily from the n points of the data set, and the Euclidean distances to the remaining data points P_j(x_j, y_j, z_j) within a sphere centered at that point with radius epsilon (Eps) are calculated, as shown in Equation (1). If the number of points contained in the sphere is not less than the minimum number of points (MinPts), the selected point P_i is called a core point, and all points within the Eps neighborhood centered on the core point belong to the same cluster. A new data point is then selected, and the above steps are repeated until all data points have been processed.
$d_{ij} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}$ (1)
where $i = 1, 2, \ldots, n$, $j = 1, 2, \ldots, n$, and $i \neq j$.
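For illustration, the sketch below applies DBSCAN to the filtered stand point cloud using the scikit-learn implementation rather than a from-scratch version; the Eps and MinPts values and the input file name are placeholders, since the remainder of this subsection describes how Eps is selected automatically.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# xyz: (n, 3) array of filtered point coordinates (hypothetical export of the filtered cloud).
xyz = np.loadtxt("stand_filtered.xyz")

# eps corresponds to the Eps radius, min_samples to MinPts.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(xyz)

# Label -1 marks noise; every other label is one candidate single-tree cluster.
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(f"Detected {n_clusters} clusters, {np.sum(labels == -1)} noise points")
```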
The choice of the Eps and MinPts parameters of the DBSCAN algorithm is somewhat arbitrary, and different thresholds can have a great effect on the clustering results. The multi-verse optimizer (MVO) is therefore introduced in this paper to optimize DBSCAN and obtain the optimal value of Eps.
The MVO mathematical model of the algorithm is as follows:
Create a set of random universes U:
$U = [x_1 \; x_2 \; \cdots \; x_n]^T$ (2)
where n denotes the number of universes and x_n denotes the parameter of the nth universe.
The silhouette coefficient is used as the objective function; it reflects the intra-cluster tightness and separability of the clustering structure. With Eps = x_n, the point cloud is divided into c classes. For each point P_i, the distances d_ij to the other points of the same class are calculated according to Equation (1), and their average value is denoted d_s(P_i); each point P_i thus corresponds to one d_s(P_i). Similarly, the average distance from P_i to the points of each different class is calculated and denoted d_h(P_i); over the c − 1 other classes, the minimum of these averages is denoted min{d_h(P_i)}, and each point P_i corresponds to one min{d_h(P_i)}. The silhouette coefficient Sil_i of point P_i is then calculated as shown in Equation (3). Its value ranges from −1 to 1, and the closer it is to 1, the better the cohesion and separation. Assuming the point cloud contains n points P_i, the average silhouette coefficient Sil_k of the n points is calculated as shown in Equation (4), and the expansion rate NI_k of each universe is obtained from the average silhouette coefficient, as shown in Equation (5).
$Sil_i = \frac{\min\{d_h(P_i)\} - d_s(P_i)}{\max\{d_s(P_i), \min\{d_h(P_i)\}\}}$ (3)
where max{d_s(P_i), min{d_h(P_i)}} denotes the larger of d_s(P_i) and min{d_h(P_i)}.
$Sil_k = \frac{\sum_{i=1}^{n} Sil_i}{n}$ (4)
$NI_k = 1 - Sil_k$ (5)
where $k = 1, 2, \ldots, n$.
The wormhole existence probability (WEP) and the travelling distance rate (TDR) are two important parameters, which are updated according to the following rules.
$WEP = WEP_{\min} + t \left( \frac{WEP_{\max} - WEP_{\min}}{T} \right)$ (6)
$TDR = 1 - \frac{t^{1/p}}{T^{1/p}}$ (7)
where t is the current iteration number, T is the maximum number of iterations, WEP_min and WEP_max are the minimum and maximum probabilities of wormhole existence, and p controls the exploitation accuracy.
$x_i = \begin{cases} \begin{cases} x^{*} + TDR \times ((ub - lb) \times r_2 + lb), & r_3 < 0.5 \\ x^{*} - TDR \times ((ub - lb) \times r_2 + lb), & r_3 \geq 0.5 \end{cases} & r_4 < WEP \\ x_i & r_4 \geq WEP \end{cases}$ (8)
where $x^{*}$ represents the parameter of the best universe found so far, lb and ub represent the lower and upper bounds of U, x_i represents the parameter of the ith universe, and r_2, r_3, and r_4 are random numbers in the range [0, 1].
The MVO algorithm is used to continuously update the Eps parameter of the DBSCAN algorithm and find the globally optimal Eps by maximizing the silhouette coefficient, as shown in Figure 4, thereby avoiding the need to set the Eps parameter manually.
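The sketch below illustrates this idea of searching for Eps by maximizing the average silhouette coefficient. It is a deliberately simplified, single-variable variant of MVO that keeps only the wormhole update around the best universe (Equations (6)–(8)) and omits the white-hole/black-hole exchange, uses scikit-learn's DBSCAN and silhouette_score, and fixes MinPts, the search bounds, and the file name to placeholder values; it is not the authors' MATLAB implementation.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score

def fitness(xyz, eps, min_pts=10):
    """Average silhouette coefficient of the DBSCAN result (higher is better)."""
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(xyz)
    mask = labels != -1                               # ignore noise points
    if mask.sum() < 3 or len(set(labels[mask])) < 2:  # silhouette needs >= 2 clusters
        return -1.0
    return silhouette_score(xyz[mask], labels[mask])

def mvo_optimize_eps(xyz, lb=0.05, ub=2.0, n_universes=10, T=30,
                     wep_min=0.2, wep_max=1.0, p=6.0, seed=0):
    rng = np.random.default_rng(seed)
    eps_pop = rng.uniform(lb, ub, n_universes)        # one Eps candidate per universe
    fit = np.array([fitness(xyz, e) for e in eps_pop])
    best_eps, best_fit = eps_pop[fit.argmax()], fit.max()

    for t in range(1, T + 1):
        wep = wep_min + t * (wep_max - wep_min) / T            # Equation (6)
        tdr = 1.0 - (t ** (1.0 / p)) / (T ** (1.0 / p))        # Equation (7)
        for i in range(n_universes):
            r2, r3, r4 = rng.random(3)
            if r4 < wep:                                       # wormhole step, Equation (8)
                step = tdr * ((ub - lb) * r2 + lb)
                eps_pop[i] = best_eps + step if r3 < 0.5 else best_eps - step
                eps_pop[i] = float(np.clip(eps_pop[i], lb, ub))
                fit[i] = fitness(xyz, eps_pop[i])
        if fit.max() > best_fit:
            best_eps, best_fit = eps_pop[fit.argmax()], fit.max()
    return best_eps, best_fit

# xyz: (n, 3) array of the filtered stand point cloud (placeholder file name).
xyz = np.loadtxt("stand_filtered.xyz")
eps_opt, sil = mvo_optimize_eps(xyz)
labels = DBSCAN(eps=eps_opt, min_samples=10).fit_predict(xyz)
print(f"Optimal Eps = {eps_opt:.3f}, silhouette = {sil:.3f}")
```

In the full MVO algorithm, universes also exchange objects through white-hole/black-hole tunnels selected in proportion to their normalized expansion rates; for a single scalar parameter such as Eps, the wormhole mechanism around the best universe already captures the exploration–exploitation schedule of Equations (6) and (7), which is why this sketch omits the exchange step.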

2.2.2. Evaluating Indicator

To evaluate the performance of the clustering algorithms, four metrics are used: accuracy, adjusted Rand index, adjusted mutual information, and F1 score.
(1)
Accuracy (ACC)
ACC compares the labels obtained by clustering with the true labels provided by the data, as shown in Equation (9). It takes values in the range from 0 to 1, with higher values indicating that the clustering result better matches the true situation.
$ACC = \frac{\sum_{i=1}^{k} n_i}{n}$ (9)
where n_i is the number of objects predicted correctly by the clustering algorithm with respect to the true clusters and n is the total number of objects.
(2)
Adjusted Rand Index (ARI)
The ARI reflects the degree of overlap between two partitions, as shown in Equation (10). Its value ranges from 0 to 1, and a higher value indicates that the clustering result better matches the true situation.
$ARI = \frac{RI - E[RI]}{\max(RI) - E[RI]}$ (10)
where RI represents the Rand index, as shown in Equation (11).
$RI = \frac{2(a + b)}{m(m - 1)}$ (11)
where a represents the number of element pairs belonging to the same class in both C and K, and b represents the number of element pairs belonging to different classes in both C and K; C represents the actual class information, K represents the clustering result, and m is the number of samples.
(3)
Adjusted Mutual Information (AMI)
AMI is used to determine the degree of agreement between two data distributions, as shown in Equation (12). Its value ranges from −1 to 1, and a higher value indicates that the clustering result better matches the real situation.
$AMI = \frac{MI - E[MI]}{\max(H(U), H(V)) - E[MI]}$ (12)
where MI represents the mutual information, as shown in Equation (13).
$MI(U, V) = \sum_{i=1}^{R} \sum_{j=1}^{C} p_{i,j} \log\left(\frac{p_{i,j}}{p_i \times p_j}\right)$ (13)
where U is the true label vector, V is the clustering result vector, p_{i,j} is the joint probability, and p_i and p_j are the marginal probabilities.
(4)
F1 Score
The F1 score takes into account both the precision (P) of the classification model, as shown in Equation (14), and its recall (R), as shown in Equation (15); it can be regarded as a harmonic mean of the model's precision and recall. Its value ranges from 0 to 1, as shown in Equation (16); the larger the value, the more consistent the clustering result is with the real situation.
$P = \frac{TP}{TP + FP}$ (14)
$R = \frac{TP}{TP + FN}$ (15)
$F1 = \frac{2 P \times R}{P + R}$ (16)
where TP is the number of positive samples correctly predicted as positive, FP is the number of negative samples incorrectly predicted as positive, and FN is the number of positive samples incorrectly predicted as negative.
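The four metrics can be computed for a clustering result with scikit-learn and SciPy, as sketched below; clustering accuracy requires matching predicted cluster labels to the true labels, which is done here with the Hungarian algorithm, and the macro-averaged F1 after label matching is one reasonable choice, since the paper does not specify the averaging.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import (adjusted_rand_score, adjusted_mutual_info_score,
                             confusion_matrix, f1_score)

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping of cluster labels to true labels."""
    cm = confusion_matrix(y_true, y_pred)
    row, col = linear_sum_assignment(-cm)        # maximize matched objects
    return cm[row, col].sum() / cm.sum()

def remap_labels(y_true, y_pred):
    """Relabel predicted clusters with their matched true labels (for F1)."""
    cm = confusion_matrix(y_true, y_pred)
    row, col = linear_sum_assignment(-cm)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    mapping = {classes[c]: classes[r] for r, c in zip(row, col)}
    return np.array([mapping[p] for p in y_pred])

y_true = np.array([0, 0, 1, 1, 2, 2])            # toy ground-truth labels
y_pred = np.array([1, 1, 0, 0, 2, 2])            # toy clustering result

acc = clustering_accuracy(y_true, y_pred)
ari = adjusted_rand_score(y_true, y_pred)
ami = adjusted_mutual_info_score(y_true, y_pred)
f1 = f1_score(y_true, remap_labels(y_true, y_pred), average="macro")
print(f"ACC={acc:.2f}, ARI={ari:.2f}, AMI={ami:.2f}, F1={f1:.2f}")
```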

2.3. Calculate the Scale Factor

In this paper, the Z coordinates of the point cloud are adjusted by calculating the scale factor R_z between the actual tree height H and the model output value, as shown in Equation (17). The model output value is expressed as the difference between the highest point P_H(x_H, y_H, z_H) and the lowest point P_L(x_L, y_L, z_L) of a single tree.
$R_z = \frac{z_H - z_L}{H}$ (17)
The X and Y coordinates of the point cloud are adjusted by calculating the scale factor R_xy between the actual diameter at breast height B and the model output value, as shown in Equation (18). The measurement model output value is represented by d(P_i).
$R_{xy} = \frac{d(P_i)}{B}$ (18)
The point cloud coordinates are adjusted to P_i(X_i, Y_i, Z_i), where X_i = R_xy · x_i, Y_i = R_xy · y_i, and Z_i = R_z · z_i, according to the scale factors in the horizontal and vertical directions, completing the construction of the three-dimensional measurement model of the forest stand sample plot.
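A small numeric sketch of this rescaling step is given below, assuming the single-tree point cloud is available as an (n, 3) NumPy array; the reference heights and diameters are illustrative placeholders rather than the full data of Tables 1 and 2, and the coordinates are divided by the scale factors here so that the rescaled cloud is expressed in metres, consistent with the statement in Section 3.3 that the coordinates are reduced by the average factors.

```python
import numpy as np

# Model outputs vs. field measurements for a few reference trees
# (illustrative placeholder values, not the full data of Tables 1 and 2).
model_height = np.array([81.8, 115.7, 108.0])   # model units
field_height = np.array([15.2, 21.1, 19.4])     # metres
model_dbh = np.array([1.76, 1.63, 1.44])        # model units
field_dbh = np.array([0.847, 0.766, 0.738])     # metres

# Scale factors following Equations (17) and (18): model value / measured value.
R_z = np.mean(model_height / field_height)
R_xy = np.mean(model_dbh / field_dbh)

# Rescale a single-tree point cloud (n, 3): dividing by the scale factors
# expresses the model coordinates in metres.
xyz = np.loadtxt("tree_01.xyz")                 # hypothetical single-tree cloud
xyz_scaled = np.column_stack((xyz[:, 0] / R_xy,
                              xyz[:, 1] / R_xy,
                              xyz[:, 2] / R_z))
np.savetxt("tree_01_scaled.xyz", xyz_scaled)
```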

2.4. Automatic Extraction of Forest Inventory Variables Based on the AdQSM Algorithm

After clustering segmentation and scaling, the single-tree point clouds are modeled using the AdQSM method (https://github.com/GuangpengFan/AdQSM, accessed on 26 July 2022), as shown in Figure 5, to obtain forest inventory variables such as trunk volume, branch volume, trunk length, branch length, number of branches, and trunk circumference.

3. Results

3.1. Experimental Conditions

Image-based point clouds were generated from the photographs through the automatic image-matching process in Pix4Dmapper. The computer used was equipped with an AMD Ryzen 7 5800H CPU (8 cores @ 3.2 GHz) and 16 GB of DDR4 RAM. The matching parameters were set to high quality, and automatic lens calibration was performed during image matching. The experimental platform for image-based point cloud clustering with MVO-DBSCAN ran MATLAB R2020a on Windows 10 with an Intel(R) Core(TM) i7-10700K CPU (8 cores @ 3.8 GHz), an NVIDIA RTX 3080 GPU with 10 GB of memory, and 32 GB of DDR4 RAM.
The actual height of each tree was measured using a laser rangefinder (Leica DISTO X310), and the average of three measurements per tree was taken as the actual tree height. The measured diameter at breast height was obtained by measuring the circumference of the trunk with a tape at 1.3 m above ground level and dividing it by 3.14.

3.2. Clustering Algorithm Comparison Experiment

To verify the clustering performance of the MVO-DBSCAN algorithm, it is compared with the DBSCAN, K-means, and MeanShift algorithms on spiral, scattered, and composite datasets, as shown in Figure 6, Figure 7 and Figure 8, where (a) represents the correct clustering results and (b), (c), (d), and (e) represent the clustering results of the DBSCAN, K-means, MeanShift, and MVO-DBSCAN algorithms, respectively.
As can be seen in Figure 6 and Figure 8, the K-means and MeanShift clustering algorithms do not reflect the real clustering structure of the spiral and composite datasets, and K-means requires the number of clusters to be set in advance, which cannot meet the needs of this experiment. Although DBSCAN can reflect the real data structure, it may fail to obtain a reasonable clustering result because its parameters must be set manually. The MVO-DBSCAN algorithm can not only reflect the real data structure but also reduce human intervention and cluster the data reasonably.
Comparing the four evaluation metrics obtained by the four algorithms on the three datasets, as shown in Figure 9, the ACC, ARI, AMI, and F1 of the MVO-DBSCAN and DBSCAN algorithms on the spiral and composite datasets are much higher than those of the other two algorithms, and the four evaluation metrics of the MVO-DBSCAN algorithm on all three datasets are close to the optimal values. Compared with DBSCAN, MVO-DBSCAN avoids the manual setting of the Eps parameter, saves time, avoids errors caused by manually set parameters, and improves clustering performance.

3.3. Experiment and Analysis of Forest Stand Measurement Model Construction

The steps of the forest stand measurement model construction method proposed in this paper are as follows:
(1)
Based on the filtered point cloud in Figure 3b, MVO is used to find the Eps parameter of DBSCAN and the forest point cloud is clustered, as shown in Figure 10, to obtain the single-tree point clouds.
(2)
Number the single-tree point clouds in the stand point cloud, as shown in Figure 11; take the single trees numbered 1, 7, 9, 10, and 13 as the first group and the remaining single trees as the second group; obtain the model output values of tree height and diameter at breast height for the first group of single-tree point clouds using CloudCompare software; and use the measured tree height and diameter at breast height values together with the model output values to obtain the scale factors, as shown in Table 1 and Table 2.
(3)
Using the scale coefficients obtained in step (2), transform the coordinates of the single trees and model each single tree separately with the AdQSM method. The measurement model output values of tree height and diameter at breast height for the second group of single trees were obtained, as shown in Table 3 and Table 4, and the stand measurement model was generated, as shown in Figure 12. The model was then analyzed through the relationship between the measurement model output values of tree height and diameter at breast height and the actual measured values.
When this measurement model is used to measure tree height, the absolute measurement error is between 0.05 and 0.37 m, the highest relative error is 2.03%, and the average relative error is 1.53%, which satisfies the error allowed for measuring tree height in forests. At the same time, an average tree height scale factor of 5.45 is obtained, which is used to reduce the Z coordinates of the point cloud by a factor of 5.45.
When this measurement model is used to measure diameter at breast height, the absolute measurement error is between 0.007 and 0.057 m, the highest relative error is 7.358%, and the average relative error is 3.616%; the corresponding average scale factor of 1.996 is used to adjust the horizontal (X and Y) coordinates of the point cloud.
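The error statistics reported above, and the RMSE values discussed in Section 4, can be computed from paired measured and model-output values as sketched below; the example arrays are illustrative placeholders, not the data of Tables 3 and 4.

```python
import numpy as np

def error_stats(measured, predicted):
    """Absolute error, relative error (%), and RMSE for paired measurements."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    abs_err = np.abs(predicted - measured)
    rel_err = 100.0 * abs_err / measured
    rmse = float(np.sqrt(np.mean((predicted - measured) ** 2)))
    return abs_err, rel_err, rmse

# Illustrative tree heights (m): field measurements vs. model outputs (placeholders).
measured_h = [16.3, 19.2, 21.1]
model_h = [16.1, 19.5, 21.4]

abs_err, rel_err, rmse = error_stats(measured_h, model_h)
print(f"max abs err = {abs_err.max():.2f} m, "
      f"mean rel err = {rel_err.mean():.2f} %, RMSE = {rmse:.3f} m")
```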

4. Discussion

The results of the clustering comparison experiments show that the MVO-DBSCAN algorithm proposed in this paper improves clustering performance and obtains the best values for all evaluation indexes compared with the other algorithms. The single-tree clustering experiments using the MVO-DBSCAN algorithm solved the problems of over-segmentation caused by too small an Eps value and under-segmentation caused by too large an Eps value. The image acquisition and point cloud generation methods used in this paper can obtain high-density point clouds, which have significant advantages in economy and portability compared with TLS of comparable performance. At the same time, the algorithm proposed in this paper is used to filter and cluster the point cloud within the study sample plot, which not only yields an accurate tree count but also maintains the structural integrity of individual trees. Most importantly, branches that are difficult to identify are segmented from the point cloud. After segmentation, the point cloud still retains high density, which meets the density requirement for the automatic extraction of single-tree forest inventory variables using the AdQSM algorithm. The experimental sample site selected in this paper contains a total of 22 Fraxinus mandshurica Rupr. trees, with a maximum tree height of 22.90 m and a minimum of 15.30 m. It is difficult to collect images from the ground and measure tree height directly; however, the method proposed in this paper can still obtain good results and fine branch information, as shown in Figure 13.
In terms of the automatic extraction of tree-level forest inventory variables, the root mean square errors of the tree height and diameter at breast height extraction were obtained by applying the method proposed in this paper. As shown in Figure 14, the RMSE of tree height estimation is 0.329 m, and the RMSE of diameter at breast height estimation is 0.022 m. Compared with previous studies [23,27], the accuracy of the model's forest inventory variables meets the requirements of actual forest inventory. To ensure the accurate generation of the point cloud and accurate measurement of tree height, images were collected after leaf fall [24]. The penetration of images is not as strong as that of TLS, so image-based point clouds have high environmental requirements. In point cloud generation from images, too few images, an overlap rate so low that the images cannot be aligned, and an unreasonably planned acquisition path can all lead to point cloud generation failure. In this study, video recording was used for image acquisition to ensure a sufficient number of images, and the acquisition path first covered the outer circle of the sample area and then the inner circle in the order of tree arrangement, which fully ensured that point cloud generation was not affected by the number of images, the acquisition path, or other problems.
As can be seen in Table 1 and Table 3, the estimated tree heights of trees 5, 6, 11, 12, 14, 15, 17, 19, 20, 21, and 22 are higher than the actual measured values. The reason is that these trees are taller, and the sky as a background affects point cloud generation, producing more noise and increasing the height of single trees, which leads to overestimated values. The estimated tree heights of trees 8 and 16 are smaller than the true values because these two trees grow near the edge of the sample site and are tall, so the shooting range could not be too wide in order to prevent interference from other sample sites. Trees 2, 3, 4, and 18 are of medium height, so their absolute errors are relatively small. A reasonable selection of experimental sample sites can further improve the accuracy, allowing the method to be applied to precise measurement tasks.

5. Conclusions

In this study, we propose constructing a three-dimensional measurement model of the stand sample plot from images to achieve the automatic extraction of forest inventory variables, which can be performed with a smartphone alone. Compared with using UAVs and ground-based LiDAR to acquire forest point cloud data, this method is easier to operate, less time-consuming, less restrictive, and produces denser point clouds. Because the DBSCAN algorithm can cluster high-density point clouds, it is applied here to the clustering segmentation of forest stands, and Section 3.2 also shows that DBSCAN achieves very high accuracy when clustering different types of datasets. By introducing the MVO algorithm, the Eps parameter no longer needs to be set manually, and the accuracy of forest point cloud clustering segmentation is improved. The main innovation is the use of images to construct a three-dimensional measurement model of forest stand sample plots to achieve the automatic extraction of forest inventory variables. The method can obtain a large amount of data with little time and low labor cost, analyzes the complex, irregular three-dimensional structure of forests more intelligently, and yields forest inventory variables with high accuracy. In the face of the complexity of forests, the method proposed in this paper is flexible and accurate when applied to the estimation of forest inventory variables, and it can be applied to the extraction and visual analysis of forest inventory variables in ecosystem research and forest resource protection.

Author Contributions

Conceptualization, J.S. and Q.H.; methodology, J.S. and Q.H.; software, Y.Z.; validation, Y.Z.; formal analysis, Q.H.; investigation, Q.H.; resources, Q.H.; data curation, Q.H.; writing—original draft preparation, Q.H.; writing—review and editing, J.S. and W.S.; visualization, Y.F.; supervision, C.L.; project administration, Y.F. and C.L.; funding acquisition, J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by “The Fundamental Research Funds for the Central Universities”, grant number 2572017CB13, the “Heilongjiang Provincial Natural Science Foundation of China”, grant number QC2016080, and “The APC was funded by Jiayin Song”.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bradford, J.B.; Jensen, N.R.; Domke, G.M.; D’amato, A.W. Potential increases in natural disturbance rates could offset forest management impacts on ecosystem carbon stocks. For. Ecol. Manag. 2013, 308, 178–187. [Google Scholar] [CrossRef]
  2. Yin, S.; Gong, Z.; Gu, L.; Deng, Y.; Niu, Y. Driving forces of the efficiency of forest carbon sequestration production: Spatial panel data from the national forest inventory in China. J. Clean. Prod. 2021, 330, 129776. [Google Scholar] [CrossRef]
  3. Khan, M.N.I.; Islam, M.R.; Rahman, A.; Azad, M.S.; Knohl, A. Allometric relationships of stand level carbon stocks to basal area, tree height and wood density of nine tree species in Bangladesh. Glob. Ecol. Conserv. 2020, 22, e01025. [Google Scholar] [CrossRef]
  4. Lodin, I.; Brukas, V. Ideal vs real forest management: Challenges in promoting production-oriented silvicultural ideals among small-scale forest owners in southern Sweden. Land Use Policy 2020, 100, 104931. [Google Scholar] [CrossRef]
  5. Gillerot, L.; Grussu, G.; Condor-Golec, R.; Tavani, R.; Dargush, P.; Attorre, F. Progress on incorporating biodiversity monitoring in REDD plus through national forest inventories. Glob. Ecol. Conserv. 2021, 32, e01901. [Google Scholar]
  6. Fischer, F.J.; Labrière, N.; Vincent, G.; Hérault, B.; Alonso, A.; Memiaghe, H.; Bissiengou, P.; Kenfack, D.; Saatchi, S.; Chave, J. A simulation method to infer tree allometry and forest structure from airborne laser scanning and forest inventories. Remote Sens. Environ. 2020, 251, 112056. [Google Scholar] [CrossRef]
  7. Barral, P.-A.; Demasi-Jacquier, M.A.; Bal, L.; Omnes, V.; Bartoli, A.; Piquet, P.; Jacquier, A.; Gaudry, M. Fusion Imaging to Guide Thoracic Endovascular Aortic Repair (TEVAR): A Randomized Comparison of Two Methods, 2D/3D Versus 3D/3D Image Fusion. Cardiovasc. Interv. Radiol. 2019, 42, 1522–1529. [Google Scholar] [CrossRef] [PubMed]
  8. Münzinger, M.; Prechtel, N.; Behnisch, M. Mapping the urban forest in detail: From LiDAR point clouds to 3D tree models. Urban For. Urban Green. 2022, 74, 127637. [Google Scholar] [CrossRef]
  9. Schneider, F.D.; Leiterer, R.; Morsdorf, F.; Gastellu-Etchegorry, J.P.; Lauret, N.; Pfeifer, N.; Schaepman, M.E. Simulating imaging spectrometer data: 3D forest modeling based on LiDAR and in situ data. Remote Sens. Environ. 2014, 152, 235–250. [Google Scholar] [CrossRef]
  10. Wei, Y.; Ding, Z.; Huang, H.; Yan, C.; Huang, J.; Leng, J. A non-contact measurement method of ship block using image-based 3D reconstruction technology. Ocean Eng. 2019, 178, 463–475. [Google Scholar] [CrossRef]
  11. Tang, S.; Dong, P.; Buckles, B.P. Three-dimensional surface reconstruction of tree canopy from lidar point clouds using a region-based level set method. Int. J. Remote Sens. 2012, 34, 1373–1385. [Google Scholar] [CrossRef]
  12. Yang, X.Y.; Strahler, A.H.; Schaaf, C.B.; Jupp, D.L.B.; Yao, T.; Zhao, F.; Wang, Z. Three-dimensional forest reconstruction and structural parameter retrievals using a terrestrial full-waveform lidar instrument (Echidna®). Remote Sens. Environ. 2013, 135, 36–51. [Google Scholar] [CrossRef]
  13. Bulut, S.; Günlü, A.; Çakır, G. Modelling some stand parameters using Landsat 8 OLI and Sentinel-2 satellite images by machine learning techniques: A case study in Türkiye. Geocarto Int. 2023, 38, 2158238. [Google Scholar] [CrossRef]
  14. Qin, H.; Wang, C.; Xi, X.; Tian, J.; Zhou, G. Simulating the Effects of the Airborne Lidar Scanning Angle, Flying Altitude, and Pulse Density for Forest Foliage Profile Retrieval. Appl. Sci. 2017, 7, 712. [Google Scholar] [CrossRef]
  15. Pan, Y.; Han, Y.; Wang, L.; Chen, J.; Meng, H.; Wang, G.; Zhang, Z.; Wang, S. 3D Reconstruction of Ground Crops Based on Airborne LiDAR Technology. IFAC-Pap. 2019, 52, 35–40. [Google Scholar]
  16. Picos, J.; Bastos, G.; Miguez, D.; Alonso, L.; Armesto, J. Individual Tree Detection in a Eucalyptus Plantation Using Unmanned Aerial Vehicle (UAV)-LiDAR. Remote Sens. 2020, 12, 885. [Google Scholar] [CrossRef]
  17. Dassot, M.; Constant, T.; Fournier, M. The use of terrestrial LiDAR technology in forest science: Application fields, benefits and challenges. Ann. For. Sci. 2011, 68, 959–974. [Google Scholar] [CrossRef]
  18. Indirabai, I.; Nair, M.H.; Jaishanker, R.N.; Nidamanuri, R.R. Terrestrial laser scanner based 3D reconstruction of trees and retrieval of leaf area index in a forest environment. Ecol. Inform. 2019, 53, 100986. [Google Scholar] [CrossRef]
  19. Leblanc, S.; Fournier, R. Hemispherical photography simulations with an architectural model to assess retrieval of leaf area index. Agric. For. Meteorol. 2014, 194, 64–76. [Google Scholar] [CrossRef]
  20. Laurin, G.V.; Ding, J.; Disney, M.; Bartholomeus, H.; Valentini, R. Tree height in tropical forest as measured by different ground, proximal, and remote sensing instruments, and impacts on above ground biomass estimates. Int. J. Appl. Earth Obs. Geoinf. 2019, 82, 101899. [Google Scholar]
  21. Fan, G.; Nan, L.; Dong, Y.; Su, X.; Chen, F. AdQSM: A New Method for Estimating Above-Ground Biomass from TLS Point Clouds. Remote Sens. 2020, 12, 3089. [Google Scholar] [CrossRef]
  22. She, J.; Guo, X.; Tan, X.; Liu, J. 3D Visualization of Trees Based on a Sphere-Board Model. ISPRS Int. J. Geo-Inf. 2018, 7, 45. [Google Scholar] [CrossRef]
  23. Mokroš, M.; Liang, X.; Surový, P.; Valent, P.; Čerňava, J.; Chudý, F.; Tunák, D.; Saloň, Š.; Merganič, J. Evaluation of Close-Range Photogrammetry Image Collection Methods for Estimating Tree Diameters. Int. J. Geo-Inf. 2018, 7, 93. [Google Scholar] [CrossRef]
  24. Liang, X.; Wang, Y.; Jaakkola, A.; Kukko, A.; Kaartinen, H.; Hyyppa, J.; Honkavaara, E.; Liu, J. Forest Data Collection Using Terrestrial Image-Based Point Clouds From a Handheld Camera Compared to Terrestrial and Personal Laser Scanning. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5117–5132. [Google Scholar] [CrossRef]
  25. Liang, X.; Jaakkola, A.; Wang, Y.; Hyyppä, J.; Honkavaara, E.; Liu, J.; Kaartinen, H. The Use of a Hand-Held Camera for Individual Tree 3D Mapping in Forest Sample Plots. Remote Sens. 2014, 6, 6587–6603. [Google Scholar] [CrossRef]
  26. Piermattei, L.; Karel, W.; Wang, D.; Wieser, M.; Mokroš, M.; Surový, P.; Koreň, M.; Tomaštík, J.; Pfeifer, N.; Hollaus, M. Terrestrial Structure from Motion Photogrammetry for Deriving Forest Inventory Data. Remote Sens. 2019, 11, 950. [Google Scholar] [CrossRef]
  27. Mokroš, M.; Mikita, T.; Singh, A.; Tomaštík, J.; Chudá, J.; Wężyk, P.; Kuželka, K.; Surový, P.; Klimánek, M.; Zięba-Kulawik, K.; et al. Novel low-cost mobile mapping systems for forest inventories as terrestrial laser scanning alternatives. Int. J. Appl. Earth Obs. Geoinf. 2021, 104, 102512. [Google Scholar] [CrossRef]
  28. Miller, J.; Morgenroth, J.; Gomez, C. 3D modelling of individual trees using a handheld camera: Accuracy of height, diameter and volume estimates. Urban For. Urban Green. 2015, 14, 932–940. [Google Scholar] [CrossRef]
  29. Dersch, S.; Heurich, M.; Krueger, N.; Krzystek, P. Combining graph-cut clustering with object-based stem detection for tree segmentation in highly dense airborne lidar point clouds. ISPRS J. Photogramm. Remote Sens. 2021, 172, 207–222. [Google Scholar] [CrossRef]
  30. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  31. Comesaña-Cebral, L.; Martínez-Sánchez, J.; Lorenzo, H.; Arias, P. Individual Tree Segmentation Method Based on Mobile Backpack LiDAR Point Clouds. Sensors 2021, 21, 6007. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Basic information of the experimental sample sites.
Figure 2. Three-dimensional point cloud of the forest stand sample site (blue to red represents a gradual increase in height from the ground). (a) Top view; (b) front view.
Figure 3. Point cloud data pre-processing (blue to red represents a gradual increase in height from the ground). (a) Filtering ground points; (b) filtering noise.
Figure 4. The MVO-DBSCAN algorithm.
Figure 5. A single-tree 3D measurement model based on the AdQSM algorithm (blue to red represents a gradual increase in height from the ground; brown represents the single tree shown in the model).
Figure 6. Clustering results on the spiral dataset (different colors within the same picture represent different categories). (a) Correct clustering results; (b) DBSCAN clustering results; (c) K-means clustering results; (d) MeanShift clustering results; (e) MVO-DBSCAN clustering results.
Figure 7. Clustering results on the scattered dataset (different colors within the same picture represent different categories). (a) Correct clustering results; (b) DBSCAN clustering results; (c) K-means clustering results; (d) MeanShift clustering results; (e) MVO-DBSCAN clustering results.
Figure 8. Clustering results on the composite dataset (different colors within the same picture represent different categories). (a) Correct clustering results; (b) DBSCAN clustering results; (c) K-means clustering results; (d) MeanShift clustering results; (e) MVO-DBSCAN clustering results.
Figure 9. A comparison of the evaluation metrics of the three datasets in four different clustering algorithms. (a) ACC comparison result; (b) ARI comparison result; (c) AMI comparison result; (d) F1 comparison result.
Figure 10. A three-dimensional model of a forest stand sample plot (each color represents a tree).
Figure 11. A single-tree overhead numbering diagram (blue to red represents a gradual increase in height from the ground; the numbers identify individual trees).
Figure 12. A forest stand measurement model.
Figure 13. A single-tree point cloud detail comparison (blue to red represents a gradual increase in height from the ground). (a) No. 17 single-tree point cloud; (b) No. 17 single-tree model.
Figure 14. Fitting plots of the tree height and DBH measurement model output values against the measured values (each red square represents the measured value and the model output value of the same tree). (a) Tree height; (b) DBH.
Table 1. The first set of tree height data.

Number of Trees   Model Output Value (m)   Measured Value (m)   Proportion
1                 81.83                    15.20                5.38
7                 115.73                   21.10                5.48
9                 108.01                   19.40                5.56
10                85.20                    15.90                5.36
13                99.37                    18.20                5.46
Average value of tree height proportion: 5.45
Table 2. The first group of breast diameter (DBH) data.

Number of Trees   Model Output Value (m)   Measured Value (m)   Proportion
1                 1.760                    0.847                2.078
7                 2.110                    0.110                1.927
9                 1.630                    0.766                2.128
10                1.070                    0.565                1.894
13                1.440                    0.738                1.951
Average value of DBH proportion: 1.996
Table 3. The second group of tree height data.

Number of Trees   Actual Value of Tree Height (m)   Measurement Model Output Value (m)   Absolute Error (m)   Relative Error (%)
2                 16.30                             16.13                                0.13                 0.80
3                 15.70                             15.48                                0.22                 1.40
4                 16.80                             16.55                                0.25                 1.49
5                 19.20                             19.51                                0.31                 1.61
6                 19.30                             19.67                                0.37                 1.92
8                 20.30                             19.96                                0.34                 1.67
11                21.10                             21.41                                0.31                 1.47
12                17.20                             17.55                                0.35                 2.03
14                21.20                             21.53                                0.33                 1.56
15                22.90                             23.25                                0.35                 1.53
16                20.90                             20.14                                0.36                 1.72
17                19.60                             19.92                                0.32                 1.63
18                15.30                             15.08                                0.22                 1.44
19                20.30                             20.65                                0.35                 1.73
20                21.00                             21.34                                0.34                 1.62
21                22.10                             22.40                                0.30                 1.36
22                21.70                             22.04                                0.34                 1.57
Average value                                                                            0.29                 1.53
Table 4. The second set of breast diameter (DBH) data.

Number of Trees   Actual Value of DBH (m)   Measurement Model Output Value (m)   Absolute Error (m)   Relative Error (%)
2                 0.900                     0.907                                0.007                0.757
3                 0.647                     0.631                                0.016                2.432
4                 0.752                     0.797                                0.045                5.930
5                 0.923                     0.982                                0.059                6.388
8                 0.835                     0.872                                0.037                4.431
6                 0.770                     0.827                                0.057                7.358
11                0.840                     0.872                                0.032                3.779
12                0.592                     0.556                                0.036                6.062
14                0.800                     0.822                                0.022                2.705
15                1.015                     1.032                                0.017                1.681
16                0.900                     0.912                                0.012                1.314
17                0.775                     0.807                                0.032                4.079
18                0.545                     0.516                                0.029                5.315
19                0.805                     0.812                                0.007                0.823
20                0.614                     0.591                                0.023                3.716
21                0.672                     0.681                                0.009                1.393
22                0.805                     0.832                                0.027                3.312
Average value                                                                    0.027                3.616
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
