Article

Evaluation of Tree Object Segmentation Performance for Individual Tree Recognition Using Remote Sensing Techniques Based on Urban Forest Green Structures

1 Department of Landscape Architecture, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
2 Huron Network Co., Ltd., 5, Gunpocheomdansaneop 2-ro 22beon-gil, Gunpo-si 15880, Gyeonggi-do, Republic of Korea
* Author to whom correspondence should be addressed.
Land 2024, 13(11), 1856; https://doi.org/10.3390/land13111856
Submission received: 2 October 2024 / Revised: 3 November 2024 / Accepted: 5 November 2024 / Published: 7 November 2024
(This article belongs to the Section Land Planning and Landscape Architecture)

Abstract

This study evaluated whether tree object segmentation using remote sensing techniques could be effectively conducted according to the green structures of urban forests. The remote sensing techniques used were handheld LiDAR and UAV-based photogrammetry, and the data collected from both methods were merged to complement each other’s limitations. The green structures of the study area were classified into three types based on the distance between canopy trees and the presence of shrubs, and the ability to individually classify trees within each type was then evaluated. The success rate was assessed by comparing the actual number of trees, counted visually in the field, with the number of tree objects classified in the study. To perform semantic segmentation of tree objects, a preprocessing step was conducted to extract only the data related to tree structures from the remotely sensed data. The preprocessing steps included data merging, noise removal, separation of the DTM and DSM, and separation of green areas and structures. The analysis showed that tree object recognition was not efficient when the green structures were complex and mixed, and that the recognition rate was highest when only canopy trees were present and their canopies did not overlap. Therefore, when observing high-density areas, the semantic segmentation algorithm’s variables should be adjusted to narrow the object recognition range, and additional observations in winter are needed to compensate for areas obscured by leaves. By improving data collection methods and systematizing the analysis methods according to the green structures, the object recognition process can be enhanced.

1. Introduction

Recently, Korean government agencies have focused on constructing virtual spaces, such as digital twins, for use in urban environment and management policies [1,2]. Implementing three-dimensional (3D) digital transformation and data collection of urban forests and urban spaces is essential for constructing digital twin technology [3,4,5]. Trees are one of the key components of urban spaces. In the visualization of digital twins, 3D tree images constructed using LiDAR can be utilized to represent urban spaces effectively [6,7]. Remote sensing technology can therefore be utilized for 3D data collection for digital twins [8,9,10]. Furthermore, for urban forest management, individual trees must be quantified and measured as 3D structures [11,12]. In this process, the collected 3D data of the urban forest can play a significant role as input data for various analyses and modeling [13,14,15,16,17,18,19].
Remote sensing technology is a suitable technique for collecting spatial data from urban environments and urban forests [20,21,22,23]. Remote sensing is effective for the spatial analysis of urban forest characteristics, such as tree location, height, crown width, and tree shape in urban forest management [24,25,26]. Point cloud data acquired through remote sensing allow the object recognition of each tree and various analyses [27,28]. The technology for capturing 3D data allows for the objective restoration of the targeted object by scanning it into point cloud data using a light detection and ranging (LiDAR) sensor, accurately reflecting the surface shape of the object and providing location information [29].
Accordingly, remote sensing techniques have been employed to collect urban environment and urban forest spatial data, and related research has been conducted. Researchers have applied various sensing techniques to verify the accuracy of data collection or to test methods for analyzing the sensor data. Remote sensing can be classified into terrestrial and airborne methods. Terrestrial LiDAR, long used in traditional surveying, is a representative tool for the 3D digitalization of space and can accurately capture the 3D structure of objects. However, terrestrial LiDAR has relative mobility constraints, making it disadvantageous for surveying extensive areas, such as forests [30]. Airborne LiDAR can quickly scan large areas from high altitude and can collect data across large and complex terrains relatively easily. However, it provides data at a relatively lower resolution than terrestrial LiDAR. Additionally, the LiDAR viewpoint significantly affects which areas are detected most densely: airborne LiDAR tends to create point clouds concentrated on the tree canopy, whereas terrestrial LiDAR more intensively detects the trunks, branches, and leaves of the trees [31,32].
Previous studies have compared the performance of airborne and terrestrial LiDAR or have focused on enhancing tree detection techniques and algorithms using LiDAR data. In the former case, the suitable LiDAR analysis technique varied depending on the comparison metrics in the research. Research based on height measurement concluded that using airborne LiDAR is more accurate [31,33]. For applications that prioritize data from the canopy, such as the height and crown width, airborne LiDAR has been widely adopted. Lindberg et al. [34] utilized individual tree crown (ITC) methods to extract tree crowns from data collected using airborne laser scanning (ALS). In contrast, terrestrial LiDAR is considered ideal for capturing intricate details of trees, especially when a detailed depiction is essential. It excels in assessing canopy volume and branch morphology and is adept at gathering data from the subterranean structures of the tree. Given the unique strengths of each technology, the selection between airborne and terrestrial LiDAR should be made judiciously, based on the research objectives and required level of detail. Recently, researchers have begun to combine these methods, collecting tree data in a synergistic approach. Concurrently, efforts are underway to construct 3D datasets from these merged inputs [28,35].
In the latter case, research has involved semantically segmenting the collected data to extract tree locations and information in planar space or to fully represent tree objects. Studies have tested commonly used algorithms, advanced existing algorithms, and developed new ones; the results from each algorithm were compared, and their accuracy was tested. The primary algorithms were established through this process. Sun et al. [36] used LiDAR data captured using uncrewed aerial vehicles (UAVs) and assessed six algorithms to identify individual trees and distinguish their crowns. The research employed a linear regression model, a linear model with ridge regularization, support vector regression, random forest, an artificial neural network, and the k-nearest neighbors (KNN) model. McRoberts et al. [37] used forest inventory and airborne laser-scanning data in conjunction with the KNN technique to estimate the average aboveground biomass per unit area. Wulder et al. [38] investigated the use of local maximum filtering to detect trees in high spatial resolution (1 m) images. In addition, Michałowska et al. [39] examined the capabilities of LiDAR for classifying tree species and deliberated on the most effective classification algorithm to enhance classification accuracy. Li et al. [40] proposed a new point cloud-driven method called skeleton refined extraction, designed to enhance the accuracy of recognizing tree trunks from point cloud data. Further, Raumonen et al. [41] introduced a new methodology, the tree quantitative structure model (TreeQSM), to reconstruct 3D tree models accurately from point cloud data collected using terrestrial LiDAR scanning. Tarsha Kurdi et al. [42] aimed to modify existing solutions for a more accurate representation and visualization of tree crowns; their study proposed a new method for calculating tree biomass using terrestrial laser-scanning (TLS) and airborne laser-scanning (ALS) measurement data, representing the volumetric model of tree biomass in mathematical structures.
These preceding studies have focused on the accuracy of LiDAR sensing technology and tree modeling techniques. However, since LiDAR is a detection technology that utilizes light, it is inevitably obstructed by obstacles. In tree modeling, the most significant obstacle is the trees themselves. Portions of trees that are obscured by leaves or branches cannot be detected, resulting in models that fall short of the desired level, especially for trees with dense canopies or those obscured by other trees. It is necessary to determine the appropriate level of green density and structure suitable for data collection using LiDAR. Therefore, this study aims to compare the remote sensing data collected according to different green structures and evaluate whether the green structure is suitable for constructing three-dimensional tree models.
For this purpose, the point cloud data of the target area were collected using terrestrial LiDAR and aerial photogrammetry. The first comparison was based on the data collection methods: data collected from the ground and data collected from the air were compared to understand the characteristics of each. The second comparison focused on the data collection results according to the green structure of the urban forest. The green spaces within the target area were classified into three types of green structures, and sampling was conducted for analysis based on these three types. The green structure types are Simple-Structure, Narrow-Structure, and Congested-Structure. Simple-Structure is an area where only canopy trees are present, and the distance between trees is wide enough that their canopies do not overlap. Narrow-Structure also contains only canopy trees, but the distance between trees is narrow, causing the canopies to overlap. Congested-Structure includes both canopy trees and shrubs, with a high planting density, resulting in overlapping canopies. Based on the results of aerial photogrammetry and handheld LiDAR, tree object classification was conducted according to each type, and the success rate of tree classification within each zone was assessed. For the analysis, a preprocessing process was carried out on the point cloud data, including data merging, noise removal, separation of the DTM and DSM, and separation of green spaces and structures.
Through this study, it is possible to assist in extracting 3D greenery models of urban forests, including the tree height, tree base height, breast height diameter, canopy volume, and canopy transparency. Additionally, this research is expected to offer foundational information for determining planting standards for future urban forest development.

2. Materials and Methods

2.1. Study Area

The study area, approximately 350 × 245 m, covers the National Debt Repayment Movement Memorial Park in Daegu, Korea. It is bounded by latitudes 35.8684 and 35.8689 and longitudes 128.6002 and 128.6016. The area primarily comprises trees such as zelkova, pine, maple, and ginkgo. This region is a restricted flight zone for UAVs because it is inside an airport control zone. Therefore, this study was conducted by requesting flight approval. Figure 1 displays the research area along with the measured ground control points.

2.2. Equipment and Data Collection

We utilized terrestrial laser scanning and UAV photogrammetry to collect point cloud data for trees (Figure 2).

2.2.1. UAV Photogrammetric Data

The UAV used for the acquisition of 3D point cloud data was the Inspire2 (DJI, Shenzhen, China). For 3D mapping, Pix4D Mapper (Pix4D, Prilly, Switzerland) was utilized, and for GNSS equipment, the Trimble R4s GNSS Receiver was employed. The point cloud data obtained by the UAVs were matched with the color information acquired from the RGB camera attached to the UAVs, allowing for a relatively clear representation of realistic colors. This aids in identifying the characteristics of the target area. Since the data are captured from a high altitude in a downward direction, it is particularly easy to extract point cloud data for flat objects such as rooftops of buildings, the tops of structures, tree canopies, plazas, and parking lots. UAVs are effective for quickly acquiring data over large areas and offer the advantage of being able to analyze various additional information about urban forests, such as orthophotos and 3D simulations, in addition to point cloud data.
Approval for UAV flight and aerial photography was obtained from the relevant authority (11th Fighter Wing, Daegu Base, Air Operations Division, UAV Control Team). The first flight and photography session was conducted on 30 June 2022, and the second on 28 July 2022. The flights were carried out at altitudes of 100 m (three times) and 120 m (once), and a total of 589 images were uploaded after the flights. The flight courses were set to fly over the target area either in an east–west direction or a north–south direction (Figure 3).

2.2.2. Terrestrial Laser Scanning Data

We employed MapTorch (A.M.Autonomy, Seoul, Republic of Korea), a handheld LiDAR device developed in Korea. The LiDAR sensor used in the MapTorch is the Velodyne Puck-LITE (Velodyne LiDAR, San Jose, CA, USA); its detailed specifications are given in Table 1.
The MapTorch device operates by using a simultaneous localization and mapping method. This approach is particularly advantageous because it enables self-localization in areas where global positioning satellite signals might be weak or unavailable. The compact and lightweight design, distinct from traditional stationary ground LiDAR systems, allows individuals to carry it easily. This portability makes it especially appropriate for scanning expansive areas rather than just individual entities. The Trimble R4s global navigation satellite system receiver was used for positioning the ground control points with an accuracy of 1 to 2 cm using the real-time kinematic positioning method.

2.3. Preprocessing

The point cloud data, acquired using UAV and handheld LiDAR and subsequently merged, were processed using Cloud Compare v2.12 alpha (64 bit) for classification, segmentation, and individual object recognition. A four-step preprocessing procedure was conducted for the semantic segmentation of trees (Figure 4).
The first step in preprocessing was the merging of the point cloud data. Using Cloud Compare’s merge function, we combined the photogrammetry data from the UAV with the handheld LiDAR data (Figure 5). Both data collection methods utilized the same coordinate system, and GPS was corrected through RTK equipment prior to data collection to minimize errors. During the merging process, duplicate points at the same location were removed to reduce potential issues that could arise during the semantic segmentation of individual trees.
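As a concrete illustration of this merging step, the sketch below concatenates the two clouds and drops near-duplicate points with a k-d tree radius check. It is a minimal sketch using NumPy and SciPy rather than Cloud Compare’s merge function; the function name and the 5 cm duplicate radius are illustrative assumptions, not values reported in the study.

```python
import numpy as np
from scipy.spatial import cKDTree

def merge_point_clouds(uav_pts, lidar_pts, dup_radius=0.05):
    """Concatenate two point clouds given as (N, 3) arrays in the same
    coordinate system, dropping handheld-LiDAR points that duplicate a
    UAV point within dup_radius metres (assumed value)."""
    tree = cKDTree(uav_pts)
    dist, _ = tree.query(lidar_pts, k=1)   # nearest UAV point per LiDAR point
    keep = dist > dup_radius               # keep only non-duplicate points
    return np.vstack([uav_pts, lidar_pts[keep]])
```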
The second step was the removal of noise points. By eliminating noise generated during the capture of point cloud data, we aimed to improve the accuracy of the final analysis. Dynamic objects such as pedestrians and pets captured during handheld LiDAR scanning were designated as noise. The removal process was conducted manually by opening the collected data in Cloud Compare and removing the points located on walkways.
The third step was to separate points identified as terrain from the point cloud data and classify them into digital terrain models (DTMs) or digital surface models (DSMs). To reference the terrain information of the study area, the digital terrain model was obtained from the National Geographic Information Institute. The DTM was provided in shapefile (.shp) format and converted to raster format (.tif) using ArcGIS Pro v3. The raster resolution was set to 1 m, and each grid contained the terrain’s elevation values. To distinguish between ground and non-ground elements, the elevation difference between the collected point cloud data and the DTM was calculated. Cloud Compare’s Cloud/Mesh Dist. function was used to calculate the distance between the point cloud and the DTM. This distance represents the relative elevation from the ground, allowing us to differentiate ground from non-ground elements (e.g., buildings, vegetation). Points with an elevation difference of 0.3 m or more from the DTM were considered non-ground elements and were filtered out. The point cloud data were then separated into ground (DTM) and non-ground (DSM) elements and saved accordingly.
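The elevation-difference test described above can be expressed compactly in code. The following sketch assumes the 1 m DTM raster has already been loaded as a NumPy array and applies the study’s 0.3 m threshold; the nearest-cell lookup and the function signature are simplifying assumptions, since the study performed this step with Cloud Compare’s Cloud/Mesh Dist. function.

```python
import numpy as np

def split_ground(points, dtm, origin, cell=1.0, threshold=0.3):
    """Split a merged point cloud into ground and non-ground points by the
    height of each point above a DTM raster (nearest-cell lookup).

    points    : (N, 3) array of x, y, z in the DTM's coordinate system
    dtm       : 2D array of terrain elevations on a `cell`-metre grid
    origin    : (x0, y0) of the raster's lower-left corner
    threshold : points this far (m) or more above the terrain are non-ground
    """
    cols = ((points[:, 0] - origin[0]) / cell).astype(int)
    rows = ((points[:, 1] - origin[1]) / cell).astype(int)
    rows = np.clip(rows, 0, dtm.shape[0] - 1)
    cols = np.clip(cols, 0, dtm.shape[1] - 1)
    above = points[:, 2] - dtm[rows, cols]   # height above ground per point
    nonground = above >= threshold
    return points[~nonground], points[nonground]   # (ground/DTM, DSM)
```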
The fourth step involved using the point cloud data classified as DSM to distinguish between data representing trees and those representing structures. The data related to structures was removed to focus on the semantic segmentation of trees. This process was performed manually, and structures within the park, such as streetlights and signposts, were removed.

2.4. Semantic Segmentation of Individual Trees

This study analyzed the green structures of the target area and classified them into three types (Figure 6). Evaluation zones were established for each type. The green structures were categorized as follows: areas where only canopy trees are planted and the canopies do not overlap (Simple-Structure), areas where only canopy trees are planted but the canopies overlap (Narrow-Structure), and areas where both canopy trees and understory vegetation are planted, resulting in a congested structure (Congested-Structure) (Table 2).
The Simple-Structure consists only of canopy trees, with a wide spacing between them, ensuring that tree crowns do not overlap. In this structure, it is visually clear that each tree stands individually, and the tree density is low enough for manual tree separation. The average distance between trees is 11 m, and there are 0.9 trees per 100 square meters. Although some trees with low heights are present, they can still be clearly distinguished.
The Narrow-Structure also consists only of canopy trees, but the distance between them is much shorter, causing the tree crowns to overlap. In this structure, the distinction between the crowns is not clear. The average distance between trees is about 5 m, and there are 2.2 trees per 100 square meters.
The Congested-Structure is a type where canopy trees and shrubs coexist, resulting in the highest tree density. Not only are the tree crowns difficult to distinguish, but the trunks are also hard to differentiate due to the dense presence of shrubs. The average distance between trees is approximately 3 m, and there are 3 trees per 100 square meters.
During the phase dedicated to individual tree segmentation, the point cloud library (PCL), an open-source library specifically designed for the processing of point cloud and 3D data, was harnessed. The PCL includes algorithms for filtering, feature estimation, surface reconstruction, model fitting, object recognition, and segmentation. These algorithms are categorized under labels, such as filters, features, key points, registration, kdtree, octree, segmentation, surface, and visualization. The PCL can handle various point cloud data formats, not just the ‘.pcd’ format but also others, such as ‘.xyz’ and ‘.las’.
The Euclidean cluster extraction algorithm was used to individually segment trees. This method is effective in distinguishing individual objects by grouping nearby points. In PCL, a kdtree was used to find the neighboring points for each point, enabling the clustering process. After the neighbor search, the Euclidean cluster extraction algorithm calculated the distance between points and grouped neighboring points into clusters based on each point. The maximum allowable distance between points was set to 0.3 m, considering the proximity or overlapping nature of green structures. This conservative threshold ensured proper segmentation. The minimum cluster size, which helps to identify small clusters as noise, was set to 100, considering the size of the trees and the point density of the point cloud.
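The study ran this step with PCL’s Euclidean cluster extraction; the sketch below reimplements the same region-growing scheme in Python with a SciPy k-d tree so the logic is visible. The 0.3 m tolerance and minimum cluster size of 100 are the parameter values reported above; everything else is an illustrative reimplementation, not the PCL code itself.

```python
import numpy as np
from collections import deque
from scipy.spatial import cKDTree

def euclidean_clusters(points, tolerance=0.3, min_size=100):
    """Region-growing Euclidean clustering: a cluster absorbs every point
    within `tolerance` (m) of any point already in it; clusters smaller
    than `min_size` are discarded as noise."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    clusters = []
    for seed in range(len(points)):
        if not unvisited[seed]:
            continue
        unvisited[seed] = False
        queue, members = deque([seed]), [seed]
        while queue:
            idx = queue.popleft()
            # all points within the tolerance of the current point
            for nb in tree.query_ball_point(points[idx], tolerance):
                if unvisited[nb]:
                    unvisited[nb] = False
                    queue.append(nb)
                    members.append(nb)
        if len(members) >= min_size:
            clusters.append(np.asarray(members))
    return clusters
```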

3. Results

3.1. Data Collection

3.1.1. UAV Photogrammetric Data

This study employed UAVs and Pix4D to collect 3D point cloud data and processed the data using Cloud Compare (Figure 7). The point cloud data acquired by the UAV, integrated with color information from its attached red, green, and blue (RGB) camera, vividly replicates the true colors of the subject area. The integration of color information aids in understanding area characteristics. Due to the high altitude and downward-facing capture approach of the UAV, it is efficient at extracting point cloud data for flat objects, such as building rooftops, upper parts of structures, tree canopies, plazas, and parking lots. However, when examining the point cloud data from an oblique perspective, there are limitations in capturing data for such areas as the sides and facades of buildings or the lower parts of trees, which are inaccessible from the capture angle of the UAV. In the context of UAV photogrammetry, if an image is not captured or an error occurs during the image alignment process, the point cloud data appears to be missing, as indicated by the red dotted line in the figure below. This reduced point density can pose challenges in subsequent processes, such as noise removal, merging, separation, and segmentation, making it challenging to extract complete data forms.

3.1.2. Terrestrial Laser-Scanning Data

In this study, 3D point cloud data were also collected using the handheld LiDAR MapTorch, and the 3D data were processed using Cloud Compare (Figure 8). The point cloud data acquired by the handheld LiDAR have a high point density, allowing for a precise 3D digital representation of various objects that comprise the space, such as buildings, trees, and structures. However, the handheld LiDAR has a limited detection range and a detection angle of about 30°. Thus, as the distance increases for vertically oriented objects, such as trees, the number and density of point cloud data decrease toward the top. The outcome leads to challenges in extracting complete data forms during subsequent processes, such as noise removal, merging, separation, and segmentation, similar to the challenges facing UAVs.

3.1.3. Comparison of Point Cloud Data

To examine the characteristics of the point cloud data acquired by the UAV and handheld LiDAR, we compared the data collected from the same location. Because UAVs acquire information from higher altitudes, they capture the top parts of objects, such as buildings or trees, with clarity. However, they produce a lower point density, making it challenging to represent the shape of target objects accurately, and no point cloud data are extracted from the sides and bottoms of objects that are inaccessible for photo capture. On the positive side, UAVs can acquire point cloud data over a vast area in a short time. Potential problems include areas obscured by overhead structures, shading caused by shadows, and extremely low point density, which can make it difficult to discern the shape of the target objects.
In contrast, the handheld LiDAR can acquire up to 300,000 points per second, offering a more precise representation of target object shapes due to its high point density. However, because of its limited scanning angle of 15° upward and downward, it cannot collect point cloud data from the top portions of vertical objects, such as buildings and trees. The handheld LiDAR allows for the precise acquisition of point cloud data in any location accessible to the operator, making it possible to digitally convert shrubs, bushes, and small structures. However, it faces challenges in capturing data from the rear side of hard-to-reach areas, such as long fences.
The UAV is adept at acquiring point cloud data from the upper parts of objects such as buildings, trees, and structures. A handheld LiDAR excels in collecting data from the sides, facades, and areas obstructed or difficult to reach by UAVs. Therefore, by merging data from the UAV and handheld LiDAR, a more precise 3D digital transformation can be achieved. Figure 9 illustrates the results of merging the UAV point cloud data and handheld LiDAR data using Cloud Compare. The UAV point cloud data, supporting RGB colors (depicted in color), accurately captures the tops of the trees. In contrast, the handheld LiDAR data, which lacks RGB support (in grayscale), precisely captures the sides and facades of the trees.

3.2. Segmentation of Tree Data and Nontree Data

For the semantic segmentation of point cloud data, an initial step is required to separate the ground (DTM) and aboveground (DSM) components (Figure 10). The ground point cloud data for this study area are shown below. The point cloud data acquired in this study, initially an intricate mixture of numerous points, are meshed or pixelated and then separated and segmented into multiple objects with different meanings, enabling the precise analysis desired by the user.
The aboveground component refers to all objects on the surface extracted separately from the ground within the study area. As it is the target for semantic segmentation, meticulous separation is demanded. During the separation process of the ground and aboveground components, if point cloud data are omitted or connections are broken, errors can occur in the extraction process of individual objects on the surface, such as buildings, trees, and structures.
In this study, the primary emphasis was on the data of trees. As such, we separated the buildings and structures from the DSM and undertook additional segmentation processes to isolate only the tree-related DSM data. We further refined this process by segmenting detail within specific areas determined by density.

3.3. Semantic Segmentation

3.3.1. Simple-Structure Sector

In the Simple-Structure sector, there are 12 trees. When semantically segmented, 52 segments were generated (Figure 11). Among these, 12 segments preserved the full form of the trees, while the remaining segments represented only parts of the trees. Even among the 12 segments representing the full form of the trees, it was difficult to consider the segmentation accurate.
Most errors occurred in the branches. It is believed that these errors were caused by the low point density in the areas connecting the branches to the canopy. The high leaf density in the canopy resulted in insufficient data for the branch–canopy connection, which in turn caused the Euclidean segmentation to fail in meeting the maximum distance criterion for recognizing the same object. For tall trees, the limitations of the ground-based LiDAR’s capture angle and the aerial photogrammetry’s inability to capture the lower parts of the tree canopy resulted in a low point density in the upper parts. As a result, the upper parts of the trees were not segmented as part of the same tree (Figure 12).
The segmentation algorithm tends to disregard areas with a sparse point density, interpreting them as noise. Lowering the threshold for what is considered noise could lead to more ambiguous segmentation. Thus, more comprehensive data must be gathered to capture the missing upper data. The algorithm, which typically segments based on the trunk, might encounter challenges progressing further due to missing branch data, resulting in the observed omission of the upper data. Conducting this data collection and segmentation during winter, when trees shed their leaves, might help mitigate these problems.
There were six cases where trees were successfully classified as complete objects, with the full form of the trees intact. Therefore, in the Simple-Structure sector, where there was a total of 12 trees, the success rate for semantic tree segmentation was 50%. For trees that were accurately segmented, the tree object data were used to measure the tree’s specifications. For example, one tree object was measured with a tree height of 5.98 m, trunk height of 2.93 m, crown width of 7.51 m, and a diameter at breast height (DBH) of 18 cm (Figure 13).
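Once a tree is segmented as a complete object, specifications such as those above can be read directly from the cluster’s geometry. The sketch below shows one plausible way to derive them from a segmented (N, 3) cluster; the slab-based DBH estimate and all function and parameter names are illustrative assumptions, as the study does not describe how these measurements were automated.

```python
import numpy as np

def tree_metrics(cluster, dbh_height=1.3, slab=0.1):
    """Estimate basic specifications from one segmented tree cluster
    ((N, 3) array). DBH is approximated from the horizontal extent of a
    thin slab of points around breast height (1.3 m above the cluster
    base), which assumes the trunk is isolated at that height."""
    z0 = cluster[:, 2].min()                          # cluster base elevation
    height = cluster[:, 2].max() - z0                 # tree height
    crown_width = max(np.ptp(cluster[:, 0]), np.ptp(cluster[:, 1]))
    bh = cluster[np.abs(cluster[:, 2] - (z0 + dbh_height)) < slab / 2]
    dbh = max(np.ptp(bh[:, 0]), np.ptp(bh[:, 1])) if len(bh) else float("nan")
    return {"height_m": height, "crown_width_m": crown_width, "dbh_m": dbh}
```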

3.3.2. Narrow-Structure Sector

The Narrow-Structure sector (2-1) is similar to the Simple-Structure sector in that no undergrowth exists. However, the placement and spacing of the trees are irregular, and with increasing height, it becomes challenging to visually distinguish the canopies (Figure 14). In the Narrow-Structure sector (2-1), where most trees have a higher base height, the trunks of all trees were accurately segmented using the segmentation algorithm. This sector contains 30 trees, and 42 tree object segmentation data were generated. Among these, 27 segments represent the complete form of the trees, while the remaining 15 segments are error data in which the branches of the trees were classified as separate objects.
However, the segmentation was not entirely accurate when examining the canopy from a planar view. While the trunks (which maintain a clear distance from adjacent trees) were precisely segmented, the intertwined point cloud data of the canopies resulted in a relatively lower accuracy. As an example (Figure 15), there were cases where the tree canopies were intertwined, leading to the canopy of one tree being recognized as part of another tree’s object. Additionally, there were instances where two or three trees were not properly segmented and were grouped into a single tree object.
This result suggests that capturing additional data during winter could enhance accuracy by obtaining clearer point cloud data for the branches, similar to the Simple-Structure sector (Figure 16). Trees and structures with height values were segmented as individual entities. When separating files containing large structures, such as buildings, even architectural elements were segmented.
In the Narrow-Structure sector (2-2), there are 13 trees, and 31 tree segmentation data were generated. The full form of the trees could be identified in 13 files, while the remaining 18 segments were error data where the branches were classified as separate objects. Additionally, a major issue in this sector was the high density of the tree canopies. In the 13 files where the trees were segmented, the canopy was hardly included, and most of the tree canopies were classified as noise, resulting in their exclusion from the tree data.
The Narrow-Structure sector (2-2) lacks undergrowth, and the trunks of the trees can be visually identified. However, from the branches upward, it becomes challenging to distinguish them by sight. The point cloud data for the canopy were not fully captured, and the density of the canopy’s point cloud data is noticeably lower than in sector (2-1). In the following figure, the blue cluster data could not be segmented and were left behind. The remaining cluster data indicate that data from the branches upward were not segmented.
During the semantic segmentation in the Narrow-Structure sector (2-2), we set a smaller voxel size in the algorithm parameters to minimize the influence of the unclear canopy. A smaller voxel size allows for more detailed classification, but it also led to a phenomenon in which trunks branching close to the ground were not recognized as a single tree and were classified individually. Nevertheless, reducing the voxel size as much as possible is preferable for enhancing accuracy.
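For reference, the voxel size in this context controls how coarsely the cloud is rasterized before clustering. The sketch below is a generic voxel-grid filter illustrating the trade-off; it is not the study’s algorithm, and the 0.1 m default is an assumed value.

```python
import numpy as np

def voxel_downsample(points, voxel=0.1):
    """Keep one representative point per occupied voxel of edge length
    `voxel` (m). A smaller voxel preserves more structural detail (finer
    classification) at the cost of more points to cluster."""
    keys = np.floor(points / voxel).astype(np.int64)   # integer voxel index
    _, first_idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first_idx)]
```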
As a result of the semantic segmentation in the Narrow-Structure sector, in sector 2-1, 8 out of 42 trees were accurately segmented, showing a success rate of approximately 19%. In sector 2-2, 3 out of 13 trees were accurately segmented, resulting in a success rate of around 23%. Additionally, one representative tree was selected for measurement, and the tree specifications were as follows: tree height of 6.94 m, trunk height of 2.99 m, crown width of 7.83 m, and diameter at breast height (DBH) of 52.4 cm (Figure 17).

3.3.3. Congested-Structure Sector

The Congested-Structure sector differs from the other sectors in that it has undergrowth consisting of shrubs, forming a multilayered vegetation structure (Figure 18). Additionally, visually determining the division points between shrubs and trees is challenging, and the point cloud data for the canopy were either sparsely populated or had many missing parts, making it difficult to accurately discern the tree shapes.
When semantic segmentation was conducted in the Congested-Structure sector (3-1), the accuracy of the segmentation, even at the trunk level, was lower than in the Narrow-Structure sectors. Small trees, similar in height to shrubs, were not recognized as trees if their trunks were not clearly defined and were therefore not segmented. Furthermore, trees with relatively short trunks were not segmented because the algorithm could not detect their vertical data. However, in areas where shrubs did not obscure the trunks, the tree trunk data were clear, and segmentation was relatively successful.
A very high concentration of trees characterizes the Congested-Structure sector (3-2) (Figure 19). Unlike the Narrow-Structure sectors, the undergrowth is not limited to specific areas but is distributed throughout the entire sector. In the point cloud state, visually distinguishing the vegetation is exceptionally challenging. During the process of separating the ground from the non-ground areas, the abundance of undergrowth caused the ground segmentation algorithm to misidentify parts of the shrubs as the ground. Such problems require manual intervention to refine the data.
When segmenting using the default settings of the segmentation algorithm, semantic segmentation was possible for areas with distinct stem data, but point clouds intertwined with shrubs were not segmented. The number of unsegmented and missing point cloud data was noticeably higher compared to other sectors. While the previously mentioned reasons apply, the presence of more undergrowth might make data collection within the sector challenging, preventing the acquisition of detailed data compared to other sectors.
In the same Congested-Structure sector, when adjusting the algorithm settings for semantic segmentation, a higher segmentation accuracy could be achieved by tweaking about five variables, specifically the voxel size and threshold values. By adjusting these variable values, a better segmentation outcome than before can be anticipated, and the extent of missing canopy point cloud data was reduced. If missing data prevents the identification of connected trees, automatic classification becomes impossible. While acquiring additional point cloud data is crucial, finding the optimal values for the variables is a critical factor in improving the results.
As a result of the Congested-Structure segmentation, sector 3-1 contained 37 trees, and a total of 41 segmentation data were generated. Among them, 24 meaningful segmentation data were produced, with only 1 tree being accurately segmented, resulting in a success rate of 2%. In sector 3-2, there were 30 trees, and a total of 71 segmentation data were generated. Out of these, 25 meaningful segmentation data were produced, with 2 trees accurately segmented, resulting in a success rate of 6%. Additionally, one representative tree was selected for measurement, and the results were as follows: tree height of 5.05 m, trunk height of 2.47 m, crown width of 5.16 m, and diameter at breast height (DBH) of 38.2 cm (Figure 20).

4. Discussion

Wieser et al. [43] compared the accuracy of the UAV laser-scanning method (ULS) and the terrestrial laser-scanning method (TLS). They determined that ULS, being an aerial-based method, is advantageous for capturing the canopy of trees, whereas TLS, capturing from the ground, can more accurately detect the stems of trees but tends to overestimate tree height. Beyer et al. [44] employed ground-based LiDAR for structural tree modeling. They identified that if canopy data are deficient due to the density of tree leaves, tree modeling becomes challenging, and proposed that this limitation can be addressed with additional aerial LiDAR images using drones, enhancing the 3D leaf density and allowing for direct comparison. These findings demonstrate that this study’s approach of using both ULS and TLS is suitable for 3D tree scanning. This study compared point cloud data obtained from ULS with that from TLS, discerning the unique characteristics, advantages, and disadvantages of each method. Subsequently, by integrating both types of data, this study addressed the limitations and established foundational data for modeling and measurement.
However, leveraging the combined data presented challenges. Since LiDAR operates on light, it cannot observe areas unreachable by light. Even after expanding the LiDAR’s observation scope using both terrestrial and aerial means, parts obscured by trees posed measurement challenges. In densely wooded areas, individual recognition processes faltered, and tree-obscured sectors lacked data collection. Moreover, high leaf density meant that leaves concealed branches, resulting in the separation of the canopy and trunk during the object segmentation phase. To adjust for leaf density, the study proposed gathering additional data during winter, when deciduous trees shed their leaves. Neuville et al. [45] tested this approach in temperate deciduous canopy forests during both leaf-on and leaf-off seasons. The outcomes revealed that the suggested method could detect up to 82% of tree stems with 98% accuracy. It also recognized the challenges of distinguishing between the canopy and tree stems in the leaf-on season. Anticipated future research will validate the 3D modeling process, incorporating seasonal changes, and will establish modeling data.
For tree recognition in complex green structures, it is necessary to adjust the object recognition settings of the segmentation algorithm. By reducing the voxel size and object recognition threshold, a higher level of segmentation can be achieved. This approach is expected to lead to improved tree recognition success. However, due to the nature of LiDAR, which uses light, recognizing parts obscured by dense vegetation is not possible in complex green structures. Therefore, it is crucial to determine the appropriate tree density for conducting tree recognition using LiDAR. Although this study did not perform a detailed adjustment of tree density, future research should include specific discussions regarding the tree density suitable for applying remote sensing investigations.

5. Conclusions

The study was conducted in the National Debt Repayment Movement Memorial Park in Daegu Metropolitan City, using point cloud data extracted from handheld LiDAR and UAV imagery. The aim was to convert the urban forest and surrounding areas into 3D digital data and to recognize, segment, and analyze the green structure of individual trees. For the primary 3D digital conversion of the target area, point cloud data were acquired using UAVs and handheld LiDAR and merged using Cloud Compare. Furthermore, the point cloud data were separated into ground and non-ground data, and individual objectification was attempted on trees in the non-ground data.
Through the semantic segmentation process that recognizes each tree as an individual object, it is possible to grasp the detailed characteristics of individual trees and the urban forest, a collection of individual trees. This approach aids in easily understanding the specifications of individual trees, such as the height, canopy width, and breast height diameter, quantifying the function of the urban forest, and predicting its changes. The 3D digital conversion process for urban forest investigation, analysis, and management proceeds in the order of merging and editing point cloud data, noise removal and point cloud data cleanup, separation of ground (DTM) and non-ground (DSM) data, file format conversion, semantic segmentation manually or using an algorithm, file storage and organization, analysis of individual trees by density type, and extraction of analysis files. Manual and automated methods should be combined, using algorithms at each stage.
The semantic segmentation of trees was conducted by dividing the park’s green structures into three types of zones, and the work was carried out for the trees within these zones. The green structures consist of Simple-Structure, Narrow-Structure, and Congested-Structure. In the Simple-Structure zone, only canopy trees exist, and the spacing between trees is wide enough that their canopies do not overlap. In the Narrow-Structure zone, only canopy trees are present, but the spacing between trees is narrow, causing their canopies to overlap. The Congested-Structure zone contains both canopy trees and shrubs, and the tree density is high, resulting in overlapping canopies.
As a result of the semantic segmentation performed according to the green structures, the Simple-Structure zone showed a success rate of 50%, the Narrow-Structure zone an average success rate of 21%, and the Congested-Structure zone a success rate of 4%. The denser the trees and the more their canopies overlapped, the lower the success rate of semantic segmentation. This issue arises because the tree density exceeds the point recognition distance used in the segmentation algorithm. While reducing the point recognition distance might allow segmentation to be performed to some extent, it could also lead to over-segmentation. Therefore, it is necessary to consider the appropriate point recognition distance and adjust the algorithm’s variable settings according to the green structure.
Additionally, branches obscured by tree canopies were difficult to capture using LiDAR imaging, which resulted in lower point density for the branches and prevented accurate segmentation. To address this problem, supplementary imaging during the leafless season, when trees have shed their leaves, could be employed to capture additional points.
This study identified the need to understand the appropriate green structures for tree surveys in urban forests using remote sensing, and emphasized the importance of adjusting the variables of the semantic segmentation algorithm according to the green structures and considering imaging conditions. However, the study did not confirm an increase in the success rate of tree objectification based on specific variable adjustments or improvements in imaging techniques. Therefore, further research is needed to develop specific application methods, such as optimal variable settings based on different green structures.
The direction of this study involved utilizing remote sensing technology to understand the green structures of urban forests and collect data on tree objects, ultimately determining whether tree object images that could be applied to digital twins could be created using this technology. Digital twins assist us in building a virtual representation of the real environment, allowing us to conduct various simulations. To test and verify the multiple functions of urban forests, it is necessary to establish data that can accurately replicate real measurements, and remote sensing technology was considered an efficient method for building such data. Therefore, based on the results of this study, we concluded that remote sensing technology should be applied considering the complexity of the green structures. This conclusion can serve as a basis for determining where remote sensing technology should be applied in future tree surveys aimed at constructing digital twins.

Author Contributions

Conceptualization, J.-H.E.; methodology, U.-J.S.; software, K.-J.C.; investigation, U.-J.S. and K.-J.C.; data curation, K.-J.C.; writing—original draft preparation, U.-J.S.; writing—review and editing, U.-J.S.; supervision, J.-H.E.; project administration, J.-H.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ‘Korea Forest Service (Korea Forestry Promotion Institute), Project no. 2022428B10-2224-0802’ and ‘Kyungpook National University Development Project Research Fund, 2022’.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to legal reasons.

Conflicts of Interest

Author Kyung-Jin Chung was employed by the company Huron Network Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Kihwan, S.; Changwha, O. Policy Direction and Institutional Basis for National Digital Twin; Korea Research Institute for Human Settlements: Sejong, Republic of Korea, 2020. [Google Scholar]
  2. Park, J.; Choi, W.; Jeong, T.; Seo, J. Digital twins and land management in South Korea. Land Use Policy 2023, 124, 106442. [Google Scholar] [CrossRef]
  3. Schrotter, G.; Hürzeler, C. The digital twin of the city of Zurich for urban planning. PFG–J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 99–112. [Google Scholar] [CrossRef]
  4. Lehtola, V.V.; Koeva, M.; Elberink, S.O.; Raposo, P.; Virtanen, J.P.; Vahdatikhaki, F.; Borsci, S. Digital twin of a city: Review of technology serving city needs. Int. J. Appl. Earth Obs. Geoinf. 2022, 114, 102915. [Google Scholar] [CrossRef]
  5. Zhao, D.; Li, X.; Wang, X.; Shen, X.; Gao, W. Applying digital twins to research the relationship between urban expansion and vegetation coverage: A case study of natural preserve. Front. Plant Sci. 2022, 13, 840471. [Google Scholar] [CrossRef]
  6. Choi, I.H.; Nam, S.K.; Kim, S.Y.; Lee, D.G. Forest digital twin implementation study for 3D forest geospatial information service. Korean J. Remote Sens. 2023, 39, 1165–1172. [Google Scholar]
  7. Qiu, H.; Zhang, H.; Lei, K.; Zhang, H.; Hu, X. Forest digital twin: A new tool for forest management practices based on Spatio-Temporal Data, 3D simulation Engine, and intelligent interactive environment. Comput. Electron. Agric. 2023, 215, 108416. [Google Scholar] [CrossRef]
  8. Lines, E.R.; Fischer, F.J.; Owen HJ, F.; Jucker, T. The shape of trees: Reimagining forest ecology in three dimensions with remote sensing. J. Ecol. 2022, 110, 1730–1745. [Google Scholar] [CrossRef]
  9. Münzinger, M.; Prechtel, N.; Behnisch, M. Mapping the urban forest in detail: From LiDAR point clouds to 3D tree models. Urban For. Urban Green. 2022, 74, 127637. [Google Scholar] [CrossRef]
  10. Buonocore, L.; Yates, J.; Valentini, R. A proposal for a forest digital twin framework and its perspectives. Forests 2022, 13, 498. [Google Scholar] [CrossRef]
  11. Morgenroth, J.; Östberg, J. Measuring and monitoring urban trees and urban forests. In Routledge Handbook of Urban Forestry; Routledge: London, UK, 2017; pp. 33–48. [Google Scholar]
  12. Baines, O.; Wilkes, P.; Disney, M. Quantifying urban forest structure with open-access remote sensing data sets. Urban For. Urban Green. 2020, 50, 126653. [Google Scholar] [CrossRef]
  13. Alonzo, M.; McFadden, J.P.; Nowak, D.J.; Roberts, D.A. Mapping urban forest structure and function using hyperspectral imagery and lidar data. Urban For. Urban Green. 2016, 17, 135–147. [Google Scholar] [CrossRef]
  14. Park, Y.; Guldmann, J.M.; Liu, D. Impacts of tree and building shades on the urban heat island: Combining remote sensing, 3D digital city and spatial regression approaches. Comput. Environ. Urban Syst. 2021, 88, 101655. [Google Scholar] [CrossRef]
  15. Nitoslawski, S.A.; Galle, N.J.; Van Den Bosch, C.K.; Steenberg, J.W. Smarter ecosystems for smarter cities? A review of trends, technologies, and turning points for smart urban forestry. Sustain. Cities Soc. 2019, 51, 101770. [Google Scholar] [CrossRef]
  16. Panagiotidis, D.; Abdollahnejad, A.; Surový, P.; Chiteculo, V. Determining tree height and crown diameter from high-resolution UAV imagery. Int. J. Remote Sens. 2017, 38, 2392–2410. [Google Scholar] [CrossRef]
  17. Feng, L.; Chen, S.; Zhang, C.; Zhang, Y.; He, Y. A comprehensive review on recent applications of unmanned aerial vehicle remote sensing with various sensors for high-throughput plant phenotyping. Comput. Electron. Agric. 2021, 182, 106033. [Google Scholar] [CrossRef]
  18. Navarro, A.; Young, M.; Allan, B.; Carnell, P.; Macreadie, P.; Ierodiaconou, D. The application of Unmanned Aerial Vehicles (UAVs) to estimate above-ground biomass of mangrove ecosystems. Remote Sens. Environ. 2020, 242, 111747. [Google Scholar] [CrossRef]
  19. Krause, S.; Sanders, T.G.; Mund, J.P.; Greve, K. UAV-based photogrammetric tree height measurement for intensive forest monitoring. Remote Sens. 2019, 11, 758. [Google Scholar] [CrossRef]
  20. Weng, Q. Remote sensing of impervious surfaces in the urban areas: Requirements, methods, and trends. Remote Sens. Environ. 2012, 117, 34–49. [Google Scholar] [CrossRef]
  21. Feng, Q.; Liu, J.; Gong, J. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sens. 2015, 7, 1074–1094. [Google Scholar] [CrossRef]
  22. Pintar, A.M.; Skudnik, M. Identifying Even-and Uneven-Aged Forest Stands Using Low-Resolution Nationwide Lidar Data. Forests 2024, 15, 1407. [Google Scholar] [CrossRef]
  23. Zięba-Kulawik, K.; Wężyk, P. Monitoring 3D changes in urban forests using landscape metrics analyses based on multi-temporal remote sensing data. Land 2022, 11, 883. [Google Scholar] [CrossRef]
  24. Shojanoori, R.; Shafri, H.Z. Review on the use of remote sensing for urban forest monitoring. Arboric. Urban For. (AUF) 2016, 42, 400–417. [Google Scholar] [CrossRef]
  25. Wang, X.; Wang, Y.; Zhou, C.; Yin, L.; Feng, X. Urban forest monitoring based on multiple features at the single tree scale by UAV. Urban For. Urban Green. 2021, 58, 126958. [Google Scholar] [CrossRef]
  26. Deng, S.; Jing, S.; Zhao, H. A Hybrid Method for Individual Tree Detection in Broadleaf Forests Based on UAV-LiDAR Data and Multistage 3D Structure Analysis. Forests 2024, 15, 1043. [Google Scholar] [CrossRef]
  27. Weinmann, M.; Weinmann, M.; Mallet, C.; Brédif, M. A classification-segmentation framework for the detection of individual trees in dense MMS point cloud data acquired in urban areas. Remote Sens. 2017, 9, 277. [Google Scholar] [CrossRef]
  28. Panagiotidis, D.; Abdollahnejad, A.; Slavik, M. 3D point cloud fusion from UAV and TLS to assess temperate managed forest structures. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102917. [Google Scholar] [CrossRef]
  29. Tigges, J.; Churkina, G.; Lakes, T. Modeling above-ground carbon storage: A remote sensing approach to derive individual tree species information in urban settings. Urban Ecosyst. 2017, 20, 97–111. [Google Scholar] [CrossRef]
  30. Kc, Y.B.; Liu, Q.; Saud, P.; Gaire, D.; Adhikari, H. Estimation of Above-Ground Forest Biomass in Nepal by the Use of Airborne LiDAR, and Forest Inventory Data. Land 2024, 13, 213. [Google Scholar] [CrossRef]
  31. Hilker, T.; Lyapustin, A.I.; Tucker, C.J.; Sellers, P.J.; Hall, F.G.; Wang, Y. Remote sensing of tropical ecosystems: Atmospheric correction and cloud masking matter. Remote Sens. Environ. 2012, 127, 370–384. [Google Scholar] [CrossRef]
  32. Larue, E.A.; Wagner, F.W.; Fei, S.; Atkins, J.W.; Fahey, R.T.; Gough, C.M.; Hardiman, B.S. Compatibility of aerial and terrestrial LiDAR for quantifying forest structural diversity. Remote Sens. 2020, 12, 1407. [Google Scholar] [CrossRef]
  33. Jung, S.E.; Kwak, D.A.; Park, T.; Lee, W.K.; Yoo, S. Estimating crown variables of individual trees using airborne and terrestrial laser scanners. Remote Sens. 2011, 3, 2346–2363. [Google Scholar] [CrossRef]
  34. Lindberg, E.; Holmgren, J. Individual tree crown methods for 3D data from remote sensing. Curr. For. Rep. 2017, 3, 19–31. [Google Scholar] [CrossRef]
  35. Chen, J.; Zhao, D.; Zheng, Z.; Xu, C.; Pang, Y.; Zeng, Y. A clustering-based automatic registration of UAV and terrestrial LiDAR forest point clouds. Comput. Electron. Agric. 2024, 217, 108648. [Google Scholar] [CrossRef]
  36. Sun, Y.; Jin, X.; Pukkala, T.; Li, F. Predicting individual tree diameter of larch (Larix olgensis) from UAV-LiDAR data using six different algorithms. Remote Sens. 2022, 14, 1125. [Google Scholar] [CrossRef]
  37. McRoberts, R.E.; Næsset, E.; Gobakken, T. Optimizing the k-Nearest Neighbors technique for estimating forest aboveground biomass using airborne laser scanning data. Remote Sens. Environ. 2015, 163, 13–22. [Google Scholar] [CrossRef]
  38. Wulder, M.; Niemann, K.O.; Goodenough, D.G. Local maximum filtering for the extraction of tree locations and basal area from high spatial resolution imagery. Remote Sens. Environ. 2000, 73, 103–114. [Google Scholar] [CrossRef]
  39. Michałowska, M.; Rapiński, J.; Janicka, J. Tree species classification on images from airborne mobile mapping using ML. NET. Eur. J. Remote Sens. 2023, 56, 2271651. [Google Scholar] [CrossRef]
  40. Li, J.; Wu, H.; Xiao, Z.; Lu, H. 3D modeling of laser-scanned trees based on skeleton refined extraction. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102943. [Google Scholar] [CrossRef]
  41. Raumonen, P.; Kaasalainen, M.; Åkerblom, M.; Kaasalainen, S.; Kaartinen, H.; Vastaranta, M.; Holopainen, M.; Disney, M.; Lewis, P. Fast automatic precision tree models from terrestrial laser scanner data. Remote Sens. 2013, 5, 491–520. [Google Scholar] [CrossRef]
  42. Tarsha Kurdi, F.; Lewandowicz, E.; Gharineiat, Z.; Shan, J. Accurate Calculation of Upper Biomass Volume of Single Trees Using Matrixial Representation of LiDAR Data. Remote Sens. 2024, 16, 2220. [Google Scholar] [CrossRef]
  43. Wieser, M.; Mandlburger, G.; Hollaus, M.; Otepka, J.; Glira, P.; Pfeifer, N. A case study of UAS borne laser scanning for measurement of tree stem diameter. Remote Sens. 2017, 9, 1154. [Google Scholar] [CrossRef]
  44. Beyer, R.; Bayer, D.; Letort, V.; Pretzsch, H.; Cournède, P.H. Validation of a functional-structural tree model using terrestrial Lidar data. Ecol. Model. 2017, 357, 55–57. [Google Scholar] [CrossRef]
  45. Neuville, R.; Bates, J.S.; Jonard, F. Estimating forest structure from UAV-mounted LiDAR point cloud using machine learning. Remote Sens. 2021, 13, 352. [Google Scholar] [CrossRef]
Figure 1. Study area.
Figure 2. Remote sensing equipment used for collecting data: (a) Inspire2; (b) MapTorch.
Figure 3. Flight courses.
Figure 4. Process of detecting individual trees through the point cloud data.
Figure 5. Merging of the DTM and point cloud data: (a) 3D view; (b) 2D view with linear polygons representing the DTM; (c) cross-sectional view.
Figure 6. Five sectors categorized by green structure.
Figure 7. Raw data detected from the UAV: (a) top-down view; (b) missing points viewed from a bird’s-eye view.
Figure 8. Raw data detected from the handheld LiDAR: (a) top-down view; (b) missing points viewed from a cross-sectional view.
Figure 9. Results of merging point cloud data collected by each method: (a) top-down view; (b) bird’s-eye view; (c) detailed view.
Figure 10. Preprocessing of segmentation: (a) segmentation of DTM and DSM; (b) segmentation of green space and structures.
Figure 11. Examples of issues encountered during the segmentation of tall trees: (a) tree object segmented by PCL; (b) tree object with missing parts combined.
Figure 12. Segmentation in the Simple-Structure sector.
Figure 13. An example of tree specification measurement: Simple-Structure.
Figure 14. Segmentation in Narrow-Structure sectors: (a) 2-1 sector; (b) 2-2 sector.
Figure 15. An example of issues that arise when segmenting trees with overlapping canopies.
Figure 16. Parts of the tree canopy classified as noise data.
Figure 17. An example of tree specification measurement: Narrow-Structure.
Figure 18. Segmentation in the Congested-Structure (3-1) sector.
Figure 19. Segmentation in the Congested-Structure (3-2) sector: (a) before adjusting the algorithm variables; (b) after adjusting the algorithm variables.
Figure 20. An example of tree specification measurement: Congested-Structure.
Table 1. Specifications of the Velodyne Puck-LITE.

Sensor:
- Velodyne Puck LITE 16-channel laser scanner
- Measures up to 100 m range
- Accuracy: ±3 cm (in general conditions)
- Vertical FOV (field of view): 30° (15° up, 15° down)
- Angular resolution (vertical): 2°
- Horizontal FOV: 360°
- Angular resolution (horizontal): 0.1° to 0.4°
- Rotation rate: 5–10 Hz
- Integrated web server for monitoring and configuration

Laser:
- Class 1, eye safe
- 905 nm wavelength

Mechanical and electrical:
- Max points: up to 300,000 points per second
- Network interface: 100 Mbps (megabits per second)
- UDP packet output
Table 2. Tree planting characteristics by green structure.

Structure | Distance (m) | Density (Trees/100 m²)
Simple-Structure | 11 | 0.9
Narrow-Structure | 5 | 2.2
Congested-Structure | 3 | 3