Article

The Extraction of Vegetation Points from LiDAR Using 3D Fractal Dimension Analyses

1 College of Geographic and Oceanographic Sciences, Nanjing University, Nanjing 210023, China
2 Changjiang River Scientific Research Institute, Changjiang Water Resources Commission, Wuhan 430010, China
3 Jiangsu Center for Collaborative Innovation in Geographical Information Resource Development and Application, Nanjing 210023, China
* Author to whom correspondence should be addressed.
Remote Sens. 2015, 7(8), 10815-10831; https://doi.org/10.3390/rs70810815
Submission received: 4 February 2015 / Revised: 12 August 2015 / Accepted: 14 August 2015 / Published: 21 August 2015

Abstract

Light Detection and Ranging (LiDAR), a high-precision technique for acquiring three-dimensional (3D) surface information, is widely used to study surface vegetation. The extraction of a vegetation point set from the LiDAR point cloud is the basic starting point for vegetation information analysis and an important part of its further processing. To extract the vegetation point set completely, and to describe the distinct spatial morphological characteristics of the various features in a LiDAR point cloud, we used 3D fractal dimensions. We found that every feature type has its own distinctive 3D fractal dimension interval. Based on the 3D fractal dimensions of tall trees, we propose a new method for extracting vegetation from airborne LiDAR data, in which target features are distinguished by their morphological characteristics. The non-ground points obtained by filtering are processed by region growing segmentation, and the morphological characteristics of the segments are evaluated with 3D fractal dimensions to identify the point set for tall trees. Avon, New York, USA, was selected as the study area to test the method, and the results demonstrate its feasibility and efficiency. Moreover, the method uses only the 3D coordinates of the LiDAR point cloud and does not require additional information, such as return intensity, giving it a wide scope of application.


1. Introduction

The extraction and analysis of vegetation information is important in many fields of research, with wide application in biomass and carbon estimation, hydrological modeling, urban ecological assessment, etc. However, obtaining accurate and complete vegetation information is a difficult task [1,2,3,4]. Various techniques currently exist for extracting and surveying vegetation information. They include detailed field surveys of all the vegetation in a district, which provide the most accurate and detailed picture of vegetation distribution and a basis for classification [5]; the use of optical remote sensing images to extract vegetation information based on distinctive spectral and textural features [6,7]; and the use of Light Detection and Ranging (LiDAR) to obtain 3D point cloud data, with a point cloud processing workflow designed to extract vegetation information [8,9,10]. LiDAR offers a fast and high-precision method for obtaining 3D information [11]. Unlike traditional observation techniques, LiDAR acquisitions can capture a large number of ground features and provide accurate spatial locations of surfaces and features. It is now widely used in digital mapping, forest monitoring, resource surveying, etc. [12]. LiDAR has been used for information acquisition and feature description in an increasing number of studies because of its advantages, including high precision, relative independence from environmental conditions, and its ability to capture the vertical structure of surface features [13].
Currently, LiDAR studies focus on feature classification and building information extraction. For example, Yu et al. [14] divided their study area into water, vegetation, and other areas based on hyperspectral images, and extracted buildings according to height and roughness from a LiDAR-derived DSM (Digital Surface Model). Sánchez-Lopera and Lerma [15] used angular classification to differentiate buildings from vegetation and other small objects. For multi-class problems, researchers have used many different approaches. Lafarge and Mallet [16] used an energy function and the Potts model to combine local features and local context. Rutzinger et al. [17,18] presented an object-based method for point cloud classification that combines point clustering with the classification of full-waveform ALS data; this method performed well for vegetation and building detection in urban areas. Additionally, multiple-entity features have been applied to improve both the accuracy and speed of classification from point clouds by using several different entities [19,20].
Moreover, researchers have undertaken exploratory work on the extraction of vegetation information from LiDAR data. Generally, studies focus on the following aspects: (1) combining remote sensing, aerial imagery, and other spatial data with the LiDAR point cloud to extract vegetation information [21,22,23,24]; (2) interpolating or processing the LiDAR point cloud into a raster image, such as a DTM (Digital Terrain Model) or DSM, and processing the data with remote sensing image classification methods to achieve indirect vegetation extraction from the LiDAR point cloud [25,26]; (3) using machine-learning methods for classification, or exploiting traits of the LiDAR data such as full-waveform and multi-echo returns, as the basis of vegetation information extraction [27,28,29,30]. The core idea behind these methods is to borrow remote sensing image classification techniques and apply them to the discrete point set of the LiDAR data. Many studies therefore do not take full advantage of the vectorial 3D spatial distribution characteristics provided by the LiDAR point cloud. Meanwhile, some LiDAR datasets lack full-waveform and multi-echo characteristics and thus do not provide the traits these methods require. A method that extracts vegetation information using only the basic characteristics shared by most types of LiDAR point cloud data would therefore be significant.
There are considerable differences among the morphological structures of different landscape features; for example, trees have rough crowns, buildings consist of smooth planes, and transmission lines present a linear form. These structures are reflected in the LiDAR point cloud: the type of feature determines its 3D distribution structure in the LiDAR data. Accordingly, spatial morphology in LiDAR data can be an important factor for distinguishing vegetation. In our study, we therefore explore a new method that takes the 3D spatial morphology of tall trees as the distinguishing factor and uses the 3D fractal dimension to describe spatial morphology for vegetation extraction. The 3D fractal dimension, based on fractal theory, is a statistical index describing the irregularity and roughness of ground features. Using 3D fractal dimensions, vegetation can be extracted solely on the basis of its spatial morphology. Based on these considerations, we first studied the overall morphological structure of ground features, and then used 3D fractal dimensions as indices to analyze their morphology and distinguish between different feature types. We present a new method for extracting tall trees. First, we pre-process the LiDAR point cloud by filtering to obtain the non-ground points. Second, we partition the point set by region growing segmentation. Finally, we evaluate the morphological characteristics of the segmented objects using 3D fractal dimensions to determine which segments are vegetation and to obtain the point set for tall trees.

2. Study Area and Data Source

The study area (Figure 1) is located in the town of Avon in New York, USA, and covers approximately 0.309 km². Tall trees, shrubs, buildings, power lines, and other features can be found within it. With its abundant and diverse surface features, this area is well suited to extracting and analyzing vegetation information.
Figure 1. Map and image of the study area.
The LiDAR point cloud data (Figure 2) were obtained from the freely available data provided by the Rochester Institute of Technology's SHARE 2012 program (http://www.rit.edu/cos/share2012/). The average point density of the LiDAR dataset covering the study area is 35.480 pts/m², in the WGS_1984_UTM_Zone_18N spatial reference system; detailed information is listed in Table 1. In this study, we used only the spatial location information of the LiDAR point cloud, without additional information such as the return intensity. The approach presented in this paper is therefore applicable to most types of LiDAR data used in vegetation information extraction.
Figure 2. Scanning LiDAR data for the study area.
Table 1. Detailed LiDAR parameters for the study area.

Sensor:                          ALS60
Data capture date:               9/12/2012
Total number of points:          5,875,674
Minimum height (m):              162.140
Maximum height (m):              208.990
Median height (m):               179.264
Average point density (pts/m²):  35.480

3. Methods

Our new method is divided into two inter-connected steps: first, the dataset is filtered to acquire non-ground points in pre-processing; second, 3D fractal dimension analysis is applied to the acquired non-ground points to obtain the vegetation point set. The first step provides input data (i.e., non-ground points) for the second step.
To extract ground features, we first need to distinguish between ground points and non-ground points, which is achieved by LiDAR filtering [31]. Filtering excludes the ground points so that subsequent analysis operates only on the non-ground points, which significantly improves the accuracy and completeness of the extraction. Filtering the LiDAR point cloud is therefore very important for LiDAR vegetation extraction.
Ground points can be used to generate terrain models such as a DTM, whereas the non-ground points correspond to a variety of unknown feature types. Vegetation information extraction involves identifying the real ground-feature points that belong to different classes of vegetation and separating them from the other non-ground points. To extract vegetation completely and exactly, we combine 3D region segmentation and 3D fractal dimension analysis to evaluate the 3D spatial characteristics of the LiDAR point cloud. First, we use region segmentation to separate individual features and obtain feature clusters; then, we calculate the 3D fractal dimensions, analyze the 3D spatial morphology of the features, and assign classes to identify the points representing vegetation. Figure 3 shows the procedure of vegetation information extraction from the LiDAR point cloud.
Figure 3. Workflow of vegetation information extraction from the LiDAR point cloud.

3.1. LiDAR Data Preparation

Researchers have studied LiDAR filtering extensively. After ten years of development, numerous successful and efficient filter algorithms are available [31,32,33,34,35]; they can be broadly divided into mathematical morphological filtering, progressive densification filtering, surface-based methods, and segmentation-based methods [36]. These algorithms are based on the spatial structure of the LiDAR point cloud and apply different mathematical principles to distinguish ground from non-ground points. In our study, we applied morphological gradient filtering [36] to filter the LiDAR point cloud. The algorithm was tested using sample data provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) [37], and the results show that it can distinguish the majority of features and noise points: the filtering rate exceeds 90%, and complicated and low features are filtered well. Therefore, morphological gradient filtering was applied during preprocessing of the LiDAR point cloud to distinguish the ground points from the non-ground points, a prerequisite for region segmentation and 3D fractal dimension analysis.
The core idea of morphological gradient filtering is to identify height variation and use it as the basis for filtering; a mathematical morphological computation is then used to revise gradient-mutation points. If the height variation at a point exceeds the height variance threshold, that point is recognized as a non-ground point. Determining the proper threshold is therefore key to the filtering process, and the filtering parameters vary with terrain and land cover. The study area contains a variety of features, including forests and buildings, so a single parameter set cannot separate the ground and non-ground points everywhere. To solve this problem, we manually divided the study area into four sub-areas with similar terrain and feature characteristics, conducted the morphological filtering separately in each, and then combined the filtering outcomes to obtain the preprocessed LiDAR data (Figure 4).
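The full morphological gradient filter of [36] is not reproduced here; the following is only a minimal sketch of the underlying threshold idea: bin the points into a 2D grid, take the per-cell minimum height as a local ground estimate, and flag points rising above it by more than a threshold as non-ground. The cell size `cell` and threshold `dz_thresh` are illustrative parameters, not values from this study.

```python
import numpy as np

def simple_gradient_filter(points, cell=1.0, dz_thresh=0.5):
    """Classify points as ground/non-ground from local height variation.

    A highly simplified illustration of threshold-based morphological
    filtering: points are binned into a 2D grid, the minimum height per
    cell approximates the local ground surface, and points rising more
    than dz_thresh above it are flagged as non-ground.
    """
    xy = points[:, :2]
    z = points[:, 2]
    # Map each point to a grid cell index.
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    # Minimum height per cell approximates the bare-earth surface.
    ground_z = {}
    for k, h in zip(keys, z):
        if h < ground_z.get(k, np.inf):
            ground_z[k] = h
    dz = z - np.array([ground_z[k] for k in keys])
    is_ground = dz <= dz_thresh
    return points[is_ground], points[~is_ground]
```

In practice, as the text notes, a single parameter set rarely fits the whole scene, which is why the study area was divided into four sub-areas that were filtered separately.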
Figure 4. Sub-area division chart.

3.2. Three-Dimensional Fractal Dimension

Fractal geometry is a mathematical framework for studying irregular objects and chaotic motion [38], and it can be used to assess the complexity of objects. The theory has been widely applied in geography, image processing, signal analysis, etc. Traditional Euclidean geometry treats an object as a regular geometric figure whose spatial dimension is an integer (e.g., one, two, or three). Real-world objects, however, are usually irregular and complicated; their dimension is generally not an integer, and fractal theory can be applied to describe their complexity and irregularity.
A fractal dimension is a characteristic index measuring the fractal morphology of features [39]. It is an objective characteristic quantity marked by scale invariance, and an important index for describing irregularity and roughness. The spatial morphological structure of ground features constituted of discrete LiDAR points has its own class-related irregularity and roughness, such as the smooth planes of buildings or the irregular branches of trees, and this structure can be described by a fractal dimension. Common definitions of the fractal dimension include the Hausdorff dimension, the box-counting dimension, and the divider dimension. Because the box-counting dimension is determined by a cover of identically shaped sets, it is easier to calculate than the other dimensions and is often used in geographical research. Therefore, we use the box-counting dimension (Figure 5) to calculate the 3D fractal dimensions of LiDAR feature points [40]. To calculate the box-counting dimension, we cover the entire feature point cloud, without gaps, with cubes of side length r and count the number N of non-empty cubes; the dimension is defined as

D = lim (r→0) (log N / log(1/r))

where D is the box-counting dimension, calculated as the side length r of the cubes approaches 0.
Figure 5. Box-counting dimension calculating sketch map.
In the theoretical formula, the box-counting dimension is given by the limit as the side length approaches 0, which is impossible in practice. From the definition of the box-counting dimension, the relationship between r and N is

log(N) = −D log(r) + C

where C is a constant. Thus, for a series of different side lengths r, fitting the linear relationship between log(N) and log(r) by the least squares method gives the box-counting dimension D as the negative of the slope of the fitted line. It should be noted that, unlike continuous objects, the point cloud consists of discrete points: once the side length of the cube becomes smaller than the point spacing, the number of non-empty cubes is fixed and no longer changes as the side length decreases. We therefore set the smallest cube side length equal to the average point spacing of the LiDAR point cloud.
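To make the computation concrete, the following is a minimal sketch of this box-counting estimate for a 3D point cloud. The scale series, running from the average point spacing `r_min` (the smallest meaningful side length, as noted above) up to half the cloud extent, and the number of scales are illustrative assumptions.

```python
import numpy as np

def box_counting_dimension(points, r_min, n_scales=8):
    """Estimate the 3D box-counting dimension of a point cloud.

    For a series of cube side lengths r (from r_min, the average point
    spacing, up to half the extent of the cloud), count the non-empty
    cubes N(r) and fit log N = -D log r + C by least squares; the
    negative slope gives the box-counting dimension D.
    """
    pts = points - points.min(axis=0)       # shift cloud to the origin
    extent = pts.max()                      # largest bounding length
    rs = np.geomspace(r_min, extent / 2, n_scales)
    log_r, log_n = [], []
    for r in rs:
        # Voxelize at side length r and count distinct occupied cubes.
        idx = np.floor(pts / r).astype(int)
        n = len(np.unique(idx, axis=0))
        log_r.append(np.log(r))
        log_n.append(np.log(n))
    # Slope of the least-squares line log N vs. log r is -D.
    slope, _ = np.polyfit(log_r, log_n, 1)
    return -slope
```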

3.3. Region Segmentation and 3D Fractal Dimension Analysis

The filtering result, the non-ground points, is a mixed point set of all kinds of features. Classification cannot be judged from individual discrete points, so the dataset must be divided into single-feature sets before their attributes can be analyzed. Exploiting the fact that points belonging to the same feature are spatially aggregated, together with the 3D distribution characteristics of the discrete point cloud, we use the region-growing method to divide the point set [41]. The detailed procedure is as follows: (1) Based on a chosen side length, build a fast 3D grid over the whole non-ground point region, forming a hierarchical bounding box for the non-ground points. To avoid large bounding boxes degrading the accuracy of the division, the side length of the bounding box is set to twice the average point spacing of the LiDAR point cloud; (2) Select a non-empty bounding box as a seed and expand it in 3D space. If a neighboring bounding box is not empty, its feature points belong to the same single feature and that bounding box becomes part of the segment; (3) Set the new bounding boxes as seeds and repeat step (2) until no new points are added. To prevent the boundaries of adjacent features from becoming tangled and merged during expansion, an expansion threshold is set for the bounding boxes: if the number of points in a box is smaller than the threshold, the box is considered to lie on the edge of the feature and is not expanded (Figure 6).
Figure 6. Box-counting dimension calculation sketch map.
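The procedure above can be sketched as follows, under the stated conventions: voxel side equal to twice the average point spacing, 26-neighbor expansion, and an expansion threshold `min_pts` whose value here is chosen purely for illustration.

```python
import numpy as np
from collections import defaultdict, deque

def region_grow(points, spacing, min_pts=3):
    """Segment non-ground points into clusters by voxel region growing.

    Following the procedure above: build a 3D occupancy grid with cell
    side twice the average point spacing, then grow regions through
    face/edge/corner-adjacent non-empty cells. Cells holding fewer than
    min_pts points are treated as feature edges and are not expanded.
    """
    cell = 2.0 * spacing
    idx = np.floor((points - points.min(axis=0)) / cell).astype(int)
    grid = defaultdict(list)                 # voxel -> point indices
    for i, key in enumerate(map(tuple, idx)):
        grid[key].append(i)
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    unvisited, segments = set(grid), []
    while unvisited:
        seed = unvisited.pop()               # pick any unvisited voxel
        queue, member = deque([seed]), list(grid[seed])
        while queue:
            v = queue.popleft()
            if len(grid[v]) < min_pts:       # edge voxel: do not expand
                continue
            for o in offsets:
                nb = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
                if nb in unvisited:
                    unvisited.remove(nb)
                    member.extend(grid[nb])
                    queue.append(nb)
        segments.append(points[member])
    return segments
```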
After obtaining single-feature segments by region segmentation, the 3D fractal dimension of each segment can be calculated to describe its spatial shape and determine its class. To explore the different 3D fractal dimensions of features, region segmentation and 3D fractal dimension evaluations were conducted for trees, shrubs, buildings, power lines, and a variety of other features at different point densities. According to the variation of ground features' 3D fractal dimensions with point density, we divided the 3D fractal dimension distributions into high, medium, and low point density conditions (Table 2).
Table 2. Density range in the three different conditions.

High density:    ≥20 pts/m²
Medium density:  5–20 pts/m²
Low density:     ≤5 pts/m²
Figure 7 shows that the 3D fractal dimensions of different features each have their own distribution range, matching their specific spatial morphology. The intervals of features with obvious characteristics do not overlap, making identification possible. Under all three point cloud densities, the 3D fractal dimensions of trees are the largest, those of buildings are intermediate, and those of power lines are the smallest. This is consistent with the actual shapes of these features. Power lines consist of poles, wires, and other essentially one-dimensional elements, whose shapes are simple compared with other features. Houses consist of walls, roofs, and other two-dimensional surfaces. The shapes of trees, however, are complicated and their surfaces irregular; laser points falling on them are distributed not only on the surface but also within the canopy. These complex shapes explain the high 3D fractal dimension of trees. Shrubs are shorter than tall trees and also have irregular surfaces, but because of point density limitations or occlusion by other ground features, the number of discrete points falling on or within shrubs is often smaller than for trees. The 3D fractal dimension of shrubs therefore lies between that of trees and buildings, and may even coincide with that of trees with low 3D fractal dimensions, because some low trees have shapes similar to shrubs. Nevertheless, the difference between shrubs and trees remains significant overall. Under the different point cloud conditions, the 3D fractal dimensions of trees are distinctive and therefore easily identified. Hence, we use 3D fractal dimensions to extract information related to tall trees.
Figure 7. Statistical graphs of diverse features' 3D fractal dimension distributions: (a) high density; (b) medium density; (c) low density.
Meanwhile, the lower the point density, the lower the values spanned by the 3D fractal dimension distribution (Figure 8). For high, medium, and low densities, the 3D fractal dimensions are distributed between 1.47–1.92, 1.35–1.83, and 1.29–1.74, respectively; that is, the range of 3D fractal dimensions shifts toward lower values. Additionally, the smaller the point cloud density, the larger the difference in 3D fractal dimensions between tall trees and other types of ground features, and vice versa. However, under low-density point cloud conditions the distribution range of a ground feature's 3D fractal dimension may also expand, reducing the separation among the 3D fractal dimensions of different feature types. Under specific point cloud conditions, it is therefore necessary to apply the corresponding 3D fractal dimension interval to extract tall trees so that the result is complete and accurate.
Figure 8. Changes in the distribution range of the different features' 3D fractal dimensions under the three point cloud density conditions: (a) trees; (b) shrubs; (c) houses; (d) power lines.
According to the above analysis, we obtained the distribution and variation patterns of the ground features' fractal dimensions. By applying these patterns to analyze feature types, tall tree points can be distinguished from other point types, achieving the extraction of tall trees from the LiDAR point cloud. The extraction process presented in this paper is summarized as follows (a minimal end-to-end sketch follows the list):
(1) Preprocessing of the LiDAR point cloud. Filter the original LiDAR point cloud to obtain a complete set of non-ground points, which provides the basic data for the classification of the point cloud.
(2) Region growing segmentation of the LiDAR non-ground points. Use region growing segmentation to divide the discrete non-ground points into ground-feature sets consisting of single-feature point clusters, which provide the objects for the spatial morphological analysis of ground features.
(3) Spatial morphological analysis and tall tree extraction. Use 3D fractal dimensions to evaluate the spatial morphology of the ground features formed by the LiDAR points, and then distinguish between the different feature types according to their 3D fractal dimension distributions, allowing tall trees to be extracted.
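Tying the three steps together, a hypothetical end-to-end sketch using the functions from the previous sections might look as follows. The interval (1.68, 1.92) is the high-density tall-tree range applied in Section 4.1; the minimum segment size is an illustrative guard, not a parameter from the paper.

```python
def extract_tall_trees(points, spacing, fd_range=(1.68, 1.92), min_seg=50):
    """Steps (1)-(3): filter, segment, then keep segments whose 3D
    box-counting dimension falls in the tall-tree interval."""
    _, non_ground = simple_gradient_filter(points)       # step (1)
    segments = region_grow(non_ground, spacing)          # step (2)
    tall_trees = []
    for seg in segments:                                 # step (3)
        if len(seg) < min_seg:     # illustrative guard: skip tiny segments
            continue
        d = box_counting_dimension(seg, r_min=spacing)
        if fd_range[0] <= d <= fd_range[1]:
            tall_trees.append(seg)
    return tall_trees
```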

4. Results and Discussion

4.1. Vegetation Extraction Results

By using morphological gradient filtering to preprocess the test data, we obtained all the non-ground points (Figure 9), which provide the starting point for the spatial morphological analysis of ground features.
Figure 9. Non-ground points (a) and ground points (b) after filtering.
Then, the non-ground points were segmented using region growing, and a 3D fractal evaluation was performed to obtain the 3D fractal dimension distribution (Figure 10).
Owing to the high average point density of the LiDAR data in the study area, we used the high-density 3D fractal dimension interval for tall trees (1.68–1.92) to extract them. The result is shown in Figure 11.
Figure 10. 3D fractal dimension distribution in the study area.
Figure 11. Tall tree point set.

4.2. Accuracy Assessment

The results show that the tall trees were extracted well: the extraction is fairly complete, and few other ground features are mixed into the resulting point set. A further quantitative assessment of the extraction results was carried out by calculating the accuracy and completeness. The reference data are classified point sets obtained by human visual interpretation of the LiDAR point cloud and a high-resolution remote sensing image. The results were analyzed using the Error Matrix Method (EMM) [42]; the form of the EMM employed in this study is listed in Table 3.
Table 3. The form of the Error Matrix Method (EMM).

                                  Classification results
                                  Tall trees    Non-tall trees
Reference data   Tall trees           A               B
                 Non-tall trees       C               D
A is the number of correctly classified tall tree points, B is the number of tall tree points that were wrongly classified as other types, C is the number of non-tall tree points that were wrongly classified as tall trees, and D is the number of correctly classified non-tall tree points. According to the EMM, the completeness I, accuracy E, and comprehensive assessment index Kappa for the extraction of ground features can be calculated using
I = A / (A + B)
E = A / (A + C)
Kappa = (S × (A + D) − Δ) / (S² − Δ)
where S = A + B + C + D and Δ = (A + B) × (A + C) + (B + D) × (C + D).
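In code, the three indices follow directly from the matrix entries; a minimal sketch:

```python
def error_matrix_indices(a, b, c, d):
    """Completeness I, accuracy E, and Kappa from the error matrix,
    with A..D as defined above (a = correctly classified tall tree
    points, b = missed tall tree points, c = false tall tree points,
    d = correctly classified non-tall tree points)."""
    s = a + b + c + d
    completeness = a / (a + b)                       # I
    accuracy = a / (a + c)                           # E
    delta = (a + b) * (a + c) + (b + d) * (c + d)
    kappa = (s * (a + d) - delta) / (s ** 2 - delta)
    return completeness, accuracy, kappa
```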
According to the form of the EMM employed in this research, the counts for the extracted tall trees are listed in Table 4.
Table 4. The evaluation of the extraction of tall trees.

                                  Classification results
                                  Tall trees    Non-tall trees
Reference data   Tall trees       3,093,606        137,046
                 Non-tall trees     252,747        994,201

Completeness: 95.57%    Accuracy: 91.83%    Kappa: 0.8006

4.3. Error Analysis and Discussion

Table 4 shows that the accuracy and completeness of the extraction are high, both exceeding 90%, demonstrating the feasibility and effectiveness of our extraction method. However, because the morphology of a few short trees is similar to that of shrubs, some shrub points were mistakenly classified as tall trees (Figure 12a). Meanwhile, some tall trees located on the boundaries of the study area were misclassified as non-vegetation points because of the incompleteness of their morphology (Figure 12b). Generally, tall trees with a complete morphology in the study area were successfully separated from the other ground features using 3D fractal dimensions. In addition, the higher the density of the point cloud, the better the results, because the irregularity and roughness of the tall trees' morphology become more distinct.
Our new method has the following characteristics: (1) In contrast to some other methods, the requirements on the input data are simple, with no support needed from remote sensing or aerial imagery; moreover, the LiDAR data need not be full-waveform, so the method can be applied to most LiDAR data to extract the required features. (2) The evaluation criterion is the 3D morphological trait of the LiDAR point cloud itself. Instead of considering each discrete point or a derived image, we start from the overall morphological structure of the ground features, focusing on the description of the 3D morphology of tall trees in the point cloud, which yields comprehensive and accurate results. (3) In the computational process, there is no need to build a TIN or other raster grids to organize the point cloud, so the procedure is easy and quick to implement.
Figure 12. Error types for tall tree extraction: (a) Shrubs misrecognized as tall trees; (b) Missed tall trees.
This paper focuses on the feasibility of extracting tall tree points from LiDAR data using 3D fractal dimensions. The study area contains various ground features and morphologically intact terrain, and the approach achieved good results there, demonstrating its feasibility. However, because of the limited area and number of ground features in the study area, the use of 3D fractal dimensions to extract tall trees in more complex scenarios has not yet been tested; considerable additional work in other study areas with different features is needed to examine the wider applicability of this approach. In addition, further characteristics of the morphological structures of ground features may be discovered and combined with 3D fractal dimensions, potentially yielding better classification results. Further research on the segmentation of non-ground points, to improve the accuracy of the 3D fractal dimension evaluation, would also be informative. These considerations provide potential directions for future research.

5. Conclusions

This study developed a new approach to extract vegetation features from the LiDAR point cloud. The data were first preprocessed by filtering to obtain the non-ground points, which were then divided using region growing segmentation and evaluated to obtain 3D fractal dimensions; all features were finally differentiated to extract vegetation. According to the analysis of the 3D fractal dimensions of different features, each feature type has a unique distribution and variation trend in its 3D fractal dimensions. Tall trees, because of their complex shapes and rough surfaces, have larger 3D fractal dimensions than features with simpler shapes or smoother surfaces. Across the different point densities, the median 3D fractal dimension of tall trees is approximately 1.74. Additionally, the smaller the point cloud density, the larger the difference in 3D fractal dimensions between tall trees and other feature types. Based on these characteristics of the 3D fractal dimensions of tall trees, our method was tested, and the results showed an accuracy and a completeness of over 90%. The method takes advantage of the 3D spatial morphological features of tall trees to extract them from LiDAR point cloud data, and its effectiveness has been demonstrated.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (41501558) and the Program for New Century Excellent Talents in University (NCET-13-0280).

Author Contributions

Haiquan Yang and Jiechen Wang provided the original idea, conceived and designed the experiments; Wenlong Chen performed the experiments and analyzed the data; Dingtao Shen offered the technique and data support for this work; Haiquan Yang, Tianlu Qian and Wenlong Chen wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bienert, A.; Scheller, S.; Keane, E.; Mohan, F.; Nugent, C. Tree detection and diameter estimations by analysis of forest terrestrial laserscanner point clouds. In Proceedings of the ISPRS Workshop on Laser Scanning, Espoo, Finland, 12–14 September 2007; pp. 50–55.
  2. Elseberg, J.; Borrmann, D.; Nuchter, A. Full wave analysis in 3D laser scans for vegetation detection in urban environments. In Proceedings of the Information, Communication and Automation Technologies (ICAT), Sarajevo, Bosnia and Herzegovina, 27–29 October 2011; pp. 1–7.
  3. Hilker, T.; Coops, N.C.; Coggins, S.B.; Wulder, M.A.; Brown, M.; Black, T.A.; Nesic, Z.; Lessard, D. Detection of foliage conditions and disturbance from multi-angular high spectral resolution remote sensing. Remote Sens. Environ. 2009, 113, 421–434. [Google Scholar] [CrossRef]
  4. Sánchez-Azofeifa, G.A.; Castro, K.; Wright, S.J.; Gamon, J.; Kalacska, M.; Rivard, B.; Schnitzer, S.A.; Feng, J.L. Differences in leaf traits, leaf internal structure, and spectral reflectance between two communities of lianas and trees: Implications for remote sensing in tropical environments. Remote Sens. Environ. 2009, 113, 2076–2088. [Google Scholar] [CrossRef]
  5. Wilson, B.A.; Brocklehurst, P.S.; Clark, M.J.; Dickinson, K. Vegetation survey of the Northern Territory, Australia; Technical Report for Explanatory Notes and 1:1,000,000 Map Sheets; Conservation Commission of the Northern Territory Australia: Palmerston, Australia, 1990. [Google Scholar]
  6. Wood, E.M.; Pidgeon, A.M.; Radeloff, V.C.; Keuler, N.S. Image texture as a remotely sensed measure of vegetation structure. Remote Sens. Environ. 2012, 121, 516–526. [Google Scholar] [CrossRef]
  7. Zhang, C.; Xie, Z. Combining object-based texture measures with a neural network for vegetation mapping in the Everglades from hyperspectral imagery. Remote Sens. Environ. 2012, 124, 310–320. [Google Scholar] [CrossRef]
  8. Han, W.; Zhao, S.; Feng, X.; Chen, L. Extraction of multilayer vegetation coverage using airborne LiDAR discrete points with intensity information in urban areas: A case study in Nanjing, China. Int. J. Appl. Earth Obs. Geoinform. 2014, 30, 56–64. [Google Scholar] [CrossRef]
  9. Hyyppa, J. Feasibility for estimation of single tree characteristics using laser scanner. In Proceedings of Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 24–28 July 2000; pp. 981–983.
  10. Wagner, W.; Hollaus, M.; Briese, C.; Ducic, V. 3D vegetation mapping using small-footprint full-waveform airborne laser scanners. Int. J. Remote Sens. 2008, 29, 1433–1452. [Google Scholar] [CrossRef]
  11. Ackermann, F. Airborne laser scanning—Present status and future expectations. ISPRS J. Photogramm. Remote Sens. 1999, 54, 64–67. [Google Scholar] [CrossRef]
  12. Wehr, A.; Lohr, U. Airborne laser scanning—An introduction and overview. ISPRS J. Photogramm. Remote Sens. 1999, 54, 68–82. [Google Scholar] [CrossRef]
  13. Chen, C.; Li, Y.; Li, W.; Dai, H. A multiresolution hierarchical classification algorithm for filtering airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2013, 82, 1–9. [Google Scholar] [CrossRef]
  14. Yu, B.; Liu, H.; Zhang, L.; Wu, J. An object-based two-stage method for a detailed classification of urban landscape components by integrating airborne LiDAR and color infrared image data: A case study of downtown Houston. In Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20–22 May 2009; pp. 1–8.
  15. Sánchez-Lopera, J.; Lerma, J.L. Classification of LiDAR bare-earth points, buildings, vegetation, and small objects based on region growing and angular classifier. Int. J. Remote Sens. 2014, 35, 6955–6972. [Google Scholar] [CrossRef]
  16. Lafarge, F.; Mallet, C. Modeling Urban Landscapes from Point Clouds: A Generic Approach; Technical Report for Vision, Perception and Multimedia; HAL: Nice, France, 1–5 May 2011. [Google Scholar]
  17. Rutzinger, M.; Höfle, B.; Geist, T.; Stötter, J. Object based building detection based on airborne laser scanning data within GRASS GIS environment. In Proceedings of the UDMS 2006: Urban Data Management Symposium, Aalborg, Denmark, 15–17 May 2006; pp. 37–48.
  18. Rutzinger, M.; Höfle, B.; Hollaus, M.; Pfeifer, N. Object-based point cloud analysis of full-waveform airborne laser scanning data for urban vegetation classification. Sensors 2008, 8, 4505–4528. [Google Scholar] [CrossRef]
  19. Kim, H.B.; Sohn, G. Random forests-based multiple classifier system for power-line scene classification. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, XXXVIII-5/W12, 253–259. [Google Scholar] [CrossRef]
  20. Xu, G.S.; Vosselman, S.; Elberink, O. Multiple-entity based classification of airborne laser scanning data in urban areas. ISPRS J. Photogramm. Remote Sens. 2014, 88, 1–15. [Google Scholar] [CrossRef]
  21. Geerling, G.W.; Labrador Garcia, M.; Clevers, J.G.P.W.; Ragas, A.M.J.; Smits, A.J.M. Classification of floodplain vegetation by data fusion of spectral (CASI) and LiDAR data. Int. J. Remote Sens. 2007, 28, 4263–4284. [Google Scholar] [CrossRef]
  22. Secord, J.; Zakhor, A. Tree detection in urban regions using aerial LiDAR and image data. IEEE Geosci. Remote Sens. Lett. 2007, 4, 196–200. [Google Scholar] [CrossRef]
  23. Ramdani, F. Urban vegetation mapping from fused hyperspectral image and LiDAR data with application to monitor urban tree heights. J. Geo. Inf. St. 2013, 5, 404–408. [Google Scholar] [CrossRef]
  24. Reese, H.; Nordkvist, K.; Nyström, M.; Bohlin, J.; Olsson, H. Combining point clouds from image matching with SPOT 5 multispectral data for mountain vegetation classification. Int. J. Remote Sens. 2015, 36, 403–416. [Google Scholar] [CrossRef]
  25. Chen, X.; Vierling, L.; Rowell, E.; DeFelice, T. Using LiDAR and effective LAI data to evaluate IKONOS and Landsat 7 ETM+ vegetation cover estimates in a ponderosa pine forest. Remote Sens. Environ. 2004, 91, 14–26. [Google Scholar] [CrossRef]
  26. Heinzel, J.; Koch, B. Exploring full-waveform LiDAR parameters for tree species classification. Int. J. Appl. Earth Obs. Geoinform. 2011, 13, 152–160. [Google Scholar] [CrossRef]
  27. Antonarakis, A.S.; Richards, K.S.; Brasington, J. Object-based land cover classification using airborne LiDAR. Remote Sens. Environ. 2008, 112, 2988–2998. [Google Scholar] [CrossRef]
  28. Höfle, B.; Hollaus, M.; Hagenauer, J. Urban vegetation detection using radiometrically calibrated small-footprint full-waveform airborne LiDAR data. ISPRS J. Photogramm. Remote Sens. 2012, 67, 134–147. [Google Scholar] [CrossRef]
  29. Hug, C.; Ullrich, A.; Grimm, A. Litemapper-5600—A waveform-digitizing LiDAR terrain and vegetation mapping system. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 36, 24–29. [Google Scholar]
  30. Reitberger, J.; Krzystek, P.; Stilla, U. Analysis of full waveform LiDAR data for the classification of deciduous and coniferous trees. Int. J. Remote Sens. 2008, 29, 1407–1431. [Google Scholar] [CrossRef]
  31. Meng, X.; Currit, N.; Zhao, K. Ground filtering algorithms for airborne LiDAR data: A review of critical issues. Remote Sens. 2010, 2, 833–860. [Google Scholar] [CrossRef]
  32. Forlani, G. Adaptive filtering of aerial laser scanning data. Int. Archiv. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 130–135. [Google Scholar]
  33. Sohn, G.I.D. Terrain surface reconstruction by the use of tetrahedron model with the MDL criterion. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 336–344. [Google Scholar]
  34. Roggero, M. Object segmentation with region growing and principal component analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 289–294. [Google Scholar]
  35. Zhang, K.; Chen, S.C.; Whitman, D. A progressive morphological filter for removing nonground measurements from airborne LiDAR data. IEEE Geosci. Remote Sens. 2003, 41, 872–882. [Google Scholar] [CrossRef]
  36. Yong, L.; Wu, H. DEM extraction from LiDAR data by morphological gradient. In Proceedings of the IEEE Fifth International Joint Conference INC, IMS and IDC, Seoul, Korea, 10 December 2009; pp. 1301–1306.
  37. Sithole, G.; Vosselman, G. The Full Report: ISPRS Comparison of Existing Automatic Filters. Available online: http://www.itc.nl/isprswgIII-3/filtertest/ (accessed on 19 August 2015).
  38. Mandelbrot, B.B. Fractals: Form, Chance and Dimension, 1st ed.; W.H. Freeman and Company: San Francisco, CA, USA, 1977. [Google Scholar]
  39. Falconer, K.J. The Geometry of Fractal Sets; Cambridge University Press: Cambridge, UK, 1986. [Google Scholar]
  40. Bisoi, A.K.; Mishra, J. On calculation of fractal dimension of images. Pattern Recog. Lett. 2001, 22, 631–637. [Google Scholar] [CrossRef]
  41. Fan, J.; Yau, D.Y.; Elmagarmid, A.K.; Aref, W.G. Automatic image segmentation by integrating color-edge extraction and seeded region growing. IEEE Trans. Image Process. 2001, 10, 1454–1466. [Google Scholar] [PubMed]
  42. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
