Article

Absolute Accuracy Assessment of Lidar Point Cloud Using Amorphous Objects

1 KBR, Contractor to U.S. Geological Survey Earth Resources Observation and Science (EROS) Center, Sioux Falls, SD 57198, USA
2 U.S. Geological Survey, National Geospatial Program, Reston, VA 20192, USA
3 U.S. Geological Survey EROS Center, Sioux Falls, SD 57198, USA
4 USS, Contractor to U.S. Geological Survey EROS Center, Sioux Falls, SD 57198, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 4767; https://doi.org/10.3390/rs14194767
Submission received: 25 August 2022 / Revised: 21 September 2022 / Accepted: 22 September 2022 / Published: 23 September 2022
(This article belongs to the Special Issue Environmental Monitoring and Mapping Using 3D Elevation Program Data)

Abstract

The accuracy assessment of an airborne lidar point cloud typically estimates vertical accuracy by computing RMSEz (root mean square error of the z coordinate) from ground check points (GCPs). Due to the low point density of an airborne lidar point cloud, there is often not enough semantic context to find an accurate conjugate point. To advance the accuracy assessment to a full three-dimensional (3D) context, geometric features, such as a three-plane intersection point or a two-line intersection point, are often used. Although the point density is still low, geometric features are mathematically modeled from many points; they therefore provide a robust determination of the intersection point, which is then treated as a GCP. When no regular built objects are available, we describe the process of utilizing irregularly shaped features, called amorphous natural objects, such as a tree or a rock. When scanned to a high-density point cloud, an amorphous natural object can serve as ground truth reference data for estimating the 3D georeferencing errors of the airborne lidar point cloud. The algorithm to estimate 3D accuracy is an optimization that minimizes the sum of the distances from the airborne lidar points to the ground-scanned data. Search volume partitioning was the most important procedure for improving computational efficiency. We also performed an extensive study to address the external uncertainty associated with the amorphous object method. We describe an accuracy assessment using amorphous objects (108 trees) spread over the project area. The accuracy results for ∆x, ∆y, and ∆z obtained using the amorphous object method were 3.1 cm, 3.6 cm, and 1.7 cm RMSE, along with mean errors of 0.1 cm, 0.1 cm, and 4.5 cm, respectively, satisfying the accuracy requirement of the U.S. Geological Survey lidar base specification. This approach shows strong promise as an alternative to geometric feature methods when artificial targets are scarce. The relative convenience and advantages of using amorphous targets, along with the good performance shown here, make this amorphous object method a practical way to perform 3D accuracy assessment.

1. Introduction

As point densities of airborne lidar datasets increase, the need for full three-dimensional (3D) absolute accuracy assessments of the associated lidar point clouds is becoming more important. Casual users of lidar data may mistakenly believe that higher point density directly correlates with higher accuracy; however, lidar accuracy is actually a direct function of the error budget inherent in the system and its operation, which is independent of point density.
The metrological definition of the term "accuracy" is the mean difference of the data against the true value, whereas "precision" is the consistency of repeated measurements. What a conventional statement such as "lidar accuracy assessment" calls accuracy is therefore "precision" in the metrological sense. Lidar data standards [1,2] require the mean error compared to ground control points (GCPs) to be reported at a certain level or corrected before data release. Thus, assuming a statistically significant number of samples, the accuracy assessment in the practical sense measures precision, or uncertainty. Accordingly, accuracy and uncertainty are used interchangeably here, in consideration of both conventional usage and the exact meaning.

1.1. Relative Accuracy and Absolute Accuracy

Lidar data quality assessment of both horizontal and vertical accuracy is challenging; therefore, in practice, typically only absolute vertical accuracy is assessed, with the assumption that any deviation in horizontal accuracy will affect vertical accuracy. Horizontal accuracy can also be addressed using overlapping swaths, resulting in relative horizontal accuracy assessments.
By comparing overlapping swaths from the lidar sensor, the systematic errors inherent in the instrument are described [3]. In the inter-swath data analyses to estimate boresighting and data alignment, geometric features are usually utilized. A popular approach is to extract planar features [4,5,6,7] using combinations of various methods: manual selection, region-growing, random sample consensus (RANSAC) segmentation, or the iterative closest point (ICP) method. Linear features are also commonly extracted in the overlapping swaths, and they are used for relative accuracy analysis [8,9,10,11]. Although these methods address relative accuracy using inter-swath point cloud, they are based on the same technical foundation of the absolute accuracy assessment in a full 3D context.
Absolute accuracy assessment is possible when the ground truth surveys on targets are collected independently of the airborne data collection. Elevated point targets with their true positions known from the survey can be used in studying the point location uncertainty of the airborne point cloud [12,13,14]. Elevated and isolated point targets along with ground truth survey are often used along with geometric features to perform 3D absolute accuracy assessments [15].

1.2. Accuracy Assessment

A conventional method for the positional accuracy assessment of a lidar point cloud compares the true GCPs and their conjugate points from the lidar point cloud data. A point pair consists of a GCP coordinate (x0, y0, z0) and its conjugate lidar point (x, y, z). Comparing the two points creates a positional difference (∆x, ∆y, ∆z) vector. The overall accuracy of the lidar point cloud uses the difference vectors from all point pairs. For example, if there are 100 check points, then the uncertainty along the x-axis is the root mean square error (RMSE) of 100 ∆x's, and the uncertainty along the y-axis and z-axis are computed in the same manner. Although the calculated 3D uncertainty has three components (σ_x, σ_y, σ_z), we use σ by dropping the subscript for notational simplicity.
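The per-axis RMSE computation just described can be sketched in a few lines of Python (a minimal illustration; the coordinate values in the usage example are hypothetical):

```python
import math

def rmse_per_axis(gcps, conjugates):
    """Per-axis RMSE from GCP/conjugate-point pairs.

    gcps, conjugates: equal-length lists of (x, y, z) tuples.
    Returns (rmse_x, rmse_y, rmse_z).
    """
    n = len(gcps)
    sq = [0.0, 0.0, 0.0]
    for (x0, y0, z0), (x, y, z) in zip(gcps, conjugates):
        # Accumulate squared components of the difference vector (dx, dy, dz)
        for i, d in enumerate((x0 - x, y0 - y, z0 - z)):
            sq[i] += d * d
    return tuple(math.sqrt(s / n) for s in sq)

# Hypothetical example with two check points
gcps = [(100.0, 200.0, 50.0), (110.0, 210.0, 51.0)]
lidar = [(100.03, 200.04, 49.98), (110.03, 210.04, 51.02)]
print(rmse_per_axis(gcps, lidar))
```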
Although the uncertainty σ is obtained from the differences of the two values, it is customary to take the calculated RMSE σ as a proxy for the inherent uncertainty σ_I of the lidar point data itself. The uncertainty associated with the GCP measurement is mainly determined by the positional uncertainty of the Global Navigation Satellite System (GNSS), σ_G. In the error propagation equation, the uncertainty σ of the difference (∆x = x0 − x) is estimated by
σ = √(σ_I² + σ_G²).
However, we cannot assume that we know σ_G, and subsequently we should not estimate the pure inherent uncertainty of the data itself via σ_I = √(σ² − σ_G²), because σ is the only measurable value. The conventional approach of taking σ as a proxy for σ_I is backed by the conventional wisdom that the accuracy of the reference GCP data should be at least three times better than the data accuracy requirement. For example, the U.S. Geological Survey (USGS) lidar base specification [1] requires the accuracy of quality level 1 (QL1) data to be less than 10 cm. According to this convention, the uncertainty of the GCPs used as reference data to evaluate the airborne data accuracy must be better than 3.33 cm. Because the best practical GCP accuracy based on state-of-the-art GNSS technology is about 3 cm, typical GNSS survey data can be used for USGS QL1 accuracy assessment of a specific airborne lidar dataset.
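As a minimal numeric illustration of the propagation formula above (the 9.5 cm inherent uncertainty is a hypothetical value, not a result from this study):

```python
import math

def propagated_sigma(sigma_i, sigma_g):
    """Uncertainty of the difference (x0 - x) when the lidar data and the
    GCP survey carry independent uncertainties sigma_i and sigma_g."""
    return math.sqrt(sigma_i**2 + sigma_g**2)

# With a 3 cm GNSS survey, a hypothetical 9.5 cm inherent data
# uncertainty inflates the measured RMSE only slightly, which is why
# sigma is an acceptable proxy for sigma_I under the 3x rule.
print(propagated_sigma(0.095, 0.03))
```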
Regarding the conventional wisdom of "three times or better" for survey checkpoints: as lidar systems improve and routinely achieve excellent accuracy, they are approaching the accuracy limits of the survey tests. For example, to reflect the advancement of lidar technology, if we set a much stricter accuracy requirement (e.g., 6 cm), then the "three times or better" condition requires the accuracy of the reference data to be better than 2 cm. Because state-of-the-art GCP accuracy is only about 3 cm, the accuracy assessment cannot be properly attempted due to the limits of the reference data. The lidar community is discussing relaxing "three times or better" to "two times or better"; however, this has not yet become an approved standard practice.

1.3. External Uncertainty

All of the above statements, however, gloss over a very important practical question: what is the uncertainty associated with determining the point pair itself? It is important to compare like (a reference GCP) with like (a point chosen from the data point cloud as the GCP's conjugate point). However, in the real world, uncertainty is always associated with picking the conjugate point; we call it the external uncertainty σ_E, in contrast to the inherent uncertainty of the data itself. It is external because it originates solely from the specific method and has nothing to do with the inherent data uncertainty. Thus, the total uncertainty can be modeled [16] as
σ = √(σ_G² + σ_I² + σ_E²).
For a vertical accuracy assessment, such as non-vegetated vertical accuracy (NVA) or vegetated vertical accuracy (VVA), the horizontal error is ignored. In a vertical accuracy assessment, the method to determine a conjugate point corresponding to a specific ith GCP coordinate (xi, yi, zi) is to search the airborne lidar point cloud near the ground-measured horizontal coordinate (xi, yi) and to interpolate the z-value of the airborne point cloud so that the difference (z − zi) is obtained for use in the calculation of NVA or VVA. The ground-measured coordinate is simply used in searching the airborne data. In other words, this method of making a point pair for NVA or VVA assumes the external uncertainty is zero, not because there is no horizontal error, but because the low point density makes it difficult to estimate horizontal error. For example, airborne lidar data at 16 PPSM (points per square meter), considered relatively high density by current standards, has a 25-cm spot spacing where the laser beam footprint interacts with features in the scene. Even if geometric or intensity features are very strong, the uncertainty in picking a conjugate point is still large. Thus, the vertical accuracy assessment simply ignores horizontal accuracy and accepts the risk of unknown horizontal error affecting the vertical accuracy. In practice, the risk can be mitigated by setting up GCPs in low-slope, homogeneous areas.
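One simple way to realize this conjugate-point construction (an illustrative sketch using inverse-distance weighting; the study does not specify the production interpolator) is to weight the z-values of airborne points near the GCP's horizontal position:

```python
import math

def interpolated_dz(points, gcp, radius=1.0, power=2.0):
    """Vertical difference (z - zi) for one GCP, using inverse-distance
    weighting of airborne points near the GCP's horizontal position.

    points: iterable of (x, y, z) airborne lidar points.
    gcp: (xi, yi, zi) ground-surveyed check point.
    Returns z_interp - zi, or None if no point falls within `radius`.
    """
    xi, yi, zi = gcp
    wsum = zsum = 0.0
    for x, y, z in points:
        d = math.hypot(x - xi, y - yi)      # horizontal distance only
        if d > radius:
            continue
        if d < 1e-9:                        # exact horizontal hit
            return z - zi
        w = 1.0 / d**power
        wsum += w
        zsum += w * z
    return zsum / wsum - zi if wsum > 0.0 else None
```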

1.4. Accuracy Assessment Using GCPs from Geometric Features

Several known methods utilize 3D geometric features to estimate horizontal and vertical accuracy simultaneously. An apparent difference of this approach from the conventional GCP approach is that a GCP is not a directly measured point from a GNSS rover or a total station; instead, a GCP is estimated from a geometric feature in a point cloud collected by ground truth survey. For instance, a GCP can be estimated as the intersection point of two mathematically modeled lines from surveyed point cloud data. Similarly, three planes modeled from a relatively complex roof structure yield a unique GCP. A stationary scanning lidar, a total station with scanning capability, a mobile lidar scanner, a drone-based lidar, or any lidar system calibrated to GNSS-level accuracy can collect point clouds of geometric features, and the data are processed to extract GCPs from those features.
Although the point GCP approach has difficulty addressing the external uncertainty associated with picking a point due to low density, for the geometric feature-based approach the external uncertainty becomes a manageable concept. Although the point density is still low, the geometric feature is modeled using a large number of points, thus addressing the external uncertainty is possible. In the case of the three-plane approach, a general external uncertainty value can be derived using airborne lidar simulator data depending on multiple conditions, such as size of the object, total propagated uncertainty (TPU) of the system, and the point density of airborne data [16].

1.5. Accuracy Assessment Using Amorphous Objects

In contrast to these methods above, the suggested approach in this study uses amorphous objects. An amorphous object does not have any well-defined geometric features, thus GCP extraction from planar or linear fits is not possible. Instead, the amorphous object approach compares reference ground truth point cloud data and airborne point cloud data directly. The amorphous object approach estimates the difference vector between the two datasets via data alignment using optimization. Although initial impressions of the amorphous object method may not look very positive due to its brute-force nature, it yields quite stable results compared to other robust methods. Additionally, an extensive study of the external uncertainty for the amorphous object approach was performed and presented.

1.6. Properties of the Amorphous Object Method

Several properties of the proposed amorphous object method may be described as follows. Estimating the mean offset between two 3D point cloud datasets belongs to the generic ICP family, which performs iterative data association and alignment [17]. Data association estimates the correspondence between a point from the first point cloud and its conjugate point in the second point cloud. If the data association is known, singular value decomposition solves for the offset in one computation step [18]. However, in most cases the data association is not known. In particular, the proposed amorphous object method targets natural objects, for which data association is virtually impossible. The amorphous object method therefore does not assume any data association but iteratively finds closest point pairs. The method can be used not only for point clouds of solid surfaces but also for volumetric point clouds. The lidar point cloud of a tree, the most common amorphous natural object, is essentially a volumetric point set. Thus, the computation of the objective function for data alignment cannot use a point-to-plane approach, because fitting a local plane to volumetric data is not feasible, whereas it is feasible for the point cloud of a solid surface [19]. Instead, the amorphous object method finds closest point pairs using the straightforward point-to-point distance. Various approaches to improve data association, such as feature-based sampling, normal-space sampling, and weighting correspondences, require preprocessing of the data [20]. The amorphous method, however, needs no data preprocessing beyond simple data partitioning for computational efficiency, which places it among the ICP variants that use point subsets. Furthermore, the amorphous method can be used for mutually incomplete point sets.
For example, airborne data capture the whole point cloud of an object, whereas a ground-based lidar scanner may capture only part of it due to self-shadowing or access limitations. The amorphous object method handles these cases very well. Due to these properties, 3D accuracy assessment over a broad range of conditions is possible.

2. Materials and Methods

2.1. Airborne Lidar Campaign and Ground Truth Survey

The source data for this research were collected during a USGS topobathy lidar data collection campaign. Details of the instrumentation and parameters can be found in the published dataset [21]. The ground truth survey was performed during the airborne sensor overpasses on several days. A wide range of survey instruments were used to ensure the highest quality ground truth data for quality accuracy assessment. The equipment included a GNSS base station antenna/transmitter, total station, terrestrial lidar scanner (TLS) with calibration targets, GNSS rovers, and special purpose field targets deployed before the flight. TLS scanning was optimized at about 1000 PPSM or greater. Special focus was on the built structures with various geometric features and isolated amorphous natural objects.

2.2. Amorphous Object Method

The goal of this method was to estimate the mean positional difference between two lidar point clouds based on amorphous objects. Unlike the point GCP method or extracted GCP from geometric feature method, the amorphous object method used all the sampled point cloud data. The usual data pair consisted of airborne lidar data with a typical low-resolution and the reference data with very high resolution. Finding isolated amorphous objects when sampling is desirable for ease of interpretation. For example, a tree with its canopy well separated from its neighbors would be considered an isolated object. A group of trees with their canopies connected but isolated is not as ideal but can be used as well.
Once a local point cloud of an amorphous object is extracted from the reference point cloud, a matching point cloud for the same area is extracted from the airborne point cloud. The iterative optimization algorithm computes the total sum of closest-point distances until it reaches its minimum. If there are N points in the airborne point cloud, each point finds its closest point in the reference point cloud, and the objective function of the optimization problem is the sum of all N distances. The three parameters are ∆x, ∆y, and ∆z, representing the overall deviation between the reference point cloud and the airborne point cloud.
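A minimal sketch of this objective function, using brute-force nearest-neighbor search over the reference cloud (squared point-to-point distances, per the later description of the minimized error; O(N·M), fine for small extracted objects):

```python
def objective(airborne, reference, dx, dy, dz):
    """Sum of squared closest-point distances after shifting the
    airborne cloud by the candidate offset (dx, dy, dz).

    airborne, reference: lists of (x, y, z) tuples.
    """
    total = 0.0
    for ax, ay, az in airborne:
        # Apply the candidate shift, then find the nearest reference point
        px, py, pz = ax + dx, ay + dy, az + dz
        total += min((px - rx)**2 + (py - ry)**2 + (pz - rz)**2
                     for rx, ry, rz in reference)
    return total
```

Minimizing this function over (dx, dy, dz) yields the estimated deviation between the two clouds.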

2.3. Efficient Optimization Tactics

These parameters (∆x, ∆y, ∆z) are updated iteratively until they converge. The iteration algorithm used in this study is the Nelder-Mead simplex method [22]. The number of parameters (NP) defines the dimension of the parameter space, and a simplex is an NP-dimensional search geometry with (NP + 1) vertices. In our case, ∆x, ∆y, and ∆z define a 3D space, so the simplex is a tetrahedron (3 + 1 vertices). At each iterative step, the simplex is modified to improve its orientation toward the local minimum. The simplex modification operations are reflection, expansion, outward contraction, inward contraction, and collapse.
An inherent problem with any optimization approach is whether the solution is a global minimum or one of the local minima. Thus, the optimization is repeated from random initial estimates of ∆x, ∆y, and ∆z; 50 repetitions are used by default. The choice of 50 is somewhat arbitrary, but empirically it results in stable convergence. The conventional wisdom of 30 or more samplings from the central limit theorem may work equally well and save a little time; however, fewer than 30 repetitions are not recommended when searching for the global minimum. Figure 1 shows a brief flowchart of the method along with a sampled point cloud from an amorphous object (a tree) as an example. The very high-resolution reference TLS point cloud is colored (relative to z), and the low-resolution airborne point cloud is shown as larger white dots.
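The random-restart scheme can be sketched as follows. For brevity, a tiny axis-wise pattern search stands in for the Nelder-Mead simplex used in the study; both are derivative-free local minimizers, and the restart-and-average logic is the same:

```python
import random

def pattern_search(f, x0, step=0.05, shrink=0.5, tol=1e-4, max_iter=200):
    """Tiny axis-wise pattern search, used here as a stand-in for the
    Nelder-Mead simplex (both are derivative-free local minimizers)."""
    x = list(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                trial = list(x)
                trial[i] += s
                if f(trial) < f(x):
                    x, improved = trial, True
        if not improved:
            step *= shrink          # contract, like a shrinking simplex
            if step < tol:
                break
    return x

def repeated_minimize(f, n_repeats=50, spread=0.2, seed=42):
    """Restart the local search from random (dx, dy, dz) guesses and
    average the converged solutions, mirroring the 50-repeat scheme."""
    rng = random.Random(seed)
    sols = [pattern_search(f, [rng.uniform(-spread, spread) for _ in range(3)])
            for _ in range(n_repeats)]
    return [sum(c) / len(sols) for c in zip(*sols)]
```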
The upper plot in Figure 2 details an example of the optimization process (one of the 50 repetitions), showing how the three parameters (∆x, ∆y, ∆z) are adjusted at each iteration step and how they converge to a local minimum. The lower plot shows the associated objective function converging around the 40th iteration to a minimized error, which is the sum of squared distances from all 5000 airborne lidar points of the tree.

2.4. Optimal Search Volume

Because the calculation of the objective function is basically a brute-force method in its native form, setting up an optimal point cloud search volume is much faster than blindly searching through the entire point cloud. A small volume of the reference point cloud is assigned to each data point, and only that volume is searched. This search volume partitioning dramatically increases computational efficiency and is therefore an important procedure. Multi-thread computing or even a graphics processing unit (GPU) would improve run time further. However, the volume cannot be too small; it must leave room for the simplex variation to achieve convergence. Figure 3 illustrates how the horizontal differences (∆x, ∆y) from 50 repeated optimizations are distributed. The mean of all 50 results is the final estimated difference (∆x, ∆y) between the reference and airborne point clouds for a single amorphous object. Figure 3 also shows the effect of search volume on the optimization.
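The partitioning can be sketched as a coarse 3D grid hash, so that each nearest-neighbor query inspects only the cells around the query point; the cell size plays the role of the search volume and, as discussed above, is data dependent:

```python
from collections import defaultdict
from math import floor, inf

def build_grid(reference, cell=0.20):
    """Hash each reference point into a coarse 3D grid cell so that
    nearest-neighbor queries inspect only nearby cells."""
    grid = defaultdict(list)
    for p in reference:
        key = tuple(floor(c / cell) for c in p)
        grid[key].append(p)
    return grid

def nearest_sq_dist(grid, q, cell=0.20):
    """Squared distance from q to the closest reference point, searching
    only the 3x3x3 block of cells around q. Returns inf if the block is
    empty (i.e., the search volume was chosen too small)."""
    kx, ky, kz = (floor(c / cell) for c in q)
    best = inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for px, py, pz in grid.get((kx + dx, ky + dy, kz + dz), ()):
                    d = (q[0] - px)**2 + (q[1] - py)**2 + (q[2] - pz)**2
                    best = min(best, d)
    return best
```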
The distributions of the repeated solutions for different search volumes are shown in Table 1. First, the fact that the mean ∆x value starts at 3.3 cm and converges near 7 cm indicates that the 10-cm search volume is too small for the simplex to find optimal parameters. The improving precision with increasing search volume provides similar insight; however, the cost is increasing computation time. Increasing the volume from 20 cm to 25 cm adds no substantial benefit but takes much longer to compute. Thus, in this case, the optimal search volume is 20 cm. We reiterate that the optimal search volume depends on the specific parameters of the data. The reference-to-data ratio in Table 1 represents the ratio of points used in the optimization.

2.5. Algorithm Performance Comparison

Although the amorphous object method was developed mainly for amorphous natural objects, it is fundamentally a generic method for evaluating the mean 3D deviation between two point clouds. Thus, it can also be applied to a point cloud with a well-defined geometric feature. For example, we extracted an intersection point from three planes modeled on a high-resolution TLS reference point cloud to create a GCP (Figure 4); the large dots are the low-resolution airborne point cloud. There are two sets of three planes shown in different colors (one light pink, the other gray). Using the two conjugate points, the difference in ∆x, ∆y, and ∆z was estimated as −0.010 m, −0.067 m, and −0.019 m, respectively. The amorphous object method applied to the same two sets of point cloud data estimated −0.024 m (∆x), −0.062 m (∆y), and −0.021 m (∆z), as shown in Figure 5. The estimated difference using the amorphous object method is very similar to that of the geometric feature method, which is generally considered more robust.
Another example of applying the amorphous method to a geometric feature uses a hexagonal gazebo, shown in Figure 5. Due to its six-plane geometry, two sets of three-plane point clouds were used, although many more combinations are possible. Using the method described above, the difference in ∆x, ∆y, and ∆z was estimated as (−0.031 m, −0.009 m, −0.039 m) for the first three-plane set and (−0.025 m, −0.008 m, −0.028 m) for the second. Because the amorphous method uses all the sampled point cloud at once, it yields a single difference vector, (−0.034 m, −0.020 m, −0.037 m), as shown in the lower plots in Figure 5. The differences are within about 1 cm on all three axes, demonstrating that the amorphous object method is comparable to the robust geometric feature method.

2.6. Applicability to Two-Plane Object

A two-sided roof is the most common built object in the field. However, it is difficult to extract a GCP from one because two planes do not create an intersection point. One might think the two pitch lines form an intersection point to be used as a GCP; however, the uncertainty associated with the resulting GCP is too high to be of use. In other words, the external uncertainty associated with the pitch-line crossing method is so high that it invalidates a proper accuracy assessment.
In comparison, the amorphous method utilizes the entire point cloud, making it possible to estimate the 3D difference. The result for the two-sided-roof building shown in Figure 6 yields differences in ∆x, ∆y, and ∆z of −0.004 m, −0.010 m, and −0.042 m, respectively, comparable to the previous results.

3. Results of Accuracy Assessment

The full-scale application of the amorphous object method is to sample ground truth data scattered over a project area, especially in natural systems. In this study we used 108 amorphous objects (trees) for the accuracy assessment of the airborne point cloud. Figure 7 demonstrates how the raw TLS point cloud on the left is sampled for isolated tree objects. The 11 circular voids in the raw TLS point cloud represent the TLS scanning locations. In the right plot of Figure 7, the cleaned and isolated tree point cloud extraction shows single isolated trees, but some objects are the result of merged canopies from multiple trees.
Table 2 lists the positional differences between the reference point cloud and the airborne data point cloud obtained using the amorphous method, where each value is the mean of 50 repeated optimizations. Each site ID represents an extracted area of the TLS point cloud data, and the corresponding airborne lidar point cloud tile name is given within parentheses in Table 2.
Using all the differences from the 108 trees, the final accuracy results of the airborne point cloud are computed as shown in Table 3. The horizontal and vertical error distributions are shown in Figure 8. The final uncertainty of the data, around 3–4 cm horizontal and 2 cm vertical, is an excellent result compared to typical airborne lidar accuracy; note that the USGS QL1 requirement is 10 cm. While the mean vertical error of 4.5 cm is not negligibly small, the mean horizontal error is near zero, which is ideal.

4. External Uncertainty

We also derived a general external uncertainty model for the amorphous object method. The external uncertainty is solely dependent on the amorphous object method itself. Modeling external uncertainty should consider several dominant factors that affect the external uncertainty: size of the object, point density of the data, TPU of the lidar sensor, and point density of high-resolution reference system.

4.1. Airborne Lidar Simulator

The four dominant factors mentioned above need to be systematically controlled to investigate how they affect the external uncertainty. Preparing a large number of targets of varying sizes in combination with multiple point density settings and various sensor TPU specifications is not a feasible research design. Instead, we used an airborne lidar simulator created for a companion study [16,23].
A lidar waveform is computed using the laser properties (pulse width, beam divergence angle, and the pulse distribution function), the pulse distance determined by sensor altitude and scanner, environmental parameters (absorption and scattering coefficients of the atmosphere), and the 3D geometric definition of a target. A full waveform solution uses the radiative transfer theory of a laser pulse [24]. The radiative transfer theory is numerically solved for the laser beam irradiance distribution function at a given propagation distance. Two irradiance distribution functions are computed: irradiance due to the laser beam propagation and irradiance due to the receiver sensitivity propagation. The irradiance distribution function interacts with a target whose geometry is defined in the 3D coordinate system, and the waveform intensity at a given time is obtained by numerically solving a laser interaction governing equation [25].
The second major component of our airborne lidar simulator is the solution of a direct georeferencing equation. A scanner module (scanner type, scan frequency, and the field of view), a global navigation satellite system and strap-down inertial navigation system module for sensor position and orientation, the flight parameters (sensor altitude, flight speed), and the calibration parameters (boresighting and lever-arm) are the essential inputs required to solve a lidar direct georeferencing equation to estimate the 3D position of a laser-target interaction spot.
To model external uncertainty, we designed specific targets for the 3D uncertainty simulation. A tetrahedron/pyramid target was created, and a large array of these pyramid targets was simulated as a surface digital elevation model. As the simulator produces lidar point clouds with various realistic input parameters, the distribution of the point clouds shows spatially inhomogeneous patterns strongly influenced by roll, pitch, and heading, as well as by the scanner type. All of this was necessary to simulate realistic situations for the development of the general external uncertainty model, as real targets will be placed at random positions, which affects the external uncertainty.

4.2. Simulation Design

The discrete values of each factor that were combined as input parameters for the simulation are listed in Table 4. The point density of the high-resolution reference data could easily be much higher, but the maximum simulated value of 600 PPSM was set because further increases do not improve the performance of the amorphous method. Although TPU is 3D in nature, the chosen values are assumed to represent overall system performance.
Several selected cases are visualized in the upper plot in Figure 9 regarding size and data density. The TPU values in Figure 9 represent overall sensor performance. The reference white dotted line in the lower plot of Figure 9 represents one pyramid side plane projected to a one-dimensional (1D) line. Thus, the distribution of the points above the line is related to the overall uncertainty (TPU) of a sensor.
The total number of combinations equals 960, obtained by multiplying the 3, 4, 8, and 10 discrete input values described in Table 4. Thus, there are 960 external uncertainties for each of the three axes. To confirm that each uncertainty value was statistically significant and stable, many objects were simulated for each given size and point density, as illustrated in Figure 10. Figure 10 shows only a 5-by-5 pyramid object array created from the lidar point cloud simulator, at a scale where it is visible that the point cloud differs for each pyramid. Hundreds of pyramids were created, and each pyramid was reused for optimization multiple times by applying an additional random shift. Several thousand optimizations were therefore performed to compute each external uncertainty, and this procedure was repeated for all 960 cases.
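The 960-case design can be enumerated with a Cartesian product. The discrete levels below are placeholders, since Table 4 is not reproduced here; only the counts (3 × 4 × 8 × 10), the TPU extremes of 3.3 and 10.9 cm, and the 600 PPSM reference maximum are stated in the text, and which factor takes which count is likewise an assumption:

```python
from itertools import product

# Placeholder levels; Table 4 holds the actual discrete values.
sizes    = [2, 4, 8]                                       # object size (m), 3 levels
airborne = [2, 8, 16, 32]                                  # airborne PPSM, 4 levels
tpus     = [3.3, 4.0, 5.0, 6.0, 7.5, 8.5, 9.5, 10.9]       # sensor TPU (cm), 8 levels
ref_ppsm = [50, 100, 150, 200, 250, 300, 400, 450, 500, 600]  # reference PPSM, 10 levels

# One simulation case per combination of the four factors
cases = list(product(sizes, airborne, tpus, ref_ppsm))
assert len(cases) == 3 * 4 * 8 * 10 == 960
```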

4.3. External Uncertainty

Figure 11 shows selected examples of estimated external uncertainties on the x-axis (two upper plots) and z-axis (two lower plots). Uncertainties on the y-axis are very similar to those on the x-axis, as both constitute horizontal external uncertainty. In each plot, the top curve represents the worst sensor in terms of TPU (10.9 cm) and the bottom curve the best sensor (3.3 cm). The better sensor (lower curves, with smaller TPU) has a much smaller external uncertainty. Regardless of sensor quality (for any TPU value), increasing the point density of the high-resolution reference data improves the external uncertainty. The most efficient way to display the entire external uncertainty result is to combine all results in a single plot, as shown in Figure 12.
All external uncertainties on the x-axis are compiled in Figure 12, divided into three groups by object size. Unless the object is too small, the amorphous object method remains valid, because even a 2-m object combined with a high point density reduces the external uncertainty to a sufficiently small value. Within each size group, increasing point density improves the performance of the amorphous object method. The largest effect comes from increasing the point density of the high-resolution reference data, which has the steepest gradient (green line) in Figure 12. This is highly encouraging, because a high PPSM is easy to achieve with the reference scanner. The second largest effect comes from sensor quality (TPU): with all other factors fixed, a better sensor (smaller TPU) dramatically reduces the external uncertainty of the amorphous object method, as illustrated along the red line in Figure 12. Using airborne data with a higher point density reduces the external uncertainty only marginally. Although Figure 12 lists the external uncertainties in a simple 1D manner, producing an apparent zig-zag pattern, it provides easy-to-understand insight into the effect of each variable in the four-dimensional setup. Figure 13 compiles all 3D external uncertainties as colored dots, without connecting lines, to provide the complete picture; the colored dot plot was chosen because adding lines produced a chaotic appearance.

5. Conclusions

Full 3D accuracy assessment is still not common practice, at least in the mainstream large-scale data collection campaigns being conducted today. Although lidar data standards clearly state a horizontal accuracy requirement as well as a vertical one, surveying many artificial objects with well-defined geometric features is costly compared to the conventional GNSS rover point measurements typically used to assess accuracy. Artificial objects suitable for accuracy assessment are usually buildings or other structures. Surveying private buildings is not an option without permission, and even collecting TLS data over public buildings and structures can require permission. Surveying mostly amorphous objects such as trees and other natural features improves the chance of obtaining permission, even on private land. A disadvantage of any ground-based sensor is the difficulty of collecting a point cloud from the nadir-viewing perspective, especially on taller structures; however, scanning amorphous objects is much easier than scanning buildings, which require careful selection of scan positions to avoid self-shadowing. Most importantly, the amorphous object method can produce a small external uncertainty, yielding an accuracy assessment of comparable quality to the more robust geometric feature-based method but with fewer permission issues. In the Niobrara River survey described above, the horizontal accuracy of just over 3 cm is close to the best possible result given the inherent uncertainty of the GNSS system.
The purpose of the extensive simulation was to quantify the external uncertainty associated with the amorphous object method, as modeled in Equation (2). The essential argument of this research is that, whenever an accuracy assessment technique is proposed, its associated external uncertainty must be provided. Any proposed method should be practiced in a manner that keeps the external uncertainty small, because an excessively large external uncertainty would overwhelm the inherent accuracy of the data and thus invalidate the accuracy assessment itself. A practical application of the external uncertainty concept is as follows: a lidar data quality guideline can require that a specific accuracy assessment method be used only if its associated external uncertainty is under a maximum tolerance. For example, if the maximum allowed external uncertainty is set to 3 cm, a horizontal upper-limit line (the red line in Figure 13) can be drawn at 3 cm, and any ground truth collection and accuracy assessment must then find a combination of the relevant factors that satisfies the requirement. The easiest solution would be to increase the point density of the reference scanner data and use a better-quality airborne sensor (smaller TPU), as illustrated in Figure 12. Full 3D accuracy assessment is still challenging and rare; however, once it becomes routine practice, the amorphous object method could be one of the top choices due to its general applicability and robustness, as demonstrated in this research.

Author Contributions

Conceptualization, M.K. and J.S.; Methodology, M.K.; Validation, S.P., J.I. and J.D.; Formal analysis, M.K. and S.P.; Resources, J.D.; Data curation, J.D., J.I. and M.K.; Writing—original draft preparation, M.K.; Writing—review and editing, J.S. and J.I.; Visualization, S.P.; Supervision, J.D.; Project administration, J.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the U.S. Geological Survey National Geospatial Program (NGP) 3D Elevation Program (3DEP) under USGS Contract 140G0121D0001.

Data Availability Statement

Niobrara River Topobathymetric Lidar Validation Survey Data, 2022, dataset, https://doi.org/10.5066/P9PJ0Q8D.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Heidemann, H.K. Lidar Base Specification (Version 1.3, February 2018); U.S. Geological Survey Standards: Reston, VA, USA, 2018; p. 101.
  2. ASPRS. ASPRS positional accuracy standards for digital geospatial data. Photogramm. Eng. Remote Sens. 2015, 81, A1–A26.
  3. Habib, A.; Bang, K.I.; Kersting, A.P.; Lee, D.C. Error budget of LiDAR systems and quality control of the derived data. Photogramm. Eng. Remote Sens. 2009, 75, 1093–1108.
  4. Hebel, M.; Stilla, U. Simultaneous calibration of ALS systems and alignment of multiview LiDAR scans of urban areas. IEEE Trans. Geosci. Remote Sens. 2012, 50, 2364–2379.
  5. Skaloud, J.; Lichti, D. Rigorous approach to bore-sight self-calibration in airborne laser scanning. ISPRS J. Photogramm. Remote Sens. 2006, 61, 47–59.
  6. Tulldahl, M.; Bissmarck, F.; Larsson, H.; Grönwall, C.; Tolt, G. Accuracy evaluation of 3D lidar data from small UAV. In Proceedings of SPIE 9649, Electro-Optical Remote Sensing, Photonic Technologies, and Applications IX, Toulouse, France, 21–22 September 2015; p. 964903.
  7. Demir, N. Use of airborne laser scanning data and image-based three-dimensional (3-D) edges for automated planar roof reconstruction. Lasers Eng. 2015, 3, 173–205.
  8. Keyetieu, R.; Seube, N. Automatic data selection and boresight adjustment of LiDAR system. Remote Sens. 2019, 11, 1087.
  9. Huang, R.; Zheng, S.; Hu, K. Registration of aerial optical images with LiDAR data using the closest point principle and collinearity equations. Sensors 2018, 18, 1770.
  10. Rottensteiner, F.; Trinder, J.; Clode, S.; Kubik, K. Automated delineation of roof planes from lidar data. In Proceedings of the ISPRS Workshop "Laser Scanning 2005", Enschede, The Netherlands, 12–14 September 2005.
  11. Dal Poz, A.P.; Fernandes, V.J.M. Building roof boundary extraction from lidar and image data based on Markov random field. In Proceedings of the ISPRS Hannover Workshop: HRIGI 17–CMRT 17–ISA 17–EuroCOW 17, Hannover, Germany, 6–9 June 2017; pp. 339–344.
  12. Canavosio-Zuzelski, R.; Hogarty, J.; Rodarmel, C.; Lee, M.; Braun, A. Assessing LiDAR accuracy with hexagonal retro-reflective targets. Photogramm. Eng. Remote Sens. 2013, 79, 663–670.
  13. Barber, D.; Mills, J.; Smith-Voysey, S. Geometric validation of a ground-based mobile laser scanning system. ISPRS J. Photogramm. Remote Sens. 2008, 63, 128–141.
  14. Grejner-Brzezinska, D.A.; Toth, C.K.; Sun, H.; Wang, X.; Rizos, C. A robust solution to high-accuracy geolocation: Quadruple integration of GPS, IMU, pseudolite, and terrestrial laser scanning. IEEE Trans. Instrum. Meas. 2011, 60, 3694–3708.
  15. Kim, M.; Park, S.; Irwin, J.; McCormick, C.; Danielson, J.; Stensaas, G.; Sampath, A.; Bauer, M.; Burgess, M. Positional accuracy assessment of lidar point cloud from NAIP/3DEP pilot project. Remote Sens. 2020, 12, 1974.
  16. Kim, M.; Park, S.; Danielson, J.; Irwin, J.; Stensaas, G.; Stoker, J.; Nimetz, J. General external uncertainty models of three-plane intersection point for 3D absolute accuracy assessment of lidar point cloud. Remote Sens. 2019, 11, 2737.
  17. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. In Sensor Fusion IV: Control Paradigms and Data Structures; SPIE: Boston, MA, USA, 1992; Volume 1611, pp. 586–606.
  18. Hanson, R.J.; Norris, M.J. Analysis of measurements based on the singular value decomposition. SIAM J. Sci. Stat. Comput. 1981, 2, 363–373.
  19. Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. Image Vis. Comput. 1992, 10, 145–155.
  20. Pomerleau, F.; Colas, F.; Siegwart, R. Comparing ICP variants on real-world data sets. Auton. Robot. 2013, 34, 133–148.
  21. U.S. Geological Survey. Niobrara River Topobathymetric Lidar Validation Survey Data; USGS: Reston, VA, USA, 2022. https://doi.org/10.5066/P9PJ0Q8D.
  22. Nelder, J.A.; Mead, R. A simplex method for function minimization. Comput. J. 1965, 7, 308–313.
  23. Kim, M. Airborne waveform lidar simulator using the radiative transfer of a laser pulse. Appl. Sci. 2019, 9, 2452.
  24. Feygels, V.; Kopilevich, Y.; Kim, M.; LaRocque, P.; Pe'eri, S.; Philpot, W. Basic concepts and system design. In Airborne Laser Hydrography II (Blue Book II); Philpot, B., Ed.; Cornell University Library: Ithaca, NY, USA, 2019.
  25. Kim, M.; Kopilevich, Y.; Feygels, V.; Park, J.Y.; Wozencraft, J. Modeling of airborne bathymetric lidar waveforms. J. Coast. Res. 2016, 76, 18–30.
Figure 1. Brief flowchart of amorphous object method (left). Sampled airborne and TLS point clouds from an amorphous object (right).
Figure 2. Example results of Nelder-Mead optimization. Convergence of the three parameters (upper) and the associated total error as a sum of squared distances (lower).
Figure 3. Effect of search volume: 10, 15, 20, and 25 cm from left to right.
Figure 4. Reference TLS and airborne point cloud from a multi-plane building (left) and the distribution of repeated amorphous object-based solutions from a multi-plane building (right).
Figure 5. Geometric feature-based method, where the numbers represent plane identifiers (upper), and the two point cloud sets and the result using the amorphous method (lower).
Figure 6. Two-sided roof point cloud (left) and the difference optimization result (right).
Figure 7. Examples of sampling isolated amorphous objects from the point cloud data, where the numbers represent identifiers of amorphous objects.
Figure 8. Horizontal (left) and vertical (right) accuracy assessment results using the amorphous object method.
Figure 9. Selected simulation inputs: projected area of a pyramid (A), data density (P), and sensor performance in terms of TPU.
Figure 10. Examples of pyramid array created from the airborne lidar simulator.
Figure 11. Selected examples of simulated external uncertainty. (a) x-uncertainty, 2 m × 2 m, 2 PPSM; (b) z-uncertainty, 2 m × 2 m, 2 PPSM; (c) x-uncertainty, 4 m × 4 m, 16 PPSM; (d) z-uncertainty, 4 m × 4 m, 16 PPSM. The curves in each plot represent varying TPU, from the top curve (10 cm TPU) to the bottom curve (3 cm TPU).
Figure 12. External uncertainty model for amorphous object method on the x-axis.
Figure 13. External uncertainty model for the amorphous object method.
Table 1. Dependency of optimized solution on search volume.

Search Volume                10 cm     15 cm     20 cm     25 cm
Reference/Data points        204       581       1187      2006
Computation time [relative]  1.0       1.6       3.2       5.1
x [m]                        0.033     0.055     0.066     0.068
y [m]                        0.011     0.015     0.016     0.018
z [m]                        −0.029    −0.049    −0.050    −0.055
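The search-volume dependence in Table 1 comes from how many reference points fall inside the box around the object; a larger box pulls in more TLS points and inflates the computation time. The sketch below is a minimal illustration of this partitioning step, not the authors' code; `crop_to_search_volume` is a hypothetical helper, and the point cloud is synthetic.

```python
import numpy as np

def crop_to_search_volume(reference_pts, center, half_size):
    """Keep only reference points inside an axis-aligned box of
    +/- half_size [m] around center; the distance minimization then
    runs on this small subset instead of the full TLS cloud."""
    mask = np.all(np.abs(reference_pts - center) <= half_size, axis=1)
    return reference_pts[mask]

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(100_000, 3))        # synthetic stand-in for a TLS scan
near = crop_to_search_volume(cloud, np.zeros(3), 0.10)   # 10 cm search volume
far = crop_to_search_volume(cloud, np.zeros(3), 0.25)    # 25 cm search volume
assert len(near) < len(far) <= len(cloud)
```

Because the per-iteration cost of the optimization grows with the number of reference points queried, keeping the search volume as small as the data allows is what makes the method computationally practical.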
Table 2. Three-dimensional positional differences using the amorphous object method (TLS − airborne data).

Niobrara_Valley_Preserve_Building (14TMN1537, 14TMN1637)
ID    x (m)     y (m)     z (m)
1     0.064     0.016     −0.050
2     0.021     0.041     −0.056
3     −0.022    0.031     −0.041
4     0.024     0.054     −0.046
5     −0.005    0.013     −0.060
6     −0.006    0.016     −0.059
7     −0.029    0.004     −0.057
8     0.008     0.052     −0.048
9     0.042     0.043     −0.066
10    0.002     0.022     −0.061
11    0.014     0.007     −0.066
12    0.023     0.036     −0.073
13    0.027     0.037     −0.045
14    0.017     0.032     −0.049
15    0.004     0.029     −0.077
16    0.070     0.084     −0.034
17    0.061     −0.004    −0.058
18    −0.013    0.006     −0.044
19    −0.016    −0.005    −0.030
20    0.019     −0.035    −0.053
21    0.016     −0.038    −0.042
22    0.009     0.039     −0.039

Niobrara_Churches (14TNN7833)
ID    x (m)     y (m)     z (m)
1     −0.048    −0.048    −0.040
2     −0.036    −0.069    −0.017
3     −0.003    −0.022    −0.025
4     −0.015    0.011     −0.040
5     0.010     −0.026    −0.034
6     −0.003    −0.059    −0.034
7     0.100     0.009     −0.032
8     0.015     −0.017    −0.044
9     0.022     −0.010    −0.026
10    −0.023    −0.047    −0.034
11    −0.052    −0.071    −0.042
12    −0.056    −0.052    −0.082
13    0.000     −0.017    −0.024
14    0.018     −0.019    −0.052

Niobrara_School_Buildings (14TNN7933)
ID    x (m)     y (m)     z (m)
1     0.000     −0.005    −0.037
2     0.027     0.021     −0.080
3     −0.016    0.016     −0.043
4     −0.036    −0.019    −0.020
5     −0.033    −0.028    −0.030
6     −0.017    −0.018    −0.028
7     −0.037    −0.009    −0.044
8     −0.038    0.005     −0.038
9     −0.037    0.033     −0.052
10    −0.048    0.066     −0.043
11    −0.034    0.036     −0.013
12    −0.018    −0.008    −0.054
13    0.019     0.021     −0.043
14    −0.028    −0.033    −0.022
15    −0.043    −0.032    −0.038
16    −0.015    −0.058    −0.036
17    −0.036    −0.013    −0.037
18    −0.022    −0.017    −0.023
19    −0.020    0.003     −0.022
20    −0.008    −0.025    −0.032
21    −0.078    −0.006    −0.038
22    −0.048    −0.014    −0.023
23    −0.050    0.032     −0.027

Fort_Niobrara_NWR_Out_Building (14TLN7949, 14TLN7950)
ID    x (m)     y (m)     z (m)
1     0.014     −0.013    −0.074
2     0.022     −0.010    −0.065
3     0.009     −0.033    −0.066
4     0.026     −0.044    −0.072
5     0.009     −0.089    −0.067
6     0.025     0.024     −0.052
7     0.039     −0.014    −0.051
8     0.017     −0.025    −0.082
9     0.040     −0.040    −0.068
10    0.026     −0.045    −0.079
11    0.022     −0.073    −0.061
12    0.009     −0.077    −0.059
13    0.027     −0.037    −0.039
14    0.035     −0.001    −0.059
15    −0.018    −0.060    −0.055
16    −0.005    −0.081    −0.034
17    0.006     −0.041    −0.062

Fort_Niobrara_NWR_Visitor_Building (14TLN7949, 14TLN7950)
ID    x (m)     y (m)     z (m)
1     −0.002    0.021     −0.033
2     0.016     −0.022    −0.064
3     −0.018    −0.003    −0.066
4     −0.014    0.000     −0.071
5     0.006     0.010     −0.084
6     0.020     −0.019    −0.050
7     −0.033    −0.001    −0.038
8     −0.020    −0.014    −0.030

Niobrara_State_Park_Hexagon_Building (14TNN7735)
ID    x (m)     y (m)     z (m)
1     0.025     0.018     −0.027
2     0.023     0.027     −0.031
3     0.042     0.025     −0.025
4     0.028     0.086     −0.054
5     0.017     0.078     −0.062
6     0.024     0.009     −0.031
7     0.033     0.018     −0.038
8     0.047     −0.034    −0.022
9     0.011     0.052     −0.013
10    0.005     0.035     −0.012
11    0.008     0.061     −0.040

Niobrara_State_Park_Pond_Building (14TNN7735)
ID    x (m)     y (m)     z (m)
1     0.016     0.026     −0.046
2     −0.008    0.043     −0.037
3     −0.015    0.038     −0.030
4     −0.002    0.014     −0.056
5     0.023     −0.009    −0.061
6     −0.047    0.027     −0.021
7     0.024     −0.010    −0.048
8     −0.028    0.014     −0.032
9     −0.008    0.013     −0.036
10    −0.011    0.011     −0.033
11    0.029     0.009     −0.017
12    0.025     0.027     −0.029
13    0.017     −0.015    −0.047
Table 3. Final accuracy of airborne point cloud using the amorphous object method.

          x (m)     y (m)     z (m)
Mean      −0.001    −0.001    −0.045
RMSE      0.031     0.036     0.017
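The summary statistics in Table 3 are straightforward to reproduce from the per-object differences in Table 2 (pooling all 108 objects yields the reported values). The sketch below is an illustration only, using just the first five rows of the Niobrara_Valley_Preserve_Building block; `mean_and_rmse` is a hypothetical helper name.

```python
import math

# First five (x, y, z) differences [m] from the
# Niobrara_Valley_Preserve_Building block of Table 2.
diffs = [
    (0.064, 0.016, -0.050),
    (0.021, 0.041, -0.056),
    (-0.022, 0.031, -0.041),
    (0.024, 0.054, -0.046),
    (-0.005, 0.013, -0.060),
]

def mean_and_rmse(values):
    """Mean error and RMSE, the two per-axis statistics reported in Table 3."""
    n = len(values)
    mean = sum(values) / n
    rmse = math.sqrt(sum(v * v for v in values) / n)
    return mean, rmse

for axis, vals in zip("xyz", zip(*diffs)):
    m, r = mean_and_rmse(vals)
    print(f"{axis}: mean {m:+.3f} m, RMSE {r:.3f} m")
```

Note that RMSE is computed about zero rather than about the mean, so a systematic bias (such as the −0.045 m mean on the z-axis) contributes to the RMSE as well as to the mean error.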
Table 4. Discrete values used in the simulation.

Factors                     Discrete Values
Pyramid size [m]            2 × 2 × 2, 3 × 3 × 3, 4 × 4 × 4
Data density [PPSM]         2, 4, 9, 16
Reference density [PPSM]    50, 70, 100, 140, 200, 300, 430, 600
TPU [cm]                    3.3, 3.6, 4.0, 4.5, 5.2, 6.0, 7.0, 8.2, 9.4, 10.9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
