Article

3D Environment Mapping with a Variable Resolution NDT Method

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Machines 2022, 10(12), 1200; https://doi.org/10.3390/machines10121200
Submission received: 14 November 2022 / Revised: 4 December 2022 / Accepted: 6 December 2022 / Published: 10 December 2022
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

With the continuous development of 3D LiDAR (Light Detection And Ranging) mapping algorithms and their application in various fields, the size of the point cloud map becomes a bottleneck that prevents 3D LiDAR mapping algorithms from running for a long time. In this paper, a 3D LiDAR mapping method based on scan-to-map and variable resolution NDT (normal-distributions transform) registration is proposed. When updating the global map, variable resolution processing reduces the size of the global map and improves the accuracy of map construction. In addition, the size of the map created by the mapping algorithm is proportional to the size of the space and does not grow infinitely over time. Mapping experiments using a rotating LiDAR in room, corridor, and outdoor environments show that the algorithm achieves higher mapping accuracy and a smaller map size than the same algorithm without the variable resolution strategy. Long-duration mapping experiments in a fixed test area illustrate that the map built by the algorithm does not grow infinitely with time in a fixed space. In summary, by adaptively adjusting the map resolution according to the curvature of different areas during 3D LiDAR mapping, the proposed variable resolution strategy keeps the size of the global map almost proportional to the size of the space while also improving the mapping accuracy.

1. Introduction

3D environment map construction technology has a wide range of application prospects, such as precision agriculture [1], building reconstruction [2], search and rescue [3], tree surveying [4], terrain mapping [5], historical building appearance mapping [6], and so on. Therefore, it is of great significance to study the 3D environment map construction technology.
LiDAR (Light Detection And Ranging) and cameras are the main sensors used for 3D mapping. For example, a monocular camera is used for real-time UAV (Unmanned Aerial Vehicle) dense mapping [7], an RGB-D camera is used for high-accuracy indoor unbounded RGB-D dense point cloud 3D reconstruction [8], and a LiDAR is used for environment map construction in challenging agricultural scenarios [9]. A camera can only be used in a well-lit environment, while LiDAR is almost unaffected by lighting conditions. Compared with cameras, LiDAR also offers higher accuracy and a larger field of view, so we use 3D LiDAR to conduct research on 3D map construction.
3D environment maps are represented in many different ways, including point clouds [10], meshes [11], surfels [4], grids [12,13], polarized LiDAR maps [14], etc. The point cloud is an effective way to represent the real environment. With the rapid development of LiDAR, there is more and more research on LiDAR-based mapping algorithms, such as [15,16,17], all of which study 3D LiDAR mapping. Since the point cloud can represent the surrounding environment simply and intuitively, it is more convenient to use than other types of map representation, so we use the point cloud as our map representation.
When the point cloud is used as the map representation, the map becomes denser and denser over time, even in a bounded environment. Conventional methods to solve this problem include removing redundant points according to the distance between point clouds [18,19] or the time at which the point cloud was captured [20]. Some researchers also tackle this problem by detecting the overlap degree of sub-maps and carrying out map fusion [21,22,23]. Removing redundant points in the map according to the distance between point clouds is similar to uniform downsampling: it is convenient to implement, but it cannot distinguish objects with different features. Removing redundant points according to the time when the point cloud was added to the global map does not make use of the previous point cloud information, which is not suitable for constructing a point cloud map. Detecting the overlap degree of sub-maps requires maintaining the sub-maps, which is difficult to implement.
To solve the problem of significant redundancy caused by repeated scanning of the same space, we propose to process the map with a variable resolution strategy and adaptively adjust the point cloud density of different locations in the map according to their curvature. Thus, different locations in the map have reasonable point cloud density. This ensures that the map size varies with the size of the space rather than growing infinitely over time. In addition, in the process of variable resolution processing of the map, redundant points are removed according to the probability density value of each point in the global map, which is helpful in improving the accuracy of map construction.
The main contributions of this paper are as follows:
  • A novel strategy of variable resolution processing for point cloud maps is proposed, which can reduce the map size effectively without decreasing the mapping accuracy;
  • The size of the map generated with the proposed strategy increases proportionately to the environmental extent rather than the scanning time;
  • The experimental results in both structured and unstructured environments demonstrate the effectiveness of the proposed method.
The rest of the paper is organized as follows. Section 2 reviews some research related to this paper. Section 3 introduces the algorithm of the whole system. Section 4 introduces the experiment results and discussion. The experimental environment includes a room environment, a corridor environment, an outdoor environment, and a conference room environment. Section 5 summarizes the paper and puts forward suggestions for future modification.

2. State of the Art

This section introduces some 3D mapping systems and measures to reduce the redundant points in the map. The strategies for removing redundant points in the map include removing redundant points according to the distance between point clouds or the time when the point cloud was added to the global map, detecting the overlap degree of sub-maps and merging the sub-maps with higher overlap degree, point cloud compression, and multi-resolution maps, etc.
Removing redundant points according to the distance between point clouds is a common approach, and there are different ways to implement it. In paper [18], before a new point is added to the global map, a search is performed within a certain radius of the point, and the point is added only if no results are returned; if there are already points within the radius, only the point with the least uncertainty is kept. In paper [19], points whose distance is less than a certain threshold are directly removed. This way of removing redundant points is essentially similar to uniform downsampling of the point cloud, and its disadvantage is that different features cannot be distinguished.
The strategy of removing redundant points based on when the point cloud was added to the map is crude. In paper [20], a planar feature is stored in each voxel of the global map. When the uncertainty of the planar feature in a voxel converges, all the historical points are discarded, and the estimated planar parameters are retained. Once a new point appears, only the latest 10 points are kept, and the new plane features composed of these 10 points are calculated. This way of updating the map only uses the newly obtained information of a few point clouds and does not consider the information of the previous point clouds at the location. In this case, the global map is represented by plane elements, and the strategy of removing redundant points according to the time when the point cloud was added to the global map is feasible, but it may not be appropriate for the point cloud map.
Merging sub-maps is another effective way to reduce the size of the map. The map update strategy in paper [21] is to search all the sub-maps involved in a closed loop when the closed loop is detected, and then fuse these sub-maps together. Sub-map fusion can save memory by preventing new sub-maps from being created when the same space is revisited. This has the advantage of making the reconstruction complexity proportional to the amount of space explored rather than the duration of the exploration. However, this scheme only works when a closed loop is detected, and searching all the sub-maps after the closed loop is detected takes additional time. Paper [22] introduces a sub-map fusion strategy based on overlapping scan space on the basis of paper [21]. Sub-map overlap is calculated by comparing the volume reconstruction and occupancy information stored in each sub-map. If the overlap of two sub-maps exceeds a threshold, there is significant redundancy between the sub-maps, and map fusion is required. The calculation of sub-map overlap also requires traversing a large number of sub-maps, resulting in a large amount of computation. In paper [23], a method to calculate the map increment is proposed, and only the incremental part is added when the map is updated. Calculating the map increment is also a time-consuming operation.
There have also been some studies on reducing the size of the map by compressing the point cloud. In paper [24], a Recurrent Neural Network is used to compress the point cloud. In paper [25], a clustering method is used to compress the point cloud. In paper [26], a voxelized point cloud compression scheme is proposed. In paper [27], a point cloud coding scheme is proposed. Although these point cloud compression schemes can significantly reduce the size of point cloud maps, they are more suitable for point cloud storage and transmission than for online map construction, because point cloud maps need to be compressed and decompressed frequently when this method is used for online mapping.
In some studies, multi-resolution maps are used to remove redundant information in maps. In paper [28], a multi-resolution voxel map representation method is proposed to observe the occupancy probability of different voxels, and only the voxels with higher occupancy probability are retained. This method is suitable for occupancy voxel maps. In paper [29], different resolutions are set according to the distance between the point cloud and the sensor. This method is suitable for scenarios where the sensor is fixed, but it is not easy to implement when the sensor is moving.
The variable resolution strategy proposed in this paper can account for different types of features in the map and adaptively adopt different resolutions for different positions, which better expresses the information in the environment. In addition, the map update is carried out in the local area covered by the point cloud of the current LiDAR scan, which is more convenient to implement. These features can be very helpful for applications such as building reconstruction and search and rescue, where the proposed mapping algorithm can significantly reduce the map data size while preserving adequate environmental information.

3. Method

This section introduces the entire 3D LiDAR mapping system in detail. Section 3.1 describes the overall structure of the algorithm process, Section 3.2 describes the scan-to-map based NDT (normal-distributions transform) registration algorithm, and Section 3.3 describes the variable resolution update strategy of the global map.
For clarity, some notations used in this paper are presented as follows. The coordinate system $\{M\}$ is defined as the global coordinate system, and the point cloud in $\{M\}$ is denoted as ${}^{M}C$, where ${}^{M}C = \{{}^{M}p_0, {}^{M}p_1, {}^{M}p_2, \ldots\}$ and ${}^{M}p_i \in \mathbb{R}^3$ is a point in point cloud ${}^{M}C$. $\{L_k\}$ is the LiDAR frame at time $k$, and the point cloud in frame $\{L_k\}$ is denoted as ${}^{L_k}C$, where ${}^{L_k}C = \{{}^{L_k}p_0, {}^{L_k}p_1, {}^{L_k}p_2, \ldots\}$ and ${}^{L_k}p_i \in \mathbb{R}^3$ is a point in point cloud ${}^{L_k}C$. The 3D rigid transformation from frame $\{L_k\}$ to the global frame $\{M\}$ is denoted as ${}^{M}_{L_k}T \in SE(3)$, where $SE(3)$ is the special Euclidean group.

3.1. System Overview

The pipeline of the 3D LiDAR mapping system proposed in this paper is shown in Figure 1. The measurement error of LiDAR scanning is relatively large at far range, and points with a very small range are induced by the vehicle body itself. Therefore, a distance filter is applied to remove these disturbing measurements; the filtered scan is denoted ${}^{L_k}C$. Then, the constant turn rate and velocity (CTRV) model [30] is used to correct the motion distortion in ${}^{L_k}C$, which results in a new point cloud ${}^{L_k}C'$. Next, a reference point cloud ${}^{M}C_{ref}$ that overlaps with ${}^{L_k}C'$ is extracted from the global map ${}^{M}C$. A scan-to-map NDT algorithm registers ${}^{L_k}C'$ against ${}^{M}C_{ref}$, and the transformation ${}^{M}_{L_k}T$ from frame $\{L_k\}$ to the global frame $\{M\}$ is obtained. The registration result is then used to re-correct the scan, producing a corrected point cloud ${}^{L_k}C''$. Finally, ${}^{L_k}C''$ is merged into ${}^{M}C$ with the adaptive variable resolution strategy, which, as mentioned above, yields a compact and accurate environment model.
The global map is managed using an Octree [31] for efficient voxel queries, where the point cloud belonging to a voxel is stored in the corresponding cell. The voxel-related properties, such as the mean and covariance of the local point cloud, the curvature, and the number of points, are stored in a hash table [32], denoted as $H$, for fast information extraction.
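The per-voxel bookkeeping described above can be sketched as follows. This is a minimal Python illustration, not the paper's implementation: the dictionary-based table, the voxel edge length, and all names are assumptions chosen for clarity (a production system would use the Octree described above alongside the hash table).

```python
import numpy as np

VOXEL_EDGE = 0.5  # edge length L of a voxel (metres); illustrative value


def voxel_index(point, edge=VOXEL_EDGE):
    """Map a 3D point to the integer index of the voxel containing it."""
    return tuple(np.floor(np.asarray(point) / edge).astype(int))


def build_voxel_table(points, edge=VOXEL_EDGE):
    """Group points by voxel; store count, mean, and covariance per voxel in a dict H."""
    buckets = {}
    for p in points:
        buckets.setdefault(voxel_index(p, edge), []).append(p)
    H = {}
    for idx, pts in buckets.items():
        pts = np.asarray(pts, dtype=float)
        H[idx] = {
            "n": len(pts),
            "mean": pts.mean(axis=0),
            # rowvar=False: rows are observations; bias=True gives the ML estimate.
            # A single point has no spread, so fall back to a tiny diagonal matrix.
            "cov": np.cov(pts, rowvar=False, bias=True)
            if len(pts) > 1
            else np.eye(3) * 1e-6,
        }
    return H
```

Each registration step then only needs dictionary lookups by voxel index instead of per-point searches.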

3.2. Scan-to-Map NDT Registration

Compared with the scan-to-scan registration method, the scan-to-map registration method can largely avoid accumulated registration errors [33], so the scan-to-map method is used for point cloud registration in this paper.

3.2.1. Select the Global Reference Cloud

To reduce the computational burden, the scan-to-map registration in this paper does not register the current point cloud ${}^{L_k}C'$ directly with the global point cloud ${}^{M}C$; instead, a sub-map point cloud ${}^{M}C_{ref}$ that overlaps with ${}^{L_k}C'$ is selected from ${}^{M}C$.
${}^{L_k}C'$ is converted to the global frame $\{M\}$ with Equation (1) and denoted by ${}^{M}C'$:
$${}^{M}p' = {}^{M}_{L_k}R_{pre} \cdot {}^{L_k}p' + {}^{M}_{L_k}t_{pre} \tag{1}$$
where ${}^{L_k}p' \in {}^{L_k}C'$ and ${}^{M}p' \in {}^{M}C'$; ${}^{M}p'$ is the point ${}^{L_k}p'$ transformed into frame $\{M\}$. ${}^{M}_{L_k}R_{pre} \in \mathbb{R}^{3 \times 3}$ and ${}^{M}_{L_k}t_{pre} \in \mathbb{R}^{3}$ are the rotation matrix and translation vector of the predicted frame $\{L_k\}$ with respect to the global frame $\{M\}$; they are calculated from Equations (2) and (3), respectively, using the CTRV model:
$${}^{M}_{L_k}R_{pre} = {}^{M}_{L_{k-1}}R \cdot {}^{L_{k-2}}_{L_{k-1}}R \tag{2}$$
$${}^{M}_{L_k}t_{pre} = {}^{M}_{L_{k-1}}R \cdot {}^{L_{k-2}}_{L_{k-1}}t + {}^{M}_{L_{k-1}}t \tag{3}$$
where ${}^{M}_{L_{k-1}}R \in \mathbb{R}^{3 \times 3}$ and ${}^{M}_{L_{k-1}}t \in \mathbb{R}^{3}$ are the rotation matrix and translation vector of frame $\{L_{k-1}\}$ with respect to frame $\{M\}$, and ${}^{L_{k-2}}_{L_{k-1}}R \in \mathbb{R}^{3 \times 3}$ and ${}^{L_{k-2}}_{L_{k-1}}t \in \mathbb{R}^{3}$ are the rotation matrix and translation vector of frame $\{L_{k-1}\}$ with respect to frame $\{L_{k-2}\}$.
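The constant-velocity prediction of Equations (2) and (3) can be sketched as follows; this is a minimal illustration assuming rotations and translations as NumPy arrays, with the function name chosen here for clarity rather than taken from the paper.

```python
import numpy as np

def predict_pose(R_prev, t_prev, R_rel, t_rel):
    """CTRV-style prediction: assume the previous relative motion
    (from frame k-2 to k-1) repeats, giving an initial guess for frame k.

    R_prev, t_prev : global pose of frame k-1
    R_rel, t_rel   : relative motion between frames k-2 and k-1
    """
    R_pre = R_prev @ R_rel            # Eq. (2)
    t_pre = R_prev @ t_rel + t_prev   # Eq. (3)
    return R_pre, t_pre
```

For example, a robot that translated one metre along x between the last two frames is predicted to do so again.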
In order to guarantee sufficient overlap between ${}^{M}C'$ and ${}^{M}C_{ref}$ in the subsequent registration, the 26 voxels $V_{adj}$ adjacent to each occupied voxel $V_i$ in the global map point cloud ${}^{M}C$ are searched and added to the Octree of ${}^{M}C_{ref}$. Since the pose discrepancy of ${}^{M}C'$ is limited after applying the CTRV model, a single-layer expansion with 26 adjacent voxels is enough to ensure an overlap between ${}^{M}C'$ and ${}^{M}C_{ref}$. Only the mean and covariance matrix of the point clouds in the voxels are used in the NDT registration. Therefore, we only need to record the voxel indices in the Octree of ${}^{M}C_{ref}$; the corresponding voxel properties, including the mean and covariance matrix, can be quickly found in the global hash table $H$.
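The single-layer expansion with 26 adjacent voxels can be sketched as follows, assuming integer voxel indices as produced by a uniform grid; the helper name is hypothetical.

```python
from itertools import product

def adjacent_voxels(idx):
    """Return the 26 voxel indices adjacent to the voxel index `idx`
    (all offsets in {-1, 0, 1}^3 except the zero offset)."""
    return [
        tuple(i + d for i, d in zip(idx, delta))
        for delta in product((-1, 0, 1), repeat=3)
        if delta != (0, 0, 0)
    ]
```

Each returned index can then be looked up in the global hash table to fetch the voxel's mean and covariance.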

3.2.2. The NDT Registration

The NDT is a method of compactly representing a surface. The transform maps a point cloud to a smooth surface representation described as a set of local probability density functions (PDFs), each of which describes the shape of a section of the surface. NDT registration finds the ${}^{M}_{L_k}T$ that maximizes the likelihood that the points of the current scan lie on the reference scan surface [34].
The initial guess of ${}^{M}_{L_k}T$ is denoted as ${}^{M}_{L_k}T_{pre}$, which comprises ${}^{M}_{L_k}R_{pre}$ and ${}^{M}_{L_k}t_{pre}$. Denote the difference between the optimal solution of ${}^{M}_{L_k}T$ and ${}^{M}_{L_k}T_{pre}$ as $\Delta T \in SE(3)$. The likelihood function is defined as
$$\Psi = \prod_{i=1}^{n} \xi\big(h(\Delta T, {}^{M}p'_i)\big) \tag{4}$$
or, equivalently, in its negative logarithmic form:
$$-\log \Psi = -\sum_{i=1}^{n} \log\Big(\xi\big(h(\Delta T, {}^{M}p'_i)\big)\Big) \tag{5}$$
where ${}^{M}p'_i \in {}^{M}C'$. The definition of $h(\Delta T, {}^{M}p'_i)$ is given in Equation (6), and the definition of $\xi(p)$ is given in Equation (7):
$$h(\Delta T, {}^{M}p'_i) = \Delta T \cdot {}^{M}p'_i \tag{6}$$
$$\xi(p) = \frac{1}{(2\pi)^{3/2} \, |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(p - \mu_i)^{T} \Sigma_i^{-1} (p - \mu_i)\right) \tag{7}$$
where $\xi(p)$ is the probability density function corresponding to voxel $V_i$ of point cloud ${}^{M}C_{ref}$, $p \in \mathbb{R}^{3}$, $\mu_i \in \mathbb{R}^{3}$ is the mean of the points in voxel $V_i$, and $\Sigma_i \in \mathbb{R}^{3 \times 3}$ is the covariance matrix of the points in voxel $V_i$.
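Equation (7) can be sketched as follows; this is a minimal illustration assuming the voxel mean and covariance are available (for example, from the hash table $H$), not a full NDT implementation.

```python
import numpy as np

def voxel_pdf(p, mu, cov):
    """Evaluate the 3D Gaussian PDF of a voxel at point p (Eq. 7)."""
    diff = np.asarray(p, dtype=float) - np.asarray(mu, dtype=float)
    norm = (2.0 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(cov))
    # solve() applies the inverse covariance without forming it explicitly
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm
```

At the voxel mean with unit covariance, the value reduces to $(2\pi)^{-3/2}$.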
Points with small probability density significantly deteriorate the registration results. A mixture of a normal distribution and a uniform distribution can be used to approximate Equation (5) to eliminate the influence of outliers on the results.
$$f(\Delta T) = \sum_{i=1}^{n} g\big(h(\Delta T, {}^{M}p'_i)\big) \tag{8}$$
where the definition of $g(p)$ is given in Equation (9):
$$g(p) = -d_1 \cdot \exp\left(-\frac{d_2}{2}(p - \mu_i)^{T} \Sigma_i^{-1} (p - \mu_i)\right) \tag{9}$$
where $d_1$ and $d_2$ are positive constants. The reader is referred to the article [34] for details.
The Newton method is used to solve the following nonlinear optimization problem:
$$\Delta T = \operatorname*{arg\,min}_{\Delta T} f(\Delta T) \tag{10}$$
$$\Delta T = \begin{bmatrix} \Delta R & \Delta t \\ \mathbf{0}^{T} & 1 \end{bmatrix} \tag{11}$$
where $\Delta R \in \mathbb{R}^{3 \times 3}$ is the rotation matrix and $\Delta t \in \mathbb{R}^{3}$ is the translation vector.
Accordingly, the rotation ${}^{M}_{L_k}R$ and translation ${}^{M}_{L_k}t$ of the LiDAR frame can be updated, as shown in Equations (12) and (13):
$${}^{M}_{L_k}R = \Delta R \cdot {}^{M}_{L_k}R_{pre} \tag{12}$$
$${}^{M}_{L_k}t = \Delta R \cdot {}^{M}_{L_k}t_{pre} + \Delta t \tag{13}$$

3.3. Map Maintenance

With the pose of the current scan registered, the new point cloud needs to be merged into the global map, which is called global map updating. The first step is to refine the distortion correction of ${}^{L_k}C'$ with the optimized registration result. The rotation matrix ${}^{L_{k-1}}_{L_k}R$ and translation vector ${}^{L_{k-1}}_{L_k}t$ between frames $k$ and $k-1$ are calculated with Equations (14) and (15):
$${}^{L_{k-1}}_{L_k}R = {}^{M}_{L_{k-1}}R^{-1} \cdot {}^{M}_{L_k}R \tag{14}$$
$${}^{L_{k-1}}_{L_k}t = {}^{M}_{L_{k-1}}R^{-1} \cdot \big({}^{M}_{L_k}t - {}^{M}_{L_{k-1}}t\big) \tag{15}$$
The re-corrected scan ${}^{L_k}C''$ is converted to the global frame $\{M\}$ with Equation (16) and denoted as ${}^{M}C''$:
$${}^{M}p'' = {}^{M}_{L_k}R \cdot {}^{L_k}p'' + {}^{M}_{L_k}t \tag{16}$$
where ${}^{L_k}p'' \in {}^{L_k}C''$ and ${}^{M}p'' \in {}^{M}C''$.
The essence of our variable resolution method is to maintain a geometry-adaptive point density during map updating. Denote the set of changed voxels as $V$. First, the curvature $\alpha_i$ of the point cloud in each voxel $V_i \in V$ is calculated according to Equation (17):
$$\alpha_i = \frac{\lambda_{0,i}}{\lambda_{0,i} + \lambda_{1,i} + \lambda_{2,i}} \tag{17}$$
where $\lambda_{0,i}$, $\lambda_{1,i}$, and $\lambda_{2,i}$ are the eigenvalues of the covariance matrix of the point cloud in voxel $V_i$, satisfying $\lambda_{0,i} \le \lambda_{1,i} \le \lambda_{2,i}$.
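Equation (17) can be sketched as follows; a minimal illustration assuming the voxel covariance matrix is available. Planar voxels give curvature near zero, while isotropic scatter approaches $1/3$.

```python
import numpy as np

def voxel_curvature(cov):
    """Curvature of a voxel's point cloud (Eq. 17): smallest eigenvalue of the
    covariance matrix divided by the sum of all three eigenvalues."""
    lam = np.linalg.eigvalsh(cov)  # ascending order: lam[0] <= lam[1] <= lam[2]
    s = lam.sum()
    return lam[0] / s if s > 0 else 0.0
```

A perfectly flat patch (one zero eigenvalue) yields 0; an isotropic blob (equal eigenvalues) yields 1/3.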
Then, the density upper limit $\rho_{lim,i}$ of the point cloud in voxel $V_i$ is calculated according to Equation (18):
$$\rho_{lim,i} = f_{max\_density}(\alpha_i) = \begin{cases} \rho_{min}, & \alpha_i < \alpha_{min} \\ \eta \cdot \alpha_i, & \alpha_{min} \le \alpha_i \le \alpha_{max} \\ \rho_{max}, & \alpha_i > \alpha_{max} \end{cases} \tag{18}$$
where $\rho_{min}$ is the lower bound of $\rho_{lim,i}$, $\rho_{max}$ is the upper bound of $\rho_{lim,i}$, $\eta$ is a user-defined coefficient, $\alpha_{min}$ is the lower bound of $\alpha_i$, and $\alpha_{max}$ is the upper bound of $\alpha_i$. They can be tuned under the constraints $\alpha_{min} = \rho_{min}/\eta$ and $\alpha_{max} = \rho_{max}/\eta$. The higher the curvature of the point cloud in a voxel, the higher the upper limit of its point density. For a denser map, $\rho_{min}$, $\rho_{max}$, and $\eta$ can be set larger, and vice versa.
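Equation (18) can be sketched as follows; the numeric values of $\eta$, $\rho_{min}$, and $\rho_{max}$ below are placeholders for illustration, not the paper's tuning.

```python
def max_density(alpha, eta=1000.0, rho_min=50.0, rho_max=400.0):
    """Piecewise-linear density limit (Eq. 18): constant floor, linear ramp
    proportional to curvature, constant ceiling."""
    alpha_min = rho_min / eta  # curvature below this clamps to rho_min
    alpha_max = rho_max / eta  # curvature above this clamps to rho_max
    if alpha < alpha_min:
        return rho_min
    if alpha > alpha_max:
        return rho_max
    return eta * alpha
```

Multiplying the result by the voxel volume $L^3$ then gives the point count limit of Equation (19).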
Then, the upper limit $n_{max,i}$ of the number of points in voxel $V_i$ can be calculated, as shown in Equation (19):
$$n_{max,i} = \rho_{lim,i} \cdot L^{3} \tag{19}$$
where $L$ is the edge length of the voxel.
If the number of points $n_i$ after map merging is larger than $n_{max,i}$, redundant points exist in $V_i$ and need to be removed. First, the $\gamma \cdot n_{max,i}$ points with the largest probability density values are kept as maintained points; then, $(1-\gamma) \cdot n_{max,i}$ points are randomly selected from the remaining points and maintained as well; all other points are removed. Here, $\gamma$ is an adjustable coefficient, and the probability density value of each point in voxel $V_i$ is calculated with Equation (7). Since outliers have a certain impact on the NDT registration results, keeping the points with high probability density helps improve the accuracy of NDT registration and, in turn, the accuracy of map construction.
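The pruning rule above can be sketched as follows; this is a minimal illustration with assumed names, where the per-point densities would come from Equation (7).

```python
import numpy as np

def prune_voxel(points, densities, n_max, gamma=0.8, rng=None):
    """Keep gamma * n_max points with the highest probability density, then
    fill the remaining quota with a random sample of the rest; drop the rest."""
    points = np.asarray(points, dtype=float)
    if len(points) <= n_max:
        return points  # nothing to prune
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(densities)[::-1]      # indices sorted by density, highest first
    n_keep = int(gamma * n_max)
    kept = order[:n_keep]                    # the high-density survivors
    filler = rng.choice(order[n_keep:], size=n_max - n_keep, replace=False)
    return points[np.concatenate([kept, filler])]
```

With `gamma=0.8` and `n_max=5`, the four highest-density points always survive and one more is drawn at random.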
The map maintenance process described above is summarized in Algorithm 1. In addition, we need to update the mean $\mu_i$ and covariance matrix $\Sigma_i$ of the new point cloud in each voxel $V_i \in V$ in the global hash table $H$ for further registration usage.
Algorithm 1: Variable Resolution Strategy

4. Experiment Results and Discussion

We verified the variable resolution NDT method in three different environments (room environment, corridor environment, and outdoor environment) for performance evaluation. In addition, we carried out a long-time experiment to illustrate the advantages of our mapping algorithm in decreasing the map data size.

4.1. Equipment Introduction

In the experiment, a mobile robot equipped with a 16-line rotating LiDAR (as shown in Figure 2a) was used to conduct the environment mapping experiments. The LiDAR can rotate around the X-axis shown in Figure 2a with respect to the vehicle body to obtain a larger field of view for environmental mapping. Specific device parameters are shown in Table 1. During data capturing, the robot navigates along the given path while the LiDAR captures the point cloud continuously. The ground truth of the environment is captured by a high-accuracy laser scanner system that rotates around its Z-axis, as shown in Figure 2b, with a system accuracy of 0.003 m at a range of 5 m. Specific device parameters are shown in Table 2.

4.2. Mapping Accuracy Analysis

To verify the effect of the variable resolution strategy on mapping accuracy, we carried out experiments in the room, corridor, and outdoor environments, collected the environment ground truth, and compared the point-to-point map error against the same algorithm without the variable resolution strategy.

4.2.1. Room Environment

The room environment mapping experiment is carried out in the scene as shown in Figure 3. The scene is a laboratory with some experimental equipment and desks placed against the wall.
The environment ground truth is shown in Figure 4(a-1,a-2), the mapping result without the variable resolution strategy is shown in Figure 4(b-1,b-2), and the mapping result with the variable resolution strategy is shown in Figure 4(c-1,c-2). The full map is on the left and the interior details are on the right. Because the LiDAR can scan through the glass and capture the outdoor scene, the map contains not only the indoor scene but also part of the outdoor railing. There is no point cloud on the ground in the upper right part of Figure 4(b-2,c-2), because the mobile robot moved in the lower left part of the scene when collecting the point cloud data. It can be seen from Figure 4(c-1,c-2) that the density of the point cloud differs across the map: it is low in areas with small curvature (such as the ceiling, walls, and floor) and high in areas with large curvature (such as the experimental equipment). This effectively reduces the number of redundant points in the map.
The CloudCompare software is used to align the point clouds shown in Figure 4(b-1,c-1) with the ground truth shown in Figure 4(a-1), and then we randomly select some areas, shown in Figure 5, to calculate the mean absolute error (MAE). The results are shown in Table 3. We selected a total of 8 areas to calculate the mapping error, among which areas 1–3 are the ceiling, areas 5 and 8 are the walls, area 4 is a piece of experimental equipment, and areas 6 and 7 are the desks. The mapping errors in these areas are smaller when the variable resolution strategy is used than when it is not. The accuracy of area 2 is improved most markedly, with the error reduced by 14.58%.

4.2.2. Corridor Environment

The corridor environment mapping experiment is carried out in the scene as shown in Figure 6. The scene is a U-shaped corridor with walls on one side and railings on the other.
The environment ground truth is shown in Figure 7(a-1,a-2), the mapping result without the variable resolution strategy is shown in Figure 7(b-1,b-2), and the mapping result with the variable resolution strategy is shown in Figure 7(c-1,c-2). It can be seen that with the variable resolution strategy, the density of the point cloud is low in areas with low curvature (such as the ceiling and floor) and high in areas with high curvature (such as the railings).
We selected a total of 16 areas (as shown in Figure 8) to calculate the mapping error (as shown in Table 4), among which areas 1–5 are the ceiling, areas 6 and 7 are the floor, areas 8–13 are the railings, and areas 14–16 are the walls. With the variable resolution strategy, the mapping errors in these regions are smaller than without it. The accuracy of area 14 is improved most markedly, with the error reduced by 56.65%.

4.2.3. Outdoor Environment

The outdoor environment mapping experiment is carried out in the scene shown in Figure 9. This scene is a section of road on the campus. Several cars are parked on the road; there are many tall trees on the right side, a flower bed on the left, and some short trees in the flower bed.
The environment ground truth is shown in Figure 10(a-1,a-2), the mapping result without the variable resolution strategy is shown in Figure 10(b-1,b-2), and the mapping result with the variable resolution strategy is shown in Figure 10(c-1,c-2). The ground section in Figure 10(a-1,a-2) has several evenly spaced circular gaps. The reason is that the portable laser scanner cannot scan the area directly beneath the device while collecting the ground truth, and adjacent scanning positions are far apart, so they cannot cover each other's gaps. If adjacent scans had been captured at small intervals, as in the indoor scenes, the whole process would have been time-consuming and inefficient. Figure 10(c-2) shows the effect of the variable resolution strategy: in the map, parts with small curvature (such as the ground) have a small point cloud density, while parts with large curvature (such as trees and cars) have a large point cloud density.
We selected 30 areas (as shown in Figure 11) to calculate the mapping error (as shown in Table 5), among which areas 1–5 are the ground, areas 6–16 are the flower bed, areas 17–19 are the cars, areas 20–23 are the tall trees, and areas 24–30 are the short trees. The mapping errors of 28 regions are smaller with the variable resolution strategy than without it. The accuracy of area 4 is improved most markedly, with the error reduced by 47.50%. Across the three scenes, the most apparent error improvement occurs in planar regions, that is, regions with small curvature: in the variable resolution map, more outlier points are removed from such areas, so they are more likely to show a significant error improvement.

4.3. Map Size Comparison

For the three experiments in Section 4.2, the comparison of map sizes is shown in Table 6. For the room environment, the size of the global map with the variable resolution strategy is reduced to 61.02% of that without it. For the corridor environment, the map size is reduced to 62.67%, and for the outdoor environment, to 79.86%. Because there are relatively few areas with small curvature in the outdoor environment, the proportion of map size reduction is smaller than in the room and corridor environments.

4.4. Long-Time Running Experiment

To verify that the proposed variable resolution map update strategy allows long-time mapping in the same space without the map growing infinitely over time, we conducted long-duration repeated mapping in the conference room scene shown in Figure 12 so as to cover most details in the scene. The scene has several tables and chairs in the middle of the room and some chairs along the sides. The roof of the conference room is a strip-shaped, multi-level, non-planar structure.
The environment ground truth is shown in Figure 13(a-1,a-2), the mapping result without the variable resolution strategy is shown in Figure 13(b-1,b-2), and the mapping result with the variable resolution strategy is shown in Figure 13(c-1,c-2). Since there are many flat parts in this environment (for example, the ground and walls), the point cloud density is small over most of the map after the variable resolution strategy is applied, as shown in Figure 13(c-1,c-2). Since the ceiling in this environment is not a planar structure, its curvature is relatively high, and the point cloud density there remains rather large under the variable resolution strategy, as shown in Figure 13(c-1).
We selected 18 areas (as shown in Figure 14) to calculate the mapping error (as shown in Table 7), among which areas 1–6 are the ceiling, areas 7–9 are the floor, areas 10 and 11 are the walls, areas 12–15 are the chairs, and areas 16–18 are the tables. The mapping errors of 17 regions are smaller when the variable resolution strategy is considered compared with when the strategy is not considered.
The sizes of the maps obtained by the two algorithms are 17,677,016 (without the variable resolution strategy) and 6,438,981 (with the variable resolution strategy), respectively. The size of the global map with the variable resolution strategy is reduced to 36.43% of that without it. The data reduction is expected to be even more significant if the experiment time is further extended.
The comparison of the size of the global map over time during the mapping process of the two algorithms is shown in Figure 15. When the variable resolution strategy is used during map construction, the size of the global map almost stops growing once the map covers the experimental space, rather than increasing indefinitely with time, as shown by the blue curve in Figure 15. Without the variable resolution strategy, the size of the global map increases indefinitely over time, as shown by the red curve in Figure 15. This property makes the proposed variable resolution strategy suitable for long-time mapping, because the map size grows with space rather than time.
Figure 16 shows the mapping results at three time points on the blue curve in Figure 15. Figure 16a shows the result at 60 s, when most of the scene had already been constructed. Figure 16b shows the result at 90 s, when almost all of the scene had been constructed. Figure 16c shows the result at the final moment, by which the detailed information in the scene had been further refined. After 90 s, the map covered the entire space of the scene, so its size remained almost constant.

5. Conclusions

To solve the problem that, during 3D LiDAR mapping, the global map grows infinitely with time even in a fixed space, this paper proposes a 3D LiDAR mapping algorithm based on scan-to-map NDT registration that adopts a variable resolution strategy when updating the global map. The density of the point cloud in the global map adapts to the local curvature at each position. With this strategy, the map size is reduced while the mapping accuracy is preserved, so the global map scales with space instead of growing infinitely with time.
Experiments in a room environment, corridor environment, outdoor environment, and conference room environment verify the effectiveness of the LiDAR mapping algorithm.
In the future, we plan to study LiDAR mapping in large-scale dynamic environments.

Author Contributions

Conceptualization, Y.F. and Y.X.; methodology, Y.F. and Y.X.; software, Y.F., Z.G. and J.Z.; validation, Y.F., Z.G. and J.Z.; formal analysis, Y.F., Z.G. and J.Z.; investigation, Y.F. and Z.G.; resources, Y.F. and Z.G.; writing—original draft preparation, Y.F.; writing—review and editing, Y.X.; visualization, Y.F.; supervision, Y.X.; project administration, H.S. and Y.X.; funding acquisition, H.S. and Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62173220, and the Natural Science Foundation of Shanghai, grant number 20ZR1419100.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. The pipeline of the 3D LiDAR mapping system.
Figure 2. The equipment used in the experiment. (a) shows the mobile robot equipped with a 16-line rotating LiDAR. (b) shows the portable laser scanner used in the experiment to obtain the environmental truth value.
Figure 3. The scene of the room environment.
Figure 4. The result of the room environment mapping experiment (colored by depth). (a-1,a-2) show the ground truth. (b-1,b-2) show the mapping result without variable resolution strategy. (c-1,c-2) show the mapping result with variable resolution strategy. The full maps are on the left, and the interior details are on the right.
Figure 5. The selected areas of the room environment used to calculate the mapping error. Several kinds of regions are selected and colored separately, with each type of region colored in the same color. Areas 1–3 are the ceiling, areas 5 and 8 are the walls, area 4 is a piece of experimental equipment, and areas 6 and 7 are the desks.
Figure 6. The scene of the corridor environment.
Figure 7. The result of the corridor environment mapping experiment (colored by depth). (a-1,a-2) show the ground truth. (b-1,b-2) show the mapping result without variable resolution strategy. (c-1,c-2) show the mapping result with variable resolution strategy. The full maps are on the left, and the interior details are on the right.
Figure 8. The selected areas of the corridor environment used to calculate the mapping error. Several kinds of regions are selected and colored separately, with each type of region colored in the same color. Areas 1–5 are the ceiling, areas 6 and 7 are the floor, areas 8–13 are the railings, and areas 14–16 are the walls.
Figure 9. The scene of the outdoor environment.
Figure 10. The result of the outdoor environment mapping experiment (colored by depth). (a-1,a-2) show the ground truth. (b-1,b-2) show the mapping result without variable resolution strategy. (c-1,c-2) show the mapping result with variable resolution strategy. The full maps are on the left, and the interior details are on the right.
Figure 11. The selected areas of the outdoor environment used to calculate the mapping error. Several kinds of regions are selected and colored separately, with each type of region colored in the same color. Areas 1–5 are the ground, areas 6–16 are the flower bed, areas 17–19 are the cars, areas 20–23 are the tall trees, and areas 24–30 are the short trees.
Figure 12. The scene of a conference room.
Figure 13. The result of the conference room environment mapping experiment (colored by depth). (a-1,a-2) show the ground truth. (b-1,b-2) show the mapping result without variable resolution strategy. (c-1,c-2) show the mapping result with variable resolution strategy. The full map is on the left and the interior details are on the right.
Figure 14. The selected areas of the conference room environment used to calculate the mapping error. Several kinds of regions are selected and colored separately, with each type of region colored in the same color. Areas 1–6 are the ceiling, areas 7–9 are the floor, areas 10 and 11 are the walls, areas 12–15 are the chairs, and areas 16–18 are the tables.
Figure 15. The comparison of map size changes over time under two algorithms.
Figure 16. The result of the conference room environment mapping experiment with variable resolution strategy in three time points (colored by depth). (a) shows the result at 60 s. (b) shows the result at 90 s. (c) shows the result at the last moment.
Table 1. The parameters of the spin LiDAR equipped on the mobile robot.

| Sensor Type | Line Number | FOV | Scan Frequency | Frequency around X Axis | System Accuracy |
| --- | --- | --- | --- | --- | --- |
| VLP-16 | 16 | 270° | 10 Hz | 0.2 Hz | 0.006 m * |

* System accuracy in the range of 5 m.
Table 2. The parameters of the portable laser scanner.

| Sensor Type | Line Number | FOV | Scan Frequency | Rotate Range around Z Axis | System Accuracy |
| --- | --- | --- | --- | --- | --- |
| Hokuyo UST-30LX | 1 | 270° | 40 Hz | 180° | 0.003 m * |

* System accuracy in the range of 5 m.
Table 3. The mapping error (MAE/m) comparison of the room environment.

| Area Index | Without Variable Resolution | With Variable Resolution | Area Index | Without Variable Resolution | With Variable Resolution |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.0039 | 0.0031 | 5 | 0.0055 | 0.0052 |
| 2 | 0.0048 | 0.0041 | 6 | 0.0115 | 0.0112 |
| 3 | 0.0062 | 0.0061 | 7 | 0.0097 | 0.0093 |
| 4 | 0.0064 | 0.0063 | 8 | 0.0046 | 0.0042 |

Smaller errors are in bold.
Table 4. The mapping error (MAE/m) comparison of the corridor environment.

| Area Index | Without Variable Resolution | With Variable Resolution | Area Index | Without Variable Resolution | With Variable Resolution |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.0743 | 0.0694 | 9 | 0.0787 | 0.0627 |
| 2 | 0.0304 | 0.0284 | 10 | 0.0657 | 0.0435 |
| 3 | 0.0418 | 0.0394 | 11 | 0.0243 | 0.0217 |
| 4 | 0.0569 | 0.0503 | 12 | 0.0382 | 0.0283 |
| 5 | 0.0977 | 0.0973 | 13 | 0.0571 | 0.0560 |
| 6 | 0.0207 | 0.0188 | 14 | 0.0729 | 0.0316 |
| 7 | 0.0186 | 0.0171 | 15 | 0.0176 | 0.0167 |
| 8 | 0.0682 | 0.0573 | 16 | 0.0623 | 0.0444 |

Smaller errors are in bold.
Table 5. The mapping error (MAE/m) comparison of the outdoor environment.

| Area Index | Without Variable Resolution | With Variable Resolution | Area Index | Without Variable Resolution | With Variable Resolution |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.0545 | 0.0417 | 16 | 0.0327 | 0.0327 |
| 2 | 0.0522 | 0.0416 | 17 | 0.0358 | 0.0356 |
| 3 | 0.0579 | 0.0517 | 18 | 0.0277 | 0.0287 |
| 4 | 0.0400 | 0.0210 | 19 | 0.0299 | 0.0290 |
| 5 | 0.0218 | 0.0194 | 20 | 0.0490 | 0.0485 |
| 6 | 0.0288 | 0.0254 | 21 | 0.0546 | 0.0541 |
| 7 | 0.0196 | 0.0188 | 22 | 0.0640 | 0.0630 |
| 8 | 0.0284 | 0.0261 | 23 | 0.0737 | 0.0735 |
| 9 | 0.0298 | 0.0269 | 24 | 0.0437 | 0.0423 |
| 10 | 0.0344 | 0.0340 | 25 | 0.0466 | 0.0453 |
| 11 | 0.0241 | 0.0234 | 26 | 0.0371 | 0.0359 |
| 12 | 0.0390 | 0.0382 | 27 | 0.0474 | 0.0456 |
| 13 | 0.0417 | 0.0374 | 28 | 0.0536 | 0.0516 |
| 14 | 0.0324 | 0.0294 | 29 | 0.0642 | 0.0621 |
| 15 | 0.0245 | 0.0218 | 30 | 0.0515 | 0.0499 |

Smaller errors are in bold.
Table 6. The map size (point number) comparison.

| Method | Room | Corridor | Outdoor |
| --- | --- | --- | --- |
| without variable resolution | 6,173,086 | 10,355,562 | 10,986,566 |
| with variable resolution | 3,767,088 | 6,490,328 | 8,774,072 |

Smaller numbers are in bold.
Table 7. The mapping error (MAE/m) comparison of the conference room environment.

| Area Index | Without Variable Resolution | With Variable Resolution | Area Index | Without Variable Resolution | With Variable Resolution |
| --- | --- | --- | --- | --- | --- |
| 1 | 0.0245 | 0.0231 | 10 | 0.0083 | 0.0074 |
| 2 | 0.0147 | 0.0129 | 11 | 0.0072 | 0.0057 |
| 3 | 0.0153 | 0.0138 | 12 | 0.0148 | 0.0125 |
| 4 | 0.0049 | 0.0043 | 13 | 0.0104 | 0.0076 |
| 5 | 0.0110 | 0.0102 | 14 | 0.0088 | 0.0075 |
| 6 | 0.0075 | 0.0070 | 15 | 0.0101 | 0.0092 |
| 7 | 0.0140 | 0.0119 | 16 | 0.0055 | 0.0107 |
| 8 | 0.0075 | 0.0064 | 17 | 0.0092 | 0.0088 |
| 9 | 0.0177 | 0.0153 | 18 | 0.0078 | 0.0073 |

Smaller errors are in bold.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
