General commercial LiDAR sensors can only obtain sparse point clouds, which makes it very difficult to identify objects at a long distance. The custom-made dense scanning system in this paper aims to provide richer geometric information, with a resolution comparable to that of an RGB camera.
2.1. System Introduction
The mechanical structure of the dense scan system is shown in Figure 3. The system consists of a laser sensor, a motion generation subsystem, a power transmission subsystem, and a support frame. The laser sensor is a VLP-16 LiDAR with a vertical field of view of 30° and a horizontal field of view of 360°, attached to a rotational axle with a holder. The motion generation subsystem is driven by a stepping motor connected to a gearbox to produce high-precision, controllable rotary motion. In the power transmission subsystem, two synchronous belt systems transfer the rotary motion from the gearbox axle to the sensor bearing axle with a 1:1 ratio.
The laser beams of the VLP-16 are separated by a fixed gap of 2° from −15° to 15° in the vertical direction. The additional nodding rotation from the mechanical system fills this gap with a resolution of 0.1°. Since the pose of the LiDAR coordinate system varies with the nod angle, the original data from the VLP-16 at each nod position must be transformed into and combined in a static coordinate system fixed to the scanning system.
The coordinate systems are defined in Figure 3. The origin of the LiDAR coordinate system {A} is located at the optical center of the sensor; its Y-axis points forward and its Z-axis points upward with respect to the sensor. The origin of the scanner coordinate system {B} is set at the center of the rotation axle; its X-axis is the center axis of the axle, and its Z-axis points vertically upward.
The measurements of each point include the horizontal angle (α), the vertical angle (β), the range (d), and the reflectivity (ref). From the first three values, we can obtain the three-dimensional coordinates of the scanned object in {A}. As illustrated by Figure 4, the coordinates (x_A, y_A, z_A) of a scanned point can be calculated with Equation (1):
x_A = d cos β sin α, y_A = d cos β cos α, z_A = d sin β. (1)
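The conversion in Equation (1) can be sketched as follows. The axis convention (Y forward, Z up, with α measured from the Y-axis) follows the frame definition above; the exact sign conventions for a specific sensor should be checked against its datasheet.

```python
import math

def polar_to_cartesian(alpha_deg, beta_deg, d):
    """Convert one LiDAR measurement (horizontal angle alpha, vertical
    angle beta, range d) to Cartesian coordinates in the sensor frame {A}.
    Convention assumed here: Y forward, Z up, alpha measured from Y."""
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    x = d * math.cos(b) * math.sin(a)   # lateral
    y = d * math.cos(b) * math.cos(a)   # forward
    z = d * math.sin(b)                 # vertical
    return (x, y, z)
```

For example, a point measured straight ahead (α = 0, β = 0) at range d lands on the Y-axis at distance d.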
For a point P in space, P_A and P_B are its coordinates in {A} and {B}, respectively. When the stepping motor rotates the laser sensor, the origin of {A} rotates around the X-axis of {B} by an angle θ. The transformation of the point coordinates between {A} and {B} can be written as Equation (2):
P_B = R(θ) P_A + t, (2)
where R(θ) and t are the rotation matrix and translation vector between the coordinate systems {A} and {B}, respectively, which are determined by the mechanical design of the system.
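As a minimal sketch of Equation (2), the nod rotation is a rotation about the X-axis of {B}; the translation t below is a placeholder for the value fixed by the mechanical design.

```python
import numpy as np

def nod_rotation(theta_rad):
    """Rotation matrix R(theta) about the X-axis of {B} (Equation (2))."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def a_to_b(p_a, theta_rad, t_ab):
    """Map a point from the LiDAR frame {A} to the scanner frame {B}:
    P_B = R(theta) @ P_A + t. The offset t_ab comes from the design."""
    return nod_rotation(theta_rad) @ np.asarray(p_a, float) + np.asarray(t_ab, float)
```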
2.2. Intrinsic and Extrinsic Parameter Calibration
In a multi-beam scanner, each beam is a ray emanating from a laser sensor. Ideally, all beams lie in a vertical plane and intersect at a common origin. In practice, however, each sensor deviates from its ideal pose due to intrinsic parameter errors [10]. Such errors cause shape distortion in 3D models of the environment. An example is shown in Figure 5, where the measurement of a flat wall by the dense scan system clearly splits into discontinuous blocks. This inconsistency in surface modeling is mainly caused by inaccurate alignment of the point clouds from different beams, and the resulting shape deviation is especially damaging for applications that depend on accurate surface shapes. In addition, the extrinsic parameter error, which stems from the manufacturing tolerances of our customized nodding mechanism, introduces extra location and orientation displacements of the measured points [11].
To diminish the effect of such parameter inaccuracy, we propose a fully-constrained calibration method. When parameter errors exist, the alignment of the coordinate systems of the different beams deviates from its theoretical condition. A general description of such deviation is an additional rotation matrix and translation vector between each beam frame and a reference frame. Choosing the coordinate system of the first laser beam as the reference, the coordinate system deviations of the other beams are defined as ΔR_i and Δt_i, where i = 1, 2, …, 15 is the index of the beam with respect to the reference. Correspondingly, the relationship between P_A and P_B in Equation (2) is modified as Equation (3):
P_B = R(θ)(ΔR_i P_A + Δt_i) + t. (3)
The ΔR_i and Δt_i in 3D space contain 6 error parameters, corresponding to the rotations and translations about the X, Y, and Z axes, as shown in Equations (4) and (5):
ΔR_i = R_x(Δθ_x) R_y(Δθ_y) R_z(Δθ_z), (4)
Δt_i = (Δx, Δy, Δz)^T, (5)
where Δθ_x, Δθ_y, Δθ_z and Δx, Δy, Δz are the rotational angle error parameters and the translation error parameters of the X, Y, and Z axes, respectively.
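The deviation transform of Equations (4) and (5) can be assembled from the six parameters as below; the X-Y-Z composition order is an assumption, since the source does not state it.

```python
import numpy as np

def delta_transform(dax, day, daz, dx, dy, dz):
    """Assemble the deviation rotation Delta-R as a composition of
    elementary rotations about X, Y, Z (composition order assumed),
    and the deviation translation Delta-t = (dx, dy, dz)^T."""
    cx, sx = np.cos(dax), np.sin(dax)
    cy, sy = np.cos(day), np.sin(day)
    cz, sz = np.cos(daz), np.sin(daz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz, np.array([dx, dy, dz], dtype=float)
```

With all six parameters at zero, the deviation reduces to the identity, which is the initial value used for the calibration later in this section.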
Calibration needs to constrain 6 degrees of freedom (DOF) in 3D space. To fully cover the 6 DOF constraints, the calibration scene is designed as in Figure 6. The scanner is placed in front of a U-shape wall, which provides three planes as calibration references. Three kinds of constraints are considered: (1) the flatness of the point clouds on all three planes, (2) the continuity of the blocks on plane 1 generated by neighboring beams, and (3) the asymmetry of the relative locations of the blocks on planes 2 and 3. The corresponding errors are denoted as e_{f,j}, e_c, and e_s, respectively, where j = 1, 2, 3 is the plane index. Together, the three kinds of constraints limit all 6 DOF of motion of one coordinate system relative to another, as listed in Table 1.
The cost function for the calibration is defined in Equation (6). Since the calibration parameters have the same effect on each plane, the constraint weights of the planes are equal. However, two constraints relate to plane 1: the flatness constraint e_{f,1} and the continuity constraint e_c. We define a weight k to assign the emphasis between the two. In the following, we present the error cost formulation for each constraint.
For the plane flatness constraint, the point clouds belonging to the same plane are fitted with a plane function, and the fitting error is used as the calibration cost. We use the principal component analysis (PCA) [30] method for the plane fitting, since there are no significant outliers in the calibration data and PCA is robust to measurement noise. The detailed procedure to obtain the plane cost e_{f,j} is presented below.
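A minimal sketch of the PCA flatness cost: the plane normal is taken as the direction of smallest variance, and the cost is the RMS point-to-plane distance. The paper's exact cost may differ in normalization; this only illustrates the technique.

```python
import numpy as np

def plane_flatness_cost(points):
    """Fit a plane to an (N, 3) point array with PCA and return the RMS
    point-to-plane distance as a flatness cost. The plane normal is the
    eigenvector of the scatter matrix with the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # eigh returns eigenvalues in ascending order for a symmetric matrix
    _, vecs = np.linalg.eigh(centered.T @ centered)
    normal = vecs[:, 0]            # smallest-variance direction
    dist = centered @ normal       # signed point-to-plane distances
    return float(np.sqrt(np.mean(dist ** 2)))
```

A perfectly planar cloud yields a cost of zero; misaligned beam blocks raise the cost, which is what the optimizer penalizes.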
For the continuity constraint on plane 1, we use the curve distance to quantify the splits between two adjacent beam blocks. The scanner rotates downward 20 times with a resolution of 0.1°, which provides a total of 21 scan curves for each beam. The cone of each beam intersects the three walls in a parabolic curve, as shown in Figure 7. Since the gap between the original beams of the VLP-16 is 2°, the very first curve of the lower beam block should overlap with the last curve of the upper beam block on the front plane (plane 1), as shown in Figure 8.
The detailed procedure to obtain the continuity cost is presented below.
1. Input the point set S_i, which is the last scanned line of the i-th block, with N_i the number of elements in S_i;
2. Define the fitted curve function in the YZ plane as Equation (10), a quadratic z = a y² + b y + c, where Y and Z are the coordinate matrices in the YZ plane. For the i-th block, i = 1, 2, …, 15, its last beam curve can be fitted with such a quadratic curve, as in Equation (11);
3. Use the least squares method with Equation (12), C = (Yᵀ Y)⁻¹ Yᵀ Z, to calculate the coefficient vector C = (a, b, c)ᵀ;
4. Input the point set S_{i+1}, which is the first scanned line of the (i+1)-th block, with N_{i+1} the number of elements in S_{i+1};
5. The error of the distance constraint is the average distance of the points in S_{i+1} to the fitted curve in Equation (10). The corresponding continuity cost e_c is obtained by Equation (13), where (y_k, z_k) are the YZ-plane coordinates of each point in S_{i+1}.
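The continuity procedure can be sketched as follows. Vertical (z) residuals stand in for the true point-to-curve distance of Equation (13), which is a simplifying assumption for illustration.

```python
import numpy as np

def continuity_cost(last_curve, first_curve):
    """Continuity cost between adjacent beam blocks.
    last_curve, first_curve: (N, 2) arrays of (y, z) points in the YZ
    plane. Fits z = a*y^2 + b*y + c to the upper block's last curve by
    least squares, then averages the |z| residuals of the lower block's
    first curve against the fit (vertical residuals approximate the
    true point-to-curve distance)."""
    last = np.asarray(last_curve, float)
    first = np.asarray(first_curve, float)
    y, z = last[:, 0], last[:, 1]
    Y = np.column_stack([y ** 2, y, np.ones_like(y)])  # design matrix
    coef, *_ = np.linalg.lstsq(Y, z, rcond=None)       # (Y^T Y)^-1 Y^T Z
    y2, z2 = first[:, 0], first[:, 1]
    pred = coef[0] * y2 ** 2 + coef[1] * y2 + coef[2]
    return float(np.mean(np.abs(z2 - pred)))
```

When the two curves overlap exactly, as they should for a correctly calibrated scanner, the cost is zero; a split between blocks shows up directly as the mean gap width.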
For the asymmetry constraint, we quantify the error by examining the difference of the gap widths of adjacent blocks between the left and right walls. Using the same procedure as for calculating e_c, we obtain the gap distance between any two adjacent blocks on planes 2 and 3, denoted as d_{l,i} and d_{r,i}, respectively. The symmetry error e_s is then defined as the accumulated difference between the corresponding gap widths on the two side planes.
The experimental scene is shown in Figure 9a, which is a corner of a stairwell. The data collected for calibration is shown in Figure 9b, which provides the U-shape geometry we need. We minimize the total cost in Equation (14) using the OQNLP iterative algorithm [31], with the initial values set to zero matrices. The resulting optimal calibration parameters are shown in Table 2. Both the rotational and translational corrections of the beam coordinate systems are minor, which is reasonable considering the small dimensions of the light components in the scanner; nevertheless, the improvement in environmental modeling accuracy is evident.
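In practice, the optimization can be organized by packing the 15 beams' six error parameters each into one vector for a nonlinear solver (OQNLP in this work; any multistart global optimizer plays the same role). The sketch below shows the packing and an Equation (6)-style cost combination; the exact combination used in the paper may differ.

```python
import numpy as np

def pack_params(angle_errors, trans_errors):
    """Flatten per-beam rotation-angle errors and translation errors
    (each an (n_beams, 3) array) into one parameter vector."""
    return np.concatenate([np.asarray(angle_errors, float).ravel(),
                           np.asarray(trans_errors, float).ravel()])

def unpack_params(x, n_beams=15):
    """Inverse of pack_params: recover the two (n_beams, 3) arrays."""
    angles = np.asarray(x[:3 * n_beams]).reshape(n_beams, 3)
    trans = np.asarray(x[3 * n_beams:]).reshape(n_beams, 3)
    return angles, trans

def total_cost(flatness_costs, continuity, symmetry, k=0.2):
    """Combine the three cost terms in the spirit of Equation (6):
    equal weights for the per-plane flatness costs, weight k on the
    plane-1 continuity term (an assumed structure)."""
    return float(np.sum(flatness_costs) + k * continuity + symmetry)

# Zero initial value, as used for the calibration in the text
x0 = pack_params(np.zeros((15, 3)), np.zeros((15, 3)))
```

The 90-element vector x0 is what the solver perturbs; at each iteration the parameters are unpacked, the point clouds are re-transformed, and the three cost terms are re-evaluated.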
Table 3 compares the three planes before and after calibration from two viewpoints. The side views show that the flatness is much improved and that there are no overlaps or unreasonable gaps between any two adjacent point cloud blocks. At the same time, the front views confirm that the calibration does not introduce any unreasonable changes to the plane shapes. The fitted curve distance constraints ensure the continuity between the blocks.
From the side views of the front plane, the inaccurate system parameters caused more serious problems in plane flatness than in continuity. Therefore, we set the value of k in Equation (6) to k = 0.2 to put more emphasis on the flatness cost. The calibration results for the two costs are shown in Figure 10. The large flatness errors are vastly decreased, and the continuity also improves, so the calibration goal is fully accomplished. Readers can tune the weight k according to the status of their own scanning systems.
In this section, we proposed a 3D LiDAR calibration method using the three planes of a U-shape scene as references. With all 6 DOF constraints included in the cost function, the intrinsic and extrinsic parameters are calibrated jointly in one model. The optimization takes 30 min for the 64,164 points in the dataset, which is acceptable because it is performed only once, offline.