Article

Automatic Calibration between Multi-Lines LiDAR and Visible Light Camera Based on Edge Refinement and Virtual Mask Matching

1 Shunde Innovation School, University of Science and Technology Beijing, Foshan 528300, China
2 Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Remote Sens. 2022, 14(24), 6385; https://doi.org/10.3390/rs14246385
Submission received: 26 September 2022 / Revised: 5 December 2022 / Accepted: 8 December 2022 / Published: 17 December 2022
(This article belongs to the Special Issue Pattern Recognition in Remote Sensing)

Abstract

To assist fine 3D terrain reconstruction of scenes in remote sensing applications, an automatic joint calibration method between light detection and ranging (LiDAR) and a visible light camera based on edge point refinement and virtual mask matching is proposed in this paper. The proposed method addresses the problems of inaccurate edge estimation for LiDAR with different horizontal angle resolutions and of low calibration efficiency. First, we design a novel calibration target with four hollow rectangles, which enables fully automatic locating of the calibration target and increases the number of corner points. Second, an edge refinement strategy based on background point clouds is proposed to estimate the target edge more accurately. Third, a two-step method for automatically matching the calibration target between the 3D point clouds and the 2D image is proposed: locating first and then fine processing, so that the corner points can be obtained automatically, which greatly reduces manual operation. Finally, a joint optimization equation is established to optimize the camera's intrinsic parameters and the extrinsic parameters between the LiDAR and the camera. Our experiments demonstrate the accuracy and robustness of the proposed method through projection and data-consistency verifications; the accuracy is improved by at least 15.0% compared with comparable traditional methods. The final results verify that our method is applicable to LiDAR with large horizontal angle resolutions.

1. Introduction

At present, traditional large-scale digital map technology has been well developed and widely applied, for example in Google Maps and Baidu Maps. However, work on fine terrain reconstruction at a small scale is relatively limited. To obtain a fine 3D topographic map at a small scale, the simplest way is to use unmanned aerial vehicles (UAVs) and reconstruct the terrain with tilt photography. Tilt photography uses pure 2D image analysis and modeling to restore the 3D structure from 2D information, and its accuracy is directly affected by image quality and the imaging environment. Recently, more and more research has been conducted on fine 3D terrain reconstruction using UAVs equipped with light detection and ranging (LiDAR) and visible light cameras. The visible light camera can obtain high-resolution color information, but it is particularly vulnerable to external weather, illumination, and other factors, and it lacks 3D information about the target. LiDAR can quickly obtain the 3D information of space objects, but it cannot capture texture, color, and other appearance information. Therefore, LiDAR and the visible light camera complement each other well, which can greatly improve the performance of existing UAV 3D terrain reconstruction and low-altitude remote sensing work [1,2,3,4,5,6,7]. However, the data obtained by the two sensors are expressed in their respective coordinate systems, while data fusion requires that the data collected by both sensors be expressed in either the LiDAR or the camera coordinate system. It is therefore necessary to determine the transformation between the two coordinate systems through joint calibration, that is, the extrinsic parameters between the LiDAR and the camera together with the intrinsic parameters of the camera.
Currently, according to the characteristics of different methods, we divide the calibration between the LiDAR and the visible light camera into target-based methods and target-less methods; for details and representative works, refer to Figure 1. The target-based method finds 2D feature points in the image coordinate system and 3D feature points in the LiDAR coordinate system with the help of a standard target, establishes geometric constraints, and solves the extrinsic parameters of the sensors through perspective-n-point (PnP) or nonlinear optimization. According to the shape of the calibration target, it can be divided into 1D objects, such as line-feature-based targets [8,9,10]; 2D objects, such as chessboards [11,12,13] or boards with circular holes [14]; and 3D objects [15,16], such as spherical targets [17]. Refs. [18,19] detected the corner points in images and point clouds by intersecting fitted edge lines and solved the calibration parameters with a linear equation. The calibration target they used was a monochrome board, mainly because two different colors affect LiDAR ranging differently, which degrades the accuracy of plane fitting. Refs. [20,21,22] introduced nonlinear optimization to improve the calibration accuracy. Ref. [23] adopted a space joint calibration method combining coarse measurement and fine adjustment. The innovation of [24] was to use a known plane to estimate the 3D corners. Refs. [24,25] used a LiDARTag and intensity information to locate the target, which is similar to the method in [26].
The target-less methods can be divided into mutual-information-based approaches [27,28], motion-based methods [29,30], and learning-based techniques [31,32,33,34,35]. The mutual-information-based approach can make full use of environmental information to complete online calibration; however, it must be carried out in a natural scene, and if the scene characteristics do not meet expectations, the results will deviate considerably. The motion-based method treats the calibration of LiDAR and camera as a hand–eye calibration problem, recovering the correct transformation from a series of rigid transformations; it needs accurate motion information, so it is not applicable to static systems. Many learning-based approaches use a convolutional neural network (CNN) to solve this task. These networks can calibrate the LiDAR–camera system without calibration targets, matching information, or motion information, but the accuracy depends on the size of the training data set and the structure of the CNN. In application, they require rich texture in the environment and high computational power, and often need a graphics processing unit to accelerate data processing.
The target-based method is still the mainstream because of its stability and low computational requirements; therefore, sensors usually undergo stable and high-precision target-based calibration of their extrinsic parameters before leaving the factory. The existing methods mainly have the following problems. First, the adaptability of calibration methods to LiDAR with different horizontal angle resolutions (θ) is not considered; for example, the methods based on edge estimation [18,19] differ greatly in effectiveness between LiDAR with θ = 0.1° and LiDAR with a large θ = 0.4°. Second, the camera intrinsic parameters, such as the distortion coefficients, are not considered in the optimization process; the two-stage approach of first calibrating the intrinsic parameters and then solving the extrinsic parameters brings considerable inconvenience to large-scale commercial applications in terms of efficiency. Third, for locating the calibration target, manual filtering is not conducive to improving the level of automation, and methods that use intensity information to locate special marks are affected by the different ranging behavior of the LiDAR on objects of two different colors.
To address the adaptability of edge fitting under different θs, we propose an approximation edge fitting technique based on background point clouds, which refines the edge points under different θs by using the known geometric size of the calibration target. According to the geometric shape of the designed calibration board, a set of corner point extraction schemes with fully automatic locating and fine processing is designed, which greatly reduces manual operation and does not depend on the intensity information of the LiDAR or on special marks. Finally, the camera intrinsic parameters are taken into account, and their initial values are fed into the optimization function, which avoids the inefficiency of the two-stage method when calibrating a large amount of equipment. The main contributions of this paper are as follows:
  • We design a novel calibration target and propose a joint automatic calibration method based on edge refinement and virtual mask matching.
  • An improved edge refinement scheme is introduced to refine the edge points. The maximum error of the edge estimation of the calibration board does not exceed 5.0 mm, which greatly improves the accuracy of the corner points. This high-precision corner detection makes the final calibration accuracy better than state-of-the-art techniques, especially in scenes with sparse point clouds, such as θ = 0.4°.
  • An automatic locating method, locating first and then fine processing, is proposed to obtain the corner points; no parameters need to be set by the user during the calibration process.
  • In the experiments, we take a multi-line LiDAR (64 lines) as an example and verify the accuracy and robustness of the proposed method.
In the following sections, Section 2 briefly introduces the proposed calibration system, Section 3 describes the method in detail, Section 4 presents the experiments and discussion, and Section 5 concludes the paper.

2. Proposed Calibration System and Computational Flow Chart

Figure 2a,b show the design of the calibration board. We place the calibration board about 30.0 cm in front of a flat wall to better obtain the points of the background wall for edge refinement, as shown in Figure 2c. In Section 3.1, we use the Euclidean distance to cluster all objects; to prevent the calibration board and the flat wall from being clustered into one category, the calibration board needs to keep a certain distance from the wall. A distance of 30.0 cm is used in this paper, although other suitable distances can also be used in practice. The holes cut in the calibration board make it convenient to use geometric information for locating. Figure 2d is a diagrammatic sketch of the method, which involves the locating and matching of 3D feature points, the locating and refinement of 2D feature points, the solution of the optimization equations, and the 3D–2D projection results. If we unify the data to the LiDAR coordinate system, we obtain point clouds with color; if we unify the data to the camera coordinate system, we obtain an image with sparse depth information. The symbols and functions used in the following sections are given in Table 1.
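As a concrete illustration of this data unification (our sketch, not code from the paper), the function below projects LiDAR points into the image with OpenCV and attaches pixel colors to them, assuming the extrinsic parameters R, T and the camera intrinsics K with distortion coefficients are already known; all names are our own.

```python
import numpy as np
import cv2

def colorize_point_cloud(points_lidar, image, R, T, K, dist):
    """Project LiDAR points (N, 3) into the image and attach pixel colors.

    R (3x3) and T (3,) map LiDAR coordinates to camera coordinates; K (3x3)
    and dist (e.g., np.array([k1, k2, 0, 0])) follow OpenCV conventions.
    Returns the points that project inside the image and their pixel colors."""
    pts_cam = points_lidar @ R.T + T                 # rigid transform to camera frame
    in_front = pts_cam[:, 2] > 0.0                   # keep points in front of the camera
    pts_cam = pts_cam[in_front]

    # OpenCV applies the pinhole + radial distortion projection for us.
    pixels, _ = cv2.projectPoints(pts_cam, np.zeros(3), np.zeros(3), K, dist)
    pixels = pixels.reshape(-1, 2)

    h, w = image.shape[:2]
    u = np.round(pixels[:, 0]).astype(int)
    v = np.round(pixels[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = image[v[valid], u[valid]]               # color per projected point
    return points_lidar[in_front][valid], colors
```

The reverse direction, an image with sparse depth, can reuse the same projected pixel coordinates by writing each point's camera-frame depth into the corresponding pixel.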

3. Proposed Method

3.1. Automatic Locating Calibration Board

In this section, we describe how to automatically locate the point clouds of the calibration board ($PC_{board}$) and the background wall ($PC_{wall}$). First, to reduce the influence of ground points on 3D clustering, we use Patchwork [36] to remove the ground; this robust ground-removal algorithm reduces the $ring$ filtering parameters. The effect is illustrated in Figure 3a. We then cluster all objects according to the Euclidean distance (Figure 3b); each object after clustering is called $PC_i$. The normal vector $V_{PC_i}$ of $PC_i$ is obtained by plane fitting, as shown in Figure 3c. A rotation according to $V_{PC_i}$ and $V_X(1, 0, 0)$ transforms $PC_i$ into the YOZ plane, yielding $PC_i'$. We take the geometric centroid of $PC_i'$ as the center and generate a box of the same size as our calibration board in the YOZ plane, as shown in Figure 3d,e. We calculate the matching $Score$ between each $PC_i'$ and the preset box, and redefine $PC_i'$ as the union of $PC_{in}$ and $PC_{out}$:

$$PC_i' = PC_{in} \cup PC_{out} \qquad (1)$$

where $PC_{in}$ is the set of red points in Figure 3d, $PC_{out}$ is the set of black and blue points, and $Score(PC_i')$ is:

$$Score(PC_i') = Score_A \times Score_B \times Score_C \qquad (2)$$

where $Score_A = N(PC_{in}) / N(PC_{out})$; $Score_B$ is the ratio of the short side ($w$) to the long side ($h$) of $Box(PC_{in})$, $Score_B = w / h$; and $Score_C$ is the ratio of $S(Box(PC_{in}))$ to the actual area of the calibration board, $Score_C = S(Box(PC_{in})) / S(board)$. Since the area of our calibration board is exactly 1.0 m², $Score_C = S(Box(PC_{in}))$.
We take $Score$ as the evaluation function to screen $PC_i'$. The three parts of $Score$ filter out objects with too many or too few points, ensuring that the cluster containing the calibration board is selected with a high score. Figure 3e shows the cluster with the highest $Score$ among all point clouds. In Section 4.2, we analyze the impact of $Score_A$, $Score_B$, and $Score_C$ on automatically locating the calibration board. Note that the connecting rod of the calibration board is also selected, as shown in Figure 3e; the results in Figure 3c,d are ideal, but in fact, due to the Euclidean clustering, the connecting rod of the calibration board is clustered into the same category. We need to discard it because it should not participate in the calculation. We therefore delete the rings with a small number of points by traversing each ring, as shown in Figure 3f; the resulting red point cloud is $PC_{board}$. We then select a 1.0 m × 1.0 m × 1.0 m box centered on $PC_{board}$ to ensure that $PC_{wall}$ (the blue and green point clouds in Figure 3g) is framed together; therefore, the calibration board must be placed within 1.0 m in front of the background wall. So far, we have acquired $PC_{board}$ and $PC_{wall}$ through automatic locating, and we can further refine the edge points with the method described in the next section.
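For clarity, a minimal sketch of the Score computation is given below; it assumes the cluster has already been rotated into the YOZ plane and reduced to its Y/Z coordinates, and it follows Equation (2) literally, so the names and the handling of degenerate cases are choices of ours rather than the paper's implementation.

```python
import numpy as np

def locating_score(cluster_yz, board_w=1.0, board_h=1.0):
    """Score one plane-aligned cluster against a board-sized box (a sketch).

    cluster_yz: (N, 2) Y/Z coordinates of a cluster rotated into the YOZ plane;
    board_w, board_h: calibration-board size in meters (1.0 m x 1.0 m here)."""
    center = cluster_yz.mean(axis=0)
    half = np.array([board_w, board_h]) / 2.0

    # Points inside / outside a board-sized box centered at the centroid.
    inside = np.all(np.abs(cluster_yz - center) <= half, axis=1)
    n_in, n_out = int(inside.sum()), int((~inside).sum())
    score_a = n_in / max(n_out, 1)                   # Score_A = N(PC_in) / N(PC_out)

    box = cluster_yz[inside]
    if len(box) == 0:
        return 0.0
    size = box.max(axis=0) - box.min(axis=0)         # bounding box of PC_in
    short, long_ = sorted(size)
    score_b = short / long_ if long_ > 0 else 0.0    # Score_B = w / h
    score_c = (size[0] * size[1]) / (board_w * board_h)  # Score_C (board area = 1 m^2)

    return score_a * score_b * score_c               # Equation (2)
```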

3.2. Approximation Edge Fitting

This section introduces an approximation edge fitting method, which better handles the inaccurate edge estimation caused by θ. As shown in Figure 4a, because of the θ of the LiDAR, two adjacent laser beams, one on the target and the other off the target, cannot truly scan the edge of the object. It can be seen from Figure 4b,c that although the laser spots of the LiDAR are random within a certain range, overall the blue laser spots with θ = 0.4° are far from the real edge. Figure 4d–f show the sparsity under different θs. The distance between the measured value and the real edge is closely related to θ and the test distance: the larger the θ, the worse the edge estimation of the target. Besides θ, the vertical angle resolution, which is related to the number of rings of the LiDAR, may be another factor. From Figure 4d–f, we find that with the same number of LiDAR lines, the measurements under different θs have different point cloud densities. Therefore, in the following sections, we take a 64-line LiDAR as an example (details are listed in Section 4.1) to study calibration under different θs while keeping the vertical angle resolution of the LiDAR constant, focusing on the improvement achieved under different θs.
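To make the effect of θ concrete (a back-of-envelope calculation of ours, not a figure from the paper), the spacing between adjacent points on one ring at range $d$ is roughly the arc length $d \cdot \theta$, so the last returned foreground point can miss the true edge by up to about one spacing:

$$s \approx d \cdot \theta_{\mathrm{rad}}, \qquad s(10\ \mathrm{m},\ 0.1^{\circ}) \approx 10 \times 0.00175 \approx 17\ \mathrm{mm}, \qquad s(10\ \mathrm{m},\ 0.4^{\circ}) \approx 10 \times 0.00698 \approx 70\ \mathrm{mm}.$$

This is why a quadrilateral fitted from the raw foreground edge points underestimates the board size, and why the error grows with both θ and the test distance.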
As shown in Figure 5a, a partial schematic diagram of the calibration board scanned by the LiDAR, we take $ring_i$ as an example, where the green background edge point is $p_i^b(x_i^b, y_i^b, z_i^b)$, the red foreground edge point on the board is $p_i^f(x_i^f, y_i^f, z_i^f)$, and the direction vector (blue dotted line) between these two points is $V_i(x_i, y_i, z_i)$, where $x_i = x_i^b - x_i^f$, $y_i = y_i^b - y_i^f$, and $z_i = z_i^b - z_i^f$. The set of background edge points is $P^b$, $p_i^b \in P^b$; the set of edge points of the foreground calibration object is $P^f$, $p_i^f \in P^f$.

Due to θ, the red line $l^f$ fitted from $P^f$ is still a certain distance from the actual black edge line, so the quadrilateral fitted with these edge points is often smaller than the actual size, and with increasing test distance or θ, the difference between the fitted quadrilateral and the real size grows (the verification is shown in Section 4.2). It is therefore necessary to expand $P^f$ toward the actual edge. First, we obtain the points on the calibration board ($PC_{board}$) and the background wall ($PC_{wall}$), which has been done in Section 3.1. Then, the edge points on the background, i.e., the yellow points in Figure 5c, are projected onto the calibration board plane along the direction of the LiDAR laser beam. We match the foreground point (red point in Figure 5b) and the background point (green point in Figure 5b) of each scanning $ring_i$, which can easily be realized with a sorting algorithm. At this point, we have the background edge point $p_i^b$ and the calibration board edge point $p_i^f$. We choose $V_i$, the direction of the blue arrow in Figure 5a,c, as the outward expansion direction. The expansion length is the distance between the two points multiplied by a $scale$ coefficient, $scale \in (0, 0.5)$, and the blue middle edge point $p_i^m(x_i^m, y_i^m, z_i^m)$ in Figure 5c is obtained as:

$$p_i^m = p_i^f + scale \cdot V_i \qquad (3)$$

where $p_i^m$ must lie between $p_i^f$ and $p_i^b$. By traversing all the $ring$s, we obtain the approximated edge point set $P^m$. We use $P^m$ as the new edge points to fit the edge line and obtain the blue edge line $l_1$, shown in Figure 5b. Similarly, the edge lines $l_2$, $l_3$, and $l_4$ can be computed.
As for the selection of $scale$, we use the actual size of the calibration board (Figure 2b) as the reference and take $scale = 0.1$ as the initial value. The newly expanded edge points are used to fit a rectangle by random sample consensus (RANSAC). The termination condition of the RANSAC fitting is the difference between the calculated rectangle size and the actual physical size: if this difference is less than the threshold $th$, the outward expansion stops and the current expanded edge is saved; if the difference is larger than $th$, there is still a gap between the measured result and the actual size, and the points need to be expanded further, that is, $scale$ continues to increase with a step size ($ss$):

$$scale = scale + ss \qquad (4)$$

where $scale$ is not recommended to exceed 0.5, because when $scale$ is greater than 0.5, $p_i^m$ is closer to the green point in Figure 5, whereas when $scale$ is less than 0.5, $p_i^m$ is closer to the red point in Figure 5; we prefer the edge point to be a point on the calibration board rather than outside it. In the later experiments, $scale$ is expanded up to 0.4 with an $ss$ of 0.01.
In the subsequent experiments of this paper, $th$ = 0.005 m is used. The smaller the $th$, the more accurate the edge point estimation, but the longer the detection time. By repeating the above process, a rectangular edge point cloud whose deviation from the actual physical size is no more than 5 mm can be obtained (we verify this in Section 4.2).
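A compact sketch of this expansion loop is given below; it assumes the matched foreground and background edge points are already expressed in the board (YOZ) plane and replaces the paper's RANSAC rectangle fit with a crude bounding-box side estimate, so it illustrates Equations (3) and (4) rather than reproducing the exact implementation.

```python
import numpy as np

def rectangle_side_estimate(points_yz):
    """Crude stand-in for the RANSAC rectangle fit: the average side length of
    the axis-aligned bounding box of points expressed in the board plane."""
    span = points_yz.max(axis=0) - points_yz.min(axis=0)
    return float(span.mean())

def refine_edge_points(p_f, p_b, board_size=1.0, th=0.005, ss=0.01, max_scale=0.5):
    """Expand foreground edge points toward their matched background points.

    p_f, p_b: (N, 2) matched foreground / background edge points per ring in
    the board (YOZ) plane. Returns the expanded points and the final scale."""
    v = p_b - p_f                                    # expansion directions V_i
    scale = 0.1                                      # initial value used in the paper
    while scale <= max_scale:
        p_m = p_f + scale * v                        # Equation (3)
        side = rectangle_side_estimate(p_m)
        if abs(side - board_size) < th:              # within th (5 mm) of the true size
            break
        scale += ss                                  # Equation (4)
    return p_m, scale
```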

3.3. 3D Virtual Mask Matching

This section introduces a scheme for obtaining the 3D feature points of the calibration board using virtual mask matching. The edge point cloud $PC_{edge}$ and $PC_{board}$ are combined into a new point cloud $PC_{calib} = PC_{edge} \cup PC_{board}$. We generate a virtual mask $PC_{mask}$ with $N(PC_{mask})$ random points in the YOZ plane with the same size as our board, as shown in Figure 6; we set $N(PC_{mask}) = 10^4$. We use the iterative closest point (ICP) method to match $PC_{mask}$ with $PC_{calib}$, and the matching result is a rigid transformation matrix $H$. The feature points $P(p_1^L, p_2^L, \ldots, p_{20}^L)$ we need can be calculated as $P' H$, where $P'(p_1', p_2', \ldots, p_{20}')$ is the set of feature points predefined on $PC_{mask}$.
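One possible realization of this matching uses Open3D's ICP registration; the paper does not name a library, and for brevity the sketch below fills the mask uniformly (the real mask would also reproduce the hollow rectangles), so the point count, correspondence distance, and initialization are assumptions of ours.

```python
import numpy as np
import open3d as o3d

def match_virtual_mask(pc_calib_xyz, n_mask=10000, board=1.0):
    """Register a board-sized virtual mask to PC_calib with ICP (a sketch).

    pc_calib_xyz: (N, 3) array of PC_calib = PC_edge + PC_board in the LiDAR
    frame. Returns the 4x4 rigid transform H; the 20 feature points predefined
    on the mask are then mapped into the LiDAR frame with the same H.
    """
    # Random points on a board-sized plane in the YOZ plane (the virtual mask).
    yz = (np.random.rand(n_mask, 2) - 0.5) * board
    mask_xyz = np.column_stack([np.zeros(n_mask), yz])

    source = o3d.geometry.PointCloud()
    source.points = o3d.utility.Vector3dVector(mask_xyz)
    target = o3d.geometry.PointCloud()
    target.points = o3d.utility.Vector3dVector(pc_calib_xyz)

    # Coarse initialization: translate the mask onto the target centroid.
    init = np.eye(4)
    init[:3, 3] = pc_calib_xyz.mean(axis=0) - mask_xyz.mean(axis=0)

    reg = o3d.pipelines.registration.registration_icp(
        source, target, 0.05, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation   # homogeneous matrix H
```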

3.4. 2D Corner Points Detected in Image

We design an automatic matching method according to the characteristics of our calibration board, which automatically finds the corners in images and then refines them. It consists of two steps: corner point locating and corner point refinement. First, we use a region growing algorithm (RGA) to roughly locate the 2D feature points. We divide the input image into $c \times c$ cells and use the center point of each cell as a seed point of the RGA, as shown in Figure 7a. In the following experiments, we choose $c = 10$; that is, the original image is divided into 10 equal parts horizontally and vertically, yielding 100 seed points. Since our calibration board is pure white, we only grow the seed points whose gray value is greater than ε, and set ε to 127, half of the maximum grayscale value, which quickly filters out invalid seed points. Then, rectangle detection is carried out on the RGA result of each seed point, as shown in Figure 7b. Multiple rectangles are clearly detected at the same location, as shown in Figure 7c, which is not conducive to the selection of corner points. Finally, we perform k-means clustering on all rectangles. The clustering index is the distance between the center points of the rectangular boxes, and the k value is five, which exactly corresponds to the five rectangles to be detected. The clustering result is five clusters of rectangular boxes, each containing at least one rectangle. The mean of the corner points of each cluster is taken as the roughly located corner points, shown as red points in Figure 7d.
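The rough-locating step could be sketched with OpenCV as follows; flood fill stands in for the region growing, cv2.minAreaRect for the rectangle detection, and cv2.kmeans for the clustering, with thresholds and area limits chosen by us rather than taken from the paper.

```python
import numpy as np
import cv2

def locate_rectangles_roughly(gray, c=10, eps=127, k=5):
    """Roughly locate the board rectangles from grid seed points (a sketch).

    gray: single-channel image. Seeds are the centers of a c x c grid; only
    seeds brighter than eps are grown. Returns k sets of four rough corners.
    """
    h, w = gray.shape
    boxes = []
    for i in range(c):
        for j in range(c):
            seed = (int((j + 0.5) * w / c), int((i + 0.5) * h / c))
            if gray[seed[1], seed[0]] <= eps:
                continue                              # skip dark (invalid) seeds
            mask = np.zeros((h + 2, w + 2), np.uint8)
            cv2.floodFill(gray.copy(), mask, seed, 255, loDiff=10, upDiff=10)
            region = mask[1:-1, 1:-1]                 # grown white-board region
            contours, _ = cv2.findContours(region, cv2.RETR_CCOMP,
                                           cv2.CHAIN_APPROX_SIMPLE)
            for cnt in contours:
                if cv2.contourArea(cnt) < 100:        # drop tiny fragments
                    continue
                rect = cv2.minAreaRect(cnt)           # rotated-rectangle fit
                boxes.append(cv2.boxPoints(rect))     # its four corner points
    boxes = np.asarray(boxes, dtype=np.float32)

    # k-means on the rectangle centers; the mean corners of each cluster give
    # the roughly located corner points (the red points in Figure 7d).
    centers = boxes.mean(axis=1)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(centers, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    return [boxes[labels == m].mean(axis=0) for m in range(k)]
```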
After obtaining the roughly located corner points, it is clear that their positions are not accurate (red points in Figure 7d–f) because the rectangle detection has poor robustness and accuracy; the corner points therefore need to be refined further. Taking $q_1$ and $q_2$, the green points in Figure 7g, as an example, we connect $q_1$ and $q_2$ to obtain a straight line $l_{q_1 q_2}$ and collect all the points close to $l_{q_1 q_2}$, which are circled by a yellow ellipse in Figure 7g. We use these points to refit the line and obtain the refined $l_{q_1 q_2}$. We fit the connecting lines between the other points in the same way and estimate the refined corner points from the intersections of the straight lines. As shown by the green dots, the points after this fine processing are more accurate than the red points detected before. In Figure 7g, for convenience of presentation, we visualize the result in pixels; in fact, the coordinates of these corner points are sub-pixel. Finally, according to the order of the feature points in Figure 7h, each refined corner point is sorted to facilitate pairing with the 3D feature points.
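The refinement step amounts to refitting each connecting line from nearby edge pixels and intersecting the refined lines; in the sketch below, the source of the candidate edge pixels (e.g., a Canny edge map) and the 3-pixel band are our assumptions.

```python
import numpy as np
import cv2

def refit_line(edge_pts, q1, q2, band=3.0):
    """Refit the line through rough corners q1, q2 using nearby edge pixels.

    edge_pts: (N, 2) pixel coordinates of candidate edge points; band: distance
    threshold in pixels. Returns the refined line as (point, direction)."""
    d = (q2 - q1) / np.linalg.norm(q2 - q1)
    n = np.array([-d[1], d[0]])                     # unit normal of l_q1q2
    dist = np.abs((edge_pts - q1) @ n)              # point-to-line distances
    near = edge_pts[dist < band].astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(near, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
    return np.array([x0, y0]), np.array([vx, vy])

def intersect(p1, d1, p2, d2):
    """Sub-pixel intersection of two lines given in point-direction form."""
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1
```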

3.5. Optimization Equation Modelling

In this section, an optimization equation is established to find the best intrinsic parameters of the camera and the extrinsic parameters between the LiDAR and the camera. The optimization seeks the LiDAR–camera extrinsic parameters and camera intrinsic parameters that minimize the re-projection error between the 3D and 2D feature points. Through the methods of Section 3.1, Section 3.2, Section 3.3 and Section 3.4, we obtain the refined 3D and 2D feature points. We use $P_i(p_{i,1}^L, p_{i,2}^L, \ldots, p_{i,20}^L)$ and $Q_i(q_{i,1}, q_{i,2}, \ldots, q_{i,20})$ to denote the 3D and 2D feature point sets measured at the $i$-th position, $i \in (1, n)$, where $n$ is the number of test positions. The projection from 3D points to 2D points involves a rigid transformation model, a pinhole imaging model, and a distortion model. The detailed formulas are as follows:

$$\begin{bmatrix} x^C \\ y^C \\ z^C \end{bmatrix} = \begin{bmatrix} R^* \mid T^* \end{bmatrix} \begin{bmatrix} x^L \\ y^L \\ z^L \\ 1 \end{bmatrix} \qquad (5)$$

$$\begin{bmatrix} q_u \\ q_v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_x \\ 0 & f_y & v_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x'^C \\ y'^C \\ 1 \end{bmatrix} \qquad (6)$$

where $x'^C = x^C / z^C$ and $y'^C = y^C / z^C$; $R^*$ denotes the best rotation matrix, $T^*$ the best translation vector, and $K^* = \{f_x, f_y, u_x, v_y\}$ the best camera intrinsic parameters; $q_u$ and $q_v$ are the coordinates of a 2D point in the image. We now consider the distortion model of the camera, denoted by $D = \{k_1, k_2\}$. The final imaging of the camera is as follows:

$$r^2 = x'^C x'^C + y'^C y'^C \qquad (7)$$

$$x''^C = x'^C (1 + k_1 r^2 + k_2 r^4), \qquad y''^C = y'^C (1 + k_1 r^2 + k_2 r^4) \qquad (8)$$

$$\begin{bmatrix} q_u \\ q_v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_x \\ 0 & f_y & v_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x''^C \\ y''^C \\ 1 \end{bmatrix} \qquad (9)$$
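For reference, Equations (5)–(9) can be transcribed directly into a small Python function; the packing of K and D into tuples is our own choice, not notation from the paper.

```python
import numpy as np

def project_point(p_lidar, R, T, K, D):
    """Apply Equations (5)-(9): rigid transform, normalization, radial
    distortion (k1, k2), and the pinhole model; returns pixel coordinates."""
    x_c, y_c, z_c = R @ p_lidar + T                 # Equation (5)
    xp, yp = x_c / z_c, y_c / z_c                   # normalized image coordinates
    r2 = xp * xp + yp * yp                          # Equation (7)
    k1, k2 = D
    factor = 1.0 + k1 * r2 + k2 * r2 * r2           # Equation (8)
    xd, yd = xp * factor, yp * factor
    fx, fy, ux, vy = K
    return np.array([fx * xd + ux, fy * yd + vy])   # Equations (6)/(9)
```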
According to Equations (5)–(9), we establish the following optimization equation to solve the best $R^*$, $T^*$, $K^*$, and the best distortion parameters $D^*$:

$$[R^*, T^*, K^*, D^*] = \arg\min_{R, T, K, D} \sum_{i=1}^{n} \sum_{j=1}^{20} HuberLoss\big\{ E_{i,j}\big[\, q_{i,j},\ K_D(R\, p_{i,j}^L + T) \,\big] \big\} \qquad (10)$$

where the form of $E_{i,j}(\cdot)$ is:

$$E_{i,j}(q, q') = (q_u - q_u')^2 + (q_v - q_v')^2 \qquad (11)$$

and $HuberLoss(\cdot)$ is:

$$HuberLoss(\delta) = \begin{cases} \delta, & \delta \le 1 \\ 2\sqrt{\delta} - 1, & \delta > 1 \end{cases} \qquad (12)$$
Therefore, our optimization problem can be transformed into the minimization of $f(Q, P)$ below. The initial values of the camera intrinsic parameters are given in advance; they can come from a coarse calibration of the camera or from the values provided by the camera manufacturer. We use the Ceres optimizer (http://ceres-solver.org/, accessed on 1 December 2022) to optimize the intrinsic and extrinsic parameters, setting the maximum number of iterations to 500 and the stopping tolerance to $10^{-10}$.
$$f(Q, P) = \sum_{\substack{P_i \in P,\ Q_i \in Q \\ i = 1}}^{n} \ \sum_{\substack{p_{i,j}^L \in P_i,\ q_{i,j} \in Q_i \\ j = 1}}^{20} HuberLoss\big\{ E_{i,j}\big[\, q_{i,j},\ K_D(R\, p_{i,j}^L + T) \,\big] \big\} \qquad (13)$$
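The paper implements this optimization with the Ceres solver in C++; the sketch below reproduces the same residual structure in Python with scipy.optimize.least_squares, whose built-in 'huber' loss with f_scale = 1 applies exactly the function in Equation (12) to the squared residuals. It returns per-coordinate residuals rather than Equation (11) directly (a standard equivalent formulation), parameterizes the rotation as a Rodrigues vector, and uses parameter packing and names of our own choosing.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, pts_3d, pts_2d):
    """Stacked re-projection residuals for all 3D-2D feature point pairs.
    params = [rx, ry, rz, tx, ty, tz, fx, fy, ux, vy, k1, k2]."""
    rvec, T = params[:3], params[3:6]
    fx, fy, ux, vy, k1, k2 = params[6:]
    R = Rotation.from_rotvec(rvec).as_matrix()

    pc = pts_3d @ R.T + T                           # LiDAR -> camera frame
    xp, yp = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]
    r2 = xp ** 2 + yp ** 2
    dist = 1.0 + k1 * r2 + k2 * r2 ** 2             # radial distortion factor
    u = fx * xp * dist + ux
    v = fy * yp * dist + vy
    return np.concatenate([u - pts_2d[:, 0], v - pts_2d[:, 1]])


def calibrate(pts_3d, pts_2d, x0):
    """Jointly refine extrinsics and intrinsics; x0 is the initial guess
    (e.g., PnP extrinsics plus the manufacturer's intrinsic parameters)."""
    res = least_squares(residuals, x0, args=(pts_3d, pts_2d),
                        loss='huber', f_scale=1.0,   # Huber threshold of 1 pixel
                        max_nfev=500, xtol=1e-10)
    return res.x
```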

4. Experiments and Discussions

4.1. Proposed Experiment System

We set up the system with a 64-line LeiShen LiDAR and a DaHeng camera with a resolution of 1920 × 1200, shown in Figure 8a; the fields of view (FOV) of the two sensors are shown in Figure 8b. The size of the calibration target is given in Figure 2b. The maximum detection range of the LiDAR is 100.0 m and its wavelength is 905 nm. The rotation speed of our LiDAR can be set to 300.0 rpm (i.e., 5.0 Hz), 600.0 rpm (i.e., 10.0 Hz), or 1200.0 rpm (i.e., 20.0 Hz), with corresponding θ of 0.1°, 0.2°, and 0.4°, respectively. We simulate LiDAR with different θs by switching the rotation speed of the LiDAR motor. Since we mainly analyze the influence of θ on calibration, all other variables are fixed; in the controlled experiments of this section, only θ differs.
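As a quick consistency check (our arithmetic, not a datasheet value), the three speed–resolution pairs imply a fixed per-laser firing rate:

$$\theta = \frac{360^{\circ} \times f_{rot}}{f_{fire}}, \qquad f_{fire} = \frac{360^{\circ} \times 5.0\ \mathrm{Hz}}{0.1^{\circ}} = \frac{360^{\circ} \times 20.0\ \mathrm{Hz}}{0.4^{\circ}} = 18{,}000\ \mathrm{points\ per\ second},$$

so doubling the rotation speed doubles θ and halves the number of points per ring, which is what allows the rotation speed to emulate different horizontal angle resolutions.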

4.2. Experiments

  • Experiment 1: 3D feature points detected
This section verifies the accuracy of the approximated edge points. We take the measured size of the calibration board as the evaluation index and compare it for the original edge points and the approximated edge points. Table 2 lists the measurement results under the three θs when the test distance is 7.0, 10.0, and 12.0 m. Without any processing, due to the influence of θ, the side length of the quadrilateral was lower than the actual value (1000.0 mm × 1000.0 mm), which also means that the selection of edge points was not accurate. The size of the bounding box (BB) was much larger or much smaller than the actual size, which indicates the instability of the initial edge points. After the approximation processing, when scale = 0.4, the measurement error was within 5.0 mm. Compared with the BB results, our method estimates the edge points more accurately, and the approximation processing brings the measurement close to the true size. In Figure 9, the edge estimation results at different θs are clearly better than the initial values. In the following experiments, we apply this approximation refining strategy to [18,19,22]; their final calibration results are also greatly improved.
  • Experiment 2: Automatic locating results
The experiment in this section verifies the effect of automatically locating the calibration board. After clustering one frame of data, nine objects are obtained, shown in Figure 10 and Table 3. Objects with many points, shown in Figure 10a,b, lead to a reduction of the overall final $Score$ due to the reduction of $Score_A$. For objects with a small number of points, the final $Score$ decreases due to the decrease of $Score_B$ and $Score_C$, shown in Figure 10d–i. Only our target board, whose $Score_A$, $Score_B$, and $Score_C$ all stay close to 1.0, has the highest final $Score$. Our locating index is therefore not affected by the number and shape of the point clouds, and for this reason the Euclidean clustering does not need a user-specified minimum number of points per cluster; in this experiment we simply use a fixed minimum cluster size of 30. Therefore, our method does not require the user to set additional parameters. The distance parameter of the clustering does need to be specified. The distance between the LiDAR and the calibration board is usually 5.0–10.0 m; within this range, the spacing between adjacent points is not sensitive to the distance between the calibration board and the LiDAR origin. We set the clustering distance to 30.0 cm, which requires that no other objects are within 30.0 cm of our calibration board during the calibration process.
  • Experiment 3: Re-projection error analyses
This section evaluates the calibration accuracy. We compare the proposed method with other comparable methods; for each method, we select at least 100 pairs of feature points for the re-projection error test. Figure 11a–r shows the error distribution diagrams of three methods, and the error circles show the error distributions of the different methods more clearly. Figure 12a–o shows the experimental results for different numbers of frames, and Table 4 gives a qualitative and quantitative comparison of the results.
After testing a large number of feature points, our proposed method has a maximum error of no more than 2.0 pixels at each angle resolution, shown in Figure 11a–c. The method of [18], which used half of the average distance between adjacent scanning points as the new position of an edge point, was greatly affected by θ; this can also be seen from the error circles in Figure 11d–f, so it is not suitable for LiDAR with large θ. The re-projection error analysis in Table 4 shows that its projection error is about five times ours. On the one hand, this is related to the poor estimation of edge points, and on the other hand, to the linear equation solution that does not consider the camera distortion. When we apply the approximation edge point fitting method to solve the corner points and solve the calibration parameters with the same linear equation, the re-projection error is significantly reduced, shown in Table 4. Ref. [22] used a two-stage calibration method, and the deviation of the camera intrinsic parameter calibration affected the final calibration result; in addition, the feature point pairs of [22] are manually selected, which is extremely inconvenient for large-scale applications. Ref. [19] obtained the point clouds at different positions by distance statistical filtering, which to a certain extent reduces the ranging error of the LiDAR in the calibration system; however, the amount of data processed is heavy (at least 300 frames). Compared with these, our method, which uses only one frame of data, is clearly better. Similarly, we use our edge refinement method to replace the corner point extraction steps in [19,22] and repeat the re-projection error experiments; Table 4 shows that their accuracy is improved. Ref. [22] ignored the influence of θ on edge estimation, whilst Ref. [19] only considered the accuracy of edge estimation from one side. Our method considers the four sides of the calibration board at the same time and takes their actual sizes as the reference index for fine processing, which yields higher accuracy. The comparison with [18,19,22] shows that improving the accuracy of the corner points effectively improves the accuracy of the calibration, and the edge refinement method proposed in this paper helps to improve the corner point accuracy. The work of [20] required two chessboards and one auxiliary calibration object to calibrate the extrinsic parameters of the camera–LiDAR system, which is not convenient. Ref. [24] had high accuracy, and the 3D feature points obtained matched the actual physical size; however, when applied to LiDAR with large θ, it is not as stable as our method, because when θ is large, the distance between the detected edge points of the calibration board and the real edge is often large, which affects the matching accuracy. Finally, we give a qualitative evaluation of the various methods in Table 4. Figure 11a–c clearly shows that with our method the re-projection error does not change significantly as θ increases, and the overall error remains within 2.0 pixels (red circles). In general, our method is superior to the other methods in stability and level of automation.
Figure 12a–o analyzes the re-projection results under different numbers of frames. The other methods need at least 3–5 frames of data to converge, and as θ increases, the error at convergence also increases. The proposed method has a low re-projection error from the first frame, and the error changes little as the number of frames increases. By increasing the number of calibration board positions, more data can be obtained; we carry out calibration experiments with 2–10 positions, shown in Table 5. With the increase of data volume, the mean and variance of the re-projection error decrease, which also shows the robustness of the proposed method. Our calibration board provides 20 pairs of feature points per position, which is more than the data volume of the other methods, making the results more convincing.
  • Experiment 4: Parameter consistency check
The calibration of LiDAR and camera has no ground truth, so the calibration quality is measured not only by the re-projection error but also by data consistency. We collect data from a total of 100 positions, randomly select data from 10 positions for a calibration experiment, repeat the random selection 100 times, and record the intrinsic parameters $\{f_x, f_y, u_x, v_y, k_1, k_2\}$ of the camera and the extrinsic parameters $\{R\{r_x, r_y, r_z\}, T\{t_x, t_y, t_z\}\}$. The mean and standard deviation are shown in Table 6; the variation of the intrinsic parameters is less than 1.0%, the variation of the rotation is about 0.1°, and the variation of the translation is about 4.0 mm, which indicates that the calibration parameters of our method are consistent.
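The consistency experiment can be summarized by the following sketch, which repeats any joint calibration routine (such as the least-squares sketch in Section 3.5) on random 10-position subsets and reports the per-parameter spread; the data layout and function names are ours.

```python
import numpy as np

def consistency_check(feature_pts_3d, feature_pts_2d, calibrate_fn, x0,
                      n_trials=100, n_pick=10, seed=0):
    """Monte-Carlo consistency check mirroring Experiment 4 (a sketch).

    feature_pts_3d / feature_pts_2d: lists with one (20, 3) / (20, 2) array per
    board position; calibrate_fn(pts_3d, pts_2d, x0) is any joint calibration
    routine. Returns the per-parameter mean and standard deviation."""
    rng = np.random.default_rng(seed)
    runs = []
    for _ in range(n_trials):
        idx = rng.choice(len(feature_pts_3d), size=n_pick, replace=False)
        pts_3d = np.vstack([feature_pts_3d[i] for i in idx])
        pts_2d = np.vstack([feature_pts_2d[i] for i in idx])
        runs.append(calibrate_fn(pts_3d, pts_2d, x0))
    runs = np.asarray(runs)
    return runs.mean(axis=0), runs.std(axis=0)       # e.g., Table 6 statistics
```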
  • Experiment 5: Qualitative analysis of calibration accuracy
The following experiment is a qualitative assessment of the calibration effect. To verify the detection effect of UAVs under low-altitude remote sensing, we select different reference objects 3.0–100.0 m away from the LiDAR for qualitative evaluation. We project the point clouds onto the image plane and color them according to distance. Figure 13 shows the projection in an indoor environment, and Figure 14, Figure 15 and Figure 16 show projections in outdoor environments. The farthest projection distance in Figure 14, Figure 15 and Figure 16 is about 100.0 m, and our method shows no offset over this distance. The buildings and vehicles also show that the proposed method adds accurate depth information for nearby objects. The 3D reconstruction result in Figure 17 further shows that the proposed calibration method is helpful for advanced applications of the LiDAR–camera system.

4.3. Discussions

Low-altitude UAVs have been used in many remote sensing fields [37] in recent years, such as urban management, crop monitoring, and industrial patrol inspection. It is essential to apply multi-sensor fusion technology to low-altitude UAVs. Calibration is the basic requirement for multi-sensor platforms, where data need to be represented in a common coordinate system for the purpose of analysis and information fusion [38]. The target-based calibration method has become the first choice among calibration methods because of its stability and high accuracy. This paper designs a special calibration target and its corresponding calibration algorithm, which achieve high precision and high calibration efficiency. While considering the calibration accuracy, we also consider the implementation efficiency of the method; the fully automatic data processing flow and the joint optimization scheme make it suitable for large-scale commercial applications.
At present, few methods consider the effect of LiDAR with different θs on calibration. Experiment 3 shows that LiDAR with different θs yields significantly different calibration results. The edge refinement based on background point clouds proposed in this paper addresses the insufficient calibration accuracy caused by a large θ. With the proposed method, the calibration accuracy and robustness are improved, as analyzed in Table 4, and the required data volume is reduced, as shown in Figure 12 and Table 5. In addition, the method focuses on large-scale application, so every step is designed to minimize manually set parameters and thus improve the efficiency of the calibration. The average processing time for one frame of data, including point cloud and image, is 7.5 s; this is higher time complexity than traditional methods and is one aspect that can be optimized in the future. However, we focus more on the improvement of calibration accuracy and automation level: higher accuracy and less human participation are our final goals. Regarding the optimization function, the initial extrinsic parameters need to be given in advance, which can easily be obtained by PnP [39,40].
Here, we discuss the advantages of the calibration board we designed. We designed a special hollow monochromatic flat board, which has the following advantages: (1) the monochromatic flat board avoids the interference of different colors on LiDAR ranging, which benefits plane fitting and plane normal vector extraction; (2) the hollow calibration object makes it convenient to automatically locate its position in the point clouds using geometric information. Regarding these two advantages, we conduct the following comparative experiment. We paste a small checkerboard on the calibration object and find that depth discontinuities appear where the LiDAR scans it, as shown in Figure 18. This phenomenon is related to the intensity response of the LiDAR itself: when the LiDAR scans the chessboard, the black and white areas make the measured depth uneven [18,19], while the monochromatic calibration board yields a flat plane. Our method does not depend on intensity information [41] or additional sensors [42] and only uses geometric information to locate the calibration board; thus, it can be used with any type of LiDAR device and has better universality.
The advantages of the method described in this paper are as follows: (1) the refined edge points enable LiDAR with a large horizontal angle resolution (θ = 0.4°) to complete the calibration with high accuracy; (2) the two-step corner point detection method realizes fully automatic data processing, greatly reduces manual participation, and does not need any parameters to be set manually during the calibration process; (3) our method does not require a large amount of calibration data: for 5–10 positions, one frame of data per position is sufficient to complete the calibration with high accuracy. This is also related to our 2D and 3D feature point refinement, because high-accuracy feature points reduce the amount of data involved in the optimization. The experimental scheme in this paper also has some shortcomings. For example, our research is based on a 64-line LiDAR and has not been verified on LiDAR with fewer lines (32 lines or less) [43,44], and the vertical angle resolution needs to be further considered, which requires further research and improvement. In future work, we will apply the results of the joint calibration of LiDAR and visible light camera to low-altitude UAV remote sensing, ground object recognition, depth information completion, and 3D terrain reconstruction. Our application scenario is to use a UAV with a visible light camera and LiDAR to conduct high-precision 3D mapping over a small area; the joint calibration of these sensors is the preparatory work for this application. In this paper, the calibration is completed in an indoor environment, as shown in Figure 2a. For the verification in an outdoor environment, in order to show the projection effect at different distances, we chose scenes with very rich features, with the nearest distance of 3.0 m and the farthest distance of 100.0 m, as shown in Figure 14, Figure 15 and Figure 16. The final results show that the projection of our method at different distances is accurate. As for the limitations at the application level, the efficiency of the algorithm is lower than that of a pure image algorithm, which is related to the increase of 3D data. In addition, we focus on ground object detection within 100.0 m at low altitude and do not consider objects farther than 100.0 m, which is related to the inherent properties of our LiDAR.

5. Conclusions

In this paper, we propose a novel, fully automatic joint calibration method for LiDAR and visible light camera. Compared with existing methods, we propose an approximate edge fitting technique that brings a definite accuracy improvement. Our method does not need the intensity information of the LiDAR, which gives it better universality and allows it to be applied when the LiDAR's horizontal resolution is sparse or its intensity information is poor. For the automation, we propose a strategy of first locating and then refining, which fully realizes 2D and 3D corner point detection; the user does not need to intervene manually during calibration, which greatly increases the calibration efficiency. To demonstrate the effect, we project the 3D point clouds onto the image plane and show the calibration results at different test distances; the 2D image with depth information can be used for advanced visual applications. We also color the 3D point clouds according to the calibrated parameters, which is useful for 3D reconstruction.

Author Contributions

Conceptualization, C.C., J.L., H.L., S.C. and X.W.; data curation, C.C.; formal analysis, C.C., J.L., H.L. and X.W.; funding acquisition, J.L. and H.L.; investigation, J.L., H.L., S.C. and X.W.; methodology, C.C., H.L. and X.W.; project administration, J.L., H.L. and S.C.; resources, J.L., H.L. and S.C.; software, C.C., H.L., S.C. and X.W.; supervision, H.L.; validation, C.C., H.L. and S.C.; visualization, C.C., H.L. and X.W.; writing—original draft, C.C. and H.L.; writing—review and editing, C.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Scientific and Technological Innovation Foundation of Foshan, USTB under Grant BK20AF007, the National Natural Science Foundation of China under Grant 61975011, the Fund of State Key Laboratory of Intense Pulsed Radiation Simulation and Effect under Grant SKLIPR2024, and the Fundamental Research Fund for the China Central Universities of USTB under Grant FRF-BD-19-002A.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goian, A.; Ashour, R.; Ahmad, U.; Taha, T.; Almoosa, N.; Seneviratne, L. Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure. Remote Sens. 2019, 11, 2704.
  2. Hong, Z.; Zhong, H.; Pan, H.; Liu, J.; Zhou, R.; Zhang, Y.; Han, Y.; Wang, J.; Yang, S.; Zhong, C. Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images. Sensors 2022, 22, 5920.
  3. Raman, M.; Carlos, E.; Sankaran, S. Optimization and Evaluation of Sensor Angles for Precise Assessment of Architectural Traits in Peach Trees. Sensors 2022, 22, 4619.
  4. Zhu, W.; Sun, Z.; Peng, J.; Huang, Y.; Li, J.; Zhang, J.; Yang, B.; Liao, X. Estimating Maize Above-Ground Biomass Using 3D Point Clouds of Multi-Source Unmanned Aerial Vehicle Data at Multi-Spatial Scales. Remote Sens. 2019, 11, 2678.
  5. Wang, D.; Xing, S.; He, Y.; Yu, J.; Xu, Q.; Li, P. Evaluation of a New Lightweight UAV-Borne Topo-Bathymetric LiDAR for Shallow Water Bathymetry and Object Detection. Sensors 2022, 22, 1379.
  6. Chen, S.; Nian, Y.; He, Z.; Che, M. Measuring the Tree Height of Picea Crassifolia in Alpine Mountain Forests in Northwest China Based on UAV-LiDAR. Forests 2022, 13, 1163.
  7. Song, J.; Qian, J.; Li, Y.; Liu, Z.; Chen, Y.; Chen, J. Automatic Extraction of Power Lines from Aerial Images of Unmanned Aerial Vehicles. Sensors 2022, 22, 6431.
  8. Zhang, Z. Camera Calibration with One-Dimensional Objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899.
  9. Wu, F.; Hu, Z.; Zhu, H. Camera Calibration with Moving One-Dimensional Objects. Pattern Recognit. 2005, 38, 755–765.
  10. Bai, Z.; Jiang, G.; Xu, A. LiDAR-Camera Calibration Using Line Correspondences. Sensors 2020, 20, 6319.
  11. Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic Camera and Range Sensor Calibration Using a Single Shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
  12. Cai, H.; Pang, W.; Chen, X.; Wang, Y.; Liang, H. A Novel Calibration Board and Experiments for 3D LiDAR and Camera Calibration. Sensors 2020, 20, 1130.
  13. Zhou, L.; Li, Z.; Kaess, M. Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569.
  14. Guindel, C.; Beltrán, J.; Martin, D.; Garcia, F. Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
  15. Gong, X.; Lin, Y.; Liu, J. 3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron. Sensors 2013, 13, 1902–1918.
  16. Pusztai, Z.; Hajder, L. Accurate Calibration of LiDAR-Camera Systems Using Ordinary Boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 394–402.
  17. Kümmerle, J.; Kühner, T. Unified Intrinsic and Extrinsic Camera and LiDAR Calibration under Uncertainties. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6028–6034.
  18. Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board. Sensors 2014, 14, 5333–5353.
  19. Xu, X.; Zhang, L.; Yang, J.; Liu, C.; Xiong, Y.; Luo, M.; Tan, Z.; Liu, B. LiDAR–Camera Calibration Method Based on Ranging Statistical Characteristics and Improved RANSAC Algorithm. Robot. Auton. Syst. 2021, 141, 103776–103789.
  20. An, P.; Ma, T.; Yu, K.; Fang, B.; Zhang, J.; Fu, W.; Ma, J. Geometric Calibration for LiDAR-Camera System Fusing 3D-2D and 3D-3D Point Correspondences. Opt. Express 2020, 28, 2122–2141.
  21. Ye, Q.; Shu, L.; Zhang, W. Extrinsic Calibration of a Monocular Camera and a Single Line Scanning LiDAR. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 1047–1054.
  22. Liao, Q.; Chen, Z.; Liu, Y.; Wang, Z.; Liu, M. Extrinsic Calibration of Lidar and Camera with Polygon. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 200–205.
  23. Yao, Y.; Huang, X.; Lv, J. A Space Joint Calibration Method for LiDAR and Camera on Self-Driving Car and Its Experimental Verification. In Proceedings of the 6th International Symposium on Computer and Information Processing Technology (ISCIPT), Changsha, China, 11–13 June 2021; pp. 388–394.
  24. Huang, J.; Grizzle, J. Improvements to Target-Based 3D LiDAR to Camera Calibration. IEEE Access 2020, 8, 134101–134110.
  25. Huang, J.; Wang, S.; Ghaffari, M.; Grizzle, J. LiDARTag: A Real-Time Fiducial Tag System for Point Clouds. IEEE Robot. Autom. Lett. 2021, 6, 4875–4882.
  26. Wang, W.; Sakurada, K.; Kawaguchi, N. Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard. Remote Sens. 2017, 9, 851.
  27. Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic Extrinsic Calibration of Vision and LiDAR by Maximizing Mutual Information. J. Field Robot. 2014, 32, 696–722.
  28. Taylor, Z.; Nieto, J. A Mutual Information Approach to Automatic Calibration of Camera and LiDAR in Natural Environments. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Victoria University of Wellington, Wellington, New Zealand, 3 December 2012; pp. 3–5.
  29. Taylor, Z.; Nieto, J. Motion-Based Calibration of Multimodal Sensor Extrinsics and Timing Offset Estimation. IEEE Trans. Robot. 2016, 32, 1215–1229.
  30. Taylor, Z.; Nieto, J. Motion-Based Calibration of Multimodal Sensor Arrays. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4843–4850.
  31. Schneider, N.; Piewak, F.; Stiller, C.; Franke, U. RegNet: Multimodal Sensor Registration Using Deep Neural Networks. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1803–1810.
  32. Iyer, G.; Ram, R.; Murthy, J.; Krishna, K. CalibNet: Geometrically Supervised Extrinsic Calibration Using 3D Spatial Transformer Networks. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1110–1117.
  33. Yuan, K.; Guo, Z.; Wang, Z. RGGNet: Tolerance Aware LiDAR-Camera Online Calibration with Geometric Deep Learning and Generative Model. IEEE Robot. Autom. Lett. 2020, 5, 6956–6963.
  34. Lv, X.; Wang, B.; Dou, Z.; Ye, D.; Wang, S. LCCNet: LiDAR and Camera Self-Calibration Using Cost Volume Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 2888–2895.
  35. Rotter, P.; Klemiato, M.; Skruch, P. Automatic Calibration of a LiDAR–Camera System Based on Instance Segmentation. Remote Sens. 2022, 14, 2531.
  36. Lim, H.; Myung, H. Patchwork: Concentric Zone-Based Region-Wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor. IEEE Robot. Autom. Lett. 2021, 6, 6458–6465.
  37. Giyenko, A.; Cho, Y. Intelligent UAV in Smart Cities Using IoT. In Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS), Gyeongju, Korea, 16–19 October 2016; pp. 207–210.
  38. Unnikrishnan, R.; Hebert, M. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera; Technical Report CMU-RI-TR-05-09; Robotics Institute: Pittsburgh, PA, USA, 2005.
  39. Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vis. 2009, 81, 155–166.
  40. Kneip, L.; Li, H.; Seo, Y. UPnP: An Optimal O(n) Solution to the Absolute Pose Problem with Universal Applicability. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Springer: Berlin, Germany, 2014; pp. 127–142.
  41. Grammatikopoulos, L.; Papanagnou, A.; Venianakis, A.; Kalisperakis, I.; Stentoumis, C. An Effective Camera-to-LiDAR Spatiotemporal Calibration Based on a Simple Calibration Target. Sensors 2022, 22, 5576.
  42. Núñez, P.; Drews, P., Jr.; Rocha, R.; Dias, J. Data Fusion Calibration for a 3D Laser Range Finder and a Camera Using Inertial Data. In Proceedings of the European Conference on Mobile Robots (ECMR), Dubrovnik, Croatia, 23–25 September 2009; pp. 31–36.
  43. Kim, E.; Park, S. Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes. Sensors 2020, 20, 52.
  44. Chai, Z.; Sun, Y.; Xiang, Z. A Novel Method for LiDAR Camera Calibration by Plane Fitting. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Auckland, New Zealand, 9–12 July 2018; pp. 286–291.
Figure 1. The representative methods of calibration between the LiDAR and visible light camera.
Figure 2. Calibration board and flow chart of proposed method, (a) is the calibration board, (b) is the geometric dimension drawing (unit: mm), (c) is the location of calibration board, red points represent the feature points we need, and (d) is the diagrammatic sketch.
Figure 3. Processing procedure for automatically locating the calibration object. (a) is the point clouds including the ground, (b) is the result of clustering, (c) is a schematic diagram of the 3D-to-2D conversion, (d) is an auxiliary diagram for locating, (e) is the point clouds with the highest matching score, (f) is the point clouds after removing the connecting rod, and (g) shows the point clouds of the calibration board (red points) and the background point clouds (green and blue points).
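Figure 3b relies on a clustering step that splits the ground-removed point cloud into candidate objects before the calibration board is identified. As a minimal illustration of Euclidean-style clustering (not the authors' implementation), the sketch below uses DBSCAN from scikit-learn; the eps and min_samples values are placeholder assumptions and would need tuning to the LiDAR's point density.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_point_cloud(points_xyz, eps=0.3, min_samples=10):
    """Group an (N, 3) point cloud into candidate objects.

    DBSCAN with a Euclidean metric approximates the Euclidean clustering
    used to separate candidate objects; eps/min_samples are placeholders.
    """
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xyz)
    clusters = [points_xyz[labels == k] for k in np.unique(labels) if k != -1]
    # Sort clusters by point count in descending order, as in Figure 10.
    return sorted(clusters, key=len, reverse=True)
```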
Figure 4. Point cloud distribution maps with different θs. (a) is the top view of the LiDAR, (b) shows the point clouds for three different θs, and (c) is a partial view (black rectangle) of (b). The red points in (d) correspond to θ = 0.1°, the green points in (e) to θ = 0.2°, and the blue points in (f) to θ = 0.4°.
Figure 5. Illustration of edge point extraction. (a) is a partial view of (b), and (c) is a top view. Yellow points indicate the scanning points of the LiDAR, red points indicate the edge points on the board, green points indicate the points projected onto the board plane, and blue points indicate the points improved by the proposed method. The black dotted lines are the scanning lines of the LiDAR, the red line is fitted from the red points, and the blue line is fitted from the blue points.
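The green points in Figure 5 are raw edge points projected onto the fitted board plane before the boundary lines are fitted. A minimal NumPy sketch of that geometric step is given below (least-squares plane fit via SVD followed by orthogonal projection); it only illustrates the idea and is not the paper's exact edge-refinement procedure.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_onto_plane(points, centroid, normal):
    """Orthogonally project points onto the plane (the green points in Figure 5)."""
    distances = (points - centroid) @ normal
    return points - np.outer(distances, normal)
```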
Figure 6. Schematic diagram of virtual mask matching. Red points represent the 3D feature points we need, green points represent the refined edge points, and blue points are generated for matching.
Figure 7. Automatic extraction process of 2D feature points. (a) shows the original image and the seed points in each cell, (b) presents the rectangle detected by the RGA from one seed point, and (c) is a local view of (b), where it can be clearly seen that multiple rectangles are detected at the same location. Green points in (d) illustrate the refined corner points and red points illustrate the roughly detected corner points; (e,f) are partial views of (d). (g) shows the detected and sorted corner points, where the points in the yellow ellipse are used to fit l_{q1q2}, and (h) shows the final corner points colored in the original image.
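The paper detects the hollow rectangles with a region-growing algorithm (RGA) started from seed points, as described in the Figure 7 caption. Purely as an illustrative alternative (not the authors' RGA), the sketch below finds 4-vertex contours with OpenCV and refines their corners to sub-pixel accuracy, which corresponds to the rough (red) and refined (green) corner points in Figure 7d; the threshold and window sizes are assumptions.

```python
import cv2
import numpy as np

def detect_rectangles(gray, eps_ratio=0.02, min_area=500.0):
    """Return candidate 4-corner rectangles in a grayscale image (illustrative only)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    rectangles = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        approx = cv2.approxPolyDP(contour, eps_ratio * cv2.arcLength(contour, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            rectangles.append(approx.reshape(4, 2).astype(np.float32))
    return rectangles

def refine_corners(gray, corners):
    """Sub-pixel refinement of rough corners (green points in Figure 7d)."""
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    pts = corners.reshape(-1, 1, 2).astype(np.float32)
    return cv2.cornerSubPix(gray, pts, (5, 5), (-1, -1), criteria).reshape(-1, 2)
```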
Figure 8. Setup of the proposed method. (a) is the coordinate system definition, and (b) is the common FOV of the two sensors.
Figure 9. Refined edge point visualization results. Red points are the points scanned by the LiDAR; green points are the refined edge points. The blue lines are boundary lines with the same dimensions as the calibration object. (a) shows the refined points when θ = 0.1°, (b) is the result when θ = 0.2°, and (c) is the result when θ = 0.4°.
Figure 10. Results of Euclidean clustering when θ = 0.1°. (a–i) are the point clouds of different clusters, sorted in descending order by the number of points.
Figure 11. Error circles of different methods; u_error and v_error are the projection errors in the horizontal and vertical directions, respectively. (a–c) show the proposed method for θ = 0.1°, 0.2°, and 0.4°, respectively, with one frame of data selected for each position; (d–f) show the method in [18], (g–i) the method in [22], (j–l) the method in [19], (m–o) the method in [20], and (p–r) the method in [24], each for θ = 0.1°, 0.2°, and 0.4°, respectively.
Figure 12. Re-projection error of different numbers of frames. The symbols and vertical lines represent the mean and 3σ range of re-projection errors calculated by different methods. (ac) are the results compared with [18], (df) are compared with [22], (gi) are compared with [19], (jl) are compared with [20], (mo) are compared with [24].
Figure 13. Results of the indoor projection test. (a) is the projection with the initial parameters, i.e., without calibration; (b–d) are the projections for θ equal to 0.1°, 0.2°, and 0.4°, respectively. The contents in the yellow rectangles provide detailed reference.
Figure 14. Results of outdoor scene test, θ = 0.1°. The contents in yellow rectangles in (ad) are projections at different distances.
Figure 15. Results of outdoor scene test, θ = 0.2°. The contents in yellow rectangles in (ad) are projections at different distances.
Figure 16. Results of outdoor scene test, θ = 0.4°. The contents in yellow rectangles in (ad) are projections at different distances.
Figure 17. Results of the 3D reconstruction test. The test distances of (a–c) are 3.0 m, 5.0 m, and 7.0 m, respectively; (d) is the test result for the outdoor scene. The red points are points in the non-common FOV of the LiDAR and camera. The contents in the yellow rectangles provide detailed reference.
Figure 18. Influence of different colors on LiDAR ranging. (a) is the calibration board with one chessboard pattern, (b) is the affected ranging map, (c) is the ranging map with no chessboard pattern.
Table 1. Symbols and functions definitions.
Symbols and Functions | Description
p_i^L(x_i^L, y_i^L, z_i^L) | a 3D point in the LiDAR coordinate system
p_i^C(x_i^C, y_i^C, z_i^C) | a 3D point in the camera coordinate system
q_i(u_i, v_i) | a 2D point in the image coordinate system
P / Q | a 3D/2D point set
PC_i | a point cloud belonging to a same class of objects
V_i | a 3D space vector
scale | a scale factor about V_i
ss | the step size of scale
th | the expanded threshold of edges
ring_i | a scanning line of the LiDAR
θ / θs | θ is the horizontal angle resolution; θs is the plural form of θ
R(r_x, r_y, r_z) | a rotation matrix around the X, Y, and Z axes
T(t_x, t_y, t_z) | a translation matrix in the direction of the X, Y, and Z axes
u_x / u_y | the coordinates of the image principal point
f_x / f_y | the scale factors in the image x and y axes
k_1, k_2 | the distortion coefficients of the image
Box | a bounding box in the 2D plane
Score(PC_i) | the matching score between PC_i and the preset point cloud
N(PC_i) | the number of points in PC_i
S(Box(PC_i)) | the area of the Box enclosing PC_i
c | the number of cells for image segmentation
ε | the gray threshold of the image
E(q, q′) | the error function of q and q′
HuberLoss(E(q, q′)) | the loss function of E(q, q′)
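Table 1 lists an error function E(q, q′) between corresponding 2D points and a Huber loss applied to it. A possible way to pose such a robust joint refinement over the intrinsic and extrinsic parameters is sketched below with SciPy's built-in 'huber' loss; the Euler-angle parameterization and the two radial distortion terms follow the quantities reported in Table 6, but the optimizer settings and rotation convention are assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts_lidar):
    """Pinhole projection of LiDAR points with two radial distortion terms."""
    fx, fy, ux, uy, k1, k2, rx, ry, rz, tx, ty, tz = params
    # The "xyz" Euler order is an illustrative assumption.
    R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix()
    pc = pts_lidar @ R.T + np.array([tx, ty, tz])   # LiDAR -> camera coordinates
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2                # radial distortion model
    return np.stack([fx * x * d + ux, fy * y * d + uy], axis=1)

def residuals(params, pts_lidar, corners_2d):
    """E(q, q'): difference between projected and detected 2D corners."""
    return (project(params, pts_lidar) - corners_2d).ravel()

def refine(pts_lidar, corners_2d, x0):
    # The Huber loss limits the influence of badly matched corner pairs.
    return least_squares(residuals, x0, args=(pts_lidar, corners_2d),
                         loss="huber", f_scale=1.0)
```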
Table 2. Calibration board size measurement results.
Distance (m) | θ (°) | BB w (mm) | BB h (mm) | Scale = 0.1 w (mm) | Scale = 0.1 h (mm) | Scale = 0.2 w (mm) | Scale = 0.2 h (mm) | Scale = 0.4 w (mm) | Scale = 0.4 h (mm)
7.0 | 0.1 | 1012.34 | 996.22 | 984.11 | 981.82 | 997.90 | 991.14 | 998.51 | 997.95
7.0 | 0.2 | 994.32 | 1004.66 | 979.91 | 987.12 | 993.41 | 997.94 | 997.20 | 996.22
7.0 | 0.4 | 1003.72 | 993.731 | 974.41 | 988.20 | 994.90 | 997.01 | 997.81 | 999.17
10.0 | 0.1 | 1003.50 | 967.51 | 981.64 | 970.10 | 992.73 | 995.74 | 998.20 | 999.46
10.0 | 0.2 | 984.19 | 1004.14 | 970.04 | 983.17 | 990.73 | 997.76 | 997.21 | 998.03
10.0 | 0.4 | 992.16 | 1009.49 | 981.64 | 970.10 | 992.73 | 995.74 | 994.20 | 996.76
12.0 | 0.1 | 977.75 | 979.99 | 960.40 | 971.44 | 982.07 | 990.14 | 998.00 | 994.41
12.0 | 0.2 | 978.03 | 979.76 | 954.70 | 988.00 | 991.43 | 996.01 | 996.24 | 996.90
12.0 | 0.4 | 975.39 | 969.98 | 954.55 | 965.54 | 981.14 | 989.33 | 997.29 | 996.71
The w and h, respectively, represent the lengths of the two sides of a rectangle fitted with the currently refined edge points; BB represents the 2D bounding box of the point cloud.
Table 3. Matching score of different objects.
θ (°) | Object | N | ScoreA | ScoreB | ScoreC | Score | IsTrue
0.1 | Figure 10a | 34,884 | 0.031361 | 0.997999 | 0.996798 | 0.031198 | False
0.1 | Figure 10b | 4180 | 0.136842 | 0.992114 | 0.987158 | 0.134019 | False
0.1 | Figure 10c | 1116 | 0.870968 | 0.994576 | 0.982246 | 0.850865 | True
0.1 | Figure 10d | 312 | 0.282051 | 0.134227 | 0.250382 | 0.009479 | False
0.1 | Figure 10e | 283 | 0.745583 | 0.461738 | 0.302227 | 0.104046 | False
0.1 | Figure 10f | 263 | 0.365019 | 0.279073 | 0.349486 | 0.035601 | False
0.1 | Figure 10g | 243 | 0.786008 | 0.400130 | 0.255705 | 0.080421 | False
0.1 | Figure 10h | 202 | 0.876238 | 0.338324 | 0.215452 | 0.063871 | False
0.1 | Figure 10i | 201 | 0.671642 | 0.323524 | 0.214348 | 0.046576 | False
0.2 | No. 1 | 9514 | 0.04225 | 0.99512 | 0.98902 | 0.04158 | False
0.2 | No. 2 | 7917 | 0.04724 | 0.99461 | 0.99401 | 0.04670 | False
0.2 | No. 3 | 2004 | 0.12974 | 0.99707 | 0.98869 | 0.12789 | False
0.2 | No. 4 | 555 | 0.81441 | 0.98390 | 0.94908 | 0.76051 | True
0.2 | No. 5 | 156 | 0.30769 | 0.17733 | 0.18889 | 0.01030 | False
0.2 | No. 6 | 142 | 0.76056 | 0.46460 | 0.30043 | 0.10616 | False
0.2 | No. 7 | 132 | 0.36363 | 0.29602 | 0.31777 | 0.03420 | False
0.2 | No. 8 | 106 | 0.94340 | 0.39433 | 0.36821 | 0.13698 | False
0.2 | No. 9 | 101 | 0.66336 | 0.34315 | 0.21464 | 0.04886 | False
0.4 | No. 10 | 3887 | 0.02727 | 0.96699 | 0.96381 | 0.02541 | False
0.4 | No. 11 | 2276 | 0.06678 | 0.48287 | 0.80706 | 0.02602 | False
0.4 | No. 12 | 1689 | 0.13025 | 0.99809 | 0.99416 | 0.12924 | False
0.4 | No. 13 | 529 | 0.24952 | 0.99838 | 0.99115 | 0.24692 | False
0.4 | No. 14 | 447 | 0.30648 | 0.98646 | 0.97205 | 0.29389 | False
0.4 | No. 15 | 318 | 0.24842 | 0.52587 | 0.65521 | 0.08559 | False
0.4 | No. 16 | 279 | 0.87813 | 0.97943 | 0.95880 | 0.82464 | True
0.4 | No. 17 | 49 | 0.91837 | 0.72605 | 0.19707 | 0.13140 | False
0.4 | No. 18 | 48 | 0.33333 | 0.32933 | 0.30258 | 0.03321 | False
The Nos. 1–18 represent different objects after Euclidean clustering, and N indicates the number of points belonging to one point cloud; the bold data indicate the optimal value in the current column; the item “IsTrue” indicates whether the object is the calibration board.
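Table 3 shows that, for each θ, the cluster with the highest overall Score is accepted as the calibration board (the rows marked "True"). The exact definitions of ScoreA–ScoreC are given in the method section of the paper; the sketch below only illustrates the selection logic with a placeholder score that compares each cluster's planar bounding box with a nominal board size and its point count with an expected count. The board_size and expected_points values are assumptions, not the authors' formulas.

```python
import numpy as np

def cluster_score(cluster_xyz, board_size=(1.0, 1.0), expected_points=400):
    """Placeholder matching score for one cluster (not the paper's formula)."""
    centered = cluster_xyz - cluster_xyz.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    uv = centered @ vt[:2].T                      # 2D extent in the cluster's own plane
    w, h = uv.max(axis=0) - uv.min(axis=0)
    box_area, board_area = w * h, board_size[0] * board_size[1]
    area_ratio = min(box_area, board_area) / max(box_area, board_area)
    count_ratio = min(len(cluster_xyz) / expected_points, 1.0)
    return area_ratio * count_ratio

def pick_calibration_board(clusters):
    """Keep the cluster with the maximum score, mirroring the 'IsTrue' rows of Table 3."""
    scores = [cluster_score(c) for c in clusters]
    return clusters[int(np.argmax(scores))], scores
```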
Table 4. Calibration results of different methods.
Method | θ (°) | Re-projection error (pixel) | Improved (pixel) | Gain (%) | Intrinsic Parameters | Extrinsic Parameters | Stability | Automation Level
Proposed method | 0.1 | 0.9350 | - | - | √ | √ | High | High
Proposed method | 0.2 | 1.0942 | - | -
Proposed method | 0.4 | 1.1992 | - | -
[18] | 0.1 | 2.7422 | 2.2021 | +19.7 | | | Low | Low
[18] | 0.2 | 3.2414 | 2.4074 | +25.7
[18] | 0.4 | 5.9693 | 3.0073 | +49.6
[22] | 0.1 | 2.1751 | 1.0416 | +52.1 | × | | Middle | Middle
[22] | 0.2 | 2.9751 | 2.0465 | +31.2
[22] | 0.4 | 4.8978 | 3.3172 | +32.3
[19] | 0.1 | 2.2684 | 1.6406 | +27.7 | | | Middle | Low
[19] | 0.2 | 2.6452 | 1.9234 | +27.3
[19] | 0.4 | 4.4114 | 3.7310 | +15.4
[20] | 0.1 | 1.3317 | - | - | × | | Middle | Low
[20] | 0.2 | 1.3982 | - | -
[20] | 0.4 | 3.1492 | - | -
[24] | 0.1 | 0.7433 | - | - | × | | Middle | High
[24] | 0.2 | 2.4157 | - | -
[24] | 0.4 | 3.5512 | - | -
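The Gain column of Table 4 is consistent with the relative reduction of the re-projection error, e.g., for [18] at θ = 0.1°: (2.7422 - 2.2021)/2.7422 ≈ 19.7%. The short check below reproduces this relation; the function name is only illustrative.

```python
def gain_percent(error_px, improved_px):
    """Relative re-projection error reduction, matching the Gain column of Table 4."""
    return 100.0 * (error_px - improved_px) / error_px

assert round(gain_percent(2.7422, 2.2021), 1) == 19.7   # [18], theta = 0.1 deg
assert round(gain_percent(4.4114, 3.7310), 1) == 15.4   # [19], theta = 0.4 deg
```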
Table 5. Mean and standard deviation of re-projection errors for 2–10 groups of images and point clouds.
θ (°) | Statistic | NP = 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
0.1 | Mean | 1.535 | 1.469 | 1.426 | 1.301 | 1.283 | 1.274 | 1.272 | 1.249 | 1.21
0.1 | Std | 0.696 | 0.703 | 0.549 | 0.658 | 0.565 | 0.529 | 0.547 | 0.492 | 0.476
0.2 | Mean | 2.109 | 1.945 | 1.637 | 1.470 | 1.457 | 1.408 | 1.324 | 1.303 | 1.256
0.2 | Std | 1.007 | 0.682 | 0.740 | 0.623 | 0.697 | 0.644 | 0.737 | 0.624 | 0.552
0.4 | Mean | 2.301 | 1.858 | 1.750 | 1.745 | 1.620 | 1.453 | 1.388 | 1.311 | 1.28
0.4 | Std | 1.372 | 0.720 | 0.712 | 0.630 | 0.720 | 0.802 | 0.790 | 0.755 | 0.586
NP represents the number of test positions; Mean is the average error; and Std is the standard deviation.
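The statistics in Table 5 (and the error plots of Figures 11 and 12) follow from projecting the 3D feature points with the calibrated parameters and comparing them with the detected 2D corners. A short sketch of this evaluation using OpenCV's projectPoints is given below; variable names are illustrative, and the distortion vector is assumed to hold (k1, k2, p1, p2) with the tangential terms set to zero.

```python
import cv2
import numpy as np

def reprojection_stats(pts_3d, pts_2d, K, dist, rvec, tvec):
    """Mean and standard deviation of the per-point re-projection error (pixels)."""
    projected, _ = cv2.projectPoints(pts_3d.astype(np.float64), rvec, tvec, K, dist)
    errors = np.linalg.norm(projected.reshape(-1, 2) - pts_2d, axis=1)
    return float(errors.mean()), float(errors.std())

# Example call with only the two radial terms of Table 6 (p1 = p2 = 0 assumed):
# mean_err, std_err = reprojection_stats(pts_3d, pts_2d, K,
#                                        np.array([-0.220, 0.187, 0.0, 0.0]),
#                                        rvec, tvec)
```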
Table 6. Mean and Std of optimization parameters.
θ (°) | Statistic | f_x | f_y | u_x (pixel) | u_y (pixel) | k_1 | k_2 | r_x (°) | r_y (°) | r_z (°) | t_x (cm) | t_y (cm) | t_z (cm)
0.1 | Mean | 2825.75 | 2817.57 | 969.026 | 597.901 | −0.220 | 0.187 | 89.767 | −89.738 | −0.321 | −6.0651 | 9.6231 | −1.4969
0.1 | Std | 1.947 | 2.382 | 6.471 | 2.873 | 0.001 | 0.010 | 0.042 | 0.032 | 0.103 | 0.5986 | 0.3348 | 0.1995
0.2 | Mean | 2826.23 | 2818.55 | 970.405 | 595.841 | −0.230 | 0.161 | 90.001 | −89.775 | −0.582 | −6.0065 | 10.0176 | −1.5216
0.2 | Std | 2.827 | 3.463 | 5.824 | 3.604 | 0.009 | 0.000 | 0.069 | 0.035 | 0.111 | 0.4551 | 0.3172 | 0.4093
0.4 | Mean | 2826.85 | 2818.18 | 966.150 | 602.771 | −0.236 | 0.201 | 90.106 | −89.690 | −0.624 | −5.9884 | 8.4497 | −1.7355
0.4 | Std | 3.146 | 3.955 | 9.0291 | 5.042 | 0.007 | 0.001 | 0.044 | 0.042 | 0.171 | 0.4098 | 0.4856 | 0.4395
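Table 6 reports the extrinsic parameters as three Euler angles (degrees) and a translation (cm). A small helper for assembling the corresponding 4 × 4 LiDAR-to-camera transform is sketched below; the X-Y-Z rotation order and the centimetre-to-metre conversion are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def extrinsic_matrix(rx_deg, ry_deg, rz_deg, tx_cm, ty_cm, tz_cm):
    """Assemble a 4x4 homogeneous transform from Table 6-style parameters.

    The X-Y-Z Euler order and the cm -> m conversion are illustrative assumptions.
    """
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [rx_deg, ry_deg, rz_deg],
                                    degrees=True).as_matrix()
    T[:3, 3] = np.array([tx_cm, ty_cm, tz_cm]) / 100.0
    return T

# Mean values for theta = 0.1 deg from Table 6:
T_lidar_to_cam = extrinsic_matrix(89.767, -89.738, -0.321, -6.0651, 9.6231, -1.4969)
```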