Article

A Closed-Form Solution to Planar Feature-Based Registration of LiDAR Point Clouds

1 Jiangsu Key Laboratory of Resources and Environmental Information Engineering, China University of Mining and Technology, Xuzhou 221116, China
2 Key Laboratory of Land Environment and Disaster Monitoring, Ministry of Natural Resources, China University of Mining and Technology, Xuzhou 221116, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2021, 10(7), 435; https://doi.org/10.3390/ijgi10070435
Submission received: 20 April 2021 / Revised: 18 June 2021 / Accepted: 24 June 2021 / Published: 25 June 2021
(This article belongs to the Special Issue Advancements in Remote Sensing Derived Point Cloud Processing)

Abstract:
Since pairwise registration is a necessary step for the seamless fusion of point clouds from neighboring stations, a closed-form solution to planar feature-based registration of LiDAR (Light Detection and Ranging) point clouds is proposed in this paper. Based on the Plücker coordinate-based representation of linear features in three-dimensional space, a quad tuple-based representation of planar features is introduced, which makes it possible to directly determine the difference between any two planar features. Dual quaternions are employed to represent spatial transformation and operations between dual quaternions and the quad tuple-based representation of planar features are given, with which an error norm is constructed. Based on L2-norm-minimization, detailed derivations of the proposed solution are explained step by step. Two experiments were designed in which simulated data and real data were both used to verify the correctness and the feasibility of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters, which verifies the correctness of the presented solution. With the real data, the calculated registration results were consistent with the results calculated by iterative methods. Conclusions can be drawn from the two experiments: (1) The proposed solution does not require any initial estimates of the unknown parameters in advance, which assures the stability and robustness of the solution; (2) Using dual quaternions to represent spatial transformation greatly reduces the additional constraints in the estimation process.

1. Introduction

With the fast development of Light Detection and Ranging (LiDAR) techniques and their successful application in three-dimensional data acquisition, point cloud registration has attracted significant attention for its role in the fusion of LiDAR point clouds from two neighboring stations. The essence of point cloud registration is to estimate the transformation parameters between the two neighboring stations, a process also known as spatial transformation. Such a spatial transformation comprises a rotation around the x, y, and z axes, a translation along the three axes, and a scale factor.
Based on the registration primitives used to estimate the unknown transformation parameters, the available methods can be categorized into point feature-based methods [1,2], linear feature-based methods [3,4,5], planar feature-based methods [6,7,8], and hybrid feature-based methods [9,10,11]. Point features remain the most popular and widely used registration primitives owing to their simple mathematical expression. However, owing to the characteristics of LiDAR technology, point features extracted from point clouds often have low accuracy unless pre-established man-made reflectors are used. Besides point features, linear features are another popular registration primitive. Compared to point and linear features, the extraction of planar features from point clouds is more convenient and often more accurate. Nevertheless, fewer available methods use planar features as registration primitives; this may be due to their diverse mathematical expressions, which make it difficult to compare and determine the differences between two planar features.
Moreover, according to whether initial approximate estimates of the unknown transformation parameters are needed in advance, the available registration methods can be divided into two categories: iterative methods [1,2,5,9,12] and closed-form methods [13,14,15,16]. Most available methods obtain the registration results by iterative computation; however, their disadvantages must be treated properly. First, the nonlinear registration model must be linearized before it can be solved numerically, and initial approximate estimates of the unknown parameters must be determined in advance to assist the linearization. Second, the number of iterations is closely related to the choice of those initial estimates; when they are not close to the maximum likelihood values, the number of iterations increases. By contrast, closed-form methods do not need any initial approximate estimates of the unknown transformation parameters; more importantly, the optimal transformation parameters can be obtained in a single step. A detailed comparison of the iterative and closed-form methods is shown in Table 1.
Based on the abovementioned analysis, a closed-form solution to planar feature-based registration of point clouds is proposed. Firstly, a quad tuple-based representation of planar features is given and dual quaternions are then employed to represent the spatial transformation. After the operations between the dual quaternions and the quad tuple are explained, an error norm (error function) is constructed by supposing that the two conjugate planar features are equivalent after registration. Based on L2-norm-minimization, detailed derivations of the proposed solution are given step by step. Lastly, two experiments are designed to verify the correctness and feasibility of the proposed method, in which simulated and real data are both incorporated.
The remainder of the paper is organized as follows. Section 2 reviews some related work. Section 3 gives the operations between dual quaternions and the quad tuple-based representation of planar features in three-dimensional space. Section 4 explains the proposed solution, in which the detailed derivations of all formulas are given step by step. Section 5 shows the experiments and the results. Section 6 discusses the proposed solution and gives suggestions for future work. Section 7 concludes the paper.

2. Related Work

2.1. Quaternion and Its Application in Point Cloud Registration

The quaternion, first proposed by W. R. Hamilton, has proved to be a convenient and effective way to describe rotation in three-dimensional space [17]. Due to its compactness and high efficiency, the quaternion has attracted considerable attention from researchers working in different fields. Representative work can be summarized as follows: Horn introduced the unit quaternion to solve the absolute orientation problem in photogrammetry [13]; Shen et al. used unit quaternions in the transformation of two sets of three-dimensional coordinates [12]; Zeng et al. presented a unit quaternion-based, iterative solution to coordinate transformation in geodesy [18]; Joseph et al. introduced unit quaternions in robot arm manipulation and presented an extended Kalman filter-based algorithm for the estimation of human motion [19]; Kim et al. presented a similar algorithm, which employed unit quaternions for the real-time estimation of orientation in robot arm manipulation [20].
As is known, spatial transformation mainly consists of a rotation and a translation. However, unit quaternion can only represent the spatial rotation in three-dimensional space, as shown in Figure 1. Later, dual quaternion was presented to represent spatial rotation and translation simultaneously, as shown in Figure 2, in which two quaternions are integrated with the aid of a dual number. The first successful application of dual quaternion in estimating the unknown spatial transformation parameters was introduced by Walker et al. [15]. Based on the analysis of correspondences between dual quaternion-based and matrix-based representations, a single cost function was formulated, which enabled the simultaneous calculation of six parameters in point feature-based registration. Instead of estimating only the six transformation parameters, Wang et al. added the scale parameter to a single cost function, which enabled the simultaneous derivation of rotation, translation, and scale parameters [16]. Prošková et al. also introduced dual quaternion to represent spatial transformation; the presented approach was successfully applied for deriving the seven parameter-based transformation between two sets of three-dimensional coordinates [21,22].
From the above analysis, it can be concluded that a unit quaternion can only represent rotation in three-dimensional space. When it is applied to a spatial transformation, such as the seven-parameter Helmert transform, the rotation parameters must be estimated first, after which the translation parameters and the scale factor are estimated. If errors are introduced into the estimation of the rotation parameters, the accuracy of the translation parameters and the scale factor will inevitably be affected, which motivates the use of dual quaternions to represent spatial transformation.

2.2. Planar Feature-Based Registration Methods

Compared to point and linear features, planar features exist widely in point clouds acquired from man-made structures. When no specific man-made reflectors are used, point and linear features are often extracted by fitting and intersecting adjacent planar features. Therefore, if planar features can be used directly in point cloud registration, the amount of calculation will be reduced greatly. In other words, employing planar features in point cloud registration provides more conditions for estimating the transformation parameters than using point and linear features only. Of the existing planar feature-based registration methods, some use the distance between a point and its corresponding planar feature to construct error norms [6,23], while others use the parallelism between the two normal vectors and the distance between the two conjugate planar features [7,24,25,26,27,28]. Detailed explanations are as follows: Grant et al. presented an iterative solution to pairwise point cloud registration based on the correspondences between point and planar features [23]. Pavan et al. presented a closed-form solution to planar feature-based registration of terrestrial LiDAR point clouds [27], which was later applied in the global refinement of terrestrial laser scanner (TLS) data registration [7]. Moreover, Wang et al. [24], Zhang et al. [25], and Previtali et al. [26] separately presented planar feature-based registration algorithms based on the correspondences between each pair of planar features, in which the Rodrigues matrix, Euler angles, and quaternions, respectively, were employed to represent spatial rotation.
In addition, Khoshelham presented a closed-form solution to planar feature-based pairwise point cloud registration [6], in which the Kronecker product was employed to linearize the nonlinear equations; both rigid and similarity transformations were given in detail, although the parallelism between the normal vectors of the two conjugate planar features was not fully utilized. Förstner and Khoshelham presented an optimal solution and three direct solutions for efficient motion estimation from plane-to-plane correspondences and provided an analysis of the accuracy of the solutions, comparing their performance with the classical iterative closest point (ICP) algorithm [28].
In general, few of the available methods incorporate the scale parameter in planar feature-based registration, which makes them difficult to apply in the fusion of data from different sources. Moreover, some methods use point and planar features as registration primitives, while the role of normal directions is neglected in the estimation of transformation parameters.
To summarize: (1) When a unit quaternion is introduced to represent spatial rotation in point cloud registration, the accuracy of the translation parameters and the scale factor will be affected whenever errors are introduced into the estimation of the rotation parameters; (2) Planar features are good alternatives in point cloud registration; however, determining the difference between corresponding planar features is difficult with traditional mathematical expressions; (3) Few methods incorporate the scale parameter in planar feature-based registration, which makes them difficult to apply in the fusion of data from different sources; (4) Most available methods estimate the registration parameters by iterative calculation, in which initial approximate estimates of the unknown parameters must be determined in advance to assist the linearization. Based on the above analysis, our goal was to develop a closed-form solution to planar feature-based registration in which a quad tuple-based representation of planar features in three-dimensional space is given, making it possible to directly determine the difference between each pair of corresponding planar features; dual quaternions are employed to represent the spatial transformation, making it possible to estimate all the registration parameters simultaneously. The scale factor is also considered, which makes it possible to apply our method to the fusion of data from different sources.

3. Dual Quaternion-Based Transformation of Planar Features

3.1. Representation of a Plane in Three-Dimensional Space

Traditionally, a plane is represented by its normal vector $\boldsymbol{n}$ and any one point $\boldsymbol{p}$ lying on it. Since the selection of the point $\boldsymbol{p}$ is arbitrary, there is more than one representation for a single plane, which makes it difficult to compare and determine the difference between two planes. To ensure a unique representation of a plane in three-dimensional space, a quad tuple-based representation is employed, as shown in Figure 3, in which a plane is represented as a quad tuple $\hat{\Gamma}$, as shown in Equation (1):

$$\hat{\Gamma} = (\boldsymbol{l},\ m) \quad (1)$$

where $\boldsymbol{l} = \boldsymbol{n}/\|\boldsymbol{n}\|$ is the normalized normal vector and $m = \boldsymbol{p} \cdot \boldsymbol{l}$ is the plane moment, that is, the distance from the origin to the plane.
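As a minimal numerical illustration (a NumPy sketch; the function name is ours, not the paper's), the quad tuple of Equation (1) can be computed from any sampled point on the plane, and the moment $m$ comes out the same regardless of which point is chosen:

```python
import numpy as np

def plane_quad_tuple(n, p):
    """Quad tuple (l, m) of a plane: l is the unit normal n/||n||,
    m = p . l is the plane moment (signed origin-to-plane distance)."""
    n = np.asarray(n, dtype=float)
    l = n / np.linalg.norm(n)
    m = float(np.asarray(p, dtype=float) @ l)
    return l, m

# two different points on the plane z = 2 yield the same quad tuple
l1, m1 = plane_quad_tuple([0, 0, 3], [1, 5, 2])
l2, m2 = plane_quad_tuple([0, 0, 3], [-4, 0, 2])
assert np.allclose(l1, l2) and np.isclose(m1, m2) and np.isclose(m1, 2.0)
```

This point-independence is exactly what makes the quad tuple a unique representation, so two planes can be compared componentwise.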

3.2. Definition of a Dual Quaternion

A quaternion is a composite of four real numbers, generally represented in the form of a quad tuple $\dot{r}$, as shown in Equation (2):

$$\dot{r} = (r_0\ \ r_1\ \ r_2\ \ r_3)^T \quad (2)$$

On the condition that $\dot{r}^T\dot{r} = 1$, $\dot{r}$ is called a unit quaternion.
A dual quaternion is a composite of two quaternions combined through the dual unit $\varepsilon$ (with $\varepsilon^2 = 0$), as shown in Equation (3):

$$\hat{q} = \dot{r} + \varepsilon\dot{s} \quad (3)$$

where $\dot{r}$ and $\dot{s}$ are both quaternions, corresponding to the real part and the dual part of $\hat{q}$, respectively; when $\dot{r}^T\dot{r} = 1$, $\hat{q}$ is a unit dual quaternion, which is the default definition of a dual quaternion in this paper.

3.3. Operation Rules for Dual Quaternions

Given two dual quaternions $\hat{q}_1 = \dot{r}_1 + \varepsilon\dot{s}_1$ and $\hat{q}_2 = \dot{r}_2 + \varepsilon\dot{s}_2$, the addition, subtraction, and multiplication between them can be expressed as shown in Equation (4):

$$\begin{cases} \hat{q}_1 + \hat{q}_2 = (\dot{r}_1 + \dot{r}_2) + \varepsilon(\dot{s}_1 + \dot{s}_2) \\ \hat{q}_1 - \hat{q}_2 = (\dot{r}_1 - \dot{r}_2) + \varepsilon(\dot{s}_1 - \dot{s}_2) \\ \hat{q}_1\hat{q}_2 = (\dot{r}_1 + \varepsilon\dot{s}_1)(\dot{r}_2 + \varepsilon\dot{s}_2) = \dot{r}_1\dot{r}_2 + \varepsilon(\dot{r}_1\dot{s}_2 + \dot{s}_1\dot{r}_2) \end{cases} \quad (4)$$
Furthermore, the multiplication of two quaternions $\dot{r}_1$ and $\dot{r}_2$ can also be represented in matrix form, as shown in Equation (5):

$$\dot{r}_1\dot{r}_2 = Q(\dot{r}_1)\,\dot{r}_2 = W(\dot{r}_2)\,\dot{r}_1 \quad (5)$$

where

$$Q(\dot{r}) = \begin{bmatrix} r_0 & -r_1 & -r_2 & -r_3 \\ r_1 & r_0 & -r_3 & r_2 \\ r_2 & r_3 & r_0 & -r_1 \\ r_3 & -r_2 & r_1 & r_0 \end{bmatrix}, \qquad W(\dot{r}) = \begin{bmatrix} r_0 & -r_1 & -r_2 & -r_3 \\ r_1 & r_0 & r_3 & -r_2 \\ r_2 & -r_3 & r_0 & r_1 \\ r_3 & r_2 & -r_1 & r_0 \end{bmatrix}$$
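The identity in Equation (5) can be checked numerically. The sketch below (our code, scalar-first quaternion convention, with $Q$ and $W$ in the standard form used by Walker et al.) confirms that both matrix products reproduce the Hamilton product:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions (scalar-first convention)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0])

def Q(r):
    """Left-multiplication matrix: r1 * r2 = Q(r1) r2."""
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0, -r3,  r2],
                     [r2,  r3,  r0, -r1],
                     [r3, -r2,  r1,  r0]])

def W(r):
    """Right-multiplication matrix: r1 * r2 = W(r2) r1."""
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0,  r3, -r2],
                     [r2, -r3,  r0,  r1],
                     [r3,  r2, -r1,  r0]])

p = np.array([0.5, -1.0, 2.0, 0.3])
q = np.array([1.5, 0.4, -0.2, 2.0])
assert np.allclose(quat_mul(p, q), Q(p) @ q)   # Equation (5), left form
assert np.allclose(quat_mul(p, q), W(q) @ p)   # Equation (5), right form
```

Because quaternion multiplication is non-commutative, $Q(\dot{r}_1)$ acts from the left and $W(\dot{r}_2)$ from the right; swapping them would silently change the sign of the cross-product terms.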

3.4. Dual Quaternion-Based Transformation of Planar Features

According to the definition of a dual quaternion, the spatial transformation of a plane in three-dimensional space can be expressed as shown in Equation (6):

$$\hat{\Gamma}_a = \hat{q}\,\hat{\Gamma}_b\,\hat{q}^{*} \quad (6)$$

where $\hat{q}$ is the dual quaternion corresponding to the spatial transformation; $\hat{q}^{*}$ is the conjugate of $\hat{q}$, which takes the form $\hat{q}^{*} = \dot{r}^{*} + \varepsilon\dot{s}^{*}$; and $\hat{\Gamma}_a$ and $\hat{\Gamma}_b$ represent a pair of conjugate planar features, $\hat{\Gamma}_b$ being the one before transformation and $\hat{\Gamma}_a$ the one after transformation.
To realize the dual quaternion-based similarity transformation, a plane should be represented as shown in Equation (7):

$$\hat{\Gamma} = \dot{l} + \varepsilon\dot{m} \quad (7)$$

where $\dot{l} = (0,\ \boldsymbol{l}^T)^T$ corresponds to the normal vector of the plane and $\dot{m} = (m,\ \boldsymbol{0}^T)^T$ corresponds to the moment of the plane.
According to Chasles' theorem [29], when the scale parameter is not considered, six independent parameters are needed to represent the transformation between two different coordinate frames. However, when a dual quaternion is used, there are eight parameters in total, so two constraints should be added when estimating the unknown parameters of the spatial similarity transformation, as shown in Equation (8):

$$\begin{cases} \dot{r}^T\dot{r} = 1 \\ \dot{r}^T\dot{s} = 0 \end{cases} \quad (8)$$

where $\dot{r}$ is the quaternion that represents the spatial rotation and $\dot{s}$ is the quaternion that encodes the spatial translation. Equation (8) states that $\dot{r}$ is a unit quaternion and that it is orthogonal to $\dot{s}$.

4. Planar Features-Based Registration Model

4.1. Construction of the Objective Functions

In three-dimensional space, the transformation of a plane can be expressed as shown in Equation (9):

$$\begin{cases} \boldsymbol{l}_a = R\,\boldsymbol{l}_b \\ m_a = \boldsymbol{p}_a \cdot \boldsymbol{l}_a = \mu m_b + \boldsymbol{t}^T R\,\boldsymbol{l}_b \end{cases} \quad (9)$$

where $\boldsymbol{l}_b$ and $m_b$ represent the normal vector and the moment of the plane before transformation, respectively; $\boldsymbol{l}_a$ and $m_a$ represent the normal vector and the moment after transformation, respectively; and $R$, $\boldsymbol{t}$, and $\mu$ represent the rotation matrix, the translation vector, and the scale factor between the two coordinate systems, respectively.
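Equation (9) can be verified numerically: rotating the normal and applying the similarity transform $\boldsymbol{p}_a = \mu R\,\boldsymbol{p}_b + \boldsymbol{t}$ to a sampled plane point reproduces the transformed moment (our sketch; the specific rotation and numeric values are arbitrary):

```python
import numpy as np

def rot_x(theta):
    """Rotation matrix about the x axis (any valid rotation works here)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

# plane before transformation: unit normal l_b, sampled point p_b, moment m_b
l_b = np.array([0.0, 0.0, 1.0])
p_b = np.array([2.0, -1.0, 3.0])
m_b = p_b @ l_b

# similarity transformation: rotation R, translation t, scale mu
R, t, mu = rot_x(0.7), np.array([1.0, 2.0, -0.5]), 1.3

l_a = R @ l_b               # transformed normal
p_a = mu * (R @ p_b) + t    # transformed plane point
m_a = p_a @ l_a             # transformed moment

# Equation (9): m_a = mu * m_b + t^T R l_b
assert np.isclose(m_a, mu * m_b + t @ (R @ l_b))
```

The check works because $\boldsymbol{p}_a \cdot \boldsymbol{l}_a = (\mu R\boldsymbol{p}_b + \boldsymbol{t}) \cdot R\boldsymbol{l}_b$ and $R^T R = I$, which is the one-line derivation behind the second row of Equation (9).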
With a dual quaternion, Equation (9) can be further expressed as Equation (10):

$$\begin{cases} \dot{l}_a = \dot{r}\,\dot{l}_b\,\dot{r}^{*} \\ \dot{m}_a = \mu\dot{m}_b + \dfrac{1}{2}\left( \dot{r}\,\dot{l}_b\,\dot{r}^{*}\dot{t}^{*} - \dot{t}\,\dot{r}\,\dot{l}_b\,\dot{r}^{*} \right) \end{cases} \quad (10)$$

where $\dot{r}$ is the unit quaternion corresponding to the rotation matrix $R$; $\dot{r}^{*}$ is the conjugate of $\dot{r}$; $\dot{t} = (0,\ \boldsymbol{t}^T)^T$; $\dot{l}_a = (0,\ \boldsymbol{l}_a^T)^T$; $\dot{l}_b = (0,\ \boldsymbol{l}_b^T)^T$; $\dot{m}_a = (m_a,\ \boldsymbol{0}^T)^T$; and $\dot{m}_b = (m_b,\ \boldsymbol{0}^T)^T$.
Letting $\dot{s} = \frac{1}{2}\dot{t}\,\dot{r}$, Equation (10) can be rewritten as Equation (11):

$$\begin{cases} \dot{l}_a = \dot{r}\,\dot{l}_b\,\dot{r}^{*} \\ \dot{m}_a = \mu\dot{m}_b - \dot{s}\,\dot{l}_b\,\dot{r}^{*} + \dot{r}\,\dot{l}_b\,\dot{s}^{*} \end{cases} \quad (11)$$

where $\dot{s}^{*}$ is the conjugate of $\dot{s}$.
Based on Equation (5), Equation (11) can be further rewritten in matrix form as Equation (12):

$$\begin{cases} \dot{l}_a = W(\dot{r})^T Q(\dot{r})\,\dot{l}_b \\ \dot{m}_a = \mu\dot{m}_b - W(\dot{r})^T Q(\dot{s})\,\dot{l}_b + W(\dot{s})^T Q(\dot{r})\,\dot{l}_b \end{cases} \quad (12)$$
Considering the existence of random errors, the planar feature-based registration approach aims to minimize the difference between $(\boldsymbol{l}_a, m_a)$ and the transformed $(\boldsymbol{l}_b, m_b)$. The quadratic form of this difference is introduced as the error equations according to the least squares criterion, as shown in Equations (13) and (14):

$$f_1^2 = \left\| \dot{l}_a - W(\dot{r})^T Q(\dot{r})\,\dot{l}_b \right\|^2 \quad (13)$$

$$f_2^2 = \left\| \dot{m}_a - \mu\dot{m}_b + W(\dot{r})^T Q(\dot{s})\,\dot{l}_b - W(\dot{s})^T Q(\dot{r})\,\dot{l}_b \right\|^2 \quad (14)$$
The transformation parameters are obtained when $f = f_1^2 + f_2^2$ reaches its minimum. Since $f_1^2$ and $f_2^2$ are both non-negative, $f$ is minimal when each term reaches its minimum. The optimal value of $\dot{r}$ is obtained by minimizing Equation (13); $\dot{s}$ and $\mu$ are then obtained by minimizing Equation (14).

4.2. Solution of the Rotation Quaternion $\dot{r}$

Letting $C_{l1} = \dot{l}_a^T\dot{l}_a + \dot{l}_b^T\dot{l}_b$ and $C_l = -2\,Q(\dot{l}_a)^T W(\dot{l}_b)$, Equation (13) can be decomposed as Equation (15):

$$f_1^2 = C_{l1} + \dot{r}^T C_l\,\dot{r} \quad (15)$$
Using the first constraint in Equation (8) as the restriction, the error function can be expressed as Equation (16):

$$\bar{F}_1 = C_{l1} + \dot{r}^T C_l\,\dot{r} + \lambda_1\left( \dot{r}^T\dot{r} - 1 \right) \quad (16)$$

where $\lambda_1$ is a Lagrange multiplier.
Taking the partial derivative of Equation (16) with respect to $\dot{r}$ and setting it to zero yields Equation (17):

$$\frac{\partial \bar{F}_1}{\partial \dot{r}} = \left( C_l + C_l^T \right)\dot{r} + 2\lambda_1\dot{r} = 0 \quad (17)$$
Letting $A = -\frac{1}{2}\left( C_l + C_l^T \right)$, Equation (17) can be further expressed as Equation (18):

$$A\,\dot{r} = \lambda_1\dot{r} \quad (18)$$
According to the definition of the eigenvalues and eigenvectors of a matrix, r ˙ represents one of the four eigenvectors of A , and λ 1 represents the corresponding eigenvalue of r ˙ . Among the four eigenvectors that satisfy Equation (18), the optimal solution can be determined by referring back to Equation (17).
Left-multiplying Equation (17) by $\dot{r}^T$ yields Equation (19):

$$\dot{r}^T C_l\,\dot{r} = \frac{1}{2}\dot{r}^T\left( C_l + C_l^T \right)\dot{r} = -\lambda_1 \quad (19)$$
Substituting Equation (19) into Equation (16) yields Equation (20):

$$\bar{F}_1 = C_{l1} - \lambda_1 \quad (20)$$
When $\lambda_1$ reaches its maximum, Equation (20) is minimized. It follows that the optimal solution of $\dot{r}$ is the eigenvector corresponding to the maximum eigenvalue of $A$.
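The rotation step of Section 4.2 can be sketched as follows (our NumPy code; $Q$ and $W$ are the matrices from Equation (5), contributions from each pair of conjugate unit normals are summed into $C_l$, and the recovered quaternion is defined only up to sign, which leaves the rotation matrix unchanged):

```python
import numpy as np

def Q(r):
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0, -r3,  r2],
                     [r2,  r3,  r0, -r1],
                     [r3, -r2,  r1,  r0]])

def W(r):
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0,  r3, -r2],
                     [r2, -r3,  r0,  r1],
                     [r3,  r2, -r1,  r0]])

def estimate_rotation(normals_a, normals_b):
    """Closed-form rotation: eigenvector of A = -(C_l + C_l^T)/2
    for the largest eigenvalue (Equations (15)-(20))."""
    C_l = np.zeros((4, 4))
    for la, lb in zip(normals_a, normals_b):
        la_q = np.concatenate(([0.0], la))  # quaternion form (0, l)
        lb_q = np.concatenate(([0.0], lb))
        C_l += -2.0 * Q(la_q).T @ W(lb_q)
    A = -0.5 * (C_l + C_l.T)
    vals, vecs = np.linalg.eigh(A)          # ascending eigenvalues
    return vecs[:, np.argmax(vals)]         # unit quaternion r

def quat_to_rotmat(r):
    """3x3 rotation block of W(r)^T Q(r)."""
    return (W(r).T @ Q(r))[1:, 1:]

# noise-free check: normals rotated by a known quaternion are recovered
r_true = np.array([1.0, 0.2, -0.3, 0.4])
r_true /= np.linalg.norm(r_true)
R_true = quat_to_rotmat(r_true)
nb = [np.array([1.0, 0.0, 0.0]),
      np.array([0.0, 1.0, 0.0]),
      np.array([0.0, 0.0, 1.0])]
na = [R_true @ v for v in nb]
r_est = estimate_rotation(na, nb)
assert np.allclose(quat_to_rotmat(r_est), R_true)
```

Because `eigh` operates on the symmetric matrix $A$ directly, no linearization or initial estimate of $\dot{r}$ is needed, which is the point of the closed-form derivation above.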

4.3. Solution of the Translation Quaternion $\dot{s}$ and the Scale Factor $\mu$

Letting $C_1 = \dot{m}_a^T\dot{m}_a$, $C_2 = \dot{m}_b^T\dot{m}_b$, $C_3 = -2\,\dot{m}_a^T\dot{m}_b$, $C_{m1} = 2I$, $C_{m2} = 2\,Q(\dot{m}_a)^T W(\dot{l}_b)$, $C_{m3} = -2\,W(\dot{l}_b)^T Q(\dot{m}_a)$, $C_{m4} = -2\,\dot{m}_b^T W(\dot{r})^T W(\dot{l}_b)$ and $C_{m5} = 2\,\dot{m}_b^T Q(\dot{r})\,W(\dot{l}_b)$ (both $1 \times 4$ row vectors), and $C_{m6} = -2\,W(\dot{l}_b)^T W(\dot{r})\,Q(\dot{r})\,W(\dot{l}_b)$, and decomposing Equation (14), we obtain Equation (21):

$$f_2^2 = C_1 + C_2\mu^2 + \dot{s}^T\left( C_{m1} + C_{m6} \right)\dot{s} + C_3\mu + \dot{r}^T\left( C_{m2} + C_{m3} \right)\dot{s} + \mu\left( C_{m4} + C_{m5} \right)\dot{s} \quad (21)$$
Using the second constraint in Equation (8) as the restriction, the best quaternion $\dot{s}$ to represent the translation is obtained when Equation (22) is minimized:

$$\bar{F}_2 = f_2^2 + \lambda_2\left( \dot{r}^T\dot{s} \right) \quad (22)$$

where $\lambda_2$ is a Lagrange multiplier.
Taking the partial derivatives of Equation (22) with respect to $\dot{s}$ and $\mu$ and setting them equal to zero, we obtain Equations (23) and (24):

$$\frac{\partial \bar{F}_2}{\partial \dot{s}} = \left( C_{m1} + C_{m1}^T \right)\dot{s} + \left( C_{m6} + C_{m6}^T \right)\dot{s} + \left( C_{m2}^T + C_{m3}^T \right)\dot{r} + \mu\left( C_{m4}^T + C_{m5}^T \right) + \lambda_2\dot{r} = 0 \quad (23)$$

$$\frac{\partial \bar{F}_2}{\partial \mu} = 2C_2\mu + C_3 + \left( C_{m4} + C_{m5} \right)\dot{s} = 0 \quad (24)$$
Based on Equation (24), the scale factor $\mu$ can be expressed as Equation (25):

$$\mu = -\frac{C_3 + \left( C_{m4} + C_{m5} \right)\dot{s}}{2C_2} \quad (25)$$
Substituting Equation (25) into Equation (23) yields Equation (26):

$$\dot{s} = -C_s^{-1}\left( C_{m2}^T + C_{m3}^T \right)\dot{r} + \frac{C_3}{2C_2}\,C_s^{-1}\left( C_{m4}^T + C_{m5}^T \right) - \lambda_2 C_s^{-1}\dot{r} \quad (26)$$

where $C_s = \left( C_{m1} + C_{m1}^T \right) + \left( C_{m6} + C_{m6}^T \right) - \dfrac{\left( C_{m4}^T + C_{m5}^T \right)\left( C_{m4} + C_{m5} \right)}{2C_2}$.
Left-multiplying Equation (26) by $\dot{r}^T$ and applying the constraint $\dot{r}^T\dot{s} = 0$, we obtain Equation (27):

$$\dot{r}^T\dot{s} = -\dot{r}^T C_s^{-1}\left( C_{m2}^T + C_{m3}^T \right)\dot{r} + \frac{C_3}{2C_2}\,\dot{r}^T C_s^{-1}\left( C_{m4}^T + C_{m5}^T \right) - \lambda_2\dot{r}^T C_s^{-1}\dot{r} = 0 \quad (27)$$
With Equation (27), $\lambda_2$ can be expressed as Equation (28):

$$\lambda_2 = -\dot{r}^T\left( C_{m2}^T + C_{m3}^T \right)\dot{r} + \frac{C_3}{2C_2}\,\dot{r}^T\left( C_{m4}^T + C_{m5}^T \right) \quad (28)$$

Since $\left( C_{m2}^T + C_{m3}^T \right)$ is a skew-symmetric matrix, the first term of Equation (28) vanishes, that is, $\dot{r}^T\left( C_{m2}^T + C_{m3}^T \right)\dot{r} = 0$. Equation (28) can thus be simplified as Equation (29):

$$\lambda_2 = \frac{C_3}{2C_2}\,\dot{r}^T\left( C_{m4}^T + C_{m5}^T \right) \quad (29)$$
The quaternion corresponding to the translation between the two neighboring LiDAR stations can then be obtained using Equation (30):

$$\dot{t} = 2\,\dot{s}\,\dot{r}^{*} \quad (30)$$
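The round trip $\dot{s} = \frac{1}{2}\dot{t}\dot{r}$, $\dot{t} = 2\dot{s}\dot{r}^{*}$ can be checked directly with the quaternion product (our sketch; the Hamilton product and conjugate are written out explicitly in the scalar-first convention):

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product, scalar-first convention."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0])

def quat_conj(q):
    """Quaternion conjugate: negate the vector part."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

# unit rotation quaternion r and a translation t embedded as t_dot = (0, t)
r = np.array([1.0, 0.2, -0.3, 0.4])
r /= np.linalg.norm(r)
t_dot = np.array([0.0, 1.0, 2.0, -0.5])

s = 0.5 * quat_mul(t_dot, r)              # s = (1/2) t r   (Section 4.1)
t_back = 2.0 * quat_mul(s, quat_conj(r))  # Equation (30): t = 2 s r*
assert np.allclose(t_back, t_dot)         # translation recovered exactly
```

The recovery is exact because $2 \cdot \frac{1}{2}\dot{t}\dot{r}\dot{r}^{*} = \dot{t}(\dot{r}\dot{r}^{*}) = \dot{t}$ for a unit quaternion $\dot{r}$.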

4.4. Implementation of the Proposed Solution

As shown in Figure 4, given the point clouds from two neighboring LiDAR stations, namely the reference station and the unregistered station, suppose that $(\boldsymbol{l}_a, m_a)$ and $(\boldsymbol{l}_b, m_b)$ represent the conjugate planar features separately extracted from them, and that $(\dot{l}_a, \dot{m}_a)$ and $(\dot{l}_b, \dot{m}_b)$ are the corresponding quaternion-based representations. In the implementation of the proposed algorithm, the following steps are suggested to obtain the seven unknown parameters:
(1) Based on the normal vectors of each pair of conjugate planar features, construct the matrix $A$ and calculate its maximum eigenvalue and the corresponding eigenvector $\dot{r}$;
(2) Calculate the quaternion $\dot{s}$ using Equation (26);
(3) Calculate the scale factor $\mu$ using Equation (25);
(4) Calculate the quaternion $\dot{t}$ using Equation (30).
Using $\dot{r}$, $\dot{s}$, and $\mu$, a plane in three-dimensional space can be transformed from one coordinate system to another. More importantly, since each pair of conjugate planar features is extracted from the two neighboring LiDAR stations, the point clouds can be merged using $\dot{r}$, $\dot{s}$, and $\mu$.
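Once $\dot{r}$, $\dot{s}$, and $\mu$ have been estimated, merging the unregistered station into the reference frame reduces to applying $\boldsymbol{p}_a = \mu R\,\boldsymbol{p}_b + \boldsymbol{t}$ to every point (our sketch: $R$ is taken from the $3\times 3$ block of $W(\dot{r})^T Q(\dot{r})$, and $\dot{t} = 2\dot{s}\dot{r}^{*}$ from Equation (30) is evaluated as $2\,W(\dot{r})^T\dot{s}$):

```python
import numpy as np

def Q(r):
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0, -r3,  r2],
                     [r2,  r3,  r0, -r1],
                     [r3, -r2,  r1,  r0]])

def W(r):
    r0, r1, r2, r3 = r
    return np.array([[r0, -r1, -r2, -r3],
                     [r1,  r0,  r3, -r2],
                     [r2, -r3,  r0,  r1],
                     [r3,  r2, -r1,  r0]])

def apply_registration(points, r, s, mu):
    """Map an N x 3 point array into the reference frame: p_a = mu * R p_b + t."""
    R = (W(r).T @ Q(r))[1:, 1:]      # rotation block of W(r)^T Q(r)
    t = 2.0 * (W(r).T @ s)[1:]       # vector part of t = 2 s r*   (Eq. 30)
    return mu * points @ R.T + t

# identity rotation, translation (1, 2, 3) encoded via s = t_dot * r / 2, scale 2
r = np.array([1.0, 0.0, 0.0, 0.0])
s = 0.5 * np.array([0.0, 1.0, 2.0, 3.0])
merged = apply_registration(np.array([[0.0, 0.0, 0.0],
                                      [1.0, 0.0, 0.0]]), r, s, mu=2.0)
assert np.allclose(merged, [[1.0, 2.0, 3.0], [3.0, 2.0, 3.0]])
```

The same function, applied to the full point array of the unregistered station, produces the merged cloud described in the text.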

5. Experiments and Results

The proposed planar feature-based registration algorithm was implemented in Matlab. Two experiments were designed to verify its correctness and feasibility, in which simulated data and real data were both incorporated.

5.1. Simulated Data

The first experiment was designed to verify the correctness of the proposed algorithm. Five pairs of simulated planar features were designed, as shown in Figure 5. Mathematical descriptions of the simulated planar features are given in Table 2, and the pre-established spatial similarity transformation parameters are given in Table 3.
The obtained transformation parameters are shown in Table 4, where the residuals between each pair of conjugate planar features after registration are also given.
Based on the results in Table 4, the calculated rotation matrix and scale factor were both consistent with the pre-established ones. The small deviation between the calculated translation vector and the pre-established one can be attributed to rounding errors in the calculation, which can largely be ignored in practical applications. It can be concluded that the proposed solution is correct and the obtained result is as expected.

5.2. Real Data

The second experiment was designed to verify the feasibility of the proposed algorithm. The point clouds were collected by a Riegl LMS-Z420i terrestrial laser scanner. To collect complete point cloud data of the building, a total of eight observation stations were set up around it. The average distance between each observation station and the building was about 100 m, and the average sampling interval was set to 4 cm. Seven pairs of conjugate planar features were extracted from the two neighboring point clouds, as shown in Figure 6; the extracted planar features are listed in Table 5.
The obtained transformation parameters and residuals between each pair of conjugate planar features are both given in Table 6. To further verify the correctness and the feasibility of the proposed solution, a unit quaternion-based, iterative method [8] was employed to estimate the transformation parameters; the results are also shown in Table 6.
The visual effects of the point clouds from the two neighboring LiDAR stations before and after registration are shown in Figure 7.
As shown in Table 6 and Figure 7, when the obtained transformation parameters were used to register the two neighboring stations, there were no significant residuals between each pair of conjugate planar features after registration. More importantly, there were also no significant differences between the results obtained by our method and by the unit quaternion-based iterative method [8]; the root mean square errors (RMSEs) of the normal and moment differences between conjugate planar features were exactly the same. It can be concluded that the newly proposed closed-form solution to pairwise registration of LiDAR point clouds is correct and feasible.

6. Discussion

On the assumption that the normal directions of each pair of conjugate planar features are the same, the proposed quad tuple-based representation method can provide a unique mathematical expression of any planar feature in three-dimensional space; the differences between two planar features can be determined by direct comparison, which makes it convenient to propose and implement a closed-form solution to planar feature-based point cloud registration. The results of the two designed experiments were both correct, which proves that the proposed closed-form solution is capable of dealing with point cloud registration problems on the condition that sufficient corresponding planar features are provided as registration primitives.
It should be noted that any vector in three-dimensional space can be expressed in another form with the same module and an opposite direction; however, the prerequisite of our solution is that normal directions of each pair of conjugate planar features should be exactly the same after registration. On the condition that the prerequisite is not fulfilled, ensuring the proper run of the proposed solution will be one of our objectives in the future.

7. Conclusions

When point and linear features are not sufficient, planar features are good alternatives for point cloud registration. Compared to point and linear features, extracting planar features from point clouds is more convenient, and the impact of random errors can be significantly reduced by least squares fitting. Based on the quad tuple-based representation of planar features in three-dimensional space, a closed-form solution to planar feature-based registration of point clouds was proposed. Two experiments were designed, in which both simulated data and real data were used to verify the correctness and effectiveness of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters; with the real data, there were no significant differences between the results obtained by our method and by the available iterative method [8], and the root mean square errors (RMSEs) of the normal and moment differences between conjugate planar features were the same.
It can be concluded that the newly proposed closed-form solution to pairwise registration of LiDAR point clouds is correct and feasible. The advantages of the proposed solution can be summarized as follows: (1) By using eigenvalue decomposition instead of linearizing the objective function, the presented solution does not require any initial estimates of the unknown transformation parameters, which assures the stability and robustness of the algorithm; (2) When no man-made reflectors are used, planar features can be extracted directly from point clouds by least squares fitting, so the impact of random errors is reduced significantly and the reliability of the estimated transformation parameters is higher than that of point and linear feature-based methods; (3) In contrast to Euler angle-based and rotation matrix-based methods, using dual quaternions to represent the spatial transformation greatly reduces the additional constraints in the estimation of the similarity transformation parameters.

Author Contributions

Conceptualization, Yongbo Wang; methodology, Yongbo Wang; software, Yongbo Wang; validation, Yongbo Wang, Nanshan Zheng and Zhengfu Bian; formal analysis, Nanshan Zheng; investigation, Yongbo Wang; writing—original draft preparation, Yongbo Wang; writing—review and editing, Nanshan Zheng and Zhengfu Bian; funding acquisition, Yongbo Wang and Zhengfu Bian. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2017YFE0119600 and by the National Natural Science Foundation of China, grant number 41271444.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors would like to thank the anonymous reviewers and editors for providing valuable comments and suggestions which helped improve the manuscript greatly.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, 9, 698–700.
  2. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  3. Habib, A.; Ghanma, M.; Morgan, M.; Al-Ruzouq, R. Photogrammetric and LiDAR Data Registration Using Linear Features. Photogramm. Eng. Remote Sens. 2005, 71, 699–707.
  4. Wang, Y.; Yang, H.; Liu, Y.; Niu, X. Linear-Feature-Constrained Registration of LiDAR Point Cloud via Quaternion. Geomat. Inf. Sci. Wuhan Univ. 2013, 38, 1057–1062.
  5. Sheng, Q.; Chen, S.; Liu, J.; Wang, H. LiDAR Point Cloud Registration based on Plücker Line. Acta Geod. Cartogr. Sin. 2016, 45, 58–64.
  6. Khoshelham, K. Closed-form solutions for estimating a rigid motion from plane correspondences extracted from point clouds. ISPRS J. Photogramm. Remote Sens. 2016, 114, 78–91.
  7. Pavan, N.; Santos, D.; Khoshelham, K. Global Registration of Terrestrial Laser Scanner Point Clouds Using Plane-to-Plane Correspondences. Remote Sens. 2020, 12, 1127.
  8. Wang, Y.B.; Zheng, N.S.; Bian, Z.F. A Quaternion-based, Planar Feature-constrained Algorithm for the Registration of LiDAR Point Clouds. Acta Opt. Sin. 2020, 40, 2310001.
  9. Zheng, D.; Yue, D.; Yue, J. Geometric feature constraint based algorithm for building scanning point cloud registration. Acta Geod. Cartogr. Sin. 2008, 37, 464–468.
  10. Wang, Y.; Wang, Y.; Han, X. A Unit Quaternion based, Point-Linear Feature Constrained Registration Approach for Terrestrial LiDAR Point Clouds. J. China Univ. Min. Technol. 2018, 47, 671–677.
  11. Chai, S.; Yang, X. Line primitive point cloud registration method based on dual quaternion. Acta Opt. Sin. 2019, 39, 1228006.
  12. Shen, Y.Z.; Chen, Y.; Zheng, D.H. A quaternion-based geodetic datum transformation algorithm. J. Geod. 2006, 80, 233–239.
  13. Horn, B.K.P. Closed-form solution of absolute orientation using unit quaternions. J. Opt. Soc. Am. A 1987, 4, 629–642.
  14. Horn, B.K.P. Closed-form solution of absolute orientation using orthonormal matrices. J. Opt. Soc. Am. A 1988, 5, 1127–1135.
  15. Walker, M.W.; Shao, L.; Volz, R.A. Estimating 3-D Location Parameters Using Dual Number Quaternions. CVGIP Image Underst. 1991, 54, 358–367.
  16. Wang, Y.; Wang, Y.; Wu, K.; Yang, H.; Zhang, H. A dual quaternion-based, closed-form pairwise registration algorithm for point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 94, 63–69.
  17. Hamilton, W.R. On quaternions, or on a new system of imaginaries in algebra. Philos. Mag. 1844, 25, 489–495.
  18. Zeng, H.; Yi, Q. Quaternion-Based Iterative Solution of Three-Dimensional Coordinate Transformation Problem. J. Comput. 2011, 6, 1361–1368.
  19. LaViola, J.J., Jr. A comparison of unscented and extended Kalman filtering for estimating quaternion motion. In Proceedings of the 2003 American Control Conference, Denver, CO, USA, 4–6 June 2003; pp. 2435–2440.
  20. Kim, A.; Golnaraghi, M.F. A quaternion-based orientation estimation algorithm using an inertial measurement unit. In Proceedings of the Position Location and Navigation Symposium, Monterey, CA, USA, 26–29 April 2004; pp. 268–272.
  21. Prošková, J. Application of dual quaternions algorithm for geodetic datum transformation. J. Appl. Math. 2011, 4, 225–236.
  22. Prošková, J. Discovery of Dual Quaternions for Geodesy. J. Geom. Graph. 2012, 16, 195–209.
  23. Grant, D.; Bethel, J.; Crawford, M. Point-to-plane registration of terrestrial laser scans. ISPRS J. Photogramm. Remote Sens. 2012, 72, 16–26.
  24. Wang, L.; Li, G.; Zhang, Q. Plane Fitting and Transformation in Laser Scanning. J. Geomat. Sci. Technol. 2012, 29, 101–104.
  25. Zhang, D.; Huang, T.; Li, G.; Jiang, M. Robust Algorithm for Registration of Building Point Clouds Using Planar Patches. J. Surv. Eng. 2012, 138, 31–37.
  26. Previtali, M.; Barazzetti, L.; Brumana, R.; Scaioni, M. Laser Scan Registration Using Planar Features. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 501–508.
  27. Pavan, N.L.; dos Santos, D.R. A global closed-form refinement for consistent TLS data registration. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1131–1135.
  28. Förstner, W.; Khoshelham, K. Efficient and accurate registration of point clouds with plane to plane correspondences. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; pp. 2165–2173.
  29. Chasles, M. Note sur les propriétés générales du système de deux corps semblables entr'eux. Bull. Sci. Math. Astron. Phys. Chim. 1830, 14, 321–326.
Figure 1. Quaternion-based rotation. The vector r is taken into r′ by a rotation of angle δ about the axis h with unit direction n. Establishing the quaternion q̇ = q₀ + q = cos(δ/2) + sin(δ/2)·n, the rotation can be expressed as r′ = q̇ r q̇* = (cos δ)r + (sin δ)(n × r) + (r · n)(1 − cos δ)n.
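The identity in the caption of Figure 1 is easy to verify numerically: the quaternion sandwich product q̇ r q̇* reproduces the expanded (Rodrigues) form. A small self-contained check, with helper names of our own choosing:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    pw, pv = p[0], np.asarray(p[1:])
    qw, qv = q[0], np.asarray(q[1:])
    return np.concatenate([[pw*qw - pv @ qv],
                           pw*qv + qw*pv + np.cross(pv, qv)])

def rotate(v, axis, delta):
    """r' = q r q*  with  q = cos(delta/2) + sin(delta/2) n."""
    n = np.asarray(axis) / np.linalg.norm(axis)
    q = np.concatenate([[np.cos(delta/2)], np.sin(delta/2) * n])
    q_conj = q * np.array([1.0, -1, -1, -1])
    return qmul(qmul(q, np.concatenate([[0.0], v])), q_conj)[1:]

# compare against the expanded form of the caption
r = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])
d = 0.7
lhs = rotate(r, n, d)
rhs = np.cos(d)*r + np.sin(d)*np.cross(n, r) + (r @ n)*(1 - np.cos(d))*n
print(np.allclose(lhs, rhs))   # True
```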
Figure 2. Dual quaternion-based rotation and translation. Here n is a unit vector specifying both the direction of the rotation axis and the direction of the translation: the rotation is about the line with direction n passing through the point p, with rotation angle θ, and d is the distance of the translation along n. The dual quaternion q̂ = ṙ + εṡ can be written as q̂ = [cos(θ̂/2), sin(θ̂/2)·n̂], where the dual vector n̂ = n + ε(p × n) represents the screw axis in three-dimensional space and θ̂ = θ + εd is the dual angle combining the rotation angle and the translation distance.
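The encoding of a rigid motion by a dual quaternion q̂ = ṙ + εṡ can be sketched as follows. We assume the common convention ṡ = ½ t⊗ṙ (translation t as a pure quaternion, Hamilton product ⊗); the helper names are illustrative, not the paper's notation:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions (w, x, y, z)."""
    pw, pv = p[0], np.asarray(p[1:])
    qw, qv = q[0], np.asarray(q[1:])
    return np.concatenate([[pw*qw - pv @ qv],
                           pw*qv + qw*pv + np.cross(pv, qv)])

def qconj(q):
    return q * np.array([1.0, -1, -1, -1])

def dq_from_rt(q_rot, t):
    """Dual quaternion (r, s) with s = (1/2) t_quat (x) r."""
    t_quat = np.concatenate([[0.0], t])
    return q_rot, 0.5 * qmul(t_quat, q_rot)

def dq_apply(r, s, p):
    """Apply the encoded rigid motion R p + t to point p."""
    p_rot = qmul(qmul(r, np.concatenate([[0.0], p])), qconj(r))[1:]
    t = 2.0 * qmul(s, qconj(r))[1:]    # translation recovered from (r, s)
    return p_rot + t

theta = np.pi / 3                      # 60 degrees about z, then translate
q_rot = np.array([np.cos(theta/2), 0.0, 0.0, np.sin(theta/2)])
r, s = dq_from_rt(q_rot, np.array([1.0, 2.0, 3.0]))
print(dq_apply(r, s, np.array([1.0, 0.0, 0.0])))   # ≈ [1.5, 2.866, 3.0]
```

Because the eight numbers of (ṙ, ṡ) carry both rotation and translation, only the two constraints ‖ṙ‖ = 1 and ṙ·ṡ = 0 are needed, which is the constraint reduction referred to in the conclusions.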
Figure 3. The quad tuple-based representation of a plane in three-dimensional space.
Figure 4. Flow chart of the proposed solution's implementation.
Figure 5. The distribution of the simulated planar features.
Figure 6. Acquired point clouds of the same building from two different LiDAR stations. (a) The point cloud from the reference station; (b) the point cloud from the unregistered station.
Figure 7. Visual effects of the point clouds from the two neighboring stations before and after registration. (a) Before registration; (b) after registration.
Table 1. Advantages and disadvantages of the iterative methods and the closed-form methods.

Iterative methods — Advantages: (1) the most popular and widely used; (2) the derivation of the formulas is simple. Disadvantages: (1) initial approximate estimates of the unknown parameters must be determined in advance; (2) the number of iterations is closely related to the choice of those initial estimates.

Closed-form methods — Advantages: (1) no initial estimates of the unknown transformation parameters are needed in advance; (2) the registration results are obtained in a single step. Disadvantages: (1) the derivation of the formulas is complex.
Table 2. Mathematical descriptions of the simulated planar features. Each plane is given by its normal l = (l_x, l_y, l_z) and a passing point p = (p_x, p_y, p_z) in meters.

Reference station:
(0.8503, 0.4794, −0.2173) | (3.2755, 3.7191, 3.6741)
(−0.4946, 0.8689, −0.0184) | (1.5054, 3.8689, 3.9816)
(0.1768, 0.4856, 0.2443) | (2.6377, 3.3596, 3.8370)
(−0.3844, 0.1692, 0.3877) | (1.5549, 3.7820, 3.9834)
(0.5032, −0.0879, 0.2564) | (2.4251, 3.2397, 3.8914)

Unregistered station:
(1.0000, 0.0000, 0.0000) | (3.0000, 0.0000, 0.0000)
(0.0000, 1.0000, 0.0000) | (0.0000, 2.0000, 0.0000)
(0.3300, 0.3300, 0.3300) | (1.5000, 0.0000, 0.0000)
(−0.3300, 0.3300, 0.3300) | (0.0000, 1.8000, 0.0000)
(0.3300, −0.3300, 0.3300) | (1.0000, 0.0000, 0.0000)
Table 3. Pre-established spatial similarity transformation parameters.

Rotation matrix R:
[  0.8503  −0.4946   0.1800
   0.4794   0.8689   0.1231
  −0.2173  −0.0183   0.9759 ]

Translation T: T_x = 2.000 m, T_y = 3.000 m, T_z = 4.000 m. Scale parameter μ = 0.5.
Table 4. Registration results and residuals between each pair of conjugate planar features after registration.

Rotation matrix R:
[  0.8503  −0.4946   0.1800
   0.4794   0.8689   0.1231
  −0.2173  −0.0184   0.9759 ]

Translation T: T_x = 2.0001 m, T_y = 3.0000 m, T_z = 4.0001 m. Scale parameter μ = 0.5000.

Residuals per conjugate plane (Δl_x, Δl_y, Δl_z; Δm):
(0.0191, 0.0609, 0.2090; 0.4399)
(−0.1243, −0.0714, −0.0289; −0.7919)
(0.1743, −0.1594, 0.1906; 0.3048)
(0.1149, 0.3019, −0.0178; 0.3810)
(0.0886, 0.2668, −0.0824; −0.6858)
RMSE: m_Δl = 3.0000 × 10⁻⁴; m_Δm = 6.2000 × 10⁻⁴ (Δm in m).
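The consistency between Tables 2 and 3 can be checked numerically: applying the pre-established rotation to the unregistered normals must reproduce the reference normals. In this sketch the minus signs of R, which are not legible in the extracted table, are restored under the assumption that R maps unregistered normals onto reference normals:

```python
import numpy as np

# Pre-established rotation of Table 3 (off-diagonal minus signs restored
# under the stated assumption)
R = np.array([[ 0.8503, -0.4946, 0.1800],
              [ 0.4794,  0.8689, 0.1231],
              [-0.2173, -0.0184, 0.9759]])

# first two conjugate normals of Table 2 (unregistered -> reference)
l_unreg = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
l_ref   = np.array([[0.8503, 0.4794, -0.2173], [-0.4946, 0.8689, -0.0184]])

print(np.abs(R @ l_unreg.T - l_ref.T).max())   # ~0: the normals agree
print(np.abs(R @ R.T - np.eye(3)).max())       # ~0 within table rounding
```

Note also that the fifth normal pair of Table 2 is consistent with the same R, which is how a reader can confirm the sign pattern independently.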
Table 5. Planar features extracted by least squares fitting. Each plane is given by its normal l = (l_x, l_y, l_z) and a passing point p = (p_x, p_y, p_z) in meters.

Reference station:
(−0.706, 0.7081, −0.0128) | (−70.7593, −6.3887, 26.4681)
(−0.7062, 0.7079, −0.0108) | (−50.4331, 3.7874, 22.2993)
(−0.7103, −0.7039, −0.0006) | (−50.5877, 14.9477, 22.2911)
(−0.006, 0.009, 0.9999) | (−61.8226, 24.8605, 25.7601)
(−0.7044, 0.7097, −0.0113) | (−63.6772, 26.7793, 16.8218)
(−0.7072, −0.707, 0.0013) | (−63.2206, 27.6485, 16.8952)
(−0.0024, 0.0142, 0.9999) | (−61.5702, 25.1852, 22.593)

Unregistered station:
(−0.2579, 0.9648, −0.0522) | (−63.6731, −7.892, 15.175)
(−0.2586, 0.9647, −0.0503) | (−36.1147, −0.4411, 6.7593)
(−0.9412, −0.2605, −0.2152) | (−35.7476, 0.6642, 17.2299)
(−0.2194, −0.0081, 0.9756) | (−41.3592, 13.9261, 19.8014)
(−0.256, 0.9654, −0.0508) | (−40.0006, 17.5009, 10.9515)
(−0.9401, −0.2659, −0.2132) | (−39.2034, 18.0633, 11.1137)
(−0.2123, −0.0054, 0.9772) | (−40.4619, 14.6743, 16.7639)
Table 6. Registration results and residuals between each pair of conjugate planar features.

Closed-form solution:
Rotation matrix R:
[  0.8503  −0.4944   0.1802
   0.4791   0.8690   0.1235
  −0.2177  −0.0186   0.9758 ]
Translation T: T_x = −23.0132 m, T_y = 29.3729 m, T_z = −2.2901 m. Scale parameter μ = 1.0000.
Residuals per conjugate plane (Δl_x, Δl_y, Δl_z; Δm in m):
(−0.0003, −0.0003, 0.0000; 0.0012)
(−0.0003, −0.0003, 0.0000; −0.0071)
(0.0000, 0.0000, 0.0003; −0.0391)
(0.0007, 0.0007, 0.0000; −0.0352)
(−0.0003, −0.0003, 0.0005; 0.0062)
(−0.0008, 0.0008, −0.0002; 0.0394)
(−0.0007, 0.0000, 0.0000; 0.0352)
RMSE: m_Δl = 0.0008; m_Δm = 0.0307 m.

Iterative solution:
Rotation matrix R:
[  0.8504  −0.4943   0.1802
   0.4790   0.8691   0.1234
  −0.2176  −0.0186   0.9759 ]
Translation T: T_x = −23.0085 m, T_y = 29.3766 m, T_z = −2.2902 m. Scale parameter μ = 1.0000.
Residuals per conjugate plane (Δl_x, Δl_y, Δl_z; Δm in m):
(−0.0004, −0.0004, 0.0000; 0.0012)
(−0.0004, −0.0004, 0.0000; −0.0071)
(0.0001, −0.0001, −0.0002; −0.0391)
(0.0008, 0.0008, 0.0000; −0.0352)
(−0.0004, −0.0004, 0.0005; 0.0062)
(−0.0007, 0.0007, −0.0001; 0.0394)
(−0.0006, 0.0000, 0.0000; 0.0352)
RMSE: m_Δl = 0.0008; m_Δm = 0.0307 m.
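The normal residuals of Table 6 can likewise be reproduced from Table 5: for each conjugate pair, Δl is the difference between the reference normal and the rotated unregistered normal. A check on the first pair, again with the minus signs of the estimated R restored under the assumption that R maps unregistered normals onto reference normals:

```python
import numpy as np

# Estimated rotation (closed-form scheme of Table 6, signs restored
# under the stated assumption)
R = np.array([[ 0.8503, -0.4944, 0.1802],
              [ 0.4791,  0.8690, 0.1235],
              [-0.2177, -0.0186, 0.9758]])

# first conjugate normal pair of Table 5
l_unreg = np.array([-0.2579, 0.9648, -0.0522])
l_ref   = np.array([-0.706, 0.7081, -0.0128])

delta_l = l_ref - R @ l_unreg      # normal residual, cf. Table 6
print(np.round(delta_l, 4))        # components on the order of 1e-4
```

The residual magnitudes recovered this way agree with the first row of Table 6 up to the four-decimal rounding of the printed values.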