Article

An Automatic 3D Point Cloud Registration Method Based on Biological Vision

1 Hypervelocity Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang 621000, China
2 National Innovation Institute of Defense Technology, Beijing 100071, China
3 College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(10), 4538; https://doi.org/10.3390/app11104538
Submission received: 20 April 2021 / Revised: 6 May 2021 / Accepted: 14 May 2021 / Published: 16 May 2021

Abstract

When measuring surface deformation, traditional point cloud registration methods cannot be applied, because the overlap between the point clouds before and after deformation is small and the accuracy of the initial values for registration cannot be guaranteed. To solve this problem, a complete solution is proposed: first, at least three cones are fixed to the target; then, initial values of the transformation matrix are calculated from the cone vertices; on this basis, point cloud registration is performed accurately through the iterative closest point (ICP) algorithm using the point clouds neighboring the cone vertices. To improve the automation of this solution, an accurate and automatic point cloud registration method based on biological vision is proposed. First, the three-dimensional (3D) coordinates of the cone vertices are obtained through multi-view observation, feature detection, data fusion, and shape fitting. In shape fitting, a closed-form solution for the cone vertex is derived on the basis of the quadratic form. Second, a random strategy is designed to calculate the initial values of the transformation matrix between the two point clouds. Then, combined with ICP, point cloud registration is realized automatically and precisely. The simulation results showed that, when the intensity of Gaussian noise ranged from 0 to 1 mr (where mr denotes the average mesh resolution of the models), the rotation and translation errors of point cloud registration were less than 0.1° and 1 mr, respectively. Lastly, a camera-projector system was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel; the experimental results showed that the measurement precision for surface deformation was better than 0.05 mm when the deformation was smaller than 4 mm.

1. Introduction

Near-space is the transitional region between the domains of traditional aeronautics and space. Near-space supersonic vehicles have great potential, but when flown for a long time in an aerothermal environment, their surfaces can deform, which causes functional failure. Thus, the deformation properties of materials in an aerothermal environment urgently need to be explored. An arc-heated wind tunnel is the main device used to simulate an aerothermal environment. There are many methods to reconstruct 3D shape data at different times during ablation tests, but aligning the resulting point clouds is difficult because their overlap is too small.
Point cloud registration involves calculating a rigid transformation matrix, consisting of a rotation matrix and a translation vector, that minimizes the alignment error between two point clouds; it has been widely used for simultaneous localization and mapping (SLAM) [1,2,3], multi-view point cloud registration [4,5], object recognition [6,7], etc. The classical iterative closest point (ICP) algorithm, proposed by Besl and McKay in 1992 to solve the registration problem of free-form surfaces, is the most widely used method for point cloud registration [8]. The basic idea is to search, for each point in the reference point cloud, the corresponding closest point in the test point cloud. From the set of all closest points, a transformation matrix between the two point clouds is calculated, yielding a registration error. If the registration error does not satisfy the stopping criterion, the transformed test point cloud is taken as the new test point cloud, and the above steps are repeated until the stopping criterion is satisfied. In the classical ICP algorithm, the registration error takes one of two forms, i.e., point-to-point or point-to-plane; in general, the ICP algorithm based on the point-to-plane registration error converges faster [9]. Using the classical ICP algorithm as a foundation, researchers have proposed many variants [10,11,12,13,14]. The key step of these methods is the search for the set of closest points. However, in a real scenario of surface deformation, it is very difficult to determine the closest points accurately without any prior information. Moreover, these methods are very sensitive to the initial values of the transformation matrix: if the quality of the initial values is not good enough, they easily fall into local minima. Another way to solve the point cloud registration problem is to use probabilistic methods, whose core idea is to transform the registration problem into an optimization problem over the parameters of a probability model [15,16,17,18,19,20]. Compared with ICP and its variants, these methods are more robust to noise and outliers, but their computational complexity is high. Similar to ICP methods, they can only align point clouds with large overlaps, and a high-quality initial value of the transformation matrix is also required. In the application of surface deformation measurement, the overlap between point clouds is small and the quality of the initial values cannot be guaranteed; thus, if the above methods are used directly, the registration error is large.
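For concreteness, the core ICP loop can be sketched as follows; this is a minimal point-to-point variant in Python (numpy/scipy assumed), our own illustration rather than any implementation used in this paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, max_iter=50, tol=1e-8):
    """Align src (Nx3) to dst (Mx3); returns R (3x3) and t (3,)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                      # reused for closest-point queries
    prev_err = np.inf
    for _ in range(max_iter):
        cur = src @ R.T + t                  # apply the current estimate
        dists, idx = tree.query(cur)         # closest point in the test cloud
        corr = dst[idx]
        # Closed-form rigid update from centered correspondences (SVD/Kabsch)
        mu_s, mu_d = cur.mean(0), corr.mean(0)
        H = (cur - mu_s).T @ (corr - mu_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # right-handed rotation
        dt = mu_d - dR @ mu_s
        R, t = dR @ R, dR @ t + dt           # compose the increment
        err = dists.mean()
        if abs(prev_err - err) < tol:        # stopping criterion
            break
        prev_err = err
    return R, t
```

As noted above, such a loop converges only when the initial estimate is close and the overlap is large; the rest of this paper supplies both.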
In order to solve the problem of surface deformation measurement, a complete solution is proposed; Figure 1 shows the flowchart. Firstly, the target is fixed to a mounting bracket with at least three cones. Secondly, a cone vertex detection algorithm is derived to extract feature points. Thirdly, the point cloud before deformation is accurately aligned to the point cloud after deformation by an automatic registration algorithm, using the neighboring point cloud of each cone vertex, which is not deformed during the ablation tests. The surface deformation can then be obtained. This paper is organized as follows: In Section 2, we introduce the detection principles of cone vertices in detail; in Section 3, we derive an automatic registration algorithm for point clouds; in Section 4, we introduce the research methodology, including cone vertex detection, automatic registration, and surface deformation measurement; in Section 5, we present the research results: Section 5.1 and Section 5.2 present the accuracy and robustness of the cone vertex detection algorithm and the automatic registration algorithm, respectively, and Section 5.3 provides the results of surface deformation measurement; in Section 6, we discuss the research results; and, in Section 7, we conclude with the contributions of this study and the next steps in research.

2. Cone Vertex Detection

As shown in Figure 2, the perception of geometric features in a 3D scene for human beings relies on multi-view observation and data fusion, which confer the advantages of high detection accuracy and robustness to noise. The detection process is executed in several steps. Firstly, a target is independently observed from different viewpoints. Its 3D point clouds are projected onto the retina, and corresponding 2D images are generated. Secondly, the features of interest in each 2D image are extracted. All 2D feature points are reprojected onto the surface of the target, and the corresponding 3D feature points are obtained. Thirdly, all 3D feature points observed from different viewpoints are fused, and the fake 3D feature points (e.g., fake vertices) are deleted according to the analysis made by the brain. Lastly, the target’s corresponding geometric shape is fitted using the neighboring point cloud of each 3D feature point on the surface of the target in order to improve the precision of the detected 3D feature points.

2.1. Multi-View Observation

Since the measured space is three-dimensional, in order to carry out the all-round observation of a target, three base planes need to be selected as observation planes, for example, XOY, YOZ, and ZOX planes. In each observation plane, the target can be observed from different viewpoints. Without loss of generality, the XOY observation plane can be taken as an example, as shown in Figure 3a.
At viewpoint I, the observed 3D point is notated as $P_i^1 = (x_i^1, y_i^1, z_i^1)$ $(i = 1, 2, \ldots, n-1, n)$, where $n$ is the number of points. Its corresponding 2D projection point on the YOZ plane is notated as $\tilde{p}_i^1 = (\tilde{x}_i^1, \tilde{y}_i^1)$, where:

$$\begin{cases} \tilde{x}_i^1 = y_i^1 \\ \tilde{y}_i^1 = z_i^1 \end{cases}$$ (1)
The center point of the 2D point cloud is notated as $\tilde{p}_c^1 = (\tilde{x}_c^1, \tilde{y}_c^1)$:

$$\tilde{p}_c^1 = \frac{\sum_{i=1}^{n} \tilde{p}_i^1}{n}.$$ (2)
The width $w_1$ and height $h_1$ of its corresponding minimum enclosing rectangle (MER) are calculated as follows:

$$\begin{cases} w_1 = \max\limits_{i=1,2,\ldots,n} (\tilde{x}_i^1) - \min\limits_{i=1,2,\ldots,n} (\tilde{x}_i^1) \\ h_1 = \max\limits_{i=1,2,\ldots,n} (\tilde{y}_i^1) - \min\limits_{i=1,2,\ldots,n} (\tilde{y}_i^1) \end{cases}$$ (3)
In the YOZ plane, the imaging process of the retina can be simulated to generate a binary image $I^1$ with width $W_1$ and height $H_1$:

$$\begin{cases} W_1 = k w_1 \\ H_1 = k h_1 \end{cases}, \quad k \geq 1,$$ (4)

where $k$ is a zoom coefficient.
Then, the pixel coordinates of the center point $I_c^1 = (\bar{x}_c^1, \bar{y}_c^1)$ of the image can be calculated as follows:

$$\begin{cases} \bar{x}_c^1 = (W_1 - 1)/2 \\ \bar{y}_c^1 = (H_1 - 1)/2 \end{cases}$$ (5)
Then, the 2D point cloud $\tilde{p}_i^1$ can be shifted to the image pixel coordinate system:

$$\bar{p}_i^1 = \tilde{p}_i^1 - \tilde{p}_c^1 + I_c^1,$$ (6)

where $\bar{p}_i^1$ is the corresponding point of $\tilde{p}_i^1$ in image $I^1$. If the grayscale of $\bar{p}_i^1$ is directly set to 255 and that of the other pixels is set to 0, the target area in the binary image is only a set of discrete pixels rather than a connected domain, which cannot be used to detect corners.
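A minimal sketch of Equations (1)–(6) in Python (numpy assumed; function and variable names are our own):

```python
import numpy as np

def project_to_image(P, k=2.0):
    """Project a 3D cloud P (n,3) onto the YOZ plane and map it into the
    pixel frame of a simulated retinal image of size (W, H), per Eqs. (1)-(6)."""
    p2d = P[:, 1:3]                                   # Eq. (1): keep (y, z)
    pc = p2d.mean(axis=0)                             # Eq. (2): 2D centroid
    w, h = p2d.max(axis=0) - p2d.min(axis=0)          # Eq. (3): MER width/height
    W, H = int(np.ceil(k * w)), int(np.ceil(k * h))   # Eq. (4): zoom k >= 1
    Ic = np.array([(W - 1) / 2.0, (H - 1) / 2.0])     # Eq. (5): image center
    pix = p2d - pc + Ic                               # Eq. (6): shift to pixels
    return pix, (W, H)
```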
As shown in Figure 3b, A, B, and C are the three vertices of an arbitrary triangle patch $T_j$ $(j = 1, 2, \ldots, t-1, t)$ on the target surface, and its corresponding triangle in the image is notated as $\bar{T}_j$, where $t$ is the number of triangle patches. Both $T_j$ and $\bar{T}_j$ are connected domains. The coordinates of A, B, and C are notated as $P_A^1$, $P_B^1$, and $P_C^1$, respectively. The coordinates of $\bar{A}$, $\bar{B}$, and $\bar{C}$ in the image are notated as $\bar{p}_A^1$, $\bar{p}_B^1$, and $\bar{p}_C^1$, respectively. $\bar{Q}_k$ is an arbitrary pixel in image $I^1$, and its coordinates are notated as $\bar{q}_k^1$, where $k = 1, 2, \ldots, m-1, m$, and $m$ is the number of pixels in image $I^1$. Then, the following vectors can be obtained:

$$\begin{cases} \overrightarrow{\bar{Q}_k \bar{A}} = \bar{p}_A^1 - \bar{q}_k^1 \\ \overrightarrow{\bar{Q}_k \bar{B}} = \bar{p}_B^1 - \bar{q}_k^1 \\ \overrightarrow{\bar{Q}_k \bar{C}} = \bar{p}_C^1 - \bar{q}_k^1 \end{cases}$$ (7)
The sum of the angles between these vectors is notated as $\beta_j$:

$$\beta_j = \angle\bar{A}\bar{Q}_k\bar{B} + \angle\bar{B}\bar{Q}_k\bar{C} + \angle\bar{C}\bar{Q}_k\bar{A} = \arccos\left(\frac{\overrightarrow{\bar{Q}_k\bar{A}} \cdot \overrightarrow{\bar{Q}_k\bar{B}}}{|\overrightarrow{\bar{Q}_k\bar{A}}||\overrightarrow{\bar{Q}_k\bar{B}}|}\right) + \arccos\left(\frac{\overrightarrow{\bar{Q}_k\bar{B}} \cdot \overrightarrow{\bar{Q}_k\bar{C}}}{|\overrightarrow{\bar{Q}_k\bar{B}}||\overrightarrow{\bar{Q}_k\bar{C}}|}\right) + \arccos\left(\frac{\overrightarrow{\bar{Q}_k\bar{C}} \cdot \overrightarrow{\bar{Q}_k\bar{A}}}{|\overrightarrow{\bar{Q}_k\bar{C}}||\overrightarrow{\bar{Q}_k\bar{A}}|}\right).$$ (8)
If $\bar{Q}_k$ is inside $\bar{T}_j$ or lies exactly on the border of $\bar{T}_j$, then $\beta_j$ must be equal to 360°. On this basis, image binarization under geometric constraints can be executed according to the following equation:

$$I^1(\bar{Q}_k) = \begin{cases} 255, & \text{if } \min\limits_{j=1,2,\ldots,t} \left(|\beta_j - 360°|\right) = 0 \\ 0, & \text{if } \min\limits_{j=1,2,\ldots,t} \left(|\beta_j - 360°|\right) \neq 0 \end{cases}$$ (9)
The function $I^1(\bar{Q}_k) = g$ means that the grayscale of pixel $\bar{Q}_k$ in image $I^1$ is set to $g$. Figure 3c shows the result of image binarization under geometric constraints, which completes the first observation.
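The binarization of Equations (7)–(9) reduces to a per-pixel point-in-triangle test; a brute-force sketch (our own, unoptimized; numpy assumed, with a small angular tolerance in place of exact equality) looks as follows:

```python
import numpy as np

def angle_sum(q, a, b, c):
    """Eqs. (7)-(8): sum of the three angles subtended at pixel q."""
    va, vb, vc = a - q, b - q, c - q
    def ang(u, v):
        cosv = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        return np.degrees(np.arccos(np.clip(cosv, -1.0, 1.0)))
    return ang(va, vb) + ang(vb, vc) + ang(vc, va)

def binarize(shape, triangles, tol=1e-3):
    """Eq. (9): grayscale 255 iff the pixel lies in some projected triangle."""
    H, W = shape
    img = np.zeros((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            q = np.array([x, y], dtype=float)
            if any(abs(angle_sum(q, a, b, c) - 360.0) < tol
                   for a, b, c in triangles):
                img[y, x] = 255
    return img
```

In practice one would restrict the inner test to the bounding box of each triangle; the double loop is kept here only to mirror Equation (9) literally.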
As shown in Figure 3a, the point cloud $P_i^1 = (x_i^1, y_i^1, z_i^1)$ can be rotated by $\alpha$ degrees around the Z-axis; then, the point cloud $P_i^2 = (x_i^2, y_i^2, z_i^2)$ $(i = 1, 2, \ldots, n-1, n)$ observed at position II can be obtained according to Equation (10):

$$P_i^2 = R_Z(\alpha) P_i^1.$$ (10)
From Equations (1)–(9), the simulated image $I^2$ at position II can be generated. In the same way, all simulated images $I^u$ $(u = 1, 2, \ldots, s-1, s)$ can be obtained, where $s$ is an integer obtained as:

$$s = \frac{360°}{\alpha}.$$ (11)

When the XOY, YOZ, and ZOX planes are all taken as observation planes, $3s$ simulated binary images $I^u$ $(u = 1, 2, \ldots, 3s-1, 3s)$ can be obtained in total.

2.2. Cone Vertex Recognition

The Harris operator can be used to detect corners in all simulated images $I^u$ $(u = 1, 2, \ldots, 3s-1, 3s)$. $\bar{Q}_v^u$ $(v = 1, 2, \ldots, r^u-1, r^u)$ represents the $v$-th detected corner in image $I^u$, and its pixel coordinates are notated as $\bar{q}_v^u$, where $r^u$ is the number of detected corners in image $I^u$. Then, its corresponding point $\tilde{q}_v^u$ in 2D space can be calculated according to Equation (12):

$$\tilde{q}_v^u = \bar{q}_v^u - I_c^u + \tilde{p}_c^u.$$ (12)
The closest point $\tilde{p}_{h_v^u}^u$ to $\tilde{q}_v^u$ in the 2D point cloud can then be searched:

$$h_v^u = \arg\min_{i = 1, 2, \ldots, n-1, n} \, dis(\tilde{q}_v^u, \tilde{p}_i^u),$$ (13)

where $h_v^u$ is the index of $\tilde{p}_{h_v^u}^u$ in the 2D point cloud. Since the corresponding relationship of the points remains unchanged during the projection from the 3D point cloud to the 2D point cloud, the point in the 3D point cloud corresponding to $\tilde{q}_v^u$ can be written as $P_{h_v^u}^1$.
The clustering of $\{P_{h_v^u}^1\}$ is executed on the basis of Euclidean distance according to the following steps (a minimal sketch of these steps, combined with the filtering rule below, follows at the end of this subsection):

(a) $\eta$ is notated as the number of categories and initialized as:

$$\eta = \sum_{u=1}^{3s} r^u + 1,$$

where $r^u$ is the number of detected corners in image $I^u$ and $3s$ is the number of simulated images.

(b) $d_{ij}$ represents the distance between the centers of $C_i$ and $C_j$, where $C_i$ and $C_j$ are the $i$-th and $j$-th categories. If the minimum of $\{d_{ij}\}$ is smaller than the distance threshold $\lambda$, cluster $\{P_{h_v^u}^1\}$ into $\eta - 1$ categories and set $\eta = \eta - 1$.

(c) Repeat step (b) until the minimum of $\{d_{ij}\}$ is equal to or greater than $\lambda$.

(d) The coordinates of each clustering center can be obtained by calculating the mean value of the members of its corresponding category.
$\psi_i$ represents the number of members of $C_i$. Since a cone vertex is a robust feature point, it should be observable from the most viewpoints. Thus, all categories $C_i$ that satisfy $\psi_i < \kappa$ should be deleted, as shown in Figure 4c, where $\kappa$ is a threshold on the number of observations. Figure 4d shows the rough localization result of a cone vertex. The detected cone vertices are notated as $\chi_i$ $(i = 1, 2, \ldots, \rho-1, \rho)$, where $\rho$ is the number of all detected cone vertices.
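A compact sketch of steps (a)–(d) and the $\psi$/$\kappa$ filtering rule (our own greedy merge on cluster centers; numpy assumed):

```python
import numpy as np

def cluster_and_filter(pts, lam, kappa):
    """pts: (m,3) reprojected 3D feature points from all views.
    Merge the two closest clusters until the minimum center distance
    reaches lam (steps (a)-(c)), then keep centers seen >= kappa times."""
    clusters = [[p] for p in pts]                # step (a): one point each
    while len(clusters) > 1:
        centers = np.array([np.mean(c, axis=0) for c in clusters])
        d = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] >= lam:                       # step (c): stop merging
            break
        clusters[i].extend(clusters[j])          # step (b): merge closest pair
        del clusters[j]
    # step (d) + filtering: centers of well-observed clusters only
    return np.array([np.mean(c, axis=0)
                     for c in clusters if len(c) >= kappa])
```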

2.3. Shape Fitting

Without loss of generality, $\chi_1$ can be taken as an example. In this case, $P_i^1 = (x_i^1, y_i^1, z_i^1)$ $(i = 1, 2, \ldots, \xi-1, \xi)$ represents the neighboring point cloud of $\chi_1$, where $\xi$ is the number of neighboring points. The quadratic form of the conic surface can be written as:

$$a_1 (x_i^1)^2 + a_2 (y_i^1)^2 + a_3 (z_i^1)^2 + a_4 x_i^1 y_i^1 + a_5 x_i^1 z_i^1 + a_6 y_i^1 z_i^1 + a_7 x_i^1 + a_8 y_i^1 + a_9 z_i^1 + a_{10} = 0.$$ (14)
Then, Equation (14) can be rewritten in matrix form as:

$$E \varphi = 0.$$ (15)

The matrices $E$ and $\varphi$ are as follows:

$$E = \begin{pmatrix} (x_1^1)^2 & (y_1^1)^2 & (z_1^1)^2 & x_1^1 y_1^1 & x_1^1 z_1^1 & y_1^1 z_1^1 & x_1^1 & y_1^1 & z_1^1 & 1 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ (x_\xi^1)^2 & (y_\xi^1)^2 & (z_\xi^1)^2 & x_\xi^1 y_\xi^1 & x_\xi^1 z_\xi^1 & y_\xi^1 z_\xi^1 & x_\xi^1 & y_\xi^1 & z_\xi^1 & 1 \end{pmatrix}, \qquad \varphi = (a_1 \; a_2 \; a_3 \; a_4 \; a_5 \; a_6 \; a_7 \; a_8 \; a_9 \; a_{10})^T.$$
$E$ can be decomposed using a singular value decomposition (SVD):

$$E = U_E D_E V_E^T.$$ (16)
Then, $\varphi$ is given by the last column of $V_E$. The quadratic form of the conic surface can also be rewritten in matrix form as:

$$(x_i^1 \; y_i^1 \; z_i^1 \; 1) \, F \, (x_i^1 \; y_i^1 \; z_i^1 \; 1)^T = 0,$$ (17)

where

$$F = \begin{pmatrix} a_1 & a_4/2 & a_5/2 & a_7/2 \\ a_4/2 & a_2 & a_6/2 & a_8/2 \\ a_5/2 & a_6/2 & a_3 & a_9/2 \\ a_7/2 & a_8/2 & a_9/2 & a_{10} \end{pmatrix}.$$
$F$ can be decomposed using an SVD:

$$F = U_F D_F V_F^T.$$ (18)

The homogeneous coordinates of the cone vertex are given by the last column of $V_F$ (see proof in Appendix A), notated as $[v_1 \; v_2 \; v_3 \; v_4]^T$. Thus, the coordinates of the cone vertex $\chi_1$ are:

$$\chi_1 = \left( \frac{v_1}{v_4}, \frac{v_2}{v_4}, \frac{v_3}{v_4} \right).$$ (19)
In the same way, the coordinates of all cone vertices $\chi_i = (x_i, y_i, z_i)$ $(i = 1, 2, \ldots, \rho-1, \rho)$ can be obtained.
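The whole of Section 2.3 condenses to two SVDs; a sketch follows (ours, numpy assumed; it presumes the neighborhood genuinely lies on a cone and that $v_4 \neq 0$):

```python
import numpy as np

def cone_vertex(P):
    """P: (xi,3) neighboring points of a detected vertex; returns the vertex."""
    x, y, z = P[:, 0], P[:, 1], P[:, 2]
    # Rows of E per Eq. (15); phi is the right singular vector with the
    # smallest singular value, i.e., the last row of Vh (Eq. (16)).
    E = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                         x, y, z, np.ones_like(x)])
    a = np.linalg.svd(E)[2][-1]
    F = np.array([[a[0],   a[3]/2, a[4]/2, a[6]/2],
                  [a[3]/2, a[1],   a[5]/2, a[7]/2],
                  [a[4]/2, a[5]/2, a[2],   a[8]/2],
                  [a[6]/2, a[7]/2, a[8]/2, a[9]]])
    v = np.linalg.svd(F)[2][-1]              # last column of V_F, Eq. (18)
    return v[:3] / v[3]                      # dehomogenize, Eq. (19)
```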

3. Automatic Registration Algorithm

$\{\chi_i^1\}$ $(i = 1, 2, \ldots, \rho_1-1, \rho_1)$ and $\{\chi_i^2\}$ $(i = 1, 2, \ldots, \rho_2-1, \rho_2)$ are notated as the cone vertices in the reference and test point clouds, respectively. Since the corresponding relationship between them is unknown, $\{\chi_i^1\}$ and $\{\chi_i^2\}$ cannot be directly used to calculate the transformation matrix between the reference and test point clouds. The transformation matrix consists of a 3 × 3 rotation matrix $R$ and a 3 × 1 translation vector $T$. To solve the corresponding relationship and improve the robustness of the algorithm, a random strategy is used to calculate the correspondence between $\{\chi_i^1\}$ and $\{\chi_i^2\}$.
Two sequences of numbers are constructed in which each element is drawn with uniform probability:

$$^1\Gamma = [1, 2, \ldots, \rho_1 - 1, \rho_1], \qquad {}^2\Gamma = [1, 2, \ldots, \rho_2 - 1, \rho_2].$$ (20)
Three different elements $\tau_j^1$ $(j = 1, 2, 3)$ and $\tau_j^2$ $(j = 1, 2, 3)$ are taken from $^1\Gamma$ and $^2\Gamma$, respectively. The corresponding cone vertices are:

$$\chi_{\tau_j^1}^1 = (x_{\tau_j^1}, y_{\tau_j^1}, z_{\tau_j^1}), \qquad \chi_{\tau_j^2}^2 = (x_{\tau_j^2}, y_{\tau_j^2}, z_{\tau_j^2}).$$
The transformation between $\chi_{\tau_j^1}^1$ and $\chi_{\tau_j^2}^2$ can be written as Equation (21):

$$(\chi_{\tau_j^2}^2)^T = R \times (\chi_{\tau_j^1}^1)^T + T.$$ (21)
The matrices $^1M$ and $^2M$ can be constructed using $\{\chi_{\tau_j^1}^1\}$ and $\{\chi_{\tau_j^2}^2\}$, respectively:

$$^1M = \begin{pmatrix} \chi_{\tau_1^1}^1 \\ \chi_{\tau_2^1}^1 \\ \chi_{\tau_3^1}^1 \end{pmatrix} = \begin{pmatrix} x_{\tau_1^1} & y_{\tau_1^1} & z_{\tau_1^1} \\ x_{\tau_2^1} & y_{\tau_2^1} & z_{\tau_2^1} \\ x_{\tau_3^1} & y_{\tau_3^1} & z_{\tau_3^1} \end{pmatrix}, \qquad ^2M = \begin{pmatrix} \chi_{\tau_1^2}^2 \\ \chi_{\tau_2^2}^2 \\ \chi_{\tau_3^2}^2 \end{pmatrix} = \begin{pmatrix} x_{\tau_1^2} & y_{\tau_1^2} & z_{\tau_1^2} \\ x_{\tau_2^2} & y_{\tau_2^2} & z_{\tau_2^2} \\ x_{\tau_3^2} & y_{\tau_3^2} & z_{\tau_3^2} \end{pmatrix}.$$ (22)
Their center points can be calculated using Equation (23):

$$^1M_c = \frac{\sum_{j=1}^{3} \chi_{\tau_j^1}^1}{3}, \qquad ^2M_c = \frac{\sum_{j=1}^{3} \chi_{\tau_j^2}^2}{3}.$$ (23)
The origin of the coordinate system can be shifted to the center points of $^1M$ and $^2M$:

$$^1\bar{M} = {}^1M - {}^1M_c, \qquad ^2\bar{M} = {}^2M - {}^2M_c.$$ (24)
Then, only a rotation transformation remains between $^1\bar{M}$ and $^2\bar{M}$:

$$^2\bar{M} = R \times ({}^1\bar{M}).$$ (25)
The matrix $\Omega = ({}^2\bar{M})^T ({}^1\bar{M})$ can be decomposed using an SVD as follows:

$$\Omega = U_\Omega D_\Omega V_\Omega^T.$$ (26)
$U_\Omega(i)$ $(i = 1, 2, 3)$ represents the $i$-th column of $U_\Omega$; thus, $R$ must take one of the following eight forms (all sign combinations of the three columns):

$$\begin{cases} R_1 = [U_\Omega(1) \;\; U_\Omega(2) \;\; U_\Omega(3)] V_\Omega^T \\ R_2 = [-U_\Omega(1) \;\; U_\Omega(2) \;\; U_\Omega(3)] V_\Omega^T \\ R_3 = [U_\Omega(1) \;\; -U_\Omega(2) \;\; U_\Omega(3)] V_\Omega^T \\ R_4 = [U_\Omega(1) \;\; U_\Omega(2) \;\; -U_\Omega(3)] V_\Omega^T \end{cases}, \qquad \begin{cases} R_5 = [-U_\Omega(1) \;\; -U_\Omega(2) \;\; U_\Omega(3)] V_\Omega^T \\ R_6 = [-U_\Omega(1) \;\; U_\Omega(2) \;\; -U_\Omega(3)] V_\Omega^T \\ R_7 = [U_\Omega(1) \;\; -U_\Omega(2) \;\; -U_\Omega(3)] V_\Omega^T \\ R_8 = [-U_\Omega(1) \;\; -U_\Omega(2) \;\; -U_\Omega(3)] V_\Omega^T \end{cases}$$
The rotation error is defined as follows:

$$e_i = \frac{\mathrm{trace}\left( \left[ {}^2\bar{M} - R_i \times ({}^1\bar{M}) \right] \times \left[ {}^2\bar{M} - R_i \times ({}^1\bar{M}) \right]^T \right)}{3} + \zeta_p \times \left( \left| R_i^T R_i \right| - 1 \right).$$ (27)
The first item in Equation (27) is the data term, representing the geometric error of the rotation estimation; the second item is the constraint term, restricting the solution to a right-handed coordinate system; trace(·) is the function calculating the sum of the diagonal elements of a matrix; and $\zeta_p$ is a penalty coefficient. Therefore,

$$\begin{cases} R = \arg\min\limits_{R_i} (e_i) \\ T = {}^2M_c - R \times {}^1M_c \\ e_{\min} = \min\limits_{i=1,2,\ldots,7,8} (e_i) \end{cases}$$ (28)
The threshold of the rotation error is notated as $\varepsilon_e$. If $e_{\min} < \varepsilon_e$, the cone vertices in $^1M$ and $^2M$ are corresponding points, and the solution for the rotation and translation is credible. Otherwise, the following steps are repeated until $e_{\min} < \varepsilon_e$ or the number of iterations exceeds the threshold $N_{\max}$: (a) take three different elements from $^1\Gamma$ and $^2\Gamma$, respectively; (b) calculate $R$, $T$, and $e_{\min}$ according to Equations (22)–(28). Since each element is drawn with uniform probability, the value of $N_{\max}$ can be calculated using Equation (29):

$$N_{\max} = \frac{A_{\rho_1}^3 A_{\rho_2}^3}{A_3^3} = \frac{\rho_1 \rho_2 (\rho_1 - 1)(\rho_1 - 2)(\rho_2 - 1)(\rho_2 - 2)}{6}.$$ (29)
The corresponding point pairs between $\{\chi_i^1\}$ and $\{\chi_i^2\}$ are notated as $\{\hat{\chi}_i^1, \hat{\chi}_i^2\}$ $(i = 1, 2, \ldots, n_c-1, n_c)$; thus, $\{\hat{\chi}_i^1, \hat{\chi}_i^2\}$ should satisfy Equation (30):

$$\left\| R \times (\hat{\chi}_i^1)^T + T - (\hat{\chi}_i^2)^T \right\|_2 \leq \delta_d,$$ (30)

where $\delta_d$ is the distance threshold.
$\{P_j^1\}$ $(j = 1, 2, \ldots, n_1-1, n_1)$ represents the neighboring point cloud of $\{\hat{\chi}_i^1\}$ in the reference point cloud, and $\{P_j^2\}$ $(j = 1, 2, \ldots, n_2-1, n_2)$ represents the neighboring point cloud of $\{\hat{\chi}_i^2\}$ in the test point cloud. On this basis, taking the solution of Equation (28) as the initial value, the rotation matrix $\tilde{R}$ and translation vector $\tilde{T}$ can be solved using the ICP algorithm. Figure 5 shows the flowchart of the automatic registration algorithm.
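A sketch of this random correspondence search follows (ours; for brevity, the standard SVD closed form with a determinant correction replaces the explicit enumeration of the eight candidate rotations of Equation (26), which selects the same right-handed solution; the threshold value is a placeholder whose units depend on the data):

```python
import itertools
import numpy as np

def match_triplets(V1, V2, eps_e=1e-3):
    """V1 (rho1,3), V2 (rho2,3): detected cone vertices in the reference and
    test clouds. Returns (R, T) once a vertex triplet aligns rigidly, else
    None. Ordered triplets from V1 against unordered triplets from V2 give
    exactly N_max = A(rho1,3)*A(rho2,3)/A(3,3) trials, matching Eq. (29)."""
    for i in itertools.permutations(range(len(V1)), 3):
        for j in itertools.combinations(range(len(V2)), 3):
            M1, M2 = V1[list(i)], V2[list(j)]
            c1, c2 = M1.mean(0), M2.mean(0)          # Eq. (23): centroids
            H = (M1 - c1).T @ (M2 - c2)              # cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # right-handed rotation
            e = np.linalg.norm((M2 - c2) - (M1 - c1) @ R.T)
            if e < eps_e:                            # credible triplet found
                return R, c2 - R @ c1                # T per Eq. (28)
    return None
```

The ICP refinement of Figure 5 then starts from this (R, T) and the undeformed neighborhoods of the matched vertices.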

4. Research Methodology

4.1. Cone Vertex Detection

The algorithm's robustness and accuracy can be evaluated by the detection rate $S_D$ and the location error $E_D$ under Gaussian noise of different intensities. $S_D$ is defined as follows:

$$S_D = \frac{N_D}{N_T},$$ (31)

where $N_D$ is the number of detected cone vertices and $N_T$ is the total number of cone vertices. $E_D$ is defined as the root mean square of the individual location errors:

$$E_D = \sqrt{\frac{\sum_{i=1}^{N_D} E_i^2}{N_D}},$$ (32)

where $E_i$ represents the location error of the $i$-th cone vertex.
A cone model provided by [21] was adopted to test the algorithm performance. To test the influence of noise intensity on the detection of cone vertices, Gaussian noise was independently added to the X-, Y-, and Z-axes of each scene point. The standard deviation σGN of Gaussian noise ranged from 0 to 1 mr with a step size of 0.1 mr, where mr denotes the average mesh resolution of the models.
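The noise model of the simulations can be reproduced as follows (a sketch, ours; numpy assumed):

```python
import numpy as np

def add_gaussian_noise(P, sigma_in_mr, mr, seed=0):
    """Add zero-mean Gaussian noise independently on the X-, Y-, and Z-axes of
    each scene point (per Section 4.1); sigma_in_mr ranges over
    0.0, 0.1, ..., 1.0 in the experiments, and mr is the mesh resolution."""
    rng = np.random.default_rng(seed)
    return P + rng.normal(0.0, sigma_in_mr * mr, size=P.shape)
```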

4.2. Automatic Registration

As shown in Figure 6, the reference and test point clouds each consisted of five cones, and a rigid transformation exists between them, as expressed in Equation (21). The ideal rotation matrix and translation vector are notated as $\hat{R}$ and $\hat{T}$, respectively; the calculated rotation matrix and translation vector are notated as $\tilde{R}$ and $\tilde{T}$, respectively. The rotation and translation errors under Gaussian noise of different intensities were used to evaluate the algorithm's robustness and accuracy. The rotation error $\varepsilon_r$ is defined as follows:

$$\varepsilon_r = \arccos\left(\frac{\mathrm{trace}(\tilde{R}\hat{R}^T) - 1}{2}\right) \cdot \frac{180}{\pi}.$$ (33)
The translation error $\varepsilon_T$ is defined as follows:

$$\varepsilon_T = \frac{\|\tilde{T} - \hat{T}\|}{d_{res}},$$ (34)
where $d_{res}$ is equal to the average mesh resolution ($d_{res}$ = 1 mr). To test the influence of Gaussian noise of different intensities on the registration performance, Gaussian noise was independently added to the X-, Y-, and Z-axes of each scene point; the standard deviation and step size were the same as in Section 4.1. In the experiment, the Euler angles corresponding to $\hat{R}$ were [40°, 40°, 40°] with rotation sequence "XYZ", and $\hat{T}$ was [237 mr, 166 mr, −144 mr].
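Both error metrics are easily computed; a sketch (ours, numpy assumed):

```python
import numpy as np

def registration_errors(R_est, T_est, R_gt, T_gt, d_res=1.0):
    """Eq. (33): geodesic angle of the residual rotation, in degrees.
    Eq. (34): translation error in units of the mesh resolution (mr)."""
    c = np.clip((np.trace(R_est @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    eps_r = np.degrees(np.arccos(c))
    eps_t = np.linalg.norm(T_est - T_gt) / d_res
    return eps_r, eps_t
```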

4.3. Surface Deformation Measurement

A camera-projector system based on the proposed method, shown in Figure 7a, was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel. The resolutions of the projector and camera images were 1280 × 800 px and 2456 × 2058 px, respectively. The CCD size was 2/3″, the pixel size was 3.45 × 3.45 μm, and the focal length was 75 mm. The data processing platform was a Dell laptop with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.6 GHz and 16 GB RAM. The aim was to evaluate the system's precision. Firstly, a model with a size of 40 × 40 × 5 mm was fixed to a special device with six cones, as shown in Figure 7b. Secondly, the device was placed on a translation stage with a precision of 0.01 mm, as shown in Figure 7c. Thirdly, surface ablation was simulated by moving the model on the translation stage. The camera-projector system was used to reconstruct the model surface and the six cones at time 0 and time k, yielding the reference point cloud and the test point cloud, respectively. Fourthly, the reference and test point clouds were aligned using the proposed registration method, and the model's surface deformation was then measured. Lastly, the measurement result was compared with the reading from the translation stage, and the measurement error of the camera-projector system was calculated; the root-mean-square (RMS) error was used to evaluate this measurement error.

5. Research Results

5.1. Cone Vertex Detection

Figure 8 shows the point clouds of cone models where Gaussian noise was added with different intensities. Here, “+” represents the detected position of the cone vertex. Table 1 shows the statistical results of cone vertex detection.

5.2. Automatic Registration

Figure 9 and Figure 10 show the reference and test point clouds, respectively, where Gaussian noise was added with different intensities. Here, “+” represents the detected position of the cone vertex. Figure 11 shows the relationship between registration error and noise intensity.

5.3. Surface Deformation Measurement

Figure 12 shows the model’s surface deformation at different times. Table 2 shows the comparison results of readings and measurement results.

6. Discussion

(1)
As shown in Figure 8c, when σGN = 1 mr, the conic surface was very rough, but its vertex could still be detected accurately, which fully proves the robustness of the algorithm to Gaussian noise. Table 1 shows the statistical results of the detection rate and location error under Gaussian noise with different intensities. It could be seen that the location error increased with increasing noise intensity. In general, detection of the cone vertex was successful if the location error was lower than 5 mr. Thus, the algorithm can maintain a high detection rate under Gaussian noise with intensity ranging from 0 to 1 mr.
(2)
Figures 9–11 indicate that the rotation and translation errors increase with increasing noise intensity. When σGN = 1 mr, most details on the conic surfaces were lost, but the point clouds could still be aligned accurately, which proves the robustness of the algorithm to Gaussian noise. In general, point cloud registration is considered successful if the rotation error is lower than 5° and the translation error is lower than 5 mr. Thus, the algorithm performs well under Gaussian noise with an intensity ranging from 0 to 1 mr.
(3)
In general, the surface deformation of a near-space supersonic vehicle is no more than 4 mm during ablation tests in an arc-heated wind tunnel. Table 2 shows the statistical results, where it can be seen that the precision of surface deformation measurement is better than 0.05 mm when the surface deformation is smaller than 4 mm.

7. Conclusions

In order to solve the problem of surface deformation measurement during ablation tests in an arc-heated wind tunnel, in this study, we proposed an automatic point cloud registration method. Compared with other registration methods, we provided a complete solution comprising two parts: (1) a way to guarantee high-quality initial values and overlaps when aligning point clouds that have deformed considerably; and (2) a strategy to compute rigid transformations between point clouds automatically. Inspired by the 2D artificial targets used to improve the precision of camera calibration and photogrammetry, we introduced 3D artificial targets to obtain accurate registration results, which was the key idea for solving the problem of surface deformation measurement; most state-of-the-art approaches only consider how to align point clouds more accurately and robustly given a sufficiently large overlap. Simulations and experiments were conducted, and the results indicated the following: (1) The proposed method performed well under Gaussian noise with an intensity ranging from 0 to 1 mr; when σGN = 1 mr, the rotation and translation errors were smaller than 0.025° and 1 mr, respectively. (2) The error of surface deformation measurement was smaller than 0.05 mm when the deformation was no more than 4 mm. In addition to surface deformation measurement, the proposed method can also be applied to experimental studies of soft matter.
However, some aspects still need to be studied: (1) The procedure of cone vertex detection should be made more efficient. Compared with the Harris operator used in cone vertex detection, the Radon or Hough transform may be simpler [22,23], but their robustness needs to be evaluated. (2) More varieties of 3D artificial targets should be supported. An artificial neural network (ANN) can be adopted to train a classifier, with rotational projection statistics (RoPS) taken as its inputs [6]. The classifier can then be used to recognize different geometric features and delete fake features, which would improve the efficiency of data fusion and decrease the time cost.

Author Contributions

Conceptualization, J.L.; Data curation, P.G.; Formal analysis, J.L. and X.S.; Funding acquisition, J.L.; Investigation, J.L. and P.G.; Methodology, J.L.; Supervision, X.S.; Validation, X.S.; Writing—original draft, J.L.; Writing—review and editing, P.G. and X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China with grant no. 11802321.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Lemma A1.
The quadratic form of the conic surface can be written as:

$$P^T A P = 0,$$ (A1)

where:

$$A = \begin{pmatrix} a_1 & a_4/2 & a_5/2 & a_7/2 \\ a_4/2 & a_2 & a_6/2 & a_8/2 \\ a_5/2 & a_6/2 & a_3 & a_9/2 \\ a_7/2 & a_8/2 & a_9/2 & a_{10} \end{pmatrix}, \qquad P = [x \; y \; z \; 1]^T.$$ (A2)
$A$ can be decomposed using an SVD:

$$A = U_A D_A V_A^T.$$

Then, the last column of $V_A$ represents the homogeneous coordinates of the cone vertex.
Proof.
Matrix $A$ can be diagonalized on the basis of its eigenvalues and eigenvectors:

$$A = R D R^T,$$ (A3)

where $D$ is a diagonal matrix and $R$ is a transformation matrix. Substituting Equation (A3) into Equation (A1) yields:

$$P^T A P = (P^T R) D (P^T R)^T = (P')^T D P' = 0,$$ (A4)
where $(P')^T D P' = 0$ is the standard quadratic form of the conic surface. Equation (A4) is essentially a transformation between coordinate systems. Through this transformation, the cone vertex can be shifted to the origin of the new coordinate system, and the axis of the cone can be made parallel to the Z-axis of the new coordinate system. The homogeneous coordinates of the cone vertex in the new coordinate system can be notated as:

$$P'_v = [0 \; 0 \; 0 \; 1]^T.$$ (A5)
Because $P' = R^T P$, the homogeneous coordinates of the cone vertex in the original coordinate system can be calculated using Equation (A6) as follows:

$$P_v = (R^T)^{-1} P'_v.$$ (A6)
$A$ is a real symmetric matrix; thus, $R^T R = \lambda I$, where $I$ is a 4 × 4 identity matrix. Therefore,

$$A^T A = R D R^T R D R^T = \lambda R D^2 R^T.$$ (A7)
$R$ is a matrix consisting of the eigenvectors of $A^T A$. $A$ can be decomposed using an SVD:

$$A = U_A D_A V_A^T.$$ (A8)
Thus,

$$R = V_A.$$ (A9)
Substituting Equation (A9) into Equation (A6) yields:

$$P_v = (R^T)^{-1} P'_v = V_A (0 \; 0 \; 0 \; 1)^T.$$ (A10)

Therefore, the homogeneous coordinates of the cone vertex are given by the last column of $V_A$. The lemma has been proven. □
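Lemma A1 is easy to check numerically; the construction below (our own, numpy assumed) builds the quadric matrix A of a cone with a known vertex and recovers that vertex from the last column of V_A:

```python
import numpy as np

# Cone x^2 + y^2 - z^2 = 0 translated so that its vertex sits at p0.
p0 = np.array([1.0, -2.0, 3.0])
D = np.diag([1.0, 1.0, -1.0, 0.0])     # standard quadratic form of the cone
S = np.eye(4)
S[:3, 3] = -p0                          # homogeneous shift by -p0
A = S.T @ D @ S                         # P^T A P = 0 describes the shifted cone
v = np.linalg.svd(A)[2][-1]             # last column of V_A (null direction)
print(v[:3] / v[3])                     # -> approximately [ 1. -2.  3.]
```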

References

1. Nguyen, T.H.; Nguyen, T.M.; Xie, L. Tightly-coupled ultra-wideband-aided monocular visual SLAM with degenerate anchor configurations. Auton. Robot. 2020, 44, 1519–1534.
2. Shane, G.W.; Voorhies, R.C.; Laurent, I. Efficient velodyne SLAM with point and plane features. Auton. Robot. 2018, 43, 1207–1224.
3. Wang, F.R.; Lu, E.; Wang, Y.; Qiu, G.J.; Lu, H.Z. Efficient stereo visual simultaneous localization and mapping for an autonomous unmanned forklift in an unstructured warehouse. Appl. Sci. 2020, 10, 698.
4. Lei, H.; Jiang, G.; Quan, L. Fast descriptors and correspondence propagation for robust global point cloud registration. IEEE Trans. Image Process. 2017, 26, 1.
5. Dong, Z.; Liang, F.; Yang, B. Registration of large-scale terrestrial laser scanner point clouds: A review and benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
6. Guo, Y.; Sohel, F.; Bennamoun, M.; Lu, M.; Wan, J. Rotational projection statistics for 3D local surface description and object recognition. Int. J. Comput. Vis. 2013, 105, 63–86.
7. Tran, D.-S.; Ho, N.-H.; Yang, H.-J.; Baek, E.-T.; Kim, S.-H.; Lee, G. Real-time hand gesture spotting and recognition using RGB-D camera and 3D convolutional neural network. Appl. Sci. 2020, 10, 722.
8. Besl, P.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
9. Servos, J.; Waslander, S.L. Multi-Channel Generalized-ICP: A robust framework for multi-channel scan registration. Robot. Auton. Syst. 2017, 87, 247–257.
10. Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–3 October 2015; pp. 742–749.
11. Attia, M.; Slama, Y. Efficient initial guess determination based on 3D point cloud projection for ICP algorithms. In Proceedings of the 2017 International Conference on High Performance Computing & Simulation (HPCS), Genoa, Italy, 17–21 July 2017; pp. 807–814.
12. Makovetskii, A.; Voronin, S.; Kober, V.; Tihonkih, D. An efficient point-to-plane registration algorithm for affine transformation. In Applications of Digital Image Processing XL; SPIE: Bellingham, WA, USA, 2017.
13. Li, P.; Wang, R.; Wang, Y.; Tao, W. Evaluation of the ICP algorithm in 3D point cloud registration. IEEE Access 2020, 8, 68030–68048.
14. Wu, P.; Li, W.; Yan, M. 3D scene reconstruction based on improved ICP algorithm. Microprocess. Microsyst. 2020, 75, 103064.
15. Kang, Z.; Yang, J. A probabilistic graphical model for the classification of mobile LiDAR point clouds. ISPRS J. Photogramm. Remote Sens. 2018, 143, 108–123.
16. Hong, H.; Lee, B.H. Probabilistic normal distributions transform representation for accurate 3D point cloud registration. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 3333–3338.
17. Zhu, H.; Guo, B.; Zou, K.; Li, Y.; Yuen, K.-V.; Mihaylova, L.; Leung, H. A review of point set registration: From pairwise registration to groupwise registration. Sensors 2019, 19, 1191.
18. Lu, M.; Zhao, J.; Guo, Y.; Ma, Y. Accelerated coherent point drift for automatic 3D point cloud registration. IEEE Geosci. Remote Sens. Lett. 2016, 13, 162–166.
19. Wang, G.; Chen, Y. Fuzzy correspondences guided Gaussian mixture model for point set registration. Knowl. Based Syst. 2017, 136, 200–209.
20. Ma, J.; Jiang, J.; Liu, C.; Li, Y. Feature guided Gaussian mixture model with semi-supervised EM and local geometric constraint for retinal image registration. Inf. Sci. 2017, 417, 128–142.
21. Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1912–1920.
22. Polyakova, A.P.; Svetov, I.E. On a singular value decomposition of the normal Radon transform operator acting on 3D 2-tensor fields. J. Phys. Conf. Ser. 2021, 1715, 012041.
23. Wang, Y.S.; Qi, Y.; Man, Y. An improved Hough transform method for detecting forward vehicle and lane in road. J. Phys. Conf. Ser. 2021, 1757, 012082.
Figure 1. A flowchart of surface deformation measurement using the proposed method.
Figure 2. The principles of cone vertex detection based on biological vision.
Figure 3. Multi-view observation. (a) Observations from different viewpoints; (b) image binarization under constraints; (c) binary image.
Figure 4. The principles of cone vertex recognition. (a) Corner detection; (b) clustering of feature points; (c) data filtering; (d) rough localization.
Figure 5. Flowchart of the automatic registration algorithm.
Figure 6. Point cloud registration. (a) Reference point cloud; (b) test point cloud.
Figure 7. Experimental equipment. (a) Camera-projector system; (b) model and special device; (c) translation stage.
Figure 8. The detection results. (a) σGN = 0 mr; (b) σGN = 0.5 mr; (c) σGN = 1 mr.
Figure 9. Reference point cloud. (a) σGN = 0.5 mr; (b) σGN = 1 mr.
Figure 10. Test point cloud. (a) σGN = 0.5 mr; (b) σGN = 1 mr.
Figure 11. The relationship between registration error and noise intensity. (a) Rotation error; (b) translation error. Note, mr denotes the average mesh resolution of the models.
Figure 12. Surface deformation measurement results. (a) 0 s, reading = 0.000 mm; (b) 9 s, reading = 1.810 mm.
Table 1. The statistical results of cone vertex detection (mr denotes the average mesh resolution of the models).

Noise Intensity (mr) | Location Error (mr) | Detection Rate
0.0 | 0.00 | 100%
0.1 | 0.16 | 100%
0.2 | 0.27 | 100%
0.3 | 0.45 | 100%
0.4 | 0.75 | 100%
0.5 | 1.24 | 100%
0.6 | 1.56 | 100%
0.7 | 1.85 | 100%
0.8 | 2.29 | 100%
0.9 | 2.58 | 100%
1.0 | 3.13 | 97%
Table 2. Statistical results of the surface deformation measurement.

Time (s) | Reading (mm) | Measurement Result (mm) | Error (mm)
0.0 | 0.000 | 0.035 | 0.035
0.5 | 0.100 | 0.109 | 0.009
1.0 | 0.200 | 0.196 | 0.004
1.5 | 0.300 | 0.300 | 0.000
2.0 | 0.400 | 0.402 | 0.002
2.5 | 0.500 | 0.492 | 0.008
3.0 | 0.600 | 0.605 | 0.005
3.5 | 0.700 | 0.703 | 0.003
4.0 | 0.800 | 0.795 | 0.005
4.5 | 0.900 | 0.903 | 0.003
5.0 | 1.000 | 1.008 | 0.008
5.5 | 1.100 | 1.106 | 0.006
6.0 | 1.200 | 1.210 | 0.010
6.5 | 1.300 | 1.320 | 0.020
7.0 | 1.400 | 1.433 | 0.033
7.5 | 1.500 | 1.522 | 0.022
8.0 | 1.600 | 1.620 | 0.020
8.5 | 1.710 | 1.742 | 0.032
9.0 | 1.810 | 1.843 | 0.033
9.5 | 1.910 | 1.952 | 0.042
10.0 | 2.010 | 2.056 | 0.046
12.5 | 2.500 | 2.537 | 0.037
15.0 | 3.000 | 3.029 | 0.029
20.0 | 4.000 | 4.035 | 0.035
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
