An Automatic 3D Point Cloud Registration Method Based on Biological Vision
Abstract
1. Introduction
2. Cone Vertex Detection
2.1. Multi-View Observation
2.2. Cone Vertex Recognition
- (a) Let η denote the number of categories, ru the number of corners detected in image Iu, and s the number of simulated images.
- (b) Let dij denote the distance between the centers of Ci and Cj, where Ci and Cj are the i-th and j-th categories. If the minimum of dij is smaller than the distance threshold λ, merge the two closest categories so that the corners are clustered into η − 1 categories, and set η = η − 1.
- (c) Repeat step (b) until the minimum of dij is equal to or greater than λ.
- (d) The coordinates of each cluster center are obtained by calculating the mean value of the members of its corresponding category.
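Steps (a)–(d) amount to greedy agglomerative clustering with a distance cutoff. A minimal sketch, assuming the corners from all simulated images have been pooled into one point array (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def cluster_corners(points, lam):
    """Cluster detected corners by repeatedly merging the two
    closest categories until the minimum center distance >= lam (λ).

    points: (N, 2) array of corner coordinates pooled from all
            simulated images.
    Returns the cluster centers (mean of each category's members).
    """
    # Initially, every detected corner is its own category.
    clusters = [[p] for p in np.asarray(points, dtype=float)]
    while len(clusters) > 1:
        centers = np.array([np.mean(c, axis=0) for c in clusters])
        # Pairwise distances dij between category centers.
        diff = centers[:, None, :] - centers[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, np.inf)
        i, j = np.unravel_index(np.argmin(dist), dist.shape)
        if dist[i, j] >= lam:
            break                      # step (c): stop merging
        clusters[i].extend(clusters[j])  # step (b): η -> η - 1
        del clusters[j]
    # Step (d): each center is the mean of its members.
    return [np.mean(c, axis=0).tolist() for c in clusters]
```

With two tight pairs of corners far apart and λ between the pair spacing and the pair separation, the sketch returns two centers, one per pair.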
2.3. Shape Fitting
3. Automatic Registration Algorithm
4. Research Methodology
4.1. Cone Vertex Detection
4.2. Automatic Registration
4.3. Surface Deformation Measurement
5. Research Results
5.1. Cone Vertex Detection
5.2. Automatic Registration
5.3. Surface Deformation Measurement
6. Discussion
- (1) As shown in Figure 8c, when σGN = 1 mr the conic surface was very rough, but its vertex could still be detected accurately, which demonstrates the robustness of the algorithm to Gaussian noise. Table 1 shows the statistical results for the detection rate and location error under Gaussian noise of different intensities: the location error increases with increasing noise intensity. Detection of the cone vertex was considered successful if the location error was lower than 5 mr. Thus, the algorithm maintains a high detection rate under Gaussian noise with intensities ranging from 0 to 1 mr.
- (2) Figure 9, Figure 10 and Figure 11 indicate that the rotation and translation errors increase with increasing noise intensity. When σGN = 1 mr, most details on the conic surfaces were lost, but the point clouds could still be aligned accurately, which demonstrates the robustness of the algorithm to Gaussian noise. Registration was considered successful if the rotation error was lower than 5° and the translation error was lower than 5 mr. Thus, the algorithm performs well under Gaussian noise with intensities ranging from 0 to 1 mr.
- (3) In general, the surface deformation of a near-space supersonic vehicle is no more than 4 mm during ablation tests in an arc-heated wind tunnel. Table 2 shows the statistical results: the precision of the surface deformation measurement is better than 0.05 mm when the surface deformation is smaller than 4 mm.
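The success criteria in items (1) and (2) can be checked with standard rigid-registration error metrics. A minimal sketch, assuming the estimated and ground-truth transforms are available as a rotation matrix and translation vector (all names are illustrative):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    """Angle (degrees) of the residual rotation R_est^T @ R_gt."""
    R = R_est.T @ R_gt
    # Clamp to [-1, 1] to guard against round-off in the trace.
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(c)))

def translation_error(t_est, t_gt):
    """Euclidean distance between estimated and true translations."""
    return float(np.linalg.norm(np.asarray(t_est) - np.asarray(t_gt)))

def registration_success(R_est, t_est, R_gt, t_gt,
                         rot_tol_deg=5.0, trans_tol_mr=5.0):
    """Success test stated in the discussion: rotation error < 5°
    and translation error < 5 mr (mr = mesh resolution)."""
    return (rotation_error_deg(R_est, R_gt) < rot_tol_deg
            and translation_error(t_est, t_gt) < trans_tol_mr)
```

For example, a residual 90° rotation about the z-axis yields a rotation error of 90°, well outside the 5° success threshold.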
7. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
References
Table 1. Detection rate and location error under Gaussian noise of different intensities.

| Noise Intensity (mr) | Location Error (mr) | Detection Rate |
|---|---|---|
| 0.0 | 0.00 | 100% |
| 0.1 | 0.16 | 100% |
| 0.2 | 0.27 | 100% |
| 0.3 | 0.45 | 100% |
| 0.4 | 0.75 | 100% |
| 0.5 | 1.24 | 100% |
| 0.6 | 1.56 | 100% |
| 0.7 | 1.85 | 100% |
| 0.8 | 2.29 | 100% |
| 0.9 | 2.58 | 100% |
| 1.0 | 3.13 | 97% |
Table 2. Surface deformation measurement results.

| Time (s) | Reading (mm) | Measurement Result (mm) | Error (mm) |
|---|---|---|---|
| 0.0 | 0.000 | 0.035 | 0.035 |
| 0.5 | 0.100 | 0.109 | 0.009 |
| 1.0 | 0.200 | 0.196 | 0.004 |
| 1.5 | 0.300 | 0.300 | 0.000 |
| 2.0 | 0.400 | 0.402 | 0.002 |
| 2.5 | 0.500 | 0.492 | 0.008 |
| 3.0 | 0.600 | 0.605 | 0.005 |
| 3.5 | 0.700 | 0.703 | 0.003 |
| 4.0 | 0.800 | 0.795 | 0.005 |
| 4.5 | 0.900 | 0.903 | 0.003 |
| 5.0 | 1.000 | 1.008 | 0.008 |
| 5.5 | 1.100 | 1.106 | 0.006 |
| 6.0 | 1.200 | 1.210 | 0.010 |
| 6.5 | 1.300 | 1.320 | 0.020 |
| 7.0 | 1.400 | 1.433 | 0.033 |
| 7.5 | 1.500 | 1.522 | 0.022 |
| 8.0 | 1.600 | 1.620 | 0.020 |
| 8.5 | 1.710 | 1.742 | 0.032 |
| 9.0 | 1.810 | 1.843 | 0.033 |
| 9.5 | 1.910 | 1.952 | 0.042 |
| 10.0 | 2.010 | 2.056 | 0.046 |
| 12.5 | 2.500 | 2.537 | 0.037 |
| 15.0 | 3.000 | 3.029 | 0.029 |
| 20.0 | 4.000 | 4.035 | 0.035 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, J.; Guo, P.; Sun, X. An Automatic 3D Point Cloud Registration Method Based on Biological Vision. Appl. Sci. 2021, 11, 4538. https://doi.org/10.3390/app11104538