Real-Time 6-DOF Pose Estimation of Known Geometries in Point Cloud Data
Abstract
1. Introduction
2. The Challenges of Estimating Object Pose in Point Cloud Data
2.1. Challenge 1: Providing Reliable Solutions
2.2. Challenge 2: Providing Timely Solutions
3. Related Work
4. Problem Formulation
Algorithm 1: Lookup table generation (offline)
Algorithm 2: Pose Lookup Method (PLuM) algorithm—objective function
5. Results
5.1. Influence of the Initial Pose Estimate
5.2. Clutter Test
5.3. The Effect of the Lookup Table Uncertainty
5.4. The Effect of Lookup Table Resolution
5.5. The Effect of Measurement Uncertainty
6. Case Study: Haul Truck Pose Estimation
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
CAD | Computer-Aided Design
CPU | Central Processing Unit
DOF | Degrees Of Freedom
FOV | Field Of View
GPS | Global Positioning System
GPU | Graphics Processing Unit
GNSS | Global Navigation Satellite System
ICP | Iterative Closest Point
LiDAR | Light Detection and Ranging
MSoE | Maximum Sum of Evidence
PLuM | Pose Lookup Method
RTK | Real-Time Kinematic Positioning
References
- Phillips, T. Determining and Verifying Object Pose from LiDAR Measurements to Support the Perception Needs of an Autonomous Excavator. Ph.D. Thesis, The University of Queensland, Brisbane, Australia, 2016.
- Phillips, T.; D’Adamo, T.; McAree, P. Maximum Sum of Evidence—An Evidence-Based Solution to Object Pose Estimation in Point Cloud Data. Sensors 2021, 21, 6473.
- Phillips, T.G.; Guenther, N.; McAree, P.R. When the Dust Settles. J. Field Robot. 2017, 34, 985–1009.
- Bergelt, R.; Khan, O.; Hardt, W. Improving the intrinsic calibration of a Velodyne LiDAR sensor. In Proceedings of the IEEE Sensors, Glasgow, UK, 29 October–1 November 2017; pp. 1–3.
- Mirzaei, F.M. Extrinsic and Intrinsic Sensor Calibration. Ph.D. Thesis, University of Minnesota, Saint Paul, MN, USA, 2013.
- Sheehan, M.; Harrison, A.; Newman, P. Self-calibration for a 3D laser. Int. J. Robot. Res. 2012, 31, 675–687.
- D’Adamo, T.A.; Phillips, T.G.; McAree, P.R. Registration of three-dimensional scanning LiDAR sensors: An evaluation of model-based and model-free methods. J. Field Robot. 2018, 35, 1182–1200.
- Phillips, T.G.; Green, M.E.; McAree, P.R. An Adaptive Structure Filter for Sensor Registration from Unstructured Terrain. J. Field Robot. 2015, 32, 748–774.
- Thorpe, C.; Durrant-Whyte, H. Field Robots; Technical Report; Robotics Institute, Carnegie Mellon University: Pittsburgh, PA, USA; Australian Centre for Field Robotics, The University of Sydney: Sydney, NSW, Australia, 2001.
- Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
- Phillips, T.G.; McAree, P.R. An evidence-based approach to object pose estimation from LiDAR measurements in challenging environments. J. Field Robot. 2018, 35, 921–936.
- Velodyne ULTRA Puck. 2019. Available online: https://velodynelidar.com/wp-content/uploads/2019/12/63-9378_Rev-F_Ultra-Puck_Datasheet_Web.pdf (accessed on 11 October 2022).
- High-Resolution OS1 Lidar Sensor: Robotics, Trucking, Mapping|Ouster. Available online: https://ouster.com/products/scanning-lidar/os1-sensor/ (accessed on 11 October 2022).
- OpenCL Overview—The Khronos Group Inc. Available online: https://www.khronos.org/opencl/ (accessed on 11 October 2022).
- CUDA Toolkit—Free Tools and Training|NVIDIA Developer. Available online: https://developer.nvidia.com/cuda-toolkit (accessed on 11 October 2022).
- Chen, Y.; Medioni, G. Object modeling by registration of multiple range images. Proc. IEEE Int. Conf. Robot. Autom. 1991, 3, 2724–2729.
- Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the International Conference on 3-D Digital Imaging and Modeling, 3DIM, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152.
- Li, Y.; Gu, P. Free-form surface inspection techniques state of the art review. Comput.-Aided Des. 2004, 36, 1395–1417.
- Ellingson, L.; Zhang, J. An efficient algorithm for matching protein binding sites for protein function prediction. In Proceedings of the 2011 ACM Conference on Bioinformatics, Computational Biology and Biomedicine (BCB 2011), Chicago, IL, USA, 1–3 August 2011; pp. 289–293.
- Bertolazzi, P.; Liuzzi, G.; Guerra, C. A global optimization algorithm for protein surface alignment. In Proceedings of the 2009 IEEE International Conference on Bioinformatics and Biomedicine Workshops (BIBMW 2009), Washington, DC, USA, 1–4 November 2009; pp. 93–100.
- Stewart, C.V.; Tsai, C.L.; Roysam, B. The dual-bootstrap iterative closest point algorithm with application to retinal image registration. IEEE Trans. Med. Imaging 2003, 22, 1379–1394.
- Thielemann, J.; Skotheim, Ø.; Nygaard, J.O.; Vollset, T. System for conveyor belt part picking using structured light and 3D pose estimation. Three-Dimens. Imaging Metrol. 2009, 7239, 72390X.
- Borthwick, J.R. Mining Haul Truck Pose Estimation and Load Profiling Using Stereo Vision. Ph.D. Thesis, The University of British Columbia, Vancouver, BC, USA, 2003.
- Cui, Y.; An, Y.; Sun, W.; Hu, H.; Song, X. Memory-Augmented Point Cloud Registration Network for Bucket Pose Estimation of the Intelligent Mining Excavator. IEEE Trans. Instrum. Meas. 2022, 71, 1–12.
- Pomerleau, F.; Colas, F.; Siegwart, R. A Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104.
- Donoso, F.A.; Austin, K.J.; McAree, P.R. How do ICP variants perform when used for scan matching terrain point clouds? Robot. Auton. Syst. 2017, 87, 147–161.
- Bouaziz, S.; Tagliasacchi, A.; Pauly, M. Sparse Iterative Closest Point. Comput. Graph. Forum 2013, 32, 113–123.
- Yang, J.; Li, H.; Jia, Y. Go-ICP: Solving 3D registration efficiently and globally optimally. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1457–1464.
- Zhang, J.; Yao, Y.; Deng, B. Fast and Robust Iterative Closest Point. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3450–3466.
- Myronenko, A.; Song, X. Point set registration: Coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 2262–2275.
- Wells III, W.M. Statistical Approaches to Feature-Based Object Recognition. Int. J. Comput. Vis. 1997, 21, 63–98.
- Luo, B.; Hancock, E.R. A unified framework for alignment and correspondence. Comput. Vis. Image Underst. 2003, 92, 26–55.
- McNeill, G.; Vijayakumar, S. A probabilistic approach to robust shape matching. In Proceedings of the International Conference on Image Processing, ICIP, Atlanta, GA, USA, 8–11 October 2006; pp. 937–940.
- Bronstein, M.M.; Bruna, J.; Lecun, Y.; Szlam, A.; Vandergheynst, P. Geometric deep learning: Going beyond Euclidean data. IEEE Signal Process. Mag. 2016, 34, 18–42.
- Ahmadli, I.; Bedada, W.B.; Palli, G. Deep Learning and OcTree-GPU-Based ICP for Efficient 6D Model Registration of Large Objects. In Human-Friendly Robotics 2021; Springer Proceedings in Advanced Robotics; Springer: Cham, Switzerland, 2022; Volume 23, pp. 29–43.
- Wang, Y.; Solomon, J. Deep Closest Point: Learning Representations for Point Cloud Registration. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3522–3531.
- Lin, H.Y.; Chang, C.C.; Liang, S.C. 3D Pose estimation using genetic-based iterative closest point algorithm. Int. J. Innov. Comput. Inf. Control 2018, 14, 537–547.
- Aeries 1: The First 4D LiDAR™ System for Autonom|Aeva Inc. 2022. Available online: https://www.aeva.com/aeries-i/ (accessed on 11 October 2022).
- Hexsel, B.; Vhavle, H.; Chen, Y. DICP: Doppler Iterative Closest Point Algorithm. In Proceedings of the Robotics: Science and Systems, New York City, NY, USA, 27 June–1 July 2022.
- Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432.
- Wu, D.; Liang, Z.; Chen, G. Deep learning for LiDAR-only and LiDAR-fusion 3D perception: A survey. Intell. Robot. 2022, 2, 105–129.
- Pitkin, T.A. GPU Ray Tracing with CUDA. Ph.D. Thesis, Eastern Washington University, Cheney, WA, USA, 2013.
- Zhang, J.; Yao, Y.; Deng, B. yaoyx689/Fast-Robust-ICP. 2022. Available online: https://github.com/yaoyx689/Fast-Robust-ICP (accessed on 15 October 2022).
- Tsin, Y.; Kanade, T. A Correlation-Based Approach to Robust Point Set Registration. In Proceedings of the 8th European Conference on Computer Vision, Prague, Czech Republic, 11–14 May 2004; Volume 3, pp. 558–569.
- Jian, B.; Vemuri, B.C. A robust algorithm for point set registration using mixture of Gaussians. In Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05), Beijing, China, 17–21 October 2005; Volume 2, pp. 1246–1251.
- Singh, S. State of the Art in Automation of Earthmoving. J. Aerosp. Eng. 1997, 10, 179–188.
- D’Adamo, T.; Phillips, T.; McAree, P. LiDAR-Stabilised GNSS-IMU Platform Pose Tracking. Sensors 2022, 22, 2248.
- Dunbabin, M.; Corke, P. Autonomous excavation using a rope shovel. J. Field Robot. 2006, 23, 379–394.
- Hirayama, M.; Guivant, J.; Katupitiya, J.; Whitty, M. Path planning for autonomous bulldozers. Mechatronics 2019, 58, 20–38.
- Kim, S.H.; Lee, Y.S.; Sun, D.I.; Lee, S.K.; Yu, B.H.; Jang, S.H.; Kim, W.; Han, C.S. Development of bulldozer sensor system for estimating the position of blade cutting edge. Autom. Constr. 2019, 106, 102890.
- Ali, D.; Frimpong, S. DeepHaul: A deep learning and reinforcement learning-based smart automation framework for dump trucks. Prog. Artif. Intell. 2021, 10, 157–180.
- Egli, P.; Hutter, M. Towards RL-Based Hydraulic Excavator Automation. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 2692–2697.
- Bradley, D.A.; Seward, D.W. The Development, Control and Operation of an Autonomous Robotic Excavator. J. Intell. Robot. Syst. 1998, 21, 73–97.
- Lanz, G. Guided Spotting. 2018. Available online: https://www.modularmining.com/wp-content/uploads/2019/09/EMJ_Guided-Spotting-V2_August2018.pdf (accessed on 18 October 2022).
- Dudley, J.; McAree, R. Shovel Load Assist Project; Technical Report; CRC Mining: Brisbane, Australia, 2016.
- Neptec Technologies Demonstrates 3DRi-Based Truck Spotting Application at Fording River Coal Mine | Geo Week News | Lidar, 3D, and More Tools at the Intersection of Geospatial Technology and the Built World. 2015. Available online: https://www.geoweeknews.com/news/vol13no8-neptec-demonstrates-3dri-based-truck-spotting-at-fording-river-coal-mine (accessed on 18 October 2022).
- FlexPak6. 2016. Available online: https://hexagondownloads.blob.core.windows.net/public/Novatel/assets/Documents/Papers/FlexPak6/FlexPak6.pdf (accessed on 28 October 2022).
Method | Strengths | Limitations |
---|---|---|
Iterative Closest Point (ICP) [10] | Accurate 6-DOF registration with noise-free point clouds. The algorithm is independent of shape representation. | The results depend strongly on the initial alignment. Registration requires preprocessing such as point cloud segmentation and does not run in real time. The solution tends to converge to local extrema and is not robust to clutter or noisy data. |
Sparse ICP [27] | Improves the performance of ICP with noisy and incomplete data. | The algorithm is computationally expensive. The solution converges to local extrema when there is minimal overlap between the two datasets. |
Go-ICP [28] | The first globally optimal ICP algorithm, providing accurate results when good initialisation is not possible. | The algorithm is limited to scenarios where real-time performance is not critical. |
Fast and Robust ICP [29] | Provides accurate results on noisy datasets and with some clutter, a significant improvement over the generic ICP algorithm. | Requires good initialisation. The Fast ICP variant trades accuracy for speed; the Robust ICP variant trades speed for accuracy. |
Maximum likelihood estimation variants of ICP [30,31,32,33] | This class of algorithms performs well without good initialisation and provides accurate registration in the presence of noise, outliers, and missing measurements. | The algorithms tend to converge to local solutions, and many do not provide real-time results. |
Maximum Sum of Evidence (MSoE) [2] | This reward-based method provides accurate point cloud-to-model registration and performs well without good initialisation. The results are accurate in noisy, occluded, and cluttered environments. | The algorithm is computationally expensive and does not provide real-time results. |
Geometric deep-learning-based methods [34,35,36] | These methods provide accurate results in scenarios similar to the training data. | Performance on unseen cases is not deterministic, and errors are difficult to trace. The methods require sufficient training data. |
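The reward-based entries in this table (MSoE and PLuM) evaluate candidate poses by summing per-point evidence read from a precomputed lookup table, rather than by iterating over point correspondences as ICP does. The sketch below is only meant to make that general idea concrete; it is not the authors' implementation. The Gaussian reward, the brute-force nearest-model-point search, the grid padding, and all function names are assumptions made for illustration.

```python
import numpy as np

def build_lookup_table(model_points, resolution, sigma, padding=0.2):
    """Offline step (cf. Algorithm 1): voxelise a box around the reference geometry and
    store, per voxel, a Gaussian reward of the distance to the nearest model point."""
    origin = model_points.min(axis=0) - padding
    dims = np.ceil((model_points.max(axis=0) + padding - origin) / resolution).astype(int)
    axes = [origin[i] + resolution * (np.arange(dims[i]) + 0.5) for i in range(3)]
    centres = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    # Nearest-model-point distance per voxel centre (looped to keep memory modest).
    d = np.full(len(centres), np.inf)
    for p in model_points:
        d = np.minimum(d, np.linalg.norm(centres - p, axis=1))
    table = np.exp(-0.5 * (d / sigma) ** 2).reshape(tuple(dims))
    return table, origin

def pose_evidence(measured_points, R, t, table, origin, resolution):
    """Online step (cf. Algorithm 2): transform the measurements by a candidate pose
    (R, t) and sum the rewards looked up at the transformed locations."""
    pts = measured_points @ R.T + t
    idx = np.floor((pts - origin) / resolution).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(table.shape)), axis=1)
    return table[tuple(idx[inside].T)].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.uniform(0.0, 1.0, size=(200, 3))   # toy "known geometry"
    res, sigma = 0.02, 0.05
    table, origin = build_lookup_table(model, res, sigma)
    true_t = np.array([0.10, -0.05, 0.02])
    measured = model - true_t                      # measurements seen in the sensor frame
    for cand in (np.zeros(3), true_t, np.array([0.3, 0.3, 0.3])):
        print(cand, pose_evidence(measured, np.eye(3), cand, table, origin, res))
    # The candidate translation equal to true_t accumulates the most evidence.
```

In practice the candidate set covers all six degrees of freedom (rotations as well as translations); because each candidate is scored by cheap table lookups rather than nearest-neighbour searches, evaluation parallelises well, which is consistent with the CPU and GPU timings reported in the tables that follow.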
Metric | ICP | Fast ICP | Robust ICP | Sparse ICP | PLuM (CPU) | PLuM (GPU) |
---|---|---|---|---|---|---|
Error (mm) | 27.98 | 28.18 | 0.14 | 3.47 | 2.89 | 2.89
t (ms) | 566 | 455 | 1813 | 34174 | 118 | 25
Metric | ICP | Fast ICP | Robust ICP | Sparse ICP | PLuM (CPU) | PLuM (GPU) |
---|---|---|---|---|---|---|
Error (mm) | 154.36 | 154.37 | 159.21 | 157.27 | 3.89 | 3.89
t (ms) | 646 | 1425 | 4112 | 78566 | 132 | 32
Each entry is the error (mm), with the solution time t (s) in parentheses.

% Clutter | ICP | Fast ICP | Robust ICP | PLuM (GPU) |
---|---|---|---|---|
0 | 144.47 (0.56) | 145.54 (0.45) | 1.24 (1.83) | 3.42 (0.05)
50 | 123.43 (0.63) | 121.93 (0.42) | 97.65 (3.42) | 2.63 (0.06)
60 | 147.63 (0.62) | 142.16 (0.81) | 142.16 (2.55) | 2.19 (0.06)
70 | 153.46 (0.67) | 169.81 (0.55) | 200.32 (2.49) | 4.27 (0.07)
80 | 158.64 (0.57) | 163.93 (0.81) | 168.60 (2.79) | 3.61 (0.09)
90 | 147.52 (0.72) | 148.45 (0.41) | 155.08 (2.33) | 4.66 (0.17)
Lookup table uncertainty (mm) | 5 | 10 | 15 | 20 | 25 | 30 | 35 | 40 | 45 | 50 |
---|---|---|---|---|---|---|---|---|---|---|
Error (mm) | 2.33 | 2.00 | 1.32 | 2.03 | 1.54 | 2.89 | 3.05 | 3.66 | 4.22 | 4.27
Resolution (mm) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15 | 20 |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Error (mm) | 0.89 | 1.03 | 1.13 | 1.61 | 1.71 | 1.65 | 2.07 | 2.20 | 1.53 | 2.61 | 4.73 | 6.11
Each entry is the error (mm), with the solution time t (s) in parentheses.

Measurement uncertainty (mm) | ICP | Fast ICP | Robust ICP | PLuM (GPU) |
---|---|---|---|---|
0 | 27.98 (0.57) | 28.18 (0.40) | 0.14 (3.46) | 2.00 (0.05)
10 | 28.45 (1.09) | 29.10 (0.68) | 1.44 (2.80) | 2.89 (0.05)
20 | 29.92 (0.71) | 30.00 (0.30) | 3.11 (3.97) | 4.73 (0.05)
30 | 30.49 (1.26) | 30.48 (0.68) | 1.97 (2.23) | 1.98 (0.05)
40 | 31.96 (1.80) | 31.96 (0.65) | 12.29 (5.03) | 4.24 (0.05)
50 | 31.38 (1.68) | 31.39 (0.27) | 4.21 (3.04) | 4.86 (0.05)
Percentage of pose estimates with error below each threshold (m), and the solution time t (s).

Method | <0.1 m | <0.2 m | <0.4 m | <0.6 m | t (s) |
---|---|---|---|---|---|
ICP | unable to solve under the test conditions | | | |
MSoE | 3 | 55 | 95 | 99 | 6055
PLuM | 25 | 78 | 96 | 99 | 0.048