Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines
Abstract
1. Introduction
2. Visual–Inertial Fusion SLAM System Framework
2.1. Front-End Visual–Inertial Odometry
2.2. Back-End Nonlinear Optimization
2.3. Loop Closure Detection
3. Front-End Visual–Inertial Odometry
3.1. Enhancement of Line Feature Detection Algorithm
3.2. Line Feature Description and Matching
3.3. Reprojection Residual Model Construction
3.4. IMU Data Pre-Integration
4. Back-End Optimization and Loop Closure Detection
4.1. Multi-Source Fusion Sliding Window Optimization
4.2. Marginalization Model
4.3. Point–Line Fusion Loop Closure Detection
5. Experimental Design and Results Analysis
5.1. Comparative Analysis of Enhanced Line Feature Extraction Algorithm
5.2. Evaluation of System Accuracy
5.3. Real-World Localization Experiment Validation
6. Conclusions
7. Patents
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Sequence | LSD: lines | LSD: time/ms | EDLines: lines | EDLines: time/ms | Ours: lines | Ours: time/ms |
|---|---|---|---|---|---|---|
| Scenario 1 | 415 | 86.5 | 375 | 40.6 | 213 | 36.7 |
| Scenario 2 | 568 | 89.4 | 510 | 62.3 | 398 | 60.4 |
| MH_04 | 635 | 103.3 | 569 | 66.2 | 368 | 61.9 |
| MH_05 | 726 | 98.5 | 617 | 67.8 | 353 | 62.3 |
| V1_03 | 418 | 83.4 | 395 | 43.5 | 233 | 39.1 |
| V2_03 | 354 | 87.1 | 311 | 43.1 | 185 | 41.7 |
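The table values can be read as relative improvements directly. A minimal sketch (pure Python, using only the Scenario 1 and MH_04 rows above) computing how many fewer line features the improved detector keeps versus LSD, and how much faster it runs versus EDLines:

```python
# Per-sequence (line count, extraction time in ms) per detector,
# taken from the comparison table above.
results = {
    "Scenario 1": {"LSD": (415, 86.5), "EDLines": (375, 40.6), "Ours": (213, 36.7)},
    "MH_04":      {"LSD": (635, 103.3), "EDLines": (569, 66.2), "Ours": (368, 61.9)},
}

def reduction(baseline, ours):
    """Percentage reduction of `ours` relative to `baseline`."""
    return 100.0 * (baseline - ours) / baseline

for seq, row in results.items():
    lines_red = reduction(row["LSD"][0], row["Ours"][0])
    time_red = reduction(row["EDLines"][1], row["Ours"][1])
    print(f"{seq}: {lines_red:.1f}% fewer lines than LSD, "
          f"{time_red:.1f}% faster than EDLines")
```

For Scenario 1 this gives roughly a 49% reduction in retained lines versus LSD and a 10% reduction in extraction time versus EDLines, which is the pattern the paper attributes to its enhanced line feature detection.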
| Sequence | VINS-Mono: RMSE/m | VINS-Mono: Mean/m | PL-VIO: RMSE/m | PL-VIO: Mean/m | ORB-SLAM3: RMSE/m | ORB-SLAM3: Mean/m | Ours: RMSE/m | Ours: Mean/m |
|---|---|---|---|---|---|---|---|---|
| MH_01 | 0.170410 | 0.144947 | 0.149992 | 0.124888 | 0.035390 | 0.029830 | 0.019034 | 0.016230 |
| MH_03 | 0.203385 | 0.187061 | 0.238910 | 0.219831 | 0.048817 | 0.044774 | 0.023251 | 0.020838 |
| MH_04 | 0.351601 | 0.322701 | 0.326813 | 0.299383 | 0.057246 | 0.049340 | 0.048088 | 0.042501 |
| MH_05 | 0.323365 | 0.304474 | 0.298774 | 0.281060 | 0.067150 | 0.060276 | 0.046667 | 0.041904 |
| V1_01 | 0.084627 | 0.075022 | 0.103103 | 0.092958 | 0.034904 | 0.031758 | 0.032795 | 0.029826 |
| V1_02 | 0.159020 | 0.150049 | 0.131306 | 0.118482 | 0.007813 | 0.007217 | 0.008568 | 0.007788 |
| V1_03 | 0.190584 | 0.173283 | 0.213054 | 0.193671 | 0.018956 | 0.017240 | 0.022056 | 0.020778 |
| V2_01 | 0.097043 | 0.087504 | 0.079954 | 0.074809 | 0.018472 | 0.014673 | 0.012600 | 0.011401 |
| V2_02 | 0.176523 | 0.150649 | 0.153797 | 0.140683 | 0.013429 | 0.012363 | 0.010747 | 0.009912 |
| V2_03 | 0.289265 | 0.276645 | 0.253488 | 0.240798 | 0.176151 | 0.144637 | 0.125673 | 0.091029 |
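The RMSE and Mean columns above are the standard absolute-trajectory-error statistics over per-frame Euclidean position errors. A minimal sketch of how such values are computed (pure Python with toy trajectories, not the authors' evaluation code; a real evaluation would first align the estimate to ground truth, e.g. with a Umeyama/SE(3) alignment):

```python
import math

def ate_stats(est, gt):
    """RMSE and mean of per-frame Euclidean position errors between an
    estimated trajectory and ground truth (assumed already aligned)."""
    errs = [math.dist(p, q) for p, q in zip(est, gt)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    mean = sum(errs) / len(errs)
    return rmse, mean

# Toy 3D trajectories in metres (hypothetical values for illustration).
est = [(0.0, 0.0, 0.0), (1.0, 0.1, 0.0), (2.0, 0.0, 0.1)]
gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(ate_stats(est, gt))
```

Because RMSE squares each error before averaging, it penalizes occasional large deviations more than the mean does, which is why the two columns diverge most on the harder sequences such as V2_03.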
| Category | VINS-Mono | PL-VIO | ORB-SLAM3 | Ours |
|---|---|---|---|---|
| RMSE | 1.191849 | 0.901704 | 0.589311 | 0.349885 |
| Mean | 1.169402 | 0.884324 | 0.505794 | 0.337527 |
| Category | VINS-Mono | PL-VIO | ORB-SLAM3 | Ours |
|---|---|---|---|---|
| RMSE | 0.737751 | 0.605052 | 0.518785 | 0.199050 |
| Mean | 0.698574 | 0.576665 | 0.490459 | 0.189343 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xiao, Y.; Xu, W.; Li, B.; Zhang, H.; Xu, B.; Zhou, W. Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines. Sensors 2024, 24, 7360. https://doi.org/10.3390/s24227360