W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
Abstract
1. Introduction
2. Method Construction
2.1. Coordinate System and Parameter Representation
2.2. Visual SLAM Perception Algorithm with Wheel Speed Fusion
2.3. Visual Observation Constraint
2.4. Wheel Speed Pre-Integration Constraint
2.5. Loop Closure Co-Visibility Observation Constraint
2.6. Formulation of the Maximum A Posteriori Estimation Problem
3. Experimental Comparison and Analysis
3.1. Data Collection
3.2. Quantitative Evaluation Metrics
3.3. Indoor Comparative Experimental Results and Analysis
3.4. Results and Analysis of Multiple Indoor Experiments
3.5. Results and Analysis of the Indoor Long Corridor Environment Experiment
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Parameter | Radial Distortion (k1) | Radial Distortion (k2) | Tangential Distortion (p1) | Tangential Distortion (p2) |
---|---|---|---|---|
Value | −0.02209723 | −0.00543498 | 0.00179311 | −0.00069370 |
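For context, the sketch below shows one way these coefficients could be applied to undistort pixel coordinates, assuming OpenCV's standard (k1, k2, p1, p2) plumb-bob model and the listing order of two radial terms followed by two tangential terms; the intrinsic matrix `K` used here is hypothetical, since the focal lengths and principal point are not reported in this excerpt.

```python
import numpy as np
import cv2

# Distortion coefficients from the table above, in OpenCV's (k1, k2, p1, p2) order
# (two radial terms followed by two tangential terms -- an assumed ordering).
dist = np.array([-0.02209723, -0.00543498, 0.00179311, -0.00069370])

# Hypothetical pinhole intrinsics; the real fx, fy, cx, cy are not given in this excerpt.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A few example pixel coordinates to undistort (shape N x 1 x 2, float64).
pts = np.array([[[100.0,  80.0]],
                [[320.0, 240.0]],
                [[500.0, 400.0]]], dtype=np.float64)

# With P=K the result is re-projected back into pixel coordinates;
# without it, cv2.undistortPoints returns normalized image coordinates.
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
print(undistorted.reshape(-1, 2))
```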
Error Type | ORBSLAM3 Algorithm | Wheel Odometer | The Proposed Algorithm |
---|---|---|---|
Maximum Translational Error (m) | 0.996424 | 0.663076 | 0.135999 |
Minimum Translational Error (m) | 0.053682 | 0.191564 | 0.005725 |
Translational Root Mean Square Error (m) | 0.649618 | 0.411756 | 0.070133 |
Maximum Rotational Error (rad) | 1.798408 | 0.249882 | 0.073018 |
Minimum Rotational Error (rad) | 1.499492 | 0.000036 | 0.000234 |
Rotational Root Mean Square Error (rad) | 1.674251 | 0.138140 | 0.024885 |
Error Type | ORBSLAM3 Algorithm | Wheel Odometer | The Proposed Algorithm |
---|---|---|---|
Maximum Translational Error (m) | 2.258283 | 0.072512 | 0.083033 |
Minimum Translational Error (m) | 1.679300 | 0.004918 | 0.005750 |
Translational Root Mean Square Error (m) | 1.934542 | 0.030060 | 0.035044 |
Maximum Rotational Error (rad) | 0.267859 | 0.015824 | 0.042052 |
Minimum Rotational Error (rad) | 0.004864 | 0.000011 | 0.000113 |
Rotational Root Mean Square Error (rad) | 0.080078 | 0.002707 | 0.013080 |
Error Type | ORBSLAM3 Algorithm | Wheel Odometer | The Proposed Algorithm |
---|---|---|---|
Maximum Translational Error (m) | 2.448048 | 0.877545 | 0.265917 |
Minimum Translational Error (m) | 0.270060 | 0.130613 | 0.008439 |
Translational Root Mean Square Error (m) | 1.263682 | 0.384537 | 0.160804 |
Maximum Rotational Error (rad) | 2.242332 | 0.288801 | 0.255336 |
Minimum Rotational Error (rad) | 1.169881 | 0.001404 | 0.001039 |
Rotational Root Mean Square Error (rad) | 1.844953 | 0.157194 | 0.101190 |
Error Type | ORBSLAM3 Algorithm | Wheel Odometer | The Proposed Algorithm |
---|---|---|---|
Maximum Translational Error (m) | 1.798007 | 0.100730 | 0.120928 |
Minimum Translational Error (m) | 1.286840 | 0.064568 | 0.066626 |
Translational Root Mean Square Error (m) | 1.445999 | 0.083673 | 0.094710 |
Maximum Rotational Error (rad) | 0.375543 | 0.026001 | 0.037211 |
Minimum Rotational Error (rad) | 0.007266 | 0.000009 | 0.000061 |
Rotational Root Mean Square Error (rad) | 0.069250 | 0.001821 | 0.010742 |
Category | Experiment 1 | Experiment 2 | Experiment 3 |
---|---|---|---|
Absolute Translational RMSE (m) | 0.069569 | 0.174467 | 0.083839 |
Absolute Rotational RMSE (rad) | 0.079908 | 0.103241 | 0.096109 |
Relative Translational RMSE (m) | 0.110006 | 0.106630 | 0.107273 |
Relative Rotational RMSE (rad) | 0.012823 | 0.015159 | 0.025358 |
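As a point of reference for the statistics reported above, a minimal sketch of how the maximum, minimum, and root-mean-square translational and rotational errors could be computed is given below. It assumes the estimated and ground-truth trajectories are already time-associated and expressed in a common frame; `p_est`, `p_gt` (N x 3 positions) and `R_est`, `R_gt` (N x 3 x 3 rotation matrices) are hypothetical NumPy arrays standing in for the aligned trajectories.

```python
import numpy as np

def translational_errors(p_est, p_gt):
    """Per-pose translational error norms (m) for time-associated trajectories.

    p_est, p_gt: (N, 3) arrays of positions expressed in the same frame.
    """
    return np.linalg.norm(p_est - p_gt, axis=1)

def rotational_errors(R_est, R_gt):
    """Per-pose rotational errors (rad): the rotation angle of R_est^T @ R_gt.

    R_est, R_gt: (N, 3, 3) arrays of rotation matrices.
    """
    R_rel = np.einsum('nij,nik->njk', R_est, R_gt)   # R_est^T @ R_gt for each pose
    trace = np.trace(R_rel, axis1=1, axis2=2)
    return np.arccos(np.clip((trace - 1.0) / 2.0, -1.0, 1.0))

def summarize(err):
    """Maximum, minimum, and RMSE of a per-pose error sequence."""
    return err.max(), err.min(), float(np.sqrt(np.mean(err ** 2)))

# Example usage with hypothetical aligned trajectories:
# t_max, t_min, t_rmse = summarize(translational_errors(p_est, p_gt))
# r_max, r_min, r_rmse = summarize(rotational_errors(R_est, R_gt))
```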
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
MDPI and ACS Style: Luo, D.; Huang, Y.; Huang, X.; Miao, M.; Gao, X. W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots. Sensors 2024, 24, 5662. https://doi.org/10.3390/s24175662
AMA Style: Luo D, Huang Y, Huang X, Miao M, Gao X. W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots. Sensors. 2024; 24(17):5662. https://doi.org/10.3390/s24175662
Chicago/Turabian Style: Luo, Dingji, Yucan Huang, Xuchao Huang, Mingda Miao, and Xueshan Gao. 2024. "W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots" Sensors 24, no. 17: 5662. https://doi.org/10.3390/s24175662
APA Style: Luo, D., Huang, Y., Huang, X., Miao, M., & Gao, X. (2024). W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots. Sensors, 24(17), 5662. https://doi.org/10.3390/s24175662