A SLAM Method Based on Multi-Robot Cooperation for Pipeline Environments Underground
Abstract
1. Introduction
2. Methods and Models
2.1. Robot Queue
- The Leader and the Follower communicate with each other via broadcasting to exchange data.
- Each robot is equipped with a high-precision wheeled odometer and inertial navigation sensors to establish the basic positioning of its movement.
- The Leader carries ultrasonic sensors to observe the Follower and acquires an observation value of the relative distance between the two robots; it then sends this observation value to the Follower in real time.
- Since GPS signals can be received at the entrance and exit of the pipe, both robots are equipped with GPS receivers; using GPS fixes at these locations further reduces cumulative error.
- A 16-line LiDAR is installed on the Leader for point-cloud projection; its fused poses are used to complete the 3D reconstruction of the pipeline environment.
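As a toy illustration of the ultrasonic observation described above, the relative distance between the two robots can be modeled as the Euclidean distance between their planar positions plus a noise term. The function name and interface below are hypothetical, not taken from the paper:

```python
import math

def range_observation(leader_xy, follower_xy, noise=0.0):
    """Ideal ultrasonic range: the Euclidean distance between the two
    robots' planar positions, plus an optional additive noise term."""
    dx = leader_xy[0] - follower_xy[0]
    dy = leader_xy[1] - follower_xy[1]
    return math.hypot(dx, dy) + noise

# Leader at (3.0, 4.0), Follower at the origin -> range 5.0
print(range_observation((3.0, 4.0), (0.0, 0.0)))  # 5.0
```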
2.2. Cycle Creep Cooperation Strategy
- A. Stage 1 is the extension stage. At this stage, the Leader plays the role of the mover and the Follower the role of the reference point: the Leader moves forward and stops when its distance to the Follower reaches the maximum extension distance, while the Follower remains stationary.
- B. Stage 2 is the contraction stage. At this stage, the roles swap: the Leader becomes the reference and the Follower becomes the mover. The Leader is stationary, and the Follower moves in the Leader's direction of motion until it reaches the minimum contraction distance.
- 1. The ultrasonic observation reliability problem.
- 2. The robot following problem.
2.3. SLAM Algorithm Model
2.3.1. Factor Graph Structure
- Wheeled odometer factor.
- IMU pre-integration factor.
- GPS factor.
- Multi-robot pose constraint factor (ultrasonic range factor).
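As a minimal 1-D analogue of how these factors combine (a sketch, not the paper's multi-dimensional factor graph), the mover's odometry estimate and the reference-position-plus-range estimate can be treated as two Gaussian measurements of the same position and fused via an information-weighted mean. All names and numbers below are illustrative:

```python
def fuse(odom_x, odom_var, ref_x, range_meas, range_var):
    """1-D stand-in for the factor-graph fusion: the mover's odometry
    gives one Gaussian estimate of its position; the reference robot's
    known position plus the ultrasonic range gives a second one. The
    MAP estimate is their information-weighted mean."""
    x1, w1 = odom_x, 1.0 / odom_var               # odometer factor
    x2, w2 = ref_x + range_meas, 1.0 / range_var  # range factor
    x_map = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_map = 1.0 / (w1 + w2)
    return x_map, var_map

# Odometry says 10.2 m (var 0.04); reference at 8.0 m, range 2.0 m (var 0.01)
x, v = fuse(10.2, 0.04, 8.0, 2.0, 0.01)
print(round(x, 3), round(v, 3))
```

The fused estimate is pulled toward the more precise range-derived position, and the fused variance is smaller than either input variance.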
2.3.2. Factor Graph Optimization Solution
2.3.3. Algorithm Flow
- In the extension phase, the Leader plays the role of the mover and the Follower the role of the reference. The mover (Leader) moves ahead and its position changes; the reference (Follower) is stationary and its position stays the same. During this process, both robots obtain the current-frame poses of their respective Odoms, together with the current and previous frame timestamps. The pose information is employed as a prior factor to construct each robot's factor graph, and the two timestamps are used to temporally align the respective sensor data. The reference (Follower), being stationary, sends its cached pose information to the mover (Leader). The Leader pre-integrates the IMU data between the previous and current frame timestamps as a constraint and inserts the resulting IMU factor into its factor graph; it then uses the pose published by the reference (Follower) as a reference point to solve the relative displacement between its own Odom pose and the Follower's pose. The ultrasonic measurement value and this relative translation form an ultrasonic range factor, which is inserted into the factor graph; solving the graph yields the final pose output.
- Likewise, in the contraction phase, the Follower is the mover and the Leader the reference. As the mover (Follower) moves forward, its position changes, while the reference (Leader) is stationary and its position remains unchanged. Both robots obtain the current-frame pose of their respective Odoms, together with the current and previous frame timestamps; the pose information serves as a prior factor to construct each factor graph, and the timestamps are used to temporally align the respective sensor data. The reference (Leader), being stationary, sends its buffered pose and ultrasonic measurements to the mover (Follower). The Follower pre-integrates the IMU data between the previous and current frame timestamps as a constraint and inserts the resulting IMU factor into its factor graph; it then takes the pose published by the Leader as a reference point to solve the relative displacement between its own Odom pose and the Leader's pose. The ultrasonic measurement value and this relative translation form an ultrasonic range factor, which is inserted into the factor graph; solving the graph yields the Follower's final pose output.
- Furthermore, as the robot queue moves forward, point-cloud projection is performed by the 16-line LiDAR mounted on the Leader, using the final fused pose output, to complete the 3D reconstruction of the pipeline.
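The per-frame flow above can be sketched as follows. Here `fuse_step` is a stand-in that replaces the factor-graph optimization with a simple average of the odometry pose and the range-derived pose, and all names and values are illustrative assumptions:

```python
from collections import deque

def align(cache, t_prev, t_cur):
    """Temporal alignment: select cached sensor samples whose
    timestamps fall inside [t_prev, t_cur]."""
    return [s for (t, s) in cache if t_prev <= t <= t_cur]

def fuse_step(odom_pose, imu_samples, ref_pose, range_meas):
    """Stand-in for one factor-graph update on 1-D poses: average the
    mover's odometry pose with the pose implied by the stationary
    reference plus the ultrasonic range."""
    return 0.5 * odom_pose + 0.5 * (ref_pose + range_meas)

# One extension-phase frame: Leader is the mover, Follower the reference.
imu_cache = deque([(0.1, "a1"), (0.2, "a2"), (0.9, "a3")])
t_prev, t_cur = 0.0, 0.5                   # previous/current frame stamps
imu_window = align(imu_cache, t_prev, t_cur)
leader_pose = fuse_step(odom_pose=1.05,    # Leader Odom (drifting)
                        imu_samples=imu_window,
                        ref_pose=0.0,      # stationary Follower pose
                        range_meas=1.0)    # ultrasonic range
print(len(imu_window), leader_pose)
```

In the contraction phase the same two calls run on the Follower's side, with the Leader's cached pose and range measurements as the reference inputs.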
3. Experiment
3.1. Experimental Design
- SLAM testing in a simple pipeline environment. To verify the applicability and benefits of the proposed method in the pipeline environment, SLAM testing was performed, and the method was compared with the LIO-SAM algorithm, the FAST-LIO2.0 algorithm, and a conventional algorithm fusing IMU and Odom.
- SLAM testing in a complex urban pipeline environment, to verify the applicability of the proposed method in a larger-scale and more complex urban pipeline environment. The pipeline model was derived from the actual natural-gas pipeline network distribution map of Kunming city; the local pipeline network was modeled in SolidWorks and then converted to the URDF format.
- EVO evaluation analysis. The SLAM evaluation tool EVO was adopted to quantitatively assess and analyze the SLAM results of the proposed method in the complex urban pipeline environment.
3.2. SLAM Testing in General Environments
3.3. SLAM Test in Simple Pipeline Environments
3.4. SLAM Test in a Complex Urban Pipeline Environment
- The pipeline comprises a long straight pipe of about 2230 m, one tee pipe, two four-way pipes, and nine multi-angle bends.
- The total length of the pipeline is about 4000 m.
- The pipe’s inner diameter is 1016 mm.
3.5. EVO Evaluation Analysis
4. Conclusions and Discussion
- Due to the specificity of the pipeline environment, mainstream SLAM methods (e.g., LIO-SAM, FAST-LIO2.0, and the IMU-Odom fusion strategy) perform poorly in pipeline settings and are not applicable to the pipeline context.
- Given the advantages of multi-robot cooperation, the proposed method is less easily affected by the environment than existing methods. Besides its applicability to ordinary environments (e.g., rooms), it achieves better performance in highly self-similar pipeline environments. Moreover, as pipeline size and operation time increase, the method still maintains a good SLAM effect, with controllable cumulative error, high confidence, and robustness. Based on these results, forming robot queues and designing efficient queue cooperation strategies is critical for improving SLAM task performance and efficiency.
- The multi-robot pose constraint factor (i.e., the ultrasonic range factor) is built using the multi-robot cooperation method. It effectively reduces the covariance and is a critical factor for increasing SLAM efficiency and enhancing system robustness. The approach is not limited to ultrasonics: the ultrasonic sensor can be replaced with other observation methods (e.g., laser, Bluetooth, or infrared) according to the actual environment and accuracy requirements, to satisfy the needs of different scenarios.
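The covariance-reduction claim can be illustrated numerically in 1-D: a range observation to a known reference adds information, so the posterior variance is strictly smaller than with the odometry factor alone. The variance values below are made up for illustration:

```python
# Posterior variance of a 1-D position from an odometry factor alone,
# versus after adding a range factor to a known reference point.
# In the information (inverse-variance) form, information simply adds.
odom_var, range_var = 0.09, 0.04          # illustrative variances (m^2)
without_range = odom_var
with_range = 1.0 / (1.0 / odom_var + 1.0 / range_var)
print(without_range, round(with_range, 4))
```

Whatever the observation modality (ultrasonic, laser, Bluetooth, infrared), any extra independent measurement with finite variance shrinks the posterior in the same way.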
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
| Algorithm | Operation Status | Completion Degree | Noise | Map Ghosting |
|---|---|---|---|---|
| LIO-SAM | Failure | Incomplete | × | × |
| FAST-LIO2 | Failure | Incomplete | × | × |
| IMU-Odom fusion strategy | Success | Completed | More | Serious |
| Method of this study | Success | Completed | Less | None |
| Algorithm | RMSE (root mean square error) | Mean (mean absolute error) | Median (median error) | Std. (standard deviation) | Min (minimum error) | Max (maximum error) |
|---|---|---|---|---|---|---|
| IMU-Odom fusion strategy | 0.1810 | 0.1199 | 0.0718 | 0.1355 | 0.0011 | 1.5434 |
| Method of this study | 0.0681 | 0.0472 | 0.0391 | 0.0490 | 0.0008 | 0.7131 |
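The EVO metrics in the table above can be reproduced from a per-frame error series. The sketch below computes the same statistics on a made-up error list, using the population standard deviation (ddof = 0, NumPy's default):

```python
import math

def ape_stats(errors):
    """Absolute-pose-error statistics in the style reported by EVO:
    RMSE, mean, median, std, min, and max over per-frame errors."""
    n = len(errors)
    mean = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    s = sorted(errors)
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return {"rmse": rmse, "mean": mean, "median": median,
            "std": std, "min": s[0], "max": s[-1]}

print(ape_stats([0.1, 0.2, 0.3, 0.4]))
```

Note that RMSE ≥ mean always holds, which is consistent with both rows of the table.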
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lu, D.; Zhang, Y.; Gong, Z.; Wu, T. A SLAM Method Based on Multi-Robot Cooperation for Pipeline Environments Underground. Sustainability 2022, 14, 12995. https://doi.org/10.3390/su142012995