Multi-Robot SLAM Using Fast LiDAR Odometry and Mapping
Abstract
1. Introduction
2. Related Work
- Distributed MR-SLAM: An example of a distributed MR-SLAM system was introduced by Kishimoto et al. [13], who leverage Moving Horizon Estimation (MHE) to formulate the cost function on each robot and employ the Continuation/Generalized Minimum Residual (C/GMRES) method to reduce the computational overhead.
- Centralized MR-SLAM: Schmuck and Chli presented Centralized Collaborative Monocular SLAM (CCM-SLAM) [16], focusing on the scalability and robustness of the system and addressing concerns such as information loss and communication delays that frequently arise in real-world applications.
- We have developed an online centralized MR-SLAM approach based on LiDAR SLAM.
- We have tackled the speed and computational-complexity challenges of MR-SLAM, making it suitable for real-time applications, through two key adaptations: first, integrating F-LOAM to speed up local map construction and robot localization; second, refining the fusion algorithm on the central workstation to accelerate the merging of local maps. (A minimal sketch of the server-side collection loop follows this list.)
- The output of our system is a 3D global map represented as a point cloud.
- Compatibility between mapping approaches: The point cloud representation facilitates seamless data exchange among diverse mapping methodologies, even if they internally employ distinct representations. The flexibility of point clouds, devoid of constraints on point geometry, enables easy integration of metadata associated with each point.
- Enhanced cooperation among robots: A 3D map promotes efficient collaboration among robots that operate within a space unbounded by a 2D plane. This versatility allows robots with varying capabilities and mobility (e.g., humanoid robots, aerial vehicles, ground-based platforms, outdoor robots) to contribute effectively to the task at hand. By leveraging the diverse strengths and weaknesses of different robot types, the collective can perform tasks more efficiently.
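As a concrete illustration of this centralized data flow, below is a minimal sketch of a server-side ROS node that discovers each robot's map topic and caches its latest local map for the fusion stage of Section 3.2. All names here (node name, topic suffix, class) are hypothetical, since the paper does not publish its implementation; the only assumed interface is that each robot publishes its F-LOAM map as a sensor_msgs/PointCloud2.

```python
# Hypothetical central map-collection node (rospy, ROS Noetic).
import rospy
from sensor_msgs.msg import PointCloud2

class MapCollector:
    """Discovers per-robot map topics and stores the most recent cloud."""

    def __init__(self, suffix="/map"):
        self.latest = {}   # topic name -> latest PointCloud2 message
        self.suffix = suffix
        # Re-scan the ROS graph periodically so robots can join at any time.
        rospy.Timer(rospy.Duration(5.0), self.discover)

    def discover(self, _event):
        # rospy.get_published_topics() lists the (topic, type) pairs currently live.
        for topic, msg_type in rospy.get_published_topics():
            if msg_type == "sensor_msgs/PointCloud2" and topic.endswith(self.suffix):
                if topic not in self.latest:
                    self.latest[topic] = None
                    rospy.Subscriber(topic, PointCloud2, self.store,
                                     callback_args=topic)

    def store(self, msg, topic):
        self.latest[topic] = msg   # consumed later by the map-fusion stage

if __name__ == "__main__":
    rospy.init_node("map_fusion_server")
    MapCollector()
    rospy.spin()
```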
3. Methodology
3.1. F-LOAM SLAM
3.2. Map Fusion
- Discovering the Robots: This is performed by searching for all map topics in the ROS environment and then storing the maps.
- Estimate the Transform: This consists of two main parts. The first part estimates the transform between each pair of maps through the following steps (a minimal registration sketch follows this list):
- The maps are down-sampled using a voxel filter to reduce the computation time and mitigate noise and inaccuracies.
- Outliers are removed, which helps discard far-lying points.
- Surface normals are computed from the neighborhood of each point.
- Since we deal with point clouds, which lack color information, we use a detector that relies on geometric information, such as the Harris 3D point detector [25]. The computed surface normals are used to detect these keypoints.
- Compute Fast Point Feature Histogram (FPFH) descriptors [26].
- Match the descriptors of the two maps by finding the k-nearest descriptors.
- Refine the alignment with FAST_VGICP [8].
- The second part computes the global transforms: identification of the largest connected component, with matches whose confidence is below 1.0 being discarded.
- Construction of a maximum spanning tree to select the global reference frame and provide a single path from each node to the selected reference.
- Determination of the global transforms by chaining the pair-wise transforms along the tree.
- Merge the Maps: Obtain the maps and their transforms, then combine them to generate the 3D global map.
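To make the pipeline above concrete, here is a minimal sketch of the pairwise alignment and merging steps, written with Open3D (≥ 0.12) rather than the PCL-based tools named above: FPFH matching inside RANSAC stands in for the Harris 3D keypoint detection plus k-nearest descriptor matching, and point-to-plane ICP stands in for FAST_VGICP. All parameter values are illustrative assumptions, not the authors' settings.

```python
# Sketch of pairwise map alignment and merging (Open3D stand-ins, not the
# paper's PCL implementation).
import open3d as o3d

def preprocess(pcd, voxel_size):
    # Voxel-grid down-sampling: reduces computation time and smooths noise.
    down = pcd.voxel_down_sample(voxel_size)
    # Statistical outlier removal: discards far-lying points.
    down, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    # Surface normals from each point's neighborhood (required by FPFH).
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))
    # FPFH descriptors summarizing the local geometry around each point.
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100))
    return down, fpfh

def estimate_pairwise_transform(source, target, voxel_size=0.4):
    src, src_fpfh = preprocess(source, voxel_size)
    tgt, tgt_fpfh = preprocess(target, voxel_size)
    dist = 1.5 * voxel_size
    # Coarse alignment: nearest-neighbour descriptor matching inside RANSAC.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, dist,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(dist)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine alignment: point-to-plane ICP here, where the paper uses FAST_VGICP.
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, dist, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation, fine.fitness  # fitness acts as a confidence

def merge_maps(maps, global_transforms):
    # Transform each local map into the reference frame and concatenate.
    merged = o3d.geometry.PointCloud()
    for pcd, T in zip(maps, global_transforms):
        merged += o3d.geometry.PointCloud(pcd).transform(T)
    return merged.voxel_down_sample(0.1)  # final cleanup pass
```

The pairwise fitness values returned here can serve as the match confidences used to extract the largest connected component, and the global transform of each map is then obtained by composing the pair-wise matrices along its unique maximum-spanning-tree path to the chosen reference frame.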
4. Simulation Results
- VirtualBox Software: Provides a platform for testing software, running multiple operating systems on a single machine, and experimenting with various environments without affecting the host system. In our work, we used VirtualBox to install the Ubuntu operating system, which provides a better environment for ROS.
- Ubuntu 20.04 Operating System: A popular choice for roboticists and provides the most seamless and well-supported environment for ROS applications. Many ROS packages and libraries are built and tested primarily on Ubuntu, ensuring compatibility and stability.
- Noetic version of ROS: A preferred choice for roboticists because it targets Python 3, providing a migration path from legacy Python 2 code and existing ROS projects. It is a Long-Term Support (LTS) release with extended community support for five years, ensuring stability for long-term projects. Noetic runs on Ubuntu 20.04, a stable LTS operating system, and benefits from new features and an active community ecosystem.
- Gazebo: A widely used simulator for robotics due to its realistic, open-source, and extensible nature. It integrates seamlessly with ROS, supports multi-robot scenarios, provides accurate sensor simulation, and enjoys strong community support, making it an invaluable tool for developing and testing robotic systems.
- Part 1. The method was initially assessed using a bag-file dataset [29], a widely recognized benchmark for outdoor localization evaluation. The dataset was collected by six robots equipped with Velodyne VLP-16 LiDAR, cameras, and GPS that were driven through the environment. Within our study, we focused on a subset of the dataset involving only two robots. Their respective ground-truth trajectories are visualized in Figure 4, distinguished by blue and purple colors. A comprehensive breakdown of the dataset is given in Table 2.
- Discussion for Part 1: MR-FLOAM notably outperforms MR-ALOAM in map-matching accuracy against the robots' ground-truth paths, a distinction clearly visible when comparing Figure 5d with Figure 5c. This outcome highlights MR-FLOAM's superior alignment capability. Furthermore, our proposed method achieves a marked reduction in computing time, as shown in Table 3, making it efficient enough for real-time mapping applications.
- Part 2. To implement the proposed system in a virtual environment, we used the following setup:
- Gazebo simulator to simulate the environment, which is shown in Figure 6.
- Discussion for Part 2: Figure 7a,b show the partially constructed maps of the individual robots. These separate maps are then collated by the fusion algorithm on the server. The outcome of this fusion is shown in Figure 7c,d, where the merged map provides a comprehensive, cohesive representation of the environment. (A short sketch of the RMSE metric reported in Tables 4 and 5 follows this list.)
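For reference, the trajectory-accuracy figures in Tables 4 and 5 are root-mean-square errors. A minimal sketch of this metric, assuming per-pose position errors over time-associated trajectories expressed in a common frame (the usual absolute-trajectory-error convention; the paper does not spell out its exact procedure), is:

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """RMSE between matched (N, 3) arrays of estimated and ground-truth positions."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)  # per-pose error
    return float(np.sqrt(np.mean(errors ** 2)))
```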
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
1. Moura, A.; Antunes, J.; Dias, A.; Martins, A.; Almeida, J. Graph-SLAM Approach for Indoor UAV Localization in Warehouse Logistics Applications. In Proceedings of the 2021 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Santa Maria da Feira, Portugal, 28–29 April 2021; pp. 4–11.
2. Kazerouni, I.A.; Fitzgerald, L.; Dooly, G.; Toal, D. A survey of state-of-the-art on visual SLAM. Expert Syst. Appl. 2022, 205, 117734.
3. Debeunne, C.; Vivet, D. A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping. Sensors 2020, 20, 2068.
4. Cunha, F.; Youcef-Toumi, K. Ultra-Wideband Radar for Robust Inspection Drone in Underground Coal Mines. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 86–92.
5. Milz, S.; Arbeiter, G.; Witt, C.; Abdallah, B.; Yogamani, S. Visual SLAM for Automated Driving: Exploring the Applications of Deep Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA, 18–23 June 2018; pp. 360–370.
6. Ito, S.; Hiratsuka, S.; Ohta, M.; Matsubara, H.; Ogawa, M. Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle. Sensors 2018, 18, 177.
7. Li, R.; Liu, J.; Zhang, L.; Hang, Y. LIDAR/MEMS IMU integrated navigation (SLAM) method for a small UAV in indoor environments. In Proceedings of the 2014 DGON Inertial Sensors and Systems (ISS), Karlsruhe, Germany, 16–17 September 2014; pp. 18–32.
8. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
9. Zhang, J.; Singh, S. Low-drift and real-time lidar odometry and mapping. Auton. Robot. 2017, 41, 401–416.
10. Gonzalez, C.; Adams, M. An improved feature extractor for the Lidar Odometry and Mapping (LOAM) algorithm. In Proceedings of the 2019 International Conference on Control, Automation and Information Sciences (ICCAIS), Chengdu, China, 23–26 October 2019; pp. 1–7.
11. Wang, H.; Wang, C.; Chen, C.-L.; Xie, L. F-LOAM: Fast LiDAR Odometry and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 4390–4396.
12. Li, L.; Kong, X.; Zhao, X.; Li, W.; Wen, F.; Zhang, H.; Liu, Y. SA-LOAM: Semantic-aided LiDAR SLAM with Loop Closure. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 30 May–5 June 2021; pp. 7627–7634.
13. Kishimoto, Y.; Takaba, K.; Ohashi, A. Moving Horizon Multi-Robot SLAM Based on C/GMRES Method. In Proceedings of the International Conference on Advanced Mechatronic Systems (ICAMechS), Kusatsu, Japan, 26–28 August 2019.
14. Chang, Y.; Tian, Y.; How, J.P.; Carlone, L. Kimera-Multi: A System for Distributed Multi-Robot Metric-Semantic Simultaneous Localization and Mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, 30 May–5 June 2021; pp. 11210–11218.
15. Lajoie, P.Y.; Beltrame, G. Swarm-SLAM: Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems. arXiv 2023, arXiv:2301.06230.
16. Schmuck, P.; Chli, M. CCM-SLAM: Robust and efficient centralized collaborative monocular simultaneous localization and mapping for robotic teams. J. Field Robot. 2019, 36, 763–781.
17. Boroson, E.R.; Hewitt, R.; Ayanian, N.; de la Croix, J.-P. Inter-Robot Range Measurements in Pose Graph Optimization. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 4806–4813.
18. Liu, W. SLAM algorithm for multi-robot communication in unknown environment based on particle filter. J. Ambient Intell. Humaniz. Comput. 2021, 1–9.
19. Chen, Y.; Wang, Y.; Lin, J.; Chen, Z.; Wang, Y. Multi-Robot Point Cloud Map Fusion Algorithm Based on Visual SLAM. In Proceedings of the 2021 IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 15–17 January 2021; pp. 329–333.
20. Chang, Y.; Ebadi, K.; Denniston, C.E.; Ginting, M.F.; Rosinol, A.; Reinke, A.; Palieri, M.; Shi, J.; Chatterjee, A.; Morrell, B.; et al. LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments. IEEE Robot. Autom. Lett. 2022, 7, 9175–9182.
21. Ye, K.; Dong, S.; Fan, Q.; Wang, H.; Yi, L.; Xia, F.; Wang, J.; Chen, B. Multi-Robot Active Mapping via Neural Bipartite Graph Matching. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 14819–14828.
22. Kshirsagar, J.; Shue, S.; Conrad, J.M. A Survey of Implementation of Multi-Robot Simultaneous Localization and Mapping. In Proceedings of SoutheastCon 2018, St. Petersburg, FL, USA, 19–22 April 2018.
23. Saeedi, S.; Trentini, M.; Seto, M.; Li, H. Multiple-robot simultaneous localization and mapping: A review. J. Field Robot. 2016, 33, 3–46.
24. Hörner, J. Automatic Point Clouds Merging. Master's Thesis, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, 2018.
25. Harris, C.; Stephens, M. A Combined Corner and Edge Detector. In Proceedings of the Alvey Vision Conference (AVC), Manchester, UK, 31 August–2 September 1988.
26. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA), Kobe, Japan, 12–17 May 2009; pp. 3212–3217.
27. Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395.
28. Golub, G.H.; Reinsch, C. Singular value decomposition and least squares solutions. Numer. Math. 1970, 14, 403–420.
29. Tian, Y.; Chang, Y.; Quang, L.; Schang, A.; Nieto-Granda, C.; How, J.P.; Carlone, L. Resilient and Distributed Multi-Robot Visual SLAM: Datasets, Experiments, and Lessons Learned. arXiv 2023, arXiv:2304.04362.
30. Neor_Mini Ackerman Mobile Base. Available online: https://github.com/COONEO/neor_mini (accessed on 3 March 2021).
Table 1. Comparison of the related MR-SLAM systems.

Work | Sensor | Map Type | Computational Efficiency
---|---|---|---
[13] | Laser range scanner | 2D grid map | Reasonable
[14] | IMU + stereo camera | 3D metric-semantic meshes | Efficient compared with other distributed systems
[15] | LiDAR, camera | 3D feature map | Accuracy needs enhancement
[16] | Monocular camera | 3D feature map | KF is costly for large spaces
[17] | UWB range sensor | 2D feature map | Reasonable
[18] | RGB camera | 2D occupancy map | Reasonable, but increases for large spaces
[19] | Camera | 2D feature map | Becomes efficient at large scale
[20] | LiDAR | 3D feature map | Centralized architecture may not scale to large robot teams
[21] | Camera | 2D occupancy map | Efficient compared with other neural-network-based systems
Our work | VLP-16 (Velodyne Puck) LiDAR | 3D feature map | Low computing time compared with the others
Table 2. Details of the dataset sequences used in Part 1 [29].

Sequence | Bag File | Size | Velodyne Points Topic
---|---|---|---
1 | acl_jackal.bag | 8.0 GB | /acl_jackal/lidar_points
2 | acl_jackal2.bag | 12.6 GB | /acl_jackal2/lidar_points
3 | thoth.bag | 5.86 GB | /thoth/lidar_points
4 | sparkal1-001.bag | 10.86 GB | /sparkal1-001/lidar_points
5 | sparkal2-001.bag | 12 GB | /sparkal2-001/lidar_points
6 | hathor.bag | 8.81 GB | /hathor/lidar_points
Table 3. Computing time of the two multi-robot implementations.

MR Implementation Method | Computing Time (ms)
---|---
MR-ALOAM | 53,782.476562
MR-FLOAM | 18,360.228516
Table 4. Trajectory RMSE for each robot in Part 1.

 | RMSE (MR-ALOAM) | RMSE (MR-FLOAM)
---|---|---
Robot 1 | 8.685816 | 13.122269
Robot 2 | 2.914255 | 2.745622
Table 5. Trajectory RMSE and global map computation time in Part 2.

 | RMSE | Global Map Computation Time
---|---|---
Robot 1 | 0.960604 | 296.395529 ms
Robot 2 | 0.707429 |

The computation time is reported once because it refers to the single merged global map.