LFVB-BioSLAM: A Bionic SLAM System with a Light-Weight LiDAR Front End and a Bio-Inspired Visual Back End
Abstract
1. Introduction
- We employ a lightweight range flow-based LiDAR odometry algorithm as the front end of our SLAM system, which quickly generates horizontal planar odometry from the 3D LiDAR point cloud input.
- Our SLAM system incorporates a bio-inspired visual loop closure detection and path integration algorithm as the back end, which uses the odometry estimate from the front end, together with the image input, to estimate the robot's pose and construct the environmental map.
- We propose a novel bionic SLAM system, LFVB-BioSLAM, which combines the aforementioned front-end and back-end components. Through real-world experiments, we validate the feasibility of LFVB-BioSLAM and demonstrate its superior accuracy and robustness.
2. Materials and Methods
2.1. Front End: A Light-Weight Range Flow-Based LiDAR Odometry Algorithm
2.1.1. Horizontal Single-Ring Point Cloud Extraction
2.1.2. Range Flow Constraint Equation
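For context, the range flow (scan flow) formulation that RF2O [32] builds on, following range flow estimation [33], can be summarized as follows; the notation here is generic and the paper's exact derivation may differ. Treating the extracted horizontal ring as a range function \( r(t,\alpha) \), a static point observed from a sensor moving with planar twist \( (v_x, v_y, \omega) \) satisfies the range flow constraint

\[ \frac{\partial r}{\partial t} + \frac{\partial r}{\partial \alpha}\,\dot{\alpha} = \dot{r}, \]

where, with \( (x, y) = (r\cos\alpha,\; r\sin\alpha) \), the apparent motion of the point in the sensor frame is

\[ \dot{x} = -v_x + \omega\, y, \qquad \dot{y} = -v_y - \omega\, x, \qquad \dot{r} = \dot{x}\cos\alpha + \dot{y}\sin\alpha, \qquad r\,\dot{\alpha} = -\dot{x}\sin\alpha + \dot{y}\cos\alpha . \]

Substituting the last two relations into the constraint yields one linear equation in \( (v_x, v_y, \omega) \) per scan point, so the planar sensor motion can be estimated from a single horizontal ring of the point cloud.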
2.1.3. Estimation of Odometry
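Stacking one such linearized constraint per point gives an overdetermined linear system that is typically solved robustly. The following is a minimal sketch, assuming the constraints have already been linearized into a hypothetical matrix `A` and vector `b`; it illustrates iteratively reweighted least squares (IRLS) with a generic Cauchy weight, echoing the coarse-to-fine/IRLS parameters listed in the table below, and is not the paper's exact solver.

```python
import numpy as np

def solve_planar_twist(A, b, iters=5, c=1.0):
    """Robustly solve A @ [vx, vy, w] ~= b with IRLS (generic Cauchy weights).

    A : (N, 3) array, one linearized range flow constraint per scan point.
    b : (N,) array of constraint targets.
    """
    x = np.zeros(3)
    for _ in range(iters):
        r = A @ x - b                      # current residuals
        w = 1.0 / (1.0 + (r / c) ** 2)     # Cauchy weights: down-weight outliers
        W = np.sqrt(w)[:, None]
        x, *_ = np.linalg.lstsq(W * A, W[:, 0] * b, rcond=None)
    return x  # estimated (vx, vy, omega)

# Toy usage with synthetic constraints:
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 3))
true_twist = np.array([0.3, -0.1, 0.05])
b = A @ true_twist + 0.01 * rng.normal(size=200)
print(solve_planar_twist(A, b))
```

In a coarse-to-fine scheme, a solve of this kind would be repeated at each pyramid level, warping the scan with the current estimate before re-linearizing.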
2.2. Back End: A Bio-Inspired Visual Loop Closure Detection and Path Integration Algorithm
2.2.1. Local View Cell
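As in RatSLAM-style local view cells [21,22], incoming images can be reduced to small, intensity-normalized templates and compared against stored templates with the sum of absolute differences (SAD); if no stored template matches closely enough, a new local view cell is created. The sketch below is illustrative only: the template size (60 × 10), matching threshold (0.03), and normalization factor (0.4) echo the parameter table below, but the paper's exact cropping, normalization, and matching procedure may differ.

```python
import numpy as np

def to_template(gray_image, size=(10, 60), norm_factor=0.4):
    """Downsample a grayscale image (values in [0, 1]) to a small template
    whose mean intensity is normalized to norm_factor."""
    h, w = gray_image.shape
    ys = np.linspace(0, h - 1, size[0]).astype(int)
    xs = np.linspace(0, w - 1, size[1]).astype(int)
    t = gray_image[np.ix_(ys, xs)].astype(float)
    return t * (norm_factor / (t.mean() + 1e-9))

def match_or_create(templates, template, threshold=0.03, max_shift=4):
    """Return (index, is_new): best mean-SAD match over small horizontal shifts,
    or append a new template (new local view cell) when nothing matches."""
    best_idx, best_err = -1, np.inf
    for i, stored in enumerate(templates):
        for s in range(-max_shift, max_shift + 1):
            err = np.abs(np.roll(stored, s, axis=1) - template).mean()
            if err < best_err:
                best_idx, best_err = i, err
    if templates and best_err < threshold:
        return best_idx, False   # familiar view: loop closure candidate
    templates.append(template)
    return len(templates) - 1, True
```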
2.2.2. Pose Cell Network
- Visual templates from the local view cell module;
- Odometry estimates from the front-end LiDAR odometry algorithm (a toy sketch of the attractor dynamics driven by these inputs follows this list).
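These two input streams drive the pose cell network, a continuous attractor network (CAN) over (x, y, θ) as in RatSLAM [21,22]: odometry shifts the activity packet (path integration), familiar visual templates inject energy at their previously associated poses, and local excitation with global inhibition keeps a single dominant packet. The toy sketch below illustrates these dynamics under simplifying assumptions (odometry already converted to pose cell units, one injection site per template); it is not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

class PoseCellNetwork:
    """Toy continuous attractor network over (x, y, theta); illustrative only."""

    def __init__(self, side=100, n_theta=36):
        self.P = np.zeros((side, side, n_theta))
        self.P[side // 2, side // 2, 0] = 1.0      # initial activity packet

    def path_integrate(self, dx_cells, dy_cells, dtheta_cells):
        """Shift the activity packet by odometry expressed in pose cell units
        (dtheta_cells is an integer number of heading cells)."""
        self.P = shift(self.P, (dx_cells, dy_cells, 0), mode='wrap', order=1)
        self.P = np.roll(self.P, dtheta_cells, axis=2)

    def inject(self, x, y, th, energy=0.1):
        """A familiar visual template injects energy at its associated pose."""
        self.P[x, y, th] += energy

    def settle(self):
        """Local excitation, global inhibition, and normalization."""
        self.P = gaussian_filter(self.P, sigma=1.0, mode='wrap')
        self.P = np.clip(self.P - 0.002, 0.0, None)
        self.P /= self.P.sum() + 1e-12

    def peak(self):
        """Decode the dominant (x, y, theta) cell."""
        return np.unravel_index(np.argmax(self.P), self.P.shape)
```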
- Creating a new node (along with creating an edge from the previous node to the new node);
- Creating an edge between two existing nodes;
- Setting the current pose as an existing node.
2.2.3. Experience Map
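The experience map ties the three graph operations listed above into a semi-metric pose graph: nodes store estimated poses, edges store relative displacements from odometry and loop closures, and an iterative graph relaxation spreads loop closure corrections through the map (the parameter table below specifies 20 relaxation loops per iteration). The following is a minimal sketch of such a relaxation on 2D node positions, in the spirit of RatSLAM's experience map [21,22]; it is illustrative, not the paper's implementation, and ignores orientation for brevity.

```python
import numpy as np

def relax(poses, edges, loops=20, rate=0.5):
    """Toy experience map relaxation on 2D node positions.

    poses : (N, 2) array of node positions (x, y).
    edges : list of (i, j, dxy), where dxy is the stored displacement from
            node i to node j (from odometry or a loop closure).
    Each loop nudges connected nodes toward agreement with the stored edges.
    """
    poses = poses.copy()
    for _ in range(loops):
        corrections = np.zeros_like(poses)
        counts = np.zeros(len(poses))
        for i, j, dxy in edges:
            err = (poses[i] + dxy) - poses[j]   # disagreement along this edge
            corrections[i] -= err
            corrections[j] += err
            counts[i] += 1
            counts[j] += 1
        mask = counts > 0
        poses[mask] += rate * corrections[mask] / counts[mask][:, None]
    return poses

# Toy usage: a three-node chain with a loop closure pulling the last node back.
poses = np.array([[0.0, 0.0], [1.0, 0.0], [2.1, 0.3]])
edges = [(0, 1, np.array([1.0, 0.0])),
         (1, 2, np.array([1.0, 0.0])),
         (2, 0, np.array([-2.0, 0.0]))]
print(relax(poses, edges))
```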
3. Results
3.1. Experimental Setup
3.2. Experimental Results
- RatSLAM [21], a classical bio-inspired visual SLAM system, whose bio-inspired back-end processing mechanism is similar to that of our LFVB-BioSLAM;
- RF2O [32], a range flow-based horizontal planar laser odometry, whose processing mechanism is similar to that of the front-end odometry estimation algorithm in our LFVB-BioSLAM;
- Our proposed LFVB-BioSLAM, with a LiDAR-based front end and a vision-based bio-inspired back end (a trajectory evaluation sketch using the evo toolkit [38] follows this list).
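The accuracy results below are reported as translational and rotational absolute pose error (APE) statistics (max, mean, RMSE), which can be computed with the evo toolkit [38] cited by the paper. A minimal sketch using evo's Python API is shown below; the trajectory file names are placeholders, and the exact evaluation settings used by the authors are not reproduced here.

```python
from evo.core import metrics, sync
from evo.tools import file_interface

# Placeholder file names; trajectories in TUM format (timestamp tx ty tz qx qy qz qw).
traj_ref = file_interface.read_tum_trajectory_file("groundtruth.txt")
traj_est = file_interface.read_tum_trajectory_file("lfvb_bioslam.txt")

# Associate poses by timestamp and align the estimate to the ground truth (SE(3), no scale).
traj_ref, traj_est = sync.associate_trajectories(traj_ref, traj_est)
traj_est.align(traj_ref, correct_scale=False)

for relation in (metrics.PoseRelation.translation_part, metrics.PoseRelation.rotation_angle_deg):
    ape = metrics.APE(relation)
    ape.process_data((traj_ref, traj_est))
    print(relation.value,
          "max", ape.get_statistic(metrics.StatisticsType.max),
          "mean", ape.get_statistic(metrics.StatisticsType.mean),
          "rmse", ape.get_statistic(metrics.StatisticsType.rmse))
```

Here `translation_part` yields the APE in meters and `rotation_angle_deg` in degrees, matching the units used in the tables below.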
4. Discussion
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
SLAM | Simultaneous localization and mapping
IMU | Inertial measurement unit
HDR | High dynamic range
SNN | Spiking neural network
STDP | Spike-timing-dependent plasticity
VPR | Visual place recognition
LO | LiDAR odometry
SAD | Sum of absolute differences
CAN | Continuous attractor network
SOTA | State-of-the-art
RMSE | Root mean square error
References
- Thrun, S. Probabilistic robotics. Commun. ACM 2002, 45, 52–57. [Google Scholar] [CrossRef]
- Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
- Zhuang, G.; Bing, Z.; Huang, Y.; Huang, K.; Knoll, A. A Biologically-Inspired Simultaneous Localization and Mapping System Based on LiDAR Sensor. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13136–13142. [Google Scholar]
- Cadena, C.; Carlone, L.; Carrillo, H.; Latif, Y.; Scaramuzza, D.; Neira, J.; Reid, I.; Leonard, J.J. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Trans. Robot. 2016, 32, 1309–1332. [Google Scholar] [CrossRef]
- Saputra, M.R.U.; Markham, A.; Trigoni, N. Visual SLAM and structure from motion in dynamic environments: A survey. ACM Comput. Surv. (CSUR) 2018, 51, 1–36. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
- Doer, C.; Trommer, G.F. Radar visual inertial odometry and radar thermal inertial odometry: Robust navigation even in challenging visual conditions. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 331–338. [Google Scholar]
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.; Tardós, J.D. ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
- Dai, W.; Zhang, Y.; Li, P.; Fang, Z.; Scherer, S. RGB-D SLAM in dynamic environments using point correlations. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 373–389. [Google Scholar] [CrossRef]
- Saputra, M.R.U.; de Gusmao, P.P.; Lu, C.X.; Almalioglu, Y.; Rosa, S.; Chen, C.; Wahlström, J.; Wang, W.; Markham, A.; Trigoni, N. DeepTIO: A deep thermal-inertial odometry with visual hallucination. IEEE Robot. Autom. Lett. 2020, 5, 1672–1679. [Google Scholar] [CrossRef]
- Zhou, Y.; Gallego, G.; Shen, S. Event-based stereo visual odometry. IEEE Trans. Robot. 2021, 37, 1433–1450. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. Robot. Sci. Syst. 2014, 2, 1–9. [Google Scholar]
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. FAST-LIO2: Fast direct lidar-inertial odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardós, J.D. ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
- Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625. [Google Scholar] [CrossRef]
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
- Rebecq, H.; Horstschäfer, T.; Gallego, G.; Scaramuzza, D. EVO: A geometric approach to event-based 6-DOF parallel tracking and mapping in real time. IEEE Robot. Autom. Lett. 2016, 2, 593–600. [Google Scholar] [CrossRef]
- Vidal, A.R.; Rebecq, H.; Horstschaefer, T.; Scaramuzza, D. Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robot. Autom. Lett. 2018, 3, 994–1001. [Google Scholar] [CrossRef]
- Huang, K.; Zhang, S.; Zhang, J.; Tao, D. Event-based Simultaneous Localization and Mapping: A Comprehensive Survey. arXiv 2023, arXiv:2304.09793. [Google Scholar]
- Milford, M.J.; Wyeth, G.F.; Prasser, D. RatSLAM: A hippocampal model for simultaneous localization and mapping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '04), New Orleans, LA, USA, 26 April–1 May 2004; Volume 1, pp. 403–408. [Google Scholar]
- Ball, D.; Heath, S.; Wiles, J.; Wyeth, G.; Corke, P.; Milford, M. OpenRatSLAM: An open source brain-based SLAM system. Auton. Robot. 2013, 34, 149–176. [Google Scholar] [CrossRef]
- Milford, M.; Wyeth, G. Persistent navigation and mapping using a biologically inspired SLAM system. Int. J. Robot. Res. 2010, 29, 1131–1153. [Google Scholar] [CrossRef]
- Milford, M.J.; Wyeth, G.F. Mapping a suburb with a single camera using a biologically inspired SLAM system. IEEE Trans. Robot. 2008, 24, 1038–1053. [Google Scholar] [CrossRef]
- Yu, F.; Shang, J.; Hu, Y.; Milford, M. NeuroSLAM: A brain-inspired SLAM system for 3D environments. Biol. Cybern. 2019, 113, 515–545. [Google Scholar] [CrossRef] [PubMed]
- Çatal, O.; Jansen, W.; Verbelen, T.; Dhoedt, B.; Steckel, J. LatentSLAM: Unsupervised multi-sensor representation learning for localization and mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 6739–6745. [Google Scholar]
- Safa, A.; Verbelen, T.; Ocket, I.; Bourdoux, A.; Sahli, H.; Catthoor, F.; Gielen, G. Fusing Event-based Camera and Radar for SLAM Using Spiking Neural Networks with Continual STDP Learning. arXiv 2022, arXiv:2210.04236. [Google Scholar]
- Hussaini, S.; Milford, M.; Fischer, T. Spiking neural networks for visual place recognition via weighted neuronal assignments. IEEE Robot. Autom. Lett. 2022, 7, 4094–4101. [Google Scholar] [CrossRef]
- Tang, G.; Shah, A.; Michmizos, K.P. Spiking neural network on neuromorphic hardware for energy-efficient unidimensional SLAM. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 4176–4181. [Google Scholar]
- Kreiser, R.; Cartiglia, M.; Martel, J.N.; Conradt, J.; Sandamirskaya, Y. A neuromorphic approach to path integration: A head-direction spiking neural network with vision-driven reset. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5. [Google Scholar]
- Kreiser, R.; Renner, A.; Sandamirskaya, Y.; Pienroj, P. Pose estimation and map formation with spiking neural networks: Towards neuromorphic SLAM. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 2159–2166. [Google Scholar]
- Jaimez, M.; Monroy, J.G.; Gonzalez-Jimenez, J. Planar odometry from a radial laser scanner. A range flow-based approach. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4479–4485. [Google Scholar]
- Spies, H.; Jähne, B.; Barron, J.L. Range flow estimation. Comput. Vis. Image Underst. 2002, 85, 209–231. [Google Scholar] [CrossRef]
- Hafting, T.; Fyhn, M.; Molden, S.; Moser, M.B.; Moser, E.I. Microstructure of a spatial map in the entorhinal cortex. Nature 2005, 436, 801–806. [Google Scholar] [CrossRef] [PubMed]
- O’Keefe, J.; Dostrovsky, J. The hippocampus as a spatial map: Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 1971, 34, 171–175. [Google Scholar] [CrossRef]
- Taube, J.S.; Muller, R.U.; Ranck, J.B. Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 1990, 10, 420–435. [Google Scholar] [CrossRef]
- Chen, K.; Lopez, B.T.; Agha-mohammadi, A.-a.; Mehta, A. Direct lidar odometry: Fast localization with dense point clouds. IEEE Robot. Autom. Lett. 2022, 7, 2000–2007. [Google Scholar] [CrossRef]
- Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. 2017. Available online: https://github.com/MichaelGrupp/evo (accessed on 1 April 2023).
Algorithm | Parameter | Value
---|---|---
RF2O & LFVB-BioSLAM | coarse-to-fine levels | 5
RF2O & LFVB-BioSLAM | IRLS iterations | 5
RF2O & LFVB-BioSLAM | Gaussian mask | (0.0625, 0.25, 0.375, 0.25, 0.0625)
RatSLAM & LFVB-BioSLAM | max y for image cropping | 1000
RatSLAM & LFVB-BioSLAM | visual template matching threshold | 0.03
RatSLAM & LFVB-BioSLAM | x size of visual template | 60
RatSLAM & LFVB-BioSLAM | y size of visual template | 10
RatSLAM & LFVB-BioSLAM | visual template matching step | 1
RatSLAM & LFVB-BioSLAM | visual template normalization factor | 0.4
RatSLAM & LFVB-BioSLAM | x size of pose cell | 2
RatSLAM & LFVB-BioSLAM | side length of plane of pose cell network | 100
RatSLAM & LFVB-BioSLAM | experience map graph relaxation loops per iteration | 20
Experiment | Trajectory Length [m] | Algorithm | Trans. APE Max [m] | Trans. APE Mean [m] | Trans. APE RMSE [m]
---|---|---|---|---|---
Exp. 1 | 74.76 | RatSLAM * | / | / | /
Exp. 1 | 74.76 | RF2O | 1.3116 | 0.5484 | 0.6366
Exp. 1 | 74.76 | LFVB-BioSLAM | 1.0280 | 0.5347 | 0.5835
Exp. 2 | 101.76 | RatSLAM * | / | / | /
Exp. 2 | 101.76 | RF2O | 1.8066 | 0.3913 | 0.4838
Exp. 2 | 101.76 | LFVB-BioSLAM | 0.5480 | 0.2967 | 0.3136
Experiment | Trajectory Length [m] | Algorithm | Rot. APE Max [°] | Rot. APE Mean [°] | Rot. APE RMSE [°]
---|---|---|---|---|---
Exp. 1 | 74.76 | RatSLAM * | / | / | /
Exp. 1 | 74.76 | RF2O | 8.3100 | 1.6831 | 1.8798
Exp. 1 | 74.76 | LFVB-BioSLAM | 2.8556 | 1.6128 | 1.7079
Exp. 2 | 101.76 | RatSLAM * | / | / | /
Exp. 2 | 101.76 | RF2O | 4.9346 | 1.3323 | 1.6260
Exp. 2 | 101.76 | LFVB-BioSLAM | 2.8120 | 1.0148 | 1.1193
Algorithm | Computational Complexity | Accuracy | Robustness
---|---|---|---
RatSLAM | + | + | +
RF2O | ++ | ++ | ++
LFVB-BioSLAM | +++ | +++ | +++