A Novel Multi-Sensor Nonlinear Tightly-Coupled Framework for Composite Robot Localization and Mapping
Abstract
1. Introduction
- We propose an illuminance conversion model based on RGB images. The model takes the mean values of multiple RGB image samples captured under the maximum and minimum illumination conditions of a simulated indoor rescue scenario and normalizes them to serve as standard reference values.
- We perform nonlinear interpolation and weighted fusion of the visual sensor data according to the real-time illuminance value. Two types of image features are weighted and fused, and the weight allocation changes dynamically according to thresholds set on the illuminance value, keeping the multi-sensor data tightly coupled (a minimal sketch of this conversion and weighting appears after this list).
- We introduce IIVL-LM, a tightly coupled localization and mapping system for composite robots built on the R3LIVE++ framework. The system integrates an IMU, visual sensors (infrared and RGB), and LiDAR, as illustrated in Figure 1. In addition, by exploiting the depth-sensing capability of the infrared camera, we optimize the VIO fusion module.
- We conducted extensive experiments in simulated indoor rescue scenarios and on the TUM-VI dataset [10], comparing our framework against similar systems.
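The Python sketch below illustrates the idea behind the first two contributions, purely for orientation: the BT.601 luma weights, the calibration constants LUMA_MIN/LUMA_MAX, the thresholds, and the smoothstep interpolation are all assumptions standing in for the paper's calibrated values and exact nonlinear scheme.

```python
import numpy as np

# Hypothetical calibration constants: mean luma of RGB samples captured under the
# minimum and maximum illumination of the simulated rescue scene (not from the paper).
LUMA_MIN = 18.0   # assumed mean gray level under the darkest condition
LUMA_MAX = 212.0  # assumed mean gray level under the brightest condition

def normalized_illuminance(rgb_image: np.ndarray) -> float:
    """Map an RGB frame to a normalized illuminance value in [0, 1].

    Uses ITU-R BT.601 luma weights as a stand-in for the paper's conversion
    model, then rescales by the calibrated min/max means.
    """
    luma = (rgb_image[..., 0] * 0.299
            + rgb_image[..., 1] * 0.587
            + rgb_image[..., 2] * 0.114)
    value = (luma.mean() - LUMA_MIN) / (LUMA_MAX - LUMA_MIN)
    return float(np.clip(value, 0.0, 1.0))

def feature_weights(l_norm: float, low_thr: float = 0.3, high_thr: float = 0.7):
    """Allocate fusion weights (w_infrared, w_rgb) from normalized illuminance.

    Below low_thr the infrared features dominate, above high_thr the RGB features
    dominate, and in between a cubic smoothstep gives a nonlinear transition. The
    thresholds and the smoothstep are illustrative choices, not the paper's exact scheme.
    """
    t = np.clip((l_norm - low_thr) / (high_thr - low_thr), 0.0, 1.0)
    w_rgb = t * t * (3.0 - 2.0 * t)  # cubic smoothstep
    return 1.0 - w_rgb, w_rgb

# Example: a dim frame should push weight toward the infrared features.
frame = np.full((480, 752, 3), 40, dtype=np.uint8)  # 752x480 matches the RGB camera resolution
w_ir, w_rgb = feature_weights(normalized_illuminance(frame))
print(f"w_ir={w_ir:.2f}, w_rgb={w_rgb:.2f}")
```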
2. Related Work
2.1. LiDAR-Inertial SLAM
2.2. Visual-Inertial SLAM
2.3. LiDAR-Visual Fusion SLAM
3. Nonlinear Tightly-Coupled Model
3.1. Overview of IIVL-LM System Framework
3.2. Basic Sensors Fusion Theory
3.3. Illuminance Conversion Model
3.4. Nonlinear Feature Weight Allocation Method
3.5. VIO (Visual and IMU Fusion) Odometry Based on Stereoscopic Information
- Exploiting the rapid response of the IMU, we employed IMU measurements to drive the process model, accommodating the high maneuverability of composite robots.
- Because stereo-vision pose estimates do not accumulate drift, we used stereo-vision localization estimates as observations in the observation model to correct errors in the IMU measurements.
- To account for instantaneous sliding and jumping during the robot’s motion, we modeled the observed velocities along the y-axis and z-axis as zero-mean noise (a simplified filter-level sketch of these three points follows this list).
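The bullets above can be restated as a toy Kalman filter, shown below under strong simplifying assumptions that are not from the paper: a linear constant-velocity state, hypothetical noise covariances, and IMU acceleration already gravity-compensated in the world frame. The IMU drives the prediction step, a stereo position fix is the correcting observation, and the y/z velocities enter as zero-mean pseudo-measurements.

```python
import numpy as np

dt = 0.01  # IMU period (100 Hz, consistent with the platform table in Section 4.1)

# State x = [px, py, pz, vx, vy, vz]
F = np.eye(6)
F[0:3, 3:6] = dt * np.eye(3)                              # position integrates velocity
B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])  # acceleration input

Q = 1e-3 * np.eye(6)      # assumed process noise
R_pos = 1e-2 * np.eye(3)  # assumed stereo-position measurement noise
R_vel = 1e-1 * np.eye(2)  # assumed noise of the zero-velocity pseudo-measurement

x = np.zeros(6)
P = np.eye(6)

def predict(accel_world: np.ndarray) -> None:
    """Process model driven by IMU acceleration (assumed gravity-compensated, world frame)."""
    global x, P
    x = F @ x + B @ accel_world
    P = F @ P @ F.T + Q

def update(z: np.ndarray, H: np.ndarray, R: np.ndarray) -> None:
    """Generic Kalman measurement update."""
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P

# Stereo-vision position fix corrects accumulated IMU error.
H_pos = np.hstack([np.eye(3), np.zeros((3, 3))])
# Sliding/jumping constraint: the observed vy and vz are treated as zero-mean noise,
# i.e. a pseudo-measurement that the lateral and vertical velocities are zero.
H_vel = np.zeros((2, 6)); H_vel[0, 4] = 1.0; H_vel[1, 5] = 1.0

for _ in range(100):                                 # one second of IMU data at 100 Hz
    predict(np.array([0.1, 0.0, 0.0]))               # forward acceleration only
update(np.array([0.05, 0.0, 0.0]), H_pos, R_pos)     # stereo position observation
update(np.zeros(2), H_vel, R_vel)                    # zero-velocity pseudo-measurement on y/z
print(x.round(3))
```

In the actual system this role is played by the full VIO module inside the R3LIVE++-based framework; the sketch only illustrates the division of labor between the process and observation models.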
4. Experiments and Results
4.1. Experimental Platform
4.2. Experimental Environment and Dataset
4.3. Experimental Evaluation Criteria
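The tables in Section 4.4 report RMSE ATE. For reference, a minimal sketch of how this criterion is typically computed is given below; the trajectory alignment step (SE(3)/Sim(3)) is omitted and the arrays are hypothetical, not data from our experiments.

```python
import numpy as np

def rmse_ate(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Root mean square error of the absolute trajectory error.

    Assumes est_xyz and gt_xyz are (N, 3) arrays of time-synchronized,
    already-aligned positions.
    """
    errors = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy example with made-up trajectories.
gt = np.cumsum(np.full((100, 3), 0.01), axis=0)
est = gt + np.random.default_rng(0).normal(scale=0.02, size=gt.shape)
print(f"RMSE ATE = {rmse_ate(est, gt):.3f} m")
```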
4.4. Experimental Results
4.4.1. Experiment on Illumination Changes in Simulated Indoor Rescue Scenarios
4.4.2. Ablation Experiments on TUM-VI Dataset
4.5. Experimental Analysis
4.6. Performance Testing of the Proposed Method Integrated into ORB-SLAM3
5. Conclusions and Discussion
- This paper focuses primarily on the illuminance-based selection and fusion of visual sensors, whose outputs feed into the R3LIVE++ fusion framework as part of the vision SLAM module. This aspect requires further in-depth research and optimization.
- This method has been tested and validated only on ground-based composite robots. However, rescue operations are often three-dimensional, and the system’s infrared camera supports 3D depth detection. Future applications in three-dimensional rescue, automated factories, and other areas will require integrating unmanned aerial vehicles with ground robots to achieve 3D mapping, localization, and navigation.
- Continued in-depth research in environmental modeling and cognition is necessary. Traditional environmental modeling methods are insufficient for robots’ autonomous navigation in unknown and complex environments, necessitating advanced approaches for a deeper understanding of the environment. Future studies should consider a top-down, model-based approach, employing techniques such as Conditional Random Fields and Markov Random Fields to articulate scene positional data, scale information, inter-object relationships, and the probabilities of specific objects’ presence within the scene. This strategy enables rapid scene evaluation based on limited information, mimicking human-like perception abilities.
- Research on the autonomous learning of behaviors should also be expanded. Machine learning techniques such as reinforcement learning, combined with terrain understanding at the feature level, are advantageous when designing behavior controllers; such online learning enhances the controllers’ adaptability and flexibility in unfamiliar environments.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
- The following abbreviations are used in this manuscript:
IIVL-LM | IMU, Infrared, Vision, and LiDAR Fusion for Localization and Mapping |
IMU | Inertial Measurement Unit |
RGB | Red, Green and Blue |
VIO | Visual-Inertial Odometry |
R3LIVE++ | Robust, Real-time, Radiance Reconstruction with LiDAR-Inertial-Visual State Estimation |
RMSE ATE | Root Mean Square Error of Absolute Trajectory Error |
TUM-VI | Technical University of Munich Visual-Inertial Dataset |
SLAM | Simultaneous Localization and Mapping |
LOAM | Lidar Odometry and Mapping in Real-time |
LeGO-LOAM | Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain |
LIO-SAM | Lidar Inertial Odometry via Smoothing and Mapping |
LINS | A Lidar-Inertial State Estimator for Robust and Efficient Navigation |
DOF | Degrees of Freedom |
LIO | LiDAR-Inertial Odometry |
PTAM | Parallel Tracking and Mapping |
MonoSLAM | Monocular Simultaneous Localization and Mapping |
ORB-SLAM | Oriented Fast and Rotated Brief Simultaneous Localization and Mapping |
RGB-D | Red, Green, Blue, and Depth |
VINS-MONO | Visual-Inertial System Monocular |
LSD-SLAM | Large-Scale Direct Monocular Simultaneous Localization and Mapping |
DSO | Direct Sparse Odometry |
CPU | Central Processing Unit |
SVO | Semi-Direct Visual Odometry |
LIC | LiDAR-Inertial-Camera |
MSCKF | Multi-State Constraint Kalman Filter |
NIKFF | Non-linear Insert Keyframe Feature Fusion |
LMSE | Least Mean Square Error |
MAP | Maximum A Posteriori |
FoV | Field of View |
NNKF | Need for a New Key Frame |
AGV | Automated Guided Vehicle |
ROS | Robot Operating System |
API | Application Programming Interface |
GPU | Graphics Processing Unit |
RAM | Random Access Memory |
References
- Aoyama, Y.; Saravanos, A.D.; Theodorou, E.A. Receding horizon differential dynamic programming under parametric uncertainty. In Proceedings of the 2021 60th IEEE Conference on Decision and Control (CDC), Austin, TX, USA, 14–17 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 3761–3767. [Google Scholar]
- Shi, Y.; Zhang, W.; Yao, Z.; Li, M.; Liang, Z.; Cao, Z.; Zhang, H.; Huang, Q. Design of a hybrid indoor location system based on multi-sensor fusion for robot navigation. Sensors 2018, 18, 3581. [Google Scholar] [CrossRef] [PubMed]
- Shen, S.; Mulgaonkar, Y.; Michael, N.; Kumar, V. Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 4974–4981. [Google Scholar]
- Chen, L.; Li, G.; Xie, W.; Tan, J.; Li, Y.; Pu, J.; Chen, L.; Gan, D.; Shi, W. A Survey of Computer Vision Detection, Visual SLAM Algorithms, and Their Applications in Energy-Efficient Autonomous Systems. Energies 2024, 17, 5177. [Google Scholar] [CrossRef]
- Zhang, X.; Zhou, M.; Liu, H.; Hussain, A. A cognitively inspired system architecture for the Mengshi cognitive vehicle. Cogn. Comput. 2020, 12, 140–149. [Google Scholar] [CrossRef]
- Chen, C.; Zhu, H.; Li, M.; You, S. A Review of Visual-Inertial Simultaneous Localization and Mapping from Filtering-Based and Optimization-Based Perspectives. Robotics 2018, 7, 45. [Google Scholar] [CrossRef]
- Le Gentil, C.; Vidal-Calleja, T.; Huang, S. In2lama: Inertial lidar localisation and map. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6388–6394. [Google Scholar]
- Li, R.; Wang, S.; Gu, D. Ongoing evolution of visual SLAM from geometry to deep learning: Challenges and opportunities. Cogn. Comput. 2018, 10, 875–889. [Google Scholar] [CrossRef]
- Lin, J.; Zhang, F. R3LIVE++: A Robust, Real-time, Radiance reconstruction package with a tightly-coupled LiDAR-Inertial-Visual state Estimator. arXiv 2022, arXiv:2209.03666. [Google Scholar] [CrossRef]
- Schubert, D.; Goll, T.; Demmel, N.; Usenko, V.; Stückler, J.; Cremers, D. The TUM VI benchmark for evaluating visual-inertial odometry. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1680–1687. [Google Scholar]
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. Robot. Sci. Syst. 2014, 2, 1–9. [Google Scholar]
- Shan, T.; Englot, B. LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4758–4765. [Google Scholar]
- Lin, J.; Zhang, F. A fast, complete, point cloud based loop closure for LiDAR odometry and mapping. arXiv 2019, arXiv:1909.11811. [Google Scholar]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; IEEE: Piscataway, NJ, USA, 2020; pp. 5135–5142. [Google Scholar]
- Qin, C.; Ye, H.; Pranata, C.E.; Han, J.; Zhang, S.; Liu, M. Lins: A lidar-inertial state estimator for robust and efficient navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 8899–8906. [Google Scholar]
- Xu, W.; Zhang, F. Fast-lio: A fast, robust lidar-inertial odometry package by tightly-coupled iterated kalman filter. IEEE Robot. Autom. Lett. 2021, 6, 3317–3324. [Google Scholar] [CrossRef]
- Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. Fast-lio2: Fast direct lidar-inertial odometry. IEEE Trans. Robot. 2022, 38, 2053–2073. [Google Scholar] [CrossRef]
- Engel, J.; Koltun, V.; Cremers, D. Direct sparse odometry. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 611–625. [Google Scholar] [CrossRef] [PubMed]
- Davison, A.J.; Reid, I.D.; Molton, N.D.; Stasse, O. MonoSLAM: Real-time single camera SLAM. IEEE Trans. Pattern Anal. Mach. Intell. 2007, 29, 1052–1067. [Google Scholar] [CrossRef] [PubMed]
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; IEEE: Piscataway, NJ, USA, 2007; pp. 225–234. [Google Scholar]
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardos, J.D. Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras. IEEE Trans. Robot. 2017, 33, 1255–1262. [Google Scholar] [CrossRef]
- Qin, T.; Li, P.; Shen, S. Vins-mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020. [Google Scholar] [CrossRef]
- Campos, C.; Elvira, R.; Rodriguez, J.J.G.; Montiel, J.M.M.; Tardos, J.D. Orb-slam3: An accurate open-source library for visual, visual–inertial, and multimap slam. IEEE Trans. Robot. 2021, 37, 1874–1890. [Google Scholar] [CrossRef]
- Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the IJCAI’81: 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981; Volume 2, pp. 674–679. [Google Scholar]
- Engel, J.; Schöps, T.; Cremers, D. LSD-SLAM: Large-scale direct monocular SLAM. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Cham, Switzerland, 2014; pp. 834–849. [Google Scholar]
- Forster, C.; Zhang, Z.; Gassner, M.; Werlberger, M.; Scaramuzza, D. SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Trans. Robot. 2016, 33, 249–265. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. Laser–visual–inertial odometry and mapping with high robustness and low drift. J. Field Robot. 2018, 35, 1242–1264. [Google Scholar] [CrossRef]
- Shao, W.; Vijayarangan, S.; Li, C.; Kantor, G. Stereo visual inertial lidar simultaneous localization and mapping. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 370–377. [Google Scholar]
- Zuo, X.; Geneva, P.; Lee, W.; Liu, Y.; Huang, G. Lic-fusion: Lidar-inertial-camera odometry. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 5848–5854. [Google Scholar]
- An, Y.; Shi, J.; Gu, D.; Liu, Q. Visual-LiDAR SLAM based on unsupervised multi-channel deep neural networks. Cogn. Comput. 2022, 14, 1496–1508. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Ratti, C.; Rus, D. Lvi-sam: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 5692–5698. [Google Scholar]
- Agarwal, S.; Mierle, K. Ceres Solver: Tutorial & Reference; Google Inc.: Mountain View, CA, USA, 2012; Volume 2, p. 8. [Google Scholar]
- Ben-Shabat, Y.; Gould, S. Deepfit: 3d surface fitting via neural network weighted least squares. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part I 16. Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 20–34. [Google Scholar]
- Lin, J.; Zheng, C.; Xu, W.; Zhang, F. R2LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping. IEEE Robot. Autom. Lett. 2021, 6, 7469–7476. [Google Scholar] [CrossRef]
- Koretsky, G.M.; Nicoll, J.F.; Taylor, M.S. Tutorial on Electro-Optical/Infrared (EO/IR) Theory and Systems; Institute for Defense Analyses: Alexandria, VA, USA, 2022. [Google Scholar]
- Singh, H.; Fatima, H.; Sharma, S.; Arora, D. A novel approach for IR target localization based on IR and visible image fusion. In Proceedings of the 2017 4th International Conference on Signal Processing, Computing and Control (ISPCC), Solan, India, 21–23 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 235–240. [Google Scholar]
- He, M.; Wu, Q.; Ngan, K.N.; Jiang, F.; Meng, F.; Xu, L. Misaligned RGB-Infrared Object Detection via Adaptive Dual-Discrepancy Calibration. Remote Sens. 2023, 15, 4887. [Google Scholar] [CrossRef]
- Zhu, R.; Yu, D.; Ji, S.; Lu, M. Matching RGB and infrared remote sensing images with densely-connected convolutional neural networks. Remote Sens. 2019, 11, 2836. [Google Scholar] [CrossRef]
- Chen, L.; Li, G.; Zhao, K.; Zhang, G.; Zhu, X. A Perceptually Adaptive Long-Term Tracking Method for the Complete Occlusion and Disappearance of a Target. Cogn. Comput. 2023, 15, 2120–2131. [Google Scholar] [CrossRef]
- Ge, Z.; Han, Z.; Liu, Y.; Wang, X.; Zhou, Z.; Yang, F.; Li, Y.; Li, Y.; Chen, L.; Li, W.; et al. Midinfrared up-conversion imaging under different illumination conditions. Phys. Rev. Appl. 2023, 20, 054060. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. Speeded-up robust features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Kou, J.; Li, Y.; Wang, X. Some modifications of Newton’s method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152. [Google Scholar] [CrossRef]
- Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
- Mur-Artal, R.; Tardos, J.D. Visual-inertial monocular SLAM with map reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803. [Google Scholar] [CrossRef]
No. | Sensor | Item | Parameter Value
---|---|---|---
1 | LiDAR | Wavelength | 905 nm
 | | FOV | Horizontal 360°, vertical −7°~52°
 | | Point cloud output | 200,000 points/second, 10 Hz
 | | Point cloud frame rate | 10 Hz
 | | Near blind zone | 0.1 m
 | | Measurement distance and accuracy | 0.06 to 10 m (max. 30 m, 270°), ±40 mm
 | | Data synchronization method | IEEE 1588-2008 (PTPv2), GPS
2 | RGB Camera | Resolution | 752 × 480
 | | Viewing angles and applicable distance | D: 140°, H: 120°, V: 75°; 0.8–10 m
 | | Maximum frame rate | 60 FPS
3 | IMU | Frequency | 100~500 Hz
4 | Infrared Camera | Effective IR distance | 3 m
 | | Frame rate | 60 FPS
Time Frame | Average Illumination (lux) | Distance (m) | ORB-SLAM | VINS-Mono | R3LIVE++ | DSO | SVO | IIVL-LM (Ours)
---|---|---|---|---|---|---|---|---
0~4 a.m. | 22 | 910 | 0.044 | 0.042 | 0.037 | 0.028② | 0.033 | 0.018①
4~8 a.m. | 1211 | 800 | 0.036 | 0.040 | 0.016① | 0.021 | 0.018② | 0.016①
8~12 a.m. | 17,490 | 657 | 0.019 | 0.015 | 0.011① | 0.020 | 0.018 | 0.012②
12~16 p.m. | 29,782 | 745 | 0.026 | 0.021① | 0.023② | 0.026 | 0.032 | 0.021①
16~20 p.m. | 20,034 | 800 | 0.031 | 0.027① | 0.030② | 0.041 | 0.039 | 0.032
20~24 p.m. | 7980 | 880 | 0.042 | 0.044 | 0.038 | 0.036② | 0.037 | 0.020①
Avg RMSE ATE | | | 0.033 | 0.032 | 0.026② | 0.029 | 0.030 | 0.020①
Compare with the population average (0.028) | | | −18% | −14% | +7%② | −4% | −7% | +40%①
Seq. | ORB-SLAM | VINS-Mono | R3LIVE++ | DSO | SVO | IIVL-LM (Ours)
---|---|---|---|---|---|---
Room 1 | 0.057 | 0.040 | 0.028① | 0.032② | 0.037 | 0.033
Room 2 | 0.051 | 0.027① | 0.037 | 0.066 | 0.029② | 0.033
Room 3 | 0.027 | 0.017① | 0.021② | 0.023 | 0.038 | 0.026
Room 4 | 0.052 | 0.058 | 0.043 | 0.033② | 0.021① | 0.033②
Room 5 | 0.030 | 0.026① | 0.051 | 0.027② | 0.055 | 0.037
Room 6 | 0.031① | 0.035 | 0.032② | 0.036 | 0.037 | 0.040
Avg RMSE ATE | 0.041 | 0.033① | 0.035 | 0.036 | 0.036 | 0.034②
Compare with the population average (0.036) | −12% | +8%① | +3% | 0% | 0% | +6%②
Time Frame | Average Illumination (lux) | Distance (m) | ORB-SLAM | VINS-Mono | DSO | SVO | ORB-SLAM3 | ORB-SLAM3 (with Our VIO Module)
---|---|---|---|---|---|---|---|---
0~4 a.m. | 184 | 460 | 0.021 | 0.028 | 0.030 | 0.021 | 0.017② | 0.012①
4~8 a.m. | 1532 | 600 | 0.029 | 0.023② | 0.037 | 0.033 | 0.024 | 0.017①
8~12 a.m. | 19,330 | 400 | 0.013 | 0.008① | 0.011 | 0.012 | 0.009② | 0.010
12~16 p.m. | 30,414 | 500 | 0.019 | 0.020 | 0.018 | 0.014① | 0.016② | 0.017
16~20 p.m. | 22,615 | 480 | 0.014 | 0.010① | 0.018 | 0.011② | 0.012 | 0.012
20~24 p.m. | 7110 | 510 | 0.028 | 0.025 | 0.022② | 0.030 | 0.039 | 0.017①
Avg RMSE ATE | | | 0.021 | 0.019② | 0.023 | 0.020 | 0.020 | 0.014①
Compare with the population average (0.020) | | | −5% | +5%② | −15% | 0% | 0% | +30%①