PLM-SLAM: Enhanced Visual SLAM for Mobile Robots in Indoor Dynamic Scenes Leveraging Point-Line Features and Manhattan World Model
Abstract
1. Introduction
- A merging and optimization strategy for LSD line features is proposed, and LSD line-feature descriptors are computed by handling point and line features separately, which improves the accuracy of feature matching while preserving real-time performance.
- Localization accuracy in indoor environments is improved by weighting the reprojection errors of point and line features and optimizing the robot's pose jointly with the Manhattan world model.
- A dynamic point-line feature rejection method combining LK optical flow, geometric constraints, and image depth is constructed to effectively suppress the influence of dynamic objects on the system.
- After obtaining loop-closure candidate keyframes with the Bag-of-Words (BOW) model, geometric consistency is verified by computing the similarity of line features, improving the accuracy of loop-closure detection.
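The third contribution combines LK optical flow, epipolar geometry, and image depth to flag dynamic features. As a minimal illustration of that idea (not the paper's implementation; function names and thresholds here are hypothetical), given correspondences tracked e.g. with `cv2.calcOpticalFlowPyrLK`, a feature can be flagged as dynamic when it violates the epipolar constraint or its depth changes inconsistently between frames:

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """Point-to-epipolar-line distance for each correspondence.
    pts1, pts2: (N, 2) pixel coordinates in the previous/current frame."""
    ones = np.ones((len(pts1), 1))
    p1 = np.hstack([pts1, ones])      # homogeneous points, frame t-1
    p2 = np.hstack([pts2, ones])      # homogeneous points, frame t
    lines = p1 @ F.T                  # epipolar lines l' = F @ p1 in frame t
    num = np.abs(np.sum(lines * p2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / np.maximum(den, 1e-12)

def reject_dynamic(F, pts1, pts2, depth1, depth2,
                   epi_thresh=1.0, depth_tol=0.05):
    """Flag features as dynamic if their epipolar residual exceeds
    epi_thresh pixels, or their depth changes by more than depth_tol
    (relative) between frames."""
    epi = epipolar_residuals(F, pts1, pts2)
    rel_depth_change = np.abs(depth2 - depth1) / np.maximum(depth1, 1e-6)
    return (epi > epi_thresh) | (rel_depth_change > depth_tol)
```

A static feature satisfies p2ᵀ F p1 ≈ 0 up to noise, so thresholding the distance to the epipolar line separates most moving points; the depth check catches motion along the epipolar line that geometry alone cannot.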
2. Related Work
2.1. Point-Line Feature-Based SLAM
2.2. Dynamic SLAM
3. Pipeline
3.1. Line Feature Extraction and Optimization
3.2. Manhattan Axis Calculation
3.3. High-Precision Matching for Line Features
Algorithm 1: High-precision matching of line features

```
Require: line feature set KeyL1 and its descriptors mLdes1;
         line feature set KeyL2 and its descriptors mLdes2;
         fundamental matrix F
Ensure:  refined matching list of line features
1: …
2: for … to N do
3:    …
4:    …
5:    if … then
6:       …
7:    end if
8: end for
```
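A hedged sketch of the descriptor-matching stage that Algorithm 1 refines (the function names and the ratio-test threshold are illustrative, not the authors' code): Hamming-distance matching of binary line descriptors such as LBD, with a Lowe-style ratio test and a mutual-consistency check. The fundamental-matrix check in Algorithm 1 would then further prune this candidate list:

```python
import numpy as np

def hamming(d1, d2):
    """Pairwise Hamming distances between two sets of binary descriptors,
    given as uint8 arrays of shape (N, B) and (M, B)."""
    x = np.bitwise_xor(d1[:, None, :], d2[None, :, :])  # (N, M, B)
    return np.unpackbits(x, axis=2).sum(axis=2)          # popcount per pair

def match_lines(des1, des2, ratio=0.8):
    """Mutual nearest-neighbour matching with a Lowe-style ratio test."""
    dist = hamming(des1, des2)
    matches = []
    for i in range(dist.shape[0]):
        order = np.argsort(dist[i])
        best = int(order[0])
        second = int(order[1]) if dist.shape[1] > 1 else best
        # keep only matches that are clearly better than the runner-up
        # and that are also the best match in the reverse direction
        if dist[i, best] < ratio * dist[i, second] and \
                int(np.argmin(dist[:, best])) == i:
            matches.append((i, best))
    return matches
```

The mutual check discards one-sided matches, which is the usual cheap way to raise precision before any geometric verification.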
3.4. Dynamic Feature Detection and Elimination
3.5. Pose Estimation
3.6. Loop Closure Detection
Algorithm 2: Loop closure detection incorporating point-line features

```
Require: queue of keyframes to be processed …; ORB vocabulary …;
         keyframe ID of the last loop-closure detection …
Ensure:  list of candidate keyframes for loop closure
 1: …
 2: if … then
 3:    pKF1 ⇐ GetCovKF(…)
 4:    minScore ⇐ …(…, pKF1, …)
 5:    …
 6:    if … is empty then
 7:       return 0
 8:    end if
 9:    pKF3 ⇐ FindSharKF(…, …)
10:    if … is empty then
11:       return 0
12:    end if
13:    pKF4 ⇐ …(…, pKF3, minScore, …)
14:    if … is empty then
15:       return 0
16:    end if
17:    pKF5 ⇐ SelectBestCovK(…, pKF3, minScore)
18:    if … is empty then
19:       return 0
20:    end if
21:    …
22: end if
```
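As an illustrative sketch of the geometric-consistency idea behind line-feature verification (the similarity measure here is a simplification; the paper's actual computation may differ, and all names are hypothetical), a BoW loop candidate can be accepted only if a sufficient fraction of its matched line pairs agree in direction and length:

```python
import numpy as np

def line_similarity(l1, l2, ang_tol=np.deg2rad(10.0), len_tol=0.2):
    """Rough similarity of two 2-D segments given as (x1, y1, x2, y2):
    directions and lengths must approximately agree."""
    v1, v2 = l1[2:] - l1[:2], l2[2:] - l2[:2]
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    ang = np.arccos(np.clip(abs(float(v1 @ v2)) / (n1 * n2), 0.0, 1.0))
    return bool(ang < ang_tol and abs(n1 - n2) / max(n1, n2) < len_tol)

def verify_loop(matched_pairs, min_ratio=0.6):
    """Accept a BoW candidate keyframe only if enough of its matched
    line pairs pass the similarity test."""
    if not matched_pairs:
        return False
    ok = sum(line_similarity(a, b) for a, b in matched_pairs)
    return ok / len(matched_pairs) >= min_ratio
```

This kind of cheap structural check rejects appearance-only false positives that a bag-of-words score alone would let through.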
3.7. Map Building
4. Experiments and Results
4.1. Simulation Studies
4.2. Experimental Testing
5. Conclusions and Future Work
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
---|---
PLM-SLAM | The proposed algorithm
vSLAM | Visual Simultaneous Localization and Mapping
LSD | Line Segment Detector
LK | Lucas–Kanade
FOV | Field of View
BOW | Bag of Words
LBD | Line Binary Descriptor
IMU | Inertial Measurement Unit
BA | Bundle Adjustment
PCL | Point Cloud Library
APE | Absolute Pose Error
ATE | Absolute Trajectory Error
RPE | Relative Pose Error
RMSE | Root Mean Square Error
Dataset | Sequence | ORB-SLAM2 | MSC-VO | DS-SLAM | MR-DVO | PL-SLAM | Ours
---|---|---|---|---|---|---|---
Static | fr3/long_office | 0.0149 | 0.0444 | - | - | 1.2005 | 0.0589
Low-dynamic | fr3/sitting_static | 0.0085 | 0.0082 | 0.0065 | - | - | 0.0133
 | fr3/sitting_xyz | 0.0088 | 0.0145 | - | 0.0482 | 0.3253 | 0.0686
 | fr3/sitting_rpy | 0.216 | 0.0182 | 0.0187 | - | - | 0.0206
 | fr3/sitting_half | 0.0312 | 0.0582 | 0.0148 | 0.0470 | 0.3766 | 0.0254
High-dynamic | fr3/walking_static | 0.4173 | 0.1488 | 0.0081 | 0.0656 | - | 0.0125
 | fr3/walking_xyz | 0.8254 | 0.4870 | 0.0247 | 0.0932 | 0.2445 | 0.0628
 | fr3/walking_rpy | 0.9269 | 0.9362 | 0.4442 | 0.1333 | 0.1684 | 0.2925
 | fr3/walking_half | 0.5459 | 0.5263 | 0.0303 | 0.1252 | 0.4299 | 0.0650
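For context, trajectory-error RMSE values such as those in the table above are conventionally obtained by first aligning the estimated trajectory to the ground truth with a closed-form least-squares rigid (Umeyama) fit and then taking the RMSE of the residual translations. A self-contained numpy sketch of that procedure (not the evaluation code used in the paper):

```python
import numpy as np

def umeyama_align(est, gt):
    """Closed-form least-squares SE(3) alignment of est onto gt
    (both (N, 3) arrays of positions); no scale estimation."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    U, _, Vt = np.linalg.svd(G.T @ E / len(est))   # cross-covariance SVD
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:   # guard against reflections
        S[2, 2] = -1.0
    R = U @ S @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """RMSE of translational residuals after rigid alignment."""
    R, t = umeyama_align(est, gt)
    err = gt - (est @ R.T + t)
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

Tools such as evo wrap essentially this computation for TUM-format trajectories, with options for scale correction and pose-graph plotting.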
RMSE (m/s)

Dataset | Sequence | ORB-SLAM2 | MSC-VO | DS-SLAM | MR-DVO | DVO | PL-SLAM | SLAM | Ours
---|---|---|---|---|---|---|---|---|---
Static | fr3/long_office | 0.0081 | 0.0095 | - | - | 0.0231 | 0.1489 | 0.0263 | 0.0129
Low-dynamic | fr3/sitting_static | 0.0089 | 0.0098 | 0.0078 | - | 0.0157 | - | 0.0174 | 0.0130
 | fr3/sitting_xyz | 0.0110 | 0.0170 | - | 0.0330 | 0.0453 | 0.1120 | 0.0301 | 0.0351
 | fr3/sitting_rpy | 0.0247 | 0.0216 | - | - | 0.1735 | - | 0.0836 | 0.0255
 | fr3/sitting_half | 0.0271 | 0.0276 | - | 0.0458 | 0.1005 | 0.1366 | 0.0525 | 0.0240
High-dynamic | fr3/walking_static | 0.2508 | 0.0917 | 0.0102 | 0.0842 | 0.3818 | - | 0.0234 | 0.0128
 | fr3/walking_xyz | 0.4064 | 0.2630 | 0.0333 | 0.1214 | 0.4360 | 0.2125 | 0.2433 | 0.0800
 | fr3/walking_rpy | 0.4017 | 0.3058 | 0.1503 | 0.1751 | 0.4038 | 0.1651 | 0.1560 | 0.1011
 | fr3/walking_half | 0.3338 | 0.2185 | 0.0297 | 0.1672 | 0.2628 | 0.1939 | 0.1351 | 0.0215
RMSE (deg/s)

Dataset | Sequence | ORB-SLAM2 | MSC-VO | DS-SLAM | MR-DVO | DVO | PL-SLAM | SLAM | Ours
---|---|---|---|---|---|---|---|---|---
Static | fr3/long_office | 0.4638 | 0.4418 | - | - | 1.5689 | 1.0345 | 1.5173 | 0.4789
Low-dynamic | fr3/sitting_static | 0.2814 | 0.2860 | 0.2735 | - | 0.6084 | - | 0.7842 | 0.3501
 | fr3/sitting_xyz | 0.4759 | 0.5181 | - | 0.9828 | 1.4980 | 0.6182 | 1.0418 | 0.7364
 | fr3/sitting_rpy | 0.7970 | 0.6756 | - | - | 6.0164 | - | 5.3861 | 0.7191
 | fr3/sitting_half | 0.7363 | 0.7636 | - | 2.3748 | 4.6490 | 0.4939 | 2.8663 | 0.6848
High-dynamic | fr3/walking_static | 4.4196 | 1.8105 | 0.2690 | 2.0487 | 6.3502 | - | 1.8547 | 0.3098
 | fr3/walking_xyz | 7.6186 | 4.8967 | 0.8266 | 3.2346 | 7.6669 | 1.6678 | 6.9166 | 1.6627
 | fr3/walking_rpy | 7.8268 | 6.1438 | 3.0042 | 4.3755 | 7.0662 | 1.7541 | 5.5809 | 2.0076
 | fr3/walking_half | 6.8215 | 4.4585 | 0.8142 | 5.0108 | 5.2179 | 0.6583 | 4.6412 | 1.3622
Thread | Operation | ORB-SLAM2 | MSC-VO | Ours
---|---|---|---|---
Local Mapping | KeyFrame Insertion | 54.19 | 23.38 | 37.76
 | Map Features Culling | 0.01 | 0.54 | 0.45
 | Map Features Creation | 0.61 | 2.91 | 2.81
 | Local BA | 301.40 | 164.14 | 53.51
 | KeyFrame Culling | 44.685 | 19.96 | 12.66
Tracking | Features Extraction | 11.84 | 41.68 | 43.68
 | Initial Pose Estimation | 9.47 | 76.89 | 20.88
 | Track Local Map | 2.17 | 6.72 | 13.23
Parameter | Properties |
---|---|
Size | 271 mm × 189 mm × 151 mm |
Weight | 1.8 kg |
Maximum speed | 1.2 m/s |
Load | 3 kg |
Main control board | STM32F407VET6 |
Algorithm | Number of Keyframes | Mean Tracking Time | Number of Closed-Loop | | | | Tracking Failure | Precision | Recall
---|---|---|---|---|---|---|---|---|---
ORB-SLAM2 | 481 | 0.027 | 3 | 1 | 0 | 1 | 1 | 100% | 33.3%
MSC-VO | 860 | 0.158 | 3 | 0 | 0 | 1 | 2 | 0 | 0
Ours | 209 | 0.092 | 3 | 3 | 0 | 0 | 0 | 100% | 100%
Share and Cite
Liu, J.; Luo, J. PLM-SLAM: Enhanced Visual SLAM for Mobile Robots in Indoor Dynamic Scenes Leveraging Point-Line Features and Manhattan World Model. Electronics 2024, 13, 4592. https://doi.org/10.3390/electronics13234592