UnVELO: Unsupervised Vision-Enhanced LiDAR Odometry with Online Correction
Abstract
1. Introduction
- We adopted a LiDAR-dominant fusion scheme to build an unsupervised visual–LiDAR odometry. In contrast to previous vision-dominant VLOs [27,28,29], which predict both the pose and a dense depth map, our method predicts only the pose, thereby avoiding the noise introduced by depth prediction.
- We imposed a geometric consistency loss on locally planar regions and a visual consistency loss on cluttered regions, so that the complementary characteristics of the visual and LiDAR modalities are fully exploited.
- We designed an online pose-correction module that refines the predicted pose at test time. Benefiting from the LiDAR-dominant scheme, our online pose correction is more effective than its vision-dominant counterparts (a minimal sketch of the losses and the correction loop follows this list).
- The proposed method outperformed previous two-frame-based learning methods. Moreover, although it relies on two-frame constraints only, it achieved performance comparable to hybrid methods, which apply a global optimization over multiple or all frames.
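To make the above concrete, below is a minimal PyTorch-style sketch of the two loss terms and the test-time correction loop. It assumes precomputed vertex, normal, and vertex color maps (Section 3.1) and a binary planar-region mask; the helper names (`se3_exp`, `spherical_uv`, `two_frame_loss`, `online_correction`), the loss weight `lam`, and the association-by-reprojection details are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn.functional as F

def se3_exp(xi):
    """Exponential map: a 6-vector (rho, phi) -> 4x4 SE(3) matrix (Rodrigues)."""
    rho, phi = xi[:3], xi[3:]
    theta = phi.norm().clamp_min(1e-8)
    K = torch.zeros(3, 3, dtype=xi.dtype, device=xi.device)
    K[0, 1], K[0, 2] = -phi[2], phi[1]
    K[1, 0], K[1, 2] = phi[2], -phi[0]
    K[2, 0], K[2, 1] = -phi[1], phi[0]
    K = K / theta                                  # normalized skew matrix
    I = torch.eye(3, dtype=xi.dtype, device=xi.device)
    R = I + torch.sin(theta) * K + (1.0 - torch.cos(theta)) * (K @ K)
    V = I + ((1.0 - torch.cos(theta)) / theta) * K \
          + ((theta - torch.sin(theta)) / theta) * (K @ K)
    T = torch.eye(4, dtype=xi.dtype, device=xi.device)
    T[:3, :3], T[:3, 3] = R, V @ rho
    return T

def spherical_uv(pts, H, W, fov_up, fov_down):
    """Continuous pixel coordinates of 3-D points under spherical projection.
    fov_up / fov_down are in radians."""
    r = pts.norm(dim=-1).clamp_min(1e-8)
    yaw = torch.atan2(pts[..., 1], pts[..., 0])
    pitch = torch.asin((pts[..., 2] / r).clamp(-1.0, 1.0))
    u = (1.0 - (yaw / math.pi + 1.0) / 2.0) * W
    v = (fov_up - pitch) / (fov_up - fov_down) * H
    return u, v

def two_frame_loss(T, V1, V2, N2, C1, C2, planar, fov_up, fov_down, lam=1.0):
    """Point-to-plane loss on planar pixels + photometric loss on the rest."""
    H, W, _ = V1.shape
    P = V1.reshape(-1, 3)
    valid = P.norm(dim=1) > 0                      # pixels with a LiDAR return
    Pw = P @ T[:3, :3].T + T[:3, 3]                # warp source vertices by the pose
    u, v = spherical_uv(Pw, H, W, fov_up, fov_down)
    grid = torch.stack([2 * u / W - 1, 2 * v / H - 1], dim=-1).view(1, 1, -1, 2)
    def sample(m):                                 # bilinear lookup in a target map
        return F.grid_sample(m.permute(2, 0, 1)[None].float(), grid,
                             align_corners=False)[0, :, 0].T
    V2s, N2s, C2s = sample(V2), sample(N2), sample(C2)
    geo = (N2s * (Pw - V2s)).sum(dim=1).abs()          # point-to-plane residual
    pho = (C1.reshape(-1, 3) - C2s).abs().mean(dim=1)  # color residual
    pm = planar.reshape(-1) & valid
    return geo[pm].mean() + lam * pho[~planar.reshape(-1) & valid].mean()

def online_correction(T0, data, iters=40, lr=1e-3):
    """Test-time refinement: optimize an se(3) increment around the network pose.
    `data` = (V1, V2, N2, C1, C2, planar, fov_up, fov_down)."""
    xi = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([xi], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = two_frame_loss(T0 @ se3_exp(xi), *data)
        loss.backward()
        opt.step()
    return (T0 @ se3_exp(xi.detach())).detach()
```

The design point this sketch illustrates mirrors the LiDAR-dominant scheme: only the 6-DoF pose is a free variable, so the same two-frame losses used for training can be reused unchanged for the online correction at test time.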
2. Related Work
2.1. Visual and LiDAR Odometry
2.1.1. Visual Odometry
2.1.2. LiDAR Odometry
2.1.3. Visual–LiDAR Odometry
2.2. Visual–LiDAR Fusion
2.3. Test Time Optimization
3. Materials and Methods
3.1. Data Pre-Processing
3.1.1. Vertex Map
3.1.2. Normal Map
3.1.3. Vertex Color Map
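As a hedged illustration of this pre-processing stage, the sketch below builds the three input maps from a LiDAR scan and an RGB image via spherical projection. The image size (`H=64`, `W=720`), vertical field of view, and function names are illustrative KITTI-like defaults, not necessarily the paper's exact settings.

```python
import numpy as np

def vertex_map(points, H=64, W=720, fov_up=3.0, fov_down=-25.0):
    """Spherically project an (N, 3) LiDAR scan into an (H, W, 3) vertex map."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw, pitch = np.arctan2(y, x), np.arcsin(z / np.maximum(r, 1e-8))
    u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * W).astype(int) % W  # azimuth -> column
    v = ((fov_up - pitch) / (fov_up - fov_down) * H).clip(0, H - 1).astype(int)
    vmap = np.zeros((H, W, 3), dtype=np.float32)
    order = np.argsort(-r)            # write nearer points last so they win
    vmap[v[order], u[order]] = points[order]
    return vmap

def normal_map(vmap):
    """Per-pixel normals from the cross product of horizontal/vertical neighbors."""
    dx = np.roll(vmap, -1, axis=1) - vmap
    dy = np.roll(vmap, -1, axis=0) - vmap
    n = np.cross(dx, dy)
    return n / np.maximum(np.linalg.norm(n, axis=2, keepdims=True), 1e-8)

def vertex_color_map(vmap, image, K, T_cam_lidar):
    """Color each valid vertex by projecting it into the camera image."""
    H, W, _ = vmap.shape
    pts = vmap.reshape(-1, 3)
    valid = np.linalg.norm(pts, axis=1) > 0
    cam = (T_cam_lidar[:3, :3] @ pts.T + T_cam_lidar[:3, 3:4]).T  # LiDAR -> camera
    uvz = (K @ cam.T).T
    uv = (uvz[:, :2] / np.maximum(uvz[:, 2:3], 1e-8)).astype(int)
    ih, iw, _ = image.shape
    inside = valid & (uvz[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < iw) \
                   & (uv[:, 1] >= 0) & (uv[:, 1] < ih)
    cmap = np.zeros((H * W, 3), dtype=np.float32)
    cmap[inside] = image[uv[inside, 1], uv[inside, 0]]
    return cmap.reshape(H, W, 3)
```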
3.2. Pose Estimation
3.2.1. Network Architecture
3.2.2. Training Loss
3.3. Online Pose Correction
3.3.1. Formulation
3.3.2. Hard Sample Mining
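The paper's exact mining criterion is not reproduced here; as one plausible sketch, the correction loss can be averaged over only the largest per-pixel residuals (a generic top-k form of hard sample mining), which focuses the optimization on informative pixels and shrinks the per-iteration cost. The `keep` ratio below is an illustrative assumption.

```python
import torch

def hard_sample_loss(residuals: torch.Tensor, keep: float = 0.2) -> torch.Tensor:
    """Average only the top `keep` fraction of non-negative per-pixel residuals."""
    k = max(1, int(keep * residuals.numel()))
    return residuals.topk(k).values.mean()
```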
4. Results
4.1. Experimental Settings
4.1.1. Dataset and Evaluation Metrics
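The tables in this section report the standard KITTI odometry drift metrics, denoted t_rel and r_rel below. A restatement, assuming the usual KITTI protocol, where $\hat{T}_{ij}$ is the estimated relative pose over a subsequence of length $l_{ij}$, $T_{ij}$ the ground truth, and $\mathcal{S}$ the set of evaluated subsequences of lengths 100–800 m:

```latex
t_{\mathrm{rel}}
  = \frac{100}{|\mathcal{S}|}\sum_{(i,j)\in\mathcal{S}}
    \frac{\bigl\|\operatorname{trans}\bigl(\hat{T}_{ij}^{-1} T_{ij}\bigr)\bigr\|_2}{l_{ij}}
  \;[\%],
\qquad
r_{\mathrm{rel}}
  = \frac{1}{|\mathcal{S}|}\sum_{(i,j)\in\mathcal{S}}
    \frac{\angle\bigl(\hat{T}_{ij}^{-1} T_{ij}\bigr)}{l_{ij}}
  \;[^{\circ}/100\,\mathrm{m}]
```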
4.1.2. Implementation Details
4.2. Ablation Studies
4.3. Runtime Analysis
4.4. Comparison to State-of-the-Art
4.4.1. Comparison on KITTI
4.4.2. Comparison on DSEC
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, Y.; Yu, A.W.; Meng, T.; Caine, B.; Ngiam, J.; Peng, D.; Shen, J.; Lu, Y.; Zhou, D.; Le, Q.V.; et al. DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection. In Proceedings of the CVPR, New Orleans, LA, USA, 19–24 June 2022.
- Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. PointPainting: Sequential Fusion for 3D Object Detection. In Proceedings of the CVPR, Seattle, WA, USA, 13–19 June 2020.
- Pang, S.; Morris, D.; Radha, H. Fast-CLOCs: Fast Camera-LiDAR Object Candidates Fusion for 3D Object Detection. In Proceedings of the WACV, Waikoloa, HI, USA, 4–8 January 2022.
- Ma, F.; Karaman, S. Sparse-to-dense: Depth prediction from sparse depth samples and a single image. In Proceedings of the ICRA, Brisbane, QLD, Australia, 21–25 May 2018.
- Hua, J.; Gong, X. A normalized convolutional neural network for guided sparse depth upsampling. In Proceedings of the IJCAI, Stockholm, Sweden, 13–19 July 2018.
- Hu, M.; Wang, S.; Li, B.; Ning, S.; Fan, L.; Gong, X. Penet: Towards precise and efficient image guided depth completion. In Proceedings of the ICRA, Xi’an, China, 30 May–5 June 2021.
- Rishav, R.; Battrawy, R.; Schuster, R.; Wasenmüller, O.; Stricker, D. DeepLiDARFlow: A deep learning architecture for scene flow estimation using monocular camera and sparse LiDAR. In Proceedings of the IROS, Las Vegas, NV, USA, 25–29 October 2020.
- Liu, H.; Lu, T.; Xu, Y.; Liu, J.; Li, W.; Chen, L. CamLiFlow: Bidirectional Camera-LiDAR Fusion for Joint Optical Flow and Scene Flow Estimation. In Proceedings of the CVPR, New Orleans, LA, USA, 19–24 June 2022.
- Wang, R.; Pizer, S.M.; Frahm, J.M. Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth. In Proceedings of the CVPR, Long Beach, CA, USA, 16–20 June 2019.
- Bian, J.; Li, Z.; Wang, N.; Zhan, H.; Shen, C.; Cheng, M.; Reid, I. Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video. In Proceedings of the NIPS, Vancouver, BC, Canada, 8–14 December 2019.
- Xiong, M.; Zhang, Z.; Zhong, W.; Ji, J.; Liu, J.; Xiong, H. Self-supervised Monocular Depth and Visual Odometry Learning with Scale-consistent Geometric Constraints. In Proceedings of the IJCAI, Online, 7–15 January 2021.
- Feng, T.; Gu, D. SGANVO: Unsupervised Deep Visual Odometry and Depth Estimation With Stacked Generative Adversarial Networks. IEEE Robot. Autom. Lett. 2019, 4, 4431–4437.
- Li, X.; Hou, Y.; Wu, Q.; Wang, P.; Li, W. Dvonet: Unsupervised monocular depth estimation and visual odometry. In Proceedings of the VCIP, Sydney, Australia, 1–4 December 2019.
- Li, S.; Wang, X.; Cao, Y.; Xue, F.; Yan, Z.; Zha, H. Self-Supervised Deep Visual Odometry with Online Adaptation. In Proceedings of the CVPR, Seattle, WA, USA, 13–19 June 2020.
- Zhang, J.; Sui, W.; Wang, X.; Meng, W.; Zhu, H.; Zhang, Q. Deep Online Correction for Monocular Visual Odometry. In Proceedings of the ICRA, Xi’an, China, 30 May–5 June 2021.
- Chen, Y.; Schmid, C.; Sminchisescu, C. Self-Supervised Learning with Geometric Constraints in Monocular Video: Connecting Flow, Depth, and Camera. In Proceedings of the ICCV, Seoul, Republic of Korea, 27 October–2 November 2019.
- Li, Q.; Chen, S.; Wang, C.; Li, X.; Wen, C.; Cheng, M.; Li, J. LO-Net: Deep Real-Time Lidar Odometry. In Proceedings of the CVPR, Long Beach, CA, USA, 16–20 June 2019.
- Cho, Y.; Kim, G.; Kim, A. Unsupervised Geometry-Aware Deep LiDAR Odometry. In Proceedings of the ICRA, Paris, France, 31 May–31 August 2020.
- Iwaszczuk, D.; Roth, S. Deeplio: Deep Lidar Inertial Sensor Fusion for Odometry Estimation. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2021, V-1-2021, 47–54.
- Nubert, J.; Khattak, S.; Hutter, M. Self-supervised learning of lidar odometry for robotic applications. In Proceedings of the ICRA, Xi’an, China, 30 May–5 June 2021.
- Wang, G.; Wu, X.; Jiang, S.; Liu, Z.; Wang, H. Efficient 3D Deep LiDAR Odometry. arXiv 2021, arXiv:2111.02135.
- Lu, W.; Zhou, Y.; Wan, G.; Hou, S.; Song, S. L3-net: Towards learning based lidar localization for autonomous driving. In Proceedings of the CVPR, Long Beach, CA, USA, 16–20 June 2019.
- Lu, W.; Wan, G.; Zhou, Y.; Fu, X.; Yuan, P.; Song, S. DeepVCP: An End-to-End Deep Neural Network for Point Cloud Registration. In Proceedings of the ICCV, Seoul, Republic of Korea, 27 October–2 November 2019.
- Xu, Y.; Huang, Z.; Lin, K.Y.; Zhu, X.; Shi, J.; Bao, H.; Zhang, G.; Li, H. SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks. In Proceedings of the CoRL, Online, 16–18 November 2020.
- Wang, G.; Wu, X.; Liu, Z.; Wang, H. Pwclo-net: Deep lidar odometry in 3d point clouds using hierarchical embedding mask optimization. In Proceedings of the CVPR, Virtual, 19–25 June 2021.
- Xu, Y.; Lin, J.; Shi, J.; Zhang, G.; Wang, X.; Li, H. Robust self-supervised lidar odometry via representative structure discovery and 3d inherent error modeling. IEEE Robot. Autom. Lett. 2022, 7, 1651–1658.
- Li, B.; Hu, M.; Wang, S.; Wang, L.; Gong, X. Self-supervised Visual-LiDAR Odometry with Flip Consistency. In Proceedings of the WACV, Waikoloa, HI, USA, 5–9 January 2021.
- Liu, Q.; Zhang, H.; Xu, Y.; Wang, L. Unsupervised Deep Learning-Based RGB-D Visual Odometry. Appl. Sci. 2020, 10, 5426.
- An, Y.; Shi, J.; Gu, D.; Liu, Q. Visual-LiDAR SLAM Based on Unsupervised Multi-channel Deep Neural Networks. Cogn. Comput. 2022, 14, 1496–1508.
- Song, Z.; Lu, J.; Yao, Y.; Zhang, J. Self-Supervised Depth Completion From Direct Visual-LiDAR Odometry in Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 11654–11665.
- Sturm, J.; Engelhard, N.; Endres, F.; Burgard, W.; Cremers, D. A benchmark for the evaluation of RGB-D SLAM systems. In Proceedings of the IROS, Algarve, Portugal, 7–12 October 2012.
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the RSS, Berkeley, CA, USA, 12–16 July 2014.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the ISMAR, Nara, Japan, 13–16 November 2007; pp. 225–234.
- Quist, E.B.; Niedfeldt, P.C.; Beard, R.W. Radar odometry with recursive-RANSAC. IEEE Trans. Aerosp. Electron. Syst. 2016, 52, 1618–1630.
- Zhang, J.; Singh, S. Visual-lidar odometry and mapping: Low-drift, robust, and fast. In Proceedings of the ICRA, Seattle, WA, USA, 26–30 May 2015.
- Cvišić, I.; Marković, I.; Petrović, I. SOFT2: Stereo Visual Odometry for Road Vehicles Based on a Point-to-Epipolar-Line Metric. IEEE Trans. Robot. 2022, 23, 1–16.
- Zhou, T.; Brown, M.; Snavely, N.; Lowe, D.G. Unsupervised learning of depth and ego-motion from video. In Proceedings of the CVPR, Honolulu, HI, USA, 21–26 July 2017.
- Zhang, J.; Su, Q.; Liu, P.; Xu, C.; Chen, Y. Unsupervised learning of monocular depth and ego-motion with space–temporal-centroid loss. Int. J. Mach. Learn. Cybern. 2020, 11, 615–627.
- Yin, X.; Wang, X.; Du, X.; Chen, Q. Scale Recovery for Monocular Visual Odometry Using Depth Estimated with Deep Convolutional Neural Fields. In Proceedings of the ICCV, Venice, Italy, 22–29 October 2017.
- He, M.; Zhu, C.; Huang, Q.; Ren, B.; Liu, J. A review of monocular visual odometry. Vis. Comput. 2020, 36, 1053–1065.
- Li, R.; Wang, S.; Long, Z.; Gu, D. UnDeepVO: Monocular Visual Odometry Through Unsupervised Deep Learning. In Proceedings of the ICRA, Brisbane, QLD, Australia, 21–25 May 2018.
- Yin, Z.; Shi, J. GeoNet: Unsupervised Learning of Dense Depth, Optical Flow and Camera Pose. In Proceedings of the CVPR, Salt Lake City, UT, USA, 18–22 June 2018.
- Yang, N.; Wang, R.; Stuckler, J.; Cremers, D. Deep virtual stereo odometry: Leveraging deep depth prediction for monocular direct sparse odometry. In Proceedings of the ECCV, Munich, Germany, 8–14 September 2018.
- Yang, N.; Stumberg, L.; Wang, R.; Cremers, D. D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry. In Proceedings of the CVPR, Seattle, WA, USA, 13–19 June 2020.
- Li, Z.; Wang, N. Dmlo: Deep matching lidar odometry. In Proceedings of the IROS, Las Vegas, NV, USA, 25–29 October 2020.
- Tu, Y. UnPWC-SVDLO: Multi-SVD on PointPWC for Unsupervised Lidar Odometry. arXiv 2022, arXiv:2205.08150.
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the CVPR, Honolulu, HI, USA, 21–26 July 2017.
- Arun, K.S.; Huang, T.S.; Blostein, S.D. Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 1987, PAMI-9, 698–700.
- Segal, A.; Haehnel, D.; Thrun, S. Generalized-icp. In Proceedings of the RSS, Seattle, WA, USA, 28 June–1 July 2009.
- Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the IROS, Hamburg, Germany, 28 September–3 October 2015.
- Fu, X.; Liu, C.; Zhang, C.; Sun, Z.; Song, Y.; Xu, Q.; Yuan, X. Self-supervised learning of LiDAR odometry based on spherical projection. Int. J. Adv. Robot. Syst. 2022, 19, 17298806221078669.
- Tibebu, H.; De-Silva, V.; Artaud, C.; Pina, R.; Shi, X. Towards Interpretable Camera and LiDAR Data Fusion for Autonomous Ground Vehicles Localisation. Sensors 2022, 22, 8021.
- Wang, C.; Ma, C.; Zhu, M.; Yang, X. Pointaugmenting: Cross-modal augmentation for 3d object detection. In Proceedings of the CVPR, Virtual, 19–25 June 2021.
- Casser, V.; Pirk, S.; Mahjourian, R.; Angelova, A. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos. In Proceedings of the AAAI, Honolulu, HI, USA, 27 January–1 February 2019.
- McCraith, R.; Neumann, L.; Zisserman, A.; Vedaldi, A. Monocular depth estimation with self-supervised instance adaptation. arXiv 2020, arXiv:2004.05821.
- Hong, S.; Kim, S. Deep Matching Prior: Test-Time Optimization for Dense Correspondence. In Proceedings of the ICCV, Montreal, QC, Canada, 10–17 October 2021.
- Zhu, W.; Huang, Y.; Xu, D.; Qian, Z.; Fan, W.; Xie, X. Test-Time Training for Deformable Multi-Scale Image Registration. In Proceedings of the ICRA, Xi’an, China, 30 May–5 June 2021.
- Li, S.; Wu, X.; Cao, Y.; Zha, H. Generalizing to the Open World: Deep Visual Odometry with Online Adaptation. In Proceedings of the CVPR, Virtual, 19–25 June 2021.
- Golub, G.H.; Van Loan, C.F. Matrix Computations; The Johns Hopkins University Press: Baltimore, MD, USA, 1996.
- Kingma, D.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Suh, Y.; Han, B.; Kim, W.; Lee, K.M. Stochastic Class-based Hard Example Mining for Deep Metric Learning. In Proceedings of the CVPR, Long Beach, CA, USA, 16–20 June 2019.
- Chen, K.; Chen, Y.; Han, C.; Sang, N.; Gao, C. Hard sample mining makes person re-identification more efficient and accurate. Neurocomputing 2020, 382, 259–267.
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the CVPR, Providence, RI, USA, 16–21 June 2012.
- Gehrig, M.; Aarents, W.; Gehrig, D.; Scaramuzza, D. Dsec: A stereo event camera dataset for driving scenarios. IEEE Robot. Autom. Lett. 2021, 6, 4947–4954.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the NIPS, Vancouver, BC, Canada, 8–14 December 2019.
- Wagstaff, B.; Peretroukhin, V.; Kelly, J. On the Coupling of Depth and Egomotion Networks for Self-Supervised Structure from Motion. IEEE Robot. Autom. Lett. 2022, 7, 6766–6773.
- Wagstaff, B.; Peretroukhin, V.; Kelly, J. Self-supervised deep pose corrections for robust visual odometry. In Proceedings of the ICRA, Paris, France, 31 May–31 August 2020.
- Tu, Y.; Xie, J. UnDeepLIO: Unsupervised Deep Lidar-Inertial Odometry. In Proceedings of the ACPR, Jeju Island, Republic of Korea, 9–12 November 2021.
Table: Ablation of the fusion scheme on KITTI Seq. 09/10 (VDF: vision-dominant fusion; LDF: LiDAR-dominant fusion).

| Models | Modal | Fusion Scheme | Loss | Seq. 09 t_rel | Seq. 09 r_rel | Seq. 10 t_rel | Seq. 10 r_rel |
|---|---|---|---|---|---|---|---|
| VLO [27] | L-Dep+V | VDF | – | 4.33 | 1.72 | 3.30 | 1.40 |
| UnVELO1 | L-2D | – |  | 4.36 | 1.49 | 4.10 | 2.07 |
| UnVELO2 | L-2D+V | LDF |  | 3.83 | 1.29 | 4.10 | 1.86 |
| UnVELO3 | L-2D+V | LDF |  | 3.52 | 1.12 | 2.66 | 1.71 |
Table: Ablation of online pose correction (OC-k, with k optimization iterations) and hard sample mining (HSM) on KITTI Seq. 09/10, with runtime.

| Models | Iter. | Seq. 09 t_rel | Seq. 09 r_rel | Seq. 10 t_rel | Seq. 10 r_rel | Runtime (ms) |
|---|---|---|---|---|---|---|
| VLO | 0 | 4.33 | 1.72 | 3.30 | 1.40 | 25.1 |
| VLO+OC-10 | 10 | 3.69 | 1.91 | 5.51 | 2.41 | 219.6 |
| VLO+OC-20 | 20 | 2.57 | 1.14 | 2.25 | 0.61 | 411.2 |
| VLO+OC-40 | 40 | 2.58 | 1.02 | 1.40 | 0.66 | 705.3 |
| VLO+OC-40 Opt-Dep | 40 | 2.92 | 1.12 | 1.72 | 0.76 | 742.2 |
| VLO+OC-40 w/o HSM | 40 | 2.58 | 1.02 | 1.41 | 0.66 | 646.8 |
| UnVELO3 | 0 | 3.52 | 1.12 | 2.66 | 1.71 | 20.1 |
| UnVELO3+OC-10 | 10 | 2.10 | 0.94 | 3.26 | 1.18 | 142.1 |
| UnVELO3+OC-20 | 20 | 1.53 | 0.53 | 1.42 | 0.77 | 245.4 |
| UnVELO3+OC-40 | 40 | 0.99 | 0.26 | 0.71 | 0.31 | 458.6 |
| UnVELO3+OC-40 w/o HSM | 40 | 0.96 | 0.32 | 0.88 | 0.30 | 412.7 |
| UnVELO3+OC-40-Inter2 | 40 | 1.04 | 0.27 | 0.69 | 0.24 | – |
Table: Comparison with unsupervised two-frame, hybrid, and test-time-optimization (T-Opt.) methods on KITTI Seq. 09/10.

| Type | Method | Modal | Seq. 09 t_rel | Seq. 09 r_rel | Seq. 10 t_rel | Seq. 10 r_rel |
|---|---|---|---|---|---|---|
| Unsup. | DeepLO [18] | L-2D | 4.87 | 1.95 | 5.02 | 1.83 |
|  | DeLORA [20] | L-2D | 6.05 | 2.15 | 6.44 | 3.00 |
|  | SeVLO [27] | L-Dep+V | 2.58 | 1.13 | 2.67 | 1.28 |
| Hybrid | SS-DPC-Net [67] | V | 2.13 | 0.80 | 3.48 | 1.38 |
|  | DeLORA w/ mapping [20] | L-2D | 1.54 | 0.68 | 1.78 | 0.69 |
| T-Opt. | DOC [15] | V | 2.26 | 0.87 | 2.61 | 1.59 |
|  | DOC+ [15] | V | 2.02 | 0.61 | 2.29 | 1.10 |
|  | Li et al. [58] | V | 1.87 | 0.46 | 1.93 | 0.30 |
|  | Wagstaff et al. [66] | V | 1.19 | 0.30 | 1.34 | 0.37 |
|  | UnVELO (Ours) | L-2D+V | 0.99 | 0.26 | 0.71 | 0.31 |
Table: Comparison with supervised, unsupervised, and hybrid LiDAR odometry methods on KITTI Seq. 07–10 (t_rel / r_rel per sequence).

| Type | Method | Modal | 07 t_rel | 07 r_rel | 08 t_rel | 08 r_rel | 09 t_rel | 09 r_rel | 10 t_rel | 10 r_rel |
|---|---|---|---|---|---|---|---|---|---|---|
| Sup. | PWCLO-Net [25] | L-P | 0.60 | 0.44 | 1.26 | 0.55 | 0.79 | 0.35 | 1.69 | 0.62 |
|  | LO-Net [17] | L-2D | 1.70 | 0.89 | 2.12 | 0.77 | 1.37 | 0.58 | 1.80 | 0.93 |
|  | E3DLO [21] | L-2D | 0.46 | 0.38 | 1.14 | 0.41 | 0.78 | 0.33 | 0.80 | 0.46 |
| Unsup. | UnPWC-SVDLO [46] | L-P | 0.71 | 0.79 | 1.51 | 0.75 | 1.27 | 0.67 | 2.05 | 0.89 |
|  | SelfVoxeLO [24] | L-vox | 3.09 | 1.81 | 3.16 | 1.14 | 3.01 | 1.14 | 3.48 | 1.11 |
|  | RLO [26] | L-vox | 3.24 | 1.72 | 2.48 | 1.10 | 2.75 | 1.01 | 3.08 | 1.23 |
| Hybrid | DMLO [45] | L-2D | 0.73 | 0.48 | 1.08 | 0.42 | 1.10 | 0.61 | 1.12 | 0.64 |
|  | DMLO w/ mapping [45] | L-2D | 0.53 | 0.51 | 0.93 | 0.48 | 0.58 | 0.30 | 0.75 | 0.52 |
|  | LO-Net w/ mapping [17] | L-2D | 0.56 | 0.45 | 1.08 | 0.43 | 0.77 | 0.38 | 0.92 | 0.41 |
|  | SelfVoxeLO w/ mapping [24] | L-vox | 0.31 | 0.21 | 1.18 | 0.35 | 0.83 | 0.34 | 1.22 | 0.40 |
|  | RLO w/ mapping [26] | L-vox | 0.56 | 0.26 | 1.17 | 0.38 | 0.65 | 0.25 | 0.72 | 0.31 |
| T-Opt. | UnVELO (Ours) | L-2D+V | 1.46 | 0.78 | 1.25 | 0.43 | 0.88 | 0.26 | 0.79 | 0.33 |
Table: Comparison on the DSEC day (06_a) and night (03_a) sequences.

| Type | Method | Modal | Day (06_a) t_rel | Day (06_a) r_rel | Night (03_a) t_rel | Night (03_a) r_rel |
|---|---|---|---|---|---|---|
| Unsup. | DeLORA [20] † | L-2D | 22.53 | 9.93 | 39.00 | 12.48 |
|  | DeepLO [18] † | L-2D | 42.43 | 20.77 | 51.26 | 31.12 |
|  | SeVLO [27] | L-Dep+V | 8.42 | 5.78 | 22.43 | 24.37 |
|  | UnVELO (Ours) | L-2D+V | 2.07 | 2.10 | 5.84 | 7.92 |