RNGC-VIWO: Robust Neural Gyroscope Calibration Aided Visual-Inertial-Wheel Odometry for Autonomous Vehicle
Abstract
1. Introduction
- (1) A robust neural calibration network for low-cost IMU gyroscopes, called NGC-Net, which leverages a temporal convolutional network to extract error features from raw IMU measurements. With an effective data augmentation strategy, a well-designed network structure, and multiple loss terms, our experiments show that the proposed NGC-Net achieves better de-noising performance (an illustrative sketch follows this list).
- (2) An effective fusion strategy that combines the advantages of the network outputs and VIWO methods, and a novel multi-sensor fusion tracking method that reduces long-term drift using the heading obtained from our NGC-Net outputs.
- (3) A series of experiments on public datasets showing that our NGC-Net outperforms existing learning methods and competes with VIO methods. We implement the RNGC-VIWO system and validate the proposed method on complex urban driving datasets. Compared with state-of-the-art methods, our method significantly improves the accuracy and robustness of vehicle localization over long-term, large-scale trajectories.
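As a rough illustration of contribution (1), the sketch below shows the kind of dilated temporal-convolution calibrator the text describes: a stack of 1D convolutions over a window of raw IMU samples that predicts a per-sample additive gyroscope correction. The class name, layer widths, kernel size, dropout rate, and the additive correction form are our assumptions for illustration, not the authors' exact NGC-Net; the GELU, weight normalization, and dropout components follow those cited in Section 3.2.3.

```python
# Illustrative sketch only: a dilated-TCN gyro calibrator in PyTorch.
# All sizes and names here are hypothetical, not the published NGC-Net.
import torch
import torch.nn as nn

class GyroCalibSketch(nn.Module):
    def __init__(self, channels=(6, 32, 64, 128), kernel_size=7):
        super().__init__()
        layers = []
        for i, (c_in, c_out) in enumerate(zip(channels[:-1], channels[1:])):
            dilation = 2 ** i  # exponentially growing receptive field
            conv = nn.utils.weight_norm(
                nn.Conv1d(c_in, c_out, kernel_size, dilation=dilation,
                          padding=dilation * (kernel_size - 1) // 2))
            layers += [conv, nn.GELU(), nn.Dropout(0.1)]
        self.backbone = nn.Sequential(*layers)
        # 1x1 convolution maps features to a 3-axis gyro correction per sample
        self.head = nn.Conv1d(channels[-1], 3, kernel_size=1)

    def forward(self, imu):  # imu: (batch, 6, T) -- gyro (3) + accel (3)
        delta = self.head(self.backbone(imu))  # (batch, 3, T)
        return imu[:, :3, :] + delta  # de-noised angular rates

# Example: de-noise a 448-sample window of 6-axis IMU data
w_hat = GyroCalibSketch()(torch.randn(8, 6, 448))
```

In training, such a network is typically supervised by comparing orientation increments integrated from the corrected rates against ground truth over several horizons, which matches the multiple loss terms mentioned above.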
2. Related Work
2.1. Vision-Aided or -Based Methods
2.2. IMU Correction Methods
3. Method
3.1. Overview
3.1.1. System Overview
3.1.2. Notation
3.2. Gyroscope Error Calibration Based on Deep Learning
3.2.1. Gyroscope Correction Model
3.2.2. Data Preprocessing
3.2.3. Network Structure
3.2.4. Loss Function
3.2.5. Implementation Details
3.3. Multi-Sensor Fusion State Estimation
3.3.1. Image Processing
3.3.2. De-Noised IMU and Odometer Pre-Integration
3.3.3. System Initialization
3.3.4. Nonlinear Optimization
3.3.5. Yaw Attitude Correction
4. Experiments
4.1. Baselines
- (1) Raw IMU: Orientation computed using the original IMU readings.
- (2) OriNet: A 3D orientation estimation method based on an LSTM network [47].
- (3) DIG-Net: Attitude estimation based on a dilated convolution network [29].
- (4) Proposed NGC-Net: Our learning-based method described in Section 3.2.
- (5) VINS-Mono: A representative state-of-the-art visual-inertial odometry with open-source code [11].
- (6) Open-VINS: A state-of-the-art filter-based visual-inertial estimator, for which we choose the stereo-plus-IMU configuration [7].
- (7) Proposed VIWO: An optimization-based monocular visual-inertial-wheel odometry built on VINS-Mono without the aid of NGC-Net; it is similar to the work proposed in [11], but without the online IMU-odometer extrinsic parameter calibration module.
- (8) Proposed VIWO+NGC: The same as method (7), except that the gyroscope inputs are the NGC-Net outputs rather than the raw gyroscope measurements.
- (9) Proposed RNGC-VIWO: The method described in Section 3.3, in which the de-noised gyroscope measurements are effectively fused into the overall framework.
- 3D orientation estimates from learning methods and VIO methods. We compare against the deep learning methods to demonstrate the de-noising performance of our NGC-Net, using the same training and test sequences for a fair comparison. Since the OriNet code has not been released and only test results on the EuRoC MAV dataset [60] are provided, we do not compare with OriNet on the KAIST Urban dataset [61]. DIG-Net has not been tested on the KAIST Urban dataset; since it is open-source, we use the default network parameters and take the best training results as the network output. We also compare against the VIO methods (VINS-Mono, Open-VINS) to demonstrate that our NGC-Net can estimate orientation accurately and compete with VIO methods (see the integration sketch after this list).
- 6-DOF pose estimates from multi-sensor fusion. We further evaluate the localization performance of our proposed RNGC-VIWO on the KAIST dataset, which provides a stereo camera, an IMU, and a wheel odometer for vehicle localization. Since no open-source algorithm with the same sensor configuration is currently available, we compare RNGC-VIWO with the proposed VIWO, the proposed VIWO+NGC, and the state-of-the-art VIO methods (VINS-Mono, Open-VINS). In addition, reference [37] proposes an excellent VIWO algorithm and tests it on KAIST urban39; since it is not open-source, we directly include its urban39 results in Section 4.4 for comparison.
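For the orientation comparison in the first item above, learning-based methods are evaluated open-loop: the (de-noised) angular rates are integrated from a known initial attitude and the result is compared with ground truth. Below is a minimal sketch of that integration, assuming a simple first-order SO(3) update; the function names and sampling rate are ours, for illustration.

```python
# Open-loop attitude integration from (de-noised) gyro rates.
# Assumes a first-order update R_{k+1} = R_k * Exp(omega_k * dt).
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def integrate_gyro(R0, omegas, dt):
    """Chain incremental rotations from an initial attitude R0."""
    Rs, R = [R0], R0.copy()
    for w in omegas:
        R = R @ so3_exp(np.asarray(w) * dt)
        Rs.append(R.copy())
    return Rs

# Example: integrate a 200 Hz angular-rate sequence (rate assumed)
trajectory = integrate_gyro(np.eye(3), np.zeros((1000, 3)), dt=1.0 / 200.0)
```

The AOE/AYE metrics defined in Section 4.2 are then computed between this integrated attitude and the ground-truth orientation.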
4.2. Metrics Definitions
- (1) Absolute Yaw Error (AYE): The root mean square error (RMSE) between the ground-truth and estimated heading.
- (2) Absolute Orientation Error (AOE): The RMSE between the ground-truth and estimated orientation.
- (3) Absolute Translation Error (ATE): The RMSE between the ground-truth and estimated position. The three metrics are written out after this list.
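In standard form, with notation assumed here ($\psi_k$: heading, $R_k \in SO(3)$: orientation, $p_k$: position, over $N$ evaluated timestamps), the three metrics are:

```latex
\mathrm{AYE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left(\psi_k^{\mathrm{gt}}-\hat{\psi}_k\right)^{2}}, \quad
\mathrm{AOE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\lVert \log\!\left((R_k^{\mathrm{gt}})^{\top}\hat{R}_k\right)\right\rVert_2^{2}}, \quad
\mathrm{ATE} = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\lVert p_k^{\mathrm{gt}}-\hat{p}_k\right\rVert_2^{2}}
```

where $\log(\cdot)$ maps a rotation matrix to its rotation vector.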
4.3. Neural Gyroscope Calibration Network Performance
4.4. Multi-Sensor Fusion System Performance
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Gao, L.; Xiong, L.; Xia, X.; Lu, Y.; Yu, Z.; Khajepour, A. Improved Vehicle Localization Using On-Board Sensors and Vehicle Lateral Velocity. IEEE Sens. J. 2022, 22, 6818–6831.
- Klein, G.; Murray, D. Parallel tracking and mapping for small AR workspaces. In Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, Nara, Japan, 13–16 November 2007; pp. 225–234.
- Forster, C.; Pizzoli, M.; Scaramuzza, D. SVO: Fast semi-direct monocular visual odometry. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 15–22.
- Wang, Y.; Zhang, S.; Wang, J. Ceiling-View Semi-Direct Monocular Visual Odometry with Planar Constraint. Remote Sens. 2022, 14, 5447.
- Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
- Wang, K.; Huang, X.; Chen, J.; Cao, C.; Xiong, Z.; Chen, L. Forward and Backward Visual Fusion Approach to Motion Estimation with High Robustness and Low Cost. Remote Sens. 2019, 11, 2139.
- Geneva, P.; Eckenhoff, K.; Lee, W.; Yang, Y.; Huang, G. OpenVINS: A research platform for visual-inertial estimation. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 4666–4672.
- Leutenegger, S.; Lynen, S.; Bosse, M.; Siegwart, R.; Furgale, P. Keyframe-based visual-inertial odometry using nonlinear optimization. Int. J. Robot. Res. 2015, 34, 314–334.
- Xu, B.; Chen, Y.; Zhang, S.; Wang, J. Improved Point–Line Visual–Inertial Odometry System Using Helmert Variance Component Estimation. Remote Sens. 2020, 12, 2901.
- Leutenegger, S.; Furgale, P.; Rabaud, V.; Chli, M.; Konolige, K.; Siegwart, R. Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014; pp. 789–795.
- Qin, T.; Li, P.; Shen, S. VINS-Mono: A robust and versatile monocular visual-inertial state estimator. IEEE Trans. Robot. 2018, 34, 1004–1020.
- Qin, T.; Pan, J.; Cao, S.; Shen, S. A General Optimization-based Framework for Local Odometry Estimation with Multiple Sensors. arXiv 2019, arXiv:1901.03638.
- Jiang, C.; Zhao, D.; Zhang, Q.; Liu, W. A Multi-GNSS/IMU Data Fusion Algorithm Based on the Mixed Norms for Land Vehicle Applications. Remote Sens. 2023, 15, 2439.
- Campos, C.; Elvira, R.; Rodríguez, J.J.G.; Montiel, J.M.M.; Tardós, J.D. ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM. IEEE Trans. Robot. 2021, 37, 1874–1890.
- Wu, K.J.; Guo, C.X.; Georgiou, G.; Roumeliotis, S.I. VINS on wheels. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5155–5162.
- Yu, Z.; Zhu, L.; Lu, G. Tightly-coupled Fusion of VINS and Motion Constraint for Autonomous Vehicle. IEEE Trans. Veh. Technol. 2022, 14, 5799–5810.
- Prikhodko, I.P.; Bearss, B.; Merritt, C.; Bergeron, J.; Blackmer, C. Towards self-navigating cars using MEMS IMU: Challenges and opportunities. In Proceedings of the 2018 IEEE International Symposium on Inertial Sensors and Systems (INERTIAL), Lake Como, Italy, 26–29 March 2018; pp. 1–4.
- Ru, X.; Gu, N.; Shang, H.; Zhang, H. MEMS Inertial Sensor Calibration Technology: Current Status and Future Trends. Micromachines 2022, 13, 879.
- Gang, P.; Zezao, L.; Bocheng, C.; Shanliang, C.; Dingxin, H. Robust Tightly-Coupled Pose Estimation Based on Monocular Vision, Inertia and Wheel Speed. arXiv 2020, arXiv:2003.01496.
- Quan, M.; Piao, S.; Tan, M.; Huang, S.S. Tightly-Coupled Monocular Visual-Odometric SLAM Using Wheels and a MEMS Gyroscope. IEEE Access 2019, 7, 97374–97389.
- Liu, J.; Gao, W.; Hu, Z. Visual-Inertial Odometry Tightly Coupled with Wheel Encoder Adopting Robust Initialization and Online Extrinsic Calibration. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 5391–5397.
- Li, T.; Zhang, H.; Gao, Z.; Niu, X.; El-Sheimy, N. Tight Fusion of a Monocular Camera, MEMS-IMU, and Single-Frequency Multi-GNSS RTK for Precise Navigation in GNSS-Challenged Environments. Remote Sens. 2019, 11, 610.
- Gu, N.; Xing, F.; You, Z. Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack. Remote Sens. 2022, 14, 5975.
- Zhang, X.; Su, Y.; Zhu, X. Loop closure detection for visual SLAM systems using convolutional neural network. In Proceedings of the 23rd International Conference on Automation and Computing (ICAC), Huddersfield, UK, 7–8 September 2017; pp. 1–6.
- Naseer, T.; Ruhnke, M.; Stachniss, C.; Spinello, L.; Burgard, W. Robust visual SLAM across seasons. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 2529–2535.
- Schneider, T.; Dymczyk, M.; Fehr, M.; Egger, K.; Lynen, S.; Gilitschenski, I.; Siegwart, R. Maplab: An open framework for research in visual-inertial mapping and localization. IEEE Robot. Autom. Lett. 2018, 3, 1418–1425.
- Liu, W.; Caruso, D.; Ilg, E.; Dong, J.; Mourikis, A.I.; Daniilidis, K.; Kumar, V.; Engel, J. TLIO: Tight learned inertial odometry. IEEE Robot. Autom. Lett. 2020, 5, 5653–5660.
- Chen, C.; Lu, X.; Markham, A.; Trigoni, N. IONet: Learning to cure the curse of drift in inertial odometry. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; pp. 6468–6476.
- Brossard, M.; Bonnabel, S.; Barrau, A. Denoising IMU gyroscopes with deep learning for open-loop attitude estimation. IEEE Robot. Autom. Lett. 2020, 5, 4796–4803.
- Liu, Y.; Liang, W.; Cui, J. LGC-Net: A Lightweight Gyroscope Calibration Network for Efficient Attitude Estimation. arXiv 2022, arXiv:2209.08816.
- Gao, Y.; Shi, D.; Li, R.; Liu, Z.; Sun, W. Gyro-Net: IMU Gyroscopes Random Errors Compensation Method Based on Deep Learning. IEEE Robot. Autom. Lett. 2023, 8, 1471–1478.
- Li, R.; Yi, W.; Fu, C.; Yi, X. Calib-Net: Calibrating the low-cost IMU via deep convolutional neural network. Front. Robot. AI 2021, 8, 772583.
- Xia, X.; Meng, Z.; Han, X.; Li, H.; Tsukiji, T.; Xu, R.; Zhang, Z.; Ma, J. Automated Driving Systems Data Acquisition and Processing Platform. arXiv 2022, arXiv:2211.13425.
- Mourikis, A.I.; Roumeliotis, S.I. A Multi-State Constraint Kalman Filter for Vision-Aided Inertial Navigation. In Proceedings of the 2007 IEEE International Conference on Robotics and Automation (ICRA), Rome, Italy, 10–14 April 2007; pp. 3565–3572.
- Zhang, Z.; Jiao, Y.; Huang, S.; Wang, Y.; Xiong, R. Map-based Visual-Inertial Localization: Consistency and Complexity. arXiv 2022, arXiv:2204.12173.
- Jung, J.H.; Cha, J.; Chung, J.Y. Monocular Visual-Inertial-Wheel Odometry Using Low-Grade IMU in Urban Areas. IEEE Trans. Intell. Transp. Syst. 2020, 23, 925–938.
- Lee, W.; Eckenhoff, K.; Yang, Y.; Geneva, P.; Huang, G. Visual-Inertial-Wheel Odometry with Online Calibration. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24–30 October 2020; pp. 4559–4566.
- Ghanipoor, F.; Hashemi, M.; Salarieh, H. Toward Calibration of Low-Precision MEMS IMU Using a Nonlinear Model and TUKF. IEEE Sens. J. 2020, 20, 4131–4138.
- Jung, J.H.; Heo, S.; Park, C.G. Observability analysis of IMU intrinsic parameters in stereo visual–inertial odometry. IEEE Trans. Instrum. Meas. 2020, 69, 7530–7541.
- Liu, W.; Xiong, L.; Xia, X.; Lu, Y.; Gao, L.; Song, S. Vision-aided intelligent vehicle sideslip angle estimation based on a dynamic model. IET Intell. Transp. Syst. 2020, 14, 1183–1189.
- Xia, X.; Hashemi, E.; Xiong, L.; Khajepour, A. Autonomous Vehicle Kinematics and Dynamics Synthesis for Sideslip Angle Estimation Based on Consensus Kalman Filter. IEEE Trans. Control Syst. Technol. 2022, 31, 179–192.
- Liu, W.; Xia, X.; Xiong, L.; Lu, Y.; Gao, L.; Yu, Z. Automated Vehicle Sideslip Angle Estimation Considering Signal Measurement Characteristic. IEEE Sens. J. 2021, 21, 21675–21687.
- Xia, X.; Xiong, L.; Huang, Y.; Lu, Y.; Gao, L.; Xu, N.; Yu, Z. Estimation on IMU yaw misalignment by fusing information of automotive onboard sensors. Mech. Syst. Signal Process. 2021, 162, 107993.
- Clark, R.; Wang, S.; Wen, H.; Markham, A.; Trigoni, N. VINet: Visual-inertial Odometry as a Sequence-to-Sequence Learning Problem. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 3995–4001.
- Chen, D.; Wang, N.; Xu, R.; Xie, W.; Bao, H.; Zhang, G. RNIN-VIO: Robust Neural Inertial Navigation Aided Visual-Inertial Odometry in Challenging Scenes. In Proceedings of the 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bari, Italy, 4–8 October 2021; pp. 275–283.
- Chen, H.; Aggarwal, P.; Taha, T.M.; Chodavarapu, V.P. Improving Inertial Sensor by Reducing Errors using Deep Learning Methodology. In Proceedings of the NAECON 2018—IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 23–26 July 2018; pp. 197–202.
- Esfahani, M.A.; Wang, H.; Wu, K.; Yuan, S. OriNet: Robust 3-D orientation estimation with a single particular IMU. IEEE Robot. Autom. Lett. 2019, 5, 399–406.
- Rehder, J.; Nikolic, J.; Schneider, T.; Hinzmann, T.; Siegwart, R. Extending kalibr: Calibrating the extrinsics of multiple IMUs and of individual axes. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 4304–4311.
- Rohac, J.; Sipos, M.; Simanek, J. Calibration of low-cost triaxial inertial sensors. IEEE Instrum. Meas. Mag. 2015, 18, 32–38.
- Zhang, M.; Zhang, M.; Chen, Y.; Li, M. IMU data processing for inertial aided navigation: A recurrent neural network based approach. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 3992–3998.
- Bai, S.; Kolter, J.Z.; Koltun, V. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv 2018, arXiv:1803.01271.
- Yu, F.; Koltun, V. Multi-Scale Context Aggregation by Dilated Convolutions. In Proceedings of the International Conference on Learning Representations (ICLR), San Juan, Puerto Rico, 2–4 May 2016.
- Hendrycks, D.; Gimpel, K. Gaussian error linear units (GELUs). arXiv 2016, arXiv:1606.08415.
- Salimans, T.; Kingma, D.P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Adv. Neural Inf. Process. Syst. 2016, 29, 901–909.
- Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Shi, J.; Tomasi, C. Good features to track. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 593–600.
- Lucas, B.D.; Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI’81), San Francisco, CA, USA, 24–28 August 1981; pp. 674–679.
- Agarwal, S.; Mierle, K.; The Ceres Solver Team. Ceres Solver. Available online: http://ceres-solver.org (accessed on 1 March 2022).
- Burri, M.; Nikolic, J.; Gohl, P.; Schneider, T.; Rehder, J.; Omari, S.; Achtelik, M.W.; Siegwart, R. The EuRoC micro aerial vehicle datasets. Int. J. Robot. Res. 2016, 35, 1157–1163.
- Jeong, J.; Cho, Y.; Shin, Y.S.; Roh, H.; Kim, A. Complex urban dataset with multi-level sensors from highly diverse urban environments. Int. J. Robot. Res. 2019, 38, 642–657.
- Grupp, M. evo: Python Package for the Evaluation of Odometry and SLAM. Available online: https://github.com/MichaelGrupp/evo (accessed on 10 March 2022).
- Liu, W.; Quijano, K.; Crawford, M.M. YOLOv5-Tassel: Detecting tassels in RGB UAV imagery with improved YOLOv5 based on transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 8085–8094.
Orientation errors on the EuRoC MAV dataset [60], reported as AOE/AYE in degrees:

Sequence | Raw IMU | VINS-Mono [11] | Open-VINS [7] | OriNet [47] | DIG-Net [29] | NGC-Net |
---|---|---|---|---|---|---|
MH_02_easy | 146/130 | 1.34/1.32 | 1.11/1.05 | 5.75/0.51 | 1.39/0.85 | 1.70/1.20 |
MH_04_difficult | 130/77.9 | 1.44/1.40 | 1.60/1.16 | 8.85/7.27 | 1.40/0.25 | 0.93/0.23 |
V1_01_easy | 71.3/71.2 | 0.97/0.90 | 0.80/0.67 | 6.36/2.09 | 1.13/0.49 | 0.78/0.48 |
V1_03_difficult | 119/84.9 | 4.72/4.68 | 2.32/2.27 | 14.7/11.5 | 2.70/0.96 | 1.05/0.75 |
V2_02_medium | 117/86.0 | 2.58/2.41 | 1.85/1.61 | 11.7/6.03 | 3.85/2.25 | 3.19/1.57 |
mean | 125/89.0 | 2.21/2.14 | 1.55/1.37 | 9.46/5.48 | 2.10/0.96 | 1.53/0.85 |
Orientation errors on the KAIST Urban dataset [61], reported as AOE/AYE in degrees (“--” denotes unavailable results):

Sequence | Raw IMU | VINS-Mono [11] | Open-VINS [7] | DIG-Net [29] | NGC-Net |
---|---|---|---|---|---|
urban29 | 37.64/26.47 | 3.15/2.82 | 1.36/0.81 | 3.33/0.54 | 2.91/0.69 |
urban31 | 93.68/60.74 | 16.02/15.72 | -- | 8.39/6.98 | 5.91/2.07 |
urban33 | 84.98/75.94 | 10.25/9.93 | 3.61/3.37 | 6.12/3.69 | 3.43/0.65 |
urban34 | 43.95/40.75 | 24.38/24.08 | -- | 2.57/2.16 | 1.98/0.48 |
urban37 | 48.23/29.99 | 67.27/5.57 | 7.34/6.29 | 3.24/2.22 | 3.12/0.98 |
urban39 | 122.34/120.48 | 19.97/19.94 | 3.18/2.95 | 7.46/5.60 | 5.11/0.89 |
mean | 71.80/59.06 | 23.51/13.01 | -- | 4.85/3.31 | 3.74/0.96 |
Localization errors on the KAIST Urban dataset, reported as ATE in meters/AOE in degrees (“--” and “-” denote unavailable results; the urban39 entries marked * and ** are taken directly from [37]):

Sequence | Length | VINS-Mono [11] | Open-VINS [7] | VIWO [37] | Proposed VIWO | Proposed VIWO+NGC | Proposed RNGC-VIWO |
---|---|---|---|---|---|---|---|
urban29 | 3.6 km | 256.73/3.15 | 5.68/1.36 | - | 17.33/1.92 | 17.08/1.87 | 4.06/0.84 |
urban31 | 11.4 km | 291.81/16.02 | -- | - | 255.15/16.97 | 238.06/15.99 | 49.83/2.99 |
urban33 | 7.6 km | 106.16/10.25 | 25.34/3.61 | - | 72.31/10.75 | 70.35/10.81 | 11.74/1.55 |
urban34 | 7.8 km | 340.29/24.38 | -- | - | 171.58/17.60 | 149.00/15.21 | 17.60/2.37 |
urban37 | 11.77 km | 1282.76/67.27 | 182.51/7.34 | - | 463.97/40.50 | 463.84/40.25 | 27.47/1.43 |
urban39 | 11.06 km | 153.70/19.97 | 24.12/3.18 | 52.65/2.87 * 42.74/1.71 ** | 99.02/18.60 | 98.82/18.77 | 20.11/1.79 |
mean | 8.87 km | 380.80/23.51 | -- | - | 179.89/17.72 | 172.86/17.15 | 21.80/1.83 |