LiDAR-360 RGB Camera-360 Thermal Camera Targetless Calibration for Dynamic Situations
Abstract
1. Introduction
- Ensuring effective sensor operation in dynamic conditions requires correcting the deformation caused by sensor velocity, a critical task for reliable data acquisition [22]. Most current approaches rely on additional supporting devices [23] to re-measure speed and distance; while effective in ideal conditions, these devices can be significantly impacted by adverse weather or complex, obstacle-rich environments. Moreover, employing multiple devices increases computational demand, potentially introducing delays in real-world applications. To mitigate these limitations, we propose a novel method for estimating velocities from point clouds based on key points extracted from projected images (a rough sketch follows this list).
- Sensor calibration is always the first task performed to optimize a sensing system. However, as mentioned above, target-based calibration is not always feasible, particularly for 360-degree cameras. Additionally, widely used calibration methods built on pre-existing datasets are typically suited to standard images rather than the 360-degree images that are central to this paper. Therefore, we introduce a new targetless calibration method for 360 LiDARs and 360-degree RGB cameras in dynamic situations, built on the point accumulation approach above (see the second sketch after this list).
- The significant aspect ratio disparity between 360-degree images and standard images can render conventional outlier removal and matching methods prone to overfitting. To improve the pairing efficiency between 360-degree images and point cloud data, we propose an advanced approach that raises the percentage of matched features in 360-degree images and filters noise from the feature pairs (see the third sketch after this list).
- The code and dataset can be accessed at https://github.com/baokhanhtran/Multimodal-Targetless-Calibration (accessed on 6 October 2024).
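To make the first contribution concrete, below is a minimal sketch of estimating a per-frame velocity from key points matched across two consecutive point clouds. It is not the paper's Algorithm 3: the function name, the median aggregation, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def estimate_velocity(kp_prev, kp_curr, dt):
    """Per-frame velocity from matched 3D key points (illustrative).

    kp_prev, kp_curr: (N, 3) arrays of corresponding key-point
    coordinates from two consecutive point clouds; dt: frame gap in s.
    """
    disp = kp_curr - kp_prev              # displacement of each matched pair
    v = np.median(disp, axis=0) / dt      # robust to a few mismatches
    return v, float(np.linalg.norm(v))

# Toy check: points shifted ~0.55 m along x over 0.1 s -> ~5.5 m/s.
rng = np.random.default_rng(0)
prev = rng.random((100, 3))
curr = prev + [0.55, 0.0, 0.0] + 0.01 * rng.standard_normal((100, 3))
vel, speed = estimate_velocity(prev, curr, dt=0.1)
print(vel, round(speed, 2))
```

Using the median rather than the mean keeps a handful of mismatched key points from skewing the estimate.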
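For the calibration contribution, the following sketch shows the generic 2D–3D registration step that targetless LiDAR–camera calibration builds on, using OpenCV's solvePnPRansac. It assumes a pinhole intrinsic matrix for brevity, whereas the paper works with 360-degree (equirectangular) images, so treat it as a conceptual stand-in rather than the proposed method.

```python
import cv2
import numpy as np

def extrinsics_from_matches(pts3d, pts2d, K):
    """LiDAR->camera pose from matched 2D-3D features (pinhole sketch).

    pts3d: (N, 3) LiDAR key points; pts2d: (N, 2) image key points;
    K: 3x3 intrinsic matrix. Returns the rotation matrix, translation
    vector, and RANSAC inlier indices.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.ascontiguousarray(pts3d, dtype=np.float64),
        np.ascontiguousarray(pts2d, dtype=np.float64),
        K.astype(np.float64), distCoeffs=None,
        reprojectionError=3.0)  # pixel threshold for inliers
    if not ok:
        raise RuntimeError("PnP failed: not enough consistent matches")
    R, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
    return R, tvec, inliers
```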
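For the matching contribution, this sketch combines brute-force descriptor matching [96] with Lowe's ratio test and RANSAC [106] to filter noisy feature pairs. The homography model at the end is an assumption made for compactness; a geometric model suited to equirectangular images would take its place in practice.

```python
import cv2
import numpy as np

def filtered_matches(kp_a, desc_a, kp_b, desc_b, ratio=0.75):
    """Descriptor matching with ratio test + RANSAC outlier removal.

    kp_a, kp_b: lists of cv2.KeyPoint; desc_a, desc_b: float32
    descriptor arrays. Returns the geometrically consistent matches.
    """
    bf = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test: keep a match only if it clearly beats the
    # second-best candidate for the same query descriptor.
    good = [m for m, n in bf.knnMatch(desc_a, desc_b, k=2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        return good  # too few pairs to fit a geometric model
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC keeps only pairs consistent with the dominant geometry.
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if mask is None:
        return good
    return [m for m, keep in zip(good, mask.ravel()) if keep]
```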
2. Related Works
2.1. Calibration
2.1.1. Target Calibration
2.1.2. Targetless Calibration
2.2. Ego-Motion Compensation
2.3. Velocity Estimation
3. Targetless Calibration
3.1. Feature Extraction
3.1.1. Feature Extraction from Camera Images
Algorithm 1: Feature extraction in RGB images
3.1.2. Feature Extraction from LiDAR Point Clouds
Algorithm 2: Feature extraction in LiDAR images
3.2. Registration
3.2.1. Coarse Matching
3.2.2. Fine Matching
4. Ego-Motion Compensation
4.1. Point Cloud Accumulation
4.2. Ego-Motion Compensation in LiDARs
Algorithm 3: Velocity estimation
4.3. Ego-Motion Compensation in Cameras
5. Experimental Results
5.1. Velocity Estimation
5.2. 360 RGB Camera–360 Thermal Camera Registration
5.3. 360 RGB Camera–LiDAR Registration
5.3.1. Quantitative Results in Static Situations
5.3.2. Quantitative Results in Dynamic Situations
5.3.3. Comparison
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Carballo, A.; Ohya, A.; Yuta, S. Fusion of double layered multiple laser range finders for people detection from a mobile robot. In Proceedings of the 2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Seoul, Republic of Korea, 20–22 August 2008; pp. 677–682. [Google Scholar]
- Carballo, A.; Monrroy, A.; Wong, D.; Narksri, P.; Lambert, J.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. Characterization of multiple 3D LiDARs for localization and mapping performance using the NDT algorithm. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium Workshops (IV Workshops), Nagoya, Japan, 11–17 July 2021; pp. 327–334. [Google Scholar]
- ElSheikh, A.; Abu-Nabah, B.A.; Hamdan, M.O.; Tian, G.Y. Infrared camera geometric calibration: A review and a precise thermal radiation checkerboard target. Sensors 2023, 23, 3479. [Google Scholar] [CrossRef]
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
- Zhou, L.; Li, Z.; Kaess, M. Automatic extrinsic calibration of a camera and a 3d lidar using line and plane correspondences. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569. [Google Scholar]
- Kim, E.S.; Park, S.Y. Extrinsic calibration between camera and LiDAR sensors by matching multiple 3D planes. Sensors 2019, 20, 52. [Google Scholar] [CrossRef] [PubMed]
- Zhang, D.; Ma, L.; Gong, Z.; Tan, W.; Zelek, J.; Li, J. An overlap-free calibration method for LiDAR-camera platforms based on environmental perception. IEEE Trans. Instrum. Meas. 2023, 72, 1–7. [Google Scholar] [CrossRef]
- Xu, J.; Li, R.; Zhao, L.; Yu, W.; Liu, Z.; Zhang, B.; Li, Y. Cammap: Extrinsic calibration of non-overlapping cameras based on slam map alignment. IEEE Robot. Autom. Lett. 2022, 7, 11879–11885. [Google Scholar] [CrossRef]
- Carrera, G.; Angeli, A.; Davison, A.J. SLAM-based automatic extrinsic calibration of a multi-camera rig. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2652–2659. [Google Scholar]
- Zuo, X.; Yang, Y.; Geneva, P.; Lv, J.; Liu, Y.; Huang, G.; Pollefeys, M. Lic-fusion 2.0: Lidar-inertial-camera odometry with sliding-window plane-feature tracking. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5112–5119. [Google Scholar]
- Zhang, J.; Siritanawan, P.; Yue, Y.; Yang, C.; Wen, M.; Wang, D. A two-step method for extrinsic calibration between a sparse 3d lidar and a thermal camera. In Proceedings of the 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, 18–21 November 2018; pp. 1039–1044. [Google Scholar]
- Shivakumar, S.S.; Rodrigues, N.; Zhou, A.; Miller, I.D.; Kumar, V.; Taylor, C.J. Pst900: Rgb-thermal calibration, dataset and segmentation network. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 9441–9447. [Google Scholar]
- Zhao, Y.; Huang, K.; Lu, H.; Xiao, J. Extrinsic calibration of a small FoV LiDAR and a camera. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 3915–3920. [Google Scholar]
- Kannala, J.; Brandt, S.S. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 1335–1340. [Google Scholar] [CrossRef]
- Chai, Z.; Sun, Y.; Xiong, Z. A novel method for lidar camera calibration by plane fitting. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Auckland, New Zealand, 9–12 July 2018; pp. 286–291. [Google Scholar]
- Bu, Z.; Sun, C.; Wang, P.; Dong, H. Calibration of camera and flash LiDAR system with a triangular pyramid target. Appl. Sci. 2021, 11, 582. [Google Scholar] [CrossRef]
- Duan, J.; Huang, Y.; Wang, Y.; Ye, X.; Yang, H. Multipath-Closure Calibration of Stereo Camera and 3D LiDAR Combined with Multiple Constraints. Remote Sens. 2024, 16, 258. [Google Scholar] [CrossRef]
- Beltrán, J.; Guindel, C.; De La Escalera, A.; García, F. Automatic extrinsic calibration method for lidar and camera sensor setups. IEEE Trans. Intell. Transp. Syst. 2022, 23, 17677–17689. [Google Scholar] [CrossRef]
- Hua, H.; Ahuja, N. A high-resolution panoramic camera. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. I. [Google Scholar]
- Krishnan, A.; Ahuja, N. Panoramic image acquisition. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 18–20 June 1996; pp. 379–384. [Google Scholar]
- Han, M.; Lee, S.H.; Ok, S. A real-time architecture of 360-degree panoramic video streaming system. In Proceedings of the 2019 IEEE 2nd International Conference on Knowledge Innovation and Invention (ICKII), Seoul, Republic of Korea, 12–15 July 2019; pp. 477–480. [Google Scholar]
- Blaga, B.-C.-Z.; Nedevschi, S. Online cross-calibration of camera and lidar. In Proceedings of the 2017 13th IEEE International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 7–9 September 2017; pp. 295–301. [Google Scholar]
- Shi, J.; Wang, W.; Li, X.; Yan, Y.; Yin, E. Motion distortion elimination for LiDAR-inertial odometry under rapid motion conditions. IEEE Trans. Instrum. Meas. 2023, 72, 9514516. [Google Scholar] [CrossRef]
- Kato, S.; Takeuchi, E.; Ishiguro, Y.; Ninomiya, Y.; Takeda, K.; Hamada, T. An open approach to autonomous vehicles. IEEE Micro 2015, 35, 60–68. [Google Scholar] [CrossRef]
- Mishra, S.; Pandey, G.; Saripalli, S. Extrinsic Calibration of a 3D-LIDAR and a Camera. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1765–1770. [Google Scholar]
- Khoramshahi, E.; Campos, M.B.; Tommaselli, A.M.G.; Vilijanen, N.; Mielonen, T.; Kaartinen, H.; Kukko, A.; Honkavaara, E. Accurate calibration scheme for a multi-camera mobile mapping system. Remote Sens. 2019, 11, 2778. [Google Scholar] [CrossRef]
- Nhat Quang, N. Universal Calibration Target for Joint Calibration of Thermal Cameras, RGB Cameras, and LiDAR Sensors. Master’s Thesis, Graduate School of Engineering, Nagoya University, Nagoya, Japan, 2023. [Google Scholar]
- Jeong, S.; Kim, S.; Kim, J.; Kim, M. O3 LiDAR-Camera Calibration: One-Shot, One-Target and Overcoming LiDAR Limitations. IEEE Sens. J. 2024, 24, 18659–18671. [Google Scholar] [CrossRef]
- Zhang, J.; Liu, Y.; Wen, M.; Yue, Y.; Zhang, H.; Wang, D. L2V2T2Calib: Automatic and Unified Extrinsic Calibration Toolbox for Different 3D LiDAR, Visual Camera and Thermal Camera. In Proceedings of the 2023 IEEE Intelligent Vehicles Symposium (IV), Anchorage, AK, USA, 4–7 June 2023; pp. 1–7. [Google Scholar]
- Yuan, C.; Liu, X.; Hong, X.; Zhang, F. Pixel-level extrinsic self calibration of high resolution lidar and camera in targetless environments. IEEE Robot. Autom. Lett. 2021, 6, 7517–7524. [Google Scholar] [CrossRef]
- Koide, K.; Oishi, S.; Yokozuka, M.; Banno, A. General, single-shot, target-less, and automatic lidar-camera extrinsic calibration toolbox. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 11301–11307. [Google Scholar]
- Yu, H.; Zhen, W.; Yang, W.; Scherer, S. Line-based 2-D–3-D registration and camera localization in structured environments. IEEE Trans. Instrum. Meas. 2020, 69, 8962–8972. [Google Scholar] [CrossRef]
- Renzler, T.; Stolz, M.; Schratter, M.; Watzenig, D. Increased accuracy for fast moving LiDARS: Correction of distorted point clouds. In Proceedings of the 2020 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Dubrovnik, Croatia, 25–28 May 2020; pp. 1–6. [Google Scholar]
- Li, S.; Wang, L.; Li, J.; Tian, B.; Chen, L.; Li, G. 3D LiDAR/IMU calibration based on continuous-time trajectory estimation in structured environments. IEEE Access 2021, 9, 138803–138816. [Google Scholar] [CrossRef]
- Yang, W.; Gong, Z.; Huang, B.; Hong, X. Lidar with velocity: Correcting moving objects point cloud distortion from oscillating scanning lidars by fusion with camera. IEEE Robot. Autom. Lett. 2022, 7, 8241–8248. [Google Scholar] [CrossRef]
- Hong, S.; Ko, H.; Kim, J. VICP: Velocity updating iterative closest point algorithm. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 1893–1898. [Google Scholar]
- Meng, K.; Sun, H.; Qi, J.; Wang, H. Section-LIO: A High Accuracy LiDAR-Inertial Odometry Using Undistorted Sectional Point. IEEE Access 2023, 11, 144918–144927. [Google Scholar] [CrossRef]
- Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. Lio-sam: Tightly-coupled lidar inertial odometry via smoothing and mapping. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
- Zhang, S.; Xiao, L.; Nie, Y.; Dai, B.; Hu, C. Lidar odometry and mapping based on two-stage feature extraction. In Proceedings of the 2020 39th Chinese Control Conference (CCC), Shenyang, China, 27–29 July 2020; pp. 3966–3971. [Google Scholar]
- Lin, J.; Zhang, F. Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3126–3131. [Google Scholar]
- Carballo, A.; Ohya, A.; Yuta, S. Laser reflection intensity and multi-layered Laser Range Finders for people detection. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 379–384. [Google Scholar]
- Carballo, A.; Takeuchi, E.; Takeda, K. High density ground maps using low boundary height estimation for autonomous vehicles. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3811–3818. [Google Scholar]
- Zhao, H.; Jiang, L.; Fu, C.W.; Jia, J. Pointweb: Enhancing local neighborhood features for point cloud processing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 5565–5573. [Google Scholar]
- Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101. [Google Scholar]
- Park, J.; Thota, B.K.; Somashekar, K. Sensor-fused nighttime system for enhanced pedestrian detection in ADAS and autonomous vehicles. Sensors 2024, 24, 4755. [Google Scholar] [CrossRef]
- Javed, Z.; Kim, G.W. OmniVO: Toward Robust Omni Directional Visual Odometry With Multicamera Collaboration for Challenging Conditions. IEEE Access 2022, 10, 99861–99874. [Google Scholar] [CrossRef]
- DeTone, D.; Malisiewicz, T.; Rabinovich, A. Superpoint: Self-supervised interest point detection and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 224–236. [Google Scholar]
- Parihar, A.S.; Singh, K. A study on Retinex based method for image enhancement. In Proceedings of the 2018 2nd International Conference on Inventive Systems and Control (ICISC), Coimbatore, India, 19–20 January 2018; pp. 619–624. [Google Scholar]
- Jakubović, A.; Velagić, J. Image feature matching and object detection using brute-force matchers. In Proceedings of the 2018 International Symposium ELMAR, Zadar, Croatia, 16–19 September 2018; pp. 83–86. [Google Scholar]
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for mobilenetv3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
- Xu, J.; Lu, K.; Wang, H. Attention fusion network for multi-spectral semantic segmentation. Pattern Recognit. Lett. 2021, 146, 179–184. [Google Scholar] [CrossRef]
- Zhang, Q.; Zhao, S.; Luo, Y.; Zhang, D.; Huang, N.; Han, J. ABMDRNet: Adaptive-weighted bi-directional modality difference reduction network for RGB-T semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2633–2642. [Google Scholar]
- Lan, X.; Gu, X.; Gu, X. MMNet: Multi-modal multi-stage network for RGB-T image semantic segmentation. Appl. Intell. 2022, 52, 5817–5829. [Google Scholar] [CrossRef]
- Sun, Y.; Zuo, W.; Yun, P.; Wang, H.; Liu, M. FuseSeg: Semantic segmentation of urban scenes based on RGB and thermal data fusion. IEEE Trans. Autom. Sci. Eng. 2020, 18, 1000–1011. [Google Scholar] [CrossRef]
- Ha, Q.; Watanabe, K.; Karasawa, T.; Ushiku, Y.; Harada, T. MFNet: Towards real-time semantic segmentation for autonomous vehicles with multi-spectral scenes. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 5108–5115. [Google Scholar]
- Milioto, A.; Vizzo, I.; Behley, J.; Stachniss, C. Rangenet++: Fast and accurate lidar semantic segmentation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 4213–4220. [Google Scholar]
- Gu, K.; Zhang, Y.; Liu, X.; Li, H.; Ren, M. DWT-LSTM-based fault diagnosis of rolling bearings with multi-sensors. Electronics 2021, 10, 2076. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 886–893. [Google Scholar]
- Martínez-Otzeta, J.M.; Rodríguez-Moreno, I.; Mendialdua, I.; Sierra, B. Ransac for robotic applications: A survey. Sensors 2022, 23, 327. [Google Scholar] [CrossRef]
- Mahalanobis, P.C. On the generalized distance in statistics. Sankhyā Indian J. Stat. Ser. A (2008-) 2018, 80, S1–S7. [Google Scholar]
- Fu, K.; Liu, S.; Luo, X.; Wang, M. Robust point cloud registration framework based on deep graph matching. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8893–8902. [Google Scholar]
- Wang, Y.; Solomon, J.M. Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3523–3532. [Google Scholar]
- Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef]
- Jekal, S.; Kim, J.; Kim, D.H.; Noh, J.; Kim, M.J.; Kim, H.Y.; Kim, M.S.; Oh, W.C.; Yoon, C.M. Synthesis of LiDAR-Detectable True Black Core/Shell Nanomaterial and Its Practical Use in LiDAR Applications. Nanomaterials 2022, 12, 3689. [Google Scholar] [CrossRef]
- Swari, M.H.P.; Handika, I.P.S.; Satwika, I.K.S. Comparison of simple moving average, single and modified single exponential smoothing. In Proceedings of the 2021 IEEE 7th Information Technology International Seminar (ITIS), Surabaya, Indonesia, 6–8 October 2021; pp. 1–5. [Google Scholar]
- Yang, Z.; Dan, T.; Yang, Y. Multi-temporal remote sensing image registration using deep convolutional features. IEEE Access 2018, 6, 38544–38555. [Google Scholar] [CrossRef]
- Zhu, D.; Zhan, W.; Fu, J.; Jiang, Y.; Xu, X.; Guo, R.; Chen, Y. RI-MFM: A Novel Infrared and Visible Image Registration with Rotation Invariance and Multilevel Feature Matching. Electronics 2022, 11, 2866. [Google Scholar] [CrossRef]
- Li, J.; Hu, Q.; Ai, M. RIFT: Multi-modal image matching based on radiation-variation insensitive feature transform. IEEE Trans. Image Process. 2019, 29, 3296–3310. [Google Scholar] [CrossRef] [PubMed]
- Kassam, S. The mean-absolute-error criterion for quantization. In Proceedings of the ICASSP’77. IEEE International Conference on Acoustics, Speech, and Signal Processing, Hartford, CT, USA, 9–11 May 1977; Volume 2, pp. 632–635. [Google Scholar]
- Childs, D.R.; Coffey, D.M.; Travis, S.P. Error measures for normal random variables. IEEE Trans. Aerosp. Electron. Syst. 1978, AES-14, 64–68. [Google Scholar] [CrossRef]
- Farris, J.S. Estimating phylogenetic trees from distance matrices. Am. Nat. 1972, 106, 645–668. [Google Scholar] [CrossRef]
- Ou, N.; Cai, H.; Wang, J. Targetless Lidar-camera Calibration via Cross-modality Structure Consistency. IEEE Trans. Intell. Veh. 2023, 9, 2636–2648. [Google Scholar] [CrossRef]
- Li, X.; Duan, Y.; Wang, B.; Ren, H.; You, G.; Sheng, Y.; Ji, J.; Zhang, Y. EdgeCalib: Multi-Frame Weighted Edge Features for Automatic Targetless LiDAR-Camera Calibration. arXiv 2023, arXiv:2310.16629. [Google Scholar] [CrossRef]
| Name | Max Velocity | Mean Velocity |
|---|---|---|
| Ouster | 9.51 m/s | 5.56 m/s |
| Velodyne | 9.74 m/s | 5.59 m/s |
| Ground Truth | 9.47 m/s | 5.59 m/s |
| Mean Errors | Without Ego-Motion Compensation | With Ego-Motion Compensation |
|---|---|---|
| Roll Error (°) | 1.0714 | 1.0886 |
| Pitch Error (°) | 0.2162 | 0.3624 |
| Yaw Error (°) | 0.7999 | 1.0015 |
| Translation (X) Error (m) | 0.0032 | 0.0054 |
| Translation (Y) Error (m) | 0.0051 | 0.0081 |
| Translation (Z) Error (m) | 0.0048 | 0.0064 |
| Velocity | Mean Errors | Without Ego-Motion Compensation | With Ego-Motion Compensation |
|---|---|---|---|
| 2 m/s–3 m/s | Roll Error (°) | 1.2446 | 1.1996 |
| | Pitch Error (°) | 0.4448 | 0.3734 |
| | Yaw Error (°) | 1.1874 | 1.1131 |
| | Translation (X) Error (m) | 0.0184 | 0.0121 |
| | Translation (Y) Error (m) | 0.0136 | 0.0086 |
| | Translation (Z) Error (m) | 0.0087 | 0.0080 |
| 3 m/s–4 m/s | Roll Error (°) | 1.3190 | 1.2146 |
| | Pitch Error (°) | 0.5779 | 0.3440 |
| | Yaw Error (°) | 1.2619 | 1.1733 |
| | Translation (X) Error (m) | 0.0268 | 0.0152 |
| | Translation (Y) Error (m) | 0.0185 | 0.0093 |
| | Translation (Z) Error (m) | 0.0099 | 0.0082 |
| 4 m/s–5 m/s | Roll Error (°) | 1.3797 | 1.2636 |
| | Pitch Error (°) | 0.6200 | 0.3866 |
| | Yaw Error (°) | 1.3021 | 1.1652 |
| | Translation (X) Error (m) | 0.0239 | 0.0137 |
| | Translation (Y) Error (m) | 0.0210 | 0.0102 |
| | Translation (Z) Error (m) | 0.0101 | 0.0088 |
| 5 m/s–6 m/s | Roll Error (°) | 1.4497 | 1.3053 |
| | Pitch Error (°) | 0.7387 | 0.4228 |
| | Yaw Error (°) | 1.3870 | 1.2093 |
| | Translation (X) Error (m) | 0.0213 | 0.0148 |
| | Translation (Y) Error (m) | 0.0206 | 0.0105 |
| | Translation (Z) Error (m) | 0.0110 | 0.0087 |
| 6 m/s–7 m/s | Roll Error (°) | 1.5521 | 1.2817 |
| | Pitch Error (°) | 0.6482 | 0.4519 |
| | Yaw Error (°) | 1.4743 | 1.2412 |
| | Translation (X) Error (m) | 0.0242 | 0.0157 |
| | Translation (Y) Error (m) | 0.0199 | 0.0108 |
| | Translation (Z) Error (m) | 0.0116 | 0.0094 |
| 7 m/s–8 m/s | Roll Error (°) | 1.6495 | 1.3175 |
| | Pitch Error (°) | 0.7719 | 0.4841 |
| | Yaw Error (°) | 1.6465 | 1.2775 |
| | Translation (X) Error (m) | 0.0236 | 0.0162 |
| | Translation (Y) Error (m) | 0.0189 | 0.0106 |
| | Translation (Z) Error (m) | 0.0132 | 0.0091 |
| 8 m/s–9 m/s | Roll Error (°) | 1.7663 | 1.3422 |
| | Pitch Error (°) | 0.8377 | 0.5072 |
| | Yaw Error (°) | 1.7007 | 1.2710 |
| | Translation (X) Error (m) | 0.0249 | 0.0154 |
| | Translation (Y) Error (m) | 0.0226 | 0.0114 |
| | Translation (Z) Error (m) | 0.0147 | 0.0110 |
| 9 m/s–9.5 m/s | Roll Error (°) | 1.9336 | 1.3637 |
| | Pitch Error (°) | 0.9996 | 0.5446 |
| | Yaw Error (°) | 1.8124 | 1.3053 |
| | Translation (X) Error (m) | 0.0256 | 0.0184 |
| | Translation (Y) Error (m) | 0.0241 | 0.0138 |
| | Translation (Z) Error (m) | 0.0153 | 0.0113 |