Automatic Calibration between Multi-Lines LiDAR and Visible Light Camera Based on Edge Refinement and Virtual Mask Matching
Abstract
1. Introduction
2. Proposed Calibration System and Computational Flow Chart
3. Proposed Method
3.1. Automatic Locating Calibration Board
3.2. Approximation Edge Fitting
3.3. 3D Virtual Mask Matching
3.4. 2D Corner Points Detected in Image
3.5. Optimization Equation Modelling
4. Experiments and Discussions
4.1. Proposed Experiment System
4.2. Experiments
- Experiment 1: 3D feature points detected
- Experiment 2: Automatic locating results
- Experiment 3: Re-projection error analyses
- Experiment 4: Parameter consistency check
- Experiment 5: Qualitative analysis of calibration accuracy
4.3. Discussions
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Goian, A.; Ashour, R.; Ahmad, U.; Taha, T.; Almoosa, N.; Seneviratne, L. Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure. Remote Sens. 2019, 11, 2704.
- Hong, Z.; Zhong, H.; Pan, H.; Liu, J.; Zhou, R.; Zhang, Y.; Han, Y.; Wang, J.; Yang, S.; Zhong, C. Classification of Building Damage Using a Novel Convolutional Neural Network Based on Post-Disaster Aerial Images. Sensors 2022, 22, 5920.
- Raman, M.; Carlos, E.; Sankaran, S. Optimization and Evaluation of Sensor Angles for Precise Assessment of Architectural Traits in Peach Trees. Sensors 2022, 22, 4619.
- Zhu, W.; Sun, Z.; Peng, J.; Huang, Y.; Li, J.; Zhang, J.; Yang, B.; Liao, X. Estimating Maize Above-Ground Biomass Using 3D Point Clouds of Multi-Source Unmanned Aerial Vehicle Data at Multi-Spatial Scales. Remote Sens. 2019, 11, 2678.
- Wang, D.; Xing, S.; He, Y.; Yu, J.; Xu, Q.; Li, P. Evaluation of a New Lightweight UAV-Borne Topo-Bathymetric LiDAR for Shallow Water Bathymetry and Object Detection. Sensors 2022, 22, 1379.
- Chen, S.; Nian, Y.; He, Z.; Che, M. Measuring the Tree Height of Picea Crassifolia in Alpine Mountain Forests in Northwest China Based on UAV-LiDAR. Forests 2022, 13, 1163.
- Song, J.; Qian, J.; Li, Y.; Liu, Z.; Chen, Y.; Chen, J. Automatic Extraction of Power Lines from Aerial Images of Unmanned Aerial Vehicles. Sensors 2022, 22, 6431.
- Zhang, Z. Camera Calibration with One-Dimensional Objects. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 892–899.
- Wu, F.; Hu, Z.; Zhu, H. Camera Calibration with Moving One-Dimensional Objects. Pattern Recognit. 2005, 38, 755–765.
- Bai, Z.; Jiang, G.; Xu, A. LiDAR-Camera Calibration Using Line Correspondences. Sensors 2020, 20, 6319.
- Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic Camera and Range Sensor Calibration Using a Single Shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation (ICRA), Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
- Cai, H.; Pang, W.; Chen, X.; Wang, Y.; Liang, H. A Novel Calibration Board and Experiments for 3D LiDAR and Camera Calibration. Sensors 2020, 20, 1130.
- Zhou, L.; Li, Z.; Kaess, M. Automatic Extrinsic Calibration of a Camera and a 3D LiDAR Using Line and Plane Correspondences. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5562–5569.
- Guindel, C.; Beltrán, J.; Martín, D.; García, F. Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
- Gong, X.; Lin, Y.; Liu, J. 3D LIDAR-Camera Extrinsic Calibration Using an Arbitrary Trihedron. Sensors 2013, 13, 1902–1918.
- Pusztai, Z.; Hajder, L. Accurate Calibration of LiDAR-Camera Systems Using Ordinary Boxes. In Proceedings of the IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 394–402.
- Kümmerle, J.; Kühner, T. Unified Intrinsic and Extrinsic Camera and LiDAR Calibration under Uncertainties. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 6028–6034.
- Park, Y.; Yun, S.; Won, C.S.; Cho, K.; Um, K.; Sim, S. Calibration between Color Camera and 3D LIDAR Instruments with a Polygonal Planar Board. Sensors 2014, 14, 5333–5353.
- Xu, X.; Zhang, L.; Yang, J.; Liu, C.; Xiong, Y.; Luo, M.; Tan, Z.; Liu, B. LiDAR–Camera Calibration Method Based on Ranging Statistical Characteristics and Improved RANSAC Algorithm. Robot. Auton. Syst. 2021, 141, 103776–103789.
- An, P.; Ma, T.; Yu, K.; Fang, B.; Zhang, J.; Fu, W.; Ma, J. Geometric Calibration for LiDAR-Camera System Fusing 3D-2D and 3D-3D Point Correspondences. Opt. Express 2020, 28, 2122–2141.
- Ye, Q.; Shu, L.; Zhang, W. Extrinsic Calibration of a Monocular Camera and a Single Line Scanning LiDAR. In Proceedings of the 2019 IEEE International Conference on Mechatronics and Automation (ICMA), Tianjin, China, 4–7 August 2019; pp. 1047–1054.
- Liao, Q.; Chen, Z.; Liu, Y.; Wang, Z.; Liu, M. Extrinsic Calibration of Lidar and Camera with Polygon. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 200–205.
- Yao, Y.; Huang, X.; Lv, J. A Space Joint Calibration Method for LiDAR and Camera on Self-Driving Car and Its Experimental Verification. In Proceedings of the 6th International Symposium on Computer and Information Processing Technology (ISCIPT), Changsha, China, 11–13 June 2021; pp. 388–394.
- Huang, J.; Grizzle, J. Improvements to Target-Based 3D LiDAR to Camera Calibration. IEEE Access 2020, 8, 134101–134110.
- Huang, J.; Wang, S.; Ghaffari, M.; Grizzle, J. LiDARTag: A Real-Time Fiducial Tag System for Point Clouds. IEEE Robot. Autom. Lett. 2021, 6, 4875–4882.
- Wang, W.; Sakurada, K.; Kawaguchi, N. Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard. Remote Sens. 2017, 9, 851.
- Pandey, G.; McBride, J.R.; Savarese, S.; Eustice, R.M. Automatic Extrinsic Calibration of Vision and LiDAR by Maximizing Mutual Information. J. Field Robot. 2014, 32, 696–722.
- Taylor, Z.; Nieto, J. A Mutual Information Approach to Automatic Calibration of Camera and LiDAR in Natural Environments. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Victoria University of Wellington, Wellington, New Zealand, 3 December 2012; pp. 3–5.
- Taylor, Z.; Nieto, J. Motion-Based Calibration of Multimodal Sensor Extrinsics and Timing Offset Estimation. IEEE Trans. Robot. 2016, 32, 1215–1229.
- Taylor, Z.; Nieto, J. Motion-Based Calibration of Multimodal Sensor Arrays. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 4843–4850.
- Schneider, N.; Piewak, F.; Stiller, C.; Franke, U. RegNet: Multimodal Sensor Registration Using Deep Neural Networks. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1803–1810.
- Iyer, G.; Ram, R.; Murthy, J.; Krishna, K. CalibNet: Geometrically Supervised Extrinsic Calibration Using 3D Spatial Transformer Networks. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 1110–1117.
- Yuan, K.; Guo, Z.; Wang, Z. RGGNet: Tolerance Aware LiDAR-Camera Online Calibration with Geometric Deep Learning and Generative Model. IEEE Robot. Autom. Lett. 2020, 5, 6956–6963.
- Lv, X.; Wang, B.; Dou, Z.; Ye, D.; Wang, S. LCCNet: LiDAR and Camera Self-Calibration Using Cost Volume Network. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 2888–2895.
- Rotter, P.; Klemiato, M.; Skruch, P. Automatic Calibration of a LiDAR–Camera System Based on Instance Segmentation. Remote Sens. 2022, 14, 2531.
- Lim, H.; Myung, H. Patchwork: Concentric Zone-Based Region-Wise Ground Segmentation with Ground Likelihood Estimation Using a 3D LiDAR Sensor. IEEE Robot. Autom. Lett. 2021, 6, 6458–6465.
- Giyenko, A.; Cho, Y. Intelligent UAV in Smart Cities Using IoT. In Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS), Gyeongju, Korea, 16–19 October 2016; pp. 207–210.
- Unnikrishnan, R.; Hebert, M. Fast Extrinsic Calibration of a Laser Rangefinder to a Camera; Technical Report CMU-RI-TR-05-09; Robotics Institute: Pittsburgh, PA, USA, 2005.
- Lepetit, V.; Moreno-Noguer, F.; Fua, P. EPnP: An Accurate O(n) Solution to the PnP Problem. Int. J. Comput. Vis. 2009, 81, 155–166.
- Kneip, L.; Li, H.; Seo, Y. UPnP: An Optimal O(n) Solution to the Absolute Pose Problem with Universal Applicability. In Proceedings of the European Conference on Computer Vision (ECCV), Zurich, Switzerland, 6–12 September 2014; Springer: Berlin, Germany, 2014; pp. 127–142.
- Grammatikopoulos, L.; Papanagnou, A.; Venianakis, A.; Kalisperakis, I.; Stentoumis, C. An Effective Camera-to-LiDAR Spatiotemporal Calibration Based on a Simple Calibration Target. Sensors 2022, 22, 5576.
- Núñez, P.; Drews, P., Jr.; Rocha, R.; Dias, J. Data Fusion Calibration for a 3D Laser Range Finder and a Camera Using Inertial Data. In Proceedings of the European Conference on Mobile Robots (ECMR), Dubrovnik, Croatia, 23–25 September 2009; pp. 31–36.
- Kim, E.; Park, S. Extrinsic Calibration between Camera and LiDAR Sensors by Matching Multiple 3D Planes. Sensors 2020, 20, 52.
- Chai, Z.; Sun, Y.; Xiang, Z. A Novel Method for LiDAR Camera Calibration by Plane Fitting. In Proceedings of the 2018 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Auckland, New Zealand, 9–12 July 2018; pp. 286–291.
Symbols and Functions | Description
---|---
 | a 3D point in the LiDAR coordinate system
 | a 3D point in the camera coordinate system
 | a 2D point in the image coordinate system
 | a 3D/2D point set
 | a point cloud belonging to the same class of objects
 | a 3D space vector
scale | a scale factor about
ss | the step size of scale
th | the expanded threshold of edges
 | a scanning line of the LiDAR
θ/θs | θ is the horizontal angle resolution; θs is the plural form of θ
 | a rotation matrix around the X, Y, and Z axes
 | a translation matrix in the direction of the X, Y, and Z axes
 | the coordinates of the image principal point
 | the scale factors in the image u and v axes
 | the distortion coefficients of the image
 | a bounding box in the 2D plane
 | a matching score between and the preset point cloud
( ) | the number of points in
( ) | the area of enclosing
c | the number of cells for image segmentation
ε | the gray threshold of the image
 | the error function of and
 | the loss function of
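To make the role of these quantities concrete, the tuning parameters can be thought of as one configuration object passed through the locating, edge-refinement, and mask-matching steps. The sketch below is purely illustrative; the field names and types are placeholders, not identifiers from the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class CalibParams:
    """Illustrative grouping of the tuning quantities listed above."""
    scale: float   # scale factor used when expanding a candidate region
    ss: float      # step size of the scale search
    th: float      # expanded threshold of the board edges
    theta: float   # horizontal angular resolution of the LiDAR, in degrees
    c: int         # number of cells used for image segmentation
    eps: float     # gray threshold applied to the image
```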
Distance (m) | θ (°) | BB w (mm) | BB h (mm) | Scale = 0.1 w (mm) | Scale = 0.1 h (mm) | Scale = 0.2 w (mm) | Scale = 0.2 h (mm) | Scale = 0.4 w (mm) | Scale = 0.4 h (mm)
---|---|---|---|---|---|---|---|---|---
7.0 | 0.1 | 1012.34 | 996.22 | 984.11 | 981.82 | 997.90 | 991.14 | 998.51 | 997.95
7.0 | 0.2 | 994.32 | 1004.66 | 979.91 | 987.12 | 993.41 | 997.94 | 997.20 | 996.22
7.0 | 0.4 | 1003.72 | 993.731 | 974.41 | 988.20 | 994.90 | 997.01 | 997.81 | 999.17
10.0 | 0.1 | 1003.50 | 967.51 | 981.64 | 970.10 | 992.73 | 995.74 | 998.20 | 999.46
10.0 | 0.2 | 984.19 | 1004.14 | 970.04 | 983.17 | 990.73 | 997.76 | 997.21 | 998.03
10.0 | 0.4 | 992.16 | 1009.49 | 981.64 | 970.10 | 992.73 | 995.74 | 994.20 | 996.76
12.0 | 0.1 | 977.75 | 979.99 | 960.40 | 971.44 | 982.07 | 990.14 | 998.00 | 994.41
12.0 | 0.2 | 978.03 | 979.76 | 954.70 | 988.00 | 991.43 | 996.01 | 996.24 | 996.90
12.0 | 0.4 | 975.39 | 969.98 | 954.55 | 965.54 | 981.14 | 989.33 | 997.29 | 996.71
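The table lists the board width w and height h recovered from the point cloud at each distance, angular resolution, and scale setting. Assuming a nominal board side length of 1000 mm, which the recovered values suggest but the excerpt does not state explicitly, the dimension accuracy per setting can be summarized as in the sketch below (values copied from the Scale = 0.4 columns).

```python
import numpy as np

# (w, h) pairs in mm for scale = 0.4, taken from the last two columns of the table above.
scale_04 = np.array([
    [998.51, 997.95], [997.20, 996.22], [997.81, 999.17],   # 7.0 m
    [998.20, 999.46], [997.21, 998.03], [994.20, 996.76],   # 10.0 m
    [998.00, 994.41], [996.24, 996.90], [997.29, 996.71],   # 12.0 m
])

NOMINAL_MM = 1000.0  # assumed nominal board side length
mae = np.abs(scale_04 - NOMINAL_MM).mean()
print(f"mean absolute dimension error at scale = 0.4: {mae:.2f} mm")
```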
θ (°) | Object | N | ScoreA | ScoreB | ScoreC | Score | IsTrue
---|---|---|---|---|---|---|---
0.1 | Figure 10a | 34,884 | 0.031361 | 0.997999 | 0.996798 | 0.031198 | False
0.1 | Figure 10b | 4180 | 0.136842 | 0.992114 | 0.987158 | 0.134019 | False
0.1 | Figure 10c | 1116 | 0.870968 | 0.994576 | 0.982246 | 0.850865 | True
0.1 | Figure 10d | 312 | 0.282051 | 0.134227 | 0.250382 | 0.009479 | False
0.1 | Figure 10e | 283 | 0.745583 | 0.461738 | 0.302227 | 0.104046 | False
0.1 | Figure 10f | 263 | 0.365019 | 0.279073 | 0.349486 | 0.035601 | False
0.1 | Figure 10g | 243 | 0.786008 | 0.400130 | 0.255705 | 0.080421 | False
0.1 | Figure 10h | 202 | 0.876238 | 0.338324 | 0.215452 | 0.063871 | False
0.1 | Figure 10i | 201 | 0.671642 | 0.323524 | 0.214348 | 0.046576 | False
0.2 | No. 1 | 9514 | 0.04225 | 0.99512 | 0.98902 | 0.04158 | False
0.2 | No. 2 | 7917 | 0.04724 | 0.99461 | 0.99401 | 0.04670 | False
0.2 | No. 3 | 2004 | 0.12974 | 0.99707 | 0.98869 | 0.12789 | False
0.2 | No. 4 | 555 | 0.81441 | 0.98390 | 0.94908 | 0.76051 | True
0.2 | No. 5 | 156 | 0.30769 | 0.17733 | 0.18889 | 0.01030 | False
0.2 | No. 6 | 142 | 0.76056 | 0.46460 | 0.30043 | 0.10616 | False
0.2 | No. 7 | 132 | 0.36363 | 0.29602 | 0.31777 | 0.03420 | False
0.2 | No. 8 | 106 | 0.94340 | 0.39433 | 0.36821 | 0.13698 | False
0.2 | No. 9 | 101 | 0.66336 | 0.34315 | 0.21464 | 0.04886 | False
0.4 | No. 10 | 3887 | 0.02727 | 0.96699 | 0.96381 | 0.02541 | False
0.4 | No. 11 | 2276 | 0.06678 | 0.48287 | 0.80706 | 0.02602 | False
0.4 | No. 12 | 1689 | 0.13025 | 0.99809 | 0.99416 | 0.12924 | False
0.4 | No. 13 | 529 | 0.24952 | 0.99838 | 0.99115 | 0.24692 | False
0.4 | No. 14 | 447 | 0.30648 | 0.98646 | 0.97205 | 0.29389 | False
0.4 | No. 15 | 318 | 0.24842 | 0.52587 | 0.65521 | 0.08559 | False
0.4 | No. 16 | 279 | 0.87813 | 0.97943 | 0.95880 | 0.82464 | True
0.4 | No. 17 | 49 | 0.91837 | 0.72605 | 0.19707 | 0.13140 | False
0.4 | No. 18 | 48 | 0.33333 | 0.32933 | 0.30258 | 0.03321 | False
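In every group, the composite Score is consistent with the product of the three sub-scores (e.g., 0.031361 × 0.997999 × 0.996798 ≈ 0.031198 for Figure 10a), and the single candidate labelled IsTrue is also the one with the highest composite score. A minimal sketch of that selection logic follows; the minimum-score guard of 0.7 is an assumption inferred from the gap between true and false candidates in the table, not a value stated by the authors.

```python
def composite_score(score_a: float, score_b: float, score_c: float) -> float:
    """Composite matching score; the table values are consistent with a simple product."""
    return score_a * score_b * score_c

def pick_calibration_board(candidates, min_score=0.7):
    """Return the candidate with the highest composite score, provided it clears
    an assumed minimum score; otherwise report that no board was found."""
    best = max(candidates, key=lambda c: composite_score(*c["scores"]))
    return best if composite_score(*best["scores"]) >= min_score else None

# Example with three candidates from the 0.2-degree group of the table:
group_02 = [
    {"name": "No. 3", "scores": (0.12974, 0.99707, 0.98869)},
    {"name": "No. 4", "scores": (0.81441, 0.98390, 0.94908)},
    {"name": "No. 8", "scores": (0.94340, 0.39433, 0.36821)},
]
print(pick_calibration_board(group_02)["name"])  # -> No. 4, matching IsTrue in the table
```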
Re-Projection Result
Method | θ (°) | Error (pixel) | Improved (pixel) | Gain (%) | Intrinsic Parameters | Extrinsic Parameters | Stability | Automation Level
---|---|---|---|---|---|---|---|---
Proposed method | 0.1 | 0.9350 | - | - | √ | √ | High | High
Proposed method | 0.2 | 1.0942 | - | - | √ | √ | High | High
Proposed method | 0.4 | 1.1992 | - | - | √ | √ | High | High
[18] | 0.1 | 2.7422 | 2.2021 | +19.7 | √ | √ | Low | Low
[18] | 0.2 | 3.2414 | 2.4074 | +25.7 | √ | √ | Low | Low
[18] | 0.4 | 5.9693 | 3.0073 | +49.6 | √ | √ | Low | Low
[22] | 0.1 | 2.1751 | 1.0416 | +52.1 | × | √ | Middle | Middle
[22] | 0.2 | 2.9751 | 2.0465 | +31.2 | × | √ | Middle | Middle
[22] | 0.4 | 4.8978 | 3.3172 | +32.3 | × | √ | Middle | Middle
[19] | 0.1 | 2.2684 | 1.6406 | +27.7 | √ | √ | Middle | Low
[19] | 0.2 | 2.6452 | 1.9234 | +27.3 | √ | √ | Middle | Low
[19] | 0.4 | 4.4114 | 3.7310 | +15.4 | √ | √ | Middle | Low
[20] | 0.1 | 1.3317 | - | - | × | √ | Middle | Low
[20] | 0.2 | 1.3982 | - | - | × | √ | Middle | Low
[20] | 0.4 | 3.1492 | - | - | × | √ | Middle | Low
[24] | 0.1 | 0.7433 | - | - | × | √ | Middle | High
[24] | 0.2 | 2.4157 | - | - | × | √ | Middle | High
[24] | 0.4 | 3.5512 | - | - | × | √ | Middle | High
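The re-projection errors above are pixel distances between detected 2D corners and the corresponding LiDAR 3D corner points projected into the image with the estimated intrinsic and extrinsic parameters. The sketch below shows that metric under a standard pinhole model with two radial distortion coefficients; the exact distortion model and parameterization used by the authors are defined in the paper itself, so this is an illustrative assumption.

```python
import numpy as np

def reproject(points_lidar, R, t, fx, fy, ux, vy, k1, k2):
    """Project Nx3 LiDAR points into the image with a pinhole + radial distortion model."""
    pts_cam = (R @ points_lidar.T).T + t                  # LiDAR frame -> camera frame
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x**2 + y**2
    d = 1.0 + k1 * r2 + k2 * r2**2                        # radial distortion factor
    u = fx * x * d + ux
    v = fy * y * d + vy
    return np.stack([u, v], axis=1)

def reprojection_error(points_lidar, corners_2d, R, t, intrinsics):
    """Mean Euclidean distance (pixels) between detected and re-projected corners."""
    proj = reproject(points_lidar, R, t, **intrinsics)
    return np.linalg.norm(proj - corners_2d, axis=1).mean()
```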
θ (°) | Statistic | NP = 2 | NP = 3 | NP = 4 | NP = 5 | NP = 6 | NP = 7 | NP = 8 | NP = 9 | NP = 10
---|---|---|---|---|---|---|---|---|---|---
0.1 | Mean | 1.535 | 1.469 | 1.426 | 1.301 | 1.283 | 1.274 | 1.272 | 1.249 | 1.21
0.1 | Std | 0.696 | 0.703 | 0.549 | 0.658 | 0.565 | 0.529 | 0.547 | 0.492 | 0.476
0.2 | Mean | 2.109 | 1.945 | 1.637 | 1.470 | 1.457 | 1.408 | 1.324 | 1.303 | 1.256
0.2 | Std | 1.007 | 0.682 | 0.740 | 0.623 | 0.697 | 0.644 | 0.737 | 0.624 | 0.552
0.4 | Mean | 2.301 | 1.858 | 1.750 | 1.745 | 1.620 | 1.453 | 1.388 | 1.311 | 1.28
0.4 | Std | 1.372 | 0.720 | 0.712 | 0.630 | 0.720 | 0.802 | 0.790 | 0.755 | 0.586
θ (°) | Statistic | fx | fy | ux (pixel) | vy (pixel) | k1 | k2 | rx (°) | ry (°) | rz (°) | tx (cm) | ty (cm) | tz (cm)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0.1 | Mean | 2825.75 | 2817.57 | 969.026 | 597.901 | −0.220 | 0.187 | 89.767 | −89.738 | −0.321 | −6.0651 | 9.6231 | −1.4969
0.1 | Std | 1.947 | 2.382 | 6.471 | 2.873 | 0.001 | 0.010 | 0.042 | 0.032 | 0.103 | 0.5986 | 0.3348 | 0.1995
0.2 | Mean | 2826.23 | 2818.55 | 970.405 | 595.841 | −0.230 | 0.161 | 90.001 | −89.775 | −0.582 | −6.0065 | 10.0176 | −1.5216
0.2 | Std | 2.827 | 3.463 | 5.824 | 3.604 | 0.009 | 0.000 | 0.069 | 0.035 | 0.111 | 0.4551 | 0.3172 | 0.4093
0.4 | Mean | 2826.85 | 2818.18 | 966.150 | 602.771 | −0.236 | 0.201 | 90.106 | −89.690 | −0.624 | −5.9884 | 8.4497 | −1.7355
0.4 | Std | 3.146 | 3.955 | 9.0291 | 5.042 | 0.007 | 0.001 | 0.044 | 0.042 | 0.171 | 0.4098 | 0.4856 | 0.4395
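The extrinsic parameters in this table are reported as rotations about the X, Y, and Z axes in degrees and translations in centimetres. A sketch of how such values might be assembled into a 4 × 4 LiDAR-to-camera transform is given below; the Z-Y-X composition order and the degree/centimetre conversions are assumptions made for illustration, since the exact convention is fixed in the paper itself.

```python
import numpy as np

def extrinsic_matrix(rx_deg, ry_deg, rz_deg, tx_cm, ty_cm, tz_cm):
    """Build a 4x4 homogeneous transform from Euler angles (deg) and a translation (cm).
    The R = Rz @ Ry @ Rx composition order is an assumption, not taken from the paper."""
    rx, ry, rz = np.deg2rad([rx_deg, ry_deg, rz_deg])
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = np.array([tx_cm, ty_cm, tz_cm]) / 100.0   # cm -> m
    return T

# Mean extrinsics from the 0.1-degree row of the table above:
T_lidar_to_cam = extrinsic_matrix(89.767, -89.738, -0.321, -6.0651, 9.6231, -1.4969)
```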
Share and Cite
Chen, C.; Lan, J.; Liu, H.; Chen, S.; Wang, X. Automatic Calibration between Multi-Lines LiDAR and Visible Light Camera Based on Edge Refinement and Virtual Mask Matching. Remote Sens. 2022, 14, 6385. https://doi.org/10.3390/rs14246385