Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS)
Abstract
1. Introduction
1.1. Previous Studies
1.2. Purpose of Study
- A novel straightness constraint was formulated on the pre-processed candidates, with a cross-entropy loss function used to define the error-minimization problem.
- The robustness of the proposed system was extensively validated in real-time testing on various vision-based ADAS tasks under numerous scenarios.
- A parameter-sharing approach and model-specific empirical residual rectification factors were investigated experimentally, and the limitations of the proposed system were analyzed for future improvement.
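The straightness constraint can be illustrated with a minimal sketch: after undistortion, edge-point chains that belong to real-world straight lines should be collinear, so the residual spread of each chain about its own best-fit line serves as a loss. The function below is an illustrative stand-in, not the paper's implementation (the exact pre-processing and cross-entropy weighting are not reproduced, and `straightness_loss` is a hypothetical name):

```python
import numpy as np

def straightness_loss(chains):
    """Straightness-constraint loss: sum of squared perpendicular
    distances of each candidate point chain to its own best-fit line
    (via SVD); zero only when every chain is perfectly collinear."""
    total = 0.0
    for pts in chains:                         # each pts is an (N, 2) array
        centered = pts - pts.mean(axis=0)
        # the smallest singular value captures the out-of-line spread
        total += np.linalg.svd(centered, compute_uv=False)[-1] ** 2
    return total

x = np.linspace(0.0, 10.0, 20)
straight = np.stack([x, 2.0 * x + 1.0], axis=1)   # collinear chain: loss ~ 0
curved = np.stack([x, 0.05 * x ** 2], axis=1)     # bowed (distorted) chain: loss > 0
```

Minimizing such a quantity over the distortion parameters, e.g. with a Levenberg-Marquardt optimizer as in the LM-optim rows of Section 3.2.1, drives the estimate toward values that straighten the candidate chains.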
2. Lens-Distortion Modeling
Polynomial Lens-Distortion Model
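As a reminder of how a polynomial radial model acts on image points, the sketch below applies a generic two-coefficient form, r_d = r_u(1 + k1·r_u² + k2·r_u⁴), with directions from the distortion center preserved. The function name and coefficient values are illustrative assumptions, not the paper's calibrated parameters:

```python
import numpy as np

def apply_radial_distortion(pts, center, k1, k2):
    """Two-coefficient polynomial radial model:
    r_d = r_u * (1 + k1*r_u^2 + k2*r_u^4); directions are preserved."""
    d = pts - center
    r2 = np.sum(d ** 2, axis=1, keepdims=True)   # squared radius r_u^2
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return center + d * scale

center = np.array([320.0, 240.0])                # assumed principal point
pts = np.array([[420.0, 340.0], [320.0, 240.0]])
distorted = apply_radial_distortion(pts, center, k1=-1e-6, k2=0.0)
```

With a negative k1 the off-center point is pulled toward the center (barrel distortion), while the distortion center itself is a fixed point of the mapping.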
3. Proposed Self-Calibration Design
3.1. Pre-Processing and Straightness Constraint Loss
3.2. Error Minimization and Distortion Parameter Estimation
3.2.1. Ablation Study Validating the Attributes in Distortion Parameter Estimation
3.2.2. Investigation of Parameter Sharing Option in Higher-Resolution Frame Processing
Algorithm 1: Straightness constraint loss and distortion parameter estimation.
3.3. Undistortion Module
3.4. Model-Specific Empirical Residual Rectification Factor
Algorithm 2: Undistortion algorithm using distortion parameters, with and without hyperparameter.
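Algorithm 2 itself is not reproduced here, but the usual backward-mapping undistortion loop can be sketched as follows: every output pixel is pushed through the forward radial model, optionally scaled by a hyperparameter standing in for the paper's empirical residual rectification factor, to locate its source pixel. The first-order model and all names below are assumptions for illustration:

```python
import numpy as np

def undistort(img, k1, hyper=1.0):
    """Backward-mapping undistortion sketch: for every output pixel,
    evaluate the forward radial model (scaled by a residual-rectification
    hyperparameter) to find the source pixel, then sample it with
    nearest-neighbour lookup."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0       # distortion center assumed at frame center
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    r2 = (xs - cx) ** 2 + (ys - cy) ** 2
    scale = 1.0 + hyper * k1 * r2               # first-order forward model
    sx = np.clip(np.rint(cx + (xs - cx) * scale), 0, w - 1).astype(int)
    sy = np.clip(np.rint(cy + (ys - cy) * scale), 0, h - 1).astype(int)
    return img[sy, sx]

img = (np.arange(20 * 30) % 251).reshape(20, 30)
same = undistort(img, k1=0.0)                   # zero distortion: identity mapping
```

Setting `hyper=1.0` corresponds to the plain estimated parameters; a model-specific value plays the role of the rectification factor of Section 3.4.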
Possible Ways to Overcome Severe Distortions from Low-Quality Lens
- Employing a deep neural network to model the lens-specific distortions caused by low-quality camera lenses. Such an approach would be feasible for practical applications such as ADAS, video surveillance, etc.
- Applying stretch-free super-resolution to the full-view undistorted image could be considered, to reduce stretching along the image edges and maintain pixel consistency.
4. Experimental Data
4.1. Data Collection Scenarios
4.2. Synthetic Data Generation
5. Performance Analysis and Evaluation Metrics
5.1. Quantitative Analysis: Image Quality
5.2. Quantitative Analysis: Salient Point Estimation
5.3. Qualitative Analysis
6. Results and Discussions
6.1. Quantitative Analysis on the Synthetic Dataset
6.1.1. PSNR and SSIM Metrics on the Synthetic Dataset
6.1.2. Principal Point Estimation on the Synthetic Dataset
6.2. Practical Testing on ADAS Workbench Alongside Vision-Based Tasks
7. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Appendix A. Investigation of Distortion Distributions Corresponding to the Camera Models
References
- Ziebinski, A.; Cupek, R.; Erdogan, H.; Waechter, S. A survey of ADAS technologies for the future perspective of sensor fusion. In International Conference on Computational Collective Intelligence; Springer: Berlin, Germany, 2016; pp. 135–146. [Google Scholar]
- Dabral, S.; Kamath, S.; Appia, V.; Mody, M.; Zhang, B.; Batur, U. Trends in camera based automotive driver assistance systems (adas). In Proceedings of the 2014 IEEE 57th International Midwest Symposium on Circuits and Systems (MWSCAS), College Station, TX, USA, 3–6 August 2014; pp. 1110–1115. [Google Scholar]
- Tadjine, H.; Hess, M.; Karsten, S. Object detection and classification using a rear in-vehicle fisheye camera. In Proceedings of the FISITA 2012 World Automotive Congress; Springer: Berlin, Germany, 2013; pp. 519–528. [Google Scholar]
- Hughes, C.; Glavin, M.; Jones, E.; Denny, P. Review of Geometric Distortion Compensation in Fish-Eye Cameras. In Proceedings of the IET Irish Signals and Systems Conference (ISSC 2008), Galway, Ireland, 18–19 June 2008; pp. 162–167. [Google Scholar]
- Remondino, F.; Fraser, C. Digital camera calibration methods: considerations and comparisons. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2006, 36, 266–272. [Google Scholar]
- Urban, S.; Leitloff, J.; Hinz, S. Improved wide-angle, fisheye and omnidirectional camera calibration. ISPRS J. Photogramm. Remote Sens. 2015, 108, 72–79. [Google Scholar] [CrossRef]
- Luhmann, T.; Fraser, C.; Maas, H.G. Sensor modelling and camera calibration for close-range photogrammetry. ISPRS J. Photogramm. Remote Sens. 2016, 115, 37–46. [Google Scholar] [CrossRef]
- Hughes, C.; Glavin, M.; Jones, E.; Denny, P. Wide-angle camera technology for automotive applications: A review. IET Intell. Transp. Syst. 2009, 3, 19–31. [Google Scholar] [CrossRef]
- Tai, Y.C.; Hsieh, Y.Y.; Chuang, J.H. A fully automatic approach for fisheye camera calibration. In Proceedings of the 2018 IEEE Visual Communications and Image Processing (VCIP), Taichung, Taiwan, 9–12 December 2018; pp. 1–4. [Google Scholar]
- Podbreznik, P.; Potocnik, B. Influence of temperature variations on calibrated cameras. Int. J. Comput. Inf. Sci. Eng. 2008, 2, 261–267. [Google Scholar]
- Handel, H. Analyzing the influence of camera temperature on the image acquisition process. In Three-Dimensional Image Capture and Applications 2008; International Society for Optics and Photonics: San Diego, CA, USA, 2008; Volume 6805, p. 68050X. [Google Scholar]
- Devernay, F.; Faugeras, O.D. Automatic calibration and removal of distortion from scenes of structured environments. In Investigative and Trial Image Processing; International Society for Optics and Photonics: San Diego, CA, USA, 1995; Volume 2567, pp. 62–73. [Google Scholar]
- Pollefeys, M.; Koch, R.; Van Gool, L. Self-calibration and metric reconstruction inspite of varying and unknown intrinsic camera parameters. Int. J. Comput. Vis. 1999, 32, 7–25. [Google Scholar] [CrossRef]
- Schwalbe, E. Geometric Modelling and Calibration of Fisheye Lens Camera Systems; Institute of Photogrammetry and Remote Sensing-Dresden University of Technology: Dresden, Germany, 2005. [Google Scholar]
- Ho, T.H.; Davis, C.C.; Milner, S.D. Using geometric constraints for fisheye camera calibration. In Proceedings of the IEEE OMNIVIS Workshop, Beijing, China, 17–20 October 2005. [Google Scholar]
- Thirthala, S.; Pollefeys, M. The radial trifocal tensor: A tool for calibrating the radial distortion of wide-angle cameras. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 321–328. [Google Scholar]
- Hartley, R.I. Self-calibration of stationary cameras. Int. J. Comput. Vis. 1997, 22, 5–23. [Google Scholar] [CrossRef]
- Triggs, B. Autocalibration and the absolute quadric. In Proceedings of the IEEE Computer Society International Conference on Computer Vision & Pattern Recognition (CVPR’97), San Juan, Puerto Rico, 17–19 June 1997; pp. 609–614. [Google Scholar]
- Strand, R.; Hayman, E. Correcting Radial Distortion by Circle Fitting. In Proceedings of the British Machine Vision Conference, Oxford, UK, 5–8 September 2005. [Google Scholar]
- Brown, D.C. Close-range camera calibration. Photogramm. Eng. 1971, 37, 855–866. [Google Scholar]
- Brauer-Burchardt, C.; Voss, K. A new algorithm to correct fish-eye-and strong wide-angle-lens-distortion from single images. In Proceedings of the IEEE 2001 International Conference on Image Processing (Cat. No. 01CH37205), Thessaloniki, Greece, 7–10 October 2001; Volume 1, pp. 225–228. [Google Scholar]
- Fitzgibbon, A.W. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the CVPR 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. I. [Google Scholar]
- Prescott, B.; McLean, G. Line-based correction of radial lens distortion. Graph. Models Image Process. 1997, 59, 39–47. [Google Scholar] [CrossRef]
- Barreto, J.; Roquette, J.; Sturm, P.; Fonseca, F. Automatic camera calibration applied to medical endoscopy. In BMVC 2009—20th British Machine Vision Conference; The British Machine Vision Association (BMVA): London, UK, 2009; pp. 1–10. [Google Scholar]
- Zhang, Z.; Matsushita, Y.; Ma, Y. Camera calibration with lens distortion from low-rank textures. In Proceedings of the IEEE CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2321–2328. [Google Scholar]
- Cipolla, R.; Drummond, T.; Robertson, D.P. Camera Calibration from Vanishing Points in Image of Architectural Scenes; BMVC: Nottingham, UK, 1999; Volume 99, pp. 382–391. [Google Scholar]
- Bukhari, F.; Dailey, M.N. Automatic radial distortion estimation from a single image. J. Math. Imaging Vis. 2013, 45, 31–45. [Google Scholar] [CrossRef]
- Santana-Cedrés, D.; Gomez, L.; Alemán-Flores, M.; Salgado, A.; Esclarín, J.; Mazorra, L.; Alvarez, L. An iterative optimization algorithm for lens distortion correction using two-parameter models. Image Process. Line 2016, 6, 326–364. [Google Scholar] [CrossRef]
- Alvarez, L.; Gomez, L.; Sendra, J.R. Algebraic lens distortion model estimation. Image Process. Line 2010, 1, 1–10. [Google Scholar] [CrossRef]
- Alemán-Flores, M.; Alvarez, L.; Gomez, L.; Santana-Cedrés, D. Automatic lens distortion correction using one-parameter division models. Image Process. Line 2014, 4, 327–343. [Google Scholar] [CrossRef]
- Shah, S.; Aggarwal, J. A simple calibration procedure for fish-eye (high distortion) lens camera. In Proceedings of the 1994 IEEE international Conference on Robotics and Automation, San Diego, CA, USA, 8–13 May 1994; pp. 3422–3427. [Google Scholar]
- Nayar, S.K. Catadioptric omnidirectional camera. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Juan, PR, USA, 17–19 June 1997; pp. 482–488. [Google Scholar]
- Sáez, Á.; Bergasa, L.M.; López-Guillén, E.; Romera, E.; Tradacete, M.; Gómez-Huélamo, C.; del Egido, J. Real-time semantic segmentation for fisheye urban driving images based on ERFNet. Sensors 2019, 19, 503. [Google Scholar] [CrossRef] [PubMed]
- Fremont, V.; Bui, M.; Boukerroui, D.; Letort, P. Vision-based people detection system for heavy machine applications. Sensors 2016, 16, 128. [Google Scholar] [CrossRef] [PubMed]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef] [Green Version]
- Huang, X.; Cheng, X.; Geng, Q.; Cao, B.; Zhou, D.; Wang, P.; Lin, Y.; Yang, R. The apolloscape dataset for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 954–960. [Google Scholar]
- Endres, F.; Hess, J.; Engelhard, N.; Sturm, J.; Cremers, D.; Burgard, W. An Evaluation of the RGB-D SLAM System; ICRA: St. Paul, MN, USA, 2012; Volume 3, pp. 1691–1696. [Google Scholar]
- Sturm, P. Pinhole camera model. Computer Vision: A Reference Guide; Springer: Boston, MA, USA, 2014; pp. 610–613. [Google Scholar]
- Zhang, Z. Projection Transformation. Computer Vision: A Reference Guide; Springer: Boston, MA, USA, 2014; pp. 639–640. [Google Scholar]
- Miyamoto, K. Fish eye lens. JOSA 1964, 54, 1060–1061. [Google Scholar] [CrossRef]
- Akinlar, C.; Topal, C. EDPF: A real-time parameter-free edge segment detector with a false detection control. Int. J. Pattern Recognit. Artif. Intell. 2012, 26, 1255002. [Google Scholar] [CrossRef]
- Geary, J.M. Introduction to Lens Design: With Practical ZEMAX Examples; Willmann-Bell: Richmond, VA, USA, 2002. [Google Scholar]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the IEEE 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
Approach (Optimization Technique + Loss) | Average SSIM | Average Error in Principal Point Estimation (in Pixels) | Parameter Estimation Processing Time (in Sec.)
---|---|---|---
NR-optim + M-S loss | 0.27 | 32.6 | 4.51
NR-optim + L-S loss | 0.20 | 40.7 | 4.47
NR-optim + C-E loss | 0.28 | 30.9 | 4.42
LM-optim + M-S loss | 0.22 | 37.2 | 3.92
LM-optim + L-S loss | 0.20 | 42.4 | 4.36
LM-optim + C-E loss | 0.33 | 18.7 | 3.07
Image Size (in Pixels) | Field of View (FOV) (in Degrees) | Parameter Estimation Time, Normal Mode (in Sec.) | Parameter Estimation Time, Parameter Sharing (in Sec.)
---|---|---|---
2880 × 1620 | 140 | 32.95 | 22.04
5120 × 2880 | 200 | 64.92 | 54.47
7680 × 4832 | 190 | 58.8 | 40.11
Dataset Type | Camera Lens Model | Image Size (w × h) | Distortion Ranges | Data Collection Scenarios
---|---|---|---|---
Real data | Wide-angle, fisheye, super-wide-angle | 1280 × 720 (HD), 640 × 480 (VGA), 320 × 240 (QVGA) | Distortions induced by lens models with FOV ranging from 120° to 200° | Daytime, dawn, illumination changes, pedestrians, heavy traffic
Synthetic data | Normal lens with FOV below 100° | 1242 × 375 (default size of the KITTI data samples) | Synthesized from D = 0.5 to 0.9 in intervals of 0.05 | On-road scenes with various objects such as cars, pedestrians, vans, cyclists, etc.
Synthetic Distortion | OpenCV Traditional Method (PSNR) | OpenCV Traditional Method (SSIM) | Bukhari et al. [27] (PSNR) | Bukhari et al. [27] (SSIM) | Santana-Cedrés et al. [28] (PSNR) | Santana-Cedrés et al. [28] (SSIM) | Proposed Method (PSNR) | Proposed Method (SSIM)
---|---|---|---|---|---|---|---|---
D = 0.5 | 9.495 | 0.1347 | 16.259 | 0.217 | 20.826 | 0.403 | 21.16 | 0.425
D = 0.55 | 8.733 | 0.217 | 13.203 | 0.285 | 17.453 | 0.332 | 19.753 | 0.484
D = 0.6 | 8.591 | 0.229 | 14.103 | 0.201 | 21.738 | 0.498 | 20.062 | 0.471
D = 0.65 | 7.941 | 0.155 | 12.536 | 0.301 | 17.386 | 0.513 | 19.648 | 0.543
D = 0.7 | 8.023 | 0.321 | 14.414 | 0.421 | 16.453 | 0.568 | 21.05 | 0.612
D = 0.75 | 10.101 | 0.236 | 14.693 | 0.281 | 17.432 | 0.367 | 18.354 | 0.414
D = 0.8 | 9.21 | 0.206 | 12.28 | 0.321 | 16.15 | 0.405 | 16.343 | 0.411
D = 0.85 | 8.712 | 0.315 | 14.77 | 0.405 | 17.96 | 0.467 | 18.152 | 0.472
D = 0.9 | 7.957 | 0.249 | 10.295 | 0.299 | 12.83 | 0.335 | 16.694 | 0.421
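PSNR values such as those above can be recomputed from ground-truth/output image pairs with a few lines of NumPy. This is the standard definition of the metric (cf. Hore and Ziou in the references), not code from the paper; SSIM is more involved and is typically taken from a library such as scikit-image's `structural_similarity`:

```python
import numpy as np

def psnr(ref, out, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a ground-truth frame
    and an undistorted output; higher is better."""
    mse = np.mean((ref.astype(np.float64) - out.astype(np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images yield infinite PSNR, and a maximally wrong 8-bit image yields 0 dB, so the 8-22 dB range in the table reflects substantial but partially corrected pixel displacement.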
Synthetic Image Principal Point (Original) | Alvarez et al. [29] Estimate | Alvarez et al. [29] Error (in Pixels) | Santana-Cedrés et al. [28] Estimate | Santana-Cedrés et al. [28] Error (in Pixels) | Proposed Method Estimate | Proposed Method Error (in Pixels)
---|---|---|---|---|---|---
(320,240) | (330,235) | 0 | (320,240) | 0 | (320,240) | 0
(330,250) | (332,257) | 21.3 | (320,240) | 14.1 | (323,242) | 10.6
(340,260) | (342,269) | 32.2 | (320,240) | 28.3 | (326,245) | 20.5
(350,270) | (354,275) | 28.4 | (320,240) | 42.4 | (338,257) | 17.6
(360,280) | (369,288) | 17.0 | (346,268) | 18.4 | (357,264) | 16.2
(370,290) | (378,296) | 47.4 | (358,273) | 20.8 | (361,272) | 20.1
(380,300) | (392,311) | 34.4 | (361,273) | 33.0 | (364,274) | 31.5
(390,310) | (406,321) | 49.5 | (369,276) | 39.9 | (376,284) | 29.8
Average pixel error | — | 28.8 | — | 24.5 | — | 18.7
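The error columns are the Euclidean pixel distance between the original and estimated principal points; the helper name below is ours, but the arithmetic reproduces the table, e.g. the proposed method's 10.6 px for the (330,250) row:

```python
import math

def principal_point_error(true_pp, est_pp):
    """Euclidean pixel distance between the ground-truth and the
    estimated principal point (the error metric of the table)."""
    return math.hypot(true_pp[0] - est_pp[0], true_pp[1] - est_pp[1])

# ground truth (330, 250) vs. the proposed method's estimate (323, 242)
err = principal_point_error((330, 250), (323, 242))   # ≈ 10.6 pixels
```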
Miss-detection percentage along the edges due to distortion (number of observations = 100):

Camera Model | Right Side | Left Side | Top | Bottom
---|---|---|---|---
Fisheye | 41% | 34% | 37% | 29%
Wide-angle | 24% | 19% | 22% | 26%
Super-wide-angle | 36% | 25% | 27% | 32%
Average reprojection error over 100 observations:

Camera Model (Position) | Vision-Based Scenario (Testing) | Average Reprojection Error on Distorted Data (in Pixels) | Average Reprojection Error on Undistorted Data (in Pixels)
---|---|---|---
Fisheye (rear) | Auto-parking | 49.55 | 34.26
Wide-angle (front) | Multiple-object tracking | 48.17 | 30.64
Super-wide-angle (front) | Localization and mapping | 45.54 | 9.28
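The reprojection-error metric reported above is the mean Euclidean distance between observed feature locations and their model-projected positions; a minimal sketch (the function name is ours, and the paper's feature-matching pipeline is not reproduced):

```python
import numpy as np

def mean_reprojection_error(observed, projected):
    """Average Euclidean distance (in pixels) between observed feature
    points and their reprojected model predictions; both inputs are
    (N, 2) arrays of image coordinates in matching order."""
    observed = np.asarray(observed, dtype=float)
    projected = np.asarray(projected, dtype=float)
    return float(np.mean(np.linalg.norm(observed - projected, axis=1)))
```

Evaluating this on distorted versus undistorted frames, as in the table, quantifies how much the proposed undistortion improves the downstream vision tasks.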
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Kakani, V.; Kim, H.; Kumbham, M.; Park, D.; Jin, C.-B.; Nguyen, V.H. Feasible Self-Calibration of Larger Field-of-View (FOV) Camera Sensors for the Advanced Driver-Assistance System (ADAS). Sensors 2019, 19, 3369. https://doi.org/10.3390/s19153369