Optimizing Camera Exposure Time for Automotive Applications
Abstract
1. Introduction
- A physics-based simulation tool to generate sample test images, modelling pixel noise, ambient lighting, exposure time, optical blur, and motion blur, which extends previous work by the authors; a minimal sketch of such a blur-and-noise pipeline follows this list. The simulation software is available at https://github.com/HaoLin97/Motion_Blur_Simulation.
- An analysis of the effects of motion blur and exposure time on a range of Image Quality (IQ) metrics, including widely used metrics such as MTF and SNR, as well as additional metrics such as Shannon Information Capacity (SIC) and Noise Equivalent Quanta (NEQ).
- An analysis of the efficacy of image quality metrics as predictors of motion blur, and of their correlation with object detection performance.
- A detailed investigation of Optical Character Recognition (OCR) and object detection on stop signs with different exposure times and different degrees of motion blur.
- A set of methodologies to optimize exposure time for a given scene illumination and relative motion.
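To make the simulation pipeline concrete, the following is a minimal sketch of a blur-plus-noise image formation model of the kind listed above. It is illustrative only, not the authors' implementation (see the linked repository for that); the function name `simulate_capture`, the digital-number units, and the noise parameters are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import convolve

def simulate_capture(scene, blur_px, exposure_scale, read_noise_dn=2.0, rng=None):
    """Apply horizontal motion blur, then shot and read noise, to a scene.

    scene: 2-D float array of linear pixel intensities (digital numbers).
    blur_px: motion blur extent in pixels during the exposure.
    exposure_scale: relative exposure (1.0 = nominal exposure time).
    """
    rng = rng or np.random.default_rng()
    # Motion blur: convolve with a normalised line PSF whose length equals
    # the distance the target moves across the sensor during the exposure.
    length = max(int(round(blur_px)), 1)
    psf = np.ones((1, length)) / length
    blurred = convolve(scene * exposure_scale, psf, mode="nearest")
    # Shot noise is Poisson-distributed in the collected signal;
    # read noise is modelled as additive zero-mean Gaussian.
    noisy = rng.poisson(blurred) + rng.normal(0.0, read_noise_dn, scene.shape)
    return np.clip(noisy, 0, None)
```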
2. Background and Related Works
2.1. Camera Configurations
2.1.1. Image Signal Processor (ISP)
2.1.2. Noise Sources
2.1.3. Exposure and Motion Blur
2.2. Image Quality Metrics
2.2.1. MTF and SNR
2.2.2. Shannon Information Capacity (SIC)
2.2.3. Noise Equivalent Quanta (NEQ)
2.3. Object Detection
3. Simulation
3.1. Overview
3.2. Light Model
3.3. Blur Model
3.4. Noise Model
3.5. Trigonometric Tool
3.6. Methodology
4. Results
4.1. Image Quality Metric Analysis
4.1.1. MTF
4.1.2. SNR
4.1.3. SIC
4.1.4. NEQ
4.2. Object Detection Performance—Stop Sign
4.2.1. Frequency Content
- For large targets, the spatial frequency range (i.e., the range of SFR values at the derivative peaks) is 0.04–0.1 cy/px. This corresponds approximately to Nyquist/4.
- For medium targets, the range of spatial frequencies is 0.08–0.2 cy/px. This corresponds to the MTF range between Nyquist/4 and Nyquist/2.
- For the small target, the range of spatial frequencies is 0.125–0.3 cy/px. This corresponds approximately to Nyquist/2 (see the conversion sketch after this list).
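Since the Nyquist frequency of a sampled image is 0.5 cycles/pixel, the ranges above map to fractions of Nyquist with a one-line conversion. A quick illustrative check (the size labels are taken from the list above):

```python
# Nyquist frequency of a sampled image: 0.5 cycles/pixel.
NYQUIST_CY_PER_PX = 0.5

def as_nyquist_fraction(freq_cy_per_px: float) -> float:
    """Express a spatial frequency as a fraction of Nyquist."""
    return freq_cy_per_px / NYQUIST_CY_PER_PX

# Peak-derivative SFR ranges reported above, in cycles/pixel.
ranges = {"large": (0.04, 0.10), "medium": (0.08, 0.20), "small": (0.125, 0.30)}
for size, (lo, hi) in ranges.items():
    print(f"{size}: {as_nyquist_fraction(lo):.2f}-{as_nyquist_fraction(hi):.2f} of Nyquist")
```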
4.2.2. Object Detection Performance
4.2.3. Optical Character Recognition (OCR)
5. Discussion
6. Conclusions and Future Works
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Image Quality Metric | Units
---|---
SNR | dB |
MTF50 | Cycles per pixel |
SIC | Bits per pixel |
NEQ | Photons |
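Of the metrics in this table, SNR has the most direct computation. A minimal sketch, assuming SNR is defined as the ratio of mean signal to noise standard deviation over a uniform patch (one common convention; the paper's exact definition may differ):

```python
import numpy as np

def snr_db(uniform_patch) -> float:
    """SNR of a nominally uniform image patch, in dB.

    Uses the mean-over-standard-deviation convention:
    SNR_dB = 20 * log10(mu / sigma).
    """
    patch = np.asarray(uniform_patch, dtype=float)
    return 20.0 * np.log10(patch.mean() / patch.std())
```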
Distance (m) | Target Speed (km/h) | Target Speed (m/s) | Target Width (m) | Target Height (m) | Exposure Time (ms) | Estimated Motion Blur (Pixels)
---|---|---|---|---|---|---
25 | 10 | 2.778 | 0.5 | 1.75 | 15 | 3.938
50 | 10 | 2.778 | 0.5 | 1.75 | 15 | 1.969
100 | 10 | 2.778 | 0.5 | 1.75 | 15 | 0.984
100 | 20 | 5.556 | 0.5 | 1.75 | 15 | 1.969
100 | 20 | 5.556 | 4.9 | 1.8 | 30 | 3.94
100 | 50 | 13.889 | 4.9 | 1.8 | 30 | 9.84
50 | 100 | 27.778 | 4.9 | 1.8 | 15 | 19.675
50 | 100 | 27.778 | 4.9 | 1.8 | 30 | 39.35
100 | 100 | 27.778 | 4.9 | 1.8 | 30 | 19.69
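The estimated motion blur values in this table are consistent with the small-angle pinhole relation blur ≈ v · t_exp · f_px / d, where f_px is the focal length expressed in pixels. The sketch below reproduces the table's values with f_px ≈ 2362 px, a value inferred by fitting the table rather than stated in the source:

```python
def estimated_motion_blur_px(speed_mps: float, exposure_ms: float,
                             distance_m: float, focal_px: float = 2362.0) -> float:
    """Estimated motion blur extent in pixels for a laterally moving target.

    Pinhole projection: blur = v * t_exp * f_px / d, where f_px is the
    focal length in pixels. focal_px = 2362 is inferred by fitting the
    table above; it is an assumption, not a value stated in the source.
    """
    return speed_mps * (exposure_ms / 1000.0) * focal_px / distance_m

# Example: 100 km/h (27.778 m/s) at 50 m with a 15 ms exposure
print(estimated_motion_blur_px(27.778, exposure_ms=15, distance_m=50))  # ~19.68 px
```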