Sensors on the Move: Onboard Camera-Based Real-Time Traffic Alerts Paving the Way for Cooperative Roads
Abstract
1. Introduction
2. Related Works
2.1. Traffic Sign Recognition
2.2. Fog Detection
3. Datasets
3.1. Existing Traffic Sign Recognition Datasets
3.2. Ceit Traffic Sign Recognition Dataset
3.3. Existing Fog Detection Datasets
3.4. Ceit Fog Detection Dataset
4. Materials and Methods
4.1. Traffic Sign Recognition
4.1.1. Traffic Sign Detection
4.1.2. Traffic Sign Classification
4.2. Fog Detection
Rule-Based Method
5. Results and Discussion
5.1. Traffic Sign Recognition Module
5.1.1. Detector Training
- the inference time was consistently about 200 ms per image;
- increasing the ensemble beyond 5000 learners (decision trees) did not improve the results;
- the training object size should be at most 30 × 30 pixels, as larger dimensions may increase false negatives;
- a confidence threshold is key to balancing false positives against false negatives.
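The last point can be illustrated with a minimal sketch. The detection records, score values, and the 0.5 threshold below are hypothetical, not the paper's data; they only show how raising the threshold suppresses low-confidence (likely false-positive) detections at the risk of dropping true signs:

```python
# Sketch: filtering detector outputs by a confidence threshold.
# Detections and scores are illustrative examples, not the paper's results.

def filter_detections(detections, threshold):
    """Keep only detections whose confidence score meets the threshold."""
    return [d for d in detections if d["score"] >= threshold]

detections = [
    {"box": (120, 40, 30, 30), "score": 0.91},  # high confidence: likely a sign
    {"box": (300, 80, 28, 28), "score": 0.35},  # low confidence: likely clutter
    {"box": (50, 200, 32, 32), "score": 0.72},
]

# A higher threshold removes false positives but risks false negatives.
kept = filter_detections(detections, threshold=0.5)
print(len(kept))  # 2
```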
5.1.2. Classifier Training
5.2. Fog Detection Module
6. Conclusions
7. Future Research Direction
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1. RGB Colour Space
Appendix A.2. HSV Colour Space
Appendix A.3. XYZ Colour Space
References
- European Commission. COMMISSION STAFF WORKING DOCUMENT—EU Road Safety Framework 2021–2030—Next Steps towards “Vision Zero”; Swedish Transport Administration: Brussels, Belgium, 2019; p. 2.
- European Transport Safety Council. PIN Report: Lockdowns Resulted in an Unprecedented 36% Drop in Road Deaths in the EU; 2020; pp. 1–3. Available online: https://etsc.eu/pin-report-lockdowns-resulted-in-an-unprecedented-36-drop-in-road-deaths-in-the-eu/ (accessed on 9 July 2020).
- A European Strategy on Cooperative Intelligent Transport Systems, a Milestone towards Cooperative, Connected and Automated Mobility. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2016%3A766%3AFIN (accessed on 15 January 2021).
- Asociación Española de la Carretera. Auditoría del Estado de las Carreteras 2019–2020. Available online: https://tv.aecarretera.com/estudio-sobre-necesidades-de-inversion-en-conservacion-de-carreteras-en-espana-2019-2020/ (accessed on 20 October 2020).
- Stallkamp, J.; Schlipsing, M.; Salmen, J.; Igel, C. Man vs. Computer: Benchmarking Machine Learning Algorithms for Traffic Sign Recognition. Neural Netw. 2012, 32, 323–332. [Google Scholar] [CrossRef] [PubMed]
- Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German traffic sign detection benchmark. Proc. Int. Jt. Conf. Neural Netw. 2013, 1–8. [Google Scholar] [CrossRef]
- Baró, X.; Escalera, S.; Vitrià, J.; Pujol, O.; Radeva, P. Adaboost Detection and Forest-ECOC Classification. IEEE Trans. Intell. Transp. Syst. 2009, 10, 113–126. [Google Scholar] [CrossRef]
- Zaklouta, F.; Stanciulescu, B. Real-time traffic sign recognition in three stages. Rob. Auton. Syst. 2014, 62, 16–24. [Google Scholar] [CrossRef]
- Liu, C.; Chang, F.; Chen, Z. Rapid multiclass traffic sign detection in high-resolution images. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2394–2403. [Google Scholar] [CrossRef]
- Yuan, Y.; Xiong, Z.; Wang, Q. An Incremental Framework for Video-Based Traffic Sign Detection, Tracking, and Recognition. IEEE Trans. Intell. Transp. Syst. 2017, 18, 1918–1929. [Google Scholar] [CrossRef]
- Mogelmose, A.; Liu, D.; Trivedi, M.M. Detection of U.S. Traffic Signs. IEEE Trans. Intell. Transp. Syst. 2015, 16, 3116–3125. [Google Scholar] [CrossRef] [Green Version]
- Zhu, Y.; Liao, M.; Yang, M.; Liu, W.; Member, S. Cascaded Segmentation-Detection Networks for Text-Based Traffic Sign Detection. IEEE Trans. Intell. Transp. Syst. 2018, 19, 209–219. [Google Scholar] [CrossRef]
- Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-Sign Detection and Classification in the Wild. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2110–2118. [Google Scholar] [CrossRef]
- Wali, S.B.; Abdullah, M.A.; Hannan, M.A.; Hussain, A.; Samad, S.A.; Ker, P.J.; Mansor, M.B. Vision-based traffic sign detection and recognition systems: Current trends and challenges. Sensors 2019, 19, 2093. [Google Scholar] [CrossRef] [Green Version]
- Qian, R.; Zhang, B.; Yue, Y.; Wang, Z.; Coenen, F. Robust Chinese traffic sign detection and recognition with deep convolutional neural network. In Proceedings of the 2015 11th International Conference on Natural Computation (ICNC), Zhangjiajie, China, 15–17 August 2015; pp. 791–796. [Google Scholar] [CrossRef]
- Liu, C.; Li, S.; Chang, F.; Wang, Y. Machine Vision Based Traffic Sign Detection Methods: Review, Analyses and Perspectives. IEEE Access 2019, 7, 86578–86596. [Google Scholar] [CrossRef]
- Temel, D.; Chen, M.; Alregib, G. Traffic Sign Detection Under Challenging Conditions: A Deeper Look into Performance Variations and Spectral Characteristics. IEEE Trans. Intell. Transp. Syst. 2020, 21, 3663–3673. [Google Scholar] [CrossRef] [Green Version]
- Temel, D.; Alshawi, T.; Chen, M.-H.; AlRegib, G. Challenging Environments for Traffic Sign Detection: Reliability Assessment under Inclement Conditions. arXiv 2019, arXiv:1902.06857. [Google Scholar]
- Hautière, N.; Tarel, J.P.; Halmaoui, H.; Brémond, R.; Aubert, D. Enhanced fog detection and free-space segmentation for car navigation. Mach. Vis. Appl. 2014, 25, 667–679. [Google Scholar] [CrossRef]
- Pavlic, M.; Rigoll, G.; Ilic, S. Classification of images in fog and fog-free scenes for use in vehicles. IEEE Intell. Veh. Symp. Proc. 2013, 481–486. [Google Scholar] [CrossRef] [Green Version]
- Liu, C.; Lu, X.; Ji, S.; Geng, W. A fog level detection method based on image HSV color histogram. In Proceedings of the 2014 IEEE International Conference on Progress in Informatics and Computing, Shanghai, China, 16–18 May 2014; pp. 373–377. [Google Scholar] [CrossRef]
- Mathias, M.; Timofte, R.; Benenson, R.; Van Gool, L. Traffic sign recognition—How far are we from the solution? In Proceedings of the 2013 International Joint Conference on Neural Networks (IJCNN), Dallas, TX, USA, 4–9 August 2013; pp. 1–8. [Google Scholar] [CrossRef]
- Saadna, Y.; Behloul, A. An overview of traffic sign detection and classification methods. Int. J. Multimed. Inf. Retr. 2017, 6, 193–210. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar] [CrossRef] [Green Version]
- Neuhold, G.; Ollmann, T.; Bulo, S.R.; Kontschieder, P. The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 5000–5009. [Google Scholar] [CrossRef]
- Huang, X.; Wang, P.; Cheng, X.; Zhou, D.; Geng, Q.; Yang, R. The ApolloScape Open Dataset for Autonomous Driving and Its Application. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2702–2719. [Google Scholar] [CrossRef] [Green Version]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2633–2642. [Google Scholar] [CrossRef]
- Tarel, J.P.; Hautière, N.; Cord, A.; Gruyer, D.; Halmaoui, H. Improved visibility of road scene images under heterogeneous fog. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV’10), San Diego, CA, USA, 21–24 June 2010; pp. 478–485. [Google Scholar] [CrossRef]
- Tarel, J.P.; Hautière, N.; Caraffa, L.; Halmaoui, H.; Gruyer, D.; et al. Vision Enhancement in Homogeneous and Heterogeneous Fog. IEEE Intell. Transp. Syst. Mag. 2012, 4, 6–20. [Google Scholar] [CrossRef] [Green Version]
- Sakaridis, C.; Dai, D.; Van Gool, L. Semantic Foggy Scene Understanding with Synthetic Data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef] [Green Version]
- Dai, D.; Sakaridis, C.; Hecker, S.; Van Gool, L. Curriculum Model Adaptation with Synthetic and Real Data for Semantic Foggy Scene Understanding. Int. J. Comput. Vis. 2020, 128, 1182–1204. [Google Scholar] [CrossRef] [Green Version]
- Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11679–11689. [Google Scholar] [CrossRef]
- Wang, Y.-Q. An Analysis of the Viola-Jones Face Detection Algorithm. Image Process. Line 2014, 4, 128–148. [Google Scholar] [CrossRef]
- Dollar, P.; Appel, R.; Belongie, S.; Perona, P. Fast Feature Pyramids for Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Jahnen, M.; Budweiser, P. German Traffic Sign Recognition Benchmark (GTSRB) AlexNet Pycaffe Model. GitHub Repository. Available online: https://github.com/magnusja/GTSRB-caffe-model (accessed on 15 January 2021).
- Characteristics of Adverse Weather Conditions v1.0. Available online: https://www.dense247.eu/fileadmin/user_upload/PDF/DENSE_D2.1_Characteristics_of_Adverse_Weather_Conditions.pdf (accessed on 27 December 2020).
- Paschotta, R. RP Photonics Encyclopedia - Color Spaces. Available online: https://www.rp-photonics.com/encyclopedia_cite.html?article=colorspaces (accessed on 20 October 2020).
Dataset | Purpose | Number of Images | Number of Bounding Boxes | Classes | Categories | Resolution | Videos | Challenging Conditions | Country | Publication Year |
---|---|---|---|---|---|---|---|---|---|---|
GTSDB | Detection | 900 | 1213 | 43 | 4 | 1360 × 1024 | no | no | Germany | 2011 |
GTSRB | Classification | 51,840 | 1728 | 43 | 4 | 1360 × 1024 | no | no | Germany | 2013 |
Extended-GTSDB | Detection/Classification | 900 | 2655 | 164 | 8 | 1360 × 1024 | no | no | Germany | 2020 |
BTSD | Detection | 25,634 (9006 annotated) | 13,444 | -- | -- | 1628 × 1236 | yes (4) | no | Belgium | 2011 |
BTSC | Classification | 7125 | -- | 62 | -- | 1628 × 1236 | yes | no | Belgium | 2011 |
TT100K | Detection/Classification | 100,000 | 30,000 | 45 | -- | 2048 × 2048 | no | no | China | 2016 |
STS | Detection/Classification | 20,000 | 3488 | 7 | -- | 1280 × 960 | no | no | Sweden | 2011 |
RSTD | Detection/Classification | 179,138 | 15,630 | 156 | 6 | 1280 × 720 and 1920 × 1080 | no | no | Russia | 2016 |
LISA | Detection/Classification | 6610 | 7855 | 49 | -- | 640 × 480–1024 × 522 | yes (17) | no | USA | 2012 |
Stereopolis | Detection/Classification | 847 | 251 | -- | 10 | 960 × 1080 | no | no | France | 2010 |
MASTIF | Detection/Classification | 4875 | 13,036 | -- | 5 | 720 × 576 | yes | no | Croatia | 2011 |
CTSD (Chinese) | Detection/Classification | 1110 | 1574 | 48 | -- | 1024 × 768 and 1270 × 800 | no | no | China | 2016 |
CCTSD | Detection/Classification | 10,000 | 13,361 | -- | 3 | several | no | no | China | 2017 |
ETSDB | Detection/Classification | 82,476 | -- | 164 | 4 | several | no | yes | Belgium, Croatia, France, Germany, Netherlands, Sweden | 2018 |
DITS (1) | Detection | 1887 | -- | -- | 3 | 1280 × 720 | no | yes | Italy | 2016 |
DITS (2) | Classification | 9254 | -- | 58 | -- | 1280 × 720 | no | yes | Italy | 2016 |
KTSD | Detection | 498 | 832 | -- | 3 | several | no | yes | Korea | 2017 |
CTSD (Complex) | Detection/Classification | 2205 | 3755 | 153 | 3 | several | no | yes | China | 2018 |
CURE-TSD | Detection/Classification | 896,700 | 648,186 | 14 | -- | 1628 × 1236 | yes (2989) | yes | Belgium | 2020 |
Ceit-TSR | Detection/Classification | 264 | 418 | -- | 6 | several | no | yes | Spain | 2020 |
Dataset | Number of Images | Classes | Resolution | Country | Synthetic | Publication Year |
---|---|---|---|---|---|---|
FRIDA1 | 90 | -- | 640 × 480 | -- | yes | 2010 |
FRIDA2 | 330 | -- | 640 × 480 | -- | yes | 2012 |
Foggy Cityscapes (coarse) | 20,000 | 8 | 2040 × 1016 | Germany and Switzerland | synthetic fog | 2018 |
Foggy Cityscapes (refined) | 550 | 8 | 2040 × 1016 | Germany and Switzerland | synthetic fog | 2018 |
Foggy Driving | 101 | 19 | 960 × 1280 | Zurich | no | 2018 |
Foggy Zurich | 3808 (40 annotated) | 19 | 1920 × 1080 | Zurich | no | 2019 |
Seeing Through Fog | 1,429,060 | -- | 1920 × 1024 | Germany, Sweden, Denmark, and Finland | no | 2020 |
Ceit-Foggy | 4480 | -- | several | Spain | no | 2020 |
Weather Condition | Rule | Fog Density | Rule |
---|---|---|---|
Sunny | Z > 0.35 && ZYdiff > 0.1 && bluelevel > 0.3 | -- | -- |
Cloudy | Z < 0.35 | -- | -- |
Foggy | Z > 0.35 && ZYdiff < 0.1 | Light | greylevel 20–30 |
Foggy | Z > 0.35 && ZYdiff < 0.1 | Moderate | greylevel 30–60 |
Foggy | Z > 0.35 && ZYdiff < 0.1 | Dense | greylevel 60–100 |
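The rule table above translates directly into a decision function. This is a minimal sketch of that logic only: the feature names (`z`, `zy_diff`, `blue_level`, `grey_level`) mirror the table's symbols, and how those statistics are computed from the image's colour spaces is assumed to follow the paper's rule-based method, not shown here:

```python
# Sketch of the rule-based weather classifier from the table above.
# Threshold values come from the table; feature extraction is out of scope.

def classify_weather(z, zy_diff, blue_level, grey_level):
    """Classify weather from global image statistics using the table's rules."""
    if z < 0.35:
        return "cloudy"
    # From here on, Z > 0.35 holds.
    if zy_diff > 0.1 and blue_level > 0.3:
        return "sunny"
    if zy_diff < 0.1:
        # Fog density graded by the grey level band.
        if 20 <= grey_level < 30:
            return "foggy (light)"
        if 30 <= grey_level < 60:
            return "foggy (moderate)"
        if 60 <= grey_level <= 100:
            return "foggy (dense)"
    return "undetermined"

print(classify_weather(z=0.5, zy_diff=0.05, blue_level=0.1, grey_level=45))
# foggy (moderate)
```

Cases not covered by the table (e.g., ZYdiff < 0.1 with a grey level below 20) fall through to "undetermined" here; how the paper resolves such borderline frames is not specified in the table.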
Test DB | Precision | Recall | F-Score | Inference Time (ms) |
---|---|---|---|---|
GTSDB (300 images) | 1 | 0.76 | 0.864 | 102 |
Ceit-TSR (264 images) | 0.96 | 0.68 | 0.796 | 118 |
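As a sanity check on the F-score column above: the F-score is the harmonic mean of precision and recall, and both table rows are consistent with it:

```python
# F1 score: harmonic mean of precision and recall.
def f_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

print(round(f_score(1.0, 0.76), 3))   # 0.864 (first row above)
print(round(f_score(0.96, 0.68), 3))  # 0.796 (second row above)
```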
Pre-Processing Technique | Test Images | Accuracy | Inference Time (ms) |
---|---|---|---|
image normalization + 10 random crops | 50 ROI-cropped images | 0.98 | 40 |
image normalization + 10 random crops | 50 original images | 0.76 | 42 |
image normalization + 227 × 227 re-scaling | 50 ROI-cropped images | 0.96 | 18 |
image normalization + 227 × 227 re-scaling | 50 original images | 0.52 | 22 |
Test DB | Accuracy | Inference Time (ms) |
---|---|---|
GTSDB (300 images) | 0.92 | 43 |
Ceit-TSR (264 images) | 0.75 | 43 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Iparraguirre, O.; Amundarain, A.; Brazalez, A.; Borro, D. Sensors on the Move: Onboard Camera-Based Real-Time Traffic Alerts Paving the Way for Cooperative Roads. Sensors 2021, 21, 1254. https://doi.org/10.3390/s21041254