Enhanced Vision-Based Taillight Signal Recognition for Analyzing Forward Vehicle Behavior
Abstract
1. Introduction
2. Related Work
2.1. Integrated Techniques for Edge and Shape Detection in Autonomous Driving
2.2. Deep Learning Models for Vehicle Indicator Analysis
2.3. Vehicle Taillight Recognition
3. Proposed Method
3.1. Proposed System Architecture
3.2. Extraction of Individual Taillight Areas
Algorithm 1. Individual Taillight Image Extraction
Input: Vehicle image data, I
Output: Left-taillight image, Ileft; right-taillight image, Iright
1: Load the vehicle image data, I
2: Obtain the dimensions of I: h, w, c
3: Convert I to grayscale: Igray
4: Apply Canny edge detection to Igray, obtaining the edge map: Iedges
5: Perform the probabilistic Hough transform on Iedges to obtain line segments: lines
6: Sort lines in descending order of length
7: Initialize filtered lines with the longest line; keep only lines that overlap it by 50% or more, as follows:
8: for each current line in the remaining sorted lines do
9:     overlap count ← 0
10:    for each existing line in filtered lines do
11:        Extract the endpoints of the existing line: (x1, y1), (x2, y2)
12:        Extract the endpoints of the current line: (x3, y3), (x4, y4)
13:        overlap count ← overlap count + max(0, min(x2, x4) − max(x1, x3))  ▷ Overlapping x-coordinate interval
14:    end for
15:    Calculate the x-extent of the current line: total length ← |x4 − x3|
16:    Calculate the overlap percentage: overlap percentage ← overlap count / total length
17:    if overlap percentage ≥ 0.5 then
18:        Add the current line to filtered lines
19:    end if
20: end for
21: Extract the endpoints of the longest line: (x1, y1), (x2, y2)
22: Extract the endpoints of the second-longest line: (x3, y3), (x4, y4)
23: Calculate the centroid of the two lines: centroid ← ((x1 + x2 + x3 + x4)/4, (y1 + y2 + y3 + y4)/4)
24: Divide the image vertically at the centroid to obtain left and right regions
25: Crop the left region to obtain the left-taillight image: Ileft
26: Crop the right region to obtain the right-taillight image: Iright
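The overlap-filtering step of Algorithm 1 (steps 8–20) can be sketched in pure Python. Here `lines` is assumed to be a list of `(x1, y1, x2, y2)` segments with x-ordered endpoints, already sorted by descending length — e.g. the output of a probabilistic Hough transform such as OpenCV's `cv2.HoughLinesP`. This is an illustrative sketch, not the authors' implementation.

```python
def filter_overlapping_lines(lines, threshold=0.5):
    """Keep line segments whose x-interval overlaps the already-kept
    lines by at least `threshold` of their own x-extent.

    `lines`: list of (x1, y1, x2, y2) tuples, endpoints x-ordered,
    sorted by descending segment length. The longest line seeds the
    filtered set, mirroring Algorithm 1.
    """
    if not lines:
        return []
    filtered = [lines[0]]  # the longest line is always kept
    for x3, y3, x4, y4 in lines[1:]:
        overlap = 0
        for x1, y1, x2, y2 in filtered:
            # overlapping x-coordinate interval with each kept line
            overlap += max(0, min(x2, x4) - max(x1, x3))
        total = abs(x4 - x3)  # x-extent of the current line
        if total > 0 and overlap / total >= threshold:
            filtered.append((x3, y3, x4, y4))
    return filtered
```

A segment far from the longest line (e.g. a stray edge elsewhere in the frame) has zero x-overlap and is discarded, while near-collinear taillight-edge segments survive to define the centroid split.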
3.3. Taillight Analysis and State Classification
4. Experiment
4.1. Results
4.2. Performance Evaluation
4.3. Weather Condition-Specific Performance Evaluation
4.4. Evaluation of Processing Time for the Proposed Method within Vehicle Edge Computing Environment
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Label Name | Meaning of Label
---|---
BOO | brake: Only the brake light is on
BLO | brake-left: Both the brake light and the left-turn signal are on
BOR | brake-right: Both the brake light and the right-turn signal are on
BLR | brake-emergency: The brake light and both turn signals are on
OOO | none: No lights are on
OLO | left: Only the left-turn signal is on
OOR | right: Only the right-turn signal is on
OLR | emergency: Both turn signals are on
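The three-character labels encode the brake, left-turn, and right-turn states positionally (`O` meaning off), so a label can be decoded mechanically. The helper below is illustrative only and not part of the paper's pipeline:

```python
def decode_label(label):
    """Decode a three-character taillight state label such as 'BLO'.

    Position 1: 'B' = brake light on,       'O' = off.
    Position 2: 'L' = left-turn signal on,  'O' = off.
    Position 3: 'R' = right-turn signal on, 'O' = off.
    """
    assert len(label) == 3, "labels are exactly three characters"
    return {
        "brake": label[0] == "B",
        "left": label[1] == "L",
        "right": label[2] == "R",
    }
```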
Method | Accuracy (%)
---|---
W | 68.18
W + H | 63.80
C | 84.88
Ours (C + H) | 85.47
Precision and recall are reported as proportions.

Class | Precision (CNN-LSTM) | Precision (C3D) | Precision (Proposed) | Recall (CNN-LSTM) | Recall (C3D) | Recall (Proposed)
---|---|---|---|---|---|---
BOO | 0.4463 | 0.4074 | 0.9000 | 0.5258 | 0.7452 | 0.9204
BLR | 0.6392 | 0.9486 | 0.8421 | 0.8328 | 1.0000 | 1.0000
BLO | 0.7283 | 0.6763 | 0.8933 | 0.6390 | 0.5131 | 0.8815
BOR | 0.6329 | 0.6956 | 0.9000 | 0.3888 | 0.4848 | 0.8571
OOO | 0.3753 | 0.5906 | 0.9552 | 0.4786 | 0.5135 | 0.8421
OLR | 0.7161 | 0.5362 | 0.9638 | 0.8702 | 0.7708 | 1.0000
OLO | 0.8318 | 0.4338 | 0.9726 | 0.6126 | 0.2510 | 0.8987
OOR | 0.8293 | 0.4923 | 0.9518 | 0.6183 | 0.3004 | 0.9518
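Per-class precision and recall values like those tabulated above are derived from counts of true positives, false positives, and false negatives for each class. A minimal sketch (not the authors' evaluation code):

```python
def precision_recall(y_true, y_pred, cls):
    """Per-class precision and recall from parallel label lists.

    Precision = TP / (TP + FP); Recall = TP / (TP + FN),
    with TP/FP/FN counted for the single class `cls`.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```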
Class | Precision (Cloudy) | Precision (Rainy) | Recall (Cloudy) | Recall (Rainy)
---|---|---|---|---
BOO | 0.8461 | 0.8709 | 0.8048 | 0.9000
OOO | 0.8928 | 1.0000 | 0.8928 | 0.9411
OLO | - | 1.0000 | - | 1.0000
OOR | - | 1.0000 | - | 0.5000
 | Simulated Vehicle Edge Computing Environment | High-Performance Server Environment (for Comparison)
---|---|---
CPU | AMD Ryzen 5 4500U | AMD EPYC 7742 64-Core Processor
RAM | 16 GB | 4 TB
GPU | Not used | Not used
OS | Android 7.1 Nougat | Ubuntu 20.04.6
 | Simulated Vehicle Edge Computing Environment (Seconds) | High-Performance Server Environment (Seconds)
---|---|---
Processing time range | 0.729–1.657 | 0.085–1.532
Average processing time | 0.9027 | 0.3061
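Per-frame processing times such as those above can be collected by wrapping the recognition call in a wall-clock timer. A sketch, where `process` stands in for the (hypothetical) per-frame recognition function:

```python
import time

def time_per_frame(process, frames):
    """Return per-frame wall-clock processing times in seconds."""
    times = []
    for frame in frames:
        start = time.perf_counter()
        process(frame)  # the recognition step being profiled
        times.append(time.perf_counter() - start)
    return times
```

The range and mean of the returned list correspond to the "processing time range" and "average processing time" rows of the table.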
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Seo, A.; Woo, S.; Son, Y. Enhanced Vision-Based Taillight Signal Recognition for Analyzing Forward Vehicle Behavior. Sensors 2024, 24, 5162. https://doi.org/10.3390/s24165162