Vision-Based Algorithm for Precise Traffic Sign and Lane Line Matching in Multi-Lane Scenarios
Abstract
1. Introduction
- The sliding window method is employed to achieve multi-lane detection.
- A custom dataset is used to train the traffic sign recognition model, and SSR enhancement is applied to improve recognition accuracy under poor lighting conditions.
- A matching scheme links the detected multi-lane configuration with the corresponding multi-sign traffic signage.
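As a hedged illustration of the sliding-window idea listed above (a sketch, not the authors' implementation), the following function seeds one search window per histogram peak in a binary bird's-eye-view lane mask, tracks each line upward window by window, and fits a second-order polynomial x = f(y) per lane line. The function name, the peak-seeding heuristic, and all parameter values are assumptions made for illustration.

```python
import numpy as np

def sliding_window_lanes(binary, n_windows=9, margin=30, min_pix=20):
    """Locate lane lines in a binary bird's-eye-view mask (illustrative sketch).

    A column histogram of the lower half seeds one window per peak;
    each window then tracks its line upward, re-centering on the mean
    x-position of the pixels it captures. Returns one 2nd-order
    polynomial (x as a function of y) per detected lane line.
    """
    h, w = binary.shape
    hist = binary[h // 2:, :].sum(axis=0)
    # Seed one base column per peak, keeping peaks at least 2*margin apart.
    bases = []
    for col in np.argsort(hist)[::-1]:
        if hist[col] == 0:
            break
        if all(abs(col - b) > 2 * margin for b in bases):
            bases.append(int(col))
    ys, xs = np.nonzero(binary)
    fits = []
    for base in sorted(bases):
        cur, keep = base, []
        for win in range(n_windows):
            y_lo = h - (win + 1) * h // n_windows
            y_hi = h - win * h // n_windows
            inside = (ys >= y_lo) & (ys < y_hi) & \
                     (xs >= cur - margin) & (xs < cur + margin)
            idx = np.nonzero(inside)[0]
            keep.extend(idx.tolist())
            if idx.size > min_pix:  # enough pixels: re-center the next window
                cur = int(xs[idx].mean())
        if keep:
            fits.append(np.polyfit(ys[keep], xs[keep], 2))
    return fits
```

Because each lane line gets its own seeded window chain, the same routine extends from single-lane to multi-lane detection without modification.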
2. Multi-Lane Detection and Distance Localization Based on Computer Vision
2.1. Image Preprocessing
2.2. Color Space Transformation
2.3. Lane Detection Algorithm Design
2.4. Horizontal Positioning and Markings of Multi-Lane Lines
2.5. Horizontal Positioning of Stop Lines
- A 90° clockwise rotation of Figure 5b is performed;
- The histogram technique is utilized to determine the location of the stop line;
- Using the stop line search method based on one-directional sliding-window growth and the stop line representation method based on polynomial fitting, the lateral coordinate position of the rotated stop line in the pixel coordinate system and the vertical coordinate position of the actual stop line in pixels are determined, as follows:
3. Multi-Lane Traffic Sign Recognition Based on YOLO
3.1. Fabrication of a Multi-Lane Traffic Signage Dataset
- The first method augmented the sample count of under-represented categories by copying and pasting traffic sign images into different environments. The original sign was copied into other images, resized, and pasted at or near the position of the existing sign. During augmentation, overlap with other signs must be avoided, which was typically achieved by keeping the number of signs added to each image below five. Additionally, the spacing between the signs in the new environment images was calculated after augmentation, as shown in (23); if the condition was not satisfied, an alternative pasting position was selected.
- Random scaling was utilized. This process involves scaling the image up or down by a specified ratio, which alters the original image’s resolution and generates new images. This step enhanced the model’s generalization performance during training by increasing the diversity of the training data;
- Random rotation was conducted. In real road scenarios, four-lane traffic signs are typically fixed at the roadside and extend a certain distance from the edge of the road. As autonomous vehicles traverse different lanes, images captured by cameras exhibit varying degrees of tilt. To ensure the comprehensiveness of the dataset, the original images were randomly rotated by different angles to the right or left, thereby generating new images;
- Gaussian noise was introduced. Gaussian noise has a probability density function following the Gaussian (normal) distribution, and adding it to images helps neural network models learn more robust features. Noisy images were generated by sampling a random matrix from a Gaussian distribution and adding it to the RGB pixels of the original image.
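Two of the augmentation steps above can be sketched as follows (a hedged illustration: `paste_sign`, its center-distance rejection test, and `min_gap` are stand-ins for the paper's Eq. (23) criterion, which is not reproduced here, and `add_gaussian_noise` implements the noise step described in the last bullet).

```python
import numpy as np

def paste_sign(scene, sign, top, left, boxes, min_gap=10):
    """Paste a sign patch into a scene, rejecting positions too close to
    existing sign boxes (illustrative stand-in for the paper's spacing check).
    Returns True on success, False if the caller should retry elsewhere."""
    h, w = sign.shape[:2]
    for (t, l, b, r) in boxes:
        # Axis-aligned center-distance test with a min_gap safety margin.
        if abs((top + h / 2) - (t + b) / 2) < (h + b - t) / 2 + min_gap and \
           abs((left + w / 2) - (l + r) / 2) < (w + r - l) / 2 + min_gap:
            return False
    scene[top:top + h, left:left + w] = sign
    boxes.append((top, left, top + h, left + w))
    return True

def add_gaussian_noise(img, sigma=10.0, rng=None):
    """Add zero-mean Gaussian noise to an RGB image, clipped to [0, 255]."""
    rng = np.random.default_rng(rng)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Random scaling and rotation are omitted from the sketch; in practice they are one-liners with any image library's resize and rotate routines.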
3.2. YOLO Model Training
3.3. Model Training Results and Analysis
3.4. Insufficient Light Image Enhancement Based on SSR Algorithm under V Channel
4. Matching and Joint Experimental Verification of Multi-Lane Traffic Signage and Ground Multi-Lane
4.1. Lane and Traffic Sign Matching Algorithm
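One plausible reading of such a matching step, given that lane-plate signs carry one sub-symbol per lane, is to pair detected lanes and sign sub-symbols by left-to-right order. The sketch below is purely hypothetical and does not reproduce the paper's actual matching criterion.

```python
def match_signs_to_lanes(lane_centers, sign_boxes):
    """Pair lanes with sign sub-symbols by left-to-right order (hypothetical).

    lane_centers: lateral pixel position of each detected lane.
    sign_boxes:   (x, y) top-left corner of each sub-symbol on the sign plate.
    Returns a dict mapping lane index -> sign index.
    """
    lanes = sorted(range(len(lane_centers)), key=lambda i: lane_centers[i])
    signs = sorted(range(len(sign_boxes)), key=lambda j: sign_boxes[j][0])
    return {lanes[i]: signs[i] for i in range(min(len(lanes), len(signs)))}
```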
4.2. Joint Experimental Validation
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Swathi, M.; Suresh, K.V. Automatic Traffic Sign Detection and Recognition: A Review. In Proceedings of the 2017 International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies, Chennai, India, 16–18 February 2017; pp. 1–6. [Google Scholar]
- Liang, Z.; Zhao, J.; Liu, B.; Wang, Y.; Ding, Z. Velocity-Based Path Following Control for Autonomous Vehicles to Avoid Exceeding Road Friction Limits Using Sliding Mode Method. IEEE Trans. Intell. Transp. Syst. 2019, 23, 1947–1958. [Google Scholar]
- Ahmed, N.; Anwar, A.; Eckelmann, S. Lane Marking Detection Techniques for Autonomous Driving. In Proceedings of the 16th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, Fukuoka, Japan, 28–30 October 2021; pp. 217–226. [Google Scholar]
- Tu, C.; Van Wyk, B.J.; Hamam, Y. Vehicle Position Monitoring Using Hough Transform. IERI Procedia 2013, 4, 316–322. [Google Scholar] [CrossRef]
- Wei, W.; Dong, X.; Shen, Y. Research on a two value Generalized Hough transform method of identification. In Proceedings of the 2011 International Conference on Computer Science and Network Technology, Harbin, China, 24–26 December 2011; pp. 278–281. [Google Scholar]
- Li, H.Y.; Sima, C.; Dai, J.F. Delving into the Devils of Bird’s-Eye-View Perception: A Review, Evaluation and Recipe. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 2151–2170. [Google Scholar] [CrossRef] [PubMed]
- Bhupathi, K.C.; Ferdowsi, H. An Augmented Sliding Window Technique to Improve Detection of Curved Lanes in Autonomous Vehicles. In Proceedings of the 2020 International Conference on Electro Information Technology, Chicago, IL, USA, 31 July–1 August 2020; pp. 522–527. [Google Scholar]
- Zhang, Q.; Liu, J.; Jiang, X. Lane Detection Algorithm in Curves Based on Multi-Sensor Fusion. Sensors 2023, 23, 5751. [Google Scholar] [CrossRef] [PubMed]
- Liang, Z.; Wang, Z.; Zhao, J.; Ma, X. Fast Finite-Time Path-Following Control for Autonomous Vehicle via Complete Model-Free Approach. IEEE Trans. Ind. Inf. 2023, 19, 2838–2846. [Google Scholar] [CrossRef]
- Wang, Z.; Wu, Y.; Niu, Q. Multi-Sensor Fusion in Automated Driving: A Survey. IEEE Access 2020, 8, 2847–2868. [Google Scholar] [CrossRef]
- Rahman, Z.; Morris, B.T. LVLane: Deep Learning for Lane Detection and Classification in Challenging Conditions. In Proceedings of the 2023 IEEE 26th International Conference on Intelligent Transportation Systems, Bilbao, Spain, 24–28 September 2023; pp. 3901–3907. [Google Scholar]
- Yan, F.; Nie, M.; Cai, X.Y. ONCE-3DLanes: Building Monocular 3D Lane Detection. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 July 2022; pp. 17122–17131. [Google Scholar]
- Zheng, Z.; Zhang, X.; Mou, Y.; Gao, X.; Li, C.; Huang, G.; Pun, C.-M.; Yuan, X. PVALane: Prior-Guided 3D Lane Detection with View-Agnostic Feature Alignment. In Proceedings of the 38th AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 7597–7604. [Google Scholar]
- Rongqiang, Q.; Zhang, B.; Yue, Y. Traffic Sign Detection by Template Matching Based on Multi-Level Chain Code Histogram. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery, Zhangjiajie, China, 15–17 August 2015; pp. 2400–2404. [Google Scholar]
- Pandey, P.; Kulkarni, R. Traffic Sign Detection Using Template Matching Technique. In Proceedings of the 2018 Fourth International Conference on Computing Communication Control and Automation, Pune, India, 15–17 August 2018; pp. 1–6. [Google Scholar]
- Gan, Y.; Li, G.; Togo, R.; Maeda, K. Zero-Shot Traffic Sign Recognition Based on Midlevel Feature Matching. Sensors 2023, 23, 9607. [Google Scholar] [CrossRef] [PubMed]
- Cao, J.; Song, C.; Peng, S. Improved Traffic Sign Detection and Recognition Algorithm for Intelligent Vehicles. Sensors 2019, 19, 4021. [Google Scholar] [CrossRef] [PubMed]
- Xie, G.; Xu, Z.; Lin, Z.; Liao, X. GRFS-YOLOv8: An Efficient Traffic Sign Detection Algorithm Based on Multiscale Features and Enhanced Path Aggregation. Signal Image Video Process. 2024, 1–16. [Google Scholar] [CrossRef]
- Yalamanchili, S.; Kodepogu, K.; Manjeti, V.B. Optimizing Traffic Sign Detection and Recognition by Using Deep Learning. Int. J. Transp. Dev. Integr. 2024, 8, 131–139. [Google Scholar] [CrossRef]
- Korshunova, K.P. A Convolutional Fuzzy Neural Network for Image Classification. In Proceedings of the 2018 3rd Russian-Pacific Conference on Computer Technology and Applications, Vladivostok, Russia, 18–25 August 2018; pp. 1–4. [Google Scholar]
- Wang, Y.; Shen, D.; Teoh, E.K. Lane Detection Using Spline Model. Pattern Recognit. Lett. 2000, 21, 677–689. [Google Scholar] [CrossRef]
- Yoo, J.H.; Lee, S.-W.; Park, S.-K. A Robust Lane Detection Method Based on Vanishing Point Estimation Using the Relevance of Line Segments. IEEE Trans. Intell. Transport. Syst. 2017, 18, 3254–3266. [Google Scholar] [CrossRef]
- Chen, Z.; Liu, Q.; Lian, C. PointLaneNet: Efficient End-to-End CNNs for Accurate Real-Time Lane Detection. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium, Paris, France, 9–12 June 2019; pp. 2563–2568. [Google Scholar]
- Haris, M.; Glowacz, A. Lane Line Detection Based on Object Feature Distillation. Electronics 2021, 10, 1102. [Google Scholar] [CrossRef]
- Du, X.; Tan, K.K.; Ko Htet, K.K. Vision-Based Lane Line Detection for Autonomous Vehicle Navigation and Guidance. In Proceedings of the 2015 10th Asian Control Conference, Kota Kinabalu, Malaysia, 31 May–3 June 2015; pp. 1–5. [Google Scholar]
- Bow, S.-T. Pattern Recognition and Image Preprocessing, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2002; pp. 271–399. [Google Scholar]
- Loehlin, J.C. The Cholesky Approach: A Cautionary Note. Behav. Genet. 1996, 26, 65–69. [Google Scholar] [CrossRef]
- Vikram Mutneja, D. Methods of Image Edge Detection: A Review. J. Elec. Electron. Syst. 2015, 4, 1000150. [Google Scholar] [CrossRef]
- Bhupathi, K.C.; Ferdowsi, H. Sharp Curve Detection of Autonomous Vehicles using DBSCAN and Augmented Sliding Window Techniques. Int. J. ITS Res. 2022, 20, 651–671. [Google Scholar] [CrossRef]
- Abbas, S.A.; Zisserman, A. A Geometric Approach to Obtain a Bird’s Eye View from an Image. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop, Seoul, Republic of Korea, 27–28 October 2019; pp. 4095–4104. [Google Scholar]
- Marita, T.; Negru, M.; Danescu, R. Stop-Line Detection and Localization Method for Intersection Scenarios. In Proceedings of the 2011 IEEE 7th International Conference on Intelligent Computer Communication and Processing, Cluj-Napoca, Romania, 25–27 August 2011; pp. 293–298. [Google Scholar]
- Jiang, P.; Ergu, D.; Liu, F. A Review of Yolo Algorithm Developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
- Du, J. Understanding of Object Detection Based on CNN Family and YOLO. J. Phys. Conf. Ser. 2018, 1004, 012029. [Google Scholar] [CrossRef]
- Sural, S.; Gang, Q.; Pramanik, S. Segmentation and Histogram Generation Using the HSV Color Space for Image Retrieval. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; p. II. [Google Scholar]
- Mario, D.G.; Alberto, J.R.S.; Francisco, J.G.F. Cromaticity Improvement in Images with Poor Lighting Using the Multiscale-Retinex MSR Algorithm. In Proceedings of the 2016 9th International Kharkiv Symposium on Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves, Kharkiv, Ukraine, 20–24 June 2016; pp. 1–4. [Google Scholar]
- Sun, B.; Tao, W.; Chen, W. Luminance Based MSR for Color Image Enhancement. In Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, China, 27–30 May 2008; pp. 358–362. [Google Scholar]
- Wang, J.; He, N.; Lu, K. A New Single Image Dehazing Method with MSRCR Algorithm. In Proceedings of the 7th International Conference on Internet Multimedia Computing and Service, Zhangjiajie, China, 19 August 2015; pp. 1–4. [Google Scholar]
- Lee, C.; Moon, J.-H. Robust Lane Detection and Tracking for Real-Time Applications. IEEE Trans. Intell. Transp. Syst. 2018, 19, 4043–4048. [Google Scholar] [CrossRef]
- Liang, Z.; Shen, M.; Li, Z.; Yang, J. Model-Free Output Feedback Path Following Control for Autonomous Vehicle With Prescribed Performance Independent of Initial Conditions. IEEE-ASME Trans. Mechatron. 2024, 29, 1076–1087. [Google Scholar] [CrossRef]
Serial Number | Traffic Sign | Symbol Meaning | Category | Serial Number | Traffic Sign | Symbol Meaning | Category
---|---|---|---|---|---|---|---
1 | (image) | Left turn | I10 | 6 | (image) | Turnaround | I16
2 | (image) | Right turn | I12 | 7 | (image) | Left turn or turnaround | I17
3 | (image) | Straight forward | I13 | 8 | (image) | Straight forward or turnaround | I18
4 | (image) | Straight forward or right turn | I14 | 9 | (image) | Four-lane plate traffic signage | 
5 | (image) | Straight forward or left turn | I15 | | | | 
Computer Operating System | Deep Learning Framework | Development Languages and Environments | Central Processing Unit | Graphic Processing Unit |
---|---|---|---|---|
Ubuntu 18.04 | PyTorch 1.11.0 | Python 3.7 / PyCharm | Intel Core i5-9600K | NVIDIA GeForce GTX 960M
Parameter Type | Parameter Value | Parameter Type | Parameter Value |
---|---|---|---|
Algorithm learning epoch | 300 | Batch size | 16 |
Initial learning rate | 0.0001 | Momentum | 0.9 |
Minimum learning rate | 0.000001 | Decay | 0.0005 |
Frequency Bands | Range |
---|---|
Low frequency | [0, 85] |
Medium frequency | (85, 170] |
High frequency | (170, 255] |
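If these bands are used to classify gray-level values on the V channel, e.g. to decide when the SSR enhancement should be triggered (an assumption; the table itself only defines the ranges), a minimal classifier following the table's thresholds would be:

```python
def brightness_band(mean_v):
    """Map a mean V-channel value (0-255) to the band defined in the table:
    low [0, 85], medium (85, 170], high (170, 255]. Hypothetical helper."""
    if mean_v <= 85:
        return "low"
    if mean_v <= 170:
        return "medium"
    return "high"
```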
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Xia, K.; Hu, J.; Wang, Z.; Wang, Z.; Huang, Z.; Liang, Z. Vision-Based Algorithm for Precise Traffic Sign and Lane Line Matching in Multi-Lane Scenarios. Electronics 2024, 13, 2773. https://doi.org/10.3390/electronics13142773