Robust Traffic Light and Arrow Detection Using Digital Map with Spatial Prior Information for Automated Driving
Abstract
1. Introduction
- Ranging sensors: light detection and ranging (LiDAR), millimeter-wave radar (MWR)
- Imaging sensor: camera
- Positioning sensor: global navigation satellite system and inertial navigation system (GNSS/INS).
- Self-localization: Estimating the position of the vehicle on the HD map with decimeter-level accuracy
- Surrounding perception: Recognizing static/dynamic objects including traffic participants and road objects (e.g., lane markings, traffic signals)
- Motion planning: Designing optimal collision-free trajectories that follow the traffic rules
- Motion control: Determining adequate control signals such as steering, acceleration, and braking.
2. Related Works
- Determine the search region: A region-of-interest (ROI) is extracted from the captured image by using the predefined map (a minimal projection sketch follows this list).
- Extract candidate objects: Circular lighting areas or rectangular objects are extracted from the search region as candidate TLs.
- Classify the state of the candidates: Simple color filtering or machine learning algorithms identify lighting colors and arrow light directions.
- The performance evaluation is performed on a challenging dataset including small TL and arrow-TL objects captured in both daytime and nighttime.
- The effectiveness of prior information is evaluated with respect to recognition performance for distant TLs.
- Verification results are presented by applying the proposed method to automated driving.
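For the ROI step, the map-provided TL position is typically projected into the image through the camera model. The sketch below illustrates this with a simple pinhole projection; the intrinsics, TL size, and margin are hypothetical placeholders, not values from the paper.

```python
# Illustrative sketch (not the authors' code): computing an image ROI for a
# mapped traffic light by projecting its position, expressed in the camera
# frame, through a pinhole camera model. All parameter values are
# hypothetical placeholders.
def project_tl_to_roi(p_cam, fx, fy, cx, cy, tl_size_m=1.2, margin=2.0):
    """Project a TL position p_cam = (x, y, z) [m] in the camera frame
    (x right, y down, z forward) to an ROI (u0, v0, u1, v1) in pixels."""
    x, y, z = p_cam
    if z <= 0.0:                               # behind the camera: no ROI
        return None
    u = fx * x / z + cx                        # pinhole projection of the TL center
    v = fy * y / z + cy
    half = 0.5 * margin * tl_size_m * fx / z   # padded half-extent in pixels
    return (u - half, v - half, u + half, v + half)

# Usage: a TL roughly 80 m ahead and 5 m above the camera.
print(project_tl_to_roi((1.0, -5.0, 80.0), fx=1400.0, fy=1400.0, cx=640.0, cy=480.0))
```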
3. Proposed Traffic Light Detection
3.1. Traffic Light Detection
3.2. Predefined Digital Map
- the 2-D TL positions (latitude, longitude, and heading direction)
- attribute information on the TL orientation (horizontal or vertical)
- attribute information on the type of the TL light pattern (see Figure 2a; a map-entry sketch follows this list).
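For illustration only, the entry sketched below uses hypothetical field names and placeholder values derived from the attributes above, not the authors' actual map schema.

```python
# A minimal sketch of how one TL entry in the digital map could be
# represented; field names and values are assumptions for illustration.
from dataclasses import dataclass
from enum import Enum

class Orientation(Enum):
    HORIZONTAL = 0
    VERTICAL = 1

@dataclass
class TrafficLightEntry:
    latitude: float           # 2-D position of the TL
    longitude: float
    heading_deg: float        # facing direction of the TL
    orientation: Orientation  # horizontal or vertical housing
    pattern: str              # light-pattern type, e.g. "3-light+right-arrow"

# Placeholder example entry.
entry = TrafficLightEntry(36.561, 136.656, 270.0, Orientation.HORIZONTAL,
                          "3-light+right-arrow")
print(entry)
```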
3.3. Method
- Search target TLs and compute ROI
- Generate a highlighted image, a feature image that emphasizes the TL lights within the ROI
- Extract candidates for TL lights from the generated highlighted image using three different methods
- Compute the probability of each area containing a TL using time-series processing (a minimal update sketch follows this list)
- Recognize the arrow light if the target TL has arrow-light attribute information.
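The time-series update in the fourth step can be illustrated, purely as an assumption, by a recursive log-odds filter in which a per-frame detection raises the existence probability of a candidate area and a miss lowers it; the authors' actual update rule may differ.

```python
# Minimal sketch of a recursive log-odds update for the probability that a
# candidate area contains a TL. Gains and clamping limits are placeholders.
import math

def update_existence_logodds(logodds, detected,
                             p_hit=0.7, p_miss=0.4, l_min=-4.0, l_max=4.0):
    """One update step for a candidate area.
    detected=True when a TL candidate was found in the area this frame."""
    p = p_hit if detected else p_miss
    logodds += math.log(p / (1.0 - p))
    return max(l_min, min(l_max, logodds))

def probability(logodds):
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

# Example: three consecutive detections followed by one miss.
l = 0.0
for obs in (True, True, True, False):
    l = update_existence_logodds(l, obs)
print(round(probability(l), 3))
```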
3.4. Coordinate System and Traffic Light Selection
3.5. ROI Clipping
3.6. Highlighted Image Generation
- Normalize the brightness value to emphasize the lighting.
- Update the saturation value to eliminate background noise.
- Apply weighting with respect to the hue value, favoring hues close to the lighting colors of traffic signals (a minimal HSV sketch follows this list).
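The sketch below combines the three operations using OpenCV's HSV representation; the gains, thresholds, and hue ranges are placeholder assumptions, not the parameters used in the paper, and highlight_roi expects a BGR ROI crop (H × W × 3, uint8).

```python
# Illustrative only: one way to build a "highlighted" image along the three
# steps above. Thresholds and hue ranges are hypothetical placeholders.
import cv2
import numpy as np

def highlight_roi(roi_bgr):
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # 1. Normalize brightness so lit lamps saturate toward the upper range.
    v = cv2.normalize(v, None, 0, 255, cv2.NORM_MINMAX)

    # 2. Suppress low-saturation background (gray road, sky, buildings).
    s = np.where(s < 60, 0, s).astype(np.uint8)

    # 3. Weight hues near typical signal colors (red ~0-10 or ~170-180,
    #    yellow ~15-35, green ~60-90 on OpenCV's 0-179 hue scale).
    hue = h.astype(np.int16)
    is_signal_hue = ((hue <= 10) | (hue >= 170) |
                     ((hue >= 15) & (hue <= 35)) |
                     ((hue >= 60) & (hue <= 90)))
    weight = np.where(is_signal_hue, 1.0, 0.3)

    highlighted = v.astype(np.float32) * (s.astype(np.float32) / 255.0) * weight
    return cv2.normalize(highlighted, None, 0, 255,
                         cv2.NORM_MINMAX).astype(np.uint8)
```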
3.7. Candidate Extraction and State Classification
3.8. Probability Updating
3.9. Arrow Signal Recognition
3.10. Prior Information using Digital Map
4. Evaluations
4.1. Condition
- Analysis of the contribution of spatial prior information to the recognition rate of the proposed method (results are reported as mean precision, recall, and F-value; a metric sketch follows this list)
- Comparison between the proposed method and a general object detection algorithm using DNN (YOLOv3 [28])
- Performance comparison of each candidate object detector (SURF, blob, AdaBoost) in the proposed method
- Comparison of recognition processing time.
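The sketch below shows one straightforward way to compute these metrics from per-class true positive, false positive, and false negative counts; it is an assumption for reference, not the authors' evaluation script.

```python
# Minimal precision/recall/F-value computation from detection counts.
def precision_recall_f(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_value = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_value

# Hypothetical example: 899 of 1000 ground-truth lights detected,
# with 60 false positives.
print(precision_recall_f(tp=899, fp=60, fn=101))
```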
4.2. Results
4.3. Verification for Automated Driving
- A route passing through intersections A to E in Figure 21, which have only TLs
- A route passing through intersections F to G in Figure 21, which have both TLs and arrow TLs.
5. Conclusions
- Robust recognition was achieved by integrating multiple types of detection methods, enabling recognition of TLs including small objects only a few pixels in size.
- Arrow TL recognition using prior information obtained from the HD map was proposed, and it was shown that small arrow objects can be detected even when they are smaller than 10 pixels.
- It was verified that the proposed method satisfies the necessary requirements for smooth deceleration of approximately 0.2 G at intersection approaches in urban automated driving.
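As a back-of-the-envelope check (not taken from the paper), the distance needed to stop at a constant 0.2 G from assumed urban approach speeds illustrates why a long recognition range leaves room for such smooth deceleration.

```python
# Stopping distance at a constant 0.2 G from assumed approach speeds.
G = 9.81  # m/s^2

def stopping_distance_m(speed_kmh, decel_g=0.2):
    v = speed_kmh / 3.6                 # convert to m/s
    return v * v / (2.0 * decel_g * G)  # d = v^2 / (2a)

for speed in (40, 50, 60):              # assumed approach speeds in km/h
    print(speed, "km/h ->", round(stopping_distance_m(speed), 1), "m")
# ~31.5 m at 40 km/h, ~49.2 m at 50 km/h, ~70.8 m at 60 km/h, excluding
# perception and reaction delays, so recognizing TLs well beyond these
# distances leaves margin for smooth deceleration.
```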
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Franke, U.; Pfeiffer, D.; Rabe, C.; Knoeppel, C.; Enzweiler, M.; Stein, F.; Herrtwich, R.G. Making Bertha See. In Proceedings of the ICCV Workshop on Computer Vision for Autonomous Driving, Sydney, Australia, 8–12 December 2013.
2. Broggi, A.; Cerri, P.; Debattisti, S.; Laghi, M.C.; Porta, P.P.; Panciroli, M.; Prioletti, A. PROUD-Public ROad Urban Driverless test: Architecture and results. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014; pp. 648–654.
3. Kato, S.; Takeuchi, E.; Ishiguro, Y.; Ninomiya, Y.; Takeda, K.; Hamada, T. An Open Approach to Autonomous Vehicles. IEEE Micro 2015, 35, 60–69.
4. Hoberock, L.L. A survey of longitudinal acceleration comfort studies in ground transportation vehicles. J. Dyn. Syst. Meas. Control 1977, 99, 76–84.
5. Powell, J.P.; Palacin, R. Passenger Stability Within Moving Railway Vehicles: Limits on Maximum Longitudinal Acceleration. Urban Rail Transit 2015, 1, 95–103.
6. Levinson, J.; Thrun, S. Robust Vehicle Localization in Urban Environments Using Probabilistic Maps. In Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, 3–7 May 2010; pp. 4372–4378.
7. Yoneda, K.; Hashimoto, N.; Yanase, R.; Aldibaja, M.; Suganuma, N. Vehicle Localization using 76 GHz Omnidirectional Millimeter-Wave Radar for Winter Automated Driving. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium, Changshu, China, 26–30 June 2018; pp. 971–977.
8. Wolcott, R.W.; Eustice, R.M. Visual Localization within LIDAR Maps for Automated Urban Driving. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014; pp. 176–183.
9. Fairfield, N.; Urmson, C. Traffic Light Mapping and Detection. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5421–5426.
10. Levinson, J.; Askeland, J.; Dolson, J.; Thrun, S. Traffic light mapping, localization and state detection for autonomous vehicles. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5784–5791.
11. John, V.; Yoneda, K.; Qi, B.; Liu, Z.; Mita, S. Traffic Light Recognition in Varying Illumination Using Deep Learning and Saliency Map. In Proceedings of the 17th International IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014; pp. 2286–2291.
12. John, V.; Yoneda, K.; Liu, Z.; Mita, S. Saliency Map Generation by the Convolutional Neural Network for Real-Time Traffic Light Detection using Template Matching. IEEE Trans. Comput. Imaging 2015, 1, 159–173.
13. De Charette, R.; Nashashibi, F. Real Time Visual Traffic Lights Recognition Based on Spot Light Detection and Adaptive Traffic Lights Templates. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi'an, China, 3–5 June 2009; pp. 358–363.
14. Chen, Z.; Huang, X. Accurate and Reliable Detection of Traffic Lights Using Multiclass Learning and Multiobject Tracking. IEEE Intell. Transp. Syst. Mag. 2016, 8, 28–42.
15. Chan, T.H.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; Ma, Y. PCANet: A simple deep learning baseline for image classification? arXiv 2014, arXiv:1404.3606.
16. Hirabayashi, M.; Sujiwo, A.; Monrroy, A.; Kato, S.; Edahiro, M. Traffic light recognition using high-definition map features. Robot. Auton. Syst. 2019, 111, 62–72.
17. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–10 October 2016; pp. 21–37.
18. Behrendt, K.; Novak, L.; Botros, R. A Deep Learning Approach to Traffic Lights: Detection, Tracking, and Classification. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation, Singapore, 29 May–3 June 2017; pp. 1370–1377.
19. Redmon, J.; Divvala, S.K.; Girshick, R.B.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
20. Siogkas, G.; Skodras, E.; Dermatas, E. Traffic Lights Detection in Adverse Conditions using Color, Symmetry and Spatiotemporal Information. In Proceedings of the VISAPP 2012, Rome, Italy, 24–26 February 2012; pp. 620–627.
21. Omachi, M.; Omachi, S. Traffic light detection with color and edge information. In Proceedings of the ICCSIT 2009, Beijing, China, 8–11 August 2009; pp. 284–287.
22. Trehard, G.; Pollard, E.; Bradai, B.; Nashashibi, F. Tracking both pose and status of a traffic light via an Interacting Multiple Model filter. In Proceedings of the 17th International Conference on Information Fusion, Salamanca, Spain, 7–10 July 2014.
23. Pon, A.D.; Andrienko, O.; Harakeh, A.; Waslander, S.L. A Hierarchical Deep Architecture and Mini-Batch Selection Method For Joint Traffic Sign and Light Detection. In Proceedings of the IEEE 15th Conference on Computer and Robot Vision, Toronto, ON, Canada, 8–10 May 2018; pp. 102–109.
24. Yoneda, K.; Suganuma, N.; Aldibaja, M. Simultaneous State Recognition for Multiple Traffic Signals on Urban Road. In Proceedings of the MECATRONICS-REM, Compiegne, France, 15–17 June 2016; pp. 135–140.
25. Bay, H.; Tuytelaars, T.; Gool, L.V. SURF: Speeded Up Robust Features. In Proceedings of the European Conference on Computer Vision 2006, Graz, Austria, 7–13 May 2006; pp. 404–417.
26. Kuramoto, A.; Kameyama, J.; Yanase, R.; Aldibaja, M.; Kim, T.H.; Yoneda, K.; Suganuma, N. Digital Map Based Signal State Recognition of Far Traffic Lights with Low Brightness. In Proceedings of the 44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 5445–5450.
27. Schapire, R.E.; Singer, Y. Improved Boosting Algorithms Using Confidence-rated Predictions. Mach. Learn. 1999, 37, 297–336.
28. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767.
29. Yamamoto, D.; Suganuma, N. Localization for Autonomous Driving on Urban Road. In Proceedings of the International Conference on Intelligent Informatics and BioMedical Sciences, Okinawa, Japan, 28–30 November 2015.
30. Yoneda, K.; Yanase, R.; Aldibaja, M.; Suganuma, N.; Sato, K. Mono-Camera based vehicle localization using Lidar Intensity Map for automated driving. Artif. Life Robot. 2018, 24, 147–154.
31. Watanabe, Y.; Sato, K.; Takada, H. DynamicMap 2.0: A Traffic Data Management Platform Leveraging Clouds, Edges and Embedded Systems. Int. J. Intell. Transp. Syst. Res. 2018, 18, 77–89.
32. Yoneda, K.; Suganuma, N.; Yanase, R.; Aldibaja, M. Automated driving recognition technologies for adverse weather conditions. IATSS Res. 2019, in press.
Paper | Method | Object | Resolution | Time | Distance/Min. Pixel | Accuracy |
---|---|---|---|---|---|---|
[13], 2009 | Blob candidate; Adaptive Template Matcher | TL | 640 × 480 | 37.4 ms (CPU) | 6 px | 95.38% (Prec.) 98.41% (Recall) |
[9], 2011 | Blob candidate; Mapping | TL; Arrow | 2040 × 1080 | 4 Hz (CPU) | 200 m | 99% (Prec.) 62% (Recall) |
[10], 2011 | Probabilistic Template Matching; Mapping | TL | 1.3 megapixel | 15 Hz (CPU) | 140 m | 91.7% |
[11], 2014 | Blob candidate; CNN; Prior Map | TL | — | 300 ms (CPU) | 100 m | 99.4% |
[14], 2016 | Blob candidate; PCANet [15]; Multi-Object Tracking | TL; Arrow | 1920 × 1080 | 3 Hz (CPU) | 13 px | 95.7% (Prec.) 95.7% (Recall) |
[16], 2019 | SSD [17]; Prior Map | TL | 1368 × 1096 | 17.9 ms (GPU) | 150 m | 86.9% |
[18], 2017 | YOLO [19]; DNN Classifier; Tracking | TL; Arrow | 1280 × 720 | 15 Hz (GPU) | 4 px | — |
Ours | Circle, Blob & Shape candidate; AdaBoost; Prior Map | TL; Arrow | 1280 × 960 | 64 ms (CPU) | 150 m / 2 px | 91.8% (TL) 56.7% (Arrow) |
Train/Test | Scene | Green | Yellow | Red | Left | Straight | Right |
---|---|---|---|---|---|---|---|
Train | Daytime | 10,211 | 422 | 5,277 | 129 | 293 | 194 |
Test | Daytime | 30,506 | 1,867 | 17,721 | 662 | 219 | 890 |
Test | Dark | 9,206 | 646 | 5,005 | 49 | 153 | 247 |
Test | Total | 39,712 | 2,513 | 22,726 | 711 | 372 | 1,137 |
Dataset | Classes | Num. of Objects | W/H | Minimum | Average | Median | Maximum |
---|---|---|---|---|---|---|---|
Our test data | 6 | 67,171 | Width | 2.82 | 34.75 | 26.72 | 213.02 |
(Day & Night) | (3 lights & 3 arrows) | | Height | 2.22 | 15.22 | 11.94 | 174.98 |
LARA [13] | 4 | 9,168 | Width | 6.00 | 11.45 | 11.00 | 37.00 |
(Only Daytime) | (3 lights & ambiguous) | | Height | 14.00 | 27.24 | 28.00 | 93.00 |
WPI [14] | 6 | 4,207 | Width | 13.00 | 28.03 | 28.00 | 47.00 |
(Only Daytime) | (2 lights & 4 arrows) | | Height | 13.00 | 27.30 | 28.00 | 48.00 |
Bosch [18] | 4 | 13,493 | Width | 1.88 | 9.43 | 8.50 | 48.38 |
(Only Daytime) | (3 lights & off) | | Height | 3.25 | 26.75 | 24.50 | 104.50 |
Object | Method | Mean Precision | Mean Recall | Mean F-Value |
---|---|---|---|---|
traffic light | proposed | 0.938 | 0.899 | 0.918 |
traffic light | without spatial prior | 0.937 | 0.894 | 0.915 |
traffic light | YOLOv3 | 0.972 | 0.598 | 0.722 |
arrow light | proposed | 0.771 | 0.517 | 0.567 |
arrow light | without spatial prior | 0.542 | 0.386 | 0.429 |
arrow light | YOLOv3 | 0.611 | 0.352 | 0.417 |
Object | Distance | Mean Precision | Mean Recall | Mean F-Value |
---|---|---|---|---|
traffic light & arrow light | 30–150 m (ave. pixel size > 5.0) | 0.935 | 0.897 | 0.910 |
traffic light & arrow light | 30–120 m (ave. pixel size > 6.0) | 0.957 | 0.907 | 0.932 |
traffic light & arrow light | 30–60 m (ave. pixel size > 11.0) | 0.971 | 0.920 | 0.945 |
Scene | Method | Mean Precision | Mean Recall | Mean F-Value |
---|---|---|---|---|
Whole | proposed | 0.938 | 0.899 | 0.918 |
Whole | SURF | 0.936 | 0.894 | 0.914 |
Whole | blob | 0.917 | 0.536 | 0.672 |
Whole | AdaBoost | 0.990 | 0.194 | 0.305 |
Whole | YOLOv3 | 0.972 | 0.598 | 0.722 |
Daytime | proposed | 0.930 | 0.902 | 0.915 |
Daytime | SURF | 0.928 | 0.899 | 0.913 |
Daytime | blob | 0.921 | 0.587 | 0.712 |
Daytime | AdaBoost | 0.990 | 0.250 | 0.372 |
Daytime | YOLOv3 | 0.986 | 0.710 | 0.806 |
Dark | proposed | 0.959 | 0.871 | 0.913 |
Dark | SURF | 0.958 | 0.875 | 0.915 |
Dark | blob | 0.902 | 0.372 | 0.523 |
Dark | AdaBoost | 0.583 | 0.007 | 0.013 |
Dark | YOLOv3 | 0.840 | 0.226 | 0.345 |
Method | CPU/GPU | Average (ms) | Standard Deviation (ms) |
---|---|---|---|
proposed | CPU | 64.278 | 19.374 |
SURF | CPU | 61.036 | 16.890 |
blob | CPU | 45.806 | 10.267 |
AdaBoost | CPU | 71.663 | 22.322 |
YOLOv3 | CPU/GPU | 93.945 | 29.693 |