Object Detection in Autonomous Vehicles under Adverse Weather: A Review of Traditional and Deep Learning Approaches
Abstract
1. Introduction
2. Related Survey Papers
3. An Overview of AVs
3.1. Sensor Technologies
3.1.1. LiDAR
3.1.2. Radar
3.1.3. Ultrasonic
3.1.4. Camera
3.2. Actuators
3.3. Perception
3.4. Planning and Control
3.5. Challenges for AVs in Adverse Weather
3.5.1. Performance of Sensors
3.5.2. Performance of Cameras
3.5.3. Technical Parameters of the Vehicle
3.5.4. Behavior of Driver
3.5.5. V2V, V2I, and VLC Communications
4. Overview of Object Detection in AVs
- Vehicle Detection: Vehicle detection is the process by which AVs identify and locate other vehicles on the road. This capability is crucial for AVs to make informed decisions about their own movement, such as maintaining safe distances, changing lanes, or responding to traffic situations.
- Pedestrian Detection: Pedestrian detection involves the recognition and tracking of people walking near or crossing the road. This is a vital safety feature for AVs as it enables the vehicle to anticipate and react to the presence of pedestrians, preventing collisions and ensuring the safety of both the vehicle’s occupants and those outside.
- Road Lane Detection: Road lane detection is the ability of AVs to identify and understand the position and orientation of road lanes. This information is essential for the vehicle to navigate within its designated lane, follow traffic rules, and make correct turns, ensuring a smooth and safe driving experience.
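To make these tasks concrete, the minimal sketch below runs an off-the-shelf, COCO-pretrained Faster R-CNN (one of the DL detectors reviewed in Section 6.2) over a driving image and keeps only road-user classes. This is an illustrative example rather than a method from the reviewed papers: the image path is a hypothetical placeholder, torchvision 0.13+ is assumed, and lane detection is omitted because it is usually posed as segmentation rather than box detection.

```python
# Minimal sketch: COCO-pretrained Faster R-CNN applied to a driving scene.
# Assumes torchvision >= 0.13; "road_scene.jpg" is a hypothetical image path.
import torch
from PIL import Image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

# COCO classes relevant to vehicle and pedestrian detection; lane markings
# are typically handled by segmentation models rather than box detectors.
road_users = {"person", "car", "truck", "bus", "motorcycle", "bicycle"}

image = Image.open("road_scene.jpg").convert("RGB")
with torch.no_grad():
    detections = model([preprocess(image)])[0]

for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    name = categories[int(label)]
    if name in road_users and score > 0.5:
        print(f"{name}: {score:.2f} at {box.tolist()}")
```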
5. Overview of Traditional and DL Approaches for Object Detection in AVs
5.1. Traditional Approach
5.2. DL Approaches
6. Traditional and DL Approaches for Object Detection in AVs
6.1. Traditional Approaches for Object Detection in AVs
6.1.1. Traditional Approaches for Vehicle Detection
6.1.2. Traditional Approaches for Pedestrian Detection
6.1.3. Traditional Approaches for Road Lane Detection
6.2. DL Approaches for Object Detection in AVs
6.2.1. DL Approaches for Vehicle Detection
| Ref. | Year | Technique | Weather Classes | Accuracy |
|---|---|---|---|---|
| [12] | 2022 | SqueezeNet, ResNet-50, EfficientNet-b0 | Cloudy, rainy, snowy, sandy, shine, sunrise | 98.48% / 97.78% / 96.05% |
| [137] | 2020 | YOLOv3 | Night | 93.66% |
| [140] | 2022 | YOLOv5 | Fog, rain, snow, sand | 94.7% |
| [130] | 2023 | CNN | Cloudy, fog, rain, shine, sunrise | 98% |
| [131] | 2022 | ResNet, GoogLeNet, SqueezeNet | Fog, low light, sun glare | 100% / 65.50% / 90.0% |
| [133] | 2021 | Faster R-CNN | Rain, fog, snow, sand | 95.16% |
| [134] | 2020 | SSD | Sunny, overcast, night, rain | 92.18% |
| [141] | 2022 | YOLOv5 | Night | 96.75% |
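Several rows above ([12], [130], [131]) report weather classification built on pretrained CNN backbones. As a generic illustration of that recipe, the sketch below fine-tunes a new classification head on a frozen ResNet-50; the `weather_train/` directory layout, class names, and hyperparameters are illustrative assumptions, not the cited papers' exact configurations.

```python
# Sketch of transfer learning for weather classification with a frozen
# ResNet-50 backbone. Directory layout "weather_train/<class>/*.jpg" and
# all hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# Subfolder names (e.g., cloudy, rainy, snowy, sandy, shine, sunrise) become labels.
train_set = datasets.ImageFolder("weather_train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():  # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(5):  # short fine-tuning loop
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```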
6.2.2. DL Approaches for Pedestrian Detection
6.2.3. DL Approaches for Road Lane Detection
6.3. Practical Implications
7. Discussion, Limitations, and Future Research Trends
7.1. Discussion
7.2. Limitations
- Our literature review primarily focuses on the detection of pedestrians, vehicles, and road lanes, which may not encompass all possible objects and scenarios relevant to AVs in adverse weather conditions. There may be other critical elements, such as traffic signals or animals, that warrant further investigation.
- The detection algorithms discussed in our review may have inherent limitations, such as difficulties in detecting small or occluded objects, which could impact the accuracy and reliability of AVs in certain situations.
- Our study primarily considers a range of adverse weather conditions, but there may be other environmental factors, such as dust storms or heavy fog, that were not extensively covered in the reviewed literature.
- The field of AV technology is rapidly evolving, and new detection algorithms and sensor technologies may emerge that could significantly impact the findings of our review. Our study may not capture the most recent advancements in this area.
- While our study includes datasets that simulate adverse weather conditions, the simulation environments may not perfectly replicate real-world driving scenarios. The complexity of real-world conditions, including unpredictable human behavior and dynamic traffic patterns, can introduce additional challenges not fully captured in simulated datasets.
- The ethical considerations and societal acceptance of AVs, especially in challenging conditions, are not addressed in our study. Public trust and the ethical use of AV technology are essential factors for their successful integration into smart cities.
7.3. Future Research Trends
- Meeting real-time requirements for OD in real-world applications has become increasingly important. Deep neural networks, however, frequently demand large amounts of computational power, which presents difficulties for embedded systems. Developing resource-efficient techniques is therefore essential, and future research should examine the computational cost of proposed methods closely, offering both quantitative and qualitative analysis to ensure their practical usefulness.
- Existing deep neural network techniques for OD in difficult conditions depend mainly on large-scale annotated datasets. Creating such datasets is expensive and time-consuming, so there is an urgent need for OD algorithms that can train with little or no labeled data.
- The employment of various evaluation metrics and IoU criteria for OD in difficult situations has resulted in the absence of a clear benchmark standard, which makes comparing methods difficult. For future research in this area to be uniform and comparable, a global baseline must be established.
- Extensive simulation environments that imitate inclement weather should be created to thoroughly test and improve object detection algorithms.
- To develop comprehensive solutions for adverse weather OD, researchers, engineers, and policymakers should collaborate more closely.
- It is necessary to study the psychology and behavior of human drivers in adverse weather, with an emphasis on developing successful communication and trust with AVs.
- Novel strategies should be developed for adapting OD algorithms in AVs in real time as environmental conditions change.
- Cutting-edge techniques for combining current weather data with weather forecasts should be investigated to enable proactive decision-making during unfavorable weather conditions.
- Sensor fusion methods, which integrate information from several sensor types, should be improved to provide more accurate and dependable detection in adverse weather (a minimal late-fusion sketch follows this list).
- Behavior prediction models for AVs should be developed, leveraging machine learning and deep learning to forecast the actions of vehicles, pedestrians, and cyclists. These models must operate effectively in adverse weather, improving AV decision-making for enhanced safety and efficiency.
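As a toy illustration of the late-fusion idea raised in the sensor fusion item above, the sketch below merges camera and radar detections (assumed to be projected into a shared image plane) by IoU matching and boosts confidence where the sensors agree. The `(box, score)` detection format is a hypothetical simplification for the example.

```python
# Toy late-fusion sketch: merge camera and radar detections by IoU matching.
# Detections are hypothetical (box=[x1, y1, x2, y2], score) pairs in a shared frame.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def fuse(camera_dets, radar_dets, thr=0.5):
    """Keep every camera detection, raise its score if radar confirms it, and
    pass through radar-only detections (useful when fog blinds the camera)."""
    fused, used = [], set()
    for cbox, cscore in camera_dets:
        best, best_iou = None, thr
        for i, (rbox, rscore) in enumerate(radar_dets):
            if i in used:
                continue
            overlap = iou(cbox, rbox)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            used.add(best)
            # Noisy-OR combination: agreement from two sensors raises confidence.
            cscore = 1 - (1 - cscore) * (1 - radar_dets[best][1])
        fused.append((cbox, cscore))
    fused += [d for i, d in enumerate(radar_dets) if i not in used]
    return fused

print(fuse([([10, 10, 50, 50], 0.6)],
           [([12, 8, 52, 48], 0.7), ([100, 100, 140, 140], 0.8)]))
```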
8. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- World Health Organization. WHO: Global Status Report on Road Safety: Summary; Technical Report. Available online: https://www.who.int/health-topics/road-safety#tab=tab_1 (accessed on 18 February 2024).
- Iftikhar, S.; Asim, M.; Zhang, Z.; El-Latif, A.A.A. Advance generalization technique through 3D CNN to overcome the false positives pedestrian in autonomous vehicles. Telecommun. Syst. 2022, 80, 545–557. [Google Scholar] [CrossRef]
- Self-Driving Cars Global Market Size. Available online: https://precedenceresearch.com/self-driving-cars-market (accessed on 18 December 2023).
- Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; Jesus, L.; Berriel, R.; Paixao, T.M.; Mutz, F.; et al. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816. [Google Scholar] [CrossRef]
- Bansal, P.; Kockelman, K.M. Are we ready to embrace connected and self-driving vehicles? A case study of Texans. Transportation 2018, 45, 641–675. [Google Scholar] [CrossRef]
- Chehri, A.; Sharma, T.; Debaque, B.; Duclos, N.; Fortier, P. Transport systems for smarter cities, a practical case applied to traffic management in the city of Montreal. In Sustainability in Energy and Buildings 2021; Springer: Singapore, 2022; pp. 255–266. [Google Scholar]
- Trenberth, K.E.; Zhang, Y. How often does it really rain? Bull. Am. Meteorol. Soc. 2018, 99, 289–298. [Google Scholar] [CrossRef]
- Andrey, J.; Yagar, S. A temporal analysis of rain-related crash risk. Accid. Anal. Prev. 1993, 25, 465–472. [Google Scholar] [CrossRef]
- National Oceanic and Atmospheric Administration, Getting Traction: Tips for Traveling in Winter Weather. Available online: https://www.weather.gov/wrn/getting_traction (accessed on 24 October 2023).
- Mehra, A.; Mandal, M.; Narang, P.; Chamola, V. ReViewNet: A fast and resource optimized network for enabling safe autonomous driving in hazy weather conditions. IEEE Trans. Intell. Transp. Syst. 2020, 22, 4256–4266. [Google Scholar] [CrossRef]
- Abu Al-Haija, Q.; Krichen, M.; Abu Elhaija, W. Machine-learning-based darknet traffic detection system for IoT applications. Electronics 2022, 11, 556. [Google Scholar] [CrossRef]
- Al-Haija, Q.A.; Gharaibeh, M.; Odeh, A. Detection in adverse weather conditions for autonomous vehicles via deep learning. AI 2022, 3, 303–317. [Google Scholar] [CrossRef]
- Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The impact of adverse weather conditions on autonomous vehicles: How rain, snow, fog, and hail affect the performance of a self-driving car. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
- Mohammed, A.S.; Amamou, A.; Ayevide, F.K.; Kelouwani, S.; Agbossou, K.; Zioui, N. The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review. Sensors 2020, 20, 6532. [Google Scholar] [CrossRef]
- Yoneda, K.; Suganuma, N.; Yanase, R.; Aldibaja, M. Automated driving recognition technologies for adverse weather conditions. IATSS Res. 2019, 43, 253–262. [Google Scholar] [CrossRef]
- Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and sensing for autonomous vehicles under adverse weather conditions: A survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
- Arthi, V.; Murugeswari, R.; Nagaraj, P. Object Detection of Autonomous Vehicles under Adverse Weather Conditions. In Proceedings of the 2022 International Conference on Data Science, Agents & Artificial Intelligence (ICDSAAI), Chennai, India, 8–10 December 2022; Volume 1, pp. 1–8. [Google Scholar]
- Hnewa, M.; Radha, H. Object detection under rainy conditions for autonomous vehicles: A review of state-of-the-art and emerging techniques. IEEE Signal Process. Mag. 2020, 38, 53–67. [Google Scholar] [CrossRef]
- Abbas, A.F.; Sheikh, U.U.; Al-Dhief, F.T.; Mohd, M.N.H. A comprehensive review of vehicle detection using computer vision. TELKOMNIKA Telecommun. Comput. Electron. Control 2021, 19, 838–850. [Google Scholar] [CrossRef]
- Yang, Z.; Pun-Cheng, L.S. Vehicle detection in intelligent transportation systems and its applications under varying environments: A review. Image Vis. Comput. 2018, 69, 143–154. [Google Scholar] [CrossRef]
- Muneeswaran, V.; Nagaraj, P.; Rajasekaran, M.P.; Reddy, S.U.; Chaithanya, N.S.; Babajan, S. IoT based Multiple Vital Health Parameter Detection and Analyzer System. In Proceedings of the 2022 7th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 22–24 June 2022; pp. 473–478. [Google Scholar] [CrossRef]
- SAE International. Available online: https://www.sae.org/standards/content/j3016_202104/ (accessed on 11 November 2023).
- Bimbraw, K. Autonomous cars: Past, present and future a review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology. In Proceedings of the 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, France, 21–23 July 2015; Volume 1, pp. 191–198. [Google Scholar]
- Lafrance, A. Your Grandmother’s Driverless Car. The Atlantic, 29 June 2016. Available online: https://www.theatlantic.com/technology/archive/2016/06/beep-beep/489029/ (accessed on 27 October 2023).
- Pomerleau, D.A. Alvinn: An autonomous land vehicle in a neural network. In NIPS’88: Proceedings of the 1st International Conference on Neural Information Processing Systems, 1 January 1988; MIT Press: Cambridge, MA, USA; pp. 305–313.
- SAE International. Surface Vehicle Recommended Practice. Joint SAE/RCCC Fuel Consumption Test Procedure (Short Term in-Service Vehicle) Type I. 1986. Available online: https://ca-times.brightspotcdn.com/54/02/2d5919914cfe9549e79721b12e66/j372016-202104.pdf (accessed on 1 May 2021).
- How Google’s Autonomous Car Passed the First US State Self-Driving Test. 2014. Available online: https://spectrum.ieee.org/how-googles-autonomous-car-passed-the-first-us-state-selfdriving-test (accessed on 31 December 2023).
- Claire, P. The Pathway to Driverless Cars: Summary Report and Action Plan; OCLC: London, UK, 2015. [Google Scholar]
- Briefs, U. Mcity Grand Opening. Res. Rev. 2015, 46, 1–3. [Google Scholar]
- Bellone, M.; Ismailogullari, A.; Müür, J.; Nissin, O.; Sell, R.; Soe, R.M. Autonomous driving in the real-world: The weather challenge in the Sohjoa Baltic project. In Towards Connected and Autonomous Vehicle Highways: Technical, Security and Social Challenges; Springer International Publishing: Cham, Switzerland, 2021; pp. 229–255. [Google Scholar]
- Lambert, F. Watch Tesla Autopilot Go through a Snowstorm. Available online: https://electrek.co/2019/01/28/tesla-autopilot-snow-storm/ (accessed on 2 January 2023).
- EUR-Lex. E.L. EUR-Lex-32019R2144-EN-EUR-Lex. 2014. Available online: https://eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A32019R2144 (accessed on 13 October 2023).
- Waymo, W. Waymo Safety Report. 2020. Available online: https://waymo.com/safety/ (accessed on 18 August 2023).
- Gehrig, S.K.; Stein, F.J. Dead reckoning and cartography using stereo vision for an autonomous car. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human and Environment Friendly Robots with High Intelligence and Emotional Quotients (Cat. No.99CH36289), Kyongju, Republic of Korea, 17–21 October 1999; Volume 3, pp. 1507–1512. [Google Scholar] [CrossRef]
- Rawashdeh, N.A.; Bos, J.P.; Abu-Alrub, N.J. Drivable path detection using CNN sensor fusion for autonomous driving in the snow. In Proceedings of the SPIE 11748, Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2021, Online, FL, USA, 12–17 April 2021; Volume 1174806. [Google Scholar] [CrossRef]
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
- Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The multiple 3D LiDAR dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1094–1101. [Google Scholar] [CrossRef]
- Velodyne, HDL-64E Spec Sheet. Available online: https://velodynesupport.zendesk.com/hc/en-us/articles/115003632634-HDL-64E-Spec-Sheet (accessed on 13 October 2023).
- Bijelic, M.; Gruber, T.; Ritter, W. A benchmark for lidar sensors in fog: Is detection breaking down? In Proceedings of the Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 760–767. [Google Scholar] [CrossRef]
- Patole, S.M.; Torlak, M.; Wang, D.; Ali, M. Automotive radars: A review of signal processing techniques. IEEE Signal Process. Mag. 2017, 34, 22–35. [Google Scholar] [CrossRef]
- Navtech Radar, FMCW Radar. Available online: https://navtechradar.com/explore/fmcw-radar/ (accessed on 22 November 2021).
- Carullo, A.; Parvis, M. An ultrasonic sensor for distance measurement in automotive applications. IEEE Sens. J. 2001, 1, 143. [Google Scholar] [CrossRef]
- Frenzel, L. Ultrasonic Sensors: A Smart Choice for Shorter-Range Applications. 2018. Available online: https://www.electronicdesign.com/industrialautomation/article/21806202/ultrasonic-sensors-a-smart-choicefor-shorterrange-applications (accessed on 14 August 2023).
- Kamemura, T.; Takagi, H.; Pal, C.; Ohsumi, A. Development of a long-range ultrasonic sensor for automotive application. SAE Int. J. Passeng. Cars-Electron. Electr. Syst. 2008, 1, 301–306. [Google Scholar] [CrossRef]
- Tesla, Summon Your Tesla from Your Phone. Available online: https://www.tesla.com/blog/summon-your-tesla-your-phone (accessed on 2 November 2023).
- Pendleton, S.D.; Andersen, H.; Du, X.; Shen, X.; Meghjani, M.; Eng, Y.H.; Rus, D.; Ang, M.H., Jr. Perception, planning, control, and coordination for autonomous vehicles. Machines 2017, 5, 6. [Google Scholar] [CrossRef]
- Nasseri, A.; Shlomit, H. Autonomous Vehicle Technology Report. 2020. Available online: https://www.wevolver.com/article/ (accessed on 28 October 2023).
- Rosique, F.; Navarro, P.J.; Fernández, C.; Padilla, A. A systematic review of perception system and simulators for autonomous vehicles research. Sensors 2019, 19, 648. [Google Scholar] [CrossRef] [PubMed]
- Paden, B.; Čáp, M.; Yong, S.Z.; Yershov, D.; Frazzoli, E. A survey of motion planning and control techniques for self-driving urban vehicles. IEEE Trans. Intell. Veh. 2016, 1, 33–55. [Google Scholar] [CrossRef]
- Shladover, S.E.; Desoer, C.A.; Hedrick, J.K.; Tomizuka, M.; Walrand, J.; Zhang, W.B.; McMahon, D.H.; Peng, H.; Sheikholeslam, S.; McKeown, N. Automated vehicle control developments in the PATH program. IEEE Trans. Veh. Technol. 1991, 40, 114–130. [Google Scholar] [CrossRef]
- Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets robotics: The KITTI dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. [Google Scholar] [CrossRef]
- Sun, H.; Zhang, W.; Yu, R.; Zhang, Y. Motion planning for mobile robots—Focusing on deep reinforcement learning: A systematic review. IEEE Access 2021, 9, 69061–69081. [Google Scholar] [CrossRef]
- Rai, M.; Khosla, B.; Dhawan, Y.; Kharotia, H.; Kumar, N.; Bandi, A. CLEAR: An Efficient Traffic Sign Recognition Technique for Cyber-Physical Transportation Systems. In Proceedings of the 2022 IEEE/ACM 15th International Conference on Utility and Cloud Computing (UCC), Vancouver, WA, USA, 6 December 2022; pp. 418–423. [Google Scholar]
- Sakaridis, C.; Dai, D.; Van Gool, L. ACDC: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10765–10775. [Google Scholar]
- Geyer, J.; Kassahun, Y.; Mahmudi, M.; Ricou, X.; Durgesh, R.; Chung, A.S.; Hauswald, L.; Pham, V.H.; Mühlegg, M.; Dorn, S.; et al. A2D2: Audi autonomous driving dataset. arXiv 2020, arXiv:2004.06320. [Google Scholar]
- Sun, P.; Kretzschmar, H.; Dotiwalla, X.; Chouard, A.; Patnaik, V.; Tsui, P.; Guo, J.; Zhou, Y.; Chai, Y.; Caine, B.; et al. Scalability in perception for autonomous driving: Waymo open dataset. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2443–2451. [Google Scholar] [CrossRef]
- Caesar, H.; Bankiti, V.; Lang, A.H.; Vora, S.; Liong, V.E.; Xu, Q.; Krishnan, A.; Pan, Y.; Baldan, G.; Beijbom, O. nuScenes: A multimodal dataset for autonomous driving. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11621–11631. [Google Scholar] [CrossRef]
- Yu, F.; Chen, H.; Wang, X.; Xian, W.; Chen, Y.; Liu, F.; Madhavan, V.; Darrell, T. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2636–2645. [Google Scholar] [CrossRef]
- Huang, X.; Wang, P.; Cheng, X.; Zhou, D.; Geng, Q.; Yang, R. The ApolloScape open dataset for autonomous driving and its application. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2702–2719. [Google Scholar] [CrossRef] [PubMed]
- Ros, G.; Sellart, L.; Materzynska, J.; Vazquez, D.; Lopez, A.M. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 3234–3243. [Google Scholar] [CrossRef]
- Carlevaris-Bianco, N.; Ushani, A.K.; Eustice, R.M. University of Michigan North Campus long-term vision and lidar dataset. Int. J. Robot. Res. 2016, 35, 1023–1035. [Google Scholar] [CrossRef]
- Pitropov, M.; Garcia, D.E.; Rebello, J.; Smart, M.; Wang, C.; Czarnecki, K.; Waslander, S. Canadian adverse driving conditions dataset. Int. J. Robot. Res. 2021, 40, 681–690. [Google Scholar] [CrossRef]
- Zheng, J.Y. IUPUI driving videos and images in all weather and illumination conditions. arXiv 2021, arXiv:2104.08657. [Google Scholar]
- Bos, J.P.; Chopp, D.; Kurup, A.; Spike, N. Autonomy at the end of the earth: An inclement weather autonomous driving data set. In Proceedings of the Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure 2020, Online, CA, USA, 27 April–9 May 2020; Volume 11415, pp. 36–48. [Google Scholar] [CrossRef]
- Vora, S.; Lang, A.H.; Helou, B.; Beijbom, O. Pointpainting: Sequential fusion for 3d object detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 4603–4611. [Google Scholar] [CrossRef]
- Chang, M.F.; Lambert, J.; Sangkloy, P.; Singh, J.; Bak, S.; Hartnett, A.; Wang, D.; Carr, P.; Lucey, S.; Ramanan, D.; et al. Argoverse: 3d tracking and forecasting with rich maps. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 8748–8757. [Google Scholar] [CrossRef]
- Meyer, M.; Kuschk, G. Automotive radar dataset for deep learning based 3d object detection. In Proceedings of the 2019 16th European Radar Conference (EuRAD), Paris, France, 2–4 October 2019; pp. 129–132. [Google Scholar]
- Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar]
- Sakaridis, C.; Dai, D.; Van Gool, L. Semantic foggy scene understanding with synthetic data. Int. J. Comput. Vis. 2018, 126, 973–992. [Google Scholar] [CrossRef]
- Kim, J.; Rohrbach, A.; Darrell, T.; Canny, J.; Akata, Z. Textual Explanations for Self-Driving Vehicles. In Computer Vision–ECCV 2018; Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 11206. [Google Scholar]
- Neuhold, G.; Ollmann, T.; Rota Bulo, S.; Kontschieder, P. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4990–4999. [Google Scholar]
- Braun, M.; Krebs, S.; Flohr, F.; Gavrila, D.M. Eurocity persons: A novel benchmark for person detection in traffic scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 1844–1861. [Google Scholar] [CrossRef]
- Maddern, W.; Pascoe, G.; Linegar, C.; Newman, P. 1 year, 1000 km: The oxford robotcar dataset. Int. J. Robot. Res. 2017, 36, 3–15. [Google Scholar] [CrossRef]
- Pham, Q.H.; Sevestre, P.; Pahwa, R.S.; Zhan, H.; Pang, C.H.; Chen, Y.; Mustafa, A.; Chandrasekhar, V.; Lin, J. A*3D dataset: Towards autonomous driving in challenging environments. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2267–2273. [Google Scholar]
- Wenzel, P.; Wang, R.; Yang, N.; Cheng, Q.; Khan, Q.; von Stumberg, L.; Zeller, N.; Cremers, D. 4Seasons: A cross-season dataset for multi-weather SLAM in autonomous driving. In Pattern Recognition; Akata, Z., Geiger, A., Sattler, T., Eds.; DAGM GCPR 2020. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2021; Volume 12544, pp. 404–417. [Google Scholar]
- Zendel, O.; Honauer, K.; Murschitz, M.; Steininger, D.; Dominguez, G.F. Wilddash-creating hazard-aware benchmarks. In Proceedings of the Computer Vision–ECCV 2018: 15th European Conference, Munich, Germany, 8–14 September 2018; Proceedings, Part VI. pp. 402–416. [Google Scholar]
- Choi, Y.; Kim, N.; Hwang, S.; Park, K.; Yoon, J.S.; An, K.; Kweon, I.S. KAIST multi-spectral day/night data set for autonomous and assisted driving. IEEE Trans. Intell. Transp. Syst. 2018, 19, 934–948. [Google Scholar] [CrossRef]
- Sheeny, M.; De Pellegrin, E.; Mukherjee, S.; Ahrabian, A.; Wang, S.; Wallace, A. RADIATE: A radar dataset for automotive perception in bad weather. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021; pp. 1–7. [Google Scholar]
- Yan, Z.; Sun, L.; Krajník, T.; Ruichek, Y. EU long-term dataset with multiple sensors for autonomous driving. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 10697–10704. [Google Scholar]
- Burnett, K.; Yoon, D.J.; Wu, Y.; Li, A.Z.; Zhang, H.; Lu, S.; Qian, J.; Tseng, W.K.; Lambert, A.; Leung, K.Y.; et al. Boreas: A multi-season autonomous driving dataset. Int. J. Robot. Res. 2023, 42, 33–42. [Google Scholar] [CrossRef]
- Kenk, M.A.; Hassaballah, M. DAWN: Vehicle detection in adverse weather nature dataset. arXiv 2020, arXiv:2008.05402. [Google Scholar]
- Saralajew, S.; Ohnemus, L.; Ewecker, L.; Asan, E.; Isele, S.; Roos, S. A dataset for provident vehicle detection at night. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021; pp. 9750–9757. [Google Scholar]
- Lowe, D.G. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–25 September 1999; Volume 2, pp. 1150–1157. [Google Scholar]
- Viola, P.; Jones, M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Kauai, HI, USA, 8–14 December 2001. [Google Scholar] [CrossRef]
- Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 1, pp. 886–893. [Google Scholar]
- Asim, M.; Wang, Y.; Wang, K.; Huang, P.Q. A review on computational intelligence techniques in cloud and edge computing. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 742–763. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef]
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision–ECCV 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9905, pp. 21–37. [Google Scholar]
- Terven, J.; Cordova-Esparza, D. A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond. arXiv 2023, arXiv:2304.00501. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 10778–10787. [Google Scholar] [CrossRef]
- Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Computer Vision–ECCV 2020; Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; Volume 12346, pp. 213–229. [Google Scholar]
- Tian, Z.; Shen, C.; Chen, H.; He, T. FCOS: Fully convolutional one-stage object detection. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9626–9635. [Google Scholar] [CrossRef]
- Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar] [CrossRef]
- Galvao, L.G.; Abbod, M.; Kalganova, T.; Palade, V.; Huda, M.N. Pedestrian and vehicle detection in autonomous vehicle perception systems—A review. Sensors 2021, 21, 7267. [Google Scholar] [CrossRef]
- Jocher, G. Ultralytics/Yolov8, GitHub, GitHub Repository. 2023. Available online: https://github.com/ultralytics/ultralytics (accessed on 15 December 2023).
- Jia, S.J.; Zhai, Y.T.; Jia, X.W. Detection of Traffic and Road Condition Based on SVM and PHOW. Appl. Mech. Mater. 2014, 513, 3651–3654. [Google Scholar] [CrossRef]
- Gao, D.; Zhou, J.; Xin, L. SVM-based detection of moving vehicles for automatic traffic monitoring. In Proceedings of the ITSC 2001. 2001 IEEE Intelligent Transportation Systems. Proceedings (Cat. No.01TH8585), Oakland, CA, USA, 25–29 August 2001; pp. 745–749. [Google Scholar] [CrossRef]
- Zheng, F.; Luo, S.; Song, K.; Yan, C.W.; Wang, M.C. Improved lane line detection algorithm based on Hough transform. Pattern Recognit. Image Anal. 2018, 28, 254–260. [Google Scholar] [CrossRef]
- Vamsi, A.M.; Deepalakshmi, P.; Nagaraj, P.; Awasthi, A.; Raj, A. IOT based autonomous inventory management for warehouses. In EAI International Conference on Big Data Innovation for Sustainable Cognitive Computing; Haldorai, A., Ramu, A., Mohanram, S., Onn, C., Eds.; EAI/Springer Innovations in Communication and Computing; Springer: Cham, Switzerland, 2020; pp. 371–376. [Google Scholar]
- Nagaraj, P.; Muneeswaran, V.; Sudar, K.M.; Ali, R.S.; Someshwara, A.; Kumar, T.S. Internet of Things based smart hospital saline monitoring system. In Proceedings of the 2021 5th International Conference on Computer, Communication and Signal Processing (ICCCSP), Chennai, India, 24–25 May 2021; pp. 53–58. [Google Scholar] [CrossRef]
- Wang, Z.; Zhan, J.; Duan, C.; Guan, X.; Yang, K. Vehicle detection in severe weather based on pseudo-visual search and HOG–LBP feature fusion. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2022, 236, 1607–1618. [Google Scholar] [CrossRef]
- Abdullah, M.N.; Ali, Y.H. Vehicles detection system at different weather conditions. Iraqi J. Sci. 2021, 62, 2040–2052. [Google Scholar] [CrossRef]
- Wu, B.F.; Juang, J.H. Adaptive vehicle detector approach for complex environments. IEEE Trans. Intell. Transp. Syst. 2012, 13, 817–827. [Google Scholar] [CrossRef]
- Padilla, D.A.; Villaverde, J.F.; Magdaraog, J.J.T.; Oconer, A.J.L.; Ranjo, J.P. Vehicle and Weather Detection Using Real Time Image Processing Using Optical Flow and Color Histogram. In Proceedings of the 2019 5th International Conference on Control, Automation and Robotics (ICCAR), Beijing, China, 19–22 April 2019; pp. 880–883. [Google Scholar] [CrossRef]
- Tian, Q.; Zhang, L.; Wei, Y.; Zhao, W.; Fei, W. Vehicle Detection and Tracking at Night in Video Surveillance. Int. J. Online Eng. 2013, 9, 60–64. [Google Scholar] [CrossRef]
- Kuang, H.; Chen, L.; Chan, L.L.H.; Cheung, R.C.; Yan, H. Feature selection based on tensor decomposition and object proposal for night-time multiclass vehicle detection. IEEE Trans. Syst. Man Cybern. Syst. 2018, 49, 71–80. [Google Scholar] [CrossRef]
- Ewecker, L.; Asan, E.; Ohnemus, L.; Saralajew, S. Provident vehicle detection at night for advanced driver assistance systems. Auton. Robot. 2023, 47, 313–335. [Google Scholar] [CrossRef]
- Yaghoobi Ershadi, N. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera. PLoS ONE 2017, 12, e0189145. [Google Scholar] [CrossRef]
- Yang, H.; Qu, S. Real-time vehicle detection and counting in complex traffic scenes using background subtraction model with low-rank decomposition. IET Intell. Transp. Syst. 2018, 12, 75–85. [Google Scholar] [CrossRef]
- Govardhan, P. Night Time Pedestrian Detection for Advanced Driving Assistance Systems (ADAS) Using Near Infrared Images. Ph.D. Thesis, National Institute of Technology, Rourkela, India, 2014. [Google Scholar]
- Sasaki, Y.; Emaru, T.; Ravankar, A.A. SVM based pedestrian detection system for sidewalk snow removing machines. In Proceedings of the 2021 IEEE/SICE International Symposium on System Integration (SII), Iwaki, Japan, 11–14 January 2021; pp. 700–701. [Google Scholar]
- Kang, S.; Byun, H.; Lee, S.W. Real-time pedestrian detection using support vector machines. In Pattern Recognition with Support Vector Machines. SVM 2002; Lee, S.W., Verri, A., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2002; Volume 2388, pp. 268–277. [Google Scholar]
- Jegham, I.; Khalifa, A.B. Pedestrian detection in poor weather conditions using moving camera. In Proceedings of the 2017 IEEE/ACS 14th International Conference on Computer Systems and Applications (AICCSA), Hammamet, Tunisia, 30 October–3 November 2017; pp. 358–362. [Google Scholar]
- Fan, Y. Research of Pedestrian Tracking Based on HOG Feature and Haar Feature. Comput. Sci. 2013, 40, 199–203. [Google Scholar]
- Ding, B.; Liu, Z.; Sun, Y. Pedestrian detection in haze environments using dark channel prior and histogram of oriented gradient. In Proceedings of the 2018 Eighth International Conference on Instrumentation & Measurement, Computer, Communication and Control (IMCCC), Harbin, China, 19–21 July 2018; pp. 1003–1008. [Google Scholar]
- Sotelo, M.; Parra, I.; Fernandez, D.; Naranjo, E. Pedestrian detection using SVM and multi-feature combination. In Proceedings of the 2006 IEEE Intelligent Transportation Systems Conference, Toronto, ON, Canada, 17–20 September 2006; pp. 103–108. [Google Scholar] [CrossRef]
- Satyawan, A.S.; Fuady, S.; Mitayani, A.; Sari, Y.W. HOG Based Pedestrian Detection System for Autonomous Vehicle Operated in Limited Area. In Proceedings of the 2021 International Conference on Radar, Antenna, Microwave, Electronics, and Telecommunications (ICRAMET), Bandung, Indonesia, 23–24 November 2021; pp. 147–152. [Google Scholar]
- Jang, G.; Park, J.; Kim, M. Cascade-Adaboost for Pedestrian Detection Using HOG and Combined Features. In Advances in Computer Science and Ubiquitous Computing; Springer: Singapore, 2016; pp. 430–435. [Google Scholar]
- Zhang, X.; Zhu, X. Autonomous path tracking control of intelligent electric vehicles based on lane detection and optimal preview method. Expert Syst. Appl. 2019, 121, 38–48. [Google Scholar] [CrossRef]
- Gern, A.; Moebus, R.; Franke, U. Vision-based lane recognition under adverse weather conditions using optical flow. In Proceedings of the Intelligent Vehicle Symposium 2002, Versailles, France, 17–21 June 2002; Volume 2, pp. 652–657. [Google Scholar]
- Tran, T.T.; Son, J.H.; Uk, B.J.; Lee, J.H.; Cho, H.M. An adaptive method for detecting lane boundary in night scene. In Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence; Huang, D.S., Zhang, X., Reyes García, C.A., Zhang, L., Eds.; ICIC 2010. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2010; Volume 6216, pp. 301–308. [Google Scholar]
- Chen, T.Y.; Chen, C.H.; Luo, G.M.; Hu, W.C.; Chern, J.C. Vehicle detection in nighttime environment by locating road lane and taillights. In Proceedings of the 2015 International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Adelaide, SA, Australia, 23–25 September 2015; pp. 60–63. [Google Scholar]
- Guo, J.; Wei, Z.; Miao, D. Lane detection method based on improved RANSAC algorithm. In Proceedings of the 2015 IEEE Twelfth International Symposium on Autonomous Decentralized Systems, Taichung, Taiwan, 25–27 March 2015; pp. 285–288. [Google Scholar]
- Antonio, J.A.; Romero, M. Pedestrians’ Detection Methods in Video Images: A Literature Review. In Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 12–14 December 2018; pp. 354–360. [Google Scholar]
- Papadimitriou, O.; Kanavos, A.; Mylonas, P.; Maragoudakis, M. Advancing Weather Image Classification using Deep Convolutional Neural Networks. In Proceedings of the 2023 18th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP 2023), Limassol, Cyprus, 25–26 September 2023; pp. 1–6. [Google Scholar]
- Alhammadi, S.A.; Alhameli, S.A.; Almaazmi, F.A.; Almazrouei, B.H.; Almessabi, H.A.; Abu-Kheil, Y. Thermal-Based Vehicle Detection System using Deep Transfer Learning under Extreme Weather Conditions. In Proceedings of the 2022 8th International Conference on Information Technology Trends (ITT), Dubai, United Arab Emirates, 25–26 May 2022; pp. 119–123. [Google Scholar]
- Arora, N.; Kumar, Y.; Karkra, R.; Kumar, M. Automatic vehicle detection system in different environment conditions using fast R-CNN. Multimed. Tools Appl. 2022, 81, 18715–18735. [Google Scholar] [CrossRef]
- Ghosh, R. On-road vehicle detection in varying weather conditions using faster R-CNN with several region proposal networks. Multimed. Tools Appl. 2021, 80, 25985–25999. [Google Scholar] [CrossRef]
- Cao, J.; Song, C.; Song, S.; Peng, S.; Wang, D.; Shao, Y.; Xiao, F. Front vehicle detection algorithm for smart car based on improved SSD model. Sensors 2020, 20, 4646. [Google Scholar] [CrossRef] [PubMed]
- Li, W. Vehicle detection in foggy weather based on an enhanced YOLO method. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2022; Volume 2284, p. 012015. [Google Scholar]
- Ghosh, R. A modified yolo model for on-road vehicle detection in varying weather conditions. In Intelligent Computing and Communication Systems; Springer: Singapore, 2021; pp. 45–54. [Google Scholar]
- Miao, Y.; Liu, F.; Hou, T.; Liu, L.; Liu, Y. A nighttime vehicle detection method based on YOLO v3. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 6617–6621. [Google Scholar]
- Humayun, M.; Ashfaq, F.; Jhanjhi, N.Z.; Alsadun, M.K. Traffic management: Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid pooling network. Electronics 2022, 11, 2748. [Google Scholar] [CrossRef]
- Wang, R.; Zhao, H.; Xu, Z.; Ding, Y.; Li, G.; Zhang, Y.; Li, H. Real-time vehicle target detection in inclement weather conditions based on YOLOv4. Front. Neurorobot. 2023, 17, 1058723. [Google Scholar] [CrossRef]
- Yao, J.; Fan, X.; Li, B.; Qin, W. Adverse Weather Target Detection Algorithm Based on Adaptive Color Levels and Improved YOLOv5. Sensors 2022, 22, 8577. [Google Scholar] [CrossRef] [PubMed]
- Razzok, M.; Badri, A.; Mourabit, I.E.; Ruichek, Y.; Sahel, A. Pedestrian detection under weather conditions using conditional generative adversarial network. Int. J. Artif. Intell. 2023, 12, 1557–1568. [Google Scholar] [CrossRef]
- Pawełczyk, M.; Wojtyra, M. Real world object detection dataset for quadcopter unmanned aerial vehicle detection. IEEE Access 2020, 8, 174394–174409. [Google Scholar] [CrossRef]
- Yahiaoui, M.; Rashed, H.; Mariotti, L.; Sistu, G.; Clancy, I.; Yahiaoui, L.; Kumar, V.R.; Yogamani, S. FisheyeMODNet: Moving object detection on surround-view cameras for autonomous driving. arXiv 2019, arXiv:1908.11789. [Google Scholar]
- Ragesh, N.; Rajesh, R. Pedestrian detection in automotive safety: Understanding state-of-the-art. IEEE Access 2019, 7, 47864–47890. [Google Scholar] [CrossRef]
- Kim, J.H.; Hong, H.G.; Park, K.R. Convolutional neural network-based human detection in nighttime images using visible light camera sensors. Sensors 2017, 17, 1065. [Google Scholar] [CrossRef] [PubMed]
- Hsia, C.H.; Peng, H.C.; Chan, H.T. All-Weather Pedestrian Detection Based on Double-Stream Multispectral Network. Electronics 2023, 12, 2312. [Google Scholar] [CrossRef]
- Lai, K.; Zhao, J.; Liu, D.; Huang, X.; Wang, L. Research on pedestrian detection using optimized mask R-CNN algorithm in low-light road environment. In Journal of Physics: Conference Series; IOP Publishing: Bristol, UK, 2021; Volume 1777, p. 012057. [Google Scholar]
- Tumas, P.; Nowosielski, A.; Serackis, A. Pedestrian detection in severe weather conditions. IEEE Access 2020, 8, 62775–62784. [Google Scholar] [CrossRef]
- Shakeri, A.; Moshiri, B.; Garakani, H.G. Pedestrian detection using image fusion and stereo vision in autonomous vehicles. In Proceedings of the 2018 9th International Symposium on Telecommunications (IST), Tehran, Iran, 17–19 December 2018; pp. 592–596. [Google Scholar] [CrossRef]
- Galarza-Bravo, M.A.; Flores-Calero, M.J. Pedestrian detection at night based on faster R-CNN and far infrared images. In Intelligent Robotics and Applications; Chen, Z., Mendes, A., Yan, Y., Chen, S., Eds.; ICIRA 2018. Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; Volume 10985, pp. 335–345. [Google Scholar]
- Xu, Y.; Cao, X.; Qiao, H. Optical camera based pedestrian detection in rainy or snowy weather. In Fuzzy Systems and Knowledge Discovery; Wang, L., Jiao, L., Shi, G., Li, X., Liu, J., Eds.; FSKD 2006. Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4223, pp. 1182–1191. [Google Scholar]
- Montenegro, B.; Flores-Calero, M. Pedestrian detection at daytime and nighttime conditions based on YOLO-v5. Ingenius. Rev. Cienc. Tecnol. 2022, 85–95. [Google Scholar] [CrossRef]
- Zaman, M.; Saha, S.; Zohrabi, N.; Abdelwahed, S. Deep Learning Approaches for Vehicle and Pedestrian Detection in Adverse Weather. In Proceedings of the 2023 IEEE Transportation Electrification Conference & Expo (ITEC), Detroit, MI, USA, 21–23 June 2023; pp. 1–6. [Google Scholar]
- Son, J.; Yoo, H.; Kim, S.; Sohn, K. Real-time illumination invariant lane detection for lane departure warning system. Expert Syst. Appl. 2015, 42, 1816–1824. [Google Scholar] [CrossRef]
- Kortli, Y.; Marzougui, M.; Atri, M. Efficient implementation of a real-time lane departure warning system. In Proceedings of the 2016 International Image Processing, Applications and Systems (IPAS), Hammamet, Tunisia, 5–7 November 2016; pp. 1–6. [Google Scholar]
- Sun, T.Y.; Tsai, S.J.; Chan, V. HSI color model based lane-marking detection. In Proceedings of the 2006 IEEE Intelligent Transportation Systems Conference, Toronto, ON, Canada, 17–20 September 2006; pp. 1168–1172. [Google Scholar]
- Chiu, K.Y.; Lin, S.F. Lane detection using color-based segmentation. In Proceedings of the IEEE Proceedings. Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 706–711. [Google Scholar]
- Shin, S.; Shim, I.; Kweon, I.S. Combinatorial approach for lane detection using image and LIDAR reflectance. In Proceedings of the 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Goyangi, Republic of Korea, 28–30 October 2015; pp. 485–487. [Google Scholar]
- Rose, C.; Britt, J.; Allen, J.; Bevly, D. An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2615–2629. [Google Scholar] [CrossRef]
- Hsieh, J.W.; Chen, L.C.; Chen, D.Y. Symmetrical SURF and its applications to vehicle detection and vehicle make and model recognition. IEEE Trans. Intell. Transp. Syst. 2014, 15, 6–20. [Google Scholar] [CrossRef]
- Yusuf, M.M.; Karim, T.; Saif, A.S. A robust method for lane detection under adverse weather and illumination conditions using convolutional neural network. In Proceedings of the ICCA 2020: International Conference on Computing Advancements, Dhaka, Bangladesh, 10–12 January 2020; pp. 1–8. [Google Scholar]
- Nguyen, V.; Kim, H.; Jun, S.; Boo, K. A study on real-time detection method of lane and vehicle for lane change assistant system using vision system on highway. Eng. Sci. Technol. Int. J. 2018, 21, 822–833. [Google Scholar] [CrossRef]
- Khan, M.N.; Ahmed, M.M. Weather and surface condition detection based on road-side webcams: Application of pre-trained convolutional neural network. Int. J. Transp. Sci. Technol. 2022, 11, 468–483. [Google Scholar] [CrossRef]
- Ding, Y.; Xu, Z.; Zhang, Y.; Sun, K. Fast lane detection based on bird’s eye view and improved random sample consensus algorithm. Multimed. Tools Appl. 2017, 76, 22979–22998. [Google Scholar] [CrossRef]
- Ab Ghani, H.; Daud, A.M.; Besar, R.; Sani, Z.M.; Kamaruddin, M.N.; Syahali, S. Lane Detection Using Deep Learning for Rainy Conditions. In Proceedings of the 2023 9th International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia, 15–16 August 2023; pp. 373–376. [Google Scholar]
- Raj, N.; Dutta, K.K.; Kamath, A. Lane Prediction by Autonomous Vehicle in Highway Traffic using Artificial Neural Networks. In Proceedings of the 2021 Fourth International Conference on Electrical, Computer and Communication Technologies (ICECCT), Erode, India, 15–17 September 2021; pp. 1–6. [Google Scholar]
- Krishnaveni, P.; Sutha, J. Novel deep learning framework for broadcasting abnormal events obtained from surveillance applications. J. Ambient. Intell. Humaniz. Comput. 2020, 13, 4123. [Google Scholar] [CrossRef]
- Ouyang, W.; Wang, X. A discriminative deep model for pedestrian detection with occlusion handling. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3258–3265. [Google Scholar]
- Haris, M.; Hou, J.; Wang, X. Lane lines detection under complex environment by fusion of detection and prediction models. Transp. Res. Rec. 2022, 2676, 342–359. [Google Scholar] [CrossRef]
| Sensor | Advantages | Disadvantages |
|---|---|---|
| LiDAR | High resolution; long range; wide FOV | Sensitive to weather; expensive |
| Radar | Long range; detects velocity; suitable for all types of weather | Low resolution; very sensitive |
| Camera | High resolution; detects colors | Sensitive to weather; sensitive to lighting |
| Ultrasonic | Non-invasive; real-time feedback | Low resolution; expensive |
| Ref. | Year | Dataset | LiDAR | Radar | Camera | Adverse Weather |
|---|---|---|---|---|---|---|
| [37] | 2020 | LIBRE | ✓ | - | ✓ | Rain, fog, daytime |
| [51] | 2013 | KITTI | ✓ | - | ✓ | - |
| [54] | 2021 | ACDC | - | - | ✓ | Rain, fog, snow |
| [55] | 2020 | A2D2 | ✓ | - | ✓ | Rain |
| [56] | 2020 | Waymo | ✓ | - | ✓ | Rain, night |
| [57] | 2020 | nuScenes | ✓ | ✓ | ✓ | Rain, night |
| [58] | 2020 | BDD100K | - | - | ✓ | Daytime, night |
| [59] | 2020 | ApolloScape | ✓ | - | - | Rain, night |
| [60] | 2016 | SYNTHIA | - | - | - | Snow |
| [61] | 2016 | NCLT | ✓ | - | ✓ | Snow |
| [62] | 2021 | CADCD | ✓ | - | ✓ | Snow |
| [64] | 2020 | End of the Earth | ✓ | - | ✓ | Snow |
| [65] | 2020 | nuImages | - | - | ✓ | - |
| [66] | 2019 | Argoverse | ✓ | - | ✓ | - |
| [67] | 2019 | Astyx | ✓ | ✓ | ✓ | - |
| [68] | 2020 | DENSE | ✓ | ✓ | ✓ | Rain, snow, fog, night |
| [69] | 2018 | Foggy Cityscapes | - | - | - | Fog/haze |
| [70] | 2020 | Berkeley DeepDrive | - | - | ✓ | Rain, fog, snow, night |
| [71] | 2017 | Mapillary | - | - | ✓ | Rain, fog, snow, night |
| [72] | 2019 | EuroCity | - | - | ✓ | Rain, fog, snow, night |
| [73] | 2017 | Oxford RobotCar | ✓ | ✓ | ✓ | Rain, snow, night |
| [74] | 2020 | A*3D | ✓ | - | ✓ | Rain, night |
| [75] | 2021 | 4Seasons | - | - | ✓ | Rain, night |
| [76] | 2018 | WildDash | - | - | ✓ | Rain, fog, snow, night |
| [77] | 2018 | KAIST Multispectral | - | - | ✓ | Daytime, night |
| [78] | 2021 | RADIATE | ✓ | ✓ | ✓ | Rain, fog, snow, night |
| [79] | 2020 | EU Long-term | ✓ | - | ✓ | Snow, night |
| [80] | 2022 | Boreas | ✓ | ✓ | ✓ | Rain, snow, night |
| [81] | 2020 | DAWN | - | - | - | Rain, snow, fog, sandstorm |
| [82] | 2021 | PVDN | - | - | ✓ | Night |
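Many of the datasets above expose per-frame weather tags, which makes it straightforward to carve out adverse-weather subsets for training or evaluation. The sketch below filters a BDD100K-style label file by its weather attribute; the JSON schema shown (`attributes.weather`, `labels[].box2d`) matches the published BDD100K format, but verify against your release, and the file path is hypothetical.

```python
# Sketch: carve an adverse-weather subset out of BDD100K-style labels.
# Schema assumption: each frame has attributes.weather plus labels with
# category and box2d fields; check your dataset release before relying on it.
import json

ADVERSE = {"rainy", "snowy", "foggy"}

with open("bdd100k_labels_images_train.json") as f:
    frames = json.load(f)

subset = [fr for fr in frames
          if fr.get("attributes", {}).get("weather") in ADVERSE]

# Count annotated vehicles per adverse-weather frame.
for fr in subset[:5]:
    vehicles = [l for l in fr.get("labels", [])
                if l.get("category") in {"car", "truck", "bus"} and "box2d" in l]
    print(fr["name"], fr["attributes"]["weather"], len(vehicles), "vehicles")
```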
| Ref. | Year | Technique | Weather Classes | Accuracy |
|---|---|---|---|---|
| [101] | 2014 | SVM + PHOW | Rain, fog, snow, low light | 82.5% |
| [106] | 2022 | HOG–LBP | Several weather conditions | 96.4% |
| [108] | 2012 | Histogram GDVM | Rain, night | 92.4% |
| [111] | 2019 | Tensor decomposition | Night | 95.82% |
| [113] | 2017 | Particle filtering | Dust, snow | 94.78% |
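Several of the traditional entries above pair hand-crafted features (HOG, LBP) with a linear SVM. The compact sketch below shows that classic pipeline: HOG features extracted from fixed-size patches feed a linear SVM classifier. The directory names and the 64×64 patch size are illustrative assumptions, not the cited papers' settings.

```python
# Compact sketch of the classic HOG + linear-SVM detection pipeline.
# "patches/vehicle" and "patches/background" are hypothetical directories
# of cropped training patches.
import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(path):
    img = resize(imread(path, as_gray=True), (64, 64))
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

X, y = [], []
for label, pattern in [(1, "patches/vehicle/*.png"),
                       (0, "patches/background/*.png")]:
    for path in glob.glob(pattern):
        X.append(hog_features(path))
        y.append(label)

clf = LinearSVC(C=0.01).fit(np.array(X), np.array(y))
# At test time, the classifier is slid across the image at multiple scales,
# and overlapping positive windows are merged by non-maximum suppression.
```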
| Ref. | Year | Technique | Weather Classes | Accuracy |
|---|---|---|---|---|
| [141] | 2023 | YOLOv3 | Several weather conditions | 74.38% (AP) |
| [145] | 2017 | CNN | Nighttime | 98.3% |
| [147] | 2021 | Mask R-CNN | Low light | 85.05% |
| [152] | 2022 | YOLOv5 | Day, night | 96.6% |
| [153] | 2023 | YOLOv7, CNN, Faster R-CNN, SSD, HOG | Rain, fog, snow, sand | 0.73 / 0.883 / 0.71 / 0.657 / 0.70 |
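Rows [145] and [147] target nighttime and low-light scenes. A common, generic preprocessing step in such settings is contrast-limited adaptive histogram equalization (CLAHE) on the lightness channel before running any detector; the sketch below is that generic technique, not the specific enhancement used in the cited papers, and the image paths are hypothetical.

```python
# Sketch: CLAHE on the LAB lightness channel as low-light preprocessing
# before detection. Generic technique; paths are hypothetical placeholders.
import cv2

img = cv2.imread("night_scene.jpg")                  # BGR image
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.merge((clahe.apply(l), a, b))         # equalize lightness only
result = cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)

cv2.imwrite("night_scene_enhanced.jpg", result)      # feed this to the detector
```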
| Ref. | Approach | Practical Implications | KPIs | Challenges | Addressing Challenges |
|---|---|---|---|---|---|
| [145] | CNN-based Human Detection at Nighttime | Enhances pedestrian detection in low-light conditions, improving safety and navigation for AVs. | Detection Accuracy, False Negative Rate, Computational Efficiency | Limited visibility, background noise interference | Utilizes CNNs trained on nighttime images to recognize pedestrian features, potentially reducing false negatives. |
| [147] | Optimized Mask R-CNN for Low-Light Pedestrian Detection | Tailors the Mask R-CNN algorithm for low-light environments, maintaining high detection rates. | Detection Accuracy, Precision, Recall, Real-time Processing | Optimization for low-light conditions, computational complexity | Adjusts model parameters, uses data augmentation, and employs hardware acceleration for real-time processing. |
| [152] | YOLO-v5 for Pedestrian Detection in Daytime and Nighttime | Demonstrates the versatility of YOLO-v5 across different lighting conditions, ensuring consistent performance. | Detection Speed, Accuracy, Robustness | Balancing speed with accuracy in varying lighting | Fine-tunes the YOLO-v5 model on diverse datasets, ensuring generalization across different lighting scenarios. |
| [153] | DL Approaches for Adverse Weather Detection | Improves detection reliability in adverse weather, crucial for AV safety and functionality. | Detection Accuracy, Robustness, System Reliability | Degradation of performance due to weather conditions | Uses DL models that learn to recognize patterns despite weather distortions and integrate multi-sensor data for enhanced detection. |
| Ref. | Approaches | Technique | Superiority | Limitations | Additional Considerations |
|---|---|---|---|---|---|
| [154,155] | Color-based Lane Detection | Traditional | Simple to implement, low computational cost | Limited in adverse weather; relies on good weather conditions for high accuracy | Research into adaptive color models or fusion with other sensor data could improve performance in challenging conditions. |
| [158,159] | LiDAR Integration | Traditional | Reduces reliance on visual data, robust in various weather | High cost; sensitivity to unfavorable weather conditions. | Cost reduction and miniaturization of LiDAR sensors could make this technology more accessible for widespread use. |
| [160] | HSV Color Space | Traditional | Works well in daylight or with white light; uses color correspondence | Requires global thresholding; less effective in tunnel environments with distinct color illumination. | Enhancing the color model with machine learning could improve its adaptability to different lighting conditions. |
| [161] | SafeDrive Technique | Traditional | Utilizes historical data and vehicle location for lane detection in low-light areas | May not generalize well to all environments; relies on available historical data. | Incorporating real-time weather data and vehicle dynamics could improve the technique's robustness. |
| [163] | CNN-based Real-time Road Condition Detection | DL | High accuracy with transfer learning; automates real-time condition recognition | Requires pre-trained models; may not adapt quickly to new conditions | Continuous learning and model updating could help maintain high accuracy as conditions change. |
| [165] | Lane Marker Detection in Rainy Weather | DL | Prioritizes feature selection to counteract rain-induced blurriness | May require extensive training data; complex model architecture | Using smaller, more specialized models could reduce computational demands and improve real-time performance. |
| [141] | Generative Adversarial Networks (GANs) | DL | Can generate realistic data for training, improve feature extraction | Requires significant computational resources; may struggle with certain types of adverse weather. | Research into GANs for adverse weather simulation could provide more robust training data for DL models. |
| [152] | Multispectral Imaging | DL | Combines different imaging modalities for improved detection | Requires specialized hardware; may be complex to integrate | Further development of multispectral imaging techniques could lead to more reliable detection systems. |
| [163] | Transfer Learning | DL | Allows models to adapt to new tasks with fewer data | May not perform as well as task-specific models if the transfer is not well-aligned. | Fine-tuning transfer learning models for specific AV tasks could enhance their effectiveness. |
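The color- and edge-based lane approaches in rows [154,155] and [160] typically reduce to an edge map plus a Hough transform over a region of interest in front of the vehicle. The minimal sketch below shows that traditional pipeline; the Canny thresholds, Hough parameters, and trapezoidal region of interest are illustrative assumptions that must be tuned per camera setup, and the image path is hypothetical.

```python
# Minimal sketch of the traditional Canny + Hough lane-detection pipeline.
# Thresholds, ROI geometry, and the image path are illustrative assumptions.
import cv2
import numpy as np

img = cv2.imread("highway.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

# Keep only a trapezoidal region in front of the vehicle.
h, w = edges.shape
roi = np.zeros_like(edges)
poly = np.array([[(0, h), (w // 2 - 50, h // 2 + 40),
                  (w // 2 + 50, h // 2 + 40), (w, h)]], dtype=np.int32)
cv2.fillPoly(roi, poly, 255)
edges = cv2.bitwise_and(edges, roi)

# Probabilistic Hough transform returns candidate lane segments.
lines = cv2.HoughLinesP(edges, 2, np.pi / 180, 50,
                        minLineLength=40, maxLineGap=100)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("highway_lanes.jpg", img)
```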
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).