Deep Learning-Based Object Detection and Scene Perception under Bad Weather Conditions
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Proposed Scheme
3.2. Imagery Annotation and Model Training
3.3. Testing and Validation Using Simulated Datasets
4. Performance and Evaluation
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Bansal, P.; Kockelman, K.M. Are we ready to embrace connected and self-driving vehicles? A case study of Texans. Transportation 2018, 45, 641–675. [Google Scholar] [CrossRef]
- Krishnaveni, P.; Sutha, J. Novel deep learning framework for broadcasting abnormal events obtained from surveillance applications. J. Ambient Intell. Humaniz. Comput. 2020, 1–15. [Google Scholar] [CrossRef]
- Ahad, M.A.; Paiva, S.; Tripathi, G.; Feroz, N. Enabling technologies and sustainable smart cities. Sustain. Cities Soc. 2020, 61, 102301. [Google Scholar] [CrossRef]
- Chehri, H.; Chehri, A.; Saadane, R. Traffic signs detection and recognition system in snowy environment using deep learning. In Proceedings of the Third International Conference on Smart City Applications; Springer: Cham, Switzerland, 2020; pp. 503–513. [Google Scholar]
- Peppa, M.V.; Bell, D.; Komar, T.; Xiao, W. Urban traffic flow analysis based on deep learning car detection from CCTV image series. In Proceedings of the ISPRS TC IV Mid-Term Symposium “3D Spatial Information Science–The Engine of Change”; Newcastle University: Newcastle upon Tyne, UK, 2018. [Google Scholar]
- Bahlmann, C.; Zhu, Y.; Ramesh, V.; Pellkofer, M.; Koehler, T. A system for traffic sign detection, tracking, and recognition using color, shape, and motion information. In Proceedings of the IEEE Intelligent Vehicles Symposium, Las Vegas, NV, USA, 6–8 June 2005; pp. 255–260. [Google Scholar]
- Pawełczyk, M.Ł.; Wojtyra, M. Real world object detection dataset for quadcopter unmanned aerial vehicle detection. IEEE Access 2020, 8, 174394–174409. [Google Scholar] [CrossRef]
- Yahiaoui, M.; Rashed, H.; Mariotti, L.; Sistu, G.; Clancy, I.; Yahiaoui, L.; Kumar, V.R.; Yogamani, S. Fisheyemodnet: Moving object detection on surround-view cameras for autonomous driving. arXiv 2019, arXiv:1908.11789. [Google Scholar]
- Hu, L.; Ni, Q. IoT-driven automated object detection algorithm for urban surveillance systems in smart cities. IEEE Internet Things J. 2017, 5, 747–754. [Google Scholar] [CrossRef] [Green Version]
- Bau, D.; Zhu, J.-Y.; Strobelt, H.; Lapedriza, A.; Zhou, B.; Torralba, A. Understanding the role of individual units in a deep neural network. Proc. Natl. Acad. Sci. USA 2020, 117, 30071–30078. [Google Scholar] [CrossRef] [PubMed]
- Albawi, S.; Mohammed, T.A.; Al-Zawi, S. Understanding of a convolutional neural network. In Proceedings of the 2017 International Conference on Engineering and Technology (ICET), Antalya, Turkey, 21–23 August 2017; pp. 1–6. [Google Scholar]
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Zhang, X.; Sun, J. Object Detection Networks on Convolutional Feature Maps. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1476–1481. [Google Scholar] [CrossRef] [Green Version]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [Green Version]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Gala, G.; Chavan, G.; Desai, N. Image Processing Based Driving Assistant System. Iconic Res. Eng. J. 2020, 3, 171–174. [Google Scholar]
- Chehri, A.; Sharma, T.; Debaque, B.; Duclos, N.; Fortier, P. Transport Systems for Smarter Cities, a Practical Case Applied to Traffic Management in the City of Montreal. In Sustainability in Energy and Buildings; Springer: Singapore, 2021; pp. 255–266. [Google Scholar]
- Delforouzi, A.; Pamarthi, B.; Grzegorzek, M. Training-based methods for comparison of object detection methods for visual object tracking. Sensors 2018, 18, 3994. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Hayouni, A.; Debaque, B.; Duclos-Hindié, N.; Florea, M. Towards Cognitive Vehicles: GNSS-free Localization using Visual Anchors. In Proceedings of the 2020 IEEE 23rd International Conference on Information Fusion (FUSION), Rustenburg, South Africa, 6–9 July 2020; pp. 1–8. [Google Scholar]
- Ahmed, I.; Jeon, G.; Chehri, A.; Hassan, M.M. Adapting Gaussian YOLOv3 with transfer learning for overhead view human detection in smart cities and societies. Sustain. Cities Soc. 2021, 70, 102908. [Google Scholar] [CrossRef]
- Al-qaness, M.A.A.; Abbasi, A.A.; Fan, H.; Ibrahim, R.A.; Alsamhi, S.H.; Hawbani, A. An improved YOLO-based road traffic monitoring system. Computing 2021, 103, 211–230. [Google Scholar] [CrossRef]
- Kasper-Eulaers, M.; Hahn, N.; Berger, S.; Sebulonsen, T.; Myrland, Ø.; Kummervold, P.E. Detecting Heavy Goods Vehicles in Rest Areas in Winter Conditions Using YOLOv5. Algorithms 2021, 14, 114. [Google Scholar] [CrossRef]
- Yang, Y.; Cai, L.; Wei, H.; Qian, T.; Gao, Z. Research on Traffic Flow Detection Based on Yolo V4. In Proceedings of the 2021 16th International Conference on Computer Science & Education (ICCSE), Lancaster, UK, 17–21 August 2021; pp. 475–480. [Google Scholar]
- Lee, H.-J.; Chen, S.-Y.; Wang, S.-Z. Extraction and recognition of license plates of motorcycles and vehicles on highways. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 26 August 2004; IEEE: Piscataway, NJ, USA, 2004; Volume 4, pp. 356–359. [Google Scholar]
- De Oliveira, M.B.W.; Neto, A.D.A. Optimization of traffic lights timing based on multiple neural networks. In Proceedings of the 2013 IEEE 25th International Conference on Tools with Artificial Intelligence, Herndon, VA, USA, 4–6 November 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 825–832. [Google Scholar]
- Comelli, P.; Ferragina, P.; Granieri, M.N.; Stabile, F. Optical recognition of motor vehicle license plates. IEEE Trans. Veh. Technol. 1995, 44, 790–799. [Google Scholar] [CrossRef]
- Dharamadhat, T.; Thanasoontornlerk, K.; Kanongchaiyos, P. Tracking object in video pictures based on background subtraction and image matching. In Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics, Washington, DC, USA, 22–25 February 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 1255–1260. [Google Scholar]
- Cancela, B.; Ortega, M.; Penedo, M.G.; Fernández, A. Solving multiple-target tracking using adaptive filters. In Proceedings of the International Conference Image Analysis and Recognition; Springer: Berlin/Heidelberg, Germany, 2011; pp. 416–425. [Google Scholar]
- Sekar, G.; Deepika, M. Complex background subtraction using Kalman filter. Int. J. Eng. Res. Appl. 2015, 5, 15–20. [Google Scholar]
- Rabiu, H. Vehicle detection and classification for cluttered urban intersection. Int. J. Comput. Sci. Eng. Appl. 2013, 3, 37. [Google Scholar] [CrossRef]
- Wang, K.; Liang, Y.; Xing, X.; Zhang, R. Target detection algorithm based on gaussian mixture background subtraction model. In Proceedings of the 2015 Chinese Intelligent Automation Conference; Springer: Berlin/Heidelberg, Germany, 2015; pp. 439–447. [Google Scholar]
- Sun, Z.; Bebis, G.; Miller, R. Monocular precrash vehicle detection: Features and classifiers. IEEE Trans. Image Process. 2006, 15, 2019–2034. [Google Scholar] [PubMed]
- Junior, O.L.; Nunes, U. Improving the generalization properties of neural networks: An application to vehicle detection. In Proceedings of the 2008 11th International IEEE Conference on Intelligent Transportation Systems, Beijing, China, 12–15 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 310–315. [Google Scholar]
- Roboflow. How to Train YOLOv5 on Custom Objects. Available online: https://public.roboflow.com/object-detection/self-driving-car (accessed on 5 April 2020).
- Fang, Y.; Guo, X.; Chen, K.; Zhou, Z.; Ye, Q. Accurate and Automated Detection of Surface Knots on Sawn Timbers Using YOLO-V5 Model. BioResources 2021, 16, 5390–5406. [Google Scholar] [CrossRef]
References | Proposed Scheme | Techniques Implemented | Advantages | Implementation Challenges |
---|---|---|---|---|
[22] | Real-time road traffic management is performed using an improved YOLOv3 model. | It is a convolutional neural network-based approach for the traffic analysis system; publicly available online datasets are used to train the proposed neural network model, and real video sequences of road traffic are used to test the performance of the proposed system. | The trained neural network improves vehicle detection, lowers cost, and has modest hardware requirements. Large-scale construction or installation work is not required. | The neural network-based model often produces false detections (false positives) due to incorrect input ranges.
[23] | A YOLOv5-based approach with transfer learning is utilized. | The proposed solution detects heavy goods vehicles at rest areas during winter to allow real-time prediction of parking spot occupancy in snowy conditions. | Snowy conditions and the polar night in winter typically pose challenges for image recognition; thermal network cameras can be used to address this problem. | The model faces restrictions when analyzing images from small-angle cameras to detect objects that occur in groups with many overlaps and cut-offs. Detecting certain characteristic image features can improve the model.
[24] | A YOLOv4 network model is used to monitor traffic flow. | The YOLOv4 network model is modified to increase the number of convolutions after the feature layer. | Captures more global and higher semantic-level feature information; more accurate than the original YOLOv4 model. | Increases the network complexity; the average detection time of the proposed model is slower than that of the original model.
[25,26,27] | Vehicle search is performed by detecting registration plates. | Neural networks, a block-difference method, and optical recognition techniques are used to detect moving objects. | Simplest in terms of recognition algorithms because of the contrast between the background and the characters and the limited number of characters. | This approach does not allow detecting vehicles that have no license plates (e.g., bicycles) or whose plates are located in nonstandard areas (such as cars with temporary numbers).
[28,29,30,31,32] | Background subtraction-based implementation combined with blob analysis, a Kalman filter, and a Gaussian Mixture Model (GMM). | Background subtraction: vehicle detection is implemented by segmenting moving objects, i.e., subtracting the dynamic component (moving objects) from the static background of the image (a minimal sketch follows this table). | It is efficient in computation time and storage, and it is the simplest and most popular approach. | Processing data in dense traffic conditions leads to vehicle fusion due to partial occlusion in the processed image data. As a result, an incorrect bounding box may be predicted.
[33] | Offline YOLO-based training method for object detection; a support vector machine is used with the Haar wavelet features. | The offline tracker uses the detector for object detection in still images, and a tracker based on a Kalman filter then associates the objects across video frames. | Offline YOLO trackers show more stability and provide improved performance; they are faster with a Kalman filter than the other trackers. | YOLO is not suited to online tracking because, in this case, it is very slow during the training phase.
[34] | An approach based on multilayer neural networks is used, and the network is trained with a new algorithm: Minimization of Inter-class Interference (MCI). | The proposed algorithm creates a hidden space (i.e., feature space) in which the patterns have a desirable statistical distribution. | Simplicity and robustness make real-time applications possible. | In the neural architecture, the linear output layer is replaced by the Mahalanobis kernel to improve generalization, and disturbing images are used; therefore, this approach is time-consuming.
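To make the background-subtraction pipeline summarized for [28,29,30,31,32] concrete, the sketch below combines a Gaussian Mixture Model background subtractor with simple blob analysis in OpenCV. It is an illustration only, not the implementation from those papers; the video path `traffic.mp4`, the subtractor parameters, and the blob-area threshold are assumed values.

```python
# Minimal sketch (illustrative): GMM background subtraction + blob analysis.
import cv2

cap = cv2.VideoCapture("traffic.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500, varThreshold=16, detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Subtract the learned static background; moving pixels become foreground.
    mask = subtractor.apply(frame)

    # Drop shadow pixels (marked 127 by MOG2) and clean up noise.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(
        mask, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

    # Blob analysis: each sufficiently large contour is a vehicle candidate.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # assumed minimum blob area
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In dense traffic, partially occluded vehicles merge into a single foreground blob, which is exactly the incorrect-bounding-box failure mode noted in the table.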
Class | Images | Labels | Precision | Recall | mAP (0.5) |
---|---|---|---|---|---|
all | 239 | 1520 | 0.474 | 0.337 | 0.258 |
biker | 239 | 27 | 0.438 | 0.0741 | 0.0811 |
car | 239 | 1066 | 0.528 | 0.726 | 0.723 |
pedestrian | 239 | 156 | 0.22 | 0.308 | 0.186 |
trafficLight | 239 | 41 | 0.308 | 0.415 | 0.297 |
trafficLight-Green | 239 | 42 | 0.133 | 0.476 | 0.0956 |
trafficLight-GreenLeft | 239 | 4 | 1 | 0 | 0.00756 |
trafficLight-Red | 239 | 91 | 0.378 | 0.714 | 0.468 |
trafficLight-RedLeft | 239 | 24 | 0.2 | 0.0833 | 0.128 |
trafficLight-Yellow | 239 | 12 | 1 | 0 | 0.0169 |
truck | 239 | 57 | 0.532 | 0.578 | 0.573 |
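In the table above, the row labeled "all" is the unweighted (macro) average of the per-class values. The short check below reproduces it from the table entries; it is purely illustrative, with the numbers copied verbatim from the table.

```python
# Illustrative check: the "all" row equals the macro (unweighted) mean
# of the per-class precision, recall, and mAP(0.5) values.
per_class = {
    #                         (precision, recall, mAP(0.5))
    "biker":                  (0.438, 0.0741, 0.0811),
    "car":                    (0.528, 0.726,  0.723),
    "pedestrian":             (0.22,  0.308,  0.186),
    "trafficLight":           (0.308, 0.415,  0.297),
    "trafficLight-Green":     (0.133, 0.476,  0.0956),
    "trafficLight-GreenLeft": (1.0,   0.0,    0.00756),
    "trafficLight-Red":       (0.378, 0.714,  0.468),
    "trafficLight-RedLeft":   (0.2,   0.0833, 0.128),
    "trafficLight-Yellow":    (1.0,   0.0,    0.0169),
    "truck":                  (0.532, 0.578,  0.573),
}

n = len(per_class)
macro = [sum(row[i] for row in per_class.values()) / n for i in range(3)]
print("all: precision %.3f, recall %.3f, mAP(0.5) %.3f" % tuple(macro))
# Expected output: all: precision 0.474, recall 0.337, mAP(0.5) 0.258
```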
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).