Article

Application and Comparison of Deep Learning Methods to Detect Night-Time Road Surface Conditions for Autonomous Vehicles

Hongyi Zhang, Rabia Sehab, Sheherazade Azouigui and Moussa Boukhnifer
1 Ecole Supérieure des Techniques Aéronautiques et de Construction Automobile, 12 Avenue Paul Delouvrier (RD10), 78180 Montigny-le-Bretonneux, France
2 Ecole Supérieure des Techniques Aéronautiques et de Construction Automobile, Parc Universitaire Laval-Changé, Rue Georges Charpak, 53000 Laval, France
3 Institut d’Optique Graduate School, 2 Avenue Augustin Fresnel, 91120 Palaiseau, France
4 LCOMS, Université de Lorraine, 57000 Metz, France
* Author to whom correspondence should be addressed.
Electronics 2022, 11(5), 786; https://doi.org/10.3390/electronics11050786
Submission received: 2 February 2022 / Revised: 22 February 2022 / Accepted: 1 March 2022 / Published: 3 March 2022
(This article belongs to the Special Issue AI-Based Autonomous Driving System)

Abstract: Currently, road surface conditions ahead of autonomous vehicles are not well detected by their existing on-board sensors. However, driving safety must be ensured under weather-induced road conditions, both day and night. Deep learning has previously been investigated to recognize daytime road surface conditions using data collected from a camera embedded at the front of the vehicle. To date, however, such deep learning models have only been proven successful during the day and have not been assessed for night conditions. The objective of this work is to propose deep learning models that detect, online and with high accuracy, weather-induced road surface conditions ahead of autonomous vehicles at night. For this study, different deep learning models, namely traditional CNN, SqueezeNet, VGG, ResNet, and DenseNet models, are applied and their performances compared. Considering the current limitations of night-time detection, the reflection features of different road surfaces are investigated in this paper. Based on these features, night-time databases are collected with and without ambient illumination. These databases are collected from several public videos in order to make the selected models applicable to more scenes. The selected models are then trained on the collected databases. Finally, in the validation, the accuracy of these models in classifying dry, wet, and snowy road surface conditions at night reaches up to 94%.

1. Introduction

In recent years, autonomous driving technology has developed rapidly. In order to ensure that passengers have a safe and comfortable experience with autonomous vehicles, advanced obstacle detection systems have to be implemented. One important issue for obstacle detection systems is detecting the weather-induced road surface conditions ahead of the vehicle, such as dry, wet, icy, and snowy road surfaces. According to [1,2], the risk of traffic accidents is significantly related to weather conditions. In Europe, according to data from the European Road Safety Observatory, 29% of fatalities in 2016 occurred in non-dry conditions (including rain, fog, snow, etc.) [3]. These surface conditions, especially icy and snowy ones, decrease road adhesion, which increases the braking distance. Thus, distinguishing road surface conditions is crucial for safe autonomous driving. Once the road surface conditions are recognized, the autonomous vehicle can brake in advance, which increases its safety.
Investigations into the recognition of road surface conditions have been conducted since the 1990s [4,5,6]. For several years, research focused on methods to detect road conditions in an attempt to decrease accidents caused by slippery roads. A promising technique for road condition classification exploits the variation in intensity of NIR (Near Infra-Red) light scattered from the road surface. The authors of [7,8] investigated the feasibility of NIR systems to recognize road surface conditions, proving that a tri-wavelength source in the NIR band can distinguish the road surface conditions. However, such a system has the disadvantage of high cost.
In recent years, the growth of computing performance and the development of machine learning algorithms have made it possible to analyze and extract information related to road surface conditions easily [9]. In the literature, Ref. [10] exploited data from a weather station to forecast the road surface condition on rainy days. Ref. [11] exploited the Model of the Environment and Temperature of Road (METRo) to forecast icy road surface conditions. Other references focused on data captured from cameras, as video is the most intuitive information source for distinguishing road surface conditions and has good availability. Ref. [12] proposed a texture-based model to detect wet road surface conditions. Refs. [13,14] used an SVM (Support Vector Machine) to classify road surface conditions, reaching a classification accuracy of 90%. In [15], SVM and Naive Bayes classifiers were compared, and SVM was concluded to be more accurate. Ref. [16] achieved a classification accuracy of 97% with the SqueezeNet model, one of the deep learning models. Ref. [17] went further, pruning the SqueezeNet model to reduce its computational complexity without significantly decreasing its prediction accuracy. In addition, DenseNet121 and AlexNet/CaffeNet were exploited in [18,19] to classify road surfaces or estimate road friction. In [20], a CNN-based model, RCNet, was proposed to classify road surface conditions with an accuracy of 99%. Furthermore, to address database imbalance, a CycleGAN was proposed in [21] to artificially generate images of wet and snow-covered roads, which could help road surface algorithms achieve better performance.
The references above mainly focus on road surface conditions during the day. In real life, however, slippery road surface conditions at night are more difficult to detect because of the limited lighting: at night, the contrast of the images captured by the camera is reduced, and the risk of accidents caused by road surface conditions is higher. An obstacle detection system that recognizes road conditions at night is therefore clearly important for autonomous vehicles. Refs. [22,23] investigated night-time cases via camera images; both papers used the Mahalanobis distance to distinguish road conditions, with accuracies of about 70–80%. Ref. [24] extracted luminance, color information, and texture features from camera images and applied the nearest neighbor method for classification, with an accuracy of about 90%. Compared to the references based on daytime databases, few references have investigated road surface conditions at night. In addition, one limitation of [22,23,24] is that the databases they collected are constrained to one given road, and even the test data were acquired on the same road, which undermines the transferability of their models to other scenes.
In the literature, the deep learning models in [16,18], such as SqueezeNet and DenseNet, were proven successful in classifying road surface conditions during the day. However, their performance in classifying road surface conditions has not been assessed at night. Considering the limitations of classifying road surface conditions at night, we propose the use of deep learning models to classify road surface conditions at night using images captured from cameras on the front of vehicles. This system will be implemented on autonomous vehicles in order to increase driving safety. In [22,23,24], the classification accuracy at night is lower than that reported for daytime models. A possible reason is that images captured at night depend greatly on the road illumination conditions. Illumination sources at night are mainly the headlamps of the vehicle and ambient light sources such as street lamps. These sources occupy different positions relative to the camera, so the image features differ depending on which light source illuminates the road. Thus, we first discuss the features of the images according to the illumination conditions. In addition, as previous works on night road surface conditions used data from a fixed road, we aim to collect data from more scenes, organized by illumination condition, in order to give our models better applicability. The road conditions are classified for each of the illumination conditions discussed. In this work, the road conditions investigated are dry, wet, and snowy. Data for these road conditions are collected from public videos, and the deep learning models are assessed on these data to obtain the best possible accuracy at night. The contributions of the paper are as follows:
  • Different deep learning models, such as traditional CNN, SqueezeNet, VGG, ResNet, and DenseNet models, are proposed and applied to classify road surface conditions at night, which have not been investigated in detail before.
  • Illumination conditions are discussed based on the reflection features. Models are trained separately in different illumination conditions in order to increase accuracy.
  • Data of different scenarios with and without ambient illumination at night are collected.
In this paper, Section 2 is devoted to the investigation of the features of the road surface under different illumination conditions. Section 3 introduces the deep learning models to be applied. In Section 4, databases are collected separately according to these illumination features and pre-processed. Section 5 describes the training and validation setup, and Section 6 presents the performance and comparison of the models. Finally, a discussion is given in Section 7, followed by a conclusion in Section 8.

2. Features of Images Captured at Night

In investigating images captured at night, it is found that the features of the reflected light vary depending on the illumination conditions. On dry and snowy surfaces, light scatters diffusely. Wet conditions behave differently: most of the light undergoes specular reflection, continuing away from the source rather than scattering back toward it, because the water film smooths the rough road surface.
This feature makes wet roads appear quite different under different ambient illuminations. When the road surface is illuminated only by the vehicle’s headlamps, as in Figure 1c, the light emitted from the vehicle is reflected forward and cannot be captured by the camera on the vehicle; the wet road surface therefore looks very dark to the vehicle-mounted camera. This is due to the specular reflection on the wet road surface. When the wet road is illuminated by ambient light sources, such as street lamps, in addition to the headlamps, as in Figure 1d, the light emitted by the street lamps is captured by the camera through specular reflection on the road surface, making the images brighter than in cases without an ambient light source. As for dry and snowy road conditions, presented in Figure 1a,b, the road surface is rough, so the vehicle-mounted camera can capture the light scattered by the road surface under either illumination condition.
Figure 2 presents examples of different road conditions with and without ambient light illumination. When the road surface is illuminated only by headlamps, the luminance in the image is rather low for dry road surfaces, as illustrated by Figure 2a; this is because the color of asphalt is usually dark. Luminance is high for snowy road surfaces, as shown in Figure 2c, because the road becomes bright or white. The wet road surface appears very dark in the image, even darker than dry road surfaces, as shown in Figure 2b, owing to specular reflection. In contrast, when the road is illuminated by ambient light in addition to headlamps, the images of dry and snowy road surfaces are similar to those in the previous case, as shown in Figure 2d,f, while images of wet road surfaces present a very bright reflection of the street lamps, as shown in Figure 2e. These two illumination conditions typically correspond to urban areas and suburban areas (e.g., the countryside and highways), respectively. Thus, according to these features, we collect the data under two illumination conditions: with and without ambient illumination.

3. Description of the Models

The models that are suggested to detect road surface conditions at night are CNN, SqueezeNet, VGG, ResNet50, and DenseNet.

3.1. CNN Models

CNN models are well known for hybrid feature abstraction from images [25]. The model is composed of convolution layers, pooling layers, and a final dense layer after flattening. Each convolution layer is followed by a pooling layer with an activation function, and the dense layer follows the last pooling layer.
This model requires the number of convolution layers and the filter size to be specified. In order to obtain the best performance, models with one to three consecutive convolution layers are investigated. In addition, filter sizes from 3 to 17 are tested in steps of 2. As a result, the initial filter size that gives the maximum test accuracy is [7 × 7 × 8]. Figure 3 shows an example of a CNN model with three consecutive convolution layers and the selected filter.
This CNN model is designed with an input image of size [256 × 256 × 6], where the six channels are the six color channel features of the images. The sizes of the three filters are, respectively, [7 × 7 × 8], [7 × 7 × 16], and [7 × 7 × 32]. Each max-pooling layer reduces the resolution of the convolved image by half and is followed by a ReLU (Rectified Linear Unit) activation function and a batch normalization operation with a dropout probability of 0.8. The loss function used for back-propagation training is the cross-entropy loss. The output of the model has dimension [1 × 1 × 3], representing the probabilities that the image belongs to the dry, wet, or snowy condition, respectively; the final predicted class label is the class with the maximum predicted probability.
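To make the architecture concrete, the following is a minimal sketch of this three-convolution-layer CNN in Keras/TensorFlow. The paper does not state its framework, so that choice, the "same" padding, and the flatten-plus-softmax head are assumptions here; the filter sizes, pooling, dropout probability, and output dimension follow the text.

```python
# Minimal sketch of the three-convolution-layer CNN described above.
# Framework (Keras/TensorFlow), padding, and dense head are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(256, 256, 6), num_classes=3):
    inputs = layers.Input(shape=input_shape)          # 6 channels: RGB + HSV
    x = inputs
    for filters in (8, 16, 32):                       # [7x7x8], [7x7x16], [7x7x32]
        x = layers.Conv2D(filters, kernel_size=7, padding="same")(x)
        x = layers.MaxPooling2D(pool_size=2)(x)       # halves the resolution
        x = layers.ReLU()(x)
        x = layers.BatchNormalization()(x)
        x = layers.Dropout(0.8)(x)                    # dropout probability of 0.8
    x = layers.Flatten()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # dry/wet/snowy
    return models.Model(inputs, outputs)

model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
```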

3.2. SqueezeNet Model

This deep learning model was proposed in [16,26] and has proven to be a very efficient model for processing road surface images. It replaces [3 × 3] convolution filters in the AlexNet model with [1 × 1] filters, which leads to fewer input channels. It then uses the Fire module as an expand module: in a Fire module, a [1 × 1] convolution and a [3 × 3] convolution are both applied to the output of the previous layer, and their outputs are concatenated. ReLU is used as the activation layer. The model uses late max-pooling to improve accuracy without requiring additional parameters. Finally, residual connections are applied between layers of the same dimensionality, followed by parameter pruning from ‘Deep Compression’ to further reduce the number of parameters.
In our case, this model is adapted to the input and output dimensions. The input of the model is [256 × 256 × 6], and the tensor dimensions at each layer are as follows: Input: [256 × 256 × 6], Conv1: [128 × 128 × 96], MaxPooling1: [63 × 63 × 96], Fire2: [63 × 63 × 128], Fire3: [63 × 63 × 128], Fire4: [63 × 63 × 256], MaxPooling2: [31 × 31 × 256], Fire5: [31 × 31 × 256], Fire6: [31 × 31 × 384], Fire7: [31 × 31 × 384], Fire8: [31 × 31 × 512], MaxPooling3: [15 × 15 × 512], Fire9: [15 × 15 × 512], Dense (Conv10): [15 × 15 × 3], and Output: [1 × 1 × 3]. Figure 4 shows the adapted structure of the SqueezeNet model.
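As an illustration of the Fire module described above, here is a hedged Keras sketch. The framework choice and the per-module squeeze/expand filter counts are assumptions; the parallel [1 × 1] and [3 × 3] expand convolutions with concatenation and ReLU activations follow the text.

```python
# Sketch of a SqueezeNet Fire module: a 1x1 "squeeze" convolution followed
# by parallel 1x1 and 3x3 "expand" convolutions whose outputs are
# concatenated. Filter counts per module follow [26] and are assumptions here.
from tensorflow.keras import layers

def fire_module(x, squeeze_filters, expand_filters):
    # Squeeze: 1x1 convolution reduces the number of channels.
    s = layers.Conv2D(squeeze_filters, 1, padding="same", activation="relu")(x)
    # Expand: 1x1 and 3x3 convolutions applied in parallel to the squeezed output.
    e1 = layers.Conv2D(expand_filters, 1, padding="same", activation="relu")(s)
    e3 = layers.Conv2D(expand_filters, 3, padding="same", activation="relu")(s)
    # Concatenating along the channel axis doubles the expand width.
    return layers.Concatenate()([e1, e3])
```

For instance, calling fire_module with expand_filters=64 on the [63 × 63 × 96] tensor after MaxPooling1 yields the [63 × 63 × 128] Fire2 output listed above.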

3.3. VGG Model

The VGG model was first introduced in [27]. VGG is a classical convolutional neural network architecture that improves on AlexNet by replacing large kernel-sized filters (11 and 5 in the first and second convolutional layers, respectively) with multiple 3 × 3 kernel-sized filters, one after another. The VGG16 and VGG19 models in [27], which have 16 and 19 layers, respectively, are investigated in our task and their performances compared. Their output layers are replaced by a three-dimensional output for our classification task.

3.4. ResNet Model

The ResNet model was introduced in [28]. To address the problem that deeper neural networks are more difficult to train, a residual learning framework was proposed to ease the training of deep networks: the layers are reformulated as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. It has proven to be a very successful model in pattern recognition. ResNet50 from [28], composed of 50 layers with the ResNet architecture, is investigated in our task considering computational and model complexity. Its output layer is replaced by a three-dimensional output for our classification task.

3.5. DenseNet Model

The DenseNet model was first introduced in [29]. A DenseNet is a type of convolutional neural network that utilizes dense connections between layers through dense blocks, in which all layers with matching feature-map sizes are connected directly to each other. In [18], DenseNet121 was investigated to estimate road friction from images captured by front-facing vehicle cameras during the day. The network structure is initialized as in [29], with a basic convolution layer followed by four dense blocks and three transition layers, while the output layer is replaced by a three-dimensional output for our classification task.
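As a sketch of how an off-the-shelf backbone can be adapted to the three-class output, analogous to the output replacement applied to VGG and ResNet50, the following uses the DenseNet121 provided by Keras applications. Whether the paper trains from scratch or from pretrained weights is not stated, so weights=None is an assumption.

```python
# Sketch of replacing the output of an off-the-shelf DenseNet121 with a
# three-class head. Keras/TensorFlow and training from scratch (weights=None)
# are assumptions; the paper states neither.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet121(include_top=False, weights=None,
                                         input_shape=(224, 224, 3))
x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(3, activation="softmax")(x)   # dry / wet / snowy
model = models.Model(base.input, outputs)
```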
Each of these models will be assessed on individual databases with and without ambient light illumination.

4. Database and Pre-Processing

The models above must be evaluated on a database. As no existing database focuses on road conditions at night, an individual database needs to be collected. In order to ensure that the models can be applied to as many scenes as possible, the database is collected from several YouTube videos. As shown in Section 2, images taken with and without ambient light illumination differ considerably. Therefore, separate databases are collected for the two illumination conditions, and the models are trained on them separately. We found that videos with ambient illumination are generally taken in urban areas, while videos without ambient illumination are generally taken in the countryside or on highways. Images are extracted from the video frames with an interval of at least one second between frames, to ensure variability within the database, and are labeled manually.
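A minimal sketch of this frame-sampling step with OpenCV follows; the file paths, naming scheme, and exact sampling logic are assumptions, with only the one-second minimum interval taken from the text.

```python
# Sketch of sampling one frame per second from a video with OpenCV, as done
# when building the database. Paths and naming are assumptions.
import cv2

def extract_frames(video_path, out_dir, interval_s=1.0):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if FPS is missing
    step = max(1, int(round(fps * interval_s)))    # frames per sampling interval
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:                        # keep one frame per interval
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```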
Table 1 presents the number of images in each class for training and validation under the different illumination conditions. To test the feasibility of applying the models to other scenes, the validation database is collected from different videos. The original resolution of the images is [1920 × 1080] or [1280 × 720]; the images are then resized to match their respective model. The color space of the original images is RGB (Red, Green, Blue). For the CNN and SqueezeNet models, the images are resized to [256 × 256] and converted into the HSV (Hue, Saturation, Value) color space. By stacking the RGB and HSV representations, each image can be represented as a [256 × 256 × 6] matrix, in which the six channels are Red, Green, Blue, Hue, Saturation, and Value. For the VGG, ResNet50, and DenseNet121 models, the images are resized to [224 × 224], and the input of these models is [224 × 224 × 3], corresponding to the RGB color space. Additionally, histogram-based image equalization is applied to the images in the databases.
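The pre-processing for the CNN and SqueezeNet input can be sketched as follows (OpenCV/NumPy). Which channels are histogram-equalized and how pixel values are normalized are not specified in the paper, so those choices are assumptions here.

```python
# Sketch of the pre-processing described above: resize to 256x256, stack RGB
# and HSV into six channels, and apply histogram equalization. Equalizing
# only the V channel and the 1/255 scaling are assumptions.
import cv2
import numpy as np

def preprocess(bgr_image):
    img = cv2.resize(bgr_image, (256, 256))
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hsv[..., 2] = cv2.equalizeHist(hsv[..., 2])    # equalize the luminance channel
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    six = np.concatenate([rgb, hsv], axis=-1)      # [256 x 256 x 6]: R,G,B,H,S,V
    return six.astype(np.float32) / 255.0
```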

5. Training and Validation

An RTX 3080 GPU is used to train these models. Adam is selected as the optimizer. The learning rates of the CNN, SqueezeNet, ResNet, and DenseNet models are set, after a search, to 0.001. The initial learning rates of the VGG16 and VGG19 models are set to $10^{-6}$; after 10 epochs, the learning rate is expressed as $10^{-6} \times \exp(0.1 \times (10 - \text{epoch}))$. The batch size is 50. In addition, the categorical cross-entropy loss function is chosen.
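The VGG learning-rate schedule above can be implemented, for example, as a Keras callback; this is a sketch under the assumption of a Keras training loop, which the paper does not confirm.

```python
# Constant 1e-6 for the first 10 epochs, then exponential decay
# 1e-6 * exp(0.1 * (10 - epoch)), as described above.
import math
import tensorflow as tf

def vgg_learning_rate(epoch, lr):
    if epoch < 10:
        return 1e-6
    return 1e-6 * math.exp(0.1 * (10 - epoch))

scheduler = tf.keras.callbacks.LearningRateScheduler(vgg_learning_rate)
# model.fit(x_train, y_train, batch_size=50, epochs=..., callbacks=[scheduler])
```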

6. Performance Evaluation

The accuracy of the models is presented in Table 2 and Table 3 for the databases with and without ambient light illumination, respectively. The training accuracy of all models is about 99% for both illumination conditions, while the validation accuracy of all models reaches at least 90%. The DenseNet121 model stands out, with a validation accuracy of 94.08% and 95.46% in the two illumination conditions, whereas the validation accuracy of the other models is around 90–92%. Considering parameter storage, DenseNet121 uses the second-lowest number of parameters, larger only than that of the SqueezeNet model. For example, the VGG models and the 1 convolution layer CNN model require more than 1.5 GB to store their parameters, while DenseNet121 and SqueezeNet only use 80 MB and 8.6 MB, respectively. Another core metric is the test time of the model. DenseNet121 has the longest test time, 41 ms/image, while the other models have test times of 20–35 ms/image. The test time depends on the hardware configuration; these values were measured on the RTX 3080, and on autonomous vehicles the times may vary with the hardware. In our configuration, the test time of DenseNet121 is short enough for an autonomous vehicle to make a decision.
Based on the training and validation, it can be concluded that the DenseNet121 model is the most promising for implementation in autonomous vehicles, with a relatively small storage requirement and good accuracy for the three road surface conditions. As DenseNet121 performs better than the other models tested, its confusion matrix is presented in Figure 5. According to the confusion matrix, the wet road condition is identified most accurately, while some dry and snowy road surfaces are recognized as wet. For each single road surface condition, the accuracy is above 88.93%. With this system, an autonomous vehicle can increase safety when it encounters a slippery road surface.
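For reference, the validation metrics behind Table 2 and Table 3 and the confusion matrices of Figure 5 can be computed as in the following sketch. Here, `model`, `x_val`, and `y_val` are placeholders for a trained model and a labeled validation set, and the class label ordering is an assumption.

```python
# Sketch of the validation step: predicted class = argmax of the
# 3-dimensional softmax output, then a per-class confusion matrix.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

probs = model.predict(x_val)                 # shape [N, 3]
y_pred = np.argmax(probs, axis=1)            # assumed order: 0=dry, 1=wet, 2=snowy

print("validation accuracy:", accuracy_score(y_val, y_pred))
# Row-normalized confusion matrix, comparable to Figure 5.
print(confusion_matrix(y_val, y_pred, normalize="true"))
```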

7. Discussion

In this work, dry, wet, and snowy road surface conditions are investigated. Black ice is not considered, as it is too difficult to identify with the cameras currently applied; consequently, it is hard to find a database containing black ice and to label such images. Compared to existing works on detecting road surface conditions at night, the deep learning models have the advantages of high accuracy and applicability to more scenes. In [22,23,24], data were collected from one given road, and the validation was also performed on the same road; the feasibility of using their models in other scenes was not investigated, which is a limitation. In addition, we collected data in two illumination conditions, whereas [22,23] only examined conditions under street light illumination. Moreover, according to [16], these models can also be applied to the detection of road surface conditions at night using a daytime database.
However, compared with models aimed at daytime conditions, the accuracy at night is lower. In [16,20], the accuracies of the models are 99% and 97%, respectively, which is higher than that found in this work. This may be due to the different databases, but more importantly to the different lighting conditions during day and night: although headlamps and street lamps help to illuminate the road at night, they are much weaker than sunlight, which leads to issues such as a lack of contrast. Considering this limitation of the night-time models, one solution might be to combine deep learning with other sensors. As introduced by [7,8], NIR light can provide more significant differences between road surface conditions. Combining deep learning and NIR systems might therefore be a development trend for classifying road surface conditions at night.

8. Conclusions

In this work, different deep learning models are applied and shown to detect road surface conditions at night with high accuracy. Based on the reflection features of the different road conditions, databases with and without ambient light illumination are collected from public videos. These data increase the reliability of detecting road surface conditions in urban areas as well as in suburban areas such as the countryside or highways. The deep learning models used here show clear advantages in detecting road surface conditions at night compared to the existing literature. CNN, SqueezeNet, VGG, ResNet50, and DenseNet121 models are tested and validated. Comparing the performance of these models, the DenseNet121 model is recommended, with an accuracy reaching 99% in training and 94% in validation. As the proposed models have low computational time complexity for processing test images, they are suitable for real-time, highly reliable prediction of road surface conditions for autonomous vehicles. With these models, the safety of autonomous vehicles at night can be better ensured. In the near future, this system will be implemented and tested on autonomous vehicles in collaboration with the companies Renault and Valeo.

Author Contributions

H.Z. collected the database and developed the code to analyze the data. S.A. guided the development of the model and analysis. H.Z. wrote the manuscript draft. R.S. reviewed and improved the manuscript. M.B. reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ELS Embedded Lighting Systems Chair.

Data Availability Statement

The videos in the databases are available in 2022 at https://www.youtube.com/playlist?list=PLZi94yHOS-GROyCk6zvEnNb0LbCyCpc1O. The resized and pre-processed dataset are available in 2022 at: https://drive.google.com/drive/folders/16HlHUngIGWjtA27vP2MSnWkhXP6h8IFC?usp=sharing.

Acknowledgments

The authors thank the companies Renault and Valeo for their support of the project.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CNN: Convolutional Neural Network
ReLU: Rectified Linear Unit
METRo: Model of the Environment and Temperature of Road
NIR: Near Infra-Red
RCNet: Road Classification Network
VGG: Visual Geometry Group
ResNet: Residual Neural Network
DenseNet: Dense Convolutional Network

References

  1. Zhang, H.; Azouigui, S.; Sehab, R.; Boukhnifer, M.; Balembois, F.; Bedu, F.; Cayol, O.; Beev, K.; Planche, G. Remote sensing techniques to recognize road surface conditions for autonomous vehicles. In Proceedings of the SIA VISION, Paris, France, 17–18 March 2021; pp. 179–184. [Google Scholar]
  2. Bellone, M.; Ismailogullari, A.; Müür, J.; Nissin, O.; Sell, R.; Soe, R.M. Autonomous Driving in the Real-World: The Weather Challenge in the Sohjoa Baltic Project. In Towards Connected and Autonomous Vehicle Highways; Springer: Berlin/Heidelberg, Germany, 2021; pp. 229–255. [Google Scholar]
  3. European Road Safety Observatory. Annual Accident Report 2018; European Road Safety Observatory: Brussels, Belgium, 2018. [Google Scholar]
  4. Fukui, H.; Takagi, J.; Murata, Y.; Takeuchi, M. An image processing method to detect road surface condition using optical spatial frequency. In Proceedings of the Conference on Intelligent Transportation Systems, Boston, MA, USA, 12 November 1997; pp. 1005–1009. [Google Scholar]
  5. Chen, Y. Image analysis applied to black ice detection. In Applications of Artificial Intelligence IX; International Society for Optics and Photonics: Bellingham, WA, USA, 1991; Volume 1468, pp. 551–562. [Google Scholar]
  6. Holzwarth, F.; Eichhorn, U. Non-contact sensors for road conditions. Sens. Actuators A Phys. 1993, 37, 121–127. [Google Scholar] [CrossRef]
  7. Ruiz-Llata, M.; Rodríguez-Cortina, M.; Martín-Mateos, P.; Bonilla-Manrique, O.E.; López-Fernández, J.R. LiDAR design for Road Condition Measurement ahead of a moving vehicle. In Proceedings of the 2017 IEEE SENSORS, Glasgow, UK, 29 October–1 November 2017; pp. 1–3. [Google Scholar]
  8. Casselgren, J.; Sjödahl, M.; LeBlanc, J.P. Model-based winter road classification. Int. J. Veh. Syst. Model. Test. 2012, 7, 268–284. [Google Scholar] [CrossRef]
  9. Singh, K.B.; Arat, M.A. Deep learning in the automotive industry: Recent advances and application examples. arXiv 2019, arXiv:1906.08834. [Google Scholar]
  10. Kim, S.; Lee, J.; Yoon, T. Road surface conditions forecasting in rainy weather using artificial neural networks. Saf. Sci. 2021, 140, 105302. [Google Scholar] [CrossRef]
  11. Smolyakov, D.; Burnaev, E. Software System for Road Condition Forecast Correction. arXiv 2020, arXiv:2003.09957. [Google Scholar]
  12. Amthor, M.; Hartmann, B.; Denzler, J. Road condition estimation based on spatio-temporal reflection models. In German Conference on Pattern Recognition; Springer: Aachen, Germany, 2015; pp. 3–15. [Google Scholar]
  13. Zhao, J.; Wu, H.; Chen, L. Road surface state recognition based on SVM optimization and image segmentation processing. J. Adv. Transp. 2017, 2017, 6458495. [Google Scholar] [CrossRef]
  14. Omer, R.; Fu, L. An automatic image recognition system for winter road surface condition classification. In Proceedings of the 13th International IEEE Conference on Intelligent Transportation Systems, Funchal, Portugal, 19–22 September 2010; pp. 1375–1379. [Google Scholar]
  15. Marianingsih, S.; Utaminingrum, F. Comparison of support vector machine classifier and Naïve Bayes classifier on road surface type classification. In Proceedings of the 2018 International Conference on Sustainable Information Engineering and Technology (SIET), Malang, Indonesia, 10–12 November 2018; pp. 48–53. [Google Scholar]
  16. Roychowdhury, S.; Zhao, M.; Wallin, A.; Ohlsson, N.; Jonasson, M. Machine learning models for road surface and friction estimation using front-camera images. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, Brazil, 8–13 July 2018; pp. 1–8. [Google Scholar]
  17. Fink, D.; Busch, A.; Wielitzka, M.; Ortmaier, T. Resource Efficient Classification of Road Conditions through CNN Pruning. IFAC-PapersOnLine 2020, 53, 13958–13963. [Google Scholar] [CrossRef]
  18. Svensson, E. Transfer Learning for Friction Estimation: Using Deep Reduced Features. Master’s Thesis, Linköping University, Linköping, Sweden, 10 August 2020. [Google Scholar]
  19. Balcerek, J.; Konieczka, A.; Piniarski, K.; Pawłowski, P. Classification of road surfaces using convolutional neural network. In Proceedings of the 2020 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), Poznan, Poland, 23–25 September 2020; pp. 98–103. [Google Scholar]
  20. Dewangan, D.K.; Sahu, S.P. RCNet: Road classification convolutional neural networks for intelligent vehicle system. Intell. Serv. Robot. 2021, 14, 199–214. [Google Scholar] [CrossRef]
  21. Choi, W.; Heo, J.; Ahn, C. Development of Road Surface Detection Algorithm Using CycleGAN-Augmented Dataset. Sensors 2021, 21, 7769. [Google Scholar] [CrossRef] [PubMed]
  22. Shibata, K.; Takeuch, K.; Kawai, S.; Horita, Y. Detection of road surface conditions in winter using road surveillance cameras at daytime, night-time and twilight. Int. J. Comput. Sci. Netw. Secur. 2014, 14, 21. [Google Scholar]
  23. Horita, Y.; Kawai, S.; Furukane, T.; Shibata, K. Efficient distinction of road surface conditions using surveillance camera images in night time. In Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 485–488. [Google Scholar]
  24. Kawai, S.; Takeuchi, K.; Shibata, K.; Horita, Y. A method to distinguish road surface conditions for car-mounted camera images at night-time. In Proceedings of the 2012 12th International Conference on ITS Telecommunications, Taipei, Taiwan, 5–8 November 2012; pp. 668–672. [Google Scholar]
  25. Liu, G.; Han, P.; Niu, Y.; Yuan, W.; Lu, Z.; Wen, J.R. Graph-boosted convolutional neural networks for semantic segmentation. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 612–618. [Google Scholar]
  26. Iandola, F.N.; Han, S.; Moskewicz, M.W.; Ashraf, K.; Dally, W.J.; Keutzer, K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv 2016, arXiv:1602.07360. [Google Scholar]
  27. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  29. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
Figure 1. (a) Dry or snowy road conditions without ambient light illumination. (b) Dry or snowy road conditions with ambient light illumination. (c) Wet road conditions without ambient light illumination. (d) Wet road conditions with ambient light illumination.
Figure 2. Examples of different road conditions. (a–c) are taken without ambient light illumination. (d–f) are taken under ambient light illumination. (a,d) are taken in dry road conditions. (b,e) are taken in wet road conditions. (c,f) are taken in snowy road conditions.
Figure 3. Example of three convolution layer CNN models.
Figure 4. SqueezeNet model [26].
Figure 5. (a) Confusion matrix of DenseNet121 model for database with ambient illumination. (b) Confusion matrix of DenseNet121 model for database without ambient illumination.
Table 1. Number of images in each class for training and validation under different illumination.

| Database   | Illumination condition | Dry  | Wet  | Snow |
|------------|------------------------|------|------|------|
| Training   | With ambient           | 3219 | 3464 | 3510 |
| Training   | Without ambient        | 3722 | 3830 | 3601 |
| Validation | With ambient           | 3722 | 2837 | 4222 |
| Validation | Without ambient        | 6081 | 2809 | 4183 |
Table 2. Database with ambient light.

| Models                  | Training Accuracy | Validation Accuracy | Total Parameters | Training Time (min) | Test Time (ms/Image) |
|-------------------------|-------------------|---------------------|------------------|---------------------|----------------------|
| 3 convolution layer CNN | 99.98%            | 90.08%              | 33,592,523       | 13                  | 21                   |
| 2 convolution layer CNN | 99.92%            | 90.72%              | 67,121,707       | 15                  | 22                   |
| 1 convolution layer CNN | 99.90%            | 90.89%              | 134,224,219      | 9                   | 23                   |
| SqueezeNet model        | 100%              | 89.14%              | 751,075          | 22                  | 34                   |
| VGG16                   | 99.71%            | 90.65%              | 134,272,835      | 51                  | 23                   |
| VGG19                   | 99.74%            | 90.17%              | 139,582,531      | 41                  | 24                   |
| ResNet50                | 99.96%            | 92.54%              | 23,593,859       | 12                  | 30                   |
| DenseNet121             | 99.95%            | 94.08%              | 7,040,579        | 19                  | 41                   |
Table 3. Database without ambient light.

| Models                  | Training Accuracy | Validation Accuracy | Total Parameters | Training Time (min) | Test Time (ms/Image) |
|-------------------------|-------------------|---------------------|------------------|---------------------|----------------------|
| 3 convolution layer CNN | 99.69%            | 90.96%              | 33,592,523       | 10                  | 20                   |
| 2 convolution layer CNN | 99.87%            | 90.16%              | 67,121,707       | 13                  | 22                   |
| 1 convolution layer CNN | 99.80%            | 89.96%              | 134,224,219      | 5                   | 24                   |
| SqueezeNet model        | 99.91%            | 93.59%              | 751,075          | 12.5                | 35                   |
| VGG16                   | 99.98%            | 91.65%              | 134,272,835      | 40                  | 23                   |
| VGG19                   | 99.95%            | 91.79%              | 139,582,531      | 45                  | 24                   |
| ResNet50                | 99.88%            | 92.17%              | 23,593,859       | 16                  | 30                   |
| DenseNet121             | 99.99%            | 95.46%              | 7,040,579        | 33                  | 41                   |
