Article

Black Ice Classification with Hyperspectral Imaging and Deep Learning

Department of Electronic Engineering, Yeungnam University, 280 Daehak-ro, Gyeongsan-si 38541, Republic of Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(21), 11977; https://doi.org/10.3390/app132111977
Submission received: 28 August 2023 / Revised: 19 October 2023 / Accepted: 31 October 2023 / Published: 2 November 2023

Abstract

With the development of in-vehicle technologies that connect various sensors to the IoT, a new generation of automation is attracting attention. However, some road hazards remain difficult to detect, and among them one of the highest-risk factors is black ice. A road covered with black ice, which is hard to see from a distance, not only damages the vehicles passing over it but also puts lives at risk, so the detection of black ice is essential. Much research has been carried out on this topic with various sensors and methods; however, hyperspectral imaging has not been used for this particular purpose. Therefore, in this paper, black ice classification is performed for the first time using hyperspectral imaging combined with a deep learning model. With its abundant spectral and spatial information, hyperspectral imaging is well suited to analyzing almost any material. A 2D–3D Convolutional Neural Network (CNN) is used to classify the hyperspectral images of black ice. The spectral data were preprocessed, and the dimension of the image cube was reduced with Principal Component Analysis (PCA). The proposed method was then compared with existing methods for evaluation.

1. Introduction

Black ice is a thin, transparent layer of ice on the road surface formed by snow and rain. When the temperature rises above the freezing point, some ice melts, and when the temperature drops back to freezing or below, the water freezes again. Less commonly, black ice can also form from fog, and it often forms beside the road in places that sunlight cannot reach. Despite its name, the ice itself is not black; because it is transparent, the road surface shows through clearly. This makes it hard to detect for pedestrians and even harder for the IoT sensors embedded in vehicles. By reducing the friction coefficient, black ice makes the road slippery and causes major accidents; it is also more slippery than white ice, which makes it particularly dangerous. One recent news report stated that over 116,000 people in the USA were injured due to roads made slippery by a thin coating of black ice [1].
To detect black ice, various methods have been implemented and proposed, with and without sensors. In this paper, hyperspectral imaging is used to extract an abundant amount of information about it. The purpose of hyperspectral imaging is to obtain a spectral signature for each pixel in the image of a particular scene. While the human eye sees images with three bands, hyperspectral imaging (HSI) captures numerous bands carrying rich amounts of information. One of the best ways to understand a material is therefore to study how light interacts with it, making use of electromagnetic radiation to characterize materials and objects [2]. Like a multispectral image, a hyperspectral image can be defined by three dimensions (X × Y × Z), where X and Y are the spatial dimensions and Z is the spectral dimension [3]. This imaging technique can be seen as the combination of traditional spectroscopy and modern imaging [4,5]. Due to its shape and numerous bands, a hyperspectral image is considered a “data cube”.
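To make the data cube concrete, the sketch below builds a toy cube in numpy and pulls out one pixel’s spectral signature and one band; the sizes and values are purely illustrative, not the cube captured in this work.

```python
import numpy as np

# A toy hyperspectral data cube: X and Y are spatial axes, Z is the spectral
# axis, so each pixel carries one value per band (illustrative sizes only).
X, Y, Z = 640, 480, 200
cube = np.random.rand(X, Y, Z)        # stand-in for real sensor data

# Unlike an RGB image (Z = 3), every pixel holds a full spectrum:
pixel_spectrum = cube[100, 200, :]    # spectral signature at (100, 200), shape (Z,)

# A single band, by contrast, is an ordinary 2D grayscale image:
band_42 = cube[:, :, 42]              # shape (X, Y)
print(pixel_spectrum.shape, band_42.shape)
```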
Since the first work on remote sensing was published, the popularity of the hyperspectral camera has been increasing exponentially [6]. Thus, the areas of application for hyperspectral imaging have also increased, such as analytical techniques for nanoscale materials [7], plant seeds [8], fruit quality and safety [9,10], biological tissues [11], and wound care [12].
In the past decade, the development of deep learning models has advanced rapidly and received widespread attention. Unlike traditional machine learning models, deep learning does not require hand-designed patterns, because it learns patterns from the given data. Thus, it is used in natural language processing, object detection, speech recognition, semantic segmentation, and other branches of computer vision. A general deep learning framework is trained on many samples to tune its numerous parameters. Various deep learning models have been used to handle huge datasets of hyperspectral images. In [13], an autoencoder was used for the classification of hyperspectral images. Similarly, the Recurrent Neural Network [14], the Transformer (SpectralFormer) [15], the Deep Belief Network [16], the Deep Recurrent Neural Network [17], and the Convolutional Neural Network [18] have been used for hyperspectral image classification.
In this paper, a 2D–3D Convolutional Neural Network is used to classify hyperspectral images of black ice; to our knowledge, this is the first approach to black ice classification with hyperspectral imaging data. Considering the survey results from the Korea Transport Institute (Figure 1), the initial aim was to collect data during the daytime. We utilize hyperspectral imaging to capture a richer set of features than RGB or grayscale images provide, and we introduce a custom deep neural network architecture optimized for hyperspectral image data. Although research on black ice detection using computer vision and hyperspectral imaging is limited, we discuss the existing related work in the next section. We compare our proposed method with a state-of-the-art method for hyperspectral image classification in general and with an existing method that uses 2D color images for black ice classification. The process is described in more detail in the methodology and implementation sections that follow.

2. Related Works

Figure 1 shows yearly traffic accidents in Korea caused by black ice, broken down over 24 h [19]. According to the graph, the majority of accidents caused by black ice occur in the morning between 6 a.m. and 10 a.m.; from this officially sourced figure, we can observe that 80–85% of the accidents caused by black ice occur during daylight. In [20], a multi-wavelength, non-contact optical technology was used to detect black ice by pointing out the differences in normalized reflectance under dry, wet, black ice, snowy, and icy conditions. In [21], optical sensors and an infrared thermometer were used to develop a black ice detector. An infrared camera has also been used with deep learning to identify road conditions [22]. Using a millimeter wave (mmWave) sensor with a deep learning model (a 1D Convolutional Neural Network), another detection method achieved 98.2% accuracy [23]. However, mmWave technology is hard to manage in the rain and fog under which black ice forms, as the signal suffers from attenuation in such conditions [24,25]. Compared to hyperspectral images, mmWave data are also more complicated to preprocess and configure.
In this paper, we introduce, for the first time, a hyperspectral imaging system with deep learning to analyze black ice. One advantage of hyperspectral imaging is that it needs no prior information about a sample, and it can exploit spatial relationships among the spectra in a neighborhood, which allows more accurate classification models. In the following sections, we discuss the methodology and implementation details. Due to the lack of research on black ice detection with computer vision and hyperspectral imaging, we compare our proposed method with Fast 3D CNN [18], one of the state-of-the-art methods for hyperspectral image classification.

3. Data Collection

Figure 2 displays the camera setup relative to the black ice and the position of the black ice samples. Figure 2B shows two samples: a visible black ice sample inside a black box and a mostly invisible black ice sample spread on the asphalt. We aim to detect both, as black ice on asphalt is especially hard to see. Because no hyperspectral image data for black ice exist, we collected our own dataset at distances of 10 m, 20 m, and 30 m in broad daylight (08:00–10:00 a.m.) with a Specim HS-CL-30-V10E camera (400–1000 nm); data acquisition was performed with a Spectral DAQ-200. The dataset was labeled in MATLAB, as shown in Figure 3: in the ground truth image, yellow represents the concentrated black ice sample in the box and green represents the black ice spread on the road. After data acquisition, further processing was done as shown in Figure 4, which displays the workflow of the whole classification process; for the main part of this paper, a 2D–3D CNN model was used with the hyperspectral data. Figure 5 gives a visual explanation of Principal Component Analysis (PCA) on hyperspectral image data, and Figure 6 displays the model architecture used for classification.
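Specim captures are commonly stored as ENVI header/raw file pairs, so as an illustrative sketch (the file name is hypothetical, and this is not the acquisition code used in this work), a cube can be loaded in Python with the `spectral` package:

```python
import spectral

# Hypothetical file name; Specim/Spectral DAQ captures are typically stored
# as an ENVI header (.hdr) plus a raw binary cube.
img = spectral.open_image('black_ice_10m.hdr')   # lazy handle to the cube
cube = img.load()                                # in-memory array, (rows, cols, bands)

print(cube.shape)
# Band-center wavelengths in nm, if the header provides them:
print(img.bands.centers[:5])
```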

4. Methodology

Table 1 is a detailed model description with the filters’ dimensions. To implement the model, the hyperspectral image data cube was divided into small, overlapping 3D patches. Following the HybridSN [26] model, the ground-truth label of each patch is decided by the label of its center pixel. From X, the modified input after dimension reduction, the 3D neighboring patches $P \in \mathbb{R}^{S \times S \times B}$ are created after applying traditional Principal Component Analysis (PCA), where S is the window size and B is the reduced number of bands after performing PCA.
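A minimal sketch of this patch extraction is shown below, assuming a PCA-reduced cube of shape (M, N, B) and a ground-truth label map; the function name and looping strategy are our own, and for a full-size image one would normally batch or subsample rather than materialize every patch at once.

```python
import numpy as np

def extract_patches(cube, gt, S=25):
    """Cut overlapping S x S x B patches from a PCA-reduced cube (M, N, B).

    Each patch takes the label of its center pixel, following HybridSN [26].
    Without border padding this yields (M - S + 1) * (N - S + 1) patches,
    matching the count given in the text.
    """
    M, N, _ = cube.shape
    patches, labels = [], []
    for r in range(M - S + 1):
        for c in range(N - S + 1):
            patches.append(cube[r:r + S, c:c + S, :])
            labels.append(gt[r + S // 2, c + S // 2])  # center-pixel label
    return np.asarray(patches), np.asarray(labels)
```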
Principal Component Analysis (PCA) uses an orthogonal transformation to convert observations of correlated variables into a set of linearly uncorrelated variables known as principal components. First, the data are standardized so that each spectral band has a mean of 0 and a standard deviation of 1. Next, the covariance matrix is calculated: a square matrix that captures how each band varies with every other band. We then compute the eigenvectors and eigenvalues to determine the directions and magnitudes of the new space, which lets us select the informative bands for each dataset. The main reason for using PCA was to extract the bands containing the majority of the information, thereby aiding dimension reduction. It avoids dealing with many uninformative bands, potentially improves the model’s accuracy and efficiency, and helps eliminate noise.
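The PCA steps just described can be sketched directly in numpy as follows; this is an illustration of the stated procedure (standardize, covariance, eigen-decomposition, projection), not the exact implementation used in this work.

```python
import numpy as np

def pca_reduce(cube, n_components=10):
    """Reduce the spectral dimension of an (M, N, B) cube to n_components."""
    M, N, B = cube.shape
    flat = cube.reshape(-1, B).astype(np.float64)      # one row per pixel

    # 1. Standardize: zero mean and unit standard deviation per band.
    flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)

    # 2. Covariance between bands (a B x B square matrix).
    cov = np.cov(flat, rowvar=False)

    # 3. Eigen-decomposition; keep eigenvectors with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigh: cov is symmetric
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]

    # 4. Project every pixel onto the principal components.
    reduced = flat @ components
    return reduced.reshape(M, N, n_components)
```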

5. Model Architecture

After applying PCA, we used data augmentation to up-sample the dataset, enhancing both the training quality and the model’s robustness. With X defined as the PCA-reduced data cube, the total number of generated patches is (M − S + 1) × (N − S + 1), where M and N are the input image’s width and height, respectively. The input data for the 2D CNN were convolved with 2D kernels, and the convolved features were passed through a ReLU activation function to introduce non-linearity into the model. The activation value at spatial position $(x, y)$ in the $n$-th feature map of the $m$-th layer can be denoted as [26]:
$$v_{m,n}^{x,y} = \beta\left(b_{m,n} + \sum_{\tau=1}^{d_{l-1}} \sum_{\rho=-\gamma}^{\gamma} \sum_{\sigma=-\delta}^{\delta} w_{m,n}^{\tau,\sigma,\rho} \times v_{m-1,\tau}^{x+\sigma,\, y+\rho}\right)$$
where $\beta$ is the activation function and $b_{m,n}$ is the bias parameter; $d_{l-1}$ is the number of feature maps in the $(l-1)$-th layer and the depth of the kernel $w_{m,n}$ for the $n$-th feature map; $(2\delta + 1)$ is the height and $(2\gamma + 1)$ is the width of the kernel. Similarly, the 3D CNN is convolved with 3D kernels. For this layer, the activation value at spatial position $(x, y, z)$ in the $n$-th feature map of the $m$-th layer can be denoted as
$$v_{m,n}^{x,y,z} = \beta\left(b_{m,n} + \sum_{\tau=1}^{d_{l-1}} \sum_{\lambda=-f}^{f} \sum_{\rho=-\gamma}^{\gamma} \sum_{\sigma=-\delta}^{\delta} w_{m,n}^{\tau,\sigma,\rho,\lambda} \times v_{m-1,\tau}^{x+\sigma,\, y+\rho,\, z+\lambda}\right)$$
where (2f + 1) is the depth of the kernel. Other parameters are the same as in the 2D CNN. The window size and the number of components were changed according to the data cube.
Conv3D is intended for volumetric inputs, such as our data cubes, as opposed to Conv2D, which works with 2D images. The first Conv3D layer applies eight filters of size (2, 2, 3) (consistent with the output shape and parameter count in Table 1) and uses the ReLU activation function to add non-linearity; this layer hunts for low-level features such as corners and edges. The second Conv3D layer has 16 filters of size (3, 3, 5), which extract more intricate spatial characteristics; the depth of the filters enables the model to learn spectral and spatial characteristics simultaneously. After the Conv3D layers, the output is reshaped to compress the final two dimensions (depth and filters) into one, making it suitable to feed into Conv2D layers; this connects the 3D and 2D convolution operations. Conv2D layers apply filters to the 2D input to find patterns in 2D space and are less computationally intensive than Conv3D layers; our model uses 64 filters of size (3, 3), with ReLU activation as in the earlier layers. The Flatten layer then transforms the extracted features into a 1D vector suitable for input into fully connected (dense) layers, in which each neuron is linked to every neuron in the preceding layer; these layers discover the data’s broad patterns. The model employs two dense layers with 256 and 128 units, respectively, both with ReLU activation and Dropout. Dropout is a regularization technique that prevents overfitting by randomly setting a portion of the input units to 0 during training. The model’s final layer, where the actual classification takes place, is a dense layer with a softmax activation function and as many units as there are classes.
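The layer shapes and parameter counts reported in Table 1 can be reproduced with a short Keras sketch; the kernel sizes are inferred from those shapes and counts rather than taken from released code, and the dropout rate is an assumption since it is not reported.

```python
from tensorflow.keras import layers, models

def build_2d3d_cnn(S=25, B=10, n_classes=2):
    """2D-3D CNN sketch consistent with Table 1."""
    inp = layers.Input(shape=(S, S, B, 1))                   # (25, 25, 10, 1)
    x = layers.Conv3D(8, (2, 2, 3), activation='relu')(inp)  # -> (24, 24, 8, 8), 104 params
    x = layers.Conv3D(16, (3, 3, 5), activation='relu')(x)   # -> (22, 22, 4, 16), 5776 params
    # Merge the last two axes (depth 4 x filters 16) into 64 channels for Conv2D.
    x = layers.Reshape((22, 22, 64))(x)
    x = layers.Conv2D(64, (3, 3), activation='relu')(x)      # -> (20, 20, 64), 36,928 params
    x = layers.Flatten()(x)                                  # 25,600 features
    x = layers.Dense(256, activation='relu')(x)              # 6,553,856 params
    x = layers.Dropout(0.4)(x)                               # rate not reported; placeholder
    x = layers.Dense(128, activation='relu')(x)              # 32,896 params
    x = layers.Dropout(0.4)(x)
    out = layers.Dense(n_classes, activation='softmax')(x)   # 258 params
    return models.Model(inp, out)

model = build_2d3d_cnn()
model.summary()   # total: 6,629,818 parameters, matching Table 1
```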
Overall, the input data are initialized as a 3D data cube with height, width, and depth (the number of bands in one hyperspectral image). In a hyperspectral imaging system, every pixel counts as a data point because of its distinct spectral properties, unlike in other 2D images. The 3D convolutions preserve the spectral information of the input data cube, and the 2D convolutions discriminate the spatial information, one of the major components in hyperspectral imaging. For the second part of this paper, an existing method was implemented for comparison: it classifies colored (RGB) black ice images by using data augmentation to increase the data and then applying a CNN [27]. A similar process was implemented with the existing black ice images: the images were augmented and passed through a 2D CNN model, with feature extraction as the first stage and classification as the second.
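For orientation, that comparison pipeline can be approximated as below; the augmentation transforms, input size, and layer widths are placeholders rather than the actual settings of [27], which we do not reproduce here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Placeholder augmentation for the RGB baseline: flips and small rotations.
augment = tf.keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.1),
])

# A small 2D CNN stand-in: convolutional feature extraction, then classification.
rgb_model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),                 # placeholder input size
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(2, activation='softmax'),             # black ice vs. background
])
```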

6. Implementation Details

As stated in Section 3, we collected datasets at distances of 10 m, 20 m, and 30 m. The implementation consists of three settings: (1) we trained our deep learning model on hyperspectral imaging (HSI) data captured at 10 m and, to evaluate its performance and generalization capabilities, tested it on HSI data collected at 20 m and 30 m; (2) we trained the model on HSI data taken at 20 m and tested it on data taken at 10 m and 30 m; and (3) we trained the model on HSI data collected at 30 m and tested it on data collected at 10 m and 20 m. The main aim is to make the model robust across all distances, which lets us delve deeper into data generalization. Section 7 describes the reasoning in more detail.
The model ingests incoming data at the input layer. The input has the shape (25, 25, 10, 1): a 3D patch of dimensions (25, 25, 10) with a single channel. Each hyperspectral image had a spatial size of 1332 × 854, and the number of bands was reduced to 10 using PCA; a window of size 25 × 25 was slid over the image. Since every pixel works as a data point, we obtain (Height − Window height) × (Width − Window width) × 10 data points from one single hyperspectral image. We divided the points into train, test, and validation sets with a 7:2:1 ratio. The final testing for detecting black ice was done on completely unseen images taken from different distances. Our model has 6,629,818 parameters in total; Table 1 lists the parameters by layer together with the input/output shapes. The hyperspectral image is already high dimensional, capturing spectral and spatial information for every pixel, and the augmentation and regularization techniques we used contribute to the large parameter count. Additionally, we used EarlyStopping to halt the training process when needed. The model is designed to generalize robustly to unseen data, which is achieved through data augmentation, and its architecture is optimized to capture both spectral and spatial information. We trained the model for 100 epochs at a learning rate of 0.0002.
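Putting the stated settings together (7:2:1 split, 100 epochs, learning rate 0.0002, EarlyStopping), a training sketch could look as follows; it reuses `patches`, `labels`, and `model` from the sketches above, and the batch size, monitored quantity, and patience are assumptions.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# 7:2:1 train/test/validation split of the patch dataset.
X_train, X_rest, y_train, y_rest = train_test_split(
    patches, labels, train_size=0.7, stratify=labels, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(
    X_rest, y_rest, train_size=2 / 3, stratify=y_rest, random_state=0)  # 20% / 10%

model.compile(optimizer=Adam(learning_rate=0.0002),       # learning rate from the text
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# EarlyStopping halts training once validation loss stops improving.
stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
model.fit(X_train[..., None], y_train,                    # add the channel axis
          validation_data=(X_val[..., None], y_val),
          epochs=100, batch_size=256, callbacks=[stop])
```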

7. Results

7.1. Results and Discussion

To understand a material, a good approach is to observe how light interacts with it. In a hyperspectral image, every pixel can have a different spectral signature, which may hold important information. Therefore, to give a first impression of black ice, spectral reflectance curves from distances of 10 m, 20 m, and 30 m are shown in Figure 7, Figure 8 and Figure 9, respectively, plotting digital count against wavelength. The black curve corresponds to the road and the blue curve to the black ice spread on the road. Even though the black ice on the road is barely visible, the curves clearly differ. For better visualization of the black ice, randomly chosen bands are shown in Figure 10, Figure 11 and Figure 12; the figures display how the features vary across bands.
As mentioned in the methodology, Principal Component Analysis was used for dimension reduction and to avoid possible over-fitting during the deep learning process; one of its main purposes was to choose the relevant bands in the data cube. After observing the first PCA trial, 40 bands were retained for further visualization and implementation. After PCA, the model was structured according to the data dimensions. For the three distances, the accuracy and loss curves are shown in Figure 13, Figure 14 and Figure 15. In this paper, the Kappa Coefficient (KA), Average Accuracy (AA), and Overall Accuracy (OA) are used to judge the classification results. Overall accuracy is the proportion of correctly classified samples, while average accuracy is the mean of the per-class accuracies.
$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$\text{Accuracy} = \frac{TP + TN}{TP + FN + FP + TN}$$

$$\text{F1-score} = \frac{2}{\frac{1}{\text{Precision}} + \frac{1}{\text{Recall}}} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
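As a sketch, these scores, together with the kappa coefficient discussed next, can be computed with scikit-learn from the test predictions; variable names follow the training sketch above.

```python
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             classification_report, cohen_kappa_score)

y_pred = model.predict(X_test[..., None]).argmax(axis=1)

oa = accuracy_score(y_test, y_pred)            # overall accuracy
aa = balanced_accuracy_score(y_test, y_pred)   # average (per-class) accuracy
ka = cohen_kappa_score(y_test, y_pred)         # kappa coefficient

print(f'OA={oa:.2f}  AA={aa:.2f}  KA={ka:.2f}')
print(classification_report(y_test, y_pred))   # per-class precision/recall/F1
```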
Kappa can be defined as a statistical measure of the agreement between the ground truth and the classification map. Table 2, Table 3 and Table 4 present the accuracy scores and statistical measurements for each distance. As mentioned in Section 6, we tested our model across the three distances to identify the optimal dataset. When trained on black ice data captured from a distance of 10 m, the values of OA, AA, and KA are 0.97, 0.97, and 0.94, respectively. For black ice spread on the road, the precision, recall, and F1-score are 0.98, 0.97, and 0.97, respectively; for black ice in the box, these measurements are 0.96, 0.97, and 0.97. The macro average and weighted average for both precision and F1-score stand at 0.97. When trained on black ice data collected from a distance of 20 m, the OA, AA, and KA values are 0.95, 0.93, and 0.86, respectively; for black ice on the road, the precision, recall, and F1-score are 0.94, 0.92, and 0.93, and for black ice in the box, 0.93, 0.94, and 0.93. In contrast, when trained on black ice data collected from a distance of 30 m, the OA and AA are both 0.93, with a KA of 0.86; for black ice on the road, precision, recall, and F1-score are all 0.93. Figure 13, Figure 14 and Figure 15 plot the accuracy and loss curves, clearly showing that the model trained with data from the 10 m distance achieves the highest accuracy, corroborating the findings presented in Table 2, Table 3 and Table 4.
For black ice in the box at 30 m, the same performance measurements are also 0.93. The predicted images for distances of 10 m, 20 m, and 30 m are displayed in Figure 16. The model trained on black ice data from a distance of 10 m showed very robust performance across all the test datasets from different distances; Figure 16 therefore shows the predicted result for this best case, as it has the best accuracy and robustness on unknown data.
Observing Figure 16, the model trained on the dataset collected from a distance of 10 m predicted with the highest accuracy, and the accuracy dropped as the distance increased. At 30 m, the black ice was still detected to some extent, but beyond that the accuracy became low; this observation shows that our model works best when the camera is within about 30 m of the black ice. We compared these results with an existing method for black ice detection based on color image classification with a Convolutional Neural Network, which reports 64.20% OA, 78.90% precision, and 65.20% recall on color images [28]. Due to the lack of research on black ice using hyperspectral imaging systems, we also compared our model with one of the existing state-of-the-art methods for hyperspectral image classification [18]; for a fair comparison, the existing model was given the same pre-processing as ours. As shown in Table 2, Table 3 and Table 4, our proposed model clearly outperforms both. We refer to the color image classification method [28] as “existing method 1” and the fast 3D CNN method [18] as “existing method 2”. The proposed method achieves on average almost 15% higher accuracy than existing method 1 and 3–8% higher accuracy than existing method 2. For method 1, the main reason is likely the spectral and spatial information in the hyperspectral data cube: the cube offers several dimensions of features, per band as well as across bands, whereas a normal RGB or color image carries far less feature information. For method 2, the main reason is likely its weaker capture of spectral information; because our model captures both spectral and spatial information, it does not lose the important features in either part.

7.2. Visualization of Feature Maps

Feature maps were extracted together with activation statistics, as shown in Figure 17; the statistics were taken from the model trained on data collected at a distance of 10 m. In a convolutional neural network, the mean of a feature map is the average activation of its neurons, giving insight into the average activation level of the detected features across the spatial dimensions. The standard deviation (std) describes how much the activations deviate from the mean: a higher std means the activations span a broader range of values, while a lower std means they cluster close to the mean. Figure 17 shows the features read from the black ice portion of the image.
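Such per-map statistics can be obtained by probing an intermediate layer, as sketched below; the layer index and the sample patch are assumptions to be adapted to the actual model (Figure 17 uses the model’s 6th layer).

```python
import numpy as np
from tensorflow.keras import models

# Sub-model exposing the activations of one convolutional layer; in the
# earlier sketch the Conv2D layer sits at index 4 (the index is an assumption).
probe = models.Model(inputs=model.input, outputs=model.layers[4].output)

# sample_patch: one hypothetical (25, 25, 10, 1) patch over the black ice region.
maps = probe.predict(sample_patch[np.newaxis, ...])   # -> (1, 20, 20, 64)

for i in range(maps.shape[-1]):
    fmap = maps[0, ..., i]
    print(f'feature map {i + 1}: mean={fmap.mean():.2f}, std={fmap.std():.2f}')
```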
Feature map 1, with a mean of 0.03 and an std of 0.05, suggests that the activations are spread around the mean with moderate variability; it may be capturing patterns of moderate magnitude with moderate consistency in activation. Feature map 2 is very similar to feature map 1, though with a higher concentration and less variability around the mean, and feature map 3 resembles both. With its low mean, feature map 4 indicates that the activations are comparatively small on average, and its std suggests that they are concentrated around the mean. Feature map 5 shows a bit more variability than feature map 4, and feature map 6 behaves the same as feature map 5. Feature map 7 has a mean close to 0, meaning its activations are close to zero on average; this feature map is not important for this case. Feature map 8 shows moderate magnitudes and moderate variability around the mean. Feature map 9 has a high mean of 0.16, indicating strong activations on average, and its standard deviation suggests that these activations are widely spread around the mean, denoting significant variability. Feature maps 10 and 11 have a moderate spread and variability around the mean, and feature map 12 shares the same characteristics. Observing the std and mean values in Figure 17, this may indicate that these feature maps are capturing binary or threshold-like patterns; as with most of the feature maps, the activations are moderately concentrated around the mean. Feature map 16 has a mean of 0.05, showing moderate magnitudes with significant variability in activations around the mean.
In summary, some features have high mean values, indicating strong activations, while others have low mean values; for example, feature map 7 indicates weak activations. The standard deviation offers insights into the variability of these activations.

8. Conclusions

In this paper, hyperspectral imaging data were used for the first time to classify black ice, with a 2D–3D CNN model that achieved the highest overall accuracy of 97% for objects at a 10 m distance from the camera. The data dimension was reduced with Principal Component Analysis, which helped the model run faster. The highest overall accuracy and a kappa score of 94% were achieved at the 10 m distance; the same process was applied to images taken from 20 m and 30 m, and the accuracy rate remained high. Comparisons were made with a method based on color images of black ice and with a state-of-the-art model for hyperspectral image classification. The difference in results was larger than expected and is likely due to the different dimensions of features available. Our dataset was collected in the daylight of winter, whose brightness does not stay consistent for long; our data therefore contain lighting differences in every image, which makes the model more robust. For applications such as moving cars and other real-life cases, the dataset can be collected accordingly, with cameras mounted at the front of the car in a position that gives a proper view of the road. According to our experiments, the model detects the ice well up to 30 m. Owing to the high-dimensional, abundant information about the material in each image, hyperspectral imaging showed strong results; however, due to dimension reduction, some information may have been lost. In future work, we will look for ways to preserve important information across bands and to estimate the distance from the camera to other objects. We leave the experiment of black ice detection at night for future work as well.

Author Contributions

The contributions were distributed between authors as follows: C.B. wrote the text of the manuscript, programmed the method, and implemented the idea. S.K. provided the database and operational scenario, performed the in-depth discussion of the related literature, and confirmed the accuracy experiments that are exclusive to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (IRIS RS-2023-00240109). This work was supported by the 2023 Yeungnam University Research Grants.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset was collected with our own lab instruments and camera, together with the “Info Works” team, so no third-party copyright applies. Data are available upon request from the authors; the dataset is not available online.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Snow and Ice. Available online: https://ops.fhwa.dot.gov/weather/weather_events/snow_ice.htm (accessed on 29 October 2022).
2. Shippert, P. Why Use Hyperspectral Imagery? Photogramm. Eng. Remote Sens. 2004, 70, 377–396.
3. Amigo, J.M.; Babamoradi, H.; Elcoroaristizabal, S. Hyperspectral image analysis. A tutorial. Anal. Chim. Acta 2015, 896, 34–51.
4. Qian, S.-E. Hyperspectral Satellites and System Design, 1st ed.; CRC Press: London, UK, 2020.
5. Qian, S.-E. Optical Satellite Signal Processing and Enhancement; SPIE Press: Bellingham, WA, USA, 2013.
6. Amigo, J.M. Chapter 1.1—Hyperspectral and multispectral imaging: Setting the scene. In Data Handling in Science and Technology; Amigo, J.M., Ed.; Elsevier: Amsterdam, The Netherlands, 2019; Volume 32, pp. 3–16. ISSN 0922-3487; ISBN 9780444639776.
7. Guo, R.; Somogyi, A.; Bazin, D.; Bouderlique, E.; Letavernier, E.; Curie, C.; Isaure, M.-P.; Medjoubi, K. Towards routine 3D characterization of intact mesoscale samples by multi-scale and multimodal scanning X-ray tomography. Sci. Rep. 2022, 12, 16924.
8. Feng, L.; Zhu, S.; Zhou, L.; Zhao, Y.; Bao, Y.; Zhang, C.; He, Y. Detection of Subtle Bruises on Winter Jujube Using Hyperspectral Imaging with Pixel-Wise Deep Learning Method. IEEE Access 2019, 7, 64494–64505.
9. Hussain, A.; Pu, H.; Sun, D.-W. Innovative nondestructive imaging techniques for ripening and maturity of fruits—A review of recent applications. Trends Food Sci. Technol. 2018, 72, 144–152.
10. Lu, Y.; Saeys, W.; Kim, M.; Peng, Y.; Lu, R. Hyperspectral imaging technology for quality and safety evaluation of horticultural products: A review and celebration of the past 20-year progress. Postharvest Biol. Technol. 2020, 170, 111318.
11. ul Rehman, A.; Qureshi, S.A. A review of the medical hyperspectral imaging systems and unmixing algorithms’ in biological tissues. Photodiagnosis Photodyn. Ther. 2020, 33, 102165.
12. Saiko, G.; Lombardi, P.; Au, Y.; Queen, D.; Armstrong, D.; Harding, K. Hyperspectral imaging in wound care: A systematic review. Int. Wound J. 2020, 17, 1840–1856.
13. Chen, Y.; Lin, Z.; Zhao, X.; Wang, G.; Gu, Y. Deep Learning-Based Classification of Hyperspectral Data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2094–2107.
14. Jia, S.; Jiang, S.; Lin, Z.; Li, N.; Xu, M.; Yu, S. A survey: Deep learning for hyperspectral image classification with few labeled samples. Neurocomputing 2021, 448, 179–204.
15. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking Hyperspectral Image Classification with Transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615.
16. Chen, C.; Ma, Y.; Ren, G. Hyperspectral Classification Using Deep Belief Networks Based on Conjugate Gradient Update and Pixel-Centric Spectral Block Features. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 4060–4069.
17. Mou, L.; Ghamisi, P.; Zhu, X.X. Deep Recurrent Neural Networks for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3639–3655.
18. Ahmad, M.; Khan, A.M.; Mazzara, M.; Distefano, S.; Ali, M.; Sarfraz, M.S. A Fast and Compact 3-D CNN for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 19, 5502205.
19. Luo, Y.; Zou, J.; Yao, C.; Zhao, X.; Li, T.; Bai, G. HSI-CNN: A Novel Convolution Neural Network for Hyperspectral Image. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing, Shanghai, China, 16–17 July 2018; pp. 464–469.
20. Park, K.; Cho, B. The Korea Transport Institute. Available online: https://english.koti.re.kr/user/bbs/BD_selectBbs.do?q_bbsCode=1017&q_bbscttSn=20220630102531640&q_clCode=1&q_lang=eng (accessed on 30 October 2023).
21. Ma, X.; Ruan, C. Method for black ice detection on roads using tri-wavelength backscattering measurements. Appl. Opt. 2020, 59, 7242–7246.
22. Alimasi, N.; Takahashi, S.; Enomoto, H. Development of a mobile optical system to detect road-freezing conditions. Bull. Glaciol. Res. 2012, 30, 41–51.
23. Kim, H.G.; Jang, M.S.; Lee, Y.S. A Black Ice Detection Method Using Infrared Camera and YOLO. J. Korea Inst. Inf. Commun. Eng. 2021, 25, 1874–1881.
24. Kim, J.; Kim, E.; Kim, D. A Black Ice Detection Method Based on 1-Dimensional CNN Using mmWave Sensor Backscattering. Remote Sens. 2022, 14, 5252.
25. Nymphas, E.F.; Ibe, O. Attenuation of millimetre wave radio signal at worst hour rainfall rate in a tropical region: A case study. Sci. Afr. 2022, 16, e01158.
26. Liu, J.; Matolak, D.W.; Güvenç, I.; Mehrpouyan, H. Tropospheric attenuation prediction for future millimeter wave terrestrial systems: Estimating statistics and extremes. Int. J. Commun. Syst. 2022, 35, e5240.
27. Roy, S.K.; Krishna, G.; Dubey, S.R.; Chaudhuri, B.B. HybridSN: Exploring 3-D–2-D CNN Feature Hierarchy for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2020, 17, 277–281.
28. Park, P.; Han, S. Study of Black Ice Detection Method through Color Image Analysis. J. Platf. Technol. 2021, 9, 90–96.
Figure 1. Traffic accidents in South Korea in a day due to black ice [19].
Figure 2. (A) The camera set up. (B) The black ice set up at 10 m distance from the camera.
Figure 3. (A) The ground truth image at 10 m, created by MATLAB R2021a software. (B) Hyperspectral image at 10 m distance taken from the mentioned camera.
Figure 4. Overall workflow of the process for the black ice classification method.
Figure 5. PCA on input hyperspectral image data.
Figure 6. The architecture of the 2D–3D Convolutional Neural Network.
Figure 7. Digital count vs. wavelength curve of black ice image at 10 m.
Figure 8. Digital count vs. wavelength curve of black ice image at 20 m.
Figure 9. Digital count vs. wavelength curve of black ice image at 30 m.
Figure 10. Random band visualization of hyperspectral image of black ice at 10 m.
Figure 11. Random band visualization of hyperspectral image of black ice at 20 m.
Figure 12. Random band visualization of hyperspectral image of black ice at 30 m.
Figure 13. The accuracy and loss curves after finishing the iterations for the black ice data taken from a distance of 10 m.
Figure 14. The accuracy and loss curves after finishing the iterations for the black ice data taken from a distance of 20 m.
Figure 15. The accuracy and loss curves after finishing the iterations for the black ice data taken from a distance of 30 m.
Figure 16. The predicted ground truth of black ice at (A) 10 m (white is the concentrated ice in the box and gray is the ice spread on the road), (B) 20 m, and (C) 30 m (white is the ice spread on the road and gray is the concentrated ice in the box).
Figure 17. Feature map visualization of the 6th layer of the model on the black ice part.
Table 1. The summary of the whole model for black ice.

Layers | Output Shape | Parameters
Input | (None, 25, 25, 10, 1) | 0
Conv3D | (None, 24, 24, 8, 8) | 104
Conv3D | (None, 22, 22, 4, 16) | 5776
Reshape | (None, 22, 22, 64) | 0
Conv2D | (None, 20, 20, 64) | 36,928
Flatten | (None, 25,600) | 0
Dense | (None, 256) | 6,553,856
Dropout | (None, 256) | 0
Dense | (None, 128) | 32,896
Dropout | (None, 128) | 0
Dense | (None, 2) | 258
Table 2. OA, AA, KA, and F1-score for black ice data from 10 m.

Accuracy Score | Our Model | Existing Method 1 | Existing Method 2
F1-score (weighted average) | 0.96 | 0.82 | 0.93
AA | 0.97 | 0.83 | 0.95
OA | 0.97 | 0.83 | 0.94
KA | 0.94 | 0.78 | 0.90
Table 3. OA, AA, KA, and F1-score for black ice data from 20 m.

Accuracy Score | Our Model | Existing Method 1 | Existing Method 2
F1-score (weighted average) | 0.93 | 0.76 | 0.89
AA | 0.93 | 0.76 | 0.87
OA | 0.95 | 0.77 | 0.87
KA | 0.86 | 0.75 | 0.84
Table 4. OA, AA, KA, and F1-score for black ice data from 30 m.

Accuracy Score | Our Model | Existing Method 1 | Existing Method 2
F1-score (weighted average) | 0.93 | 0.72 | 0.85
AA | 0.93 | 0.72 | 0.83
OA | 0.93 | 0.73 | 0.82
KA | 0.86 | 0.71 | 0.82