Article

Rainy Environment Identification Based on Channel State Information for Autonomous Vehicles

Jianxin Feng, Xinhui Li and Hui Fang

1 Communication and Network Laboratory, Dalian University, Dalian 116622, China
2 School of Information Engineering, Dalian University, Dalian 116622, China
3 Department of Computer Science, Loughborough University, Loughborough LE11 3TU, UK
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(9), 3788; https://doi.org/10.3390/app14093788
Submission received: 5 April 2024 / Revised: 20 April 2024 / Accepted: 25 April 2024 / Published: 29 April 2024
(This article belongs to the Special Issue Autonomous Driving and Intelligent Transportation)

Abstract

In this paper, we introduce a deep learning approach designed for environment identification by intelligent vehicles under rainy conditions. In constructing the wireless vehicular communication network, we propose incorporating additional multipath components to simulate the impact of raindrop scattering on the vehicle-to-vehicle (V2V) channel, thereby emulating the channel characteristics of vehicular environments under rainy conditions, and we introduce an equalization strategy at the receiver end of OFDM-based systems to counteract channel distortion. We then propose a rainy environment identification method for autonomous vehicles. The core of this method lies in using the Channel State Information (CSI) shared within the vehicular network to accurately identify the diverse rainy environments in which a vehicle operates, without relying on traditional sensors. The identification task is treated as a multi-class classification problem, for which a dedicated Convolutional Neural Network (CNN) model is proposed. This CNN model uses the CSI estimated from Cooperative Awareness Messages (CAM) exchanged in V2V communication as training features. Simulation results show that our method achieves an accuracy of 95.7% in recognizing various rainy environments, significantly surpassing existing classical classification models, and that a prediction takes only microseconds, exceeding the performance limits of traditional sensing systems under adverse weather. This ensures that intelligent vehicles can rapidly and accurately adjust their driving parameters even in complex weather conditions such as rain, enabling safe and reliable autonomous driving.

1. Introduction

Environmental perception is the core of autonomous vehicles. Autonomous vehicles, also known as driverless cars, are vehicles capable of perceiving their surroundings and navigating safely with little or no human input [1]. Although Autonomous Driving Systems (ADS) offer potential advantages in improving traffic safety, reducing congestion, lowering transportation costs, and benefiting the environment, incidents and casualties involving ADS are still on the rise. To gain wider recognition of the advantages of autonomous vehicles, it is imperative to address the pressing issues currently faced by ADS, especially the capability of environmental identification under adverse weather conditions [2].
Vehicular networks, through intelligent technology, have enabled efficient information exchange between vehicles, infrastructure, pedestrians, and network services, creating an intelligent transportation system [3]. In this system, accurately identifying the environment around the vehicle is crucial for the operation of intelligent vehicles. Intelligent vehicles must adjust their speed based on road conditions, traffic congestion, and the specific areas they are in, such as urban or rural regions. Particularly under adverse weather conditions like heavy rain, precise environmental identification becomes even more important, as statistics show that the risk of accidents in rainy weather is 70% higher than in normal conditions [4].
The introduction of V2X (Vehicle-to-Everything) communication technology further enhances the functionality of vehicular networks. V2X communication is not just a part of vehicular networks; it also expands the sensory range of vehicles, enabling autonomous vehicles to communicate in real time with surrounding vehicles, infrastructure, and pedestrians. This capability of real-time information sharing and perception allows autonomous vehicles to understand their environment more accurately, leading to more rational decision-making and action planning. This is of significant importance for achieving higher levels of autonomous driving and enhancing traffic safety and efficiency [5,6,7].
Rainy conditions pose additional challenges to the operation of intelligent vehicles, as weather conditions directly affect the environmental state, further complicating the task of environmental identification by ADS [8]. Therefore, a new method for vehicle environmental identification under rainy weather conditions that does not require specific sensors is proposed in this paper. It utilizes Cooperative Awareness Messages (CAM) exchanged between vehicles to explore channel characteristics, which are then used to identify the vehicle’s environment. This enables vehicles to automatically determine the appropriate driving speed, thereby enhancing the safety and reliability of ADS in rainy weather conditions, as shown in Figure 1.
The main contributions of our work are as follows:
A wireless vehicular communication network has been established. Considering the impact of raindrop scattering on the vehicle-to-vehicle (V2V) channel, an innovative approach that adds a multipath component has been proposed to simulate the channel characteristics of vehicular environments in rainy conditions. This new multipath component specifically represents the effect of raindrop scattering and is characterized by path delay, path gain, and Doppler shift parameters. An equalization strategy for the receiver end of OFDM systems has also been proposed. It adjusts the signal for each subcarrier to counteract channel distortion, thereby enhancing the reliability of the signal.
A deep learning-based method for rainy environment identification in vehicles is proposed. This method does not rely on specific sensors but utilizes the wireless channel characteristics shared among vehicles in the vehicular network for environmental perception. We use the Channel State Information (CSI), estimated from Cooperative Awareness Messages (CAM) exchanged in vehicle-to-vehicle (V2V) communication, as input features, which are then fed into the proposed Convolutional Neural Network (CNN) model. This enables the model to reliably identify the surrounding environment of the vehicle based on the channel characteristics of different rainy vehicular environments.
The rest of this paper is organized as follows: Section 2 introduces related research. Section 3 describes our wireless communication vehicular network. Section 4 provides our rainy environment identification method for autonomous vehicles. Section 5 describes the performance evaluation of the proposed method. Finally, Section 6 presents the conclusions.

2. Related Work

Perception and sensing in adverse weather conditions have long been a challenge for autonomous vehicles seeking higher levels of automation. The impacts that weather poses to ADS sensors are significant; hence, solutions for dealing with adverse weather conditions are the main focus of this section.
Weather challenges have always been an obstacle to the deployment of ADS, and it is necessary to recognize their impacts on sensors. LiDAR is one of the core perception sensors in the field of autonomous driving. Although 3D-LiDAR has been used on cars for little more than a decade, it has already proven indispensable in ADAS (advanced driver-assistance systems), offering high measurement accuracy and sensing capabilities that are independent of illumination. Fersch et al. [9] studied the impact of rain on pulsed LiDAR systems with small apertures and narrow laser beam cross-sections. Filgueira et al. [10] quantified the impact of rainfall on different LiDAR parameters: range, intensity, and number of detection points. Hasirlioglu et al. [11] researched the impact of rain on automotive laser scanner sensor systems and introduced a new hierarchical theoretical model to describe the effect of rain on sensor behavior. Regarding radar sensors, the automotive radar system consists of a transmitter and a receiver; Zang et al. [12] described the impact of rainfall on millimeter-wave radar, considering rain attenuation and backscatter effects. As for cameras, the camera is one of the most widely used sensors in perception tasks but also one of the most vulnerable in adverse weather conditions: however high its resolution, a single drop of water on the lens or housing can severely impair it. Reway et al. [13] proposed a camera-in-the-loop method to evaluate the performance of object detection algorithms under different weather conditions.
With the widespread use of machine learning and the rapid development of powerful sensors, multi-sensor modes and additional sensor components are used to help mitigate the impact of weather. A single sensor does not provide sufficient safety assurance for navigation under adverse weather conditions. Liu et al. [14] utilized multi-sensor information fusion technology to improve the perception accuracy and reliability of autonomous vehicles in adverse weather conditions. The fusion scheme uses millimeter-wave radar as the main sensor and a monocular camera as an auxiliary sensor. Bijelic et al. [15] developed a robust fusion model for multi-modal sensor inputs under unseen adverse weather conditions, laying the foundation for the development and evaluation of future autonomous vehicle technologies. Mai et al. [16] proposed a three-dimensional (3D) object detection method based on a post-fusion architecture that performs well in foggy conditions.
Sensors are key to the safe navigation of autonomous vehicles in adverse weather; enhancing their perception capabilities is crucial. Quan et al. [17] proposed a Complementary Cascade Network (CCN) architecture capable of uniformly removing raindrops and streaks from images, introducing a new real-world rain dataset, RainDS. Ni et al. [18] introduced a rain intensity control network, RICNet, which can achieve bidirectional control of rain intensity from clear to downpour images while preserving the characteristics of specific scenes. Yue et al. [19] introduced a new semi-supervised video deraining method that processes labeled synthetic data and unlabeled real data using a dynamic rain generator and different prior formats. This method has successfully achieved better deraining effects on real datasets.
The perception capabilities of intelligent vehicles are not limited to object identification; they also encompass a comprehensive understanding of their location and surrounding environment, especially the ability to accurately classify and locate in adverse weather conditions, which is crucial for ensuring the safe operation of the vehicle. Zhang and Ma [20] discussed the challenging task of multi-class weather classification from a single image in outdoor computer vision applications. Heinzler et al. [21] achieved quite precise weather classification using only multi-echo LiDAR sensors. The point cloud is first converted into a grid matrix, and the presence of rain or fog can be easily observed through the appearance of secondary echoes on objects. Šabanovič et al. [22] estimated road friction coefficients by constructing a vision-based Deep Neural Network (DNN), as surfaces with different frictional force reductions, such as dry, slippery, muddy, and icy, can essentially be identified as clear, rainy, snowy, and icy weather, respectively. Their algorithm not only detects wet conditions but also classifies combinations of wet conditions and road types. Wolcott and Eustice [23] proposed a fast and robust multi-resolution scan-matching algorithm for local vehicle positioning in autonomous driving. The algorithm solves the weather problem of vehicle positioning in autonomous driving by introducing Gaussian mixture maps, multi-resolution inference, and global optimization search, improving robustness under adverse conditions.
Almost all of the aforementioned methods fundamentally rely on specific types of sensors, such as cameras, millimeter-wave radars, and LiDAR. These sensors primarily collect high-dimensional measurement data in image and video formats, whose processing is not only cumbersome but also significantly energy-consuming. Moreover, meteorological conditions have a direct impact on the environmental state, thereby weakening the perception capabilities of sensors in autonomous driving systems and significantly increasing the difficulty of completing key perception tasks such as target detection and identification.
To overcome the limitations of traditional sensors in adverse weather conditions, a novel deep learning-based method for rainy environment identification is proposed in this paper. This method is designed for autonomous vehicles and allows for environmental perception without relying on specific sensors by utilizing the wireless channel characteristics shared among vehicles in the vehicular network. The core of this method is the Channel State Information (CSI), which is the most accurate indicator of wireless channel characteristics [24]. We use the CSI values estimated from CAM exchanged in Vehicle to Vehicle (V2V) communication as input features for the proposed convolutional neural network model. After training, the model can reliably identify the surrounding environment based on the channel characteristics of different rainy environments. Thus, intelligent vehicles can adjust their driving parameters according to the identified environmental information to adapt to rainy conditions, thereby ensuring driving safety.

3. Wireless Communication Vehicle Network

Firstly, a wireless communication vehicular network is constructed. In this network, each vehicle is equipped with a half-duplex transmitter/receiver, enabling communication with other vehicles while limiting the ability to send and receive signals simultaneously. This half-duplex communication mechanism is crucial in dense vehicular environments as it helps reduce channel congestion and signal interference. In this network, vehicles process the received CAM characterized by Channel State Information (CSI) in a wireless channel. CSI provides a precise characterization of the wireless channel and contains detailed information about the signal propagation path, such as signal attenuation, phase changes, and time delay characteristics.
The proposed V2X network operates based on the IEEE 802.11p standard. The main physical layer (PHY) of the IEEE 802.11p standard is based on Orthogonal Frequency-Division Multiplexing (OFDM) waveforms [25]. Figure 2 illustrates the frame structure of the Physical Protocol Data Unit (PPDU) specified by the IEEE 802.11p standard.
The vehicular wireless channel is constructed as a doubly selective fading propagation channel, in which multipath effects cause the signal to experience varying degrees of fading at different times and frequencies, while the Doppler effect shifts the signal's frequency as the relative speed between transmitter and receiver changes.
In the high-speed moving vehicular communication environment, the time-varying nature of multipath effects poses a significant challenge to signal transmission quality. To adapt to this dynamic change, the baseband time-varying response of the multipath channel is represented as follows:
$$h(t, \tau) = \sum_{i=0}^{L-1} A_i(t)\, e^{j 2 \pi f_d \tau_i(t)}\, \delta\left(\tau - \tau_i(t)\right)$$
where $L$ represents the total number of non-zero paths, $A_i(t)$ is the $i$-th time-varying complex amplitude, and $\tau_i(t)$ denotes the $i$-th time-varying path delay. The factor $e^{j 2 \pi f_d \tau_i(t)}$ represents the phase shift on the $i$-th path due to the Doppler effect, where $f_d$ is the Doppler frequency shift caused by the relative velocity between the signal transmitter and receiver.
It is noteworthy that the phase of the complex amplitude $A_i(t)$ is affected by the Doppler effect. As the signal propagates through the channel, a Doppler frequency shift may occur on each independent path. By incorporating the factor $e^{j 2 \pi f_d \tau_i(t)}$, the phase changes caused by the Doppler effect during signal propagation can be accurately described and compensated, enabling a more accurate recovery of the transmitted signal.
It is assumed that the channel characteristics remain stationary within the coherence time $T_c$, which is inversely proportional to the maximum Doppler frequency shift $f_d$ [26]. The relation is as follows:
$$T_c \approx \frac{0.423}{f_d}$$
In vehicle communication, the Doppler frequency shift $f_d$ can be expressed in terms of the velocity difference $\Delta V$ between two communicating vehicles as follows:
$$f_d = \frac{\Delta V}{c} f_0$$
$$\Delta V = V_1 - V_2$$
where $c$ represents the speed of light and $f_0$ denotes the central frequency of the communication system.
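As a quick illustration of the two formulas above, the short Python sketch below computes the Doppler shift and the corresponding coherence time; the closing speed of 30 m/s and the 5.9 GHz carrier (the 802.11p band) are assumed values for illustration, not figures from the paper.

```python
# Sketch: Doppler shift and coherence time for a V2V link.
# Assumed inputs: closing speed 30 m/s, 5.9 GHz carrier (802.11p band).
C = 3.0e8            # speed of light, m/s
f0 = 5.9e9           # central frequency, Hz
delta_v = 30.0       # velocity difference V1 - V2, m/s

f_d = (delta_v / C) * f0      # Doppler frequency shift
T_c = 0.423 / f_d             # coherence time

print(f"f_d = {f_d:.0f} Hz")          # ~590 Hz
print(f"T_c = {T_c * 1e6:.0f} us")    # ~717 us
```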
Based on the channel’s coherence bandwidth 1/Tc, if this bandwidth is greater than the bandwidth of the signal itself, the channel exhibits flat fading characteristics. Conversely, if the channel’s coherence bandwidth 1/Tc is less than the signal bandwidth, the channel exhibits frequency-selective fading, which causes intersymbol interference in the time domain.
In vehicular networks, V2X communication scenarios significantly impact the propagation of electromagnetic waves, and these factors are key determinants in channel model construction. The channel model is the foundation of wireless communication system design and must consider specific factors under various vehicular environments, such as urban congestion, highway travel, tunnel crossing, and suburban areas. Each environment exhibits unique channel characteristics, including multipath effects, shadowing losses, reflection, and scattering.
To accurately model the channel characteristics in these environments, five main vehicular environments [27], namely Rural Line of Sight (Rural LOS), Urban Line of Sight (Urban LOS), Urban Non-Line of Sight (Urban NLOS), Highway Line of Sight (Highway LOS), and Highway Non-Line of Sight (Highway NLOS) are considered. The channel models for these environments are based on characteristics such as power attenuation, time delay spread, and Doppler frequency shift. Power attenuation describes the energy loss during signal propagation, time delay spread reflects the signal delay variation caused by multipath propagation, and the Doppler frequency shift characterizes the frequency variation due to the change in the relative position of moving vehicles. The comprehensive consideration of these characteristics allows the channel model to more accurately reflect the actual communication environment, providing a solid theoretical foundation for the design of wireless communication systems. These vehicular environments are shown in Table 1.
Under rainy conditions, the characteristics of the V2X channel indeed undergo significant changes. To accurately simulate these changes in the V2V channel model, a new multipath component is introduced. This component contains independent parameters for gain, delay, and Doppler frequency shift to reflect the specific variations in signal propagation under rainy conditions. Specifically, raindrops may cause additional attenuation and scattering of the signal, affecting its gain and delay. Moreover, rainy conditions may also change the moving speed of vehicles, thereby affecting the Doppler frequency shift. By precisely adjusting these parameters, the vehicle communication performance under rainy weather conditions can be simulated, providing theoretical support for the stable operation of autonomous driving systems in rainy environments.
Building on the existing multipath components of each vehicular communication environment, new multipath components are introduced to simulate the channel characteristics under rainy conditions. The path gain can be calculated by considering the attenuation effect of raindrops on electromagnetic waves, which typically involves quantifying how raindrops scatter and absorb the signal.
$$G = G_0\, e^{-k_r R d}$$
In the formula, $G_0$ is the gain under no-rain conditions, $k_r$ is the rain attenuation coefficient, $R$ is the rainfall rate (mm/h), and $d$ is the propagation distance (km).
Under no-rain conditions, the gain $G_0$ can be estimated from the free-space propagation model, which assumes that the signal travels in a straight line between two points without any obstructions. The gain is calculated as follows:
$$G_0 = \left( \frac{\lambda}{4 \pi d} \right)^{2}$$
where $\lambda$ is the wavelength of the signal and $d$ is the propagation distance.
The specific attenuation caused by rain can be quantified using the ITU model, which provides an empirical relationship based on extensive measurements and studies. The model calculates the specific attenuation $k_r$ as a function of the rain rate $R$ (in mm/h) and the frequency of the signal (GHz) [28]. The formula for specific attenuation is generally expressed as follows:
$$k_r = k R^{\alpha}$$
where $k$ reflects the attenuation efficiency per unit rain rate; it increases with frequency, indicating that higher frequencies are more susceptible to rain attenuation. The exponent $\alpha$ describes the non-linear effect of rain rate on attenuation; it also varies with frequency and generally decreases as frequency increases, suggesting that the effect of increasing rain rate is less pronounced at higher frequencies than at lower ones. $R$ is the rate at which rain falls, typically measured in mm/h; it is a direct measure of precipitation intensity and a critical factor in determining the extent of signal attenuation.
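The sketch below combines the three gain formulas above into one helper. The `k` and `alpha` coefficients are illustrative placeholders (the ITU tables give frequency-dependent values); everything else follows the equations as written.

```python
import numpy as np

def rain_path_gain(d_km, rain_rate_mmh, wavelength_m, k=1.5e-3, alpha=1.2):
    """Path gain under rain: free-space gain G0 attenuated as G0*exp(-k_r*R*d).

    k and alpha are placeholder ITU-style coefficients, not values from the
    paper; they should be looked up for the carrier frequency in use.
    """
    d_m = d_km * 1e3
    g0 = (wavelength_m / (4.0 * np.pi * d_m)) ** 2   # free-space gain
    k_r = k * rain_rate_mmh ** alpha                 # specific attenuation
    return g0 * np.exp(-k_r * rain_rate_mmh * d_km)  # rain-attenuated gain

# Example: 300 m link at 5.9 GHz under 50 mm/h torrential rain.
print(rain_path_gain(0.3, 50.0, 3e8 / 5.9e9))
```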
Delay is usually related to the length of the multipath propagation path. Rainy conditions may cause changes in the propagation path, thereby affecting the delay. Accounting for the additional scattering path caused by raindrops, the delay can be expressed as follows:
$$\tau = \tau_0 + \Delta\tau$$
where $\tau_0$ is the delay under no-rain conditions, and $\Delta\tau$ is the additional delay caused by raindrop scattering.
The delay $\tau_0$ under no-rain conditions is determined by the straight-line distance of signal propagation and can be calculated as follows:
$$\tau_0 = \frac{d}{c}$$
where $c$ is the speed of light, and $d$ is the propagation distance.
The additional delay $\Delta\tau$ can be estimated from the difference in signal propagation time between rainy and clear conditions:
$$\Delta\tau = \frac{d}{c} \left( n_r - 1 \right)$$
where $d$ is the propagation distance, $c$ is the speed of light, and $n_r$ is the refractive index of raindrops.
The Doppler frequency shift can be calculated with the standard Doppler effect formula, taking into account the speed changes that rainy conditions may induce:
$$f_{D,rain} = \frac{v_{rain}}{\lambda} \cos(\theta)$$
where $f_{D,rain}$ is the Doppler frequency shift under rainy conditions, $v_{rain}$ is the vehicle speed in the rain, $\lambda$ is the wavelength of the signal, and $\theta$ is the angle of arrival of the signal.
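Putting the delay and Doppler formulas together, the sketch below derives the parameters of the extra rain tap and appends it to the Urban LOS tap set from Table 1. All numeric inputs (distance, refractive index, speed, angle, and the rain-path power) are assumptions for illustration, not values stated in the paper.

```python
import numpy as np

C = 3.0e8
lam = C / 5.9e9          # wavelength at 5.9 GHz, m

d = 300.0                # propagation distance, m (assumed)
n_r = 1.33               # refractive index of raindrops (assumed)
v_rain = 20.0            # vehicle speed in rain, m/s (assumed)
theta = np.deg2rad(30.0) # angle of arrival (assumed)

tau_0 = d / C                        # no-rain delay
delta_tau = (d / C) * (n_r - 1.0)    # extra delay from raindrop scattering
tau_rain = tau_0 + delta_tau         # total delay of the rain path

f_d_rain = (v_rain / lam) * np.cos(theta)   # Doppler shift of the rain path

# Urban LOS taps from Table 1 as (power dB, delay ns, Doppler Hz) triples,
# extended with the new rain tap; its -12 dB power is an assumed value.
taps = [(0, 0, 0), (-8, 117, 236), (-10, 183, -157), (-15, 333, 492)]
taps.append((-12.0, tau_rain * 1e9, f_d_rain))
```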
The Signal-to-Noise Ratio (SNR) is defined as the ratio of the effective signal power to the noise power, and it is a key metric for evaluating the performance of communication systems. The SNR is calculated as follows:
$$\mathrm{SNR} = 10 \log_{10} \frac{P_{signal}}{P_{noise}}$$
where $P_{signal}$ represents the average power of the received signal and $P_{noise}$ represents the total noise power. By accurately identifying the sources of noise and precisely calculating the total noise power, the reliability and accuracy of the SNR calculation can be ensured under various rainy conditions. This not only aids in assessing the performance of the communication systems but also ensures stable operation in complex environments.
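A minimal sketch of this SNR computation, assuming the signal and noise powers have already been measured:

```python
import numpy as np

def snr_db(p_signal, p_noise):
    """SNR in dB from average signal power and total noise power."""
    return 10.0 * np.log10(p_signal / p_noise)

print(snr_db(1.0, 1e-3))   # 30 dB for a 1000:1 power ratio
```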
To capture the impact of raindrop scattering on V2V channel performance, the new multipath component described above is incorporated into each channel model. It carries its own gain, delay, and Doppler frequency shift parameters, which reflect in detail how signal propagation changes under rainy conditions. By adjusting these parameters, the five vehicular environments under rainy weather can be accurately simulated on the basis of channel modeling characteristics such as power attenuation, time delay spread, and Doppler frequency shift. During the training process, particular attention was given to vehicular environments under torrential rain, defined as a rainfall rate exceeding 50 mm/h; such conditions significantly affect wireless signals, causing notable signal attenuation and multipath effects.
Due to the high mobility of vehicular environments, the transmitted messages are affected by the wireless channel. The signal received on the vehicular wireless channel can be represented as follows:
$$Y(k) = X(k) H(k) + W(k)$$
where $X(k)$ represents the $k$-th transmitted data symbol, $W(k)$ is the current noise, and $H(k)$ represents the $k$-th wireless channel response, which is described by the CSI. Channel estimation at the receiver end is performed to calculate the CSI, which is crucial for recovering the transmitted data.
In vehicular network communication, the Least Squares (LS) estimator is one of the most commonly used channel estimation methods, widely applied in the industrial implementation of V2X communication due to its low complexity. The mathematical expression for the LS estimator can be represented as follows:
$$\hat{H}_{LS} = \arg \min_{H_{LS}} \left\| Y - X H_{LS} \right\|_2^2$$
where $\|\cdot\|_2$ is the $L_2$ norm, $X$ is the long training sequence vector, and $Y$ represents the corresponding observation vector.
Due to multipath propagation and the Doppler effect, the received signal often becomes distorted, which is particularly problematic for OFDM-based systems. To ensure the reliability and efficiency of data transmission, it is necessary to introduce equalization strategies at the receiver end to counteract these distortions in OFDM-based systems. The proposed equalization strategy aims to optimize the received signal using channel estimation information, thereby effectively suppressing multipath and Doppler effects in both frequency and time domains. The adjustment based on channel estimation values is as follows:
$$H_{eq}(k) = \frac{H^{*}(k)}{\left| H(k) \right|^{2} + N_0 / E_s}$$
where $H_{eq}(k)$ is the $k$-th equalized channel response, $H(k)$ is the $k$-th original channel response, $H^{*}(k)$ its complex conjugate, $N_0$ is the current noise power spectral density, and $E_s$ is the current symbol energy.
This equalization strategy optimizes the signal recovery process by considering the signal's energy and noise level, thereby improving the quality of the received signal and the overall performance of communication. At the receiver end of an OFDM system, the channel response is first obtained through Least Squares channel estimation; the equalization formula above is then applied to the signal on each subcarrier to counteract channel distortion and improve the reliability of the signal.
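The sketch below mirrors this receiver chain for one OFDM symbol: a per-subcarrier LS estimate from the known training sequence, followed by the equalization formula above. Array names and shapes are assumptions; a real 802.11p receiver would also handle synchronization, pilots, and noise estimation.

```python
import numpy as np

def ls_estimate(y_train, x_train):
    """Per-subcarrier LS channel estimate from the long training sequence."""
    return y_train / x_train

def equalize(y_data, h_hat, n0, es):
    """Apply H_eq(k) = conj(H) / (|H|^2 + N0/Es) to each subcarrier."""
    h_eq = np.conj(h_hat) / (np.abs(h_hat) ** 2 + n0 / es)
    return y_data * h_eq

# Toy usage with a random 64-subcarrier channel (noiseless for brevity).
rng = np.random.default_rng(0)
h = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) / np.sqrt(2)
x_train = np.ones(64, dtype=complex)          # known training symbols
h_hat = ls_estimate(x_train * h, x_train)     # estimated CSI
x_data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64)
x_rec = equalize(x_data * h, h_hat, n0=0.01, es=2.0)
```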

4. Vehicular Environment Identification Methodology

In this paper, the vehicle environment identification process is treated as a multi-classification problem, and the computed CSI values, 128 per received packet, serve as input features for the proposed CNN model. By precisely capturing subtle changes in the wireless channel, the CSI values provide rich environmental information for the CNN model. After deep feature extraction and learning by the CNN model, different categories of vehicular environments can be effectively distinguished, and optimizing the structure and parameters of the model further improves the accuracy of environment identification based on CSI values. The specific process flow of vehicle environment identification is shown in Figure 3.

4.1. The Proposed CNN Model

To address the issue of vehicular environment identification, a Convolutional Neural Network (CNN) model is proposed, as shown in Figure 4.
The CNN model constructed in this paper is based on one-dimensional convolutional layers designed to capture complex features from the Channel State Information (CSI) extracted from the V2V communication system. The model starts with two one-dimensional convolutional layers equipped with 64 (3 × 1) filters, followed by LeakyReLU activation functions for non-linear transformations, enhancing the model’s expressive ability for input data and preventing activation saturation. This is followed by a convolutional layer with 30 filters to further deepen feature extraction.
To improve the model’s training stability and mitigate the effects of internal covariate shift, batch normalization layers are introduced after each convolutional layer. These layers normalize the activation values, ensuring rapid convergence and generalization performance of the network during training. Next, there is an average pooling layer with a pool size of 2, which helps reduce the model’s complexity and increase its robustness.
The deep structure of the model includes two additional one-dimensional convolutional layers, with 64 and 30 filters respectively, each followed by a batch normalization layer. This design helps the model capture more refined features and provides ample feature representation for the classification task. Following the sequence of convolutional layers, the model contains three fully connected layers with 64, 128, and 5 neurons, respectively. The first two fully connected layers use ReLU activation functions, while the final output layer uses a SoftMax activation function to classify the five vehicular environments.
To suppress overfitting, Dropout layers with a dropout rate of 0.5 are embedded after the first two fully connected layers. Additionally, all fully connected layers apply L2 regularization to further enhance the model's generalization capability. The entire network architecture is implemented using the TensorFlow library and trained for 20 epochs with a batch size of 50, a setting chosen to balance classification accuracy and prediction time; training was carried out on an NVIDIA GeForce GTX 1080 machine.
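As a concrete reference, the sketch below reconstructs the described architecture in Keras. The layer widths, LeakyReLU activations, batch normalization, pool size, dropout rate, and the 64/128/5 dense head follow the text; the kernel size of the later convolutions, the padding mode, and the L2 weight are assumptions, since the text does not state them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_model(n_csi=128, n_classes=5, l2_w=1e-3):
    """Sketch of the paper's Conv1D classifier; unstated details are assumed."""
    return models.Sequential([
        layers.Input(shape=(n_csi, 2)),               # 128 CSI values, 2 channels
        layers.Conv1D(64, 3, padding="same"), layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Conv1D(64, 3, padding="same"), layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Conv1D(30, 3, padding="same"), layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.AveragePooling1D(pool_size=2),
        layers.Conv1D(64, 3, padding="same"), layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Conv1D(30, 3, padding="same"), layers.LeakyReLU(),
        layers.BatchNormalization(),
        layers.Flatten(),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_w)),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(l2_w)),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax",
                     kernel_regularizer=regularizers.l2(l2_w)),
    ])
```

Calling `build_model().summary()` prints the resulting layer stack for verification against Figure 4.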

4.2. Dataset Generation

During the training process, five types of vehicular environments under rainy conditions are considered: Rural LOS, Urban LOS, Urban NLOS, Highway LOS, and Highway NLOS. Each rainy environment is modeled through a wireless channel based on the vehicular environment’s delay, gain, and Doppler frequency. Each environment is assigned a label corresponding to the class output of the convolutional neural network model, as shown in Table 2.
When constructing the dataset samples, a half-duplex vehicle-to-vehicle (V2V) communication system based on Orthogonal Frequency-Division Multiplexing (OFDM) technology was used. To simulate a diverse vehicular communication environment, the wireless channel models were simulated with MATLAB's V2V channel framework (version R2023a), as detailed in reference [27]. In the V2V channel model, we added a new multipath component to simulate the vehicular environment under rainy weather conditions. Numerous data packets conforming to the 802.11p standard were then transmitted through the various channel models. For each environment, packets were transmitted at different signal-to-noise ratios (SNR), with SNR values ranging from 15 dB to 40 dB in increments of 0.5 dB. This process was repeated 400 times per environment, using different realizations of the channel model to enhance the diversity and robustness of the data. During each transmission, we record the received packets and compute the Channel State Information (CSI), thereby extracting the 128 feature symbols corresponding to each environment.
The sequence feature $F_i$ used to store the CSI can be represented by the following formula:
$$F_i = \left[ A(1), A(2), \ldots, A(N) \right]$$
where $F_i$ is a CSI sample and $N = 128$. At the end of this process, the dataset contains the CSI samples, with 80% allocated to training data and the remaining 20% used as a validation dataset.
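A sketch of the corresponding dataset assembly and 80/20 split follows; the file names are hypothetical stand-ins for however the simulated CSI is stored.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.load("csi_features.npy")   # hypothetical file: (n_samples, 128, 2)
y = np.load("labels.npy")         # hypothetical file: labels 0..4 (Table 2)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)   # 80/20 split
```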

4.3. Choice of Loss Function and Its Impact

In this study, our CNN model is designed to identify different vehicular environments, which involves a typical classification problem. Therefore, choosing an appropriate loss function is crucial for ensuring effective model training and optimal performance.
For the multi-class classification problem, we have selected the Categorical Cross-Entropy Loss as it is highly effective in dealing with probability distributions across multiple categories. This loss function measures the disparity between the probability distribution predicted by the model and the actual distribution of the target labels [29]. The formula is given by:
$$L = -\sum_{c=1}^{M} y_{o,c} \log\left( p_{o,c} \right)$$
where $M$ is the total number of classes, $y_{o,c}$ is the true label for class $c$, and $p_{o,c}$ is the probability predicted by the model for class $c$.
The Categorical Cross-Entropy Loss function enhances classification accuracy by penalizing incorrect predictions and encouraging the model to lean toward the correct class probabilities. This loss function is particularly suitable for classification tasks as it provides a clear mathematical path aimed directly at optimizing classification accuracy.
Additionally, this loss function helps address issues of class imbalance by adjusting the weight of each class, which allows the model to pay more attention to minority classes during training, thus improving the model’s overall recognition capabilities across all categories.

5. Simulation Analysis

In order to verify the effectiveness and accuracy of the proposed model, a comprehensive evaluation was conducted on multiple datasets. To construct the test set, a series of data packets conforming to the 802.11p standard were transmitted using the different channel models. In each environment, the Signal-to-Noise Ratio (SNR) of the data packets was swept from 15 dB to 40 dB in steps of 0.25 dB. This process was repeated 30 times with different realizations of the channel model for each environment, resulting in 15,000 test sequences.
Before formally evaluating the proposed CNN model, it was trained using the categorical cross-entropy loss function and the Adam optimization algorithm. Once the model is trained and its performance is verified to be stable, it is not necessary to retrain it every time it is used. However, should there be significant changes in environmental conditions or new types of environments are introduced, it may be necessary to retrain or adjust the model.
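A sketch of that training configuration is shown below, reusing `build_model` and the data split from the earlier sketches; the one-hot encoding step is an assumption consistent with the categorical cross-entropy loss.

```python
import tensorflow as tf

model = build_model()   # from the architecture sketch above
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

y_train_1h = tf.keras.utils.to_categorical(y_train, num_classes=5)
y_val_1h = tf.keras.utils.to_categorical(y_val, num_classes=5)
history = model.fit(X_train, y_train_1h,
                    validation_data=(X_val, y_val_1h),
                    epochs=20, batch_size=50)
```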
In this study, the method of processing and inputting data into the Convolutional Neural Network (CNN) is a critical step. The CSI data, inherently complex-valued and containing channel gain and phase information between multiple antennas, must undergo specific preprocessing before being input into the CNN. To accommodate the CNN model, we map the real and imaginary parts of each complex number to two separate input channels. Specifically, the first channel handles the real part information, while the second channel processes the imaginary part information exclusively. After these processing steps, the data are input into the CNN as a one-dimensional (1D) array. Our CNN model features a one-dimensional convolutional layer (Conv1D) designed specifically to handle this format of data. Through this approach, the CNN can process the real and imaginary parts in parallel, effectively learning and extracting key features to identify different communication environments and conditions. This one-dimensional input method not only optimizes the learning efficiency of the model but also enhances processing speed and accuracy.
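A minimal sketch of this preprocessing step: each complex length-128 CSI vector becomes a (128, 2) real array, with the real part in channel 0 and the imaginary part in channel 1.

```python
import numpy as np

def csi_to_channels(csi_vector):
    """Map a complex CSI vector to an (N, 2) real array for the Conv1D input."""
    csi = np.asarray(csi_vector, dtype=np.complex64)
    return np.stack([csi.real, csi.imag], axis=-1)

example = csi_to_channels(np.exp(1j * np.linspace(0, np.pi, 128)))
print(example.shape)   # (128, 2)
```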

5.1. Model Evaluation

The confusion matrix, as an important tool for evaluating classification performance, not only intuitively displays the model’s identification capabilities across various categories but also provides detailed insights into the model’s misclassification in specific categories. The main diagonal of each matrix shows the true positives for each category, while the off-diagonal elements represent the model’s misjudgments, thus offering a dimension to evaluate the model’s detailed performance.
Figure 5 presents the confusion matrix for the proposed CNN model under a dual-channel configuration. The matrix reveals that the CNN model can reliably identify various vehicular environments in rainy conditions based on CSI values, achieving an overall test accuracy of 95.7%. Specifically, the model’s identification accuracies for H-NLOS, H-LOS, R-LOS, U-LOS, and U-NLOS environments are 95.2%, 96.1%, 97.3%, 95.3%, and 94.8%, respectively.
Additionally, the proposed CNN model was compared with an ANN model that includes four dense, fully connected layers to ensure compatibility with the number of environments to be identified. The comparative analysis also considers a Random Forest (RF) classifier with 100 decision trees, a K-Nearest Neighbors classifier (K-NN, with K set to five neighbors), Gaussian Naive Bayes (GNB), and a Support Vector Machine (SVM) with a linear kernel, as shown in Table 3.
To further demonstrate the performance of various methods in accurate classification, the confusion matrices of each method are detailed in Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10. These matrices provide a clear visual representation for evaluating and comparing the effectiveness of different models in environmental identification tasks. They not only intuitively display the predictive accuracy of classification models across various categories but also reveal the strengths and weaknesses of the models under specific conditions.
The analysis of the confusion matrices indicates that different machine learning models exhibit varied performance in the task of vehicular environment identification under rainy conditions. The ANN model performed reasonably well, with an accuracy rate of 81.9%, while the K-NN and RF models achieved around 80% accuracy in the H-NLOS and U-NLOS environments, demonstrating their effectiveness under specific conditions. However, these two models performed poorly in the H-LOS, R-LOS, and U-LOS environments, with accuracy rates below 65%, which may suggest insufficient feature learning in these settings. The GNB and SVM classifiers performed poorly across all test environments, which may imply limited adaptability to complex environment identification tasks.
The proposed convolutional neural network (CNN) model was compared with a series of advanced deep learning models, including ResNet50 [30], Xception [31], InceptionV3 [32], DenseNet201 [33], and MobileNetV2 [34]. To ensure fairness in comparison, all models were trained on the same training dataset, and their classification performance was evaluated on the test set. In response to the specific data input requirements of this study, the input layer shape of all models was adjusted accordingly, and the output layer structure was updated to include five categories, accurately matching the types of vehicular environments to be identified.
This approach ensured consistency in experimental setup across different models while also allowing each model to fully utilize its strengths in handling specific classification tasks. Through this comparison, we can more comprehensively assess the performance of the proposed CNN model in vehicular environment identification, as well as its relative advantages compared to other advanced models.
Since the aforementioned classic models are designed for two-dimensional (2D) inputs, a 2D channel matrix is used as the input feature rather than a one-dimensional (1D) channel vector. The dataset is therefore rearranged from 1D to 2D as follows:
$$H_{2D}^{\,n \times n} = \mathrm{Diag}\left( H_{1D}^{\,1 \times n} \right)$$
where $\mathrm{Diag}(\cdot)$ places a vector on the main diagonal of a square matrix, and $H_{2D}$ and $H_{1D}$ represent the channel matrix and the corresponding channel vector, respectively. The coefficients of these matrices and vectors are composed of CSI values, with $n = 128$, reflecting the number of CSI values estimated for each packet.
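In NumPy this rearrangement is a single call; the sketch below shows it for one placeholder CSI vector.

```python
import numpy as np

h_1d = np.random.randn(128) + 1j * np.random.randn(128)  # placeholder CSI
h_2d = np.diag(h_1d)        # H_2D = Diag(H_1D), shape (128, 128)
print(h_2d.shape)
```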
Table 4 provides a comparative analysis of the CNN model proposed in this paper with existing classic models in terms of test accuracy and the time required for environmental identification. The data from the table show that, compared to other models, our model has a significant advantage in prediction time. Specifically, the prediction times for ResNet50, Xception, and InceptionV3 are about 700 µs, while DenseNet201 and MobileNetV2 have prediction times of 1417 µs and 349 µs, respectively. In contrast, the prediction time for our study’s model is only 43.82 µs, which is significantly lower than the aforementioned models.
In terms of overall test accuracy, as shown in Table 4, the proposed model achieved an accuracy rate of 95.7%. This rate is higher than the test accuracies achieved by ResNet50, Xception, InceptionV3, DenseNet201, and MobileNetV2.
In summary, to validate the effectiveness of the proposed CNN method, we constructed a dataset that includes five types of rainy vehicular environments. Through extensive testing at different Signal-to-Noise Ratio (SNR) levels, our CNN model demonstrated outstanding performance in recognizing different rainy environments, with an accuracy rate of up to 95.7%. The proposed CNN model not only shows significant advantages in computational efficiency and identification accuracy but also proves the potential of our method in practical applications, providing a safer and more reliable driving parameter adjustment strategy for intelligent vehicles in rainy conditions.

5.2. Performance Overhead and Reliability

In this research, we focus on the environmental identification capabilities of autonomous vehicles under rainy conditions, particularly in scenarios where rapid response is crucial. The time sensitivity of autonomous driving systems when performing tasks necessitates that prediction times be optimized to the utmost. In such cases, the delay in prediction time is typically strictly limited to below the millisecond level to ensure that vehicles can operate safely and effectively execute tasks under varying and adverse weather conditions.
To meet this requirement, we propose a vehicle environment identification method based on deep learning, which uses Channel State Information (CSI) from vehicle-to-vehicle communication as input. Our CNN model demonstrates high-accuracy environmental identification capabilities within microsecond-level prediction times, far surpassing the limitations of traditional sensing systems under adverse weather conditions. This ensures that even in complex climatic conditions such as rain, intelligent vehicles can quickly and accurately adjust driving parameters, significantly enhancing the safety and reliability of the autonomous driving system. Moreover, the efficient performance of the CNN model means that in resource-constrained situations, it can minimize the consumption of energy and computational resources while ensuring task efficiency.

6. Conclusions

In this paper, we delve into the application of deep learning in the field of environmental identification for intelligent vehicles, especially under the unpredictable conditions of rainy weather. Our work not only demonstrates the potential of Channel State Information (CSI) in wireless communication but also proves the practicality of an efficient Convolutional Neural Network (CNN) model in environmental perception. This model, utilizing CSI values extracted from vehicle-to-vehicle (V2V) communication, successfully identified various rainy environments, providing precise parameter adjustment references for autonomous driving systems.
Furthermore, our research findings are significant for the autonomous behavior adjustment of intelligent vehicles under adverse weather conditions. By analyzing CSI data in real-time, intelligent vehicles can automatically adjust driving strategies, such as speed, significantly enhancing driving safety and reliability. This breakthrough not only optimizes the response mechanisms of autonomous driving systems but also provides a solid technical foundation for the future deployment of intelligent vehicles under a broader range of climatic conditions.
In summary, the contribution of this paper lies in proposing an innovative framework for rainy environment identification, combining wireless channel characteristics with advanced deep learning technologies. Our CNN model, by accurately capturing the subtle changes in wireless channels, provides reliable navigational support for intelligent vehicles under the most challenging weather conditions. This achievement not only enhances the adaptive capabilities of intelligent vehicles but also offers new perspectives and possibilities for the future direction of autonomous driving technology.

Author Contributions

Conceptualization, J.F. and X.L.; Methodology, X.L.; Formal analysis, H.F.; Investigation, H.F.; Data curation, X.L.; Writing—original draft, J.F. and X.L.; Writing—review & editing, J.F.; Supervision, J.F.; Project administration, J.F. and H.F.; Funding acquisition, J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The processed data required to reproduce these findings cannot be shared as the data also form part of an ongoing study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sogandares, F.M.; Fry, E.S. Absorption Spectrum (340–640 Nm) of Pure Water. I. Photothermal Measurements. Appl. Opt. 1997, 36, 8699–8709. [Google Scholar] [CrossRef]
  2. Carballo, A.; Lambert, J.; Monrroy, A.; Wong, D.; Narksri, P.; Kitsukawa, Y.; Takeuchi, E.; Kato, S.; Takeda, K. LIBRE: The Multiple 3D LiDAR Dataset. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1094–1101. [Google Scholar]
  3. Parno, B.; Perrig, A. Challenges in Securing Vehicular Networks. In Proceedings of the Workshop on Hot Topics in Networks (HotNets-IV), College Park, MD, USA, 14–15 November 2005; pp. 1–6. [Google Scholar]
  4. Andrey, J.; Yagar, S. A Temporal Analysis of Rain-Related Crash Risk. Accid. Anal. Prev. 1993, 25, 465–472. [Google Scholar] [CrossRef] [PubMed]
  5. Wang, J.; Shao, Y.; Ge, Y.; Yu, R. A Survey of Vehicle to Everything (V2X) Testing. Sensors 2019, 19, 334. [Google Scholar] [CrossRef]
  6. Tong, W.; Hussain, A.; Bo, W.X.; Maharjan, S. Artificial Intelligence for Vehicle-to-Everything: A Survey. IEEE Access 2019, 7, 10823–10843. [Google Scholar] [CrossRef]
  7. Boban, M.; Kousaridas, A.; Manolakis, K.; Eichinger, J.; Xu, W. Connected Roads of the Future: Use Cases, Requirements, and Design Considerations for Vehicle-to-Everything Communications. IEEE Veh. Technol. Mag. 2018, 13, 110–123. [Google Scholar] [CrossRef]
  8. Zhang, Y.; Carballo, A.; Yang, H.; Takeda, K. Perception and Sensing for Autonomous Vehicles under Adverse Weather Conditions: A Survey. ISPRS J. Photogramm. Remote Sens. 2023, 196, 146–177. [Google Scholar] [CrossRef]
  9. Fersch, T.; Buhmann, A.; Koelpin, A.; Weigel, R. The Influence of Rain on Small Aperture LiDAR Sensors. In Proceedings of the 2016 German Microwave Conference (GeMiC), Bochum, Germany, 14–16 March 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 84–87. [Google Scholar]
  10. Filgueira, A.; González-Jorge, H.; Lagüela, S.; Díaz-Vilariño, L.; Arias, P. Quantifying the Influence of Rain in LiDAR Performance. Measurement 2017, 95, 143–148. [Google Scholar] [CrossRef]
  11. Hasirlioglu, S.; Doric, I.; Lauerer, C.; Brandmeier, T. Modeling and Simulation of Rain for the Test of Automotive Sensor Systems. In Proceedings of the 2016 IEEE Intelligent Vehicles Symposium (IV), Gothenburg, Sweden, 19–22 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 286–291. [Google Scholar]
  12. Zang, S.; Ding, M.; Smith, D.; Tyler, P.; Rakotoarivelo, T.; Kaafar, M.A. The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car. IEEE Veh. Technol. Mag. 2019, 14, 103–111. [Google Scholar] [CrossRef]
  13. Reway, F.; Huber, W.; Ribeiro, E.P. Test Methodology for Vision-Based Adas Algorithms with an Automotive Camera-in-the-Loop. In Proceedings of the 2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES), Madrid, Spain, 12–14 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–7. [Google Scholar]
  14. Liu, Z.; Cai, Y.; Wang, H.; Chen, L.; Gao, H.; Jia, Y.; Li, Y. Robust Target Recognition and Tracking of Self-Driving Cars with Radar and Camera Information Fusion under Severe Weather Conditions. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6640–6653. [Google Scholar]
  15. Bijelic, M.; Gruber, T.; Mannan, F.; Kraus, F.; Ritter, W.; Dietmayer, K.; Heide, F. Seeing through Fog without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11682–11692. [Google Scholar]
  16. Mai, N.A.M.; Duthon, P.; Khoudour, L.; Crouzil, A.; Velastin, S.A. 3D Object Detection with SLS-Fusion Network in Foggy Weather Conditions. Sensors 2021, 21, 6711. [Google Scholar] [CrossRef] [PubMed]
  17. Quan, R.; Yu, X.; Liang, Y.; Yang, Y. Removing Raindrops and Rain Streaks in One Go. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 9147–9156. [Google Scholar]
  18. Ni, S.; Cao, X.; Yue, T.; Hu, X. Controlling the Rain: From Removal to Rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 6328–6337. [Google Scholar]
  19. Yue, Z.; Xie, J.; Zhao, Q.; Meng, D. Semi-Supervised Video Deraining with Dynamical Rain Generator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 642–652. [Google Scholar]
  20. Zhang, Z.; Ma, H. Multi-Class Weather Classification on Single Images. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 4396–4400. [Google Scholar]
  21. Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1527–1534. [Google Scholar]
  22. Šabanovič, E.; Žuraulis, V.; Prentkovskis, O.; Skrickij, V. Identification of Road-Surface Type Using Deep Neural Networks for Friction Coefficient Estimation. Sensors 2020, 20, 612. [Google Scholar] [CrossRef] [PubMed]
  23. Wolcott, R.W.; Eustice, R.M. Robust LIDAR Localization Using Multiresolution Gaussian Mixture Maps for Autonomous Driving. Int. J. Robot. Res. 2017, 36, 292–319. [Google Scholar] [CrossRef]
  24. Ribouh, S.; Phan, K.; Malawade, A.V.; Elhillali, Y.; Rivenq, A.; Al Faruque, M.A. Channel State Information-Based Cryptographic Key Generation for Intelligent Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2020, 22, 7496–7507. [Google Scholar] [CrossRef]
  25. Anwar, W.; Franchi, N.; Fettweis, G. Physical Layer Evaluation of V2X Communications Technologies: 5G NR-V2X, LTE-V2X, IEEE 802.11 Bd, and IEEE 802.11 p. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, HI, USA, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–7. [Google Scholar]
  26. Wan, J.; Lopez, A.B.; Al Faruque, M.A. Exploiting Wireless Channel Randomness to Generate Keys for Automotive Cyber-Physical System Security. In Proceedings of the 2016 ACM/IEEE 7th International Conference on Cyber-Physical Systems (ICCPS), Vienna, Austria, 11–14 April 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–10. [Google Scholar]
  27. Alexander, P.; Haley, D.; Grant, A. Cooperative Intelligent Transport Systems: 5.9-GHz Field Trials. Proc. IEEE 2011, 99, 1213–1235. [Google Scholar] [CrossRef]
  28. Zhao, Z.; Zhang, M.; Wu, Z. Analytic Specific Attenuation Model for Rain for Use in Prediction Methods. Int. J. Infrared Millim. Waves 2001, 22, 113–120. [Google Scholar] [CrossRef]
  29. Glorot, X.; Bengio, Y. Understanding the Difficulty of Training Deep Feedforward Neural Networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; JMLR Workshop and Conference Proceedings. pp. 249–256. [Google Scholar]
  30. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity Mappings in Deep Residual Networks. In Lecture Notes in Computer Science, Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part IV 14; Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645. [Google Scholar]
  31. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1251–1258. [Google Scholar]
  32. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  33. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708. [Google Scholar]
  34. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.-C. Mobilenetv2: Inverted Residuals and Linear Bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
Figure 1. Vehicular environment identification process.
Figure 2. IEEE 802.11p PHY layer frame structure.
Figure 3. Flow chart describing vehicular environment identification process.
Figure 4. Proposed CNN Model.
Figure 5. The confusion matrix of the CNN model.
Figure 6. The confusion matrix of the ANN model.
Figure 7. The confusion matrix of the KNN model.
Figure 8. The confusion matrix of the RF model.
Figure 9. The confusion matrix of the GNB model.
Figure 10. The confusion matrix of the SVM model.
Table 1. Vehicular environment characteristics.

Environment   Tap     Power [dB]   Delay [ns]   Doppler [Hz]
U-LOS         Tap 1        0            0             0
              Tap 2       −8          117           236
              Tap 3      −10          183          −157
              Tap 4      −15          333           492
U-NLOS        Tap 1        0            0             0
              Tap 2       −3          267           295
              Tap 3       −4          400           −98
              Tap 4      −10          533           591
R-LOS         Tap 1        0            0             0
              Tap 2      −14           83           492
              Tap 3      −17          183          −295
H-NLOS        Tap 1        0            0             0
              Tap 2       −2          200           689
              Tap 3       −5          433          −492
              Tap 4       −7          700           886
H-LOS         Tap 1        0            0             0
              Tap 2      −10          100           689
              Tap 3      −15          167          −492
              Tap 4      −20          500           886
Table 2. Vehicular environment labels and speed limits.

Vehicular Environment   Label   Speed Limit (km/h)
Highway NLOS              0           110
Highway LOS               1           110
Rural LOS                 2            70
Urban LOS                 3            30
Urban NLOS                4            30
Table 3. Classification accuracy and average prediction time comparison.

Approach     Accuracy (%)   Prediction Time (µs)
Our Model        95.7              43.82
ANN              81.9              27.45
RF               65.6              36.27
K-NN             66.8            9723
GNB              26.5               5.79
SVM              35.2          16,724
Table 4. Comparison between our model and state-of-the-art alternatives.

Approach            Accuracy (%)   Prediction Time (µs)
Our Model               95.7              43.82
ResNet50 [30]           86.7             762
Xception [31]           90.5             709
InceptionV3 [32]        88.3             698
DenseNet201 [33]        91.2            1417
MobileNetV2 [34]        76.3             349