1. Introduction
Environmental perception is the core of autonomous vehicles. Autonomous vehicles, also known as driverless cars, are vehicles capable of perceiving their surroundings and navigating safely with little or no human input [1]. Although Autonomous Driving Systems (ADS) promise to improve traffic safety, reduce congestion, lower transportation costs, and benefit the environment, incidents and casualties involving ADS are still on the rise. For the advantages of autonomous vehicles to gain wider acceptance, it is imperative to address the pressing issues currently faced by ADS, especially the capability of environmental identification under adverse weather conditions [2].
Vehicular networks, through intelligent technology, enable efficient information exchange between vehicles, infrastructure, pedestrians, and network services, creating an intelligent transportation system [3]. In this system, accurately identifying the environment around the vehicle is crucial for the operation of intelligent vehicles. Intelligent vehicles must adjust their speed based on road conditions, traffic congestion, and the specific areas they are in, such as urban or rural regions. Precise environmental identification becomes even more important under adverse weather conditions such as heavy rain: statistics show that the risk of accidents in rainy weather is 70% higher than in normal conditions [4].
The introduction of V2X (Vehicle-to-Everything) communication technology further enhances the functionality of vehicular networks. V2X communication is not just a part of vehicular networks; it also expands the sensory range of vehicles, enabling autonomous vehicles to communicate in real time with surrounding vehicles, infrastructure, and pedestrians. This capability of real-time information sharing and perception allows autonomous vehicles to understand their environment more accurately, leading to more rational decision-making and action planning. This is of significant importance for achieving higher levels of autonomous driving and enhancing traffic safety and efficiency [5,6,7].
Rainy conditions pose additional challenges to the operation of intelligent vehicles, as weather directly alters the environmental state, further complicating the task of environmental identification by ADS [8]. Therefore, this paper proposes a new method for vehicle environmental identification under rainy weather conditions that does not require specific sensors. It exploits the channel characteristics of Cooperative Awareness Messages (CAM) exchanged between vehicles and uses them to identify the vehicle's environment. This enables vehicles to automatically determine an appropriate driving speed, thereby enhancing the safety and reliability of ADS in rainy weather conditions, as shown in Figure 1.
The main contributions of our work are as follows:
A wireless vehicular communication network has been established. Considering the impact of raindrop scattering on the vehicle-to-vehicle (V2V) channel, an innovative approach that adds a multipath component has been proposed to simulate the channel characteristics of vehicular environments in rainy conditions. This new multipath component specifically represents the effect of raindrop scattering and is characterized by path delay, path gain, and Doppler shift parameters. An equalization strategy for the receiver end of OFDM systems has also been proposed. It adjusts the signal for each subcarrier to counteract channel distortion, thereby enhancing the reliability of the signal.
A deep learning-based method for rainy environment identification in vehicles is proposed. This method does not rely on specific sensors but utilizes the wireless channel characteristics shared among vehicles in the vehicular network for environmental perception. We use the Channel State Information (CSI), estimated from Cooperative Awareness Messages (CAM) exchanged in vehicle-to-vehicle (V2V) communication, as input features, which are then fed into the proposed Convolutional Neural Network (CNN) model. This enables the model to reliably identify the surrounding environment of the vehicle based on the channel characteristics of different rainy vehicular environments.
The rest of this paper is organized as follows: Section 2 introduces related research. Section 3 describes our wireless communication vehicular network. Section 4 presents our rainy environment identification method for autonomous vehicles. Section 5 describes the performance evaluation of the proposed method. Finally, Section 6 presents the conclusions.
2. Related Work
Perception and sensing in adverse weather conditions remain a challenge for autonomous vehicles seeking higher levels of automation. Because the impacts that weather poses to ADS sensors are significant, this section focuses on solutions for dealing with adverse weather conditions.
Weather challenges have always been an obstacle to the deployment of ADS, and it is necessary to recognize their impacts on sensors. LiDAR is one of the core perception sensors in the field of autonomous driving. Although 3D LiDAR has been mounted on cars for little more than a decade, it has already proven indispensable in ADAS (advanced driver-assistance systems), offering high measurement accuracy and sensing that is independent of illumination. Fersch et al. [
9] studied the impact of rain on pulsed LiDAR systems with small apertures and narrow laser beam cross-sections. Filgueira et al. [
10] quantified the impact of rainfall on different LiDAR parameters: range, intensity, and number of detection points. Hasirlioglu et al. [
11] researched the impact of rain on automotive laser scanner sensor systems and introduced a new theoretical model based on a hierarchical model to describe the impact of rain on sensor behavior. Regarding radar sensors, the automotive radar system consists of a transmitter and a receiver. Zang et al. [
12] described the impact of rainfall on millimeter-wave radar, considering rain attenuation and backscatter effects. As for cameras, the camera is one of the most widely used sensors in perception tasks but also one of the most vulnerable in adverse weather: no matter how high its resolution, a camera in the rain is easily impaired by a single drop of water on its cover or lens. Reway et al. [
13] proposed a camera-in-the-loop method to evaluate the performance of object detection algorithms under different weather conditions.
With the widespread use of machine learning and the rapid development of powerful sensors, multi-sensor modes and additional sensor components are used to help mitigate the impact of weather. A single sensor does not provide sufficient safety assurance for navigation under adverse weather conditions. Liu et al. [
14] utilized multi-sensor information fusion technology to improve the perception accuracy and reliability of autonomous vehicles in adverse weather conditions. The fusion scheme uses millimeter-wave radar as the main sensor and a monocular camera as an auxiliary sensor. Bijelic et al. [
15] developed a robust fusion model for multi-modal sensor inputs under unseen adverse weather conditions, laying the foundation for the development and evaluation of future autonomous vehicle technologies. Mai et al. [
16] proposed a three-dimensional (3D) object detection method based on a post-fusion architecture that performs well in foggy conditions.
Sensors are key to the safe navigation of autonomous vehicles in adverse weather; enhancing their perception capabilities is crucial. Quan et al. [
17] proposed a Complementary Cascade Network (CCN) architecture capable of uniformly removing raindrops and streaks from images, introducing a new real-world rain dataset, RainDS. Ni et al. [
18] introduced a rain intensity control network, RICNet, which can achieve bidirectional control of rain intensity from clear to downpour images while preserving the characteristics of specific scenes. Yue et al. [
19] introduced a new semi-supervised video deraining method that processes labeled synthetic data and unlabeled real data using a dynamic rain generator and different prior formats. This method has successfully achieved better deraining effects on real datasets.
The perception capabilities of intelligent vehicles are not limited to object identification; they also encompass a comprehensive understanding of their location and surrounding environment, especially the ability to accurately classify and locate in adverse weather conditions, which is crucial for ensuring the safe operation of the vehicle. Zhang and Ma [
20] discussed the challenging task of multi-class weather classification from a single image in outdoor computer vision applications. Heinzler et al. [
21] achieved quite precise weather classification using only multi-echo LiDAR sensors. The point cloud is first converted into a grid matrix, and the presence of rain or fog can be easily observed through the appearance of secondary echoes on objects. Šabanovič et al. [
22] estimated road friction coefficients by constructing a vision-based Deep Neural Network (DNN), as surfaces with different frictional force reductions, such as dry, slippery, muddy, and icy, can essentially be identified as clear, rainy, snowy, and icy weather, respectively. Their algorithm not only detects wet conditions but also classifies combinations of wet conditions and road types. Wolcott and Eustice [
23] proposed a fast and robust multi-resolution scan-matching algorithm for local vehicle positioning in autonomous driving. The algorithm solves the weather problem of vehicle positioning in autonomous driving by introducing Gaussian mixture maps, multi-resolution inference, and global optimization search, improving robustness under adverse conditions.
Almost all of the aforementioned methods fundamentally rely on specific types of sensors, such as cameras, millimeter-wave radars, and LiDAR. These sensors primarily collect high-dimensional measurement data in image and video formats, whose processing is not only cumbersome but also significantly energy-consuming. Moreover, meteorological conditions have a direct impact on the environmental state, thereby weakening the perception capabilities of sensors in autonomous driving systems and significantly increasing the difficulty of completing key perception tasks such as target detection and identification.
To overcome the limitations of traditional sensors in adverse weather conditions, a novel deep learning-based method for rainy environment identification is proposed in this paper. This method is designed for autonomous vehicles and allows for environmental perception without relying on specific sensors by utilizing the wireless channel characteristics shared among vehicles in the vehicular network. The core of this method is the Channel State Information (CSI), which is the most accurate indicator of wireless channel characteristics [
24]. We use the CSI values estimated from CAM exchanged in Vehicle to Vehicle (V2V) communication as input features for the proposed convolutional neural network model. After training, the model can reliably identify the surrounding environment based on the channel characteristics of different rainy environments. Thus, intelligent vehicles can adjust their driving parameters according to the identified environmental information to adapt to rainy conditions, thereby ensuring driving safety.
3. Wireless Communication Vehicle Network
Firstly, a wireless communication vehicular network is constructed. In this network, each vehicle is equipped with a half-duplex transmitter/receiver, enabling communication with other vehicles while limiting the ability to send and receive signals simultaneously. This half-duplex communication mechanism is crucial in dense vehicular environments as it helps reduce channel congestion and signal interference. In this network, vehicles process the received CAM characterized by Channel State Information (CSI) in a wireless channel. CSI provides a precise characterization of the wireless channel and contains detailed information about the signal propagation path, such as signal attenuation, phase changes, and time delay characteristics.
The proposed V2X network operates based on the IEEE 802.11p standard. The main physical layer (PHY) of the IEEE 802.11p standard is based on Orthogonal Frequency-Division Multiplexing (OFDM) waveforms [
25].
Figure 2 illustrates the frame structure of the Physical Protocol Data Unit (PPDU) specified by the IEEE 802.11p standard.
The vehicular wireless channel is constructed as a doubly selective fading propagation channel: multipath effects cause the signal to experience varying degrees of fading at different times and frequencies, while the Doppler effect shifts the signal's frequency as the relative speed of the vehicles changes.
In the high-speed moving vehicular communication environment, the time-varying nature of multipath effects poses a significant challenge to signal transmission quality. To adapt to this dynamic change, the baseband time-varying response of the multipath channel is represented as follows:

$h(\tau, t) = \sum_{i=1}^{L} a_i(t)\, e^{j 2\pi f_{D,i} t}\, \delta\big(\tau - \tau_i(t)\big)$

where $L$ represents the total number of non-zero paths, $a_i(t)$ is the i-th time-varying complex amplitude, and $\tau_i(t)$ denotes the i-th time-varying path delay. The expression $e^{j 2\pi f_{D,i} t}$ indicates the i-th phase shift due to the Doppler frequency shift, where $f_{D,i}$ is the Doppler frequency shift caused by the relative velocity change between the signal transmitter and receiver.
It is noteworthy that the phase of the complex amplitude is affected by the Doppler effect. As the signal propagates through the channel, a Doppler frequency shift may occur on each independent path. By incorporating the factor $e^{j 2\pi f_{D,i} t}$, it is possible to accurately describe and compensate for the phase changes caused by the Doppler effect during signal propagation, thereby enabling a more accurate recovery of the transmitted signal.
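The time-varying multipath behavior described above can be sketched numerically. The snippet below evaluates the complex amplitude of each path at two instants: magnitudes stay fixed while Doppler-shifted paths rotate in phase. All path gains, delays, and Doppler values are illustrative, not measured.

```python
import numpy as np

# Illustrative per-path parameters (not measured values):
path_gains = np.array([1.0, 0.5, 0.25])        # |a_i|, linear scale
path_delays = np.array([0.0, 0.4e-6, 0.7e-6])  # tau_i in seconds
doppler_hz = np.array([0.0, 300.0, -150.0])    # f_Di in Hz

def channel_taps(t):
    """Complex amplitude of each path at time t (seconds):
    a_i * exp(j*2*pi*f_Di*t)."""
    return path_gains * np.exp(1j * 2 * np.pi * doppler_hz * t)

taps_now = channel_taps(0.0)
taps_later = channel_taps(1e-3)
# The static path (f_Di = 0) is unchanged; the others rotate in phase
# while their magnitudes remain equal to path_gains.
```

The delays would enter when convolving a transmitted waveform with the taps; here only the Doppler-induced phase rotation is shown.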
It is assumed that the channel characteristics are stationary within a coherence time $T_c$, which is inversely proportional to the maximum Doppler frequency shift $f_{D,\max}$ [26]:

$T_c \approx \frac{1}{f_{D,\max}}$
In vehicle communication, the Doppler frequency shift $f_D$ can be represented by the velocity difference $\Delta v$ between two communicating vehicles as follows:

$f_D = \frac{\Delta v}{c}\, f_c$

where $c$ represents the speed of light and $f_c$ denotes the central frequency of the communication system.
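As a quick sanity check on the Doppler and coherence-time relations above, the snippet below evaluates them for an assumed 5.9 GHz carrier (the 802.11p band) and a relative speed of 30 m/s; both values are illustrative.

```python
C = 299_792_458.0   # speed of light, m/s
FC = 5.9e9          # assumed 802.11p carrier frequency, Hz

def doppler_shift_hz(delta_v_mps):
    """f_D = (delta_v / c) * f_c for a relative speed in m/s."""
    return delta_v_mps / C * FC

# Two vehicles with a relative speed of 30 m/s (108 km/h):
fd = doppler_shift_hz(30.0)     # about 590 Hz
coherence_time_s = 1.0 / fd     # roughly 1.7 ms
```

A coherence time on the order of milliseconds is what motivates per-packet channel estimation in high-mobility V2V links.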
If the channel's coherence bandwidth is greater than the bandwidth of the signal itself, the channel exhibits flat fading characteristics. Conversely, if the coherence bandwidth is less than the signal bandwidth, the channel exhibits frequency-selective fading, which causes intersymbol interference in the time domain.
In vehicular networks, V2X communication scenarios significantly impact the propagation of electromagnetic waves, and these factors are key determinants in channel model construction. The channel model is the foundation of wireless communication system design and must consider specific factors under various vehicular environments, such as urban congestion, highway travel, tunnel crossing, and suburban areas. Each environment exhibits unique channel characteristics, including multipath effects, shadowing losses, reflection, and scattering.
To accurately model the channel characteristics in these environments, five main vehicular environments [
27], namely Rural Line of Sight (Rural LOS), Urban Line of Sight (Urban LOS), Urban Non-Line of Sight (Urban NLOS), Highway Line of Sight (Highway LOS), and Highway Non-Line of Sight (Highway NLOS) are considered. The channel models for these environments are based on characteristics such as power attenuation, time delay spread, and Doppler frequency shift. Power attenuation describes the energy loss during signal propagation, time delay spread reflects the signal delay variation caused by multipath propagation, and the Doppler frequency shift characterizes the frequency variation due to the change in the relative position of moving vehicles. The comprehensive consideration of these characteristics allows the channel model to more accurately reflect the actual communication environment, providing a solid theoretical foundation for the design of wireless communication systems. These vehicular environments are shown in
Table 1.
Under rainy conditions, the characteristics of the V2X channel indeed undergo significant changes. To accurately simulate these changes in the V2V channel model, a new multipath component is introduced. This component contains independent parameters for gain, delay, and Doppler frequency shift to reflect the specific variations in signal propagation under rainy conditions. Specifically, raindrops may cause additional attenuation and scattering of the signal, affecting its gain and delay. Moreover, rainy conditions may also change the moving speed of vehicles, thereby affecting the Doppler frequency shift. By precisely adjusting these parameters, the vehicle communication performance under rainy weather conditions can be simulated, providing theoretical support for the stable operation of autonomous driving systems in rainy environments.
Building on the existing multipath components in each vehicular communication environment, new multipath components are introduced to simulate the channel characteristics under rainy conditions. The path gain can be calculated by considering the attenuation effect of raindrops on electromagnetic waves, which typically involves computing the impact of raindrop scattering and absorption:

$G_{\text{rain}} = G_0 - \gamma\, R\, d$

where $G_0$ is the gain under no-rain conditions, $\gamma$ is the rain attenuation coefficient, $R$ is the rainfall rate (mm/h), and $d$ is the propagation distance (km).
Under no-rain conditions, the gain $G_0$ can be estimated based on the free-space propagation model. This model assumes that the signal propagates in a straight line between two points without any obstructions. The gain calculation formula is as follows:

$G_0 = \left(\frac{\lambda}{4\pi d}\right)^{2}$

where $\lambda$ is the wavelength of the signal and $d$ is the propagation distance.
The specific attenuation caused by rain can be quantified using the ITU model, which provides an empirical relationship based on extensive measurements and studies. The model calculates the specific attenuation $\gamma_R$ as a function of the rain rate $R$ (in mm/h) and the frequency of the signal (GHz) [28]. The formula for specific attenuation is generally expressed as follows:

$\gamma_R = k\, R^{\alpha}$

where $k$ reflects the attenuation efficiency per unit rain rate; it increases with frequency, indicating that higher frequencies are more susceptible to rain attenuation. The exponent $\alpha$ describes the non-linear effect of rain rate on attenuation; it also varies with frequency and generally decreases as frequency increases, suggesting that the effect of increasing rain rate on higher frequencies is less pronounced than on lower frequencies. $R$ is the rate at which rain falls, typically measured in mm/h; it is a direct measure of the intensity of the precipitation and a critical factor in determining the extent of signal attenuation.
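The power-law attenuation model above is straightforward to evaluate. In the sketch below, the coefficients `k` and `alpha` are placeholders chosen for illustration, not values taken from the official ITU-R tables, which are frequency-dependent.

```python
def specific_attenuation_db_per_km(rain_rate_mmh, k, alpha):
    """ITU-style power law gamma_R = k * R**alpha, in dB/km.
    k and alpha are frequency-dependent; callers must supply them."""
    return k * rain_rate_mmh ** alpha

# Torrential rain (R > 50 mm/h) over a 0.2 km V2V link, using
# assumed coefficients for a sub-6 GHz carrier (illustrative only):
gamma_r = specific_attenuation_db_per_km(50.0, k=0.0008, alpha=1.3)
extra_loss_db = gamma_r * 0.2   # total rain loss over the link, in dB
```

At sub-6 GHz carriers the rain loss over short V2V distances is small in absolute terms, which is consistent with the model's emphasis on higher frequencies being more susceptible.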
Delay is usually related to the length of the multipath propagation path. Rainy conditions may cause changes in the propagation path, thereby affecting the delay. If the additional scattering path caused by raindrops is considered, the delay can be expressed as follows:

$\tau_{\text{rain}} = \tau_0 + \Delta\tau$

where $\tau_0$ is the delay under no-rain conditions, and $\Delta\tau$ is the additional delay caused by raindrop scattering.
The delay $\tau_0$ under no-rain conditions is related to the straight-line distance of signal propagation. It can be calculated using the following formula:

$\tau_0 = \frac{d}{c}$

where $c$ is the speed of light, and $d$ is the propagation distance.
The additional delay $\Delta\tau$ can be estimated by measuring the difference in signal propagation time between rainy and clear conditions. The formula is as follows:

$\Delta\tau = \frac{(n - 1)\, d}{c}$

where $d$ is the propagation distance, $c$ is the speed of light, and $n$ is the refractive index of raindrops.
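The two delay terms can be computed directly. In the sketch below, the link distance and the effective refractive index of the rain-filled path are illustrative assumptions.

```python
C = 299_792_458.0  # speed of light, m/s

def clear_air_delay_s(distance_m):
    """tau_0 = d / c: straight-line propagation delay with no rain."""
    return distance_m / C

def rain_excess_delay_s(distance_m, refractive_index):
    """delta_tau = (n - 1) * d / c: extra delay when the path passes
    through a medium with effective refractive index n > 1."""
    return (refractive_index - 1.0) * distance_m / C

tau0 = clear_air_delay_s(200.0)             # ~0.67 microseconds over 200 m
dtau = rain_excess_delay_s(200.0, 1.0003)   # n here is illustrative only
```

The excess delay is orders of magnitude smaller than the clear-air delay, so its effect appears mainly through the added scattering path rather than the direct path.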
The calculation of the Doppler frequency shift can use the standard Doppler effect formula, taking into account the speed changes that may be caused by rainy conditions:

$f_{D,\text{rain}} = \frac{v_{\text{rain}}}{\lambda} \cos\theta$

where $f_{D,\text{rain}}$ is the Doppler frequency shift under rainy conditions, $v_{\text{rain}}$ is the vehicle speed in the rain, $\lambda$ is the wavelength of the signal, and $\theta$ is the angle of arrival of the signal.
The Signal-to-Noise Ratio (SNR) is defined as the ratio of the effective signal power to the noise power, and it is a key metric for evaluating the performance of communication systems. The formula for calculating SNR is as follows:

$\mathrm{SNR} = 10 \log_{10}\!\left(\frac{P_s}{P_n}\right)$

where $P_s$ represents the average power of the received signal and $P_n$ represents the total noise power. By accurately identifying these sources of noise and precisely calculating the total noise power, the reliability and accuracy of the SNR calculation can be ensured under various rainy conditions. This not only aids in assessing the performance of the communication systems but also ensures stable operation in complex environments.
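The decibel form of the SNR definition is a one-liner; the power values in the usage example are illustrative.

```python
import math

def snr_db(signal_power_w, noise_power_w):
    """SNR in decibels: 10 * log10(P_s / P_n)."""
    return 10.0 * math.log10(signal_power_w / noise_power_w)

# A 1 microwatt received signal over 1 nanowatt of total noise:
example = snr_db(1e-6, 1e-9)   # 30.0 dB
```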
To capture the impact of raindrop scattering on V2V channel performance, the new multipath component carries its own gain, delay, and Doppler frequency shift parameters, which reflect in detail the changes in signal propagation under rainy conditions. By adjusting these parameters, the five vehicular environments under rainy weather conditions can be accurately simulated on the basis of channel modeling characteristics such as power attenuation, time delay spread, and Doppler frequency shift. During the training process, particular attention was given to vehicular environments under torrential rain conditions, defined as a rainfall rate exceeding 50 mm/h. Such conditions significantly affect wireless signals, causing notable signal attenuation and multipath effects.
Due to the high mobility of vehicular environments, the transmitted messages are affected by the wireless channel. The signal received on the vehicular wireless channel can be represented as follows:

$y_k = h_k\, x_k + n_k$

where $x_k$ represents the k-th transmitted data symbol, $n_k$ is the current noise, and $h_k$ represents the k-th wireless channel response, which is described by the CSI. Channel estimation at the receiver end is performed to calculate the CSI, which is crucial for recovering the transmitted data.
In vehicular network communication, the Least Squares (LS) estimator is one of the most commonly used channel estimation methods, widely applied in the industrial implementation of V2X communication due to its low complexity. The mathematical expression for the LS estimator can be represented as follows:

$\hat{h}_{\mathrm{LS}} = \arg\min_{h} \left\| y - x\, h \right\|^{2}$

where $\|\cdot\|$ is the $\ell_2$ norm, $x$ is the long training sequence vector, and $y$ represents the corresponding observation vector.
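With a known training symbol on every subcarrier, the LS solution reduces to a per-subcarrier division, $\hat{h}_k = y_k / x_k$. The sketch below simulates this; the subcarrier count matches the 52 active subcarriers of 802.11p, while the channel and noise statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sc = 52                                    # active subcarriers in 802.11p
x = rng.choice([-1.0, 1.0], size=n_sc) + 0j  # known BPSK long-training symbols
h_true = (rng.standard_normal(n_sc)
          + 1j * rng.standard_normal(n_sc)) / np.sqrt(2)
noise = 0.01 * (rng.standard_normal(n_sc)
                + 1j * rng.standard_normal(n_sc))
y = h_true * x + noise                       # y_k = h_k x_k + n_k

h_ls = y / x                                 # per-subcarrier LS estimate
max_err = np.max(np.abs(h_ls - h_true))      # bounded by the noise level
```

Because the training symbols have unit magnitude, the estimation error equals the noise sample on each subcarrier, which is why LS is cheap but noise-limited.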
Due to multipath propagation and the Doppler effect, the received signal often becomes distorted, which is particularly problematic for OFDM-based systems. To ensure the reliability and efficiency of data transmission, it is necessary to introduce equalization strategies at the receiver end to counteract these distortions. The proposed equalization strategy aims to optimize the received signal using channel estimation information, thereby effectively suppressing multipath and Doppler effects in both the frequency and time domains. The adjustment based on channel estimation values is as follows:

$\hat{h}_k^{\mathrm{eq}} = \frac{\hat{h}_k^{*}}{|\hat{h}_k|^{2} + N_0 / E_s}$

where $\hat{h}_k^{\mathrm{eq}}$ is the k-th equalized channel response, $\hat{h}_k$ is the k-th original channel response, $N_0$ is the current noise power spectral density, and $E_s$ is the current symbol energy.
This equalization strategy optimizes the signal recovery process by considering the signal’s energy and noise level, thereby improving the quality of the received signal and the overall performance of communication. At the receiver end of an OFDM system, the channel response is first obtained through the Least Squares method of channel estimation. Subsequently, Formula (11) is applied to adjust the signal of each subcarrier to counteract channel distortion and improve the reliability of the signal.
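The per-subcarrier adjustment can be sketched as below. The MMSE-style weighting `conj(h)/(|h|^2 + N0/Es)` is an assumed concrete form for illustration: it inverts the channel while avoiding noise amplification on weak subcarriers. The toy two-tap channel and BPSK symbols are likewise illustrative.

```python
import numpy as np

def equalize_subcarriers(y, h_est, n0, es):
    """Scale each received subcarrier by conj(h)/(|h|^2 + N0/Es)
    to counteract channel distortion (assumed MMSE-style form)."""
    w = np.conj(h_est) / (np.abs(h_est) ** 2 + n0 / es)
    return w * y

# Recover BPSK symbols sent through a known toy channel:
h = np.array([0.9 + 0.3j, 0.2 - 0.1j])   # illustrative channel responses
x = np.array([1.0 + 0j, -1.0 + 0j])      # transmitted BPSK symbols
y = h * x                                 # noiseless received subcarriers
x_hat = equalize_subcarriers(y, h, n0=0.01, es=1.0)
decisions = np.sign(x_hat.real)           # matches the transmitted signs
```

Note how the weak second subcarrier is scaled back toward unit magnitude without dividing by a near-zero channel gain.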
5. Simulation Analysis
In order to verify the effectiveness and accuracy of the proposed model, a comprehensive evaluation was conducted on multiple datasets. To construct the test set, a series of data packets conforming to the 802.11p standard was transmitted using the different channel models. In each environment, the Signal-to-Noise Ratio (SNR) of the data packets ranged from 15 dB to 40 dB, adjusted in steps of 0.25 dB. This process was repeated 30 times with different realizations of the channel model for each environment, resulting in 15,000 test sequences.
Before formally evaluating the proposed CNN model, it was trained using the categorical cross-entropy loss function and the Adam optimization algorithm. Once the model is trained and its performance is verified to be stable, it is not necessary to retrain it every time it is used. However, should there be significant changes in environmental conditions or new types of environments are introduced, it may be necessary to retrain or adjust the model.
In this study, the method of processing and inputting data into the Convolutional Neural Network (CNN) is a critical step. The CSI data, inherently complex-valued and containing channel gain and phase information between multiple antennas, must undergo specific preprocessing before being input into the CNN. To accommodate the CNN model, we map the real and imaginary parts of each complex number to two separate input channels. Specifically, the first channel handles the real part information, while the second channel processes the imaginary part information exclusively. After these processing steps, the data are input into the CNN as a one-dimensional (1D) array. Our CNN model features a one-dimensional convolutional layer (Conv1D) designed specifically to handle this format of data. Through this approach, the CNN can process the real and imaginary parts in parallel, effectively learning and extracting key features to identify different communication environments and conditions. This one-dimensional input method not only optimizes the learning efficiency of the model but also enhances processing speed and accuracy.
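The real/imaginary two-channel preprocessing described above can be sketched with a small helper; the sample CSI vector below is synthetic and for illustration only.

```python
import numpy as np

def csi_to_cnn_input(csi):
    """Map a complex CSI vector to a (length, 2) real array:
    channel 0 carries the real parts, channel 1 the imaginary parts,
    matching a two-channel Conv1D input."""
    csi = np.asarray(csi, dtype=np.complex64)
    return np.stack([csi.real, csi.imag], axis=-1)

# 128 CSI values per packet (synthetic unit-modulus example):
sample = csi_to_cnn_input(np.exp(1j * np.linspace(0.0, np.pi, 128)))
# sample has shape (128, 2); a batch of such arrays feeds the Conv1D layer.
```

Splitting the complex values into two real channels preserves both gain and phase information while keeping the tensor purely real, as most deep learning frameworks require.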
5.1. Model Evaluation
The confusion matrix, as an important tool for evaluating classification performance, not only intuitively displays the model’s identification capabilities across various categories but also provides detailed insights into the model’s misclassification in specific categories. The main diagonal of each matrix shows the true positives for each category, while the off-diagonal elements represent the model’s misjudgments, thus offering a dimension to evaluate the model’s detailed performance.
Figure 5 presents the confusion matrix for the proposed CNN model under a dual-channel configuration. The matrix reveals that the CNN model can reliably identify various vehicular environments in rainy conditions based on CSI values, achieving an overall test accuracy of 95.7%. Specifically, the model’s identification accuracies for H-NLOS, H-LOS, R-LOS, U-LOS, and U-NLOS environments are 95.2%, 96.1%, 97.3%, 95.3%, and 94.8%, respectively.
Additionally, the proposed CNN model was compared with an ANN model that includes four dense, fully connected layers to ensure compatibility with the number of environments to be identified. The comparative analysis also considers a Random Forest (RF) classifier with 100 decision trees, a K-Nearest Neighbors classifier (K-NN, with K set to five neighbors), Gaussian Naive Bayes (GNB), and a Support Vector Machine (SVM) with a linear kernel, as shown in
Table 3.
To further demonstrate the performance of various methods in accurate classification, the confusion matrices of each method are detailed in
Figure 5,
Figure 6,
Figure 7,
Figure 8,
Figure 9 and
Figure 10. These matrices provide a clear visual representation for evaluating and comparing the effectiveness of different models in environmental identification tasks. They not only intuitively display the predictive accuracy of classification models across various categories but also reveal the strengths and weaknesses of the models under specific conditions.
The analysis of the confusion matrices indicates that different machine learning models exhibit varied performances in the task of vehicular environment identification under rainy conditions. The ANN model performed admirably with an accuracy rate of 81.9%, while the K-NN and RF models achieved an accuracy of 80% in H-NLOS and U-NLOS environments, demonstrating their effectiveness under specific conditions. However, these two models performed poorly in H-LOS, R-LOS, and U-LOS environments, with accuracy rates below 65%, which may suggest insufficient feature learning in these settings. On the other hand, the GNB and SVM classifiers generally performed poorly across all test environments, which may imply their limited adaptability to complex environment identification tasks.
The proposed convolutional neural network (CNN) model was compared with a series of advanced deep learning models, including ResNet50 [
30], Xception [
31], InceptionV3 [
32], DenseNet201 [
33], and MobileNetV2 [
34]. To ensure fairness in comparison, all models were trained on the same training dataset, and their classification performance was evaluated on the test set. In response to the specific data input requirements of this study, the input layer shape of all models was adjusted accordingly, and the output layer structure was updated to include five categories, accurately matching the types of vehicular environments to be identified.
This approach ensured consistency in experimental setup across different models while also allowing each model to fully utilize its strengths in handling specific classification tasks. Through this comparison, we can more comprehensively assess the performance of the proposed CNN model in vehicular environment identification, as well as its relative advantages compared to other advanced models.
Since the aforementioned classic models are designed to receive two-dimensional (2D) inputs, a 2D channel matrix, rather than a one-dimensional (1D) channel vector, is used as the input feature. The dataset is therefore rearranged from 1D to 2D with the following formula:

$\mathbf{H} = \mathrm{diag}(\mathbf{h})$

where $\mathrm{diag}(\cdot)$ forms a diagonal matrix, and $\mathbf{H}$ and $\mathbf{h}$ represent the channel matrix and the corresponding channel vector, respectively. The coefficients of this matrix and vector are composed of CSI values, with $N$ being 128, reflecting the number of CSI values estimated for each packet.
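The 1D-to-2D rearrangement amounts to placing the CSI vector on the main diagonal of an otherwise zero matrix; the stand-in CSI values below are synthetic.

```python
import numpy as np

N = 128                                              # CSI values per packet
h_vec = np.exp(1j * 2 * np.pi * np.arange(N) / N)    # stand-in CSI vector
H = np.diag(h_vec)                                    # N x N channel matrix
# Only the main diagonal is populated; off-diagonal entries are zero,
# giving the 2D shape expected by image-style models.
```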
Table 4 provides a comparative analysis of the CNN model proposed in this paper with existing classic models in terms of test accuracy and the time required for environmental identification. The data from the table show that, compared to other models, our model has a significant advantage in prediction time. Specifically, the prediction times for ResNet50, Xception, and InceptionV3 are about 700 µs, while DenseNet201 and MobileNetV2 have prediction times of 1417 µs and 349 µs, respectively. In contrast, the prediction time for our study’s model is only 43.82 µs, which is significantly lower than the aforementioned models.
In terms of overall test accuracy, as shown in
Table 4, the proposed model achieved an accuracy rate of 95.7%. This rate is higher than the test accuracies achieved by ResNet50, Xception, InceptionV3, DenseNet201, and MobileNetV2.
In summary, to validate the effectiveness of the proposed CNN method, we constructed a dataset that includes five types of rainy vehicular environments. Through extensive testing at different Signal-to-Noise Ratio (SNR) levels, our CNN model demonstrated outstanding performance in recognizing different rainy environments, with an accuracy rate of up to 95.7%. The proposed CNN model not only shows significant advantages in computational efficiency and identification accuracy but also proves the potential of our method in practical applications, providing a safer and more reliable driving parameter adjustment strategy for intelligent vehicles in rainy conditions.
5.2. Performance Overhead and Reliability
In this research, we focus on the environmental identification capabilities of autonomous vehicles under rainy conditions, particularly in scenarios where rapid response is crucial. The time sensitivity of autonomous driving systems when performing tasks necessitates that prediction times be optimized to the utmost. In such cases, the delay in prediction time is typically strictly limited to below the millisecond level to ensure that vehicles can operate safely and effectively execute tasks under varying and adverse weather conditions.
To meet this requirement, we propose a vehicle environment identification method based on deep learning, which uses Channel State Information (CSI) from vehicle-to-vehicle communication as input. Our CNN model demonstrates high-accuracy environmental identification capabilities within microsecond-level prediction times, far surpassing the limitations of traditional sensing systems under adverse weather conditions. This ensures that even in complex climatic conditions such as rain, intelligent vehicles can quickly and accurately adjust driving parameters, significantly enhancing the safety and reliability of the autonomous driving system. Moreover, the efficient performance of the CNN model means that in resource-constrained situations, it can minimize the consumption of energy and computational resources while ensuring task efficiency.