Article

Real-Time Run-Off-Road Risk Prediction Based on Deep Learning Sequence Forecasting Approach

1 Business School, Shaoxing University, Shaoxing 312000, China
2 Shaoxing Communications Investment Group Co., Ltd., Shaoxing 312000, China
3 Shaoxing Public Transport Group Co., Ltd., Shaoxing 312000, China
4 School of Transportation, Southeast University, Nanjing 211189, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(22), 3456; https://doi.org/10.3390/math12223456
Submission received: 9 September 2024 / Revised: 25 October 2024 / Accepted: 1 November 2024 / Published: 5 November 2024
(This article belongs to the Special Issue Artificial Intelligence and Data Science)

Abstract

Driving risk prediction is crucial for advanced driving technologies, with deep learning approaches leading the way in driving safety analysis. Current driving risk prediction methods typically establish a mapping between driving features and risk statuses. However, status prediction fails to provide detailed risk sequence information, and existing driving safety analyses seldom focus on run-off-road (ROR) risk. This study extracted 660 near-roadside lane-changing samples from the high-D natural driving dataset. The performance of sequence and status prediction for ROR risk was compared across five mainstream deep learning models: LSTM, CNN, LSTM-CNN, CNN-LSTM-MA, and Transformer. The results indicate the following: (1) The deep learning approach effectively predicts ROR risk. The Macro F1 Score of sequence prediction significantly surpasses that of status prediction, with no notable difference in efficiency; (2) Sequence prediction captures risk evolution trends, such as increases, turns, and declines, providing more comprehensive safety information; (3) The presence of surrounding vehicles significantly impacts lane change duration and ROR risk. This study offers new insights into the quantitative research of ROR risk, demonstrating that risk sequence prediction is superior to status prediction in multiple aspects and can provide theoretical support for the development of roadside safety.

1. Introduction

According to data from the World Health Organization, road traffic collisions result in approximately 1.19 million fatalities worldwide each year. Moreover, road traffic injuries are the leading cause of death among children and young adults aged 5–29 [1]. The Roadside Safety Research Program of the Federal Highway Administration indicates that roadway departure crashes account for more than 50 percent of all traffic crash fatalities in the United States [2]. Roadside collisions often result in higher fatalities as vehicles collide with large stationary objects such as guardrails, traffic barriers, and trees [3]. Lane-changing is one of the most common driving behaviors, and run-off-road (ROR) incidents frequently occur during lane changes near the roadside. Therefore, it is crucial to conduct an in-depth analysis and research on the risks associated with near-roadside lane-changing. With the advancement of data-driven models, they have been widely applied in driving safety research. This study will employ deep learning models to predict ROR risks and compare the performance of risk sequence prediction with risk status prediction. Research on predicting ROR risks can advance the theoretical development of Advanced Driving Assistance Systems (ADAS) and driving intervention technologies, thereby reducing the risk of vehicles running off the road.
Research on roadside safety is increasing due to the significant frequency and severe consequences of roadside crashes [4]. Current roadside safety research typically analyzes safety-relevant factors such as road curves, road shoulders, and roadside signals [5]. Ewan et al. revealed that narrower road widths, narrower road shoulders, and sharper horizontal curves increase crash risk [6]. Jiang et al. examined the relationship between road shoulder type and roadside crashes [7]. El Esawey et al. studied the relationship between the placement of roadside utility poles and utility pole collisions, finding that increasing the pole offset provides better safety improvements than increasing pole spacing [8]. Many reports have highlighted that pavement condition is critical to roadside safety [9,10]. Meanwhile, there are also studies focusing on the impact of human factors on roadside crashes [11,12].
While factor analysis enhances roadside safety through macro-policy and infrastructure development, it does not facilitate real-time analysis and prediction of ROR risks. Quantifying driving risks is fundamental for real-time driving safety analysis. Due to the rarity of traffic accidents, Surrogate Safety Measures (SSMs) are widely used in traffic safety research [13,14]. Time To Collision (TTC) is one of the earliest and most widely applied SSMs, initially used to assess the time required for a following vehicle to collide with a leading vehicle in car-following situations [15]. Subsequent studies have expanded the TTC metric from various perspectives, including application scenarios [16], assessment time windows [17], and mathematical formulation [18].
From the perspective of accident consequences, driving risk encompasses both the likelihood of collision and its severity. Shangguan et al. designed a rear-end risk assessment metric that considers both the probability of collision and its severity, primarily using the change in velocity derived from the law of conservation of energy to quantify collision severity [19]. Gabauer et al. also indicated that velocity change is effective for assessing the risk of roadside collisions [20]. Park et al. considered risk exposure and severity levels, designing a composite metric to quantify lane-changing risks based on stopping sight distance [21]. Chan et al. applied the product of velocity squared and the inverse of TTC to calculate collision risk, considering both collision likelihood and severity [22].
Safety evaluation is a crucial component of roadside safety. Before analyzing and predicting ROR risk, it is essential to assess and define it [23]. Previous studies have generally classified ROR risk into several levels using qualitative or quantitative methods. For instance, Cheng et al. categorized the rollover risk of roadside accidents into four categories based on accident outcomes and analyzed influencing factors using a Bayesian network [24]. Fang et al. utilized the inherent safety features of the roadside and the likelihood of vehicle ROR to statically classify roadside environment subject safety into five levels [25]. Long et al. employed the Acceleration Severity Index (ASI) to represent roadside risk levels and used the Fisher optimal segmentation algorithm to divide roadside risk into three categories [26].
Based on the quantification and classification of driving risks, modeling approaches can be used to predict these risks. The essence of real-time driving risk prediction lies in the analysis and forecasting of time series by constructing a mapping relationship between historical driving feature sequences and future driving risks. There are two main prediction methods in this field: statistical algorithms and data-driven algorithms. Traditional statistical algorithms predict future driving risks by capturing the evolutionary trends of historical driving feature sequences. Although they have a solid theoretical foundation, their application is often limited by assumptions and struggles to capture complex nonlinear dynamic features, resulting in suboptimal prediction performance. With the increase in available data and advancements in computer technology, data-driven algorithms have become the mainstream models in traffic safety analysis. Deep learning is a type of data-driven algorithm characterized by its non-parametric nature, which allows it to effectively capture nonlinear relationships among multidimensional variables. Shangguan et al. conducted a comparative analysis of several data-driven algorithms to evaluate their effectiveness in predicting real-time driving safety statuses [27]. Arvin et al. combined Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks to extract driving information from the initial 15 s for predicting crash and near-crash events [28]. Zhang et al. employed several deep learning models to predict the evolution of car-following risks, with results indicating that sequence prediction provides richer safety information compared to status prediction [29].
Despite the widespread application of machine learning and deep learning-based modeling methods in road traffic safety [30,31], and the extensive analysis of factors influencing roadside safety by numerous studies, three principal research gaps persist in this field: (1) ROR incidents can lead to severe consequences, yet current research on them is less extensive compared to car-following and lane-changing; (2) Real-time prediction and prevention of driving risks are key technologies in advanced driving systems, but there is a lack of quantitative and predictive research on ROR risks, and current factor analysis cannot directly support the construction of ADAS; (3) Predicting safety status is the mainstream approach in current driving risk prediction, but a single driving risk status fails to provide specific safety information, and few studies have comprehensively compared the disparities in prediction precision and efficiency between risk sequence prediction and risk status prediction.
In response to the aforementioned research gaps, this study makes three primary contributions: (1) The ROR risk prediction experiment was conducted from the perspective of quantitative analysis. Near-roadside lane change samples were selected from the high-D natural driving dataset, and ROR risk was quantified based on the likelihood and severity of collisions. Subsequently, deep learning prediction techniques were employed to forecast ROR risks. (2) The prediction experiments on driving risk effectively demonstrate that the performance of sequence prediction is superior to that of commonly used status prediction. Five models representing mainstream deep learning prediction techniques were selected to predict ROR risk across different time window combinations. The results revealed that sequence prediction can enhance prediction precision and provide richer safety information without compromising efficiency. (3) An in-depth analysis was conducted on the impact of sample imbalance on prediction performance, the influence of lane-changing scenarios on lane-changing safety and duration, and the prediction of car-following risks.
The subsequent sections of the paper are organized as follows: Section 2 elucidates the methods and models applied in the study. Section 3 introduces the datasets used in this research and the training environment of deep learning models. Section 4 provides a detailed analysis and discussion of the results. Finally, Section 5 concludes this study and outlines future research directions.

2. Methodologies

2.1. Model Formulation

The essence of ROR risk prediction is time series forecasting, which requires establishing a mapping between input and output through neural networks. Let the input feature sequence be denoted as X = {x^(1), x^(2), …, x^(C)}, where C is the number of features, and let O and P denote the lengths of the observation window and the prediction window, respectively. As shown in Figure 1, sequence prediction can be expressed as learning a mapping from X_{t−O+1:t} ∈ ℝ^{C×O} to Y_{t+1:t+P} ∈ ℝ^{P}. When there are K ROR risk statuses, the risk level within the next P time steps can be determined by taking the extreme value of ROR risk within the prediction window, so that the multi-step predicted risk values are mapped to safety levels. Status prediction, in contrast, learns a mapping from X_{t−O+1:t} ∈ ℝ^{C×O} to Y ∈ ℝ^{K}, where Y represents the probabilities of the risk statuses and the highest probability gives the predicted status (marked by the green dashed line in Figure 1).
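To make the two mappings concrete, the following minimal PyTorch sketch (with an illustrative feature count, window lengths, and hidden size, not the authors' exact configuration) shows that sequence prediction and status prediction can share the same encoder and differ only in the output head:

```python
import torch
import torch.nn as nn

C, O, P, K = 26, 10, 10, 4   # features, observation steps, prediction steps, risk statuses (illustrative)

class RiskPredictor(nn.Module):
    """Shared LSTM encoder; the head outputs either P risk values or K status logits."""
    def __init__(self, mode="sequence", hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(input_size=C, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, P if mode == "sequence" else K)

    def forward(self, x):                 # x: (batch, O, C) observed feature window
        _, (h, _) = self.encoder(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])           # (batch, P) risk sequence or (batch, K) status logits

x = torch.randn(32, O, C)
print(RiskPredictor("sequence")(x).shape)   # torch.Size([32, 10])
print(RiskPredictor("status")(x).shape)     # torch.Size([32, 4])
```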

2.2. ROR Risk Quantification

Real-time driving risk assessment typically considers both the potential and severity of a collision, which can be represented by Time To Collision (TTC) and velocity change, respectively. Referencing the criticality index proposed by Chan [22], the quantification formula for ROR risk is given by Equation (1):
$$\text{ROR Risk} = \frac{v_y^2}{TTC} \tag{1}$$
Similar to the TTC itself, the ROR risk index used in this study mainly represents the ROR risk before a collision; if the ROR risk value exceeds a certain threshold, it may indicate a dangerous situation. The term v_y² (m²/s²) represents the severity of a collision. Assuming that the lateral velocity decreases significantly after a vehicle experiences a lateral collision, v_y² can be used to characterize collision severity based on the kinetic energy formula (½mv²); a larger lateral velocity indicates a more severe potential collision. The TTC (s) is the time to collision between the lane-changing vehicle and the roadside and represents the collision likelihood; a smaller TTC value suggests a higher probability of collision and a greater risk level. Therefore, a larger ROR risk value indicates a higher overall risk of the vehicle running off the road.
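As an illustration, the index can be computed as below. This sketch assumes the lateral TTC is obtained as the remaining lateral distance to the roadside divided by the lateral velocity toward it, which is one common way to operationalize Equation (1); vehicles moving away from the roadside are treated as zero-risk.

```python
import numpy as np

def ror_risk(lateral_velocity, distance_to_roadside, eps=1e-6):
    """ROR risk = v_y^2 / TTC (Equation (1)), with TTC = lateral gap / lateral velocity (assumed)."""
    v_y = np.asarray(lateral_velocity, dtype=float)       # m/s, positive toward the roadside
    d = np.asarray(distance_to_roadside, dtype=float)     # m, remaining lateral gap to the roadside
    ttc = np.where(v_y > eps, d / np.maximum(v_y, eps), np.inf)
    return np.where(np.isfinite(ttc), v_y ** 2 / ttc, 0.0)

# Example: 1.2 m/s toward a roadside 3 m away -> TTC = 2.5 s, ROR risk = 0.576
print(ror_risk(1.2, 3.0))
```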

2.3. Risk Status Clustering Algorithm

Based on the quantification of ROR risk, the risk status can be determined using clustering algorithms. Although the KMeans algorithm is widely used in driving safety analysis due to its simplicity and efficiency, its stability can be affected by the random selection of initial cluster centers. To address this issue, Arthur and Vassilvitskii proposed KMeans++, which optimizes the selection of initial cluster centers based on the data distribution, thereby enhancing the effectiveness and stability of the KMeans clustering algorithm [32].
The number of cluster centers needs to be specified before clustering. The optimal number of clusters can be determined by observing changes in clustering error as the number of clusters varies. The Sum of Squared Error (SSE) metric is used to assess the sum of the squared distances from all samples to their corresponding cluster centers (Equation (2)).
$$\mathrm{SSE} = \sum_{i=1}^{k} \sum_{p \in C_i} \left\| p - m_i \right\|^2 \tag{2}$$
where Ci represents the ith cluster, p denotes a sample belonging to cluster Ci, and mi indicates the center of the ith cluster. As the number of clusters increases, the SSE decreases because the within-cluster error shrinks. When the clustering error no longer decreases significantly with additional clusters, the optimal number of clusters is reached; this approach is known as the elbow method [33].
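A brief sketch of this procedure with scikit-learn is given below; the risk values are randomly generated placeholders, since the actual sample data are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Elbow method: fit KMeans++ for k = 1..8 and track the SSE (inertia_), i.e. Equation (2).
risk_values = np.random.lognormal(mean=-2.0, sigma=1.0, size=(5000, 1))  # placeholder data

sse = []
for k in range(1, 9):
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0).fit(risk_values)
    sse.append(km.inertia_)

# Choose the k after which the SSE stops dropping sharply (four statuses in this study).
print([round(s, 1) for s in sse])
```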

2.4. Risk Prediction Models

As mentioned above, constructing models based on deep learning is a mainstream method in the driving risk prediction field. Currently, the application of deep learning models to time series processing can be divided into five main categories: (1) Recurrent Neural Network-based models: these models account for the temporal structure of time series data, encoding feature information at each time step and thereby capturing the evolution patterns and dependencies of the sequence; (2) Convolutional Neural Network-based models: convolution operations can simultaneously capture the interactions of multivariate time series data in both the temporal and feature dimensions and have been widely applied in the time series domain; (3) Model combination-based approaches: these approaches integrate multiple models in series or parallel to leverage the strengths of various models; (4) Attention mechanism-based models: integrating attention mechanisms guides the model to focus on the important parts of the sequence, thereby enhancing prediction accuracy; (5) Transformer-based models: these combine the multi-head attention mechanism with advanced information encoding mechanisms such as positional encoding. This study covers these five mainstream categories, employing LSTM, CNN, LSTM-CNN, CNN-LSTM-MA, and Transformer to comprehensively assess the differences in precision and efficiency between sequence prediction and status prediction.

2.4.1. Long Short-Term Memory (LSTM)

LSTM is a variant of the RNN that enhances the model's ability to capture long-term dependencies in sequences through gating mechanisms, and it also mitigates the vanishing gradient problem to some extent [34]. LSTM primarily consists of three gate structures: the forget gate, the input gate, and the output gate [35]. These gating units are built from activation functions that weigh, at each time step, which parts of the current and historical input information to discard or retain, facilitating the effective integration of long-term and short-term information. After the input information for each time step has been sequentially encoded, the state information from the last time step is fed into a Multilayer Perceptron (MLP) to obtain the outputs.

2.4.2. Convolutional Neural Network (CNN)

CNNs were initially applied to the recognition and classification of grayscale images [36]. Grayscale images possess two dimensions: width and height. Two-dimensional CNNs can effectively capture image information by moving across these two dimensions. Similarly, multivariable time series data also have two dimensions: temporal and feature, making it feasible to apply CNNs to the task of processing driving feature sequences. This study employs a dual-layer two-dimensional convolutional network to extract input feature information. After feature extraction is completed, the features are flattened, and the prediction results are output through MLP.

2.4.3. LSTM-CNN

As previously mentioned, network combinations are a common application method in current deep learning models. This study employs a parallel combination of LSTM and two-dimensional CNN models for ROR risk prediction [37]. As illustrated in Figure 2, LSTM and CNN are initially used to extract driving feature information separately. The information extracted from the two networks is then concatenated to maximize the preservation of feature integrity. Finally, the MLP is used to decode the feature information to obtain the prediction results. In sequence prediction tasks, the output comprises the risk sequence projected for a specified prediction time window. In status prediction tasks, the output delineates the probability of occurrence for each risk status.
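A compact PyTorch sketch of this parallel structure is given below; the layer sizes are illustrative assumptions rather than the trained configuration reported in Section 3.2.

```python
import torch
import torch.nn as nn

class LSTMCNN(nn.Module):
    """Parallel LSTM and 2D-CNN branches over the same (time x feature) window, concatenated, then decoded by an MLP."""
    def __init__(self, n_features=26, obs_len=10, out_dim=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.cnn = nn.Sequential(                        # 2D convolution over (time, feature)
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        cnn_dim = 32 * obs_len * n_features
        self.mlp = nn.Sequential(nn.Linear(hidden + cnn_dim, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, x):                                # x: (batch, obs_len, n_features)
        _, (h, _) = self.lstm(x)                         # LSTM branch: last hidden state
        c = self.cnn(x.unsqueeze(1))                     # CNN branch: add a channel dimension
        return self.mlp(torch.cat([h[-1], c], dim=1))    # concatenate both branches, then decode

print(LSTMCNN()(torch.randn(8, 10, 26)).shape)           # torch.Size([8, 10])
```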

2.4.4. CNN-LSTM-MA

In addition to parallel combinations like the LSTM-CNN model, models can also be concatenated in series and enhanced using techniques such as multi-head attention mechanisms. The serial combination of CNN-LSTM and the application of multi-head attention mechanisms are widely used [38], with the multi-head attention mechanism being a core component of the Transformer [39].
The attention mechanism directs the model to concentrate on different segments of the input. Scaled dot-product attention is one of the fundamental attention mechanisms: it first maps the input variables to distinct spaces through linear projections and then extracts interaction information between the input sequences using dot products and scaling. Multi-head attention extends dot-product attention by incorporating multiple attention heads, allowing the model to learn varied representations in lower-dimensional feature subspaces.
As depicted in Figure 3, the 1DCNN network is first used to extract information from input features, identifying different patterns within the feature information. The extracted features are then fed into an LSTM network, where multi-head attention mechanisms process the LSTM-encoded hidden features. Finally, an MLP is used to output the prediction results.
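The following PyTorch sketch illustrates the serial CNN-LSTM-MA idea with assumed layer sizes (not the authors' configuration): a 1D convolution along the time axis, an LSTM over the convolved features, multi-head self-attention over the LSTM outputs, and an MLP decoder.

```python
import torch
import torch.nn as nn

class CNNLSTMMA(nn.Module):
    """Serial 1D-CNN -> LSTM -> multi-head self-attention -> MLP (illustrative sizes)."""
    def __init__(self, n_features=26, hidden=64, heads=4, out_dim=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=heads, batch_first=True)
        self.mlp = nn.Linear(hidden, out_dim)

    def forward(self, x):                                  # x: (batch, obs_len, n_features)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)   # 1D convolution along the time axis
        h, _ = self.lstm(z)                                # hidden states for every time step
        a, _ = self.attn(h, h, h)                          # self-attention over the LSTM outputs
        return self.mlp(a.mean(dim=1))                     # pool over time, then decode

print(CNNLSTMMA()(torch.randn(8, 10, 26)).shape)           # torch.Size([8, 10])
```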

2.4.5. Transformer

The Transformer is the mainstream architecture currently applied in the field of deep learning, with its core innovations lying in the integration of Positional Encoding (PE), residual connection techniques [40], and multi-head attention mechanism. Considering that this study is focused on short-term prediction, only the encoder from the Transformer is used for encoding driving features. The model structure is shown in Figure 4 [41]. Firstly, a learnable positional encoding matrix is used to encode the positional information of the input sequence. Then, the multi-head attention mechanism is employed to enhance feature information, followed by the residual connection and feedforward neural network. After stacking N modules, pooling operations are performed, and the final prediction results are obtained through MLP.
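A minimal encoder-only sketch in PyTorch, with assumed dimensions, showing the learnable positional encoding, stacked encoder blocks, pooling, and MLP head described above:

```python
import torch
import torch.nn as nn

class TransformerRisk(nn.Module):
    """Encoder-only Transformer with a learnable positional encoding (illustrative sizes)."""
    def __init__(self, n_features=26, obs_len=10, d_model=64, heads=4, layers=2, out_dim=10):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        self.pos = nn.Parameter(torch.zeros(1, obs_len, d_model))     # learnable positional encoding
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=heads,
                                               dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.mlp = nn.Linear(d_model, out_dim)

    def forward(self, x):                           # x: (batch, obs_len, n_features)
        z = self.encoder(self.embed(x) + self.pos)  # N stacked attention/feed-forward blocks
        return self.mlp(z.mean(dim=1))              # pooling over time, then MLP

print(TransformerRisk()(torch.randn(8, 10, 26)).shape)    # torch.Size([8, 10])
```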

3. Dataset and Experiment Setting

3.1. Dataset Description and Sample Selection

The high-D dataset is a widely used natural driving dataset for micro-driving behavior analysis, containing approximately 16.5 h of driving trajectory data on German highways from an aerial perspective [42], with each highway scenario being approximately 420 m long. The high-D dataset will be used in this study to extract near-roadside lane-changing samples for the following three reasons: (1) Large data volume. The high-D dataset records the trajectory data of over 110,000 vehicles. This large sample size enables deep learning models to learn data patterns during the training process, while sufficient validation and test samples enhance the validity of the prediction results; (2) High data quality. Advanced computer vision techniques were applied to extract the trajectories, limiting the position error to 10 cm. The data recording frequency is 25 Hz, which meets the real-time requirements of driving safety analysis; (3) Fixed scenarios. The selected scenarios in the high-D dataset are basic highway segments, where lane-changing behavior is not affected by ramps or changes in the number of lanes.
The high-D dataset contains detailed lane change (LC) information (original lane, target lane, surrounding vehicles, etc.) and has been widely used by researchers to study LC risk and patterns [43]. Near-roadside LC behaviors were extracted from the high-D dataset to evaluate the risk of potential ROR. Drivers generally change lanes to pursue safety or efficiency, and the vehicles surrounding the subject vehicle can influence LC behavior. There are up to four surrounding vehicles (preceding vehicle in the original lane: pre; following vehicle in the original lane: fol; preceding vehicle in the target lane: t_pre; following vehicle in the target lane: t_fol), and the spacing and motion information of these vehicles were considered when analyzing the risk. The velocity, distance, and acceleration of the surrounding vehicles in both the longitudinal and lateral directions were collected, and their differences relative to the subject vehicle were calculated. If no vehicle was present in one of the four positions (pre, fol, t_pre, t_fol), the longitudinal distance was set to 420 m, the lateral distance to 5 m, and the relative velocity and acceleration to 0.
In this study, the extraction of lane-changing samples combines data on the vehicle’s lane position with specific quantitative criteria. As shown in Figure 5, lane-changing duration is determined by setting a threshold for the vehicle’s lateral velocity. If the threshold is set too high, some lane-changing driving data may be lost from the samples. Since there is an inherent disturbance in the lateral speed of manually driven vehicles, setting the threshold too low may include segments unrelated to the lane-changing process. By observing the lane-changing process of multiple samples, we set the lateral speed threshold to 0.01 m/s, which balances the effectiveness and completeness of sample extraction. The LC duration is determined following these three steps:
Step 1: The near roadside LC samples and surrounding vehicles were confirmed according to the official documents of High-D.
Step 2: The LC moment t0 was determined first. Searching forward from t0 until the lateral speed falls below 0.01 m/s gives the end of the LC (tend), and searching backward from t0 until the lateral speed falls below 0.01 m/s gives the beginning of the LC (tbegin). The LC duration is [tbegin, tend] (a sketch of this search is given below).
Step 3: Samples in which a surrounding vehicle drove out of the aerial view during the LC duration were screened out.
Finally, 660 complete near-roadside LC samples were identified. To smooth the data and compress its volume, the data were aggregated at a granularity of 0.2 s by averaging.
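For illustration, the threshold search in Step 2 can be sketched as follows, assuming a per-frame lateral-speed array and a known lane-change frame index; this is a hypothetical helper, not the authors' extraction code.

```python
import numpy as np

def lc_duration(lateral_speed, t0_idx, threshold=0.01):
    """Find [t_begin, t_end] around the lane-change moment t0 (Step 2).

    Assumes `lateral_speed` is the per-frame absolute lateral speed (m/s)
    of the subject vehicle and `t0_idx` is the frame index of the LC moment.
    """
    v = np.abs(np.asarray(lateral_speed, dtype=float))
    begin = t0_idx
    while begin > 0 and v[begin - 1] >= threshold:          # search backward to the LC start
        begin -= 1
    end = t0_idx
    while end < len(v) - 1 and v[end + 1] >= threshold:     # search forward to the LC end
        end += 1
    return begin, end

v_lat = [0.0, 0.005, 0.02, 0.3, 0.5, 0.4, 0.1, 0.02, 0.004]
print(lc_duration(v_lat, t0_idx=4))   # (2, 7)
```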
The KMeans++ algorithm was used to cluster the ROR risk of selected samples. As shown in Figure 6, the clustering error ceases to decrease significantly when the number of clusters exceeds four. Therefore, the number of ROR risk statuses is determined to be four: Safe, Low-risk, Medium-risk, and High-risk. The numerical ranges and proportions of each risk status are presented in Table 1.

3.2. Experiment Setting

Our experiments were carried out on an Intel i7-14700KF CPU and an NVIDIA GeForce RTX 4070 Ti SUPER GPU with 16 GB of memory. The framework was developed using Python 3.11 and PyTorch 2.2.1. The dataset was split into training, validation, and test sets in a ratio of 6:2:2. The data were standardized before being fed into the network, with the scaler parameters determined from the training set.
The setting of parameters significantly impacts model performance. Referring to the empirical ranges of parameter settings from related models [37,38], we defined the hyperparameter space of the models, and grid search was employed to select the primary hyperparameters. For model training parameters, the candidate learning rates were (0.01, 0.005, 0.001). Meanwhile, weight decay and learning rate decay [44] techniques were applied during training to alleviate overfitting. A batch learning strategy was utilized, with batch size options of (128, 256, 512).
In terms of model parameter selection, adjustments were made primarily to the models' main hyperparameters. For the LSTM-related models, the number of layers was chosen from (1, 2), and the hidden layer dimension from (16, 64, 128). The number of attention heads in the multi-head attention mechanism was chosen from (2, 4), and the number of encoder blocks from (2, 3). The kernel size in the 2D convolutional network was uniformly set to 3 × 3, with the channel numbers of the convolutional layers set to 16 and 32, respectively. Similarly, the kernel size of the 1D convolutional network was set to 3, with the channel numbers of the convolutional layers set to 32 and 64, respectively. All models were trained for 100 epochs, and the model that performed best on the validation set during training was saved and used for testing.
Related research indicates that warning the driver 0.5–1 s before a potential traffic crash can effectively prevent the collision [45]. To investigate the impact of different observation and prediction window lengths, the lengths of these windows were set to (0.6 s, 1 s, 2 s). The prediction windows were set considering the gradient of driving intervention. If the model predicts a high driving risk within the next 0.6 or 1 s, emergency and forceful braking control measures can be taken to prevent an accident. A 2-s prediction window is more suitable for milder braking control measures and can also consider providing risk alerts to the driver.

3.3. Evaluation Metrics

To comprehensively assess the models' prediction performance, the Macro F1 Score (MFS) was used to evaluate their effectiveness in ROR risk prediction. The precision and recall for predicting status k are shown in Equations (3) and (4), respectively, where TPk represents the number of correctly predicted samples of status k, and FPk and FNk represent the false positives and false negatives for status k. Considering both precision and recall, the F1 score is calculated as shown in Equation (5). Assuming there are n statuses, the MFS is calculated by Equation (6). A larger MFS value indicates higher average precision and recall across all risk categories, signifying better model performance.
$$\mathrm{Pr}_k = \frac{TP_k}{TP_k + FP_k} \tag{3}$$
$$\mathrm{Re}_k = \frac{TP_k}{TP_k + FN_k} \tag{4}$$
$$F1_k = \frac{2 \cdot \mathrm{Pr}_k \cdot \mathrm{Re}_k}{\mathrm{Pr}_k + \mathrm{Re}_k} \tag{5}$$
$$\mathrm{MFS} = \frac{1}{n} \sum_{i=1}^{n} F1_i \tag{6}$$
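As a quick numerical check (with made-up labels standing in for the four risk statuses), the MFS is simply the unweighted mean of the per-status F1 scores, which matches scikit-learn's macro-averaged F1:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical true and predicted risk statuses (0-3 stand for the four categories).
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 3, 3, 3])
y_pred = np.array([0, 0, 1, 1, 1, 2, 3, 3, 3, 2])

per_status = f1_score(y_true, y_pred, average=None)      # F1_k for each status (Equation (5))
print(per_status, per_status.mean())                     # the mean equals the MFS (Equation (6))
print(f1_score(y_true, y_pred, average="macro"))
```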

4. Results

4.1. Prediction Results

The average MFS for each model under sequence prediction and status prediction is presented in Figure 7. The overall predictive performance of each model in sequence prediction is superior to that in status prediction. With the exception of the Transformer, which demonstrates similar predictive performance in both modes, the average MFS for the other four models in sequence prediction exceeds that of status prediction by 3.13% to 6.54%. This indicates that sequence prediction outperforms status prediction in terms of prediction precision. Among the models, LSTM-CNN achieves the best average predictive performance for sequence prediction, followed by CNN and LSTM. Although the Transformer significantly outperforms the other models in status prediction, its status prediction performance still shows a noticeable gap compared with the sequence prediction of LSTM-CNN, CNN, and LSTM. Despite the application of model combination and the multi-head attention mechanism, CNN-LSTM-MA performed the worst among the five models. This suggests that combining the serial structure of CNN and LSTM with the multi-head attention mechanism did not yield a positive gain in model performance, and it also shows that stacking models and advanced techniques does not necessarily enhance model performance.
The distribution of the MFS across different time window combinations is shown in Figure 8, and several key findings can be drawn:
Deep learning models have achieved excellent results in predicting ROR risk, with sequence prediction generally outperforming status prediction. In predicting ROR risk, the optimal prediction MFS for sequence prediction reached 0.964, 0.934, and 0.858 at 0.6 s, 1 s, and 2 s, respectively (with the CNN model’s observation window set at 2 s). Although mainstream models differ in their feature extraction mechanisms, the MFS for sequence prediction is basically higher than that for status prediction across different time window combinations. Intuitively, the MFS plane formed by sequence prediction results (red) is higher than the plane formed by status prediction (blue). Although the performance of sequence prediction is not consistently superior to that of status prediction across various combinations of time windows in the Transformer model, the performance of sequence prediction for the next 0.6 s is consistently better than that of status prediction within each observation window. Furthermore, under various combinations of time windows, the average performance of sequence prediction surpasses that of status prediction. The difference in MFS tends to widen with the increase in the prediction window, especially in the LSTM-CNN model, where the average MFS for sequence prediction is 11.76% higher than that for status prediction in a 2 s prediction window. This indicates that the sequence prediction modeling approach is superior to status prediction, and establishing a mapping relationship between historical features and future ROR risk sequences can yield more robust prediction results.
The MFS significantly declines as the prediction window increases. When the observation window is set to 2 s and the prediction window to 0.6 s, the MFS for all models in sequence prediction exceeds 0.9. However, when the prediction window is extended to 2 s, the highest MFS observed in sequence prediction is only 0.858 (CNN with a 2 s observation window). A similar trend is observed in status prediction, where the MFS decreases noticeably with the increase in the prediction window. This trend aligns with the general understanding that the difficulty of time series prediction increases with the length of the prediction window.
The predictive performance of models does not necessarily improve with the extension of the observation window. While the MFS generally increases with the observation window when the prediction window is set to 0.6 s, several models exhibit a decline in MFS when the prediction window is 2 s. For instance, when the prediction window is set to 2 s, the CNN-LSTM-MA model has an MFS of 0.793 with a 0.6 s observation window in sequence prediction, but this drops to 0.751 when the observation window is increased to 2 s. This decline is observed in both sequence and status predictions across different models. This suggests that although a longer observation window can provide more risk-related information for predictions, it also increases the complexity of the data and introduces potential noise unrelated to the risk, leading to a deterioration in model performance.

4.2. Real-Time Efficiency of Risk Prediction

In addition to prediction precision, the efficiency of model operation is crucial for real-time risk prediction. To fairly assess the efficiency of different prediction approaches, both sequence and status prediction models were configured with the same structural parameters. With both observation and prediction windows set at 2 s, the number of predictions performed per second was recorded, and the average results for 10 runs are shown in Figure 9. The test results indicate that there is no significant difference in performance between sequence and status predictions. Although sequence prediction theoretically involves a slightly larger number of model parameters due to its more extensive output counts, experimental results indicate that these minor differences do not significantly impact the model’s execution efficiency. In the LSTM, CNN, and CNN-LSTM-MA models, status prediction slightly outperforms sequence prediction in terms of efficiency, whereas sequence prediction is more efficient than status prediction in LSTM-CNN and Transformer. This variation is mainly attributed to the randomness in hardware performance during execution.
The CNN demonstrated the best model efficiency, capable of performing 10,231 predictions per second under sequence prediction. Due to its complex gating structure, the LSTM model is less efficient than the CNN. The efficiency of the LSTM-CNN model is further reduced by the stacking of models, and the multi-head attention mechanism introduces additional model parameters. Although the CNN-LSTM-MA and Transformer models exhibit the lowest efficiency among the five, they are still capable of performing approximately 2000 predictions per second, which is sufficient to meet the real-time requirements of intelligent driving systems.

4.3. The Impact of Imbalanced Dataset

As mentioned above, there is an imbalance among the four safety statuses of samples, with the Safe status accounting for 71.19% of the samples, while the High-risk category comprises only 0.79%. This imbalance can introduce learning biases during model training, as excessive focus on the predominant categories may adversely affect the prediction performance for minority categories. The confusion matrix for predictions by CNN with both observation and prediction windows set at 2 s is shown in Figure 10. Both sequence and status predictions exhibit varying degrees of category accuracy disparity. In sequence prediction, the accuracy for predicting the Low-risk category reaches 96.71%, whereas the accuracy for the minority High-risk category is only 64.71%, a significant gap of 32%. In status prediction, the accuracy for High-risk predictions is merely 50.98%, widening the gap to 36.51% compared to the Low-risk category.
On the one hand, this finding highlights that the imbalance in safety status significantly impacts prediction outcomes. On the other hand, it also demonstrates that sequence prediction can mitigate status imbalance issues compared to status prediction. The sequence-to-sequence mapping approach aligns with the sequential nature of risk sequences, potentially overcoming the learning biases that arise during the mapping from sequences to category probabilities in status prediction.

4.4. Case Study of Sequence Prediction

In addition to accuracy advantages, sequence prediction provides detailed information about the evolution of risk sequences compared to status prediction. The evolution of ROR risk for a specific sample is shown in Figure 11a, which reflects the general trend of risk evolution during a near-roadside lane change. In the initial stage of the lane change, due to the increase in lateral speed and the decrease in the distance to the roadside, the ROR risk gradually increases to Medium-risk. Once the vehicle enters the target lane, the lateral velocity begins to decrease, and the ROR risk starts to decline.
The sequence prediction models' results for the rising, turning, and declining phases of ROR risk are shown in Figure 11b–d. The results indicate that all five prediction models accurately forecasted the continuous upward trend of ROR risk. Except for the LSTM model, which predicted the highest risk level in the upcoming 2 s as Low-risk, the other models predicted the risk status correctly. The turning points of the risk contain two critical pieces of information: the risk degree and the moment of the risk transition. On the one hand, the sequence prediction models accurately predicted the safety status at the risk extremum; on the other hand, they identified the risk turning point approximately 1.2 s in advance (each time step being 0.2 s), providing additional valuable information for driving safety. The sequence prediction models also accurately captured the declining trend of the ROR risk sequence.
In the practical application of risk sequence prediction models, precise driving interventions can be implemented by combining the predicted risk status with the trend of risk evolution. For instance, if the ROR risk rises rapidly to a high-risk level without showing any signs of reversal, this indicates that the driving risk is likely to continue increasing in the future. In this case, lateral emergency braking measures should be taken to prevent the vehicle from running off the road. Conversely, when the model predicts that driving risk will rise to a high-risk level within a certain timeframe and simultaneously indicates a risk reversal, appropriate safety warnings can be issued to the driver. Since the model performs risk predictions adopting a sliding time window approach, it can also dynamically adjust the driving intervention methods by comparing the actual risk values with the predicted values during the prediction process.

4.5. The Impact of Surrounding Vehicles on ROR Risk

To further study the influence of interaction information, the lane change (LC) duration and mean risk under different situations are also discussed. The situations are determined by the presence or absence of the four surrounding vehicles. There are four main LC situations: pre and t_pre present (S1, 211 samples); pre, t_pre, and fol present (S2, 66 samples); pre, t_pre, and t_fol present (S3, 121 samples); and all four vehicles present (S4, 195 samples). The average LC duration and risk for these situations are shown in Figure 12. As the duration and risk do not follow a normal distribution, the Kruskal-Wallis H test was applied to assess the differences among situations. The duration and risk in different situations show significant differences (duration: H = 17.177, p < 0.001; risk: H = 17.103, p < 0.001), and the Bonferroni multiple comparisons reveal significant differences in the duration of S2–S3 (p = 0.012), S2–S1 (p = 0.004), and S4–S1 (p = 0.042), and in the risk of S1–S4 (p < 0.001). The main difference among the situations is the presence of the following vehicle in the original and target lanes. The duration differences for S2–S1 and S2–S3 indicate that a following vehicle in the original lane expedites the LC process, and the duration and risk differences for S1–S4 suggest that more following vehicles increase the risk of near-roadside LC while accelerating the LC process.
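For reference, the group comparison can be sketched as follows with SciPy. The durations below are hypothetical, and pairwise Mann-Whitney tests with a Bonferroni adjustment are used here as a stand-in for the post-hoc comparisons; the study only specifies that a Bonferroni correction was applied, not the exact pairwise test.

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical LC durations (s) per situation; real samples would be much larger.
durations = {"S1": [5.1, 6.3, 7.0], "S2": [4.2, 4.8, 5.0], "S3": [5.9, 6.1, 6.8], "S4": [5.0, 5.4, 5.2]}

h, p = kruskal(*durations.values())           # Kruskal-Wallis H test across the four situations
print(f"H = {h:.3f}, p = {p:.4f}")

pairs = list(combinations(durations, 2))
for a, b in pairs:
    _, p_pair = mannwhitneyu(durations[a], durations[b])
    print(a, b, "adjusted p =", min(p_pair * len(pairs), 1.0))   # Bonferroni adjustment
```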

4.6. Prediction Performance in Car-Following Scenario

To study the applicability of sequence prediction in car-following scenarios, we randomly selected 1000 car-following samples from the high-D dataset to conduct a longitudinal risk prediction study. Considering that car-following on highways may involve high velocity with small velocity differences, traditional car-following risk indicators like Time-To-Collision (TTC) may not adequately reflect the driving risk. We used the Safety Margin (SM) to quantify driving risk during the car-following process [46]. The empirical formula for SM is shown in Equation (7), where Vn represents the velocity of the following vehicle, Vn−1 represents the velocity of the leading vehicle, Dn is the following distance, and g denotes gravitational acceleration (9.8 m/s2). This empirical formula also takes into account comprehensive factors such as driver reaction time and road friction. The dataset division and model training process are consistent with those used for ROR risk prediction.
$$SM_n(t) = 1 - \left[ \frac{0.15\,V_n(t)}{D_n(t)} + \frac{\left(V_n(t) + V_{n-1}(t)\right)\left(V_n(t) - V_{n-1}(t)\right)}{1.5\,g\,D_n(t)} \right] \tag{7}$$
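A small sketch of Equation (7) as reconstructed above; the example values are made up.

```python
def safety_margin(v_follow, v_lead, gap, g=9.8):
    """Safety Margin per Equation (7) as reconstructed here.

    v_follow, v_lead: velocities of the following and leading vehicles (m/s)
    gap: car-following distance D_n (m)
    """
    return 1.0 - (0.15 * v_follow + (v_follow + v_lead) * (v_follow - v_lead) / (1.5 * g)) / gap

# Example: following at 30 m/s behind a 28 m/s leader at a 40 m gap -> SM ≈ 0.69
print(round(safety_margin(30.0, 28.0, 40.0), 3))
```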
The input features for the car-following risk prediction model include 11 feature sequences related to the leading and following vehicles, which can be categorized into three groups: (1) following vehicle-related: velocity, acceleration, and jerk; (2) leading vehicle-related: velocity, acceleration, and jerk; (3) interaction between following and leading vehicles: velocity difference, acceleration difference, jerk difference, car-following distance, and SM. The model’s output is either the SM sequence over a future period (sequence prediction) or the car-following risk level (status prediction).
As shown in Figure 13, the car-following risk is categorized into five statuses using the KMeans++ clustering algorithm and the elbow method. The average MFS of the different models for car-following risk prediction across various prediction windows is illustrated in Figure 14. Notably, the MFS for sequence prediction is consistently higher than that for status prediction across all five models, further indicating that the sequence prediction approach is superior to status prediction. Among the models, LSTM-CNN and Transformer achieved the best performance for sequence prediction and status prediction, with MFS values of 0.984 and 0.974, respectively. Although it employs a basic model structure, LSTM still attained the second-best performance in sequence prediction, demonstrating the advantages of recurrent neural network-based models in addressing time series encoding challenges. The CNN-LSTM-MA model again failed to achieve ideal performance in car-following risk prediction.

5. Conclusions

This study employs deep learning modeling techniques to predict ROR risk and focuses on a comparative analysis between sequence prediction and status prediction. A total of 660 near-roadside LC samples were extracted from the high-D natural driving dataset, and five mainstream deep learning models—LSTM, CNN, LSTM-CNN, CNN-LSTM-MA, and Transformer—were used to predict ROR risk. Although status prediction is the mainstream approach for driving risk prediction, the experimental results demonstrate that sequence prediction surpasses status prediction in terms of prediction accuracy and the safety information provided, without compromising prediction efficiency. This underscores the superiority of the sequence prediction modeling approach.
Although this study has demonstrated that sequence prediction is superior to typical risk status prediction in many aspects, the experimental results also indicate that sequence prediction suffers from category imbalance. Furthermore, the study does not delve into the utilization of ROR risk prediction information. Future research will consider using cutting-edge generative models for oversampling minority samples and establishing a prediction-control-based driving risk prevention framework to quantitatively analyze the applicability of the prediction model across different time windows.

Author Contributions

Conceptualization, Y.C. and H.Z.; methodology, Y.C. and H.Z.; software and visualization, H.Z.; investigation, Q.B. and L.W.; formal analysis, Q.B. and L.W.; writing—original draft preparation, Y.C. and H.Z.; writing—review and editing, H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Natural Science Foundation of China (Grant No.: 52372324).

Data Availability Statement

The data that support the findings of this study are openly available at: https://levelxdata.com/highd-dataset/ (accessed on 31 August 2024), reference number [42].

Conflicts of Interest

Yunteng Chen was employed by Shaoxing Communications Investment Group Co., Ltd., and Lijun Wei was employed by Shaoxing Public Transport Group Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

1. Road Traffic Injuries. Available online: https://www.who.int/zh/news-room/fact-sheets/detail/road-traffic-injuries (accessed on 31 August 2024).
2. Roadway Departure Safety. Available online: https://highways.dot.gov/safety/RwD (accessed on 31 August 2024).
3. Daniello, A.; Gabler, H.C. Fatality risk in motorcycle collisions with roadside objects in the United States. Accid. Anal. Prev. 2011, 43, 1167–1170.
4. Cheng, G.; Cheng, R.; Pei, Y.; Han, J. Research on Highway Roadside Safety. J. Adv. Transp. 2021, 2021, 1–19.
5. McGee, H.W.; Transportation Research Board; National Academies of Sciences, Engineering and Medicine. Practices for Preventing Roadway Departures; Transportation Research Board: Washington, DC, USA, 2018; p. 25165.
6. Ewan, L.; Al-Kaisy, A.; Hossain, F. Safety Effects of Road Geometry and Roadside Features on Low-Volume Roads in Oregon. Transp. Res. Rec. J. Transp. Res. Board 2016, 2580, 47–55.
7. Jiang, X.; Yan, X.; Huang, B.; Richards, S.H. Influence of Curbs on Traffic Crash Frequency on High-Speed Roadways. Traffic Inj. Prev. 2011, 12, 412–421.
8. El Esawey, M.; Sayed, T. Evaluating safety risk of locating above ground utility structures in the highway right-of-way. Accid. Anal. Prev. 2012, 49, 419–428.
9. American Traffic Safety Services Association (ATSSA). Preventing Vehicle Departures from Roadways; American Traffic Safety Services Association (ATSSA): Fredericksburg, VA, USA, 2015.
10. Liu, C.; Subramanian, R. Factors Related to Fatal Single Vehicle Run-Off-Road Crashes; Publication DOT-HS-811-232; U.S. Department of Transportation: Washington, DC, USA, 2009.
11. McLaughlin, S.B.; Hankey, J.M.; Klauer, S.G.; Dingus, T.A. Contributing Factors to Run-Off-Road Crashes and Near-Crashes; Publication DOT-HS-811-079; National Highway Traffic Safety Administration (NHTSA): Washington, DC, USA, 2009.
12. Liu, C.; Ye, T.J. Run-Off-Road Crashes: An On-Scene Perspective; Publication DOT-HS-811-500; U.S. Department of Transportation: Washington, DC, USA, 2011.
13. Vogel, K. A comparison of headway and time to collision as safety indicators. Accid. Anal. Prev. 2003, 35, 427–433.
14. Zhang, Z.; Wei, Z.; Chen, Z.; Pei, M. A real-time collision risk assessment method at tunnel entrance based on safety field theory. Multimodal Transp. 2024, 3, 100139.
15. Hayward, J.C.; Pennsylvania Transportation and Traffic Safety Center. Near Miss Determination Through Use of a Scale of Danger (Traffic Records 384); Highway Research Record: Washington, DC, USA, 1972.
16. Ward, J.R.; Agamennoni, G.; Worrall, S.; Bender, A.; Nebot, E. Extending Time to Collision for probabilistic reasoning in general traffic scenarios. Transp. Res. Part C Emerg. Technol. 2015, 51, 66–82.
17. Minderhoud, M.M.; Bovy, P.H. Extended time-to-collision measures for road traffic safety assessment. Accid. Anal. Prev. 2001, 33, 89–97.
18. Wang, C.; Xiong, F.; Winner, H. Reduction of Uncertainties for Safety Assessment of Automated Driving Under Parallel Simulations. IEEE Trans. Intell. Veh. 2020, 6, 110–120.
19. Shangguan, Q.; Fu, T.; Wang, J.; Jiang, R.; Fang, S. Quantification of Rear-End Crash Risk and Analysis of Its Influencing Factors Based on a New Surrogate Safety Measure. J. Adv. Transp. 2021, 2021, 5551273.
20. Gabauer, D.J.; Gabler, H.C. Comparison of roadside crash injury metrics using event data recorders. Accid. Anal. Prev. 2008, 40, 548–558.
21. Park, H.; Oh, C.; Moon, J.; Kim, S. Development of a lane change risk index using vehicle trajectory data. Accid. Anal. Prev. 2018, 110, 1–8.
22. Chan, C.-Y. Defining Safety Performance Measures of Driver-Assistance Systems for Intersection Left-Turn Conflicts. In Proceedings of the 2006 IEEE Intelligent Vehicles Symposium, Meguro-Ku, Japan, 13–15 June 2006.
23. Han, L.; Du, Z. Status, Challenges, and Trends of International Research on Roadside Safety. Transp. Res. Rec. J. Transp. Res. Board 2024, 03611981241242363.
24. Cheng, G.; Cheng, R.; Zhang, S.; Sun, X. Risk evaluation method for highway roadside accidents. Adv. Mech. Eng. 2019, 11, 1687814018821743.
25. Fang, Y.; Guo, Z.; Li, Z. Assessment model of roadside environment objective safety on two-lane highway. J. Tongji Univ. (Nat. Sci.) 2013, 41, 1025–1030.
26. Long, K.; Li, Y.; Lei, Z.; Zheng, J. Evaluating roadside hazard rating based on acceleration severity index. China J. Highw. Transp. 2013, 26, 143–149.
27. Shangguan, Q.; Fu, T.; Wang, J.; Fang, S.; Fu, L. A proactive lane-changing risk prediction framework considering driving intention recognition and different lane-changing patterns. Accid. Anal. Prev. 2022, 164, 106500.
28. Arvin, R.; Khattak, A.J.; Qi, H. Safety critical event prediction through unified analysis of driver and vehicle volatilities: Application of deep learning methods. Accid. Anal. Prev. 2021, 151, 105949.
29. Zhang, H.; Shen, Y.; Bao, Q.; Qu, Q.; Zhang, R.; Yang, M.; Han, T. Rethinking real-time risk prediction from multi-step time series forecasting on highway car-following scenarios. Accid. Anal. Prev. 2024, 207, 107748.
30. Qu, Q.; Shen, Y.; Yang, M.; Zhang, R.; Zhang, H. Expressway Traffic Incident Detection Using a Deep Learning Approach Based on Spatiotemporal Features with Multilevel Fusion. J. Transp. Eng. Part A Syst. 2024, 150, 04024020.
31. Qu, Q.; Shen, Y.; Yang, M.; Zhang, R. Towards efficient traffic crash detection based on macro and micro data fusion on expressways: A digital twin framework. IET Intell. Transp. Syst. 2024, in press.
32. Arthur, D.; Vassilvitskii, S. k-means++: The Advantages of Careful Seeding; Stanford University: Stanford, CA, USA, 2006.
33. Zhang, Y.; Zou, Y.; Selpi; Zhang, Y.; Wu, L. Spatiotemporal Interaction Pattern Recognition and Risk Evolution Analysis During Lane Changes. IEEE Trans. Intell. Transport. Syst. 2023, 24, 6663–6673.
34. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
35. Understanding LSTM Networks. Available online: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 31 August 2024).
36. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
37. Li, P.; Abdel-Aty, M.; Yuan, J. Real-time crash risk prediction on arterials based on LSTM-CNN. Accid. Anal. Prev. 2020, 135, 105371.
38. Gao, K.; Li, X.; Hu, L.; Chen, B.; Du, R. Lane change intention prediction of CNN-LSTM based on multi-head attention. J. Mech. Eng. 2022, 58, 369.
39. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. arXiv 2017, arXiv:1706.03762.
40. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
41. Guo, H.; Keyvan-Ekbatani, M.; Xie, K. Lane change detection and prediction using real-world connected vehicle data. Transp. Res. Part C Emerg. Technol. 2022, 142.
42. Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125.
43. Zhang, Y.; Chen, Y.; Gu, X.; Sze, N.; Huang, J. A proactive crash risk prediction framework for lane-changing behavior incorporating individual driving styles. Accid. Anal. Prev. 2023, 188, 107072.
44. Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
45. National Transportation Safety Board. Special Investigation Report-Highway Vehicle and Infrastructure Based Technology for the Prevention of Rear-End Collisions; NTSB Number SIR-01; National Transportation Safety Board: Washington, DC, USA, 2001.
46. Lu, G.; Cheng, B.; Lin, Q.; Wang, Y. Quantitative indicator of homeostatic risk perception in car following. Saf. Sci. 2012, 50, 1898–1905.
Figure 1. Description of the sequence prediction and status prediction.
Figure 2. Structure of LSTM-CNN Network.
Figure 3. Structure of CNN-LSTM-MA Network.
Figure 4. Structure of Transformer.
Figure 5. The determination of LC sample.
Figure 6. SSE changes with the number of clusters.
Figure 7. Average MFS of the models.
Figure 8. MFS comparison between sequence prediction and status prediction among the models: (a) LSTM; (b) CNN; (c) LSTM-CNN; (d) CNN-LSTM-MA; (e) Transformer.
Figure 9. Prediction efficiency of the models.
Figure 10. Confusion matrix of CNN under 2 s observation and 2 s prediction window: (a) sequence prediction; (b) status prediction.
Figure 11. Sequence prediction case: (a) sequence prediction case; (b) part 1 prediction; (c) part 2 prediction; (d) part 3 prediction.
Figure 12. Average duration and risk of different situations.
Figure 13. SSE changes in car-following samples clustering.
Figure 14. Average MFS of car-following risk prediction.
Table 1. Numerical range of driving risk status.

Risk Status | Numerical Range | Percentage
Safe | ROR Risk ≤ 0.1444 | 71.19%
Low-risk | 0.1444 < ROR Risk ≤ 0.4303 | 20.72%
Medium-risk | 0.4303 < ROR Risk ≤ 1.0009 | 7.30%
High-risk | ROR Risk > 1.0009 | 0.79%
