Article

Real-Time Trajectory Prediction Method for Intelligent Connected Vehicles in Urban Intersection Scenarios

1 Beijing Key Lab of Urban Intelligent Traffic Control Technology, North China University of Technology, Beijing 100144, China
2 Key Laboratory of Operation Safety Technology on Transport Vehicles, Research Institute of Highway Ministry of Transport, Beijing 100088, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(6), 2950; https://doi.org/10.3390/s23062950
Submission received: 2 February 2023 / Revised: 27 February 2023 / Accepted: 6 March 2023 / Published: 8 March 2023
(This article belongs to the Special Issue Intelligent Perception for Autonomous Driving in Specific Areas)

Abstract

Intelligent connected vehicles (ICVs) play an important role in improving the intelligence of transportation systems, and improving the trajectory prediction capability of ICVs benefits both traffic efficiency and safety. In this paper, a real-time trajectory prediction method based on vehicle-to-everything (V2X) communication is proposed for ICVs to improve the accuracy of their trajectory prediction. Firstly, a Gaussian mixture probability hypothesis density (GM-PHD) model is applied to construct a multidimensional dataset of ICV states. Secondly, the vehicular microscopic data with more dimensions, output by the GM-PHD model, are adopted as the input of the LSTM to ensure the consistency of the prediction results. Then, the signal light factor and the Q-Learning algorithm are applied to improve the LSTM model, adding features in the spatial dimension to complement the temporal features used in the LSTM; compared with previous models, more consideration is given to the dynamic spatial environment. Finally, an intersection on Fushi Road in Shijingshan District, Beijing, was selected as the field test scenario. The experimental results show that the GM-PHD model achieved an average perception error of 0.1181 m, a 44.05% reduction compared to the LiDAR-based model, while the average displacement error of the proposed prediction model reached 0.501 m, a 29.43% reduction compared to the social LSTM model under the average displacement error (ADE) metric. The proposed method can provide data support and an effective theoretical basis for decision systems to improve traffic safety.

1. Introduction

With the rapid development of 5G communication and intelligent connected vehicles (ICVs), the trajectory prediction of ICVs under a vehicle-to-everything (V2X) [1] system has become an important technology for improving the service level of ICVs [2,3,4]. Meanwhile, because transportation requires high levels of efficiency and safety, both the accuracy and the latency of trajectory prediction methods need further improvement [5,6,7].
The collision risk between vehicles at urban intersections can be reduced effectively by predicting the trajectories of ICVs [8,9]. The first step of trajectory prediction is target detection and tracking, which provides reliable, microscopic, multidimensional data for the real-time prediction of ICV trajectories; the detection methods are mainly based on multi-sensor data fusion (MSDF) technology. As deep learning technologies are applied more widely to multilayer data coupling, vehicle perception technologies based on camera and light detection and ranging (LiDAR) sensors are also developing rapidly [10,11,12]. Jie et al. [13] proposed an optimal attribute fusion algorithm for target detection and tracking based on a Gaussian mixture probability hypothesis density (GM-PHD) filter, which could output stable classification information and high-accuracy positioning and tracking information for the targets.
Perception methods for vehicles can reduce the rate of accidents at an intersection, and improvements in the performance of control algorithms also require real-time and accurate prediction data [14,15]. Building on perception methods, trajectory prediction has gradually become a research focus. Schreier et al. [16] used an integrated Bayesian approach with a Monte Carlo algorithm to achieve trajectory prediction over longer time horizons. Combining an unscented Kalman filter (UKF) with a dynamic Bayesian network, Xie et al. [17] proposed interactive multiple model trajectory prediction (IMMTP) methods to predict vehicle trajectories accurately in specified scenarios. In recent years, vehicle trajectory prediction methods based on neural networks with autonomous learning capabilities have become a research hotspot and are increasingly applied to ICV trajectory prediction. Cui et al. [18] proposed a multimodal trajectory prediction method for autonomous driving based on deep convolutional neural networks (CNNs), which encoded the vehicle's environment as a raster image and fed it into the CNN to output the predicted trajectory of the vehicle. Luo et al. [19] proposed a single-vehicle trajectory prediction method based on CNNs to extract the motion features of vehicles from point cloud data, and then added a new convolutional layer to predict the vehicle trajectories. Since a single neural network cannot satisfy the requirements of vehicle trajectory prediction [20], many scholars have focused on hybrid neural networks. Considering the temporal features of trajectories in roadway scenarios, Qin et al. [21] proposed a Q-LSTM model to reduce collisions, obtaining better prediction performance by optimizing the LSTM network parameters. With the widespread application of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies [22,23,24], the accuracy of trajectory prediction in urban scenarios has been greatly improved by combining the advantages of sensor fusion technologies. Zyner et al. [25] proposed a trajectory prediction method based on multimodal probabilistic solutions, which combined recurrent neural networks (RNNs) with mixture density networks (MDNs) to predict vehicle trajectories with high accuracy. Schreiber et al. [26] proposed a method to fuse RNNs with LiDAR grids, using the top view as the RNN input and achieving better predictions by optimizing the weighted parameters of the network. Considering the mobility, interaction, and similarity of vehicles, as well as the gradient explosion and vanishing problems that arise when long sequence data are used in training, Ma et al. [27] proposed a real-time trajectory prediction model based on long short-term memory (LSTM) to refine the predicted vehicle trajectory types. Ji et al. [28] proposed a vehicle trajectory prediction method for the forced lane changes of vehicles in a weaving area, which considered the multimodal characteristics of vehicle motion; the experimental results showed higher prediction accuracy for the lane changes of autonomous vehicles than traditional model-based methods.
In summary, existing ICV technology applications have achieved rich research results in target detection, tracking, and trajectory prediction. However, how to combine microscopic V2X driving dynamic data and multisource environment data to further improve vehicle positioning accuracy and real-time trajectory prediction capabilities in urban intersection scenarios still needs to be studied. The key contributions of this paper are summarized as follows:
  • We designed a real-time trajectory prediction method for ICVs, which combines the advantages of the Q-Learning algorithm and the LSTM network with more consideration of spatiotemporal characteristics. We utilized the GM-PHD model to fuse the multi-sensor data output from the camera, LiDAR, V2X unit, and traffic signal controller. Therefore, we not only enhanced the positioning capability but also improved the trajectory prediction capability of the ICVs;
  • We increased the dimensionality of the input of an improved LSTM model by using microscopic data from V2X communication, such as speed, acceleration, and traffic light timing data. Meanwhile, the signal light factor was considered in the improved LSTM model, so the proposed trajectory prediction method performs better at signal-controlled intersections;
  • Different from most previous research on vehicle trajectory prediction, we constructed an intelligent roadside unit for perceiving the data states of the ICVs, such as latitude, longitude, altitude, and acceleration, from which the trajectories of the ICVs could be predicted. Meanwhile, a practical urban intersection was selected for testing and evaluating the performance of the proposed model, obtaining a more credible result than simulation.
The remainder of this paper is organized as follows. In Section 2, combined with V2X communication, an MSDF model based on GM-PHD and an improved LSTM model based on Q-Learning are presented. In Section 3, the experimental results of the proposed model are demonstrated and analyzed. Finally, in Section 4, the conclusions of this paper, along with aspects of future work, are presented.

2. Real-Time Trajectory Prediction Method for Intelligent Connected Vehicles

In this section, we present a real-time trajectory prediction method based on V2X communication, which is shown in Figure 1. Multisource data obtained by the camera, LiDAR, V2X unit, and traffic signal controller are used as the input of the GM-PHD model to achieve ICV perception. Combined with the preprocessed historical traffic state data, the spatial-temporal trajectory information of the ICVs is obtained via an improved LSTM model.
The proposed method includes two parts: (1) a vehicle perception model based on GM-PHD and (2) a vehicle trajectory prediction model based on an improved LSTM. Based on GM-PHD theory, we collected the ICV state data, which were applied to improve the LSTM model. Q-Learning was then added to the LSTM to realize the real-time trajectory prediction of the ICVs.

2.1. Vehicle Perception Model Based on GM-PHD

The perception model has two parts: (1) data preprocessing and (2) the GM-PHD model. The specific processing is shown in Figure 2. GM-PHD is a multiple object tracking (MOT) model that can fuse multi-sensor data and adapt to situations with varying numbers of ICVs. The processing consists of: (1) data preprocessing; (2) modeling; (3) initialization; and (4) state prediction and processing.
(1) Data preprocessing
The image data were processed by the YOLOv5 algorithm to obtain information on the ICV states. Considering that the point cloud contains a large amount of data, the VoxelGrid [29] filtering algorithm was selected to reduce the data load. Then, the target-level perception data of the multiple sensors were transformed into a global coordinate system by perspective-n-point (PnP) and camera calibration. The timestamps of the sensory data were aligned by linear interpolation, and the image data were matched with the V2X communication data by license plate number. In addition, the global nearest neighbor (GNN) algorithm was applied to fuse the data of the camera, LiDAR, and V2X, and the states of the ICVs were output.
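As an illustration of the preprocessing steps described above, the following Python sketch aligns sensor timestamps by linear interpolation and associates detections between two sensors in a GNN fashion via optimal assignment. The function names, the 2 m gating distance, and the data layout are assumptions made for this example, not details from the original implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_timestamps(track_times, track_xy, target_times):
    """Interpolate a sensor track (track_times, track_xy) onto common timestamps.
    track_xy has shape (len(track_times), 2) and holds [x, y] in the global frame."""
    x = np.interp(target_times, track_times, track_xy[:, 0])
    y = np.interp(target_times, track_times, track_xy[:, 1])
    return np.stack([x, y], axis=1)

def gnn_associate(detections_a, detections_b, gate=2.0):
    """Global-nearest-neighbour association between two detection sets,
    each an (N, 2) array of [x, y]; returns index pairs within the gating distance."""
    cost = np.linalg.norm(detections_a[:, None, :] - detections_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)          # minimum-cost assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```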
(2) The modeling of ICVs
The geodetic coordinate system was selected as the reference for the ICVs, with the x-axis along the road direction and the y-axis perpendicular to the road direction. Measurement data [x, y, v_x, v_y, a_x, a_y, δ], which were acquired by V2X communication, were added to improve the tracking accuracy. We define $N_{obj}(k)$ as the number of ICVs at time k and $X_k$ as the set of ICV states, $X_k = \{x_{1,k}, x_{2,k}, \ldots, x_{i,k}, \ldots, x_{N_{obj}(k),k}\}$, where the state vector $x_{i,k}$ at time k consists of the position, velocity, and acceleration. The definition is shown in Equation (1), and the updating equation is defined in Equation (2).
$$x_{i,k} = \begin{bmatrix} x & y & v_x & v_y \end{bmatrix}^T, \quad i \in N_{obj}(k) \qquad (1)$$
$$x_{i,k+1} = F_k x_{i,k} + \varepsilon_k \qquad (2)$$
where [x, y] indicates the vector of the vehicle position and [v_x, v_y] indicates the vector of the vehicle speed. $\varepsilon_k$ represents Gaussian white process noise, whose covariance follows the normal distribution N(·, Q), and $F_k$ indicates the state transition matrix.
The number of perceived ICVs at time k is defined as $N_s(k)$; then, all of the observed ICVs at the intersection can be represented by the measurement data set $Z_k = \{z_{1,k}, z_{2,k}, \ldots, z_{i,k}, \ldots, z_{N_s(k),k}\}$. The observed vector of the state of vehicle i at time k is defined as $z_{i,k}$, which contains perturbation, as shown in Equation (3). The observation equation of the sensors is shown in Equation (4).
$$z_{i,k} = \begin{bmatrix} x & y & v_x & v_y \end{bmatrix}^T, \quad i \in N_s(k) \qquad (3)$$
$$z_{i,k} = H_k x_{i,k} + \varsigma_k \qquad (4)$$
where $H_k$ indicates the observation matrix of the linear system, and $\varsigma_k$ indicates the Gaussian white noise observed by the sensor, which follows the distribution N(·, R).
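For concreteness, a minimal sketch of the state-space model in Equations (1)–(4) is given below, assuming a constant-velocity motion model with the 0.1 s update period listed in Table 3; the noise covariance values are illustrative assumptions only.

```python
import numpy as np

DT = 0.1  # update period of the state equation, matching Table 3

# State x = [x, y, vx, vy]^T; constant-velocity transition matrix F_k (Equation (2))
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

# Observation matrix H_k (Equation (4)): the measurement also contains position and velocity
H = np.eye(4)

# Illustrative noise covariances (assumed values, not taken from the paper)
Q = 0.01 * np.eye(4)   # process noise covariance of epsilon_k
R = 0.05 * np.eye(4)   # measurement noise covariance of varsigma_k

def predict_state(x):
    """One-step state prediction, the noise-free part of Equation (2)."""
    return F @ x
```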
(3) Initialization of the GM-PHD parameters
The ICVs and potential ICVs are represented by Gaussian components {w, m, P, ξ, n}, which denote the weights, the mean states, the covariance matrix, the number of Gaussian components, and the classification based on the GM-PHD [13] algorithm.
(4) ICV state prediction and processing
The Kalman filter was applied within the GM-PHD algorithm to predict the Gaussian components, as shown in Equations (5)–(9). In the update processing, the weights are updated using the observed ICV states, based on the current weight w, the detection probability, and the Mahalanobis distance, as shown in Equation (10). Then, the Gaussian components are updated to obtain the new Gaussian components.
$$v_{k-1}(x) = \sum_{i=1}^{J_{k-1}} w_{k-1}^i N\left(x; m_{k-1}^i, P_{k-1}^i\right) \qquad (5)$$
$$\gamma_k(x) = \sum_{i=1}^{J_{\gamma,k}} w_{\gamma,k}^i N\left(x; m_{\gamma,k}^i, P_{\gamma,k}^i\right) \qquad (6)$$
$$w_{k|k-1}^i = w_{k-1}^i \qquad (7)$$
$$m_{k|k-1}^i = F_{k-1} m_{k-1}^i \qquad (8)$$
$$P_{k|k-1}^i = Q_{k-1} + F_{k-1} P_{k-1}^i F_{k-1}^T \qquad (9)$$
$$v_k(x) = \left(1 - P_{D,k}\right) v_{k|k-1}(x) + \sum_{z \in Z_k} v_{D,k}(x; z) \qquad (10)$$
where $v_{k-1}$ indicates the intensity function of the ICVs at time k−1, $J_{k-1}$ indicates the number of Gaussian components, and $N(x; m_{k-1}^i, P_{k-1}^i)$ indicates the distribution of the i-th Gaussian component. $w_{k-1}^i$, $m_{k-1}^i$, and $P_{k-1}^i$ indicate the weight, mean, and covariance matrix of the Gaussian component distribution; $F_k$ indicates the state transition matrix; $P_{\gamma,k}^i$ indicates the covariance matrix of the birth intensity near the peak $m_{\gamma,k}^i$; $w_{\gamma,k}^i$ indicates the weight of the newborn ICVs; $P_{D,k}$ indicates the detection probability of the vehicle; $v_{D,k}$ indicates the posterior density of the detected ICVs; $(1 - P_{D,k}) v_{k|k-1}(x)$ indicates the intensity of the undetected ICVs; $\sum_{z \in Z_k} v_{D,k}(x; z)$ indicates the intensity of the ICVs detected by the sensors; and $\gamma_k(x)$ indicates the intensity of newborn ICVs at the intersection.
Moreover, a large amount of computational resources can be consumed in complex scenarios with background noise, interference, and many measurements. Therefore, we adopt the method introduced by Lindenmaier et al. [30] to prune the Gaussian components, from which the accurate ICV data states at the intersection were obtained. The pseudocode of the GM-PHD algorithm is shown in Table 1.
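The following sketch illustrates the prediction and pruning steps (Equations (5)–(9) and the pruning cited from [30]) for a list of Gaussian components; the survival probability and pruning thresholds are assumed values chosen for illustration.

```python
import numpy as np

def phd_predict(components, F, Q, p_survival=0.99, birth_components=()):
    """GM-PHD prediction (Equations (5)-(9)): propagate each surviving Gaussian
    component through the motion model and append the birth components gamma_k.
    `components` is a list of (weight w, mean m, covariance P) tuples."""
    predicted = [(p_survival * w, F @ m, Q + F @ P @ F.T) for (w, m, P) in components]
    return predicted + list(birth_components)

def prune(components, weight_threshold=1e-4, max_components=100):
    """Simplified pruning in the spirit of [30]: drop low-weight components
    and cap the total number (thresholds here are illustrative assumptions)."""
    kept = [c for c in components if c[0] > weight_threshold]
    kept.sort(key=lambda c: c[0], reverse=True)
    return kept[:max_components]
```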

2.2. Vehicle Trajectory Prediction Model Based on Improved LSTM

Combined with the ICV states output by the improved GM-PHD and the signal light states, we applied graph modeling and an encoding unit before the LSTM. The features of the V2X communication data, which can be acquired under connected scenarios, are compressed to unify the feature dimensions. Then, considering the positional relationships between vehicles, the Q-Learning algorithm was selected to extract the features of the spatial dimension, and the LSTM was selected to extract the features of the temporal dimension. After merging and decoding, the trajectories of the ICVs can be predicted from these features. The structure of the improved LSTM model is shown in Figure 3.

2.2.1. Graph Modeling and Features Encoding for Improved LSTM

The number of ICVs is defined as N, and each ICV is defined as a node of the graph. The node feature matrix X consists of the position coordinates (x, y), velocity v, acceleration a, heading angle φ, body length L, body width W, and signal light factor TL, as shown in Equation (11). A fixed coordinate system was selected to unify the coordinates: the x-axis is defined along the road direction, the y-axis is perpendicular to the x-axis, and the coordinate system obeys the right-handed rule.
$$X = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_i \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} x_1 & y_1 & v_1 & a_1 & \varphi_1 & L_1 & W_1 & TL_1 \\ x_2 & y_2 & v_2 & a_2 & \varphi_2 & L_2 & W_2 & TL_2 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_i & y_i & v_i & a_i & \varphi_i & L_i & W_i & TL_i \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_n & y_n & v_n & a_n & \varphi_n & L_n & W_n & TL_n \end{bmatrix} \qquad (11)$$
where TL indicates the signal light factor, defined as the remaining red-light time when the ICV arrives at the next crosswalk while maintaining a constant velocity. The parameters [x_i, y_i, v_i, a_i, φ_i, L_i, W_i] (i = 1, 2, 3, …, n) of matrix X can be obtained from the V2X fusion perception trajectory information.
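A minimal reading of the signal light factor defined above can be sketched as follows; the function name and the zero value returned when the light is already green on arrival are assumptions made for this example.

```python
def signal_light_factor(distance_to_crosswalk, speed, remaining_red_time):
    """Signal light factor TL: the remaining red time when the ICV arrives at the
    next crosswalk while maintaining a constant velocity."""
    if speed <= 0.0:
        return remaining_red_time              # a stopped vehicle sees the full remaining red
    arrival_time = distance_to_crosswalk / speed
    return max(0.0, remaining_red_time - arrival_time)
```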
The adjacency matrix G of the graph is shown as Equations (12) and (13).
$$G = \begin{bmatrix} g_{11} & g_{12} & \cdots & g_{1n} \\ g_{21} & g_{22} & \cdots & g_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{bmatrix} \qquad (12)$$
$$g_{ij} = \begin{cases} D(i, j), & i \ne j \\ 0, & i = j \end{cases}, \quad D(i, j) = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2} \qquad (13)$$
where gij indicates the Euclidean distance between the vehicles. The heading angle of the ICVs can be directly obtained from the V2X perception data.
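The adjacency matrix of Equations (12) and (13) can be computed directly from the node positions; the sketch below is a straightforward NumPy rendering with illustrative coordinates.

```python
import numpy as np

def adjacency_matrix(positions):
    """Adjacency matrix G (Equations (12) and (13)): pairwise Euclidean distances
    between ICVs, with zeros on the diagonal. `positions` is an (n, 2) array of [x_i, y_i]."""
    diff = positions[:, None, :] - positions[None, :, :]
    G = np.linalg.norm(diff, axis=2)
    np.fill_diagonal(G, 0.0)
    return G

# Example with three ICVs (coordinates are illustrative)
pos = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
print(adjacency_matrix(pos))   # G[0, 1] == 5.0, etc.
```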
Both the input and output trajectory prediction data of the ICVs are shown in Equation (14).
$$P_r = \left[ (x_t, y_t), (x_{t-1}, y_{t-1}), \ldots, (x_{t-\alpha_{in}}, y_{t-\alpha_{in}}) \right], \quad P_f = \Lambda(P_r) = \left[ (x_t, y_t), (x_{t+1}, y_{t+1}), \ldots, (x_{t+\beta_{out}}, y_{t+\beta_{out}}) \right] \qquad (14)$$
where Λ indicates the mapping of the historical trajectory space to the prediction trajectory space, αin indicates the number of historical trajectory points, and βout indicates the number of predicted trajectory points at time t.

2.2.2. Prediction of ICVs Trajectory Based on the LSTM Model

In the time dimension, LSTM (with a deep structure) has the memory unit for storing historical time-series information, and the structure of the LSTM model is shown in Figure 4.
In Figure 4, the vehicle features, lane features, and signal timing information are adopted as the input of the LSTM model. The input gates, forget gates, and output gates are provided by the model as the constraint control of the ICVs. Moreover, parts of the trajectory features can be forgotten by the forget gates, and the new features obtained by the sigmoid function σ and the hyperbolic tangent function tanh are added to the LSTM in place of the trajectory features discarded by the forget gates, as shown in Equations (15) and (16).
$$\sigma(x) = \frac{1}{1 + e^{-x}} \qquad (15)$$
$$\tanh(x) = \frac{2}{1 + e^{-2x}} - 1 \qquad (16)$$
The calculation process of LSTM is summarized as follows:
$$f_t = \sigma\left(W_{xf} x_t + W_{hf} h_{t-1} + b_f\right) \qquad (17)$$
$$i_t = \sigma\left(W_{xi} x_t + W_{hi} h_{t-1} + b_i\right) \qquad (18)$$
$$o_t = \sigma\left(W_{xo} x_t + W_{ho} h_{t-1} + b_o\right) \qquad (19)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh\left(W_{xc} x_t + W_{hc} h_{t-1} + b_c\right) \qquad (20)$$
$$h_t = o_t \odot \tanh(c_t) \qquad (21)$$
where $[W_{xc}, W_{xo}, W_{xi}, W_{xf}]^T$ indicates the weight matrices of the vehicle features, $[W_{hc}, W_{ho}, W_{hi}, W_{hf}]^T$ indicates the weight matrices of the hidden layer, $x_t$ indicates the input value of the node features of the ICVs at time t, $[b_c, b_o, b_i, b_f]^T$ indicates the offset vectors, and $h_{t-1}$ indicates the output value of the vehicle trajectory sequence at time t−1. In Equation (17), $f_t$ ($f_t \in [0, 1]$) indicates the state of the forget gate; in Equation (18), $i_t$ ($i_t \in [0, 1]$) indicates the state of the input gate; in Equation (19), $o_t$ ($o_t \in [0, 1]$) indicates the state of the output gate; in Equation (20), $c_t$ indicates the cell state; and in Equation (21), $h_t$ indicates the output of the LSTM.
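The gate equations above map directly to a single LSTM step; the sketch below is a plain NumPy rendering of Equations (17)–(21), with the weight containers organized as dictionaries purely for readability (an assumption of this example, not of the paper).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))                  # Equation (15)

def lstm_cell(x_t, h_prev, c_prev, W_x, W_h, b):
    """Single LSTM step implementing Equations (17)-(21).
    W_x, W_h, b are dicts keyed by gate name ('f', 'i', 'o', 'c')."""
    f_t = sigmoid(W_x['f'] @ x_t + W_h['f'] @ h_prev + b['f'])   # forget gate, Eq. (17)
    i_t = sigmoid(W_x['i'] @ x_t + W_h['i'] @ h_prev + b['i'])   # input gate,  Eq. (18)
    o_t = sigmoid(W_x['o'] @ x_t + W_h['o'] @ h_prev + b['o'])   # output gate, Eq. (19)
    c_t = f_t * c_prev + i_t * np.tanh(W_x['c'] @ x_t + W_h['c'] @ h_prev + b['c'])  # Eq. (20)
    h_t = o_t * np.tanh(c_t)                                     # Eq. (21)
    return h_t, c_t
```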

2.2.3. Improved LSTM Based on Q-Learning

Combined with the spatial distribution features of the ICVs, the Q-Learning algorithm was selected in this section to optimize the LSTM model. Q-Learning is a value-based reinforcement learning algorithm; one of its key quantities, Q(s, m), denotes the expected benefit that can be obtained by taking action m ∈ M, and the corresponding reward is fed back according to the action set M of the ICVs. The optimal route, which is stored in the Q-table, can be selected to obtain the action with the maximum benefit. The structure of Q-Learning is shown in Figure 5.
The Q-Learning algorithm can be integrated with the LSTM model to predict the ICV trajectory accurately. Meanwhile, the road is coded in a grid pattern, and each road grid is defined as a road node with a red node number in Figure 5. The processing of the algorithm is as follows:
Step 1: Initialize the action value function Q(s, m);
Step 2: A new action m is selected by the ICV according to the Q-greedyUCB policy [31] and executed;
Step 3: Reward r is received by the ICV, and a new state s + 1 is reached;
Step 4: Update the $Q^*(s, m)$ function;
Step 5: Repeat Steps 2–4 until the ICV reaches the expected state;
Step 6: Output the last generated path scheme of the ICV.
The update of the $Q^*(s, m)$ function is shown in Equation (22).
$$Q^*(s, m) = (1 - \mu) Q(s, m) + \mu \left[ r + \gamma \max_{m+1 \in M} Q(s+1, m+1) \right] \qquad (22)$$
where μ (μ ∈ [0, 1]) indicates the learning rate of the Q-Learning algorithm, γ (γ ∈ [0, 1]) indicates the discount factor, which makes the algorithm pay more attention to the current or future reward, Q(s, m) indicates the current reward under the current state for the current action, and $Q^*(s, m)$ indicates the desired maximum reward obtained by the ICVs.
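A minimal tabular sketch of the update rule in Equation (22) is shown below; the learning rate, discount factor, table size, and reward value are illustrative assumptions, and the exploration policy (Q-greedyUCB [31]) is not reproduced here.

```python
import numpy as np

def q_update(Q, s, m, reward, s_next, mu=0.1, gamma=0.9):
    """Tabular Q-Learning update implementing Equation (22).
    Q is indexed by (state, action); mu is the learning rate, gamma the discount factor."""
    Q[s, m] = (1 - mu) * Q[s, m] + mu * (reward + gamma * np.max(Q[s_next]))
    return Q

# Illustrative table over road-grid nodes, in the spirit of Equation (23);
# the reward follows Table 2 (acceleration: 2, constant: 1, deceleration: -1).
Q_table = np.zeros((7, 7))
Q_table = q_update(Q_table, s=1, m=3, reward=1.0, s_next=3)
```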
Generally, the ICVs may have five actions (straight ahead, left lane change, right lane change, left turn, and right turn) at an intersection, as shown in Figure 6.
After action m is executed, if the ICV cannot reach the target grid, the Q-value is set to 0; otherwise, the Q-value configurations are shown in Table 2. In addition, the Q-table of the ICVs at initial time t is shown in Equation (23).
$$Q = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (23)$$
The route with less time cost is defined as a better scheme for ICVs. In this section, the Q-greedyUCB algorithm [31] is selected as the action policy in the Q-Learning algorithm. In the processing of LSTM model training, five driving behaviors (straight ahead, left lane change, right lane change, left turn, and right turn) are considered to achieve trajectory prediction. The trajectories of the ICVs at the intersection are shown in Figure 7.
The weight matrix and offset vector of the vehicle features are obtained by training the LSTM model, and the loss function of the LSTM is shown in Equation (24).
$$J_1 = -\sum_{t=1}^{T} \ln P_\tau\left(\upsilon(t) \mid \upsilon_h\right) \qquad (24)$$
where $\upsilon(t)$ indicates the predicted trajectory at time t, τ indicates the parameters of the weight matrix and offset vector in the LSTM model, and $\upsilon_h$ indicates the vector of historical trajectory features.
The trajectory prediction by LSTM needs to be optimized in combination with the Q-Learning algorithm. The loss function of Q-Learning combined with the LSTM is designed to fuse the vehicle trajectory behavior features and the driving features of the ICV, as shown in Equation (25).
$$J_2 = \sum_{t=1}^{T} D_\alpha\left(P_{lstm}(\upsilon(t)) \,\|\, Q_{ql}(\upsilon(t))\right) = \sum_{t=1}^{T} \frac{4}{1-\alpha^2}\left(1 - \int P_{lstm}(\upsilon(t))^{\frac{1+\alpha}{2}} \, Q_{ql}(\upsilon(t))^{\frac{1-\alpha}{2}} \, d\upsilon(t)\right) \qquad (25)$$
where $P_{lstm}$ indicates the probability function of the predicted trajectories in the LSTM, $Q_{ql}$ indicates the probability function of the predicted trajectories in Q-Learning, and $D_\alpha$ indicates the α-divergence. Considering the symmetry of $D_\alpha$, we set α to 0 to make the LSTM and Q-Learning prediction results as similar as possible. Finally, the loss function can be defined as Equation (26) by combining J1 and J2.
$$J = \beta J_1 + (1 - \beta) J_2 \qquad (26)$$
where β (β ∈ [0, 1]) indicates the weight of J1 in the final loss function.
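The combined loss can be sketched in a simplified, discretized form as follows, assuming per-step probabilities from the LSTM and the Q-Learning policy are available as arrays; with α = 0 the α-divergence of Equation (25) reduces to a Bhattacharyya-coefficient form, and J1 is treated as a negative log-likelihood. This is a sketch under those assumptions, not the authors' implementation.

```python
import numpy as np

def combined_loss(p_lstm, q_ql, beta=0.5, eps=1e-12):
    """Combined loss of Equation (26) in a simplified, discretized form.
    p_lstm and q_ql are arrays of per-step probabilities assigned to the
    observed trajectory points by the LSTM and the Q-Learning policy."""
    j1 = -np.sum(np.log(p_lstm + eps))                      # Equation (24), negative log-likelihood
    j2 = np.sum(4.0 * (1.0 - np.sqrt(p_lstm * q_ql)))       # Equation (25) evaluated at alpha = 0
    return beta * j1 + (1.0 - beta) * j2                    # Equation (26); beta = 0.5 as in Table 3
```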

3. Results and Discussion

In this section, a field test scenario for the ICVs was constructed based on an intelligent roadside unit, and the parameters of the model and scenario are listed in detail. Then, the evaluation metrics of the GM-PHD and improved LSTM models are introduced to verify and analyze the advantages of the proposed model.

3.1. Scenario and Parameters

The ICVs and road infrastructure have real-time data-exchange capabilities via the V2X unit. DSMP (an LTE-V communication protocol) was adopted by the roadside unit (RSU) to communicate with the on-board unit (OBU). Meanwhile, an intersection on the auxiliary road of Fushi Road in Shijingshan District, Beijing, was selected as the experimental scenario. The experiments were conducted between 7:00 and 19:30, and the saturation flow of the intersection was 319.04 pcu/h. The top view of the experimental scenario is shown in Figure 8b.
According to the survey of the selected scenario, the experimental area occupied 350 × 350 m², marked with a red rectangle in Figure 8b, and the driving route of the ICVs is shown as an example. The driving route includes three types of driving behaviors: straight ahead, right turn, and left turn, where the green "△" indicates the origin point of the vehicle and the yellow "△" indicates the destination point. In addition, in order to verify the detection accuracy, we adopted the centimeter-level positioning data of the ICVs as the ground truth.
Moreover, an intelligent roadside unit was deployed beside the intersection, equipped with a gigabit switch, cameras, LiDAR, V2X units, and mobile edge computing (MEC). A high-performance embedded processor with 30 TOPS of computing power served as the MEC device, ensuring the speed and efficiency of algorithm execution. The intelligent roadside unit is shown in Figure 8a, and the list of configurations is shown in Table 3.

3.2. Evaluation Metrics

The performance of trajectory prediction is sensitive to the perception accuracy of the ICVs. To evaluate the perception accuracy, the mean absolute percentage error (MAPE), mean absolute error (MAE), and root mean square error (RMSE) are used in this paper, as shown in Equations (27)–(29).
$$MAPE = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{\hat{y}_i - y_i}{y_i} \right| \qquad (27)$$
$$MAE = \frac{1}{q} \sum_{i=1}^{q} \left| h_i - l_i \right| \qquad (28)$$
$$RMSE = \sqrt{\frac{1}{m} \sum_{i=1}^{m} \left( \hat{y}_i - y_i \right)^2} \qquad (29)$$
where $\hat{y}_i$ indicates the output of the fusion positioning, $y_i$ indicates the actual position of the vehicle, n and m indicate the numbers of samples in MAPE and RMSE, respectively, q indicates the number of points, $h_i$ indicates the i-th point of the predicted trajectory, and $l_i$ indicates the ground truth of the i-th trajectory point.
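The three perception metrics translate directly into code; the following NumPy sketch assumes flat arrays of predicted and ground-truth values.

```python
import numpy as np

def mape(y_pred, y_true):
    """Mean absolute percentage error, Equation (27)."""
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

def mae(y_pred, y_true):
    """Mean absolute error, Equation (28)."""
    return np.mean(np.abs(y_pred - y_true))

def rmse(y_pred, y_true):
    """Root mean square error, Equation (29)."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))
```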
For the evaluation of the proposed prediction model, the average displacement error (ADE) and final displacement error (FDE) are adopted as the evaluation metrics. The ADE is the average Euclidean distance between the predicted trajectory and the real trajectory. FDE is defined as the Euclidean distance between the end-point of the predicted trajectory and the end-point of the actual trajectory. The ADE and FDE functions are shown in Equations (30) and (31).
$$ADE = \frac{\sum_{k=1}^{r} \sum_{i \in n} \left( D_i^{dist} \right)^{(k)}}{n} \qquad (30)$$
$$FDE = \sqrt{\left( x_{pred} - x_{truth} \right)^2 + \left( y_{pred} - y_{truth} \right)^2} \qquad (31)$$
where n indicates the number of vehicles, r indicates the prediction step, $D_i^{dist}$ indicates the Euclidean distance between the actual and predicted coordinates of vehicle i, $[x_{pred}, y_{pred}]^T$ indicates the end point of the predicted trajectory, and $[x_{truth}, y_{truth}]^T$ indicates the end point of the actual trajectory.
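A sketch of the two trajectory metrics is given below; it assumes trajectories stored as (steps, vehicles, 2) arrays and averages the displacement over both steps and vehicles, which is the common reading of Equations (30) and (31).

```python
import numpy as np

def ade(pred, truth):
    """Average displacement error, Equation (30): mean Euclidean distance
    between predicted and actual points over all steps and vehicles."""
    return np.mean(np.linalg.norm(pred - truth, axis=-1))

def fde(pred, truth):
    """Final displacement error, Equation (31): Euclidean distance between the
    predicted and actual end points, averaged over vehicles."""
    return np.mean(np.linalg.norm(pred[-1] - truth[-1], axis=-1))
```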

3.3. Experimental Results and Analysis

Three experimental ICVs, with a maximum speed of 35 km/h, tracked the route shown in Figure 8. The range of V2X communication between the RSU and the ICVs is considered to be 300 m. The real-time traffic states, including the ICV dataset and the signal light state dataset, can be perceived by the intelligent roadside unit. There are 12,403 vehicle data states and 1151 signal light data states in the dataset; parts of the dataset are shown in Table 4 and Table 5.

3.3.1. Accuracy of ICV Perception Analysis

A test route was set for the ICVs, as described in Figure 8b, and a series of perception data were recorded. The ICV positions were perceived by the camera, LiDAR, V2X unit, and the GM-PHD model and compared with the ground truth positions, and the errors were marked on the map, as shown in Figure 9.
The error distribution of the single-sensor models presents an irregular elliptical shape and is more dispersed than the error distribution of the fused model. Thus, the vehicle detection information after fusion processing is closer to the real results; the statistics of the perception errors are shown in Table 6.
The perception accuracy of the ICVs obtained with the GM-PHD model is more advantageous than that of any single sensor. Among the single sensors, the LiDAR has the best maximum, minimum, and average errors compared with the camera and the V2X unit. When compared to the LiDAR, the minimum error of the GM-PHD model is reduced by 86.58%. Moreover, the average error of the GM-PHD model is 0.1181 m, which is a 44.05% reduction compared to the LiDAR. By combining the data from multiple sources, the data noise is reduced, the outliers are eliminated, and the biases of each individual sensor are corrected. Thus, compared with the perception results of the single sensors, the perception results can be described more accurately by fusing the data from different sensors. Finally, the accuracy analysis of the perception further verifies that the data obtained by GM-PHD have sufficient credibility.
In order to evaluate the performance of the GM-PHD model, we compared the GM-PHD model with the LSTM model [32], MV3D (Multi-View 3D) model [33], and RoarNet model [34]. The RMSE and MAE metrics were selected to evaluate the performance of the models. The comparison results are shown in Figure 10.
In Figure 10, the MAE of the GM-PHD model is 0.0843 m in the X direction and 0.0828 m in the Y direction, and the RMSE of the GM-PHD model is 0.1100 m in the X direction and 0.1063 m in the Y direction. When compared with the LSTM, MV3D, and RoarNet models, the MAE of the GM-PHD model was reduced by 27.74%, 43.49%, and 28.77%, respectively, and the RMSE of the GM-PHD model was reduced by 30.08%, 45.93%, and 28.18%, respectively. Therefore, the GM-PHD model has better performance than the LSTM, MV3D, and RoarNet models, with lower MAE and RMSE values.

3.3.2. Advanced ICV Trajectory-Prediction Analysis

In order to evaluate the performance of the improved LSTM model for real-time trajectory prediction, the RNN encoder-decoder (RNN ED) model [35], the social LSTM model [36], and the social attention method [37] were selected for comparison with the proposed model. In addition, this section analyzes the stability of the proposed model in different time periods with different traffic flows and analyzes the latency of the trajectory prediction.
We deployed the intelligent roadside unit on the auxiliary road of Fushi Road, and the RNN ED, social LSTM, and social attention methods were adopted to predict the trajectories of the ICVs under the three driving behaviors (straight ahead, right turn, and left turn). Part of the trajectory prediction results (for a right turn) is shown in Figure 11.
In Figure 11, the trajectories of the ICVs predicted by the proposed model are shown as bold red lines, and the ground truth of the vehicle trajectories is shown as blue lines. Compared with the real-time trajectory prediction results of the RNN ED model, the social LSTM model, and the social attention model, the trajectory predicted by the proposed model is closer to the actual driving trajectory of the ICV. The trajectory prediction results were verified through repeated experiments; under the FDE and ADE evaluation metrics, the results are shown in Figure 12.
In Figure 12, the errors of the improved LSTM model under the FDE and ADE metrics are 0.845 m and 0.501 m, respectively, and the prediction error of the social LSTM under the ADE metric is 0.710 m. Therefore, when compared to the social LSTM model, the ADE of the proposed model was reduced by 29.43%. Meanwhile, when compared with the social attention and social LSTM models, the prediction error of the improved LSTM model is smaller because the proposed model utilizes the intersection environment features, the vehicle features, and the V2X communication data. In summary, the proposed model can predict the ICV trajectory more accurately.
The system latency has an impact on the real-time performance of the system and the safety of ICVs at the intersection. In order to analyze the latency of the prediction model, the calculation latency of the improved LSTM model is shown in Figure 13.
In Figure 13, the time interval of the fusion perception is 100 ms in the system prediction processing, and the trajectory prediction model needs 96 ms of processing time to predict the trajectories of the ICVs. The total latency of perception and trajectory prediction is 196 ms, which is marked in the same color across adjacent time periods. In the fusion perception processing, a large number of Gaussian components need to be handled by the pruning operation of GM-PHD, which reduces the working efficiency. In the trajectory prediction processing, the parameter calculation and graph modeling parts of the improved LSTM model take a certain amount of time, increasing the latency of the model. When pipeline technology is applied, the computation of the trajectory prediction is carried out simultaneously with the next computation of the fused perception algorithm. Thus, considering that the trajectory prediction horizon of 2 s is much larger than the latency of 0.196 s, the total latency satisfies the requirement of real-time trajectory prediction.
In order to verify the effect of traffic flow on trajectory tracking and prediction over different time periods, three ICVs were continuously tested in the intersection scenario. The prediction errors of the ICVs under different traffic flow volumes in different time periods are shown in Figure 14.
In Figure 14, the orange columns of the histogram indicate the perception error, and the upper and lower edges indicate the maximum and minimum detection errors. The blue color indicates the trajectory prediction error, and the green color indicates the traffic flow volume. From the data in Figure 14, the prediction accuracy decreases as the traffic flow increases. However, the average displacement error of the trajectory prediction is still lower than 0.501 m, so the prediction model satisfies the requirements of high-precision trajectory prediction of ICVs. In addition, the proposed method can be extended to other similar systems, such as highway monitoring systems and tunnel monitoring systems, to monitor the collision risk of vehicles on highways and ensure vehicle safety in tunnels.

4. Conclusions

In this paper, we focused on intelligent perception at an urban intersection and proposed a real-time vehicle trajectory prediction method based on V2X communication; the method was applied to an urban intersection to further improve the real-time trajectory prediction capability of ICVs. Combined with V2X data, we improved the LSTM model based on the Q-Learning algorithm, and the vehicle trajectory behavior features and the ICV driving features were fused to optimize the loss function. The experimental results demonstrated that the improved LSTM model achieved an average prediction error of 0.501 m, and the error was reduced by 29.43% under the ADE metric and 26.03% under the FDE metric when compared to the social LSTM model, achieving stable and real-time prediction of ICV trajectories at different time periods and under different traffic flow volumes.
In the future, our method can provide real-time and accurate trajectories for constructing multidimensional datasets of intersection scenarios. Meanwhile, a more complex model for trajectory prediction will be designed and applied in challenging scenarios, such as highways, tunnels, off-ramps, and roundabouts, to improve vehicle safety and urban traffic efficiency.

Author Contributions

Conceptualization, P.W.; methodology, H.Y.; software, H.Y. and C.L.; validation, Y.W. and C.L.; formal analysis, Y.W.; investigation, P.W.; resources, P.W.; data curation, H.Y. and R.Y.; writing—original draft preparation, H.Y.; writing—review and editing, P.W.; visualization, H.Y.; supervision, P.W.; project administration, P.W.; funding acquisition, P.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Beijing Natural Science Foundation (grant number 4212034) and the Beijing Higher Education Undergraduate Teaching Reform and Innovation Project (grant number 202210009003).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and models used during the study appear in this article.

Acknowledgments

The authors would like to thank X. Liu and Y. Wang for their technical assistance with the experiments and analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lee, S.; Jung, Y.; Park, Y.-H.; Kim, S.-W. Design of V2X-Based Vehicular Contents Centric Networks for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 13526–13537.
  2. Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Cooperative perception for connected and automated vehicles: Evaluation and impact of congestion control. IEEE Access 2020, 8, 197665–197683.
  3. Thandavarayan, G.; Sepulcre, M.; Gozalvez, J. Generation of cooperative perception messages for connected and automated vehicles. IEEE Trans. Veh. Technol. 2020, 69, 16336–16341.
  4. Wang, P.; Yu, H.; Zhang, W.; Wang, L.; Wu, W. Real-time traffic status evaluation method for urban cooperative vehicle infrastructure system. China J. Highw. Transp. 2019, 32, 176–187.
  5. Xiang, C.; Cheng, W.; Zhang, Z.; Jiao, X.; Qu, Y.; Chen, C.; Dai, H. Intelligent edge-empowered adaptive data recovery for urban traffic. J. Comp. Res. Dev. 2022, 1–15.
  6. Yang, Z.; Tang, R.; Bao, J.; Lu, J.; Zhang, Z. A real-time trajectory prediction method of small-scale quadrotors based on GPS data and neural network. Sensors 2020, 20, 7061.
  7. Wang, P.; Deng, H.; Zhang, J.; Wang, L.; Zhang, M.; Li, Y. Model predictive control for connected vehicle platoon under switching communication topology. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7817–7830.
  8. Wang, X.; Ning, Z.; Hu, X.; Ngai, E.C.-H.; Wang, L.; Hu, B.; Kwok, R.Y. A city-wide real-time traffic management system: Enabling crowdsensing in social internet of vehicles. IEEE Commun. Mag. 2018, 56, 19–25.
  9. Lyu, N.; Wen, J.; Duan, Z.; Wu, C. Vehicle trajectory prediction and cut-in collision warning model in a connected vehicle environment. IEEE Trans. Intell. Transp. Syst. 2022, 23, 966–981.
  10. Li, X.; Guo, Z. Multi-source information fusion model of traffic lifeline based on improved D-S evidence theory. In Proceedings of the 2018 26th International Conference on Geoinformatics (Geoinformatics 2018), Kunming, China, 1–6 June 2018.
  11. Hossain, M.; Elshafiey, I.; Al-Sanie, A. Cooperative vehicle positioning with multi-sensor data fusion and vehicular communications. Wirel. Netw. 2019, 25, 1403–1413.
  12. Bounini, F.; Gingras, D.; Pollart, H.; Gruyer, D. From simultaneous localization and mapping to collaborative localization for intelligent vehicles. IEEE Commun. Mag. 2021, 13, 196–216.
  13. Jie, B.; Li, S.; Zhang, H.; Huang, L.; Wang, P. Robust target detection and tracking algorithm based on roadside radar and camera. Sensors 2021, 21, 1116.
  14. Sabzalian, M.H.; Mohammadzadeh, A.; Rathinasamy, S.; Zhang, W. A developed observer-based type-2 fuzzy control for chaotic systems. Int. J. Syst. Sci. 2021, 1–20.
  15. Sabzalian, M.H.; Mohammadzadeh, A.; Lin, S.; Zhang, W. A robust control of a class of induction motors using rough type-2 fuzzy neural networks. Soft Comput. 2020, 24, 9809–9819.
  16. Schreier, M.; Willert, V.; Adamy, J. An integrated approach to maneuver-based trajectory prediction and criticality assessment in arbitrary road environments. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2751–2766.
  17. Xie, G.; Gao, H.; Qian, L.; Huang, B.; Li, K.; Wang, J. Vehicle trajectory prediction by integrating physics and maneuver-based approaches using interactive multiple models. IEEE Trans. Ind. Electron. 2018, 65, 5999–6008.
  18. Cui, H.; Radosavljevic, V.; Chou, F.; Lin, T.; Nguyen, T.; Huang, T.; Schneider, J.; Djuric, N. Multimodal trajectory predictions for autonomous driving using deep convolutional networks. In Proceedings of the International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 2090–2096.
  19. Luo, W.; Yang, B.; Urtasun, R. Fast and furious: Real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. In Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 3569–3577.
  20. Wen, H.; Zhang, W.; Zhao, S. Vehicle lane-change trajectory prediction model based on generative adversarial networks. J. Cent. South Univ. (Nat. Sci. Ed.) 2020, 48, 32–40.
  21. Qin, S.; Li, T. Research on multi-interaction vehicle trajectory prediction. Comput. Ind. Eng. 2021, 57, 232–238.
  22. Wang, P.; Wang, Y.; Deng, H.; Zhang, M.; Zhang, J. Multilane spatiotemporal trajectory optimization method (MSTTOM) for connected vehicles. J. Adv. Transp. 2020, 2020, 8819911.
  23. Häfner, B.; Bajpai, V.; Ott, J.; Schmitt, G.A. A survey on cooperative architectures and maneuvers for connected and automated vehicles. IEEE Commun. Surv. Tutor. 2022, 24, 380–403.
  24. Wang, P.; Liu, X.; Wang, Y.; Wang, T.; Zhang, J. Short-term traffic state prediction based on mobile edge computing in V2X communication. Appl. Sci. 2021, 11, 11530.
  25. Zyner, A.; Worrall, S.; Nebot, E. Naturalistic driver intention and path prediction using recurrent neural networks. IEEE Trans. Intell. Transp. Syst. 2018, 21, 1584–1594.
  26. Schreiber, M.; Hoermann, S.; Dietmayer, K. Long-term occupancy grid prediction using recurrent neural networks. In Proceedings of the 2019 International IEEE Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 9299–9305.
  27. Ma, Y.; Zhu, X.; Zhang, S.; Yang, R.; Wang, W.; Manocha, D. TrafficPredict: Trajectory prediction for heterogeneous traffic-agents. In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, HI, USA, 27 January–1 February 2019.
  28. Ji, X.; Fei, C.; He, X.; Liu, Y.; Liu, Y. Intention recognition and trajectory prediction for vehicles using LSTM network. China J. Highw. Transp. 2019, 32, 34–42.
  29. Shi, W.; Xu, J.; Zhu, D.; Zhang, G.; Wang, X.; Li, J.; Zhang, X. RGB-D semantic segmentation and label-oriented VoxelGrid fusion for accurate 3D semantic mapping. IEEE Trans. Circ. Syst. Video Technol. 2022, 32, 183–197.
  30. Lindenmaier, L.; Aradi, S.; Bécsi, T.; Törő, O. GM-PHD filter based sensor data fusion for automotive frontal perception system. IEEE Trans. Veh. Technol. 2022, 71, 7215–7229.
  31. Zhao, Y.; Lee, J.; Chen, W. Q-greedyUCB: A new exploration policy to learn resource-efficient scheduling. China Commun. 2021, 18, 12–23.
  32. Inou, M.; Tang, S.; Obana, S. LSTM-based high precision pedestrian positioning. In Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2022; pp. 675–678.
  33. Rubino, C.; Crocco, M.; Bue, A.D. 3D object localisation from multi-view image detections. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 1281–1294.
  34. Shin, K.; Kwon, Y.P.; Tomizuka, M. RoarNet: A robust 3D object detection based on region approximation refinement. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 2019; pp. 2510–2515.
  35. Cho, K.; Merrienboer, B.V.; Gulcehre, C.; Bougares, F. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
  36. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Li, F.; Savarese, S. Social LSTM: Human trajectory prediction in crowded spaces. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 961–971.
  37. Vemula, A.; Muelling, K.; Oh, J. Social Attention: Modeling attention in human crowds. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–26 May 2018; pp. 4601–4607.
Figure 1. Structure of real-time trajectory prediction method.
Figure 2. Processing of vehicle perception based on GM-PHD model.
Figure 3. Structure of improved LSTM model for ICVs.
Figure 4. The LSTM model in the trajectory prediction of the ICVs.
Figure 5. Structure of Q-Learning in ICVs trajectory prediction.
Figure 6. Diagram of ICV action, including straight ahead, left lane change, right lane change, left turn, and right turn.
Figure 7. Driving behaviors and trajectories of ICVs at the intersection. (a) straight ahead; (b) left lane change; (c) right lane change; (d) left turn; (e) right turn.
Figure 8. Experimental scenarios and equipment. (a) roadside unit; (b) experimental scenarios.
Figure 9. The error distribution and detection results of ICV perception. (a) the error distribution of the single-sensor model; (b) the detection results of the single-sensor model and the GM-PHD model.
Figure 10. Comparison of MAE and RMSE between the LSTM, MV3D, RoarNet, and GM-PHD models. (a) the comparison results under MAE metric; (b) the comparison results under RMSE metric.
Figure 11. Real-time trajectory prediction results for the driving behavior of a right turn.
Figure 12. Performance of the improved LSTM model under FDE and ADE evaluation metrics.
Figure 13. Analysis of the calculation latency of the improved LSTM model.
Figure 14. Perception and prediction errors under different traffic flows.
Table 1. Pseudocode for the GM-PHD.
Given $\{w_{i,k-1}^{(v)}, m_{i,k-1}^{(v)}, P_{i,k-1}^{(v)}\}_{v=1}^{V_{i,k-1}}$ for target $i \in \{1, \ldots, N\}$ and the set of measurements $Z_{j,k}$ for $j \in \{1, \ldots, N\}$
Step 1. (Initialization)
  for $i = 1, \ldots, N$ do
    Initialize $\{w_{i,k-1}^{(v)}, m_{i,k-1}^{(v)}, P_{i,k-1}^{(v)}\}_{v=1}^{V_{i,k-1}}$; initialize $Z_{i,k}$
  end for
Step 2. (Prediction for birth ICVs)
  $i := 0$
  for $j = 1, \ldots, J_{\gamma,k}$ do
    $i := i + 1$; $w_{k|k-1}^{(i)} = w_{\gamma,k}^{(j)}$, $m_{k|k-1}^{(i)} = m_{\gamma,k}^{(j)}$, $P_{k|k-1}^{(i)} = P_{\gamma,k}^{(j)}$
  end for
  for $j = 1, \ldots, J_{\beta,k}$ do
    for $q = 1, \ldots, J_{k-1}$ do
      $i := i + 1$; $w_{k|k-1}^{(i)} = w_{k-1}^{(q)} w_{\beta,k}^{(j)}$, $m_{k|k-1}^{(i)} = d_{\beta,k-1}^{(q)} + F_{\beta,k-1}^{(j)} m_{k-1}^{(q)}$
      $P_{k|k-1}^{(i)} = Q_{\beta,k-1}^{(j)} + F_{\beta,k-1}^{(j)} P_{k-1}^{(q)} (F_{\beta,k-1}^{(j)})^T$
    end for
  end for
Step 3. (Prediction for existing ICVs)
  for $j = 1, \ldots, J_{k-1}$ do
    $i := i + 1$; $w_{k|k-1}^{(i)} = p_{S,k} w_{k-1}^{(j)}$, $m_{k|k-1}^{(i)} = F_{k-1} m_{k-1}^{(j)}$, $P_{k|k-1}^{(i)} = Q_{k-1} + F_{k-1} P_{k-1}^{(j)} F_{k-1}^T$
  end for
  $J_{k|k-1} = i$
Step 4. (Construction of PHD update components)
  for $j = 1, \ldots, J_{k|k-1}$ do
    $\eta_{k|k-1}^{(j)} = H_k m_{k|k-1}^{(j)}$, $S_k^{(j)} = R_k^{(j)} + H_k P_{k|k-1}^{(j)} H_k^T$
    $K_k^{(j)} = P_{k|k-1}^{(j)} H_k^T [S_k^{(j)}]^{-1}$, $P_{k|k}^{(j)} = [I - K_k^{(j)} H_k] P_{k|k-1}^{(j)}$
  end for
Step 5. (Update)
  for $j = 1, \ldots, J_{k|k-1}$ do
    $w_k^{(j)} = (1 - p_{D,k}) w_{k|k-1}^{(j)}$, $m_k^{(j)} = m_{k|k-1}^{(j)}$, $P_k^{(j)} = P_{k|k-1}^{(j)}$
  end for
  $l := 0$
  for each $z \in Z_k$ do
    $l := l + 1$
    for $j = 1, \ldots, J_{k|k-1}$ do
      $w_k^{(l J_{k|k-1} + j)} = p_{D,k} w_{k|k-1}^{(j)} N(z; \eta_{k|k-1}^{(j)}, S_k^{(j)})$, $m_k^{(l J_{k|k-1} + j)} = m_{k|k-1}^{(j)} + K_k^{(j)} (z - \eta_{k|k-1}^{(j)})$
      $P_k^{(l J_{k|k-1} + j)} = P_{k|k}^{(j)}$
    end for
    for $j = 1, \ldots, J_{k|k-1}$ do
      $w_k^{(l J_{k|k-1} + j)} := \dfrac{w_k^{(l J_{k|k-1} + j)}}{\kappa_k(z) + \sum_{i=1}^{J_{k|k-1}} w_k^{(l J_{k|k-1} + i)}}$
    end for
  end for
  $J_k = l J_{k|k-1} + J_{k|k-1}$
Output $\{w_k^{(i)}, m_k^{(i)}, P_k^{(i)}\}_{i=1}^{J_k}$
Table 2. The configurations list of Q-values according to ICV states.

| The Speed State of the ICV | Q-Value |
| --- | --- |
| Acceleration | 2 |
| Constant | 1 |
| Deceleration | −1 |
Table 3. List of configurations.

| Parameters | Description | Values |
| --- | --- | --- |
| Intelligent roadside unit and ICVs | The number of ICVs | 3 |
| | V2X communication | YES |
| | Average latency of V2X communication | 6.3 ms |
| Sensors | Camera | 1080 p/25 Hz |
| | LiDAR | 32 lines/10 Hz |
| | V2X unit | LTE-V/10 Hz |
| GM-PHD | The updating period of the transformation equation | 0.1 s |
| | State transition matrix of ICV, F_k | [[1, 0, 0.1, 0], [0, 1, 0, 0.1], [0, 0, 1, 0], [0, 0, 0, 1]] |
| Improved LSTM | Number of hidden layers | 3 |
| | Number of hidden layer nodes | 300 |
| | Epoch | 20 |
| | Batch size | 100 |
| | Loss function weight β | 0.5 |
| | Learning rate | 0.001 |
| | Optimizer | Adam |
| | The number of historical trajectory points αin | 30 |
| | The number of predicted trajectory points βout | 20 |
Table 4. Data states of vehicles at the intersection.

| ID | Timestamp | V2X | Longitude | Latitude | Steering Angle (°) | Speed (m/s) | Acceleration (m/s²) | Horizontal Distance (m) | Heading Angle (°) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 56 | 1609232645.1 | Yes | 116.2138744 | 39.9306601 | 2.3 | 0.10 | −0.06 | 7.82 | 87.22 |
| 57 | 1609232645.1 | No | 116.2139378 | 39.9306706 | —— | 2.12 | —— | 12.14 | 89.92 |
| 66 | 1609233146.8 | No | 116.2127605 | 39.9306347 | —— | 2.12 | —— | 18.15 | 155.52 |
| 67 | 1609233146.8 | No | 116.2120121 | 39.9306501 | —— | 4.98 | —— | 15.47 | 88.59 |
Table 5. Signal light data states at the intersection.

| Timestamp | Period (s) | Signal Light State (East-West) | Time Remaining (s) |
| --- | --- | --- | --- |
| 1609232622 | 105 | Green | 23 |
| 1609232623 | 105 | Green | 22 |
| 1609233152 | 105 | Red | 17 |
| 1609233153 | 105 | Red | 16 |
Table 6. Performance of the GM-PHD model and the single-sensor models under the maximum error, minimum error, average error, and MAPE evaluation metrics.

| Evaluation Metrics | Camera | LiDAR | V2X Unit | GM-PHD |
| --- | --- | --- | --- | --- |
| Maximum Error (m) | 11.4623 | 0.8268 | 10.9980 | 0.1401 |
| Minimum Error (m) | 0.1917 | 0.0082 | 0.0488 | 0.0011 |
| Average Error (m) | 3.5881 | 0.2111 | 8.1386 | 0.1181 |
| MAPE | 20.26% | 0.91% | 28.87% | 0.10% |