Article

Scenario Generation for Autonomous Vehicles with Deep-Learning-Based Heterogeneous Driver Models: Implementation and Verification

1 Shenzhen International Graduate School, Tsinghua University, Shenzhen 518000, China
2 Research Institute of Tsinghua, Pearl River Delta, Guangzhou 510530, China
3 Institute of Systems Engineering, Macau University of Science and Technology, Macau 999078, China
4 Waytous Inc., Shenzhen 518000, China
* Authors to whom correspondence should be addressed.
Sensors 2023, 23(9), 4570; https://doi.org/10.3390/s23094570
Submission received: 6 April 2023 / Revised: 4 May 2023 / Accepted: 5 May 2023 / Published: 8 May 2023
(This article belongs to the Section Vehicular Sensing)

Abstract

Virtual testing requires hazardous scenarios to effectively test autonomous vehicles (AVs). Existing studies have obtained rare events via sampling methods in a fixed scenario space. In reality, heterogeneous drivers behave differently when facing the same situation. To generate more realistic and efficient scenarios, we propose a two-stage heterogeneous driver model to change the number of dangerous scenarios in the scenario space. We trained the driver model on the highD dataset and generated scenarios through simulation. Simulations were conducted in 20 experimental groups with heterogeneous driver models and 5 control groups with the original driver model. The results show that, by adjusting the number and position of aggressive drivers, the percentage of dangerous scenarios was significantly higher than with models that do not account for driver heterogeneity. To further verify the effectiveness of our method, we evaluated two driving strategies in generated car-following and cut-in scenarios. The results verify the effectiveness of our approach and, cumulatively, indicate that it could accelerate the testing of AVs.

1. Introduction

As challenging driving scenarios rarely occur in reality, traditional on-road testing is prohibitively time-consuming and expensive [1]. According to the 2021 annual disengagement reports of the California Department of Motor Vehicles (DMV), autonomous driving road-test mileage reached 4.1 million miles in 2021, surpassing the previous reporting cycle by 2 million miles. However, the highest miles per intervention (MPI) exceeded 50,000 miles; critical events are thus extremely rare in road testing. Although specific challenging scenarios can be created manually, some conditions, such as extreme weather, are difficult to reproduce. Scenario-based simulation testing is a reliable solution to the problems of road testing [2,3,4,5].
Although low-cost and efficient scenario-based virtual testing has attracted increasing attention, testing all scenarios wastes computing resources. Accelerated evaluation focuses on finding representative scenarios, such as small-probability events that may violate autonomous vehicles' (AVs) safety requirements [6,7]. For example, Zhao et al. used importance sampling to sample rare events effectively, achieving the same test results with fewer scenarios [8]. Huang et al. proposed a piecewise model as a more flexible structure to capture the tails of the data more accurately [9]. Furthermore, Althoff et al. combined reachability analysis and optimization techniques to reduce the size of the solution space for autonomous vehicles [10]. The above three methods use empirical distributions, but it is also possible to focus on only part of the scenario space. Sun et al. summarized three types of methods for finding partially unsafe scenarios by delineating boundaries [11]: finding high-risk scenarios [12], boundary scenarios [13], and collision scenarios [14].
However, the sampling space of those acceleration approaches is fixed because all environmental vehicles use the same model and cannot be adjusted. Aggressive drivers are more likely to behave in a manner that endangers others [15,16,17]. Therefore, we can treat environmental vehicles as variables and, by controlling them, adjust the proportion of hazardous situations in the scenario space. Ge et al. described driving behavior with utility functions [18]; different drivers' driving strategies can be represented by different utility functions. Modifying the driving strategies of surrounding vehicles (SVs) can create more challenging events for the AV. A utility function defines the selection of a particular behavior, such as the likelihood of changing lanes or the following distance [19]. A control model is also needed to calculate the speeds of the SVs.
Designing a driver model that simulates human behavior for environmental vehicles is necessary to capture the uncertainty of human behavior. Imitation learning is a common approach, but it requires the manual definition of cost functions and is computationally expensive [20]. Aksjonov et al. used an artificial neural network (ANN) to predict human behavior and achieved good performance [21]. We can equate driver modeling to the trajectory prediction problem. Xing et al. used a Gaussian mixture model to identify the driving style and proposed a personalized joint time-series modeling method for trajectory prediction [22]. Such methods produce deterministic predictions that cannot handle the multiple possibilities of human behavior. To explore the uncertainty of future states, some methods predict multiple possible paths. Zhao et al. estimated high-probability endpoint candidates on the basis of the environmental context and generated trajectories from them [23]. Tian et al. proposed a joint learning architecture that incorporates the lane orientation, vehicle interaction, and driving intention in multi-modal vehicle trajectory forecasting [24]. The GAN-based method incorporates latent variables into network learning and optimizes the generated trajectories [25]. However, these stochastic prediction methods do not represent driver heterogeneity well. For scenario sampling, the prediction target is not the driver's optimal trajectory [26,27] but the probability distribution of the next action. Deo et al. utilized the information of surrounding vehicles to predict multimodal trajectory distributions [28]. Building on this insight, we propose heterogeneous driver models with integrated decision and control, implemented with deep learning. Heterogeneity is reflected in training the models separately on different types of driver data. Scenarios are generated through real-time interaction between SVs equipped with our driver model and AVs.
Taking five common initialization scenarios as examples, we changed the model style of the SVs to obtain more dangerous events. To demonstrate that our method works, we evaluated the two driving strategies with the generated scenarios. Compared with the above methods, our method could better accelerate evaluation. The overview of our work is depicted in Figure 1.
The contributions of this paper are as follows:
(1) Uncertainty in driver behavior is learned using deep-learning methods for the dynamic generation of stochastic scenarios.
(2) Driver heterogeneity is demonstrated to be able to generate more realistic and complex scenarios, and in some cases, increase the proportion of critical scenarios.
(3) Autonomous vehicles with scenarios generated by our method were tested, and the safety and efficiency of two driving strategies were evaluated.
The rest of this paper is structured as follows: Section 2 introduces our scenario generation method. Section 3 analyzes our experimental results, and Section 4 shows the conclusions.

2. Scenario Generation Method

This section describes the definition of our scenario generation problem and introduces the heterogeneous driver models for environmental vehicles.

2.1. Problem Description

The scene at time step t is defined as $x_t = (d, v, a, r_d)$, where d is the coordinate, v the velocity, a the acceleration, and $r_d$ the relative distance. A scenario $X = (x_t, x_{t+1}, \ldots)$ is a sequence of scenes. After initializing the scene sequence for each vehicle, we describe the scenario generation problem as sampling a new scene sequence through the interaction between objects. The driver model takes the historical scene information $I = (x_{t-p}, \ldots, x_{t-1})$ as input, where p is the observation window, and makes a decision first. For a given I, the distribution $P(m_i \mid I)$ is known, where $m_i$ indicates lane-change and braking maneuvers, and the decision result is sampled from this distribution. Moreover, a distribution for sampling acceleration must be found. Directly predicting the next frame's acceleration distribution and sampling from it leads to an unsmooth vehicle trajectory [29]. Therefore, we predict the distribution of endpoints within the future observation time and sample an endpoint from it as the driver-intent feature. The probability of a control event can be written as follows:
$P(a \mid I) = P(a \mid Y) \, P_\theta(Y \mid m_i, I) \, P(m_i \mid I)$
where $P_\theta(Y \mid m_i, I)$ denotes the probability of future endpoints, and the parameter $\theta$ is obtained via model learning. The coordinate system must always take the ego position at time $t-1$ as its origin for the model to be valid at any location. If the original coordinate sequence is $(d_{t-p}, \ldots, d_{t-1})$, the coordinate conversion is defined as follows:
$d'_{t-i} = d_{t-i} - d_{t-1}$
AV acceleration is calculated with a car-following control model. The other parameters of $x_t$ are then computed from a, which yields a generated scene. The above process is repeated until the AV passes the test or a collision occurs.
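As a concrete illustration, the coordinate conversion above fits in a few lines (a minimal NumPy sketch; the function name is ours, not the paper's):

```python
import numpy as np

def to_ego_frame(coords):
    """Shift a coordinate history (d_{t-p}, ..., d_{t-1}) so that the most
    recent position d_{t-1} becomes the origin, making the driver model
    valid at any location."""
    coords = np.asarray(coords, dtype=float)
    return coords - coords[-1]
```

For example, a history ending at (2, 1) maps that last point to the origin while preserving all relative displacements.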

2.2. Datasets

We trained the model using the public highD dataset [30] containing UAV data recorded on German highways, including the trajectory information of more than 110,500 vehicles sampled at 25 Hz. We down-sampled the trajectory data to 5 Hz to improve the training speed. Each track’s data contain coordinates, speed, acceleration, and the surrounding vehicle ID.
To train the heterogeneous driver models, we first classified the dataset. Drivers were divided into three style categories: aggressive, normal, and conservative. We used the k-means algorithm to cluster all drivers into these categories on the basis of the mean, variance, and maximal values of velocity and acceleration. Figure 2 visualizes some features of each cluster. Aggressive drivers perform more lane changes and exhibit a wider range of longitudinal acceleration, while conservative drivers tend to maintain their lane and change speed smoothly. Our models learn the properties of each group separately.
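The clustering step can be sketched as follows (a NumPy-only illustration with a deterministic k-means initialization; the exact feature set and implementation in the paper may differ):

```python
import numpy as np

def driver_features(v, a):
    """Style features for one driver: mean, variance, and maximal absolute
    value of the velocity and acceleration traces."""
    v, a = np.asarray(v, float), np.asarray(a, float)
    return np.array([v.mean(), v.var(), np.abs(v).max(),
                     a.mean(), a.var(), np.abs(a).max()])

def kmeans(X, k=3, iters=50):
    """Plain Lloyd's algorithm with a deterministic, evenly spaced init."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each driver to the nearest center, then recompute centers
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):        # leave empty clusters unchanged
                centers[j] = X[labels == j].mean(0)
    return labels, centers
```

Each driver's trajectory is reduced to one feature vector, and the three resulting clusters are interpreted as aggressive, normal, and conservative.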

2.3. Heterogeneous Driver Modeling

Acceleration calculation in scenario generation should not be treated as a simple regression task. Spatial navigation awareness drives a person to reach a predetermined area and to plan a route [31], and their actions change at any time with the scene context and their intentions. On the basis of this observation, we propose a novel two-stage driver model that first estimates the maneuver probability and then generates endpoint areas on the basis of a sampled maneuver for planning actions.
Figure 3 shows an overview of our model, consisting of two components:
  • Maneuver module (MM): estimates maneuver probabilities from the scene context.
  • Action module (AM): generates possible future terminal areas on the basis of the selected maneuver and then samples an endpoint from the terminal area as the intention feature. The endpoint and the historical trajectory serve as input to generate the next action.

2.3.1. Maneuver Module (MM)

We consider six maneuver classes. Lateral maneuvers are left-lane change, right-lane change, and maintaining the current lane. Lane changing takes about 6 s from start to finish. Therefore, the observation window was set to 3 s. The lane-changing state is defined as the lane ID change within the observation window. Longitudinal maneuvers are braking and normal driving. Braking is defined as the average speed of the next 3 s being less than 0.9 times the average speed of the historical 3 s [32]. MM consists of a long short-term memory (LSTM) encoder concatenated with two softmax functions. We sampled a maneuver from the conditional probability, P ( m i | I ) , as the input of the action module.
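A minimal PyTorch sketch of such an encoder with two softmax heads is shown below (the layer sizes and the four-dimensional scene input are illustrative assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ManeuverModule(nn.Module):
    """LSTM encoder over the observation window, followed by two softmax
    heads: lateral (left change, right change, keep lane) and
    longitudinal (brake, normal driving)."""
    def __init__(self, in_dim=4, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True)
        self.lat_head = nn.Linear(hidden, 3)
        self.lon_head = nn.Linear(hidden, 2)

    def forward(self, history):          # history: (batch, p, in_dim)
        _, (h, _) = self.encoder(history)
        h = h[-1]                        # final hidden state of last layer
        return (torch.softmax(self.lat_head(h), dim=-1),
                torch.softmax(self.lon_head(h), dim=-1))
```

A maneuver $m_i$ can then be drawn from these probabilities (e.g. with torch.multinomial) and passed on to the action module.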

2.3.2. Action Module (AM)

According to the output of the MM, a maneuver is randomly sampled as the premise of the driver's intention (during training, the ground-truth maneuver is used instead). We need a sampling space that represents all scenes, so the environmental information is encoded into a context vector. When decoding, the context vector is concatenated with the selected maneuver, and a five-dimensional vector representing the parameters of a Gaussian endpoint distribution is output. An endpoint is sampled from this distribution as the intention feature and fed to the MLP together with the historical trajectory information to generate the acceleration of the next frame, which better reflects the randomness of behavior. Our experiments confirmed that the generated actions exhibit this stochasticity.
The AM consists of a classic LSTM encoder–decoder [33] and a multilayer perceptron (MLP) [34]. The encoder–decoder framework estimates the sampling space for the short-term endpoint region. An MLP is a fully connected class of a feedforward artificial neural network (ANN). The encoder is the same as that in MM. It can extract the displacement information and relative position to the surrounding eight vehicles.
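Interpreting the five-dimensional decoder output as (mu_x, mu_y, sigma_x, sigma_y, rho) of a bivariate Gaussian, a common parameterization that we assume here, the endpoint sampling step looks like:

```python
import numpy as np

def sample_endpoint(params, rng=None):
    """Draw one endpoint from the bivariate Gaussian parameterized by
    (mu_x, mu_y, sigma_x, sigma_y, rho) output by the AM decoder."""
    mu_x, mu_y, sx, sy, rho = params
    mean = np.array([mu_x, mu_y])
    cov = np.array([[sx ** 2,       rho * sx * sy],
                    [rho * sx * sy, sy ** 2]])
    rng = np.random.default_rng() if rng is None else rng
    return rng.multivariate_normal(mean, cov)
```

Repeated calls yield different endpoints, which is exactly where the stochasticity of the generated actions comes from.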

2.3.3. Model Training

During training, the objective minimizes the following errors. Because lane-changing samples are scarce in the dataset, to reduce the impact of data imbalance, we minimize the focal loss of the maneuver category, which adds a weighting factor to increase the contribution of the minority categories to the loss [35]:
$l_{focal} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$
For the 2D Gaussian distribution of the endpoint, we chose to minimize its negative log likelihood loss [36] as follows.
$l_{nll} = 0.5 \left( \log(\max(var, eps)) + \frac{(y - \hat{y})^2}{\max(var, eps)} \right)$
We used the mean squared error [36] for the sampled endpoint and the next-frame acceleration as follows.
$l_{mse} = \frac{1}{n} \sum_{i} (Y_i - \hat{Y}_i)^2$
Therefore, the objective function was defined as follows.
$L = l_{nll} + l_{focal} + l_{mse}$
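The three terms translate directly into PyTorch (a sketch; for the Gaussian term, torch.nn.GaussianNLLLoss implements the same clamped expression):

```python
import torch

def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    """Focal loss for the probability p_t assigned to the true maneuver;
    alpha_t and gamma values here are illustrative."""
    return -alpha_t * (1.0 - p_t) ** gamma * torch.log(p_t)

def gaussian_nll(y, y_hat, var, eps=1e-6):
    """Negative log likelihood of endpoint y under N(y_hat, var),
    with the variance clamped at eps for numerical stability."""
    var = torch.clamp(var, min=eps)
    return 0.5 * (torch.log(var) + (y - y_hat) ** 2 / var)

def total_loss(p_t, y, y_hat, var, a, a_hat):
    """Objective L = l_nll + l_focal + l_mse over the endpoint
    distribution, maneuver probability, and next-frame acceleration."""
    mse = ((a - a_hat) ** 2).mean()
    return gaussian_nll(y, y_hat, var).mean() + focal_loss(p_t).mean() + mse
```

Both auxiliary losses vanish when the prediction is exact (p_t = 1, y = y_hat with unit variance), so the gradient is dominated by the hard cases.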
We used LSTMs with 128 units and an MLP with 3 hidden layers. Heterogeneous models were obtained by training separately on each style's data. All models were trained using Adam with a learning rate of 0.001 and implemented in PyTorch. As a comparison, we also trained an original driver model on the unclassified dataset, without considering heterogeneity.
Because accuracy is not a critical part of our method and does not affect the results of this paper, we later focused on the generated scenarios and did not compare the training results with those of other methods.

2.4. Implementation and Verification

We designed experiments to demonstrate the effect of heterogeneity on the number of challenging scenarios. To further verify the effectiveness of our scenario generation method, we evaluated the performance of two driving strategies by testing AV with our generated scenarios. This section introduces the autonomous driving model, driving strategies, and experimental scheme.

2.4.1. Intelligent Driver Model

In this work, AVs were implemented using the intelligent driver model (IDM) [37], a car-following model with longitudinal control. It aims to calculate the desired speed and distance on the basis of the current vehicle speed and relative distance. The basic definition is as follows:
$\dot{a} = a_{max} \left[ 1 - \left( \frac{v}{\tilde{v}} \right)^{\beta} - \left( \frac{\tilde{s}}{s} \right)^{2} \right]$
where $a_{max}$ is the maximal acceleration, v the ego car (EC) speed, $\tilde{v}$ the EC's desired speed, $\beta$ the acceleration exponent, s the relative distance between the EC and the front car (FC), $\tilde{s}$ the desired relative distance defined below, $s_0$ the minimal gap at standstill, T the desired time headway, $\Delta v$ the speed difference between the EC and FC, and b the comfortable deceleration.
$\dot{a}$ is the desired acceleration of the vehicle. In this equation, the second term in brackets measures the gap between the current and desired speed, promoting acceleration, and the third term measures the gap between the actual and desired distance, promoting braking. The desired vehicle distance is defined as follows:
$\tilde{s} = s_0 + \max \left( 0, \; vT + \frac{v \, \Delta v}{2 \sqrt{a_{max} b}} \right)$
Table 1 lists the common parameter settings of the IDM.
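The two equations combine into a single acceleration update; the sketch below uses illustrative parameter values (the paper's actual values are in Table 1, which is not reproduced here):

```python
import numpy as np

def idm_accel(v, v_des, s, dv, a_max=3.0, b=1.5, beta=4.0, s0=2.0, T=1.5):
    """IDM acceleration for the ego car.

    v: EC speed, v_des: desired speed, s: gap to the front car,
    dv: speed difference v_EC - v_FC. Parameter values are illustrative."""
    # desired gap: standstill gap + headway term + dynamic closing term
    s_des = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v_des) ** beta - (s_des / s) ** 2)
```

On a free road (large gap, below desired speed) the model accelerates; when closing fast on a nearby front car, the squared gap term dominates and it brakes hard.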

2.4.2. Driving Strategies

Driving strategies are interaction rules with other road users that use mathematical formulas to express the idea of keeping a safe distance from other vehicles. The responsibility-sensitive safety (RSS) model was proposed to ensure absolute safety [38]. It is defined as follows:
$s_{rss} = v \rho + \frac{1}{2} a_{max} \rho^{2} + \frac{(v + \rho a_{max})^{2}}{2 a_{min,brake}} - \frac{v_{FC}^{2}}{2 a_{max}}$
where $s_{rss}$ is the safety distance, v the EC speed, $v_{FC}$ the FC speed, $\rho$ the response time, and $a_{min,brake}$ the minimal braking deceleration until standstill. Table 2 lists the RSS parameter settings.
As a defensive driving strategy, RSS assumes the EC accelerates at the maximal acceleration during the reaction time after detecting FC braking and then decelerates at the minimal braking deceleration. When facing a dangerous situation, defensive driving actively yields the right of way to avoid conflict.
A negotiated driving strategy instead contests the FC's right of way by adjusting the safety distance in the car-following state [39]. The new safety gap is as follows:
$s_n = v \rho + \frac{v^{2}}{2 a_{brake}} - \frac{v_{FC}^{2}}{2 a_{max}}$
The formula removes the unreasonable acceleration term during the reaction and redefines the braking deceleration as follows:
$a_{brake} = a_{min,brake} + \frac{v}{v_{max}} \left( a_{max} - a_{min,brake} \right)$
To evaluate these two strategies, we embedded the two safety distances into the IDM by substituting $s_{rss}$ and $s_n$ for $\tilde{s}$.
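Both gaps are closed-form and easy to compare numerically (the parameter values below are illustrative assumptions, not the paper's Table 2):

```python
def rss_distance(v, v_fc, rho=0.5, a_max=3.0, a_min_brake=4.0):
    """RSS safe following distance for EC speed v and FC speed v_fc."""
    return (v * rho + 0.5 * a_max * rho ** 2
            + (v + rho * a_max) ** 2 / (2.0 * a_min_brake)
            - v_fc ** 2 / (2.0 * a_max))

def negotiated_distance(v, v_fc, rho=0.5, a_max=3.0,
                        a_min_brake=4.0, v_max=33.0):
    """Negotiated-strategy gap: no acceleration term during the reaction,
    and a speed-dependent braking deceleration a_brake."""
    a_brake = a_min_brake + v / v_max * (a_max - a_min_brake)
    return v * rho + v ** 2 / (2.0 * a_brake) - v_fc ** 2 / (2.0 * a_max)
```

As expected, both gaps grow when the front car is slower than the ego car, since less of the FC's kinetic energy offsets the required stopping distance.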

2.4.3. Simulation Scheme

The goal of scenario generation is to obtain scene sequences through interaction: the acceleration calculation and sequence update are repeated until the vehicle passes the test or crashes. The driver model does not sample a new maneuver at every inference step; once the driver decides to change lanes, the maneuver label remains unchanged for 3 s. The inference algorithm for the SVs is summarized in Algorithm 1, and the scenario generation process based on it is shown in Figure 4.
As shown in Figure 5, we first initialized five scenarios of different complexity levels with different vehicle numbers and locations. For each scenario, the basic configuration set all SVs as aggressive driver models or all as conservative driver models. Furthermore, the aggressive driver positions were set according to odd–even car numbers to examine the impact of relative position. In addition, the original driver model was used as a control group for all scenarios. In total, there were 20 experimental groups and 5 control groups. The AV controlled by the IDM was tested 1000 times in each group, using the time to collision (TTC) as the safety indicator [40]. TTC is defined as the time to collision between the EC and the FC on the current road. The situation is considered dangerous if the EC speed is greater than the FC speed and the relative distance is small.
$\mathrm{TTC}(t) = \frac{\left| x_{FC}(t) - x_{EC}(t) \right| - L}{v_{EC}(t) - v_{FC}(t)}, \quad v_{EC}(t) > v_{FC}(t)$
where L is the length of the car, $x_{FC}(t)$ and $x_{EC}(t)$ are the positions of the FC and EC, and $v_{FC}(t)$ and $v_{EC}(t)$ are their velocities.
Algorithm 1: Inference algorithm for SV.
  • Initialize: I; T; t = 0; i = 0
  • while t++ < T and no collision do
  •     if m_lat = lane change and i++ < 15 then
  •         Sample maneuver m_lon
  •         a ← output of SV model
  •         update I by a
  •     end if
  •     if m_lat = keep lane then
  •         Sample maneuver (m_lat, m_lon)
  •         a ← output of SV model
  •         update I by a
  •     end if
  • end while
TTC is aimed at emergency situations where the distance between vehicles is relatively close and where there is a large speed difference, such as the sudden braking of the vehicle in front, which is a dangerous and urgent situation.
To evaluate the driving strategies, we observed the following distance and safety indicator changes during the test in randomly selected car-following and cut-in scenarios. Another safety indicator, time headway (THW), is more appropriate here because these strategies enlarge the safety distance [41]. THW is defined as the time difference between the EC and FC passing the same place, calculated by dividing the distance between the two vehicles by the EC speed.
$\mathrm{THW}(t) = \frac{\left| x_{FC}(t) - x_{EC}(t) \right|}{v_{EC}(t)}$
THW mainly raises an alarm when the gap between vehicles is small and can help drivers develop the habit of maintaining a proper following distance; we regard it as indicating a dangerous but not urgent situation. The danger thresholds were defined as a TTC of less than 5 s and a THW of less than 2 s [42,43].
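The two indicators and their thresholds translate directly into code (the car length value is an illustrative assumption, and combining the two thresholds disjunctively is our reading):

```python
def ttc(x_fc, x_ec, v_fc, v_ec, length=4.5):
    """Time to collision; defined only while the EC closes in on the FC."""
    if v_ec <= v_fc:
        return float("inf")
    return (abs(x_fc - x_ec) - length) / (v_ec - v_fc)

def thw(x_fc, x_ec, v_ec):
    """Time headway: bumper gap divided by EC speed."""
    return abs(x_fc - x_ec) / v_ec

def is_dangerous(ttc_val, thw_val):
    """Flag danger if either indicator breaches its threshold
    (TTC < 5 s, THW < 2 s)."""
    return ttc_val < 5.0 or thw_val < 2.0
```

For instance, an EC at 20 m/s closing on an FC at 10 m/s from 50 m ahead yields a TTC well under 5 s and is flagged as dangerous.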

3. Results and Analysis

In this section, we discuss and analyze the results of scenario generation with heterogeneous driver models and verify two driving strategies with our scenarios. The verification results prove the validity of our approach.

3.1. Implementation of Scenario Generation

Table 3 shows the evaluation results of our models. Negative log likelihood is the metric for endpoint distribution prediction. Our model outperformed the baseline CS-LSTM [28] owing to the mean-squared-error loss on sampled endpoints during training [44]. CS-LSTM uses convolutional social pooling and generates a unimodal output distribution. We additionally report the cross entropy of the maneuver probabilities; a cross entropy below 0.05 indicates good classification performance.
Figure 6 exemplifies some generated scenarios for a simple situation. The generated trajectories are smooth, and lane changes can occur at any possible moment. If a lane change is not completed owing to time constraints, the test time can be extended as needed. Sampling as widely as possible enables coverage-oriented test automation. Furthermore, we can accelerate the evaluation in two ways. One is the manual control of dangerous maneuvers; for example, changing lanes directly at the beginning is dangerous, as shown in Figure 6, so in Algorithm 1 the initial lateral maneuver can be set to a lane change. The other is to use special sampling methods, such as importance sampling, when sampling endpoints. Combining the two approaches achieves spatially oriented test automation. Our method is thus able to generate realistic and plausible scenarios.
According to the experimental design, we tested an AV with the IDM in 25 groups. Table 4 shows the percentages of challenging scenarios. Compared with the original driver model, the heterogeneous driver models change the number of challenging scenarios in the scenario space. Setting all SVs to be aggressive brings more dangerous scenarios, and in some situations, placing conservative vehicles at the front of the traffic also increases their number. The column-wise comparison shows that an increase in the number of vehicles is another reason for the increase in dangerous situations.
The results indicate that we could generate more dangerous and complex scenarios by adjusting the location and number of aggressive drivers.

3.2. Verification

To demonstrate that the scenarios generated with our driver model are usable, we applied them to evaluate driving strategies. Given a car-following scenario with an initial THW of less than 2 s, the AV with the original IDM remained continuously dangerous during the test time owing to the close following distance. Figure 7 shows that both driving strategies converge the THW from danger to safety in car-following scenarios, but the convergence value of RSS is larger. This is attributable to the more reasonable safety distance of the negotiated strategy.
As shown in Figure 8, if a vehicle suddenly cuts in, both strategies respond in time and brake at a safe distance. The convergence process of the THW is similar to that of car following. In the deceleration process, the slope of the speed curve indicates that the negotiated strategy has a shorter deceleration time and smoother braking.
Table 5 summarizes the value ranges of the THW and safety distance. RSS can guarantee absolute safety, but the negotiated strategy achieves higher traffic efficiency. These verification results support the validity of our driver model and scenario generation method.

4. Conclusions

In this paper, we proposed a scenario generation method considering driver heterogeneity. This method increases the number of challenging events in the scenario space by changing the driver model style of the environmental vehicles. Our model quantifies different drivers' preferences by learning the probability of their behavior. Simulations were implemented in multiple initialization scenarios to demonstrate the role of heterogeneity. The results show that adjusting the number and location of aggressive drivers could lead to more dangerous scenarios and thus improve the efficiency of testing, while the method ensures realism and diversity. We then used our scenarios to evaluate the defensive (RSS) strategy and the negotiated strategy. The evaluation results show that the defensive strategy was safer and the negotiated strategy was more efficient, which verifies the effectiveness of our approach; the choice of driving strategy depends on the trade-off between safety and efficiency. Cumulatively, our approach could accelerate the testing of AVs. In future work, we could delineate more detailed driver styles or consider heterogeneity from other perspectives; as driver models become more diverse, the generated scenarios will become more complex and more dangerous.

Author Contributions

Conceptualization, L.G. and R.Z.; methodology, L.G.; validation, L.G. and R.Z.; investigation, L.G.; writing—original draft preparation, L.G.; writing—review and editing, L.G., R.Z. and K.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Key-Area Research and Development Program of Guangdong Province (2020B0909050003), and the Science and Technology Innovation Committee of Shenzhen (CJGJZD20200617102801005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article, and further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank the reviewers and editors for improving this manuscript, the Key-Area Research and Development Program of Guangdong Province (2020B0909050003), and the Science and Technology Innovation Committee of Shenzhen (CJGJZD20200617102801005).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, S.; Capretz, L.F. An analysis of testing scenarios for automated driving systems. In Proceedings of the 2021 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), Honolulu, HI, USA, 9–12 March 2021; pp. 622–629. [Google Scholar]
  2. Li, L.; Huang, W.-L.; Liu, Y.; Zheng, N.-N.; Wang, F.-Y. Intelligence testing for autonomous vehicles: A new approach. IEEE Trans. Intell. Veh. 2016, 1, 158–166. [Google Scholar] [CrossRef]
  3. Li, L.; Wang, X.; Wang, K.; Lin, Y.; Xin, J.; Chen, L.; Xu, L.; Tian, B.; Ai, Y.; Wang, J.; et al. Parallel testing of vehicle intelligence via virtual-real interaction. Sci. Robot. 2019, 4, eaaw4106. [Google Scholar] [CrossRef]
  4. Ma, Y.; Sun, C.; Chen, J.; Cao, D.; Xiong, L. Verification and validation methods for decision-making and planning of automated vehicles: A review. IEEE Trans. Intell. Veh. 2022, 7, 480–498. [Google Scholar] [CrossRef]
  5. Wang, F.-Y.; Song, R.; Zhou, R.; Wang, X.; Chen, L.; Li, L.; Zeng, L.; Zhou, J.; Teng, S.; Zhu, X. Verification and validation of intelligent vehicles: Objectives and efforts from china. IEEE Trans. Intell. Veh. 2022, 7, 164–169. [Google Scholar] [CrossRef]
  6. Zhou, R.; Liu, Y.; Zhang, K.; Yang, O. Genetic algorithm-based challenging scenarios generation for autonomous vehicle testing. IEEE J. Radio Freq. Identif. 2022, 6, 928–933. [Google Scholar] [CrossRef]
  7. Li, L.; Zheng, N.; Wang, F.-Y. A theoretical foundation of intelligence testing and its application for intelligent vehicles. IEEE Trans. Intell. Transp. Syst. 2020, 22, 6297–6306. [Google Scholar] [CrossRef]
  8. Zhao, D.; Lam, H.; Peng, H.; Bao, S.; LeBlanc, D.J.; Nobukawa, K.; Pan, C.S. Accelerated evaluation of automated vehicles safety in lane-change scenarios based on importance sampling techniques. IEEE Trans. Intell. Transp. Syst. 2016, 18, 595–607. [Google Scholar] [CrossRef] [PubMed]
  9. Huang, Z.; Lam, H.; LeBlanc, D.J.; Zhao, D. Accelerated evaluation of automated vehicles using piecewise mixture models. IEEE Trans. Intell. Transp. Syst. 2017, 19, 2845–2855. [Google Scholar] [CrossRef]
  10. Althoff, M.; Lutz, S. Automatic generation of safety-critical test scenarios for collision avoidance of road vehicles. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1326–1333. [Google Scholar]
  11. Sun, J.; Zhang, H.; Zhou, H.; Yu, R.; Tian, Y. Scenario-based test automation for highly automated vehicles: A review and paving the way for systematic safety assurance. IEEE Trans. Intell. Transp. Syst. 2021, 23, 14088–14103.
  12. Zhang, X.; Li, F.; Wu, X. Csg: Critical scenario generation from real traffic accidents. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 1330–1336.
  13. Tuncali, C.E.; Fainekos, G.; Ito, H.; Kapinski, J. Sim-atav: Simulation-based adversarial testing framework for autonomous vehicles. In Proceedings of the 21st International Conference on Hybrid Systems: Computation and Control (Part of CPS Week), Porto, Portugal, 11–13 April 2018; pp. 283–284.
  14. Najm, W.G.; Toma, S.; Brewer, J. Depiction of Priority Light-Vehicle Pre-Crash Scenarios for Safety Applications Based on Vehicle-to-Vehicle Communications; Tech Report; National Highway Traffic Safety Administration: Washington, DC, USA, 2013.
  15. Antić, B.; Čabarkapa, M.; Čubranić-Dobrodolac, M.; Čičević, S. The Influence of Aggressive Driving Behavior and Impulsiveness on Traffic Accidents. 2018. Available online: https://rosap.ntl.bts.gov/view/dot/36298 (accessed on 5 April 2023).
  16. Bıçaksız, P.; Özkan, T. Impulsivity and driver behaviors, offences and accident involvement: A systematic review. Transp. Res. Part F Traffic Psychol. Behav. 2016, 38, 194–223.
  17. Berdoulat, E.; Vavassori, D.; Sastre, M.T.M. Driving anger, emotional and instrumental aggressiveness, and impulsiveness in the prediction of aggressive and transgressive driving. Accid. Anal. Prev. 2013, 50, 758–767.
  18. Ge, J.; Xu, H.; Zhang, J.; Zhang, Y.; Yao, D.; Li, L. Heterogeneous driver modeling and corner scenarios sampling for automated vehicles testing. J. Adv. Transp. 2022, 2022, 8655514.
  19. Arslan, G.; Marden, J.R.; Shamma, J.S. Autonomous vehicle-target assignment: A game-theoretical formulation. J. Dyn. Syst. Meas. Control 2007, 129, 584–596.
  20. Bhattacharyya, R.; Wulfe, B.; Phillips, D.J.; Kuefler, A.; Morton, J.; Senanayake, R.; Kochenderfer, M.J. Modeling human driving behavior through generative adversarial imitation learning. IEEE Trans. Intell. Transp. Syst. 2022, 24, 2874–2887.
  21. Aksjonov, A.; Nedoma, P.; Vodovozov, V.; Petlenkov, E.; Herrmann, M. A novel driver performance model based on machine learning. IFAC-PapersOnLine 2018, 51, 267–272.
  22. Xing, Y.; Lv, C.; Cao, D. Personalized vehicle trajectory prediction based on joint time-series modeling for connected vehicles. IEEE Trans. Veh. Technol. 2020, 69, 1341–1352.
  23. Zhao, H.; Gao, J.; Lan, T.; Sun, C.; Sapp, B.; Varadarajan, B.; Shen, Y.; Shen, Y.; Schmid, C.; Li, C.; et al. Tnt: Target-driven trajectory prediction. In Proceedings of the 5th Annual Conference on Robot Learning, London, UK, 8–11 November 2021; pp. 895–904.
  24. Tian, W.; Wang, S.; Wang, Z.; Wu, M.; Zhou, S.; Bi, X. Multi-modal vehicle trajectory prediction by collaborative learning of lane orientation, vehicle interaction, and intention. Sensors 2022, 22, 4295.
  25. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social gan: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2255–2264.
  26. Fang, L.; Jiang, Q.; Shi, J.; Zhou, B. Tpnet: Trajectory proposal network for motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6797–6806.
  27. Zhang, Y.; Sun, H.; Zhou, J.; Pan, J.; Hu, J.; Miao, J. Optimal vehicle path planning using quadratic optimization for baidu apollo open platform. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 978–984.
  28. Deo, N.; Trivedi, M.M. Convolutional social pooling for vehicle trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1468–1476.
  29. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social lstm: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 961–971.
  30. Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highd dataset: A drone dataset of naturalistic vehicle trajectories on german highways for validation of highly automated driving systems. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125.
  31. Bellmund, J.L.; Gärdenfors, P.; Moser, E.I.; Doeller, C.F. Navigating cognition: Spatial codes for human thinking. Science 2018, 362, eaat6766.
  32. Deo, N.; Trivedi, M.M. Multi-modal trajectory prediction of surrounding vehicles with maneuver based lstms. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV), Changshu, China, 26–30 June 2018; pp. 1179–1184.
  33. Malhotra, P.; Ramakrishnan, A.; Anand, G.; Vig, L.; Agarwal, P.; Shroff, G. Lstm-based encoder-decoder for multi-sensor anomaly detection. arXiv 2016, arXiv:1607.00148.
  34. Gardner, M.W.; Dorling, S. Artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636.
  35. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2980–2988.
  36. Mehta, P.; Bukov, M.; Wang, C.-H.; Day, A.G.; Richardson, C.; Fisher, C.K.; Schwab, D.J. A high-bias, low-variance introduction to machine learning for physicists. Phys. Rep. 2019, 810, 1–124.
  37. Treiber, M.; Hennecke, A.; Helbing, D. Congested traffic states in empirical observations and microscopic simulations. Phys. Rev. E 2000, 62, 1805.
  38. Shalev-Shwartz, S.; Shammah, S.; Shashua, A. On a formal model of safe and scalable self-driving cars. arXiv 2017, arXiv:1708.06374.
  39. Zhao, C.; Li, L.; Pei, X.; Li, Z.; Wang, F.-Y.; Wu, X. A comparative study of state-of-the-art driving strategies for autonomous vehicles. Accid. Anal. Prev. 2021, 150, 105937.
  40. Minderhoud, M.M.; Bovy, P.H. Extended time-to-collision measures for road traffic safety assessment. Accid. Anal. Prev. 2001, 33, 89–97.
  41. Hayward, J.C. Near miss determination through use of a scale of danger. Highw. Res. Rec. 1972, 384, 24–34.
  42. Borsos, A.; Farah, H.; Laureshyn, A.; Hagenzieker, M. Are collision and crossing course surrogate safety indicators transferable? A probability based approach using extreme value theory. Accid. Anal. Prev. 2020, 143, 105517.
  43. Shoaeb, A.; El-Badawy, S.; Shawly, S.; Shahdah, U.E. Time headway distributions for two-lane two-way roads: Case study from Dakahliya Governorate, Egypt. Innov. Infrastruct. Solut. 2021, 6, 1–18.
  44. Dendorfer, P.; Osep, A.; Leal-Taixé, L. Goal-gan: Multimodal trajectory prediction based on goal position estimation. In Proceedings of the Asian Conference on Computer Vision, Kyoto, Japan, 30 November–4 December 2020.
Figure 1. An overview of the work in this paper.
Figure 2. Feature visualization for the three driver classes.
Figure 3. The structure of the proposed driver model.
Figure 4. Flowchart of scenario generation.
Figure 5. Initialization of five scenarios.
Figure 6. Generated scenario examples.
Figure 7. Safety and efficiency evaluation in car-following scenarios.
Figure 8. Safety and efficiency evaluation in cut-in scenarios.
Table 1. Parameters for the IDM model.

| Parameter | Description | Value |
|---|---|---|
| ṽ | Desired speed | 40 m/s |
| β | Acceleration exponent | 4 |
| s₀ | Minimal gap | 2 m |
| T | Desired time headway | 2 s |
| a_max | Maximal acceleration | 6 m/s² |
| b | Comfortable deceleration | 3 m/s² |
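With the Table 1 parameters, the IDM car-following rule of Treiber et al. [37] can be sketched as below. This is an illustrative implementation of the standard published model, not the authors' simulation code; the function name and argument conventions are our own.

```python
import math

def idm_acceleration(v, gap, dv,
                     v0=40.0, beta=4, s0=2.0, T=2.0,
                     a_max=6.0, b=3.0):
    """IDM acceleration with Table 1 defaults.

    v   : ego speed (m/s)
    gap : bumper-to-bumper distance to the leader (m)
    dv  : approach rate, v_ego - v_leader (m/s)
    """
    # Desired dynamic gap: jam distance + time-headway term + braking term.
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    # Free-flow term minus interaction term, scaled by maximal acceleration.
    return a_max * (1.0 - (v / v0) ** beta - (s_star / gap) ** 2)
```

On a free road the output approaches a_max and falls to zero as v approaches the desired speed ṽ; when the gap closes faster than the desired gap s*, the interaction term dominates and the model brakes.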
Table 2. Parameters for driving strategies.

| Parameter | Description | Value |
|---|---|---|
| ρ | Response time | 2/3 s |
| a_max | Maximal acceleration | 6 m/s² |
| a_min,brake | Minimal deceleration | 3 m/s² |
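The RSS strategy parameters in Table 2 feed the minimum-safe-gap formula of Shalev-Shwartz et al. [38]: during the response time ρ the rear vehicle may accelerate at a_max, after which it brakes at least at a_min,brake while the front vehicle brakes at most at some b_max. A sketch with the Table 2 values (b_max is our assumption, set equal to the comfortable deceleration of 3 m/s²):

```python
def rss_min_gap(v_rear, v_front,
                rho=2.0 / 3.0, a_max=6.0, b_min=3.0, b_max=3.0):
    """Minimum longitudinal gap deemed safe by the RSS model [38].

    v_rear, v_front : speeds of the rear and front vehicles (m/s)
    """
    d = (v_rear * rho                                 # travel during response
         + 0.5 * a_max * rho ** 2                     # worst-case acceleration
         + (v_rear + rho * a_max) ** 2 / (2.0 * b_min)  # rear braking distance
         - v_front ** 2 / (2.0 * b_max))              # front braking distance
    return max(d, 0.0)  # RSS clamps negative distances to zero
```

For two vehicles at 20 m/s this yields a 44 m safe gap, consistent in magnitude with the >40 m distances kept by the RSS-based strategies in Table 5.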
Table 3. Evaluation results on our models.

| Model | NLL | Cross Entropy |
|---|---|---|
| Aggressive | 2.43 | 0.018 |
| Conservative | 2.56 | 0.025 |
| Original | 2.17 | 0.021 |
| CS-LSTM | 3.30 | - |
Table 4. Percentage of challenging scenarios.

| Initial Scenario | All SVs Aggressive | All SVs Conservative | Front SVs Aggressive | Back SVs Aggressive | Original Driver Model |
|---|---|---|---|---|---|
| 1 | 7% | 0% | 0% | 5% | 0% |
| 2 | 1% | 0% | 0% | 5% | 0% |
| 3 | 11% | 0% | 1% | 6% | 1% |
| 4 | 10% | 1% | 0% | 13% | 2% |
| 5 | 9% | 1% | 0% | 6% | 2% |
Table 5. Summary of driving strategies evaluation.

| Metric | IDM | IDM + RSS | IDM + Negotiated Strategy |
|---|---|---|---|
| THW | 0–1 s | 0–8 s | 0–5 s |
| Safety distance | 2–30 m | >50 m | >40 m |
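The THW values in Table 5 are the surrogate safety indicators discussed in [40,41]: time headway (gap divided by the follower's speed) and time-to-collision (gap divided by the closing speed, defined only while closing). A minimal per-sample computation, with our own naming conventions:

```python
def surrogate_metrics(gap, v_follow, v_lead):
    """Return (THW, TTC) for one follower-leader sample.

    gap      : bumper-to-bumper distance (m)
    v_follow : follower speed (m/s)
    v_lead   : leader speed (m/s)
    """
    thw = gap / v_follow if v_follow > 0 else float("inf")
    closing = v_follow - v_lead
    # TTC is undefined (infinite) when the follower is not closing in.
    ttc = gap / closing if closing > 0 else float("inf")
    return thw, ttc
```

By these definitions, the 0–1 s THW range of plain IDM in Table 5 corresponds to markedly tighter following than the RSS and negotiated strategies.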

Gao, L.; Zhou, R.; Zhang, K. Scenario Generation for Autonomous Vehicles with Deep-Learning-Based Heterogeneous Driver Models: Implementation and Verification. Sensors 2023, 23, 4570. https://doi.org/10.3390/s23094570
