1. Introduction
Transport systems, whether for goods or passengers, regardless of their type (railways, roads, ships, air), must meet certain safety conditions. According to [1], the key elements of the road transport system that need to be focused on to enhance safety are (a) the infrastructure, (b) the users, (c) the vehicles, and (d) the facilities. In the last decade, the problem of implementing autonomous vehicles in road transport systems has been raised. Like classic vehicles, these must meet the safety conditions the transport systems impose. In addition, these vehicles face further challenges related to replicating the operation of conventional, human-driven vehicles.
Autonomous vehicles (AVs) are designed with advanced algorithms and sensor systems to behave predictably in various driving situations. On the other hand, human drivers have varied and unique driving styles influenced by individual experiences and habits [2]. This difference can produce uncertainty in scenarios where human drivers and autonomous vehicles interact, especially in safety-critical situations, such as lane changing. This uncertainty requires a deeper approach to personal driving style identification and to exploring how such individual skills can be integrated into the operational logic of AVs.
Transferring personal skills to autonomous vehicles would likely involve identifying the specific skills relevant to the task, designing algorithms that can replicate those skills, and testing and refining those algorithms in various scenarios. In essence, transferring personal skills to autonomous vehicles demands meticulous research, blending human intuition with machine precision. This is a complex and ongoing research area in autonomous driving [3].
Personalized driving styles significantly impact the performance of autonomous vehicles' algorithms, notably in vehicle control and decision-making. A recent study [4] suggests that integrating personalization factors, specifically driving characteristics and driving styles, into algorithmic models effectively captures the complexities of individual human behavior. Consequently, it is recommended to use a personalized strategy to create autonomous driving styles rather than a general approach. Machine learning modules appear as valuable tools in this context for developing a decision-making model that is adaptable and personalized to individual driving preferences [5]. Such modules enable the autonomous car to modify its operational settings, matching them more closely with the passenger's individual driving style preferences. According to [6], personalized driving styles, classified as cautious, average, or aggressive using backpropagation neural networks, bring new challenges and opportunities for autonomous driving system calibration and adaptation.
The connection between human–machine paradigms and personalized driving styles constitutes a critical frontier in autonomous driving research, requiring complex algorithms and models that not only adapt to human behaviors but also demonstrate efficiency and safety during navigation. Within this context, [7] presents an approach to the distribution of control authority. Game theory delineates a strategic framework for control authority allocation between human operators and automated systems. Based on the same approach, Chandra et al. demonstrated safer navigation using a risk-aware planning method [8]. In this case, the challenge is distinguishing these personalized driving styles and ensuring that the AV matches them without compromising safety and passenger comfort.
This paper presents an approach to detecting personalized driving styles in safety-critical scenarios using driver-in-the-loop simulations. The emphasis is on developing more personalized and safer autonomous driving systems that incorporate drivers' individual preferences. Furthermore, it explores the possibility of copying and transferring a driver's driving style to an AV using a neural network learning system. By investigating the potential of driver-in-the-loop simulations in this context, this research contributes to recognizing individual driving styles for AVs. It provides the basis for further exploration of the broader implications of human–machine interaction on road safety and passenger comfort.
This paper is organized as follows. Section 2 presents the related work that serves as the basis for the present study. Section 3 describes the experimental setup and methodology. Section 4 presents the results and is followed by the discussion. The paper ends with a conclusion and a perspective on upcoming research.
2. Related Work
In recent years, there has been a notable focus on transforming the driver's role in the context of autonomous vehicles. The aim is to shift the driver's status from an active participant to a completely passive passenger. According to the SAE definition [9], the levels of autonomous driving range from level 0 (fully human-driven) to level 5 (fully autonomous), with most passenger vehicles currently at levels 1–3. The development of level 4 and 5 vehicles requires the successful implementation of algorithms [10,11] for navigation, path tracking, and control, as well as robustness and self-diagnosis capabilities to handle critical situations. However, a significant challenge in this field is to make autonomous vehicles behave more like human drivers, particularly in safety-critical situations.
Numerous variables, including the driver's personality, driving history, comfort level with technology, and even distractions, have a significant impact on one's driving style [12]. Therefore, in safety-critical scenarios, where quick and precise decision-making is vital, there is a delicate balance between maintaining behavior personalization and ensuring optimal safety [13].
Identifying and adapting to personalized driving styles becomes a cornerstone in increasing autonomous driving capabilities. Researchers are investigating ways to transmit personal driving styles to autonomous vehicles and enable them to make human-like decisions without needing real tests [14].
Thus, a personalized AV adapts to the user's driving behavior by constructing driver models from observations of manual driving styles and by designing vehicle controllers that can be parameterized to specific driving styles using these models. The driver model can determine how a human would drive, and it is designed based on users' actual driving behavior in a series of typical traffic scenarios. By incorporating a driver's driving behavior and adjusting the control system accordingly, a personalized AV can offer a system that is tailored to each individual driver and gains that driver's trust.
Driving behavior research based on driving simulators eliminates road test difficulties, allowing the opportunity to test different driving situations in a controlled and safe environment [15]. The driving simulator allows for an understanding of driving behavior due to the possibility of realistic interaction in a safe operating environment, generating feedback [16]. Several researchers have used this method to implement and verify the driving styles of autonomous vehicles and their reactions in special cases.
In the driving simulator experiment described in [17], the warning type shown in the head-up display (HUD) varied in its strategy (attention- or reaction-oriented) and specificity (generic or specific) over four warning groups and a control group without a warning. The study investigated how to warn drivers visually to prevent accidents in various safety-critical situations, with collision frequencies, driving behavior, and subjective evaluations of situation criticality and warning understandability being measured. The results showed that drivers rated the warning concepts as more understandable in the second trial than in the first, but the understandability was relatively high both times. The analysis of the warning understandability showed a significant interaction effect of the factors scenario and warning type, with each of the three factors themselves having a primary effect on the warning understandability.
In [3], a framework for generating safety-critical driving scenarios is proposed: a Conditional Multiple Trajectory Synthesizer (CMTS), which is able to test autonomous driving algorithms in near-miss scenarios that are rarely found in off-the-shelf datasets. A CMTS connects safe and collision driving data by encoding their distributions into a low-dimensional latent space, embedding road information, and synthesizing risk scenarios from the interpolated intermediate distribution. In other words, a CMTS uses a generative model to represent the distributions of safe and collision data and then combines them to generate safety-critical scenarios.
On the other hand, some authors have researched passengers' acceptance of autonomous vehicles. The study in [18] suggests that a personalized AV is more reliable and familiar, which can increase a user's willingness to trust the system. The positive effects of personalized AVs may facilitate user trust, production, and market penetration. Therefore, a personalized AV design should allow a system to be personalized to each driver and gain the driver's trust. A personalized AV can provide a more socially acceptable and trustworthy system by incorporating a driver's driving behavior and adjusting the control system accordingly.
According to [19], future AVs should be designed to allow users to indicate and adjust the vehicle's driving behavior to their own preferences. This would help to maximize comfort in the traveling experience and ensure that the driving style of the autonomous vehicle matches the user's preferences. The study also highlights the importance of considering individual differences in driving style when designing autonomous vehicles. Finally, the study suggests that further research is needed to explore the relationship between driving style and user experience in autonomous vehicles.
Ref. [20] analyzed driving style preferences for conditional automated driving, considering the participants' driving styles. The study also analyzed how experienced system characteristics impact driving style preferences in automated driving. The study used a new approach to analyze user preferences by calculating the difference between the experienced automated driving style and the participants' driving style.
According to [21], the existing algorithms for safety-critical scenario generation are divided into three categories: data-driven generation, adversarial generation, and knowledge-based generation. Simulation platforms and packages are valuable tools for scenario generation. They allow users to create and test scenarios in a virtual environment, which can be more efficient and cost-effective than real-world testing. Additionally, some simulation platforms provide API support for specific programming languages, which allows users to run batches of scenarios. In Ref. [21], the authors mention five main challenges of current work in safety-critical scenario generation:
- Fidelity—the accuracy and realism of the generated scenarios;
- Efficiency—the need to increase the density of safety-critical scenarios while considering computational efficiency;
- Diversity—the need to generate as many different safety-critical scenarios as possible;
- Transferability—the ability to generate scenarios that can be used for different autonomous vehicles;
- Controllability—the ability to generate specific scenarios rather than random ones.
A comprehensive summary of the topics, methodologies, contributions, and gaps of previous studies can be found in Table 1.
3. Methodology
The proposed driver-in-the-loop simulation analyzes the driver's behavior during a safety-critical scenario and transfers it to an autonomous vehicle. The selected scenario is a double lane change maneuver to overtake a stationary obstacle at a relatively high speed (20 m/s) while driving on a two-lane straight road. To capture the driver's behavior, a realistic driving simulator was built on a six-legged Stewart platform, in which the user can interact with the real-time simulation of an electric vehicle using a steering wheel and pedals. The user was instructed to perform the overtaking in their own style, and the lateral accelerations during the maneuver were recorded.
The same vehicle model was then configured as an autonomous vehicle performing a similar maneuver under the same conditions. Obviously, the paths of the user-driven vehicle and the autonomous one are different, and we decided to use the lateral accelerations during the overtaking process as the criterion for comparing the two. The tuning parameters of the lateral and longitudinal controllers were modified to obtain different lateral accelerations of the autonomous vehicle. Although the same reference path is imposed for the autonomous vehicle, different tuning parameters will lead to different paths and lateral accelerations during the maneuver. The main task of this research is to find the combination of the tuning parameters of the two controllers that matches the lateral accelerations of the driver in the same double lane change overtaking action.
3.1. The Experimental Setup
The experiments are performed on a driver-in-the-loop simulator with a vehicle model running on real-time hardware. Several software modules are used to design the simulation environment, the vehicle model, and the autonomous vehicle.
3.1.1. Hardware
The driver-in-the-loop simulator architecture (Figure 1) consists of the MOOG Motion System 6DOF 2000E hexapod motion platform (Stewart platform), a driver's seat mounted on the motion platform with a seatbelt, a Logitech G29 steering wheel with pedals, three high-definition monitors, a Speedgoat real-time computing platform, one display for tracking the real-time operation, and one computer with a high-performance graphics card.
3.1.2. Software—Driver Model and Simulation Scenarios
Several software programs were involved in developing the simulation scenarios (Figure 2), such as RoadRunner 2022a, Epic Games Unreal Engine 4.26, a C++ compiler (Visual Studio 2019), and the Driving Scenario Designer App from MATLAB 2022a and Simulink 2022a. The simulation environment was created in RoadRunner 2022a (Figure 2a), and the simulation's visualization is based on the Epic Games Unreal Engine 4.26 [22] (Figure 2b,d).
The simulation scene, designed in RoadRunner 2022a, is imported into Driving Scenario Designer, a MATLAB 2022a application that allows building the scenario in which the driven and the autonomous vehicles are tested. In this scene, vehicles, their paths, and their speeds are introduced (Figure 2c). Also, sensors like cameras, Lidar, and IMU can be mounted on the tested vehicle.
MathWorks MATLAB/Simulink 2022a, a high-performance software package, was chosen to create the vehicle model and calculate the motion platform's displacements to reproduce the driving sensations. This software is also used to determine the vehicle's current position, which will be displayed in the simulation environment (Figure 3).
The vehicle model, presented in Figure 3, was developed in the Virtual Vehicle Composer application and contains the battery model, the electric motor, transmission, steering mechanism, suspension, brakes, wheels, and controllers. Figure 3 shows the following components: the interface with the steering wheel and pedals, the electric vehicle, the controllers, the simulation environment parameters, the visualization module, the sensors mounted on the vehicle, and the motion cueing module that transfers the required displacements to the hexapod platform.
The trajectory of the autonomous vehicle for overtaking the stationary obstacle was defined by waypoints as the reference trajectory to be followed.
3.1.3. The Connection between Hardware and Software
The motion platform offers fast response and realistic force feedback [23,24]. The data transfer between the vehicle model and the hexapod system is based on commands in the form of user datagram protocol (UDP) packets sent at a frequency of at least 30 Hz, which contain the type of the command and its parameters (in total, eight single-precision real numbers). The vehicle dynamic model was expanded with an additional module that includes the filtering and motion cueing algorithm (MCA). This allows us to take the calculated accelerations of the vehicle and convert them into displacements of the hexapod system in the three translational degrees of freedom (surge, sway, and heave) and the three rotational degrees of freedom (roll, pitch, and yaw) [25]. These displacements are commands that reach the hexapod platform, packaged as UDP commands, through a separate network. The efficiency of the MCA is highlighted by the fact that it does not allow the simulator user to perceive any undesirable motion [26]. The MCA contains high-pass filters, first-order low-pass filters, and a rate limiter implemented as a hyperbolic tangent function. More information about the motion platform's functionality and the MCA implementation is presented in previous research [27,28].
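As an illustration, the following minimal Python sketch shows how one such command, containing eight single-precision real numbers, could be packed and sent as a UDP datagram. The IP address, port, command code, and field ordering are assumptions made for illustration only and do not reflect the actual MOOG command protocol.

```python
import socket
import struct

# Minimal sketch of sending one motion command to the hexapod platform over UDP.
# The payload size (eight single-precision floats) follows the description above;
# the field ordering, command codes, IP address, and port are illustrative
# assumptions, not the MOOG protocol itself.
PLATFORM_IP = "192.168.1.10"   # assumed address of the motion platform
PLATFORM_PORT = 10991          # assumed UDP port

def send_motion_command(cmd_type, surge, sway, heave, roll, pitch, yaw, scale=1.0):
    """Pack eight single-precision floats and send them as one UDP datagram."""
    payload = struct.pack("<8f", cmd_type, surge, sway, heave, roll, pitch, yaw, scale)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(payload, (PLATFORM_IP, PLATFORM_PORT))
    sock.close()

# Example: one command computed by the motion cueing algorithm (sent at >= 30 Hz)
send_motion_command(1.0, 0.02, -0.01, 0.0, 0.001, 0.002, 0.0)
```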
To ensure the transfer of UDP packets to the hexapod platform and the real-time and seamless operation of the application with complicated models, it was decided to introduce the Speedgoat real-time hardware module [29]. Simulink Real-Time [30] and Speedgoat provide real-time computation performance and allow for easy integration of various MATLAB/Simulink models [10]. The vehicle dynamic model is compiled into an executable form and loaded into the memory of the Speedgoat system. The real-time operating system (a modified version of Linux) ensures that the simulation steps are executed smoothly with minimal deviation. This allows the system to accurately simulate the motion and behavior of the vehicle in real time.
3.2. Obtaining the Tuning Parameters
The selected critical scenario for obtaining the tuning parameters is the overtaking maneuver to avoid a static obstacle. The autonomous vehicle has the task of starting from a standstill, accelerating up to 20 m/s, and changing the traffic lane twice, thus avoiding a possible obstacle that appeared suddenly. In the simulated scenario, this obstacle is a stationary vehicle.
Stanley Controller
The control system is an essential part of the path planning process of the autonomous vehicle, and the three best-known vehicle control methods are pure pursuit, the Stanley controller, and model predictive control. The proposed study uses a Stanley path-following controller that steers the vehicle along a desired trajectory by combining lateral and longitudinal control.
The Stanley controller is a path tracking method used by Stanford University's DARPA Grand Challenge team [31]. Compared to the pure pursuit method, this method uses the front axle as its reference point and looks at the heading error to adjust the steering angle, making it a practical and steady method for vehicle path tracking [32]. The geometry of the Stanley controller is shown in Figure 4.
The required steering angle is calculated based on [32]:

$$\delta(t) = \psi(t) + \arctan\!\left(\frac{k\,e(t)}{v(t)}\right)$$

where $e(t)$ is the cross-track error, $k$ is the position gain, $v$ is the velocity, $\delta$ represents the angle of the front wheels with respect to the vehicle, and $\psi$ is the yaw angle (heading) of the vehicle with respect to the closest trajectory segment.
According to [32], the Stanley controller is asymptotically stable for any non-zero velocity and steering angles between 0 and $\delta_{\max}$, with $\delta_{\max} \in (0, \pi/2)$. This indicates the range of steering commands that the controller can safely handle. If the required steering angle value exceeds this range, the controller's behavior may not be guaranteed or performed optimally. Considering this range of steering commands, the steering law can be expressed as:

$$\delta(t) = \begin{cases} \psi(t) + \arctan\!\left(\dfrac{k\,e(t)}{v(t)}\right), & \left|\psi(t) + \arctan\!\left(\dfrac{k\,e(t)}{v(t)}\right)\right| < \delta_{\max} \\[4pt] \delta_{\max}, & \psi(t) + \arctan\!\left(\dfrac{k\,e(t)}{v(t)}\right) \geq \delta_{\max} \\[4pt] -\delta_{\max}, & \psi(t) + \arctan\!\left(\dfrac{k\,e(t)}{v(t)}\right) \leq -\delta_{\max} \end{cases}$$
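A minimal Python sketch of this saturated steering law is given below; the function and variable names are illustrative and the gain and limit values are placeholders, not those used in the study.

```python
import math

def stanley_steering(psi, e, v, k, delta_max):
    """Saturated Stanley steering command (sketch of the law above).

    psi       -- heading error w.r.t. the closest trajectory segment [rad]
    e         -- cross-track error measured at the front axle [m]
    v         -- vehicle velocity [m/s] (assumed non-zero)
    k         -- position gain
    delta_max -- steering limit [rad], 0 < delta_max < pi/2
    """
    delta = psi + math.atan2(k * e, v)
    # Clip the command to the range in which the controller is stable.
    return max(-delta_max, min(delta_max, delta))

# Example: 0.2 m cross-track error and 5 deg heading error at 20 m/s
print(stanley_steering(math.radians(5.0), 0.2, 20.0, k=2.5, delta_max=math.radians(30)))
```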
The longitudinal controller selects the lowest speed as the desired reference point based on the recommendations provided by the trajectory planner, a safety speed recommender, and a health monitor [32]. This controller computes a single proportional integral (PI) control action with tracking windup and feed-forward gains, and can be expressed as [33]:

$$y = \frac{K_{ff}}{v_{nom}}\,v_{ref} + \frac{K_{p}}{v_{nom}}\,e_{v} + \frac{K_{i}}{v_{nom}}\int \left(e_{v} + K_{aw}\,e_{out}\right)dt + K_{g}\,\theta$$

where $y$ represents the nominal control output magnitude, $y_{sat}$ is the saturated control output magnitude, $K_{p}$ is the proportional gain, $K_{i}$ is the integral gain, $K_{ff}$ is the velocity feed-forward gain, $K_{aw}$ is the anti-windup gain, $K_{g}$ is the grade angle feed-forward gain, $v_{nom}$ is the nominal vehicle speed, $v_{ref}$ is the reference velocity, $e_{v}$ is the velocity error, $e_{out} = y_{sat} - y$ is the difference between saturated and nominal control outputs, and $\theta$ is the grade angle.
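A minimal sketch of a discrete-time implementation of this controller structure is given below; the gain values, saturation limits, and class interface are illustrative assumptions, not the settings used in the study.

```python
class LongitudinalPI:
    """Discrete-time PI speed controller with anti-windup and feed-forward,
    following the structure of the expression above. Gain values and the
    saturation limits are illustrative, not the ones used in the study."""

    def __init__(self, kp, ki, kff, kaw, kg, v_nom, y_min=-1.0, y_max=1.0):
        self.kp, self.ki, self.kff, self.kaw, self.kg = kp, ki, kff, kaw, kg
        self.v_nom = v_nom
        self.y_min, self.y_max = y_min, y_max
        self.integral = 0.0          # running integral of (e_v + kaw * e_out)
        self.e_out = 0.0             # y_sat - y from the previous step

    def step(self, v_ref, v, theta, dt):
        e_v = v_ref - v              # velocity error
        self.integral += (e_v + self.kaw * self.e_out) * dt
        y = (self.kff * v_ref + self.kp * e_v + self.ki * self.integral) / self.v_nom \
            + self.kg * theta        # grade-angle feed-forward term
        y_sat = max(self.y_min, min(self.y_max, y))
        self.e_out = y_sat - y       # fed back into the integrator (tracking windup)
        return y_sat                 # normalized acceleration/brake command

# Example: one 10 ms control step at 18 m/s while tracking 20 m/s on a flat road
ctrl = LongitudinalPI(kp=8.0, ki=1.0, kff=0.2, kaw=1.0, kg=0.0, v_nom=20.0)
print(ctrl.step(v_ref=20.0, v=18.0, theta=0.0, dt=0.01))
```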
In Section 3.1.2, we describe the vehicle model and the simulation scenario. The vehicle path and the lateral accelerations during the lane changing maneuver were obtained by choosing four relevant tuning parameters of the trajectory controllers: three parameters of the PI longitudinal controller (different longitudinal acceleration modes) and one for the Stanley lateral controller (different trajectories). The other controller parameters do not influence the lateral accelerations.
Table 2 presents some of the main parameters of the electric vehicle.
The core of the research consisted of running a series of simulations. They were carried out to choose the tuning parameters that impose sensations similar to manual driving under identical conditions. In the first phase, we identified the parameters that influence the trajectory of the autonomous vehicle and the resulting accelerations during the overtaking maneuver. Four tuning parameters were selected: proportional gain (Kp), integral gain (Ki), velocity feed-forward (Kff) for the longitudinal controller, and the position gain of forward motion (k) for the lateral controller.
After conducting simulations and testing several parameters, the values of the tuning parameters outlined in Table 3 were selected. There were 4 or 5 different values for each parameter, resulting in 400 combinations. With these values, we could generate 400 unique overtaking trajectories and accompanying lateral acceleration curves.
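As an illustration, the enumeration of the 400 parameter combinations can be sketched as follows; the numeric values are placeholders chosen only to reproduce the count (5 × 4 × 4 × 5 = 400), not the values actually listed in Table 3.

```python
from itertools import product

# Illustrative sketch of how the 400 simulation runs can be enumerated.
# The actual values come from Table 3; the ones below are placeholders.
Kp_values  = [4.0, 6.0, 8.0, 10.0, 12.0]   # proportional gain (longitudinal)
Ki_values  = [0.5, 1.0, 1.5, 2.0]          # integral gain (longitudinal)
Kff_values = [0.1, 0.2, 0.3, 0.4]          # velocity feed-forward gain
k_values   = [1.0, 2.0, 3.0, 4.0, 5.0]     # position gain (Stanley controller)

combinations = list(product(Kp_values, Ki_values, Kff_values, k_values))
print(len(combinations))  # 400 unique parameter sets, one per simulation run

# Each tuple (Kp, Ki, Kff, k) parameterizes one autonomous overtaking run,
# producing one trajectory and one lateral acceleration curve.
```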
Following the simulations, a series of different paths were obtained (shown with different colors in Figure 5), as well as different lateral accelerations (shown with different colors in Figure 6) and longitudinal accelerations (shown with different colors in Figure 7). The obtained curves are shifted in time, even though the overtaking maneuver always starts from the same position, because the longitudinal controller parameters differ for each run. Changing the tuning parameters allows us to obtain various behaviors, imposing different sensations on the driver.
3.3. Defining the Learning Algorithm and Selecting the Tuning Parameters
A neural network was designed and trained to select the proper tuning parameters matching the personal driving style [34]. Due to the relatively small number of simulations (400), the learning algorithm is a two-layer neural network with feed-forward information propagation from input (accelerations) to output (tuning parameters). This algorithm calculates the tuning parameters for autonomous driving using recorded lateral acceleration data from a similar maneuver under similar conditions (performed in the driving simulator). In other words, it can replicate and transfer a driver's driving style to an autonomous vehicle.

The following structure of the neural network was used: thirty inputs, which are the lateral acceleration values obtained in manual driving after the sampling process; twenty-five elements in the first hidden layer; and four elements in the output layer, corresponding to the tuning parameters of the autonomous vehicle in the selected scenario (the four parameters of the longitudinal and lateral controllers).
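A minimal sketch of such a 30-25-4 feed-forward network is given below, assuming a scikit-learn implementation with synthetic data standing in for the simulation results; the activation function, solver, and training settings are illustrative assumptions, as the paper does not specify them.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch of the described network: 30 re-sampled lateral acceleration values
# as input, one hidden layer with 25 neurons, and 4 outputs corresponding to
# the tuning parameters (Kp, Ki, Kff, k). The synthetic training data below
# stand in for the 400 simulation results.
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, (400, 30))                  # re-sampled lateral accelerations
y_train = rng.uniform([4.0, 0.5, 0.1, 1.0],
                      [12.0, 2.0, 0.4, 5.0], (400, 4))     # tuning parameter sets

net = MLPRegressor(hidden_layer_sizes=(25,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Estimate the controller parameters from one re-sampled manual-driving curve
manual_sample = rng.normal(0.0, 1.0, 30)                   # placeholder for a driver recording
print(net.predict(manual_sample.reshape(1, -1)))           # [Kp, Ki, Kff, k] estimate
```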
4. Experimental Results
In the first stage, the autonomous vehicle was analyzed during the double lane change overtaking maneuver (Figure 8). Four hundred simulations were performed with the same reference path but with different values of the four selected longitudinal and lateral controller tuning parameters.

In the second stage, the users were instructed to drive the car in the simulator. They were asked to accelerate up to a velocity of 20 m/s and then make a double lane change maneuver by overtaking a stationary vehicle on the road. To force the user to follow the path imposed on the autonomous vehicle and to start the overtaking process as close as possible to the stationary obstacle, another vehicle was introduced in the scenario, moving in the opposite direction (Figure 9).
In the final stage, the neural network matched the lateral acceleration obtained in manual driving to the autonomous vehicle by selecting the right combination of the tuning parameters, thus obtaining the same lateral acceleration.
4.1. Autonomous Vehicle Simulations during the Overtaking Process
The lateral accelerations were recorded (Figure 10: a—recorded data, b—sample determination) using different tuning parameters (Table 3) for the longitudinal and lateral controllers.
Since the amount of recorded data is very large (8000 values per second, with the entire process lasting 25 s) and the duration of the overtaking process is different in each simulation, the data were re-sampled. The process took three stages until the sampling goal was reached. The first stage involves selecting an interval that covers the lane change maneuvers from all 400 simulations (the interval from 13 s to 22 s; Figure 10a). In the second stage, values close to zero were not considered; thus, an absolute threshold of 0.015 m/s² was chosen, below which values were discarded. In the last stage, the remaining values were sampled by dividing them into 30 intervals of equal length (Figure 10b) and selecting only the first value from each interval. Through this sampling, the obtained curve closely follows the original curve and approximates the sensation offered by each tuning parameter combination during the double lane changing maneuver.
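A minimal sketch of this three-stage re-sampling is given below; the synthetic signal and the array handling are illustrative, not the processing code used in the study.

```python
import numpy as np

def resample_lateral_acceleration(t, a_lat, t_start=13.0, t_end=22.0,
                                  threshold=0.015, n_samples=30):
    """Three-stage re-sampling described above (sketch):
    1) keep the 13-22 s window covering the lane change maneuvers,
    2) drop near-zero values below 0.015 m/s^2 in absolute value,
    3) split the rest into 30 equal-length intervals and keep the
       first value of each."""
    # Stage 1: window selection
    mask = (t >= t_start) & (t <= t_end)
    a = a_lat[mask]
    # Stage 2: threshold on the absolute value
    a = a[np.abs(a) >= threshold]
    # Stage 3: 30 equal-length intervals, first value of each
    intervals = np.array_split(a, n_samples)
    return np.array([chunk[0] for chunk in intervals])

# Example with synthetic data recorded at 8000 Hz for 25 s
t = np.linspace(0.0, 25.0, 25 * 8000)
a_lat = 2.0 * np.sin(2 * np.pi * (t - 13.0) / 9.0) * ((t > 13.0) & (t < 22.0))
print(resample_lateral_acceleration(t, a_lat).shape)   # (30,)
```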
4.2. Driving Simulator Simulations
In the driving simulator (see Figure 1), the user was instructed to drive the electric vehicle in their own preferred style and perform the double lane change maneuver, as mentioned before. The lateral accelerations during the experience were recorded.

The driving simulator produced lateral accelerations that needed to be re-sampled, as in the previous case, specifically the values during the overtaking maneuver. Figure 11 depicts the lateral accelerations obtained while driving in the simulator and the re-sampled values. The goal is to identify the tuning parameters that will generate similar experiences during autonomous driving. To accomplish this, we must find a curve that closely matches the lateral accelerations observed during the simulations with those gathered while driving in the simulator.
4.3. Testing the Selection Algorithm
Comparing autonomous driving (displayed in Figure 6 and Figure 10, where all lateral accelerations are shown) with manual driving (shown in Figure 11: a—recorded data, b—sample determination), the start and end of the maneuver are clearly defined in the lateral acceleration curve for autonomous driving, while it is difficult to estimate them in the curve for manual driving. This is because, in the simulator, users have different driving styles, and the start and end times vary from one user to another.

We resolved the discrepancy by varying the moments considered as the beginning and end of the overtaking maneuver. Thus, variations of 80 values for the starting point and 100 for the endpoint (on the recorded accelerations) were considered, resulting in 8000 combinations of re-sampled lateral acceleration sets.

For each set, the trained neural network estimates the controllers' four tuning parameters (Parameter 1 to Parameter 4). Figure 12 presents the results obtained for the 8000 trials with different colors. The resulting values are filtered for valid combinations, with each of the parameters belonging to the intervals in Table 3. Six combinations of the tuning parameters are obtained as valid ones that give lateral accelerations very close to those obtained with manual driving. The results for the valid tuning parameters are presented in Table 4, and the lateral accelerations in these six cases are shown in Figure 13. These graphs are very close to the values shown in Figure 10a, and the modest discrepancies can be explained by the relatively small number (400) of cases taken into consideration.
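The selection step can be sketched as follows; the recording, the predictor, and the numeric ranges below are placeholders standing in for the real data, the trained 30-25-4 network, and the intervals of Table 3.

```python
import numpy as np
from itertools import product

# Sketch of the selection step described above: vary the assumed start (80
# options) and end (100 options) of the overtaking maneuver, re-sample each
# candidate window to 30 values, estimate the four tuning parameters, and
# keep only estimates inside the valid ranges. All inputs are placeholders.
rng = np.random.default_rng(0)
manual_accel = rng.normal(0.0, 1.0, 200_000)            # stand-in for the recorded signal

def resample_window(signal, start_shift, end_shift, n_samples=30):
    """Re-sample one candidate window of the recording to 30 values."""
    window = signal[100_000 + start_shift : 180_000 - end_shift]
    return np.array([chunk[0] for chunk in np.array_split(window, n_samples)])

def predict_params(sample):
    """Stand-in for the trained network's predict(); returns [Kp, Ki, Kff, k]."""
    return rng.uniform([4.0, 0.5, 0.1, 1.0], [14.0, 2.5, 0.5, 6.0])

param_bounds = [(4.0, 12.0), (0.5, 2.0), (0.1, 0.4), (1.0, 5.0)]  # placeholder ranges

valid_sets = []
for start_shift, end_shift in product(range(80), range(100)):     # 8000 trials
    sample = resample_window(manual_accel, start_shift, end_shift)
    params = predict_params(sample)
    if all(lo <= p <= hi for p, (lo, hi) in zip(params, param_bounds)):
        valid_sets.append(params)

print(len(valid_sets), "valid tuning parameter combinations")
```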
5. Discussion
Eight users were asked to assess the hardware setup of the vehicle simulator and perform the lane changing maneuver. All of them considered the simulator useful and capable of efficiently transferring individual driving styles to an autonomous vehicle. The negative aspects mentioned by the users refer to the lack of specific noise and of high-frequency feedback from the vehicle's tires.
The described case study demonstrated the potential of driver-in-the-loop simulations for recognizing individual driving styles in a critical scenario and transferring them to an AV. This study used a series of lateral and longitudinal accelerations to impose different trajectories in autonomous driving in a double lane change maneuver. Different longitudinal and lateral controller parameters can impose different sensations on the passengers. In the same scenario, lateral accelerations in manual driving were recorded. A neural network was proposed to choose four controller parameters that impose lateral accelerations similar to manual driving. The neural network was trained on a relatively small training set (400 combinations of the controller parameters) and could select six combinations within the imposed limits that match manual driving. These six cases closely follow the lateral accelerations obtained in the simulator.
Virtual automobiles and simulators can be used to research autonomous vehicles by allowing for the testing of different driving situations in a controlled and safe environment. This eliminates the difficulties of road tests and allows for an understanding of driving behavior due to the possibility of realistic interaction in a safe operating environment, generating visual and kinesthetic feedback.
6. Conclusions
This paper analyzes the literature on personalized driving styles and autonomous vehicles and cites several studies that have explored the relationship between human driving behavior and autonomous vehicles, including research on driving styles, driving patterns, and driver–vehicle interactions. This paper also discusses the potential of driver-in-the-loop simulations in recognizing individual driving styles for AVs. It provides the basis for further exploration of the broader implications of human–machine/AV interaction on road safety and passenger comfort.
The primary objective of this work is to detect personalized driving styles in safety-critical scenarios using driver-in-the-loop simulations. This research offers a method for bridging the safety gap between human driving behavior and AV predictability. Various users and an AV performing identical lane change maneuvers were used for this study. The user's personalized approach and the AV's method were considered when changing the lateral and longitudinal controller parameters. This research also explores how driver-in-the-loop simulations could help AVs recognize unique driving styles. It also lays the groundwork for a more thorough investigation of the broader effects of human–machine/AV interaction on traffic safety and passenger comfort.
The main findings of this research are:
- The user studies for identifying personal driving styles can be performed on a driver-in-the-loop simulator. This study used a MOOG 6DOF 2000E hexapod motion platform (Stewart platform, produced by MOOG in Elma, USA) and a Speedgoat real-time computing machine. The software tools employed in the research to build the simulation scenarios were RoadRunner 2022a, Unreal Engine 4.26, and the Driving Scenario Designer App from MATLAB 2022a.
- The proposed study used the Stanley controller to steer the vehicle along different trajectories by modifying the lateral and longitudinal control. This can generate a family of trajectories and can be used to train a neural network.
- By changing the tuning parameters in the controllers, various behaviors can be obtained, which will impose different sensations on the passengers, thus contributing to the choice of those parameters that are similar to manual human driving.
- Personal driving style can be evaluated by measuring the lateral accelerations in the simulator. In this study, obstacle avoidance was considered the critical scenario.
- The neural network replicates and transfers a driver's driving style to an autonomous vehicle. The selected learning algorithm was a two-layer neural network with feed-forward information propagation from input to output. This algorithm calculated the tuning parameters for autonomous driving using a recording of a similar maneuver under similar conditions (performed in the driving simulator).
In the future, designing a fully autonomous vehicle system that can handle vehicle performance like a human in all possible conditions will be a challenge. To achieve this, as many criteria as possible must be identified that can be adjusted to turn an autonomous vehicle into a safe and comfortable mode of transportation. This research can be extended by increasing the sample size or by considering other parameters, such as time to collision, vehicle velocity, road curvature, or jerk. All these aspects of human–machine interaction must be constructed using simulators and tested in various safe scenarios focusing on individual needs. However, many challenges remain in designing a fully autonomous system for driverless cars, including road conditions, sensors, actuators, algorithms, and powerful processors to execute the software.
Author Contributions
Conceptualization, S.B. and C.A.; Methodology, I.-D.B. and C.A.; Software, I.-D.B., I.-A.R., A.-C.P. and C.A.; Validation, I.-D.B. and I.-A.R.; Investigation, A.-C.P.; Data curation, I.-D.B.; Writing—original draft, S.B. and C.A.; Writing—review & editing, I.-D.B., S.B., I.-A.R., A.-C.P. and C.A.; Supervision, S.B.; Project administration, C.A.; Funding acquisition, C.A. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by a grant from the Romanian Ministry of Research, Innovation and Digitization, CNCS/CCCDI—UEFISCDI, project number PN-III-P2-2.1-PED-2019-4366 (431PED) within PNCDI III.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Kirytopoulos, K.; Kazaras, K.; Papapavlou, P.; Ntzeremes, P.; Tatsiopoulos, I. Exploring driving habits and safety critical behavioral intentions among road tunnel users: A questionnaire survey in Greece. Tunn. Undergr. Space Technol. 2017, 63, 244–251.
- Schrum, M.L.; Sumner, E.S.; Gombolay, M.C.; Best, A. MAVERIC: A Data-Driven Approach to Personalized Autonomous Driving. arXiv 2023, arXiv:2301.08595.
- Ding, W.; Xu, M.; Zhao, D. CMTS: A Conditional Multiple Trajectory Synthesizer for Generating Safety-Critical Driving Scenarios. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Virtual Events, 31 May–31 August 2020.
- Ye, M.; Pu, L.; Li, P.; Lu, X.; Liu, Y. Time-Series-Based Personalized Lane-Changing Decision-Making Model. Sensors 2022, 22, 6659.
- Zhu, J.; Zhang, H. Personal Driving Style Learning for Autonomous Driving. U.S. Patent Application No. 16/825,886, 9 July 2020.
- Zhao, S.; Chen, G.; Hua, M.; Zong, C. An identification algorithm of driver steering characteristics based on backpropagation neural network. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2019, 233, 2333–2342.
- Dai, C.; Zong, C.; Zhang, D.; Hua, M.; Zheng, H.; Chuyo, K. A Bargaining Game-Based Human–Machine Shared Driving Control Authority Allocation Strategy. IEEE Trans. Intell. Transp. Syst. 2023, 1–15.
- Chandra, R.; Wang, M.; Schwager, M.; Manocha, D. Game-Theoretic Planning for Autonomous Driving among Risk-Aware Human Driver. In Proceedings of the International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 2876–2883.
- SAE Levels of Driving Automation™ Refined for Clarity and International Audience. Available online: https://www.sae.org/blog/sae-j3016-update (accessed on 18 August 2023).
- Betz, J.; Wischnewski, A.; Heilmeier, A.; Nobis, F.; Stahl, T.; Hermansdorfer, L.; Lienkamp, M. A Software Architecture for an Autonomous Racecar. In Proceedings of the IEEE 89th Vehicular Technology Conference (VTC2019-Spring), Kuala Lumpur, Malaysia, 28 April–1 May 2019.
- AbdelHamed, A.; Tewolde, G.; Kwon, J. Simulation Framework for Development and Testing of Autonomous Vehicles. In Proceedings of the 2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Vancouver, BC, Canada, 9–12 September 2020.
- Voinea, G.-D.; Boboc, R.G.; Buzdugan, I.-D.; Antonya, C.; Yannis, G. Texting While Driving: A Literature Review on Driving Simulator Studies. Int. J. Environ. Res. Public Health 2023, 20, 4354.
- Chu, H.; Zhuang, H.; Wang, W.; Na, X.; Guo, L.; Zhang, J.; Chen, H. A Review of Driving Style Recognition Methods from Short-Term and Long-Term Perspectives. IEEE Trans. Intell. Veh. 2023, 1–15.
- Butnariu, S.; Girbacia, F.; Antonya, C. Transfer of Personal Driving Styles to Autonomous Vehicles. Eurasia Proc. Sci. Technol. Eng. Math. EPSTEM 2021, 16, 69–76.
- Himmels, C.; Rock, T.; Venrooij, J.; Riener, A. Simulator fidelity influences the sense of presence in driving simulators. In Adjunct Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Seoul, Republic of Korea, 17–20 September 2022; pp. 53–57.
- Hang, J.; Yan, X.; Li, X.; Duan, K.; Yang, J.; Xue, Q. An improved automated braking system for rear-end collisions: A study based on a driving simulator experiment. J. Saf. Res. 2022, 80, 416–427.
- Winkler, S.; Kazazi, J.; Vollrath, M. How to warn drivers in various safety-critical situations—Different strategies, different reactions. Accid. Anal. Prev. 2018, 117, 410–426.
- Sun, X.; Li, J.; Tang, P.; Zhou, S.; Peng, X.; Li, H.N.; Wang, Q. Exploring Personalized Autonomous Vehicles to Influence User Trust. Cogn. Comput. 2020, 12, 1170–1186.
- Yusof, N.M.; Karjanto, J.; Terken, J.; Delbressine, F.; Hassan, M.Z.; Rauterberg, M. The exploration of autonomous vehicle driving styles: Preferred longitudinal, lateral, and vertical accelerations. In Proceedings of the 8th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, Ann Arbor, MI, USA, 24–26 October 2016; pp. 245–252.
- Vasile, L.; Seitz, B.; Staab, V.; Liebherr, M.; Däsch, C.; Schramm, D. Influences of Personal Driving Styles and Experienced System Characteristics on Driving Style Preferences in Automated Driving. Appl. Sci. 2023, 13, 8855.
- Ding, W.; Xu, C.; Arief, M.; Lin, H.; Li, B.; Zhao, D. A Survey on Safety-Critical Driving Scenario Generation—A Methodological Perspective. IEEE Trans. Intell. Transp. Syst. 2023, 24, 6971–6988.
- Unreal Engine. Available online: https://www.unrealengine.com/en-US (accessed on 18 August 2023).
- Furqan, M.; Suhaib, M.; Ahmad, N. Studies on Stewart platform manipulator: A review. J. Mech. Sci. Technol. 2017, 31, 4459–4470.
- El-Badawy, A.; Youssef, K. On modeling and simulation of 6 degrees of freedom Stewart platform mechanism using multibody dynamics approach. ECCOMAS Multibody Dyn. 2013, 1, 751–759.
- Sen, S.; Dasgupta, B.; Mallik, A.K. Variational approach for singularity-free path-planning of parallel manipulators. Mech. Mach. Theory 2003, 38, 1165–1183.
- Fang, Z.; Kemeny, A. An efficient Model Predictive Control-based motion cueing algorithm for the driving simulator. Simulation 2016, 92, 1025–1033.
- Antonya, C.; Irimia, C.; Grovu, M.; Husar, C.; Ruba, M. Co-Simulation Environment for the Analyzis of the Driving Simulator's Actuation. In Proceedings of the 7th International Conference on Control, Mechatronics and Automation (ICCMA), Delft, The Netherlands, 6–8 November 2019.
- Antonya, C.; Husar, C.; Butnariu, S.; Pozna, C.; Băicoianu, A. Driver-in-the-Loop Simulator of Electric Vehicles. In Proceedings of the Conference on Sustainable Urban Mobility, Skiathos, Greece, 31 August–2 September 2022; Springer Nature: Cham, Switzerland, 2022; pp. 135–142.
- Speedgoat. Mobile Real-Time Target Machine. Available online: https://www.speedgoat.com/products-services/real-time-target-machines/mobile-real-time-target-machine (accessed on 18 August 2023).
- Simulink Real-Time. Available online: https://www.mathworks.com/products/simulink-real-time.html (accessed on 18 August 2023).
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Mahoney, P. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692.
- Hoffmann, G.M.; Tomlin, C.J.; Montemerlo, M.; Thrun, S. Autonomous Automobile Trajectory Tracking for Off-Road Driving: Controller Design, Experimental Validation and Racing. In Proceedings of the 2007 American Control Conference, New York, NY, USA, 11–13 July 2007.
- Predictive Driver. Available online: https://www.mathworks.com/help/vdynblks/ref/predictivedriver.html (accessed on 18 August 2023).
- Marian, S. Introducere în Rețele Neuronale—Teorie și Aplicații [Introduction to Neural Networks—Theory and Applications]. Available online: https://code-it.ro/introducere-in-retele-neuronale-teorie-si-aplicatii/ (accessed on 18 August 2023).