Article

Assessing Training Methods for Advanced Driver Assistance Systems and Autonomous Vehicle Functions: Impact on User Mental Models and Performance

1 School of Engineering, STEM College, RMIT University, P.O. Box 2476, Melbourne, VIC 3001, Australia
2 Law and Technology Group, Law School, La Trobe University, Melbourne, VIC 3086, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2348; https://doi.org/10.3390/app14062348
Submission received: 8 January 2024 / Revised: 23 February 2024 / Accepted: 7 March 2024 / Published: 11 March 2024
(This article belongs to the Special Issue Intelligent Transportation Systems in Smart Cities)

Abstract: Understanding the complexities of Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicle (AV) technologies is critical for road safety, especially concerning their adoption by drivers. Effective training is a crucial element in ensuring the safe and competent operation of these technologies. This study emphasises the critical role of training methodologies in shaping drivers’ mental models, defined as the cognitive frameworks individuals use to understand and interact with ADAS and AV systems; these mental models substantially influence how drivers interact with those technologies. A comparative analysis of text-based and video-based training methods has been conducted to assess their influence on participants’ performance and on the development of their mental models of ADAS and AV functionalities. Performance is evaluated in terms of the accuracy and reaction time of the participants as they interacted with ADAS and AV functions in a driving simulation. The findings reveal that video-based training yielded better performance outcomes, more accurate mental models, and a deeper understanding of ADAS functionalities among participants. These findings are crucial for policy makers, automotive manufacturers, and educational institutions involved in driver training. They underscore the necessity of developing tailored training programs to facilitate the proficient and safe operation of increasingly complex automotive technologies.

1. Introduction

Road Traffic Accidents remain a persistent public health challenge, significantly contributing to injuries and loss of life worldwide. The World Health Organization (WHO) estimates that 1.35 million people die in road traffic crashes globally each year [1]. Even in developed nations like Australia, where the road infrastructure is robust and well maintained, the threat of road accidents remains. Recent statistics reveal that more than 1200 lives are tragically lost annually on Australian roads [2,3]. Over 90% of these accidents have been identified by the National Road Safety Partnership Program (NRSPP) in Australia as being caused by human factors, highlighting a substantial behavioural dimension to the prevailing road safety issues [4]. The primary contributors to road accidents include speeding [5], distracted driving, intoxicated driving [6], and driver fatigue [6,7]. To address these challenges, Advanced Driver Assistance Systems (ADAS) such as Adaptive Cruise Control (ACC) have been developed. ACC assists in maintaining a safe distance from the vehicle ahead, reducing the driver’s workload and mitigating issues like speeding and the risk of rear-end collisions [8,9,10]. Similarly, Lane-Keeping Assist (LKA) aids in maintaining the vehicle within its lane, providing corrective steering inputs to prevent unintentional lane departures, thereby enhancing road safety [11].
In recent years, the introduction of ADAS and Autonomous Vehicles (AV) has served as a development milestone in the automotive industry, offering the potential to revolutionise global transportation systems [12,13]. Among the most promising advantages of AVs is their potential to substantially enhance road safety by reducing human-related driving errors, thereby potentially leading to a decrease in accidents [14]. ADAS and AV technologies have the potential to reduce road accidents, with certain studies projecting a reduction of up to 90% [15].
Moreover, the benefits of ADAS and AV technologies extend beyond just enhancing safety. They have the potential to reshape urban planning paradigms, mitigate traffic congestion [16], and improve traffic flow, laying the groundwork for innovative solutions like efficient parking strategies [17,18,19] and platooning [20,21]. The subsequent effects of ADAS and AV technologies indicate a significant impact on our existing transportation infrastructure and methods [18,22]. These technological advancements are expected to greatly benefit arterial road networks, which are essential channels for the distribution of goods, service delivery, and human mobility. Utilising a data-driven approach to educate drivers about the functionalities of ADAS and AV could play a key role in reducing accident rates, thereby enhancing the efficiency and safety of these networks [14,23].
The public has shown a positive response to the enhanced safety features provided by ADAS and AV technologies, as seen in a survey conducted in the USA, where a notable 92% of respondents expressed a preference for ACC function, and 90% showed interest in Collision Avoidance (CA) functions for their future vehicles [10]. Despite this growing interest, the adoption rates for some of the main functions remain low. For instance, ACC, a mature ADAS function, has only had a 26% adoption rate since its introduction in 1998. Similarly, the adoption rate for LKA, available since 2001, stands at a mere 9.8% [24]. Given the technology’s potential to reduce road accidents, identifying effective training methods for drivers of vehicles equipped with ADAS functions is crucial.
Recognising the gap in practical training methods, our study introduces a novel approach by utilising a carefully designed virtual environment that mirrors real-world driving conditions. Features such as realistic traffic patterns, road markings, and detailed surroundings elevate the authenticity of the simulated environment. This provides participants with an immersive experience that closely parallels actual driving scenarios. In the rapidly evolving field of automated transportation, effectively training users on ADAS and AV functions is extremely important because it directly affects how effectively drivers use those functions. This is especially beneficial for policy makers, educators, and industry leaders in developing policies and standards for educating drivers. Our research undertakes a comprehensive comparison of paper-based/text-based and video-based training methods. Our objective is to determine the most efficacious techniques for providing a clear understanding of these complex systems. By identifying the optimal training methods, our study can help drivers gain a better understanding of ADAS and AV functions, which can in turn help to enhance their adoption rates. Proper training ensures that users construct accurate mental models, leading to the correct and optimal utilisation of these systems. Informed interaction reduces the likelihood of user errors and misunderstandings. Therefore, this study underscores the importance of proficient training methodologies in enhancing road safety. With thorough education and awareness, the adoption of automated transportation can foster safer roads and the potential to save lives.
The structure of this paper is as follows: Section 2 presents a comprehensive literature review, emphasising the challenges and significance of training users in ADAS and AV, with a focus on the development of mental models and the evaluation of current training methodologies. Section 3 presents the research methodology, experimental setup, and simulated driving environment. Section 4 outlines the participant recruitment process and demographic composition. Section 5 conducts a comparative analysis between video-based and text-based training methods, focusing on their respective influences on participant accuracy and response times during interactions with ADAS functions. Section 6 provides a comprehensive discussion comparing the efficacy of various training approaches, with particular emphasis on the merits of video-based instruction and its potential applications. Concluding remarks and recommendations for stakeholders are presented in Section 7, based on the insights and analyses garnered throughout this study.

2. Literature Review

The rapid advancement in automotive technology, particularly in ADAS and AV, is changing the landscape of vehicle operation and safety. These sophisticated systems offer promises of enhanced safety and efficiency, but their effectiveness is contingent upon proper user training and understanding [14]. The complexity and diversity of ADAS and AV technologies underscore the importance of effective training methodologies to ensure seamless interaction and user proficiency.
One of the primary challenges in this domain is the lack of standardisation across manufacturers regarding ADAS and AV functionalities [24]. This lack of uniformity can lead to inconsistencies in user experience and potentially compromise safety. For example, the differences in the activation and deactivation processes of the same ADAS function across different manufacturers may lead to confusion for users [24]. Furthermore, the absence of specialised training and evaluation mechanisms for users adds to the complexity, highlighting the need for a more unified approach to training [14,23]. However, the literature lacks comprehensive studies that compare the efficacy of these diverse training methodologies. Central to the user’s interaction with ADAS and AV technologies is the concept of mental models. Mental models in the context of ADAS and AV are critical for the safe and competent operation of these systems, due to the increased automation and complexity [25,26]. For example, the CA function plays a crucial role in reducing the driver’s workload. It alerts the driver and can automatically apply brakes when an imminent collision is detected, thereby contributing to accident prevention. Gaining a comprehensive understanding of the CA system’s capabilities and limitations is essential. Such knowledge is critical for drivers, helping them to avoid excessive dependence on technology while ensuring that they are prepared for timely manual intervention when necessary [9,10,11].
A mental model, in the context of automotive technology, refers to a driver’s understanding and conceptualisation of a system’s capabilities and limitations. It forms the basis for how drivers interact with and utilise the vehicle’s advanced features safely and effectively [14]. A well-defined and robust mental model correlates with more precise and prompter activation of ADAS functionalities. When considering mental models in the automotive technology domain, two primary dimensions emerge. (i) Understanding ADAS functions: drivers need to understand the various features of ADAS and how they work in different scenarios. This includes knowing what the system can do and recognising when it might not work as expected [27]. (ii) Recognising system status: drivers need to be aware of whether the ADAS is currently active. This means knowing if one or more of its functions are on or if they are turned off or not in use [27]. To evaluate the robustness of these mental models, methodologies such as scoring-based assessments and simulation tests can be employed. For instance, a scoring system based on the accuracy of responses to questions about ADAS functions can serve as a quantifiable measure. Here, accurate responses represent a “strong” mental model, while incorrect or incomplete responses indicate a “weak” mental model [27,28]. In practical terms, a precise or ‘strong’ mental model can enhance the safe and effective deployment of ADAS and AV technologies. Conversely, a weak mental model can lead to higher risks of misuse, excessive reliance, and potentially hazardous situations [27,29].
In the context of evaluating the effectiveness of ADAS and AV training methodologies, it is crucial to select an appropriate simulation platform that aligns with this study’s objectives. In the realm of ADAS and AV technology training, various simulation platforms offer distinct advantages, each with its unique features. For example, CARLA is renowned for its comprehensive automotive simulations and extensive capabilities in environmental modelling [30,31], while CarSim is widely recognised for its realistic automotive and vehicle dynamics simulations [32,33]. The primary focus of CARLA and CarSim is on the comprehensive modelling of vehicular dynamics and the driving environment. While this is beneficial for various research contexts, it does not closely align with the specific objectives of our study. Our research emphasises understanding driver behaviour and interaction with ADAS and AV functions. Therefore, the York driving simulation software version 6.61, developed by York Computer Technologies Inc., Kingston, Canada, was chosen, specifically for its strengths in facilitating driver interaction with these functionalities and simulating human factors, which are paramount for assessing the development of accurate mental models in drivers. The York simulator includes built-in functions such as Autopilot, LKA, and CA, making it particularly suited for studies focusing on driver training and behaviour. This choice aligns with our research objectives, concentrating on evaluating different training methodologies’ effectiveness on participants’ comprehension and performance in ADAS and AV technologies.
The automotive sector presents diverse training methodologies, ranging from paper-based and video-based to demonstration-based and trial-and-error techniques [12,14,34,35,36,37,38]. Globally, billions are spent annually on skill and safety training, highlighting the importance of developing an effective training curriculum [39]. The essential question arises: are current training practices effective, or is there an imperative need to research and identify desirable methodologies according to the transport industry’s requirements? Studies across various fields, including healthcare, education, construction, sports, and computer science, have indicated a trend toward the superior efficacy of video-based training, which could hold valuable insights for automotive training methodologies [40,41,42,43,44,45,46,47]; however, the transport industry lacks specific research on how these findings translate to ADAS and AV training. This transition is theoretically grounded in the cognitive load theory of multimedia learning, suggesting that video-based training could improve information absorption and its meaningful comprehension, thus potentially transforming the automotive training landscape [35,48]. This study significantly contributes to the field by providing empirical evidence on the effectiveness of video-based training over text-based methods in developing accurate mental models of ADAS and AV functionalities, thereby enhancing driver performance and safety.
This literature review underscores the criticality of comprehending the intricacies of ADAS and AV technologies, along with the significance of tailored training methodologies for their effective adoption and safe usage. Emphasising the diversity of users and the varying complexity of these systems, this review highlights the growing need for specific training approaches that accommodate different learning styles and preferences.

3. Materials and Methods

3.1. Experiment Setup

The experiment was carried out using a York driving simulator, integrated with a Logitech G27 racing wheel system, equipped with a set of pedals and a shifter module. A wide range of ADAS and AV functions, as detailed in Table 1, were allocated to distinct buttons located on both the steering wheel and the shifter module to replicate a realistic driving environment.
The experimental design involved manoeuvring an AV within a three-dimensional virtual environment. Participants had the option to control the vehicle manually or allow it to function autonomously. Additionally, the ADAS features could be switched between activated and deactivated states. The simulated AV featured automatic transmission, allowing the panel on the shifter module to be utilised exclusively for activating and deactivating ADAS functions. As a result, the shifter lever was intentionally disabled for the entirety of the experiment to avoid any operational confusion.
To enhance the ecological validity of the research, meticulous attention was devoted to the design elements, aimed at simulating a highly immersive and authentic driving scenario. This arrangement is graphically represented in Figure 1 and Figure 2. Figure 1 illustrates components “a” through “c” as the steering wheel, driving seat, and pedals, respectively. Figure 2 showcases the driving environment and components “d” through “h”, each corresponding to specific ADAS functions.
In reference to Figure 2, component “e” indicates the location of the LKA button, which is situated on the shifter module. In the simulation, when LKA is activated, its corresponding indicator is illuminated in green on the virtual dashboard, as illustrated in Figure 3. This function provides the necessary steering inputs to guide the vehicle towards the centre of the lane; however, it has no control over the accelerator and brake and thus no control over the vehicle’s speed. Component “f” illustrates the positions of the ACC features on the side panel. When engaged, the ACC system autonomously modifies the vehicle’s speed to maintain a safe distance from preceding vehicles, thereby promoting both driver convenience and safety. The Autopilot-On (AP-On) feature is represented by component “g” and is conveniently located on the steering wheel. When the AP function is enabled, the system assumes full control of the vehicle, necessitating no driver intervention. It governs the lateral movements of the vehicle and modulates its acceleration and deceleration. Finally, component “h” denotes the CA function integrated into the steering wheel. When activated, the CA system intervenes to mitigate potential frontal collisions.
According to the SAE J3016 standard [49,50], the LKA, ACC, and CA systems in our simulation are categorised as Level 2 (partial driving automation). These systems require the driver to remain engaged with the driving task and to monitor the environment, although they assist in both lateral and longitudinal vehicle motion control. The Autopilot feature, designed to represent more advanced AV functionalities, simulates aspects of Level 5 (full driving automation), aiming to demonstrate a future scenario where the vehicle achieves full automation, requiring no driver intervention.

3.2. Driving Scenario and Environment

The interface of the driving simulator is illustrated in Figure 3, with the red box representing the vehicle’s front section or bonnet. Immediately below this, the dashboard is displayed, incorporating both a speedometer and status indicators for the ADAS and AV functions. These indicators illuminate in green when the corresponding ADAS functions are activated; for example, as presented in Figure 3, the LKA is active. The three-dimensional virtual environment incorporates key elements commonly encountered on actual roadways. This includes varying traffic flows featuring a representative mix of vehicle types, such as passenger cars, commercial vehicles, and emergency vehicles in motion, which contribute to a realistic traffic pattern, as demonstrated in Figure 4. Specifically, the traffic density within the urban areas is depicted as higher than that of freeway driving, reflecting the frequent encounters with emergency vehicles and pedestrian crossings. The presence of streetlights and road markings is designed to mirror urban and suburban settings, presenting a spectrum of driving scenarios. Pedestrians are strategically placed within the environment to emulate real-life pedestrian behaviours and interactions, prompting participants to make more sophisticated driving choices. Designated speed limits, architectural features, and natural scenery are also accurately depicted. Together, these elements enhance the ecological validity of the simulator and aid participants in navigating their virtual surroundings [51].
Figure 5 provides an aerial perspective of the designated driving path, with a red circle indicating the vehicle’s starting point. The route consists of low-speed (60 km/h) and high-speed (120 km/h) segments, highlighted by red and green arrows, respectively. These segments serve to simulate the transition between urban and freeway driving conditions, offering participants a diversified driving experience. Participants commence in the low-speed area, passing through an intersection to enter the high-speed segment, then cross another intersection before returning to the initial low-speed zone. A series of pre-programmed triggers (T1 to T4) are strategically positioned along the route, each initiating specific events when the vehicle passes. The audio instructions associated with these triggers are outlined in Table 1 and are played to prompt participant actions.
In the current investigation, we conduct a comprehensive analysis of multiple variables, including steering angles (degrees), patterns of acceleration and braking (m/s²), event trigger timestamps, and the specifics of button activations, including their timing. In this study, the accuracy of participants in using ADAS and AV functionalities was quantified as a percentage. This percentage represents the proportion of participants who successfully executed the required actions with these systems during the experiment. Additionally, we assess participants’ response latencies, captured in seconds, following the initiation of triggered events.
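To make these two performance measures concrete, the sketch below shows how accuracy and response latency could be derived from simulator logs. The record layout, field names, and values are illustrative assumptions for this paper; the York simulator’s actual log format is not documented here.

```python
from statistics import median

# Hypothetical log records: (trigger_time_s, expected_function, pressed_function, press_time_s).
# The field layout and values are illustrative placeholders, not the study's raw data.
events = [
    (12.4, "AP_ON", "AP_ON", 15.6),
    (48.1, "LKA",   "ACC",   52.9),   # wrong button pressed -> counted as inaccurate
    (90.3, "CA",    "CA",    94.5),
]

correct = [e for e in events if e[1] == e[2]]
accuracy_pct = 100.0 * len(correct) / len(events)            # accuracy as a percentage of events
latencies = [press - trig for trig, _, _, press in correct]  # response latency in seconds

print(f"Accuracy: {accuracy_pct:.0f}%, median latency: {median(latencies):.2f} s")
```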

4. Recruitment of Participants

The objective of this research was to engage a heterogeneous group, comprising both students and faculty of RMIT University, that reflects diversity in terms of age and gender. A multimodal recruitment strategy was employed, leveraging digital avenues through the university’s online platforms, while supplementing this with traditional paper-based advertisements strategically located throughout the RMIT Bundoora campus.
The resultant sample comprised 48 adult participants aged from 20 to over 40 years, as detailed in Table 2. These participants had varying levels of driving experience, ranging from one to more than six years. They were categorised into three different groups based on established road safety and insurance benchmarks for defining driver expertise [52,53], as demonstrated in Table 3. It was a prerequisite for participation that individuals hold a valid driving license. Efforts were made to ensure gender balance within the cohort; the final composition consisted of 26 males and 22 females. However, it is important to note that despite these efforts, the distribution of participants across age groups was not even. This uneven distribution represents a limitation of our study, as it may have influenced the findings on age-related interactions with ADAS technologies. In addition to these considerations, we acknowledge a potential bias in our participant selection, as our sample primarily consisted of students and faculty members. This demographic focus might limit the generalisability of our findings to the broader population of ADAS and AV users.
To further control extraneous variables, the student participants were recruited from a diverse range of academic disciplines, thus achieving a heterogeneous sample in terms of educational background. While the overall sample size of 48 participants was adequate, the uneven age distribution limits our ability to draw conclusive results about age-specific patterns or impacts. This limitation should be considered when interpreting our findings.
Before their involvement in this study, all participants were comprehensively briefed regarding the research objectives and methodologies. Informed consent was duly obtained following ethical standards. The experimental protocol presented in this work was subject to a thorough review process and subsequently approved by the RMIT University Human Research Ethics Committee (Approval Number: EC 25022). This ensured adherence to established ethical guidelines and academic research standards.

Participant Registration and Training Session at RMIT Bundoora Campus

Participants arrived at the Autonomous Vehicle Lab located on the RMIT Bundoora Campus to begin the experiment. They first underwent a standard registration process, during which they were provided with an overview of this study’s objectives, methods, and significance. Everyone was then given a “Participant Information and Consent Form” that had been pre-approved by the RMIT University Human Research Ethics Committee. After carefully reading the details of the form, participants confirmed their willingness to participate by signing it.
In our study design, we strategically selected two vehicular functions for video-based training and two for paper-based instruction to ensure a comprehensive and equitable comparison. Specifically, we chose the AP-On function located on the steering wheel and the LKA positioned on the shifter module for video-based training. For paper-based training, we selected the CA function on the steering wheel and the ACC function found on the shifter module.
This approach was deliberate; by selecting one function from each location (the steering wheel and the shifter module) for each type of training, we fostered a balanced representation of functionalities in each training modality. This decision guarantees that our comparative analysis between the video and paper-based training results remains fair, maintaining an impartial reflection of the efficacy of each training method without being influenced by the functionalities’ locations on the simulator or inherent characteristics.
Following registration, participants watched an instructional video that presented a concise overview of this study. This video served multiple purposes: it provided a comprehensive overview of this study’s objectives, hardware setup, virtual driving environment, and driving route, and it served as a training module demonstrating two (out of four) key ADAS and AV functions: AP-On and LKA. The video further elaborated on the participants’ expected actions and necessary responses while navigating the simulated environment and responding to specific triggers. In doing so, it effectively communicated this study’s primary objectives and served a dual role as both an informative guide and a practical training tool.
Next, participants were given a comprehensive user manual resembling a real car owner’s guide. This user manual, extending across 4 pages and comprising approximately 1100 words in total, was organised into three sections. The first section introduced the features and operations of the driving simulator. The second provided in-depth information about all available ADAS functions, including procedures for activation and deactivation. The third section described the limitations of each ADAS feature. A time slot of 15 min was allotted for participants to read the manual, with a focus on understanding two of the ADAS functions, namely CA and ACC. According to [54], a non-native English speaker reads at an average rate of 139 words per minute. Therefore, the manual, being approximately 1100 words, could be comfortably read in around 8 min. This calculation allowed us to set a reading time of 15 min, giving participants ample opportunity to review the manual thoroughly, especially the two ADAS functions not covered in the instructional video. The decision to allocate 15 min for this activity was consistent with existing research recommendations [55]. Upon completion of the training, participants were invited to engage in a practical driving session within a simulated AV environment.
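As a quick back-of-the-envelope check of the reading-time estimate above (a minimal sketch; the word count and reading rate are taken directly from the text and reference [54]):

```python
manual_words = 1100        # approximate length of the user manual
reading_rate_wpm = 139     # average non-native English reading rate reported in [54]

estimated_minutes = manual_words / reading_rate_wpm
print(f"Estimated reading time: {estimated_minutes:.1f} min")  # ~7.9 min, hence the ~8 min figure
```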

5. Results

The objective of this study is to reveal the critical importance of effective training for end-users of ADAS and autonomous driving functions. With the increasing complexity and diversity of these systems, user familiarity and proficiency become vital in realising their full potential and ensuring safety. The data are analysed to compare the effectiveness of two distinct training methods: video-based instruction and text-based user manuals. Performance measurements in this study focused on two key areas. The first was participants’ accuracies, which were determined by evaluating the percentage of times they correctly activated or deactivated an ADAS function following instructions. These instructions were associated with pre-programmed triggers (T1 to T4) along the driving route, as detailed in Table 1. Participants’ accuracy was assessed based on their ability to correctly respond to these specific audio instructions by activating or deactivating the relevant ADAS function. The second was response latencies, or the time taken by the participants to respond to an audio instruction by either turning on or turning off the correct ADAS function. Together, these metrics provide insights into how effectively participants understood and remembered the ADAS/AV functions and utilised them properly and promptly in responding to the simulated events.

5.1. Accuracy Analysis

This study aimed to compare the effectiveness of two different training methods for participants learning the functionalities of ADAS. Two functions, AP-On and LKA, were taught through video-based training, while two others, CA and ACC, were explained through user manuals.
Steering wheel controls (AP-On and CA): The results indicate a significant difference in the accuracy levels between the two functions located on the steering wheel. The AP-On function, taught through video, achieved a perfect accuracy rate of 100%, while the CA function, taught through the user manual, demonstrated a lower accuracy of 79%, as demonstrated in Table 4. This variation in results may indicate that visual and interactive teaching methods, such as videos, enhance comprehension and retention of the functionality, leading to higher accuracy in identifying and using the control.
Shifter module controls (LKA and ACC): Similarly, the two functions located on the shifter module, LKA and ACC, showed accuracy levels of 77% and 68%, respectively. This pattern follows the trend observed with the steering wheel controls, where the function taught through video (LKA) had a higher accuracy rate than the one taught through the user manual (ACC), as shown in Table 4.
The consistent trend observed across both sets of controls suggests that the video-based training method may be more effective than the user manual method in instructing participants on the specific ADAS functions. The more interactive and visual nature of the video tutorials may foster a better understanding and ease the learning curve.

5.2. Reaction Time Analysis

The initial step in our data analysis involved conducting a normality assessment to determine if the data follows a normal distribution. For this purpose, we utilised both the Shapiro–Wilk and Anderson–Darling tests. The Shapiro–Wilk test, introduced by [56], is widely recognised for its efficacy in evaluating the normality of a dataset. The Anderson–Darling test, developed by [57], assesses the data’s alignment with a specific distribution (in this instance, the normal distribution) by placing added emphasis on the tails. This phase was crucial as it directed our subsequent choice of either parametric or non-parametric statistical tests, based on the data’s distributional properties.
Data for the ADAS functions trained through video (AP-On and LKA) and those trained through the paper-based user manual (CA and ACC) were both found to deviate from a normal distribution, as confirmed by both normality tests. Given these results, traditional parametric tests like Analysis of Variance (ANOVA), which require data to follow a normal distribution, were not suitable for this study. Therefore, we chose to use the Mann–Whitney U test, a non-parametric method recommended for analysing datasets that do not meet the normality assumption [58]. The null hypothesis for this test asserts that the distributions of the two groups are identical, meaning there is no difference between them. A p-value below a predetermined threshold (p < 0.05) leads us to reject the null hypothesis, indicating that one group tends to have higher or lower values than the other. Thus, a statistically significant result from the Mann–Whitney U test suggests that the observed differences between the groups are not attributable to random variability, but rather signify an underlying disparity between them [58]. These conclusions are drawn with careful consideration of the methodological choices employed and the distributional characteristics inherent to the data.
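The statistical workflow described above (normality screening followed by a non-parametric group comparison) can be reproduced with SciPy as sketched below. The reaction-time arrays are illustrative placeholders, since the study’s raw data are not publicly available.

```python
import numpy as np
from scipy import stats

# Illustrative reaction-time samples in seconds (placeholder values, not the study's data).
rt_video  = np.array([3.1, 2.8, 3.4, 4.0, 2.6, 3.3, 5.1, 3.0])   # e.g., a video-trained function
rt_manual = np.array([4.3, 5.0, 3.9, 6.2, 4.8, 8.1, 4.1, 5.5])   # e.g., a manual-trained function

# Normality screening guiding the parametric vs. non-parametric choice.
print(stats.shapiro(rt_video))                   # Shapiro–Wilk: test statistic and p-value
print(stats.anderson(rt_manual, dist="norm"))    # Anderson–Darling against the normal distribution

# Non-parametric comparison of the two independent groups (two-sided).
u_stat, p_value = stats.mannwhitneyu(rt_video, rt_manual, alternative="two-sided")
print(f"Mann–Whitney U = {u_stat:.1f}, p = {p_value:.4f}")       # p < 0.05 -> reject the null
```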

5.2.1. Comparison of AP-On and CA Functions

In this section, we compared the response times of the AP-On and CA functions. Both functions are activated via controls on the steering wheel, but they were trained through different methods: AP-On through video-based training and CA through a paper-based user manual. Our findings reveal significant patterns in response times, as observed through both descriptive statistics and inferential analysis.
For the AP-On function, the median response time was recorded at 3.16 s, with a relatively narrow range of 1.60 to 5.22 s. Furthermore, the data demonstrate a standard deviation of 0.96 s, indicating less variability in the response times among participants for this function. In contrast, the CA function displayed a median response time of 4.22 s and a broader range, from 2.02 to 8.84 s. The standard deviation for the CA function was also higher, at 1.78 s. As illustrated in Figure 6, these descriptive statistics suggest a more consistent and potentially more rapid responsiveness for the AP-On function compared to the CA function.
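The descriptive summaries reported here and in Section 5.2.2 (median, range, and standard deviation of reaction times) correspond to standard NumPy reductions; the sketch below uses the same kind of placeholder array as in the previous listing.

```python
import numpy as np

rt = np.array([3.1, 2.8, 3.4, 4.0, 2.6, 3.3, 5.1, 3.0])  # placeholder reaction times (s)

summary = {
    "median_s": np.median(rt),   # median response time
    "min_s": rt.min(),           # lower end of the range
    "max_s": rt.max(),           # upper end of the range
    "std_s": rt.std(ddof=1),     # sample standard deviation
}
print(summary)
```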
To robustly substantiate these observations, we conducted a Mann–Whitney U test comparing the response times for AP-On and CA. The analysis revealed a statistically significant difference between the two functions, with a p-value of 0.0135 (p < 0.05). This outcome confirms that the difference in response times between AP-On and CA is statistically significant and points to the training method as a contributing factor to this variation.

5.2.2. Comparison of LKA and ACC Functions

In this section, we compared the response times of LKA and ACC. Both functions are activated via buttons located on the shifter module. However, LKA was trained through video instruction, whereas ACC was taught using a text-based user manual. This divergence in training methods laid the groundwork for examining their potential impact on user responsiveness.
The descriptive analysis reveals that the LKA function demonstrated a median response time of 4.04 s, with a range from 2.82 to 7.88 s and a standard deviation of 1.22 s. The ACC function, in comparison, showed a higher median response time of 5.32 s, extending over a range from 2.93 to 8.34 s, with a greater standard deviation of 1.59 s. As illustrated in Figure 7, responses to the LKA function were generally faster and more consistent among participants compared to the ACC function.
To validate the significance of the observed differences in response times between LKA and ACC, we conducted a Mann–Whitney U test. This analysis resulted in a p-value of 0.02, indicating that the variations in response times are statistically significant. This finding provides concrete evidence supporting the hypothesis that different training methods can significantly affect user interaction efficacy with ADAS functions.
The significance of these results is manifold. Firstly, the observable difference in median response times between LKA and ACC highlights the importance of investigating the most effective training modalities for each specific ADAS function. Secondly, the variance in standard deviations suggests that the training method may influence the speed and the uniformity of responses among participants.

6. Discussion

This research contributes valuable insights into the field of automotive technology, particularly in training users to interact with complex vehicle systems. Through a carefully and rigorously designed experimental framework, we evaluated the effectiveness of video-based and text-based instructional approaches on user interaction with complex vehicle systems.
Our findings indicate that using video-based methods to train on ADAS and AV functions led to faster and more consistent response times compared to using user manuals. These findings are consistent with previous research that highlighted the effectiveness of video-based training for understanding complex tasks [35,59]. The superiority of video-based training over paper-based instruction is evidenced by measurable improvements in accuracy and reaction time, indicating a more robust mental model among participants. This observation aligns with [27], which asserts that stronger mental models contribute to more effective and efficient user performance.
Despite the distinct nature of the functions under study, such as AP-On versus CA or LKA versus ACC, the activation mechanism was uniform across pairs: a straightforward single-button press. This consistent methodology of operation, combined with their comparable placements (either on the steering wheel or the shifter module), ensured a level comparison ground. Thus, even though the functions were distinct, the fundamental process for activation remained consistent. This similarity ensures that our comparisons remain valid, as the primary variable in our experiment was the training method, not the operational complexity or location of the functions. Therefore, our findings offer a credible assessment of the training methods’ efficacy, eliminating potential biases arising from varied function complexities or placements.
While text-based manuals provide a comprehensive depth of information, they come with inherent challenges. Consistent with existing literature, our results showed that user manuals often require a higher level of pre-existing technical knowledge and can be cumbersome to study effectively [38,60]. This was reflected in a broader distribution of response times and, in some instances, lower accuracy for the CA and ACC functions.
Statistical rigour was added to these observations through the Mann–Whitney U test, as presented in Figure 6 and Figure 7. These statistical findings further emphasise the need for training methods specifically tailored to the complexities and demands of each ADAS function.
In the context of the growing adoption of ADAS technologies in modern vehicles, our study highlights the importance of selecting the appropriate training methodologies. The advantages of video-based training, as shown in this research, offer promising avenues for creating more effective and safer user training programs. However, it is important to consider the limitations of this study. The results may be specific to the ADAS functions assessed and the applied experimental conditions. Therefore, future research should focus on establishing the applicability of these findings to other control systems and ADAS functions. Explorations into hybrid, interactive, or artificial intelligence-driven training methodologies could also be beneficial for refining training techniques.
Additionally, it is important to consider that our study was conducted under well-lit daytime conditions, which did not consider the potential impacts of fatigue and different lighting conditions, such as those encountered during nighttime driving. These factors are significant in real-world scenarios and merit further investigation. Future studies should aim to investigate a variety of environmental conditions, including those that induce different levels of fatigue, to provide a more comprehensive understanding of how these variables affect interactions with ADAS technologies. Another important limitation to consider is the influence of participants’ prior experience with in-car systems. While our study analysed the effect of different training methods, we did not focus on the impact of pre-existing familiarity with such systems. This factor could affect the results and should be more explicitly addressed in future research.
Moreover, while our research lays foundational groundwork for understanding how different training mediums impact user interactions with ADAS and future AV functions, other variables should not be overlooked. These include the inherent complexity of the functions, technological familiarity among users, and other uncontrolled factors that may impact the observed outcomes. Addressing these variables forms an avenue for future research to further optimise training methods for improving both user performance and the safety of advanced automotive technologies. Lastly, it is crucial to recognise that this study was executed within a simulated environment. Therefore, the real-world applicability of these findings necessitates further research involving practical, on-road conditions to validate the outcomes observed in a controlled setting.

7. Conclusions

This study provides valuable contributions to the automotive technology sector, specifically in training users for ADAS and AV. Our empirical data demonstrate that, compared to traditional text-based manuals, video-based training methods improved user performance both in terms of response time and accuracy. While video-based methods were more effective for quick comprehension and application, text-based manuals showed limitations, especially in terms of user engagement and the breadth of response times.
The results highlight the need for a tailored approach to ADAS and AV training methods, given the unique demands of different functionalities. While our study provides foundational insights into how training methods affect user interaction with complex vehicle systems, it also indicates the necessity for future research to confirm these findings, particularly in real-world settings. This research sets the stage for future investigations into optimising training modules to enhance user performance and automotive safety.

Author Contributions

Conceptualisation, M.M., C.-T.C., M.F. and J.Z.; methodology, M.M., C.-T.C., M.F. and J.Z.; software, M.M.; validation, M.M., C.-T.C., M.F. and J.Z.; formal analysis, M.M. and C.-T.C.; resources, C.-T.C., M.F. and J.Z.; writing—original draft preparation, M.M. and C.-T.C.; writing—review and editing, M.M., C.-T.C., M.F. and J.Z.; visualisation, M.M.; supervision, C.-T.C., M.F. and J.Z.; project administration, C.-T.C., M.F. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study was approved by the RMIT University Human Research Ethics Committee on 22 February 2022 (Approval Number: EC 25022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in this study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References

  1. World Health Organization. Global Status Report on Road Safety 2018, Switzerland, 2018. Available online: https://apps.who.int/iris/handle/10665/276462 (accessed on 15 July 2023).
  2. Australian Road Safety Foundation. Expansion of the RoadSet Program, Australia. 2022. Available online: https://treasury.gov.au/sites/default/files/2022-03/258735_australian_road_safety_foundation.pdf (accessed on 13 July 2023).
  3. Commonwealth of Australia. Road Trauma Australia Annual Summaries. Australian Government Department of Infrastructure, Transport, Regional Development and Communications, Australia. 2020. Available online: https://www.bitre.gov.au/sites/default/files/documents/road_trauma_australia_2019_statistical_summary.pdf (accessed on 13 July 2023).
  4. NRSPP Australia. “Human Error in Road Accidents.” Monash University Accident Research Centre. Available online: https://www.nrspp.org.au/resources/human-error-in-road-accidents/ (accessed on 10 September 2023).
  5. Lehtonen, E.; Malhotra, N.; Starkey, N.J.; Charlton, S.G. Speedometer monitoring when driving with a speed warning system. Eur. Transp. Res. Rev. 2020, 12, 16. [Google Scholar] [CrossRef]
  6. Zhang, N.; Yang, C.; Fard, M. Measured increases in steering entropy may predict when performance will degrade: A driving simulator study. Transp. Res. Part F Traffic Psychol. Behav. 2022, 91, 87–94. [Google Scholar] [CrossRef]
  7. Kettwich, C.; Schrank, A.; Avsar, H.; Oehl, M. A Helping Human Hand: Relevant Scenarios for the Remote Operation of Highly Automated Vehicles in Public Transport. Appl. Sci. 2022, 12, 4350. [Google Scholar] [CrossRef]
  8. Woo, H.; Madokoro, H.; Sato, K.; Tamura, Y.; Yamashita, A.; Asama, H. Advanced Adaptive Cruise Control Based on Operation Characteristic Estimation and Trajectory Prediction. Appl. Sci. 2019, 9, 4875. [Google Scholar] [CrossRef]
  9. McCall, J.C.; Trivedi, M.M. Driver Behavior and Situation Aware Brake Assistance for Intelligent Vehicles. Proc. IEEE 2007, 95, 374–387. [Google Scholar] [CrossRef]
  10. Eichelberger, A.H.; McCartt, A.T. Toyota drivers’ experiences with Dynamic Radar Cruise Control, Pre-Collision System, and Lane-Keeping Assist. J. Saf. Res. 2016, 56, 67–73. [Google Scholar] [CrossRef]
  11. Murphey, Y.L.; Kolmanovsky, I.; Watta, P. AI-Enabled Technologies for Autonomous and Connected Vehicles; Springer Nature: Cham, Switzerland, 2022. [Google Scholar]
  12. Alanazi, F. A Systematic Literature Review of Autonomous and Connected Vehicles in Traffic Management. Appl. Sci. 2023, 13, 1789. [Google Scholar] [CrossRef]
  13. Fagnant, D.J.; Kockelman, K. Preparing a nation for autonomous vehicles: Opportunities, barriers and policy recommendations. Transp. Res. Part A Policy Pract. 2015, 77, 167–181. [Google Scholar] [CrossRef]
  14. Murtaza, M.; Cheng, C.-T.; Fard, M.; Zeleznikow, J. Preparing drivers for the future: Evaluating the effects of training on drivers’ performance in an autonomous vehicle landscape. Transp. Res. Part F Traffic Psychol. Behav. 2023, 98, 280–296. [Google Scholar] [CrossRef]
  15. Boelhouwer, A.; Beukel, A.v.D.; van der Voort, M.; Martens, M. Should I take over? Does system knowledge help drivers in making take-over decisions while driving a partially automated car? Transp. Res. Part F Traffic Psychol. Behav. 2018, 60, 669–684. [Google Scholar] [CrossRef]
  16. Kim, S.; Oh, J.; Seong, M.; Jeon, E.; Moon, Y.-K.; Kim, S. Assessing the Impact of AR HUDs and Risk Level on User Experience in Self-Driving Cars: Results from a Realistic Driving Simulation. Appl. Sci. 2023, 13, 4952. [Google Scholar] [CrossRef]
  17. Rui, Y.; Wang, S.; Wu, R.; Shen, Z. Research on Truck Lane Management Strategies for Platooning Speed Optimization and Control on Multi-Lane Highways. Appl. Sci. 2023, 13, 4072. [Google Scholar] [CrossRef]
  18. Taplin, T.; Harrison, K.; Robinson, C. Regulation of Autonomous Vehicles, California. 2022. Available online: https://berkeleyca.gov/sites/default/files/documents/2022-10-11%20Item%2015%20Regulation%20of%20Autonomous%20Vehicles.pdf (accessed on 18 June 2023).
  19. Kim, W.; Yang, H.; Kim, J. Blind Spot Detection Radar System Design for Safe Driving of Smart Vehicles. Appl. Sci. 2023, 13, 6147. [Google Scholar] [CrossRef]
  20. Rad, S.R.; Farah, H.; Taale, H.; van Arem, B.; Hoogendoorn, S.P. The impact of a dedicated lane for connected and automated vehicles on the behaviour of drivers of manual vehicles. Transp. Res. Part F Traffic Psychol. Behav. 2021, 82, 141–153. [Google Scholar] [CrossRef]
  21. Tsugawa, S.; Jeschke, S.; Shladover, S.E. A Review of Truck Platooning Projects for Energy Savings. IEEE Trans. Intell. Veh. 2016, 1, 68–77. [Google Scholar] [CrossRef]
  22. Faisal, A.; Kamruzzaman, M.; Yigitcanlar, T.; Currie, G. Understanding autonomous vehicles: A systematic literature review on capability, impact, planning and policy. J. Transp. Land Use 2019, 12, 45–72. [Google Scholar] [CrossRef]
  23. Murtaza, M.; Cheng, C.-T.; Fard, M.; Zeleznikow, J. Supporting Driver Training—From Vehicles with Advanced Driver Assistance Systems to Fully Autonomous Vehicles. In Proceedings of the Autonomous Vehicle Technology conference-APAC21, Melbourne, Australia, 3–5 October 2022; FISITA and SAE: Melbourne, Australia, 2022. Available online: https://www.fisita.com/library/apac-21-136 (accessed on 13 July 2023).
  24. Murtaza, M.; Cheng, C.-T.; Fard, M.; Zeleznikow, J. The importance of transparency in naming conventions, designs, and operations of safety features: From modern ADAS to fully autonomous driving functions. AI Soc. 2022, 38, 983–993. [Google Scholar] [CrossRef]
  25. Victor, T.W.; Tivesten, E.; Gustavsson, P.; Johansson, J.; Sangberg, F.; Aust, M.L. Automation Expectation Mismatch: Incorrect Prediction Despite Eyes on Threat and Hands on Wheel. Hum. Factors 2018, 60, 1095–1116. [Google Scholar] [CrossRef] [PubMed]
  26. Merriman, S.E.; Revell, K.M.; Plant, K.L. What does an Automated Vehicle class as a hazard? Using online video-based training to improve drivers’ trust and mental models for activating an Automated Vehicle. Transp. Res. Part F Traffic Psychol. Behav. 2023, 98, 1–17. [Google Scholar] [CrossRef]
  27. Gaspar, J.G.; Carney, C.; Shull, E.; Horrey, W.J. The Impact of Driver’s Mental Models of Advanced Vehicle Technologies on Safety and Performance [Supporting Datasets]. 2020. Available online: https://rosap.ntl.bts.gov/view/dot/56626 (accessed on 13 July 2023).
  28. Benson, A.; But, J.; Gaspar, J.; Carney, C.; Horrey, W.J. Advanced vehicle technology: Mapping mental model accuracy and system exposure to driver behavior. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 65, 1072–1076. [Google Scholar] [CrossRef]
  29. Nandavar, S.; Kaye, S.-A.; Senserrick, T.; Oviedo-Trespalacios, O. Exploring the factors influencing acquisition and learning experiences of cars fitted with advanced driver assistance systems (ADAS). Transp. Res. Part F Traffic Psychol. Behav. 2023, 94, 341–352. [Google Scholar] [CrossRef]
  30. Dosovitskiy, A.; Ros, G.; Codevilla, F.; Lopez, A.; Koltun, V. CARLA: An Open Urban Driving Simulator. In Proceedings of the 1st Annual Conference on Robot Learning, Mountain View, CA, USA, 13–15 November 2017; PMLR: Mountain View, CA, USA, 2017; pp. 1–16. [Google Scholar]
  31. Nwadiuto, J.C.; Okuda, H.; Suzuki, T. Driving Behavior Modeling Based on Consistent Variable Selection in a PWARX Model. Appl. Sci. 2021, 11, 4938. [Google Scholar] [CrossRef]
  32. CarSim. Car Sim Mechanical Simulation. Available online: https://www.carsim.com/ (accessed on 15 February 2024).
  33. Allen, M. Vehicle Dynamics International. 2023. Available online: https://www.vehicledynamicsinternational.com/ (accessed on 15 February 2024).
  34. Chen, H.-Y.W.; Guo, Z.; Ebnali, M. Is text-based user manual enough? A driving simulator study of three training paradigms for conditionally automated driving. Transp. Res. Part F Traffic Psychol. Behav. 2023, 95, 355–368. [Google Scholar] [CrossRef]
  35. Zahabi, M.; Razak, A.M.A.; Shortz, A.E.; Mehta, R.K.; Manser, M. Evaluating advanced driver-assistance system trainings using driver performance, attention allocation, and neural efficiency measures. Appl. Ergon. 2020, 84, 103036. [Google Scholar] [CrossRef] [PubMed]
  36. Boelhouwer, A.; van den Beukel, A.P.; van der Voort, M.C.; Hottentot, C.; de Wit, R.Q.; Martens, M.H. How are car buyers and car sellers currently informed about ADAS? An investigation among drivers and car sellers in the Netherlands. Transp. Res. Interdiscip. Perspect. 2020, 4, 100103. [Google Scholar] [CrossRef]
  37. Greenwood, P.M.; Lenneman, J.K.; Baldwin, C.L. Advanced driver assistance systems (ADAS): Demographics, preferred sources of information, and accuracy of ADAS knowledge. Transp. Res. Part F Traffic Psychol. Behav. 2022, 86, 131–150. [Google Scholar] [CrossRef]
  38. Viktorová, L.; Šucha, M. Learning about advanced driver assistance systems—The case of ACC and FCW in a sample of Czech drivers. Transp. Res. Part F Traffic Psychol. Behav. 2019, 65, 576–583. [Google Scholar] [CrossRef]
  39. Merriman, S.E.; Plant, K.L.; Revell, K.M.; Stanton, N.A. A new approach for Training Needs Analysis: A case study using an Automated Vehicle. Appl. Ergon. 2023, 111, 104014. [Google Scholar] [CrossRef] [PubMed]
  40. Alexander, K.P. The Usability of Print and Online Video Instructions. Tech. Commun. Q. 2013, 22, 237–259. [Google Scholar] [CrossRef]
  41. Buch, S.V.; Treschow, F.P.; Svendsen, J.B.; Worm, B.S. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial. Adv. Med. Educ. Pract. 2014, 5, 257–262. [Google Scholar] [CrossRef]
  42. Ji, T.A.; Butterworth, S. Using Video and Paper–Based Educational Resources to Teach Common Surgical Techniques to Pre–Clerkship Medical Students: Results from a Simulation–Based Training Workshop. UBC Med. J. 2019, 11, 16–18. [Google Scholar]
  43. Lloyd, S.A.; Robertson, C.L. Screencast Tutorials Enhance Student Learning of Statistics. Teach. Psychol. 2012, 39, 67–71. [Google Scholar] [CrossRef]
  44. Nimmerichter, A.; Weber, N.J.R.; Wirth, K.; Haller, A. Effects of Video-Based Visual Training on Decision-Making and Reactive Agility in Adolescent Football Players. Sports 2015, 4, 1. [Google Scholar] [CrossRef] [PubMed]
  45. Noetel, M.; Griffith, S.; Delaney, O.; Sanders, T.; Parker, P.; Cruz, B.d.P.; Lonsdale, C. Video Improves Learning in Higher Education: A Systematic Review. Rev. Educ. Res. 2021, 91, 204–236. [Google Scholar] [CrossRef]
  46. Ragazou, V.; Karasavvidis, I. Effects of Signaling and Practice Types in Video-Based Software Training. Educ. Sci. 2023, 13, 602. [Google Scholar] [CrossRef]
  47. van der Meij, H.; van der Meij, J. A comparison of paper-based and video tutorials for software learning. Comput. Educ. 2014, 78, 150–159. [Google Scholar] [CrossRef]
  48. Mayer, R.E. The promise of multimedia learning: Using the same instructional design methods across different media. Learn. Instr. 2003, 13, 125–139. [Google Scholar] [CrossRef]
  49. SAE International. SAE Levels of Driving Automation™ Refined for Clarity and International Audience; SAE International: Warrendale, PA, USA, 2021; Available online: https://www.sae.org/blog/sae-j3016-update (accessed on 20 December 2023).
  50. Kelechava, B. SAE Levels of Driving Automation; American National Standards Institute: Washington, DC, USA. Available online: https://blog.ansi.org/sae-levels-driving-automation-j-3016-2021/ (accessed on 15 February 2024).
  51. Kolekar, S.; de Winter, J.; Abbink, D. Human-like driving behaviour emerges from a risk-based driver model. Nat. Commun. 2020, 11, 4850. Available online: https://www.nature.com/articles/s41467-020-18353-4 (accessed on 20 December 2023). [CrossRef]
  52. Commonwealth of Massachusetts. Safe Driver Insurance Plan (SDIP) and Your Auto Insurance Policy. Available online: https://www.mass.gov/info-details/safe-driver-insurance-plan-sdip-and-your-auto-insurance-policy#:~:text=An%20experienced%20driver%20is%20someone,recognized%20for%20Basic%20insurance%20discounts (accessed on 25 November 2023).
  53. Robbins, C.; Chapman, P. How does drivers’ visual search change as a function of experience? A systematic review and meta-analysis. Accid. Anal. Prev. 2019, 132, 105266. [Google Scholar] [CrossRef]
  54. Brysbaert, M. How many words do we read per minute? A review and meta-analysis of reading rate. J. Mem. Lang. 2019, 109, 104047. [Google Scholar] [CrossRef]
  55. Merriman, S.E.; Revell, K.M.; Plant, K.L. Training for the safe activation of Automated Vehicles matters: Revealing the benefits of online training to creating glaringly better mental models and behaviour. Appl. Ergon. 2023, 112, 104057. [Google Scholar] [CrossRef] [PubMed]
  56. Shapiro, S.S.; Wilk, M.B. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 1965, 52, 591. [Google Scholar] [CrossRef]
  57. Anderson, T.W.; Darling, D.A. Asymptotic theory of certain “goodness of fit” criteria based on stochastic processes. Ann. Math. Stat. 1952, 23, 193–212. [Google Scholar] [CrossRef]
  58. Field, A. Discovering Statistics Using IBM SPSS Statistics; Sage: London, UK, 2013. [Google Scholar]
  59. Ebnali, M.; Hulme, K.; Ebnali-Heidari, A.; Mazloumi, A. How does training effect users’ attitudes and skills needed for highly automated driving? Transp. Res. Part F Traffic Psychol. Behav. 2019, 66, 184–195. [Google Scholar] [CrossRef]
  60. Oviedo-Trespalacios, O.; Tichon, J.; Briant, O. Is a flick-through enough? A content analysis of Advanced Driver Assistance Systems (ADAS) user manuals. PLoS ONE 2021, 16, e0252688. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Experiment setup: [a: steering wheel, b: driving seat, c: pedals].
Figure 2. ADAS functions’ locations: [d: Driving environment, e: LKA, f: ACC, g: AP-On, h: CA].
Figure 3. A snapshot of the simulator interface: [i: Rear mirror view, j: front of the simulated AV, k: Shows LKA function is ON, l: right side mirror, m: speedometer, n: left side mirror view].
Figure 4. A snapshot of the simulator interface: [o: pedestrian crossing (magenta colour), p: other car, q: emergency vehicle, r: truck].
Figure 5. An overview of the simulated driving scenario [red and green arrows indicate the low-speed and high-speed zones, respectively].
Figure 6. AP-On and CA reaction time comparison between the video and user manual training methods. “+” indicates outliers.
Figure 7. LKA and ACC reaction time comparison between the video and user manual training methods. “+” indicates outliers.
Table 1. Instructions for each trigger.

Trigger No | Instruction | Function Type
T1 | Turn on the Autopilot function | AV
T2 | Turn on the Lane-Keeping Assist function | ADAS
T3 | Turn on the Collision Avoidance function | ADAS
T4 | Turn on the Adaptive Cruise Control function | ADAS
Table 2. Participant division—age group.

Training Type | Number of Participants | Age Group (20–30 Years) | Age Group (30–40 Years) | Age Group (Above 40 Years)
Video/Text-based | 48 | 21 | 15 | 12
Table 3. Participant division—driving experience.

Training Type | Number of Participants | Novice Driver (1–3 Years) | Intermediate Driver (4–6 Years) | Experienced Driver (Above 6 Years)
Video/Text-based | 48 | 15 | 13 | 20
Table 4. Participants’ average accuracies after being trained using different methods.

Group Division—Training Method | No. of Participants in Each Group | AP-On Response Accuracy (%) | LKA Response Accuracy (%) | CA Response Accuracy (%) | ACC Response Accuracy (%)
Video-based | 48 | 100 | 77 | NA | NA
User manual | 48 | NA | NA | 79 | 68
