1. Introduction
In the current era, transportation and mobility are undergoing a steady paradigm shift that is changing everything from the fuel used to how vehicles are driven [1]. Within this shift, many novel technologies have emerged to support intelligent transportation and sustainable urban mobility, progressively focusing on making vehicles autonomous and capable of communicating and cooperating [1]. As a result, over the long term, substantial transformations are expected in mobility as a service within future mobility solutions for smart and sustainable development. In addition, urban regions will develop faster, and cities are expected to become smart cities in which vehicles communicate with the urban infrastructure and driving users interact with them [2].
Autonomous vehicles (AVs) have begun to appear on city roads, and they have a significant role to play in the future of sustainable and smart transportation for urban regions [3]. Sustainable and smart transportation systems significantly mitigate the adverse effects of urban development on the environment, economy, and society [4]. The widespread adoption of AVs can decrease environmental degradation by controlling emissions and minimizing energy consumption, and it can provide economic and social benefits by improving the efficiency, safety, and accessibility of transport services [5]. AVs are uniquely equipped to provide a safe travel mode by eliminating human driving errors [6].
Contrary to humans, AVs perform driving tasks tirelessly and without distraction. Autonomous driving has recently moved from the “may be possible” domain to “has happened practically.” Beyond safety, security, and entertainment for driving users, AVs also contribute a step forward in smart and sustainable development. They are an emerging technology that provides better services and performance for users via automatic driving skills, without requiring a human driver. The backbone of AV development is the revolutionary growth in sensor and communication technologies [7]. Various sensors and communication modules, namely radio detection and ranging, light detection and ranging, ultrasonic sensors, cameras, and global navigation satellite systems, are used in AVs to perceive the surrounding environment and gather related information [8]. Powerful computers with specialized software, machine learning (ML) systems, artificial intelligence models, complex algorithms, and hard-coded rules process the captured data and make logical decisions to perform the driving task accurately in a complex environment, as humans do [9]. After processing, the computer directs the actuators to act for uninterrupted driving. This self-driving system is called an advanced driver assistance system (ADAS).
Besides the ADAS, which comprises the sensors and communication modules mentioned above, a voice command system (VCS) is used in AVs so that driving users can interact with them [10]. Driving users communicate with the VCS to perform a variety of hands-free functions, including navigation (setting a destination, changing routes, and searching for points of interest); climate control (changing the temperature, fan speed, and airflow); media and entertainment (operating infotainment systems, e.g., changing the volume, skipping tracks, or switching between radio stations); communication (making phone calls or sending and receiving emails); vehicle settings (headlight and windshield wiper controls); vehicle status (the vehicle’s current speed, fuel level, door and window locks, and other status information); and emergency assistance (calling for help or requesting roadside assistance) [11]. Several researchers have also implemented vehicle control functions, including turn signals, gear selection, engine control, and lane changes or taking an exit at a divergence, using voice commands through VCSs [11]. The VCS plays a critical role in the safety of driving users in situations where the AV has lost control due to malfunctioning or faults in the hardware of the installed computer, sensors, and other associated modules. In such uncertain and risky situations, driving users can control the AV using voice notes to perform driving tasks such as changing speed, direction, and lanes, as well as braking, to reach a safe condition. Furthermore, driving users need manual control over AVs for some manoeuvres, like changing lanes or taking an exit at a divergence, and these tasks can also be performed with voice commands through the VCS. Therefore, it is crucial to determine the exact voice note used to instruct the different actuators in a risky situation. Current VCSs face various challenges in interpreting the actual command from a voice note due to listening issues and time-constrained and uncertain response problems. This study addresses these issues using an input recognition model based on natural language processing (NLP). Overall, the proposed model improves AV controls, directly enhancing the safety of driving users in risky situations.
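A VCS of the kind described above ultimately routes a recognised intent to a vehicle subsystem. The sketch below shows a dispatch-table view of that routing; the intent names, messages, and fail-safe behaviour are hypothetical, chosen only to illustrate that unknown commands should fail safe (re-prompt) rather than act:

```python
# Hypothetical dispatch table mapping a recognised intent to a vehicle subsystem.
HANDLERS = {
    "set_destination": lambda arg: f"navigation: routing to {arg}",
    "set_temperature": lambda arg: f"climate: target {arg} degrees",
    "emergency_stop":  lambda arg: "control: braking to a safe stop",
}

def dispatch(intent: str, argument: str = "") -> str:
    """Route a recognised voice intent; unknown intents fail safe by re-prompting."""
    handler = HANDLERS.get(intent)
    if handler is None:
        return "prompt: please repeat the command"
    return handler(argument)
```

The fail-safe branch matters most in the risky situations the paper targets: acting on a misrecognised command is worse than asking the driving user to repeat it.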
AVs provide various functions and processes for users, ensuring their safety and security. Identification, classification, detection, and analysis processes are widely used in AVs to provide appropriate services [12]. Identifying voice commands in AVs is a complicated task that is necessary for various methods and analysis processes. A VCS is used in AVs to find the exact command in a voice note, which is then used in the analysis process [13]. VCSs provide an accurate set of voice notes, ensuring a high accuracy rate in the data processing system. They use intelligent techniques to learn how to interact with people and recognize users’ voice commands while travelling in AVs, providing uninterrupted and accurate services [14]. A VCS is installed in AVs to capture users’ voice commands and securely store the data for later use. The voice user interface method is used in AVs, utilizing a user mental model to identify the exact voice commands [15]. Artificial intelligence and big data analysis are used in AVs to fetch related data for identification. The user mental model is used to determine how the user thinks and to find the exact meaning of voice commands in order to perform tasks or services in AVs [7].
NLP is an interactive process between computers and human language. It is a branch of computer science that provides an accurate understanding of data for analysis. Human language is separated into fragments to find the grammatical structure and give the correct meaning of a sentence, which plays a vital role in the data processing system [16]. When analyzing a large amount of data, NLP provides a better set of data and a way for computers to reduce latency [17]. AVs ensure user safety by providing various services and functions that enhance the system’s feasibility, and they use NLP to offer an accurate communication process for users, reducing the rate of AV accidents [18]. NLP exploits features such as text format, text structure, and sentence size to improve classification and identification accuracy. It determines the format and structure of text to identify its actual meaning and content, providing an accurate set of data for the data processing system in AVs. NLP uses a process called “knowledge discovery” that identifies the meaning of the text and improves the feasibility of AVs [19,20].
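The fragmentation step mentioned above (splitting an utterance into pieces before grammatical analysis) can be sketched with plain tokenization. This is a generic illustration, not the paper's method; the regular expression and the use of bigrams as a stand-in for local structure are my own simplifications:

```python
import re

def fragment(sentence: str) -> list[str]:
    """Split an utterance into lower-cased word tokens, the first step
    before any grammatical or semantic analysis."""
    return re.findall(r"[a-z0-9']+", sentence.lower())

def bigrams(tokens: list[str]) -> list[tuple[str, str]]:
    """Adjacent word pairs give a crude view of local grammatical structure."""
    return list(zip(tokens, tokens[1:]))
```

Real NLP pipelines go much further (parsing, semantic role labelling), but every one of them starts from a segmentation like this.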
ML is a subset of artificial intelligence used to improve the accuracy of prediction, detection, and analysis processes in various fields, and ML techniques are used in AVs to enhance system performance [21]. NLP provides a better interaction process between computers and humans, increasing the system’s feasibility, and ML-based NLP methods are widely used in AVs to improve the accuracy of identifying the exact meaning of a user’s communication [22]. A neural network is used in NLP to identify the pattern and structure of the text, producing an accurate set of data for the analysis process; the identified patterns are converted into vectors by the network, which yields the actual meaning of texts and sentences for AVs [23]. The produced data are used in various AV processes, improving user safety and reducing the accident rate. Combined ML techniques are also used in the NLP process, providing an exact dataset for the analysis process; for example, support-vector machines and deep-learning algorithms have been combined into a new technique for performing NLP in AVs, enhancing system performance [24,25].
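The text-to-vector idea above can be made concrete with the simplest possible version: bag-of-words counts compared by cosine similarity to pick the nearest intent. This is a toy stand-in for the neural embeddings the paragraph describes; the example intents and phrases are hypothetical:

```python
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    """Bag-of-words vector: token -> count (a crude substitute for learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(utterance: str, examples: dict[str, str]) -> str:
    """Return the intent whose example text is most similar to the utterance."""
    u = vectorize(utterance)
    return max(examples, key=lambda k: cosine(u, vectorize(examples[k])))
```

A trained model replaces the counts with dense vectors, but the nearest-vector decision rule is the same.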
The sustainability of a transportation system depends on its users’ safety, with human life being the most valuable resource. Ensuring the safety of individuals is a priority and should be a fundamental aspect of any sustainable transportation system. The ultimate objective of a sustainable and safe road transportation system is to eradicate fatalities, severe injuries, and permanent harm by systematically addressing the issue and reducing the inherent hazards of the entire road transportation system [26]. Road user behaviour is a crucial aspect of transportation and mobility, as it is considered the main contributing factor in most road crashes. Therefore, understanding and addressing the behaviour of road users is crucial in reducing the number of accidents and fatalities on the roads [27]. Studying road user behaviour involves understanding human factors, such as physical [28], psychological [29], and cognitive [30] factors, as well as infrastructure [31], climate [31], and technological factors influencing road users’ actions [32]. Many studies have shown that driver characteristics, such as age, gender, sleeping hours, working hours, reckless driving, distracted driving, and road user education, are associated with increased risk-taking behaviour on the road [33].
Additionally, cognitive factors, such as driving stress and decision-making abilities, have been found to play a role in road user safety [34]. Infrastructure and climate conditions include the design and layout of the road network [35] and weather conditions [31]. Technological factors refer to advanced technologies promoting safe behaviour, such as ADASs [36,37], navigation systems [38], and AVs [39]. Researchers have recently focused on the relationship between technology and road user behaviour [38]. With the increasing prevalence of ADASs and AVs, there is a growing need to understand how these technologies can promote safe road user behaviour. Many studies have investigated the effectiveness of ADASs and AVs in reducing crash and accident rates and improving driver performance [37,40]. Automated driving has the potential to revolutionize road transportation by increasing safety, improving traffic flow, and providing mobility for all [40]. Furthermore, AVs eliminate the impact of human-related factors by removing human involvement in driving tasks [41]. Therefore, this study proposes an NLP-based input recognition model to improve AV controls and quality, ultimately contributing to road user safety.
In the context of AVs, NLP can be used to improve the accuracy of voice commands given to the vehicle; applying NLP therefore reduces errors and misunderstandings and improves the AV’s overall functionality. ML algorithms train the vehicle to recognize specific voice commands and make decisions based on them, improving the safety of AVs by reducing the risk of accidents caused by human error. Research on VCS applications has focused on combining NLP and ML techniques to improve the functionality and safety of AVs. Many studies have proposed methods for enhancing the VCS in AVs, utilizing NLP techniques for input recognition and ML techniques to enhance AV performance. For example, some studies have proposed a real-time traffic reporting system using NLP for social media networks, and others have proposed visualizing natural language interaction for a conversational in-vehicle information system. Overall, using NLP and ML techniques in AVs can improve the accuracy of voice commands, enhance AV performance, and increase AV safety by reducing the risk of accidents caused by human error.
VCSs in AVs play a crucial role in improving road user safety. These systems allow hands-free operation and vehicle control functions, reducing potential risks. Using NLP and ML techniques, the system can interpret and respond to spoken commands, such as navigation instructions, climate control adjustments, infotainment system controls, vehicle control functions, and driving functions. Additionally, in the case of a malfunction or failure in the vehicle hardware, the VCS can act as a fail-safe mechanism, allowing driving users to take control of the vehicle using voice commands to bring it safely to a stop or navigate to a safe location. As a result, VCSs can greatly improve safety in critical situations where manual intervention is necessary, and their integration with ADASs in AVs significantly contributes to road user safety by providing a fail-safe mechanism. The proposed variation continuous input recognition model (VCIRM) is a novel approach for continuously interpreting spoken commands or input. It allows for variations in how a command is spoken, such as accents, speed, and phrasing. In contrast to traditional input recognition models, which may only recognize specific, pre-determined phrases or commands, the proposed VCIRM can more accurately understand and respond to spoken commands, even if they are spoken differently than originally anticipated. It also increases the flexibility of the system, allowing it to respond to a wider range of user inputs. Such models are frequently used in NLP and speech recognition systems, including those in AVs, to improve the accuracy of voice commands and enhance the performance and safety of the vehicle.
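Tolerating variations in phrasing, as described above, can be demonstrated in miniature with fuzzy string matching from the standard library. This is only an illustration of the idea, not the proposed VCIRM; the command list and similarity cutoff are arbitrary choices of mine:

```python
import difflib
from typing import Optional

# Hypothetical canonical command set a vehicle might accept.
COMMANDS = ["turn on the air conditioning", "navigate home", "increase volume"]

def recognise(utterance: str, cutoff: float = 0.6) -> Optional[str]:
    """Fuzzy-match a spoken variant to the closest canonical command,
    or return None when nothing is close enough (the system would re-prompt)."""
    matches = difflib.get_close_matches(utterance.lower(), COMMANDS, n=1, cutoff=cutoff)
    return matches[0] if matches else None
```

A rigid keyword matcher would reject "navigate me home" outright; a variation-tolerant matcher maps it to the intended command, while still refusing utterances that resemble nothing in the command set.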
The paper is divided into five sections. Section 1 provides the research background and rationale. Section 2 reviews the existing literature in the field and highlights the gap in the current state of the art that the proposed model aims to address. Section 3 describes the design and implementation of the proposed VCIRM, as well as the techniques used to train and evaluate it. Section 4 presents the results of experiments conducted to assess the performance of the proposed model, comparing it to existing models and discussing the results. Finally, Section 5 summarizes the key findings of the study and highlights the importance of the proposed model in the context of driving users’ safety.
2. Related Works
In recent years, the field of AVs has seen significant research focused on improving these vehicles’ functionality and safety through NLP and ML techniques. NLP is used to extract meaning and structure from human language, while ML involves using algorithms and statistical models to analyze large amounts of data and make predictions. These techniques can be combined to improve the accuracy of voice commands given to AVs and enhance the performance and safety of these vehicles. This literature survey aims to provide an overview of the various studies conducted in this field, highlighting the recent developments and current state of research on NLP and ML for AVs.
Wan et al. [42] introduced an automated NLP-based framework (ANLPF), a real-time traffic reporting system that applies NLP to social media networks. The method performs text mining to find the exact meaning and content of the text, providing accurate data for drivers and users, and a question-answering model is used to extract information, which plays a vital role in identifying traffic flow on roads. The proposed traffic reporting system is more accurate than other methods in informing users. Braun et al. [43] proposed visualizing natural language interactions for a conversational in-vehicle information system. The method improves speech-based interaction in the in-vehicle system: a certain set of keywords is given to capture the exact content of the text, which enhances the interaction process for users, and the attractiveness of the interface is increased by using icons and symbols that provide accurate detail about the interaction. The method improves the visualization of the interaction process, increasing accuracy in the prediction and detection processes. Solorio et al. [44] introduced a semi-autonomous utility vehicle based on off-the-shelf home automation components. It is a voice-activated automated system that uses hardware and software elements for interaction; the approach is mostly used in web and smart applications to enhance control and command over vehicles, improving system performance. A speaker and voice recognizer are used in the vehicle to provide accurate information and services for users.
Choi et al. [45] developed an active-beacon-based driver sound separation (ABDSS) system for AVs. Voice commands play a vital role in this system, which provides optimal voice commands for the interaction and service processes. The system separates the driver’s voice from other voices so that services in AVs are more accurate, and voice signals are identified using a distinguishing process that enhances the efficiency and feasibility of the system. Riaz et al. [46] introduced an emotion-inspired cognitive agent scheme for spectrum mobility in cognitive-radio sites. The scheme improves the efficiency of spectrum mobility using a fear factor and increases the speed and accuracy of mobility using a fuzzy logic algorithm; experimental results show that the agent increases system performance and the spectral mobility rate. Saradi et al. [47] proposed a voice-based motion control scheme for a robotic vehicle using a visible light-fidelity communication process. In this system, an artificial neural network is trained on the interaction data to control the motion of the vehicle; the light-fidelity process increases data bandwidth and efficiency, providing better user service and communication. The scheme significantly improves accuracy in the interaction process, enhancing the system’s feasibility and reliability.
Sachdev et al. [48] introduced a voice-controlled AV that uses the Internet of Things to determine the user’s exact location, position, and direction via a voice-controlled remote sensing system. The Internet of Things provides the necessary information about an AV using surveillance cameras and a global positioning system. The AV follows the user’s voice commands, reducing accident and latency rates in providing services, and the method improves the overall performance and efficiency of the system. Ni et al. [49] proposed a domain-specific natural language information brokerage for AVs. The method works as a task helper, providing necessary services for users at the appropriate time; a question-answering mechanism utilizes essential data to provide accurate user service, and the method improves accuracy in delivering relevant, precise, high-quality services. Zhang et al. [50] introduced a lightweight vehicular voice cloud evaluation system (LVVCES) for AVs. Voice signals are first sent to the cloud to find the user commands needed for providing services, and a tester identifies the optimal solution and data for the analysis process, which reduces unwanted problems and threats in the communication process. The system increases the overall quality of experience of the AV, enhancing its performance. Katsikeas et al. [51] proposed a vehicle modelling and simulation language for AVs. The method provides better security for vehicles against vehicular cyberattacks and uses a vehicle-to-vehicle (V2V) approach to improve AV communication and authorization processes; it is also used for risk management and threat modelling for AVs, which increases the system’s efficiency.
Wang et al. [52] introduced a distributed dynamic route guidance system for a cooperative vehicle infrastructure using short-term forecast data. Short-term forecast data are used for the prediction and detection processes, and the method reduces threats and problems in the prediction and analysis processes, increasing system performance. Experimental results show that the guidance system makes a cooperative vehicle infrastructure system more efficient and feasible. Asmussen et al. [53] proposed a socio-technical AV model using ranked-choice stated preference data. The model is used to determine the AV’s mobility rate, speed, accuracy, and control rate for users, and it provides an optimal dataset for further processing and operation in an AV; it also determines users’ precise voice and text commands for providing services.
Zheng et al. [54] introduced a new V2V communication process for AVs. The method promotes cooperative lane changes in a V2V communication system, which enhances the communication process for users; in lane changes, the collision trigger time is used to improve communication in AVs. Experimental results show that the method improves performance and protects users from attackers. Totakura et al. [55] focused on developing self-driving cars using convolutional neural networks and on identifying and addressing potential drawbacks. The self-driving model was trained using data from the Asphalt-8 game, while a separate convolutional neural network model for voice-command prediction was trained with the voices of a child, a man, and a woman. The accuracy of both models was found to be 99%, and they were tested on the same game for optimal results. This research demonstrates the effectiveness of convolutional neural network models in self-driving cars and highlights the importance of addressing drawbacks to ensure safe and sustainable road user behaviour.
3. Proposed Variation Continuous Input Recognition Model
Variations in AV interactive gesture and voice control processing are experienced alongside safety and driver assistance. The VCS uses automatic systems trained and loaded with several pre-defined commands and functions. These functions instruct the driver to perform safe driving actions, which ensure overall safety. Amid the challenges in interactive voice systems, NLP features in AVs use quality control and data availability to identify user requirements and satisfy the different drivers who rely on driving support. The driving supports available to users, from adaptive cruise control, autonomous emergency braking, electronic stability control, blind-spot detection, V2V communication, and vehicle guidance systems to voice recognition and control, require distinguishable services. Therefore, regardless of the interactive system’s voice input and vehicle detection, the availability of distinguishable and non-distinguishable data for training is a prominent deciding factor.
Figure 1 illustrates the schematic diagram of the proposed VCIRM.
The proposed VCIRM focuses on the listening span and the data readiness of the available data, toned through a linear training process. In this approach, internal controls or external driving supports are administrable for driving users, with their trainable and non-trainable data, based on the response lapses. AV driving users can access interactive voice input through perfect voice recognition, identification of the user’s requirements, and responses generated using NLP. The proposed VCIRM operates between the vehicles and the driving users. In this model, distinguishable and non-distinguishable data for the available internal controls and driving supports are feasible for achieving response lapses for the different users and vehicles. The voice input recognition model also aims to provide split-less responses and maximize data availability. The model operates in two forms, distinguishable and non-distinguishable, concurrently. The non-distinguishable data are separated into trainable and non-trainable data to handle the different internal controls or external driving supports, as shown in Figure 1. The operations of the interactive voice input of AV driving users follow the objective function shown in Equations (1a,b).
In Equations (1a,b), the variables represent the interactive voice input detection of the driving users, the user requirements, the responses, and the distinguishable data, respectively. The subsequent variables denote the response time, the user-requirement accepting time, and the input responding time. The third objective of this technique is to minimize the distinguishable data. Given the set of driving users, the number of voice input detections within the user-requirement accepting time, together with the user requirements, determines the admittable detection process across all AV driving users.
Voice input detection and perfect recognition are made reliable by toning and training on the incoming data. In this research, variations in the toning and training data are essential to identify non-trainable additional data. The demanding requirement is the linear input of the driving users; the time remaining for distinguishable data is the helping factor for improving the training rate. The detection of the available voice input data is performed using a linear learning process. Later, depending on the detection of the interactive voice system, the non-distinguishable process is the augmenting feature. From this detection process, the listening span and data readiness are the prevailing instances for determining the various constraints. The pre-modelling of the data and the availability requirements for training are detailed in the following sections.
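The model's split of incoming voice inputs into distinguishable and non-distinguishable streams can be illustrated with a minimal sketch. This is my own reading of the description, not the paper's algorithm: here "distinguishable" is approximated as "already seen in the trained data", and anything else is routed onward for further toning/training:

```python
def route(inputs: list[str], trained: set[str]) -> tuple[list[str], list[str]]:
    """Split incoming voice inputs into a distinguishable stream (matched
    against trained data) and a non-distinguishable stream (needing
    further toning/training), preserving arrival order."""
    distinguishable = [x for x in inputs if x in trained]
    non_distinguishable = [x for x in inputs if x not in trained]
    return distinguishable, non_distinguishable
```

In the VCIRM, the distinguishable stream corresponds to Case 1 (fast, low-lapse responses) and the non-distinguishable stream to Case 2 (time-constrained handling with additional training), discussed next.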
3.1. Case 1: Distinguishable Data Detection
In this case, the detection of distinguishable data for all driving users is the considered factor. The distinguishable data detection process is illustrated in Figure 2. Via distinguishable data processing, the common interactive inputs are segregated from the unfamiliar (unrecognisable) inputs, and the inputs are differentiated. This differentiation is performed to accept inputs for the toning process, from which the time required for responding to the inputs is obtained. The ratio of responses to inputs is required for the consecutive classification, as shown in Figure 2. The probability of distinguishable data is given consecutively in Equations (2a,b).
where,
From Equations (2a,b), the sequential detection of voice input follows a constant probability, such that there are no uncertain responses. Therefore, the estimate in Equations (1a,b) holds, and the detection of distinguishable user requirements follows Equation (3).
However, the distinguishable data detection in Equation (3) is valid for both conditions and their handling, ensuring time-constrained listening responses. With the converging process of perfect recognition reducing the constraint problem, the distinguishable data are described through detection or perfect recognition. The identifiable constraint involves the trainable data, which must remain small to satisfy Equations (1a,b). The contrary output in Case 1 is a prolonged response time, which results in a lower response rate.
3.2. Case 2: Non-Distinguishable Data Detection
In the non-distinguishable data detection process, the uncertainty condition is high. Hence, the internal control/external driving support of users is time-constrained. In addition to the constrained time, the trainable and remaining information are considered metrics for this case. The non-distinguishable data detection process is presented in Figure 3. The input is identified as non-distinguishable, from which the non-consecutive sequences are segregated and cross-validated for extraction. This extraction is performed to prevent anonymous inputs within a distinct interval (before classification). Therefore, for the varying inputs, the process is pursued unanimously, preventing uncertainty, as presented in Figure 3. The probability of non-distinguishable data is given by Equations (4) and (5a,b).
where,
In the above equations, the interactive voice input detection operation is defined for the non-distinguishable case. For all detection processes, the uncertainty in assigning information constitutes the training data problem. Under this constraint, voice detection requires a greater response time, thereby increasing the training rate.
According to the analysis of Cases 1 and 2, the identifiable conditions are the uncertainty variation in Case 1, the training data, and the responding time. These conditions are addressed using linear learning, which mitigates the issues through the toning process. The following section presents the toning process for distinguishable data.
3.3. Distinguishable Data Using the Toning Process
The decision for toning (matching) distinguishable data relies on a linear learning paradigm, which supports data availability for both discrete and continuous sequences. The Case 1 (continuous/distinguishable) and Case 2 (distinct/non-distinguishable) processes are toned with the resolving instances using linear input. The matching process depends on various factors for analyzing the trainable data and the uncertainty probabilities during interactive system detection. Although the above cases of voice input detection differ, they follow distinguishable procedures through the toning process. The toning process for continuous and distinct identifications is represented in Figure 4.
In the toning process, training is induced over both the distinguishable and non-distinguishable data for analysis. The non-distinguishable data are trained in subsequent instances to improve detection, and data availability-based validation is performed, as depicted in Figure 4. Toning is prescribed for both Cases 1 and 2 by computing the available probability and the detection of voice data within a constrained time. The first toning relies on the maximum training data, as given in Equations (6a,b).
In the computation of Equations (6a,b), the main goal is to address the linear training so as to reduce the responding time. The resulting value is given in Equation (6c).
Therefore, the uncertainty corresponds to the internal control, whose responding time defines the training instances; the excluded portion comprises the obtained sequences. Hence, the response time is demandingly high. The remainder is estimated using Equations (6a–c), and the next step is essential for detecting the remaining user requirements. In this case of distinguishable processing, data availability holds irrespective of the users and vehicles. The next section on interactive voice input detection discusses minimizing the term from Equation (6a) to reduce the training data and response lapses.
3.4. Non-Distinguishable Data Using Linear Input
The non-distinguishable data process follows either of the cases in the above section. It differs between the two: the first instance obtains no additional trainable data, whereas the next instance, which obtains non-trainable data, retains the user requirements. Based on the discussion in the previous section, the detection of distinguishable data is reliable and does not require lapse/response time. The listening span in this detection is the deciding factor, and it differs for each user depending on the availability of processing. This time is evaluated using Equation (7) for both cases in Equations (6a–c).
In Equation (7), the final estimate of the listening span is the maximum span together with the response lapse incurred in handling the user requirements. The detection of distinguishable data across all instances therefore increases both quantities. The problem is the readiness of distinguishable/non-distinguishable data until detection completes; the remainder is re-trained with a prolonged response time. The process of interactive system detection is independently analysable for Cases 1 and 2, as described in the previous section. In Figure 5, the learning representations for the Case 1 and Case 2 considerations are presented.
The conventional representation achieves its maximum under the stated condition. For the continuous process, a single instance is required such that detection occurs within a limited sequence. Contrarily, for the distinct process, both sequence validations are required to mitigate the lapse within the interval, as shown in Figure 5. The detection for Cases 1 and 2 is discussed in the following sections.
3.5. Detection for Case 1 Vehicles
Let the probability of detecting distinguishable data for a vehicle be denoted as in Equation (8); hence,
In Equation (8), the probability of detecting the span identification, the linear input, and the non-trainable data is idle. For Case 1, either condition, or both, may hold for the detection; therefore, the data availability remains high. Under the stated condition, the voice input data availability is zero, as no user requirements are extracted; hence, the data availability of the previously detected instance is retained. That is, the detection alone is considerable for increasing data availability. The remaining/lapse vehicles in this detection case are zero, as each of the various instances is capable of extracting the requirements consecutively.
The detection of information follows the conventional toning of both cases, in which one term is neglected under the stated condition; hence, no additional training or distinguishable data processing instances are required. The sequential AVs, as per Equation (4), generate appropriate internal controls or external driving support, as in Equation (7). Under this condition, the detection of the driving experience responds accordingly. Thus, the interactive voice detection satisfies the LHS of Equation (6a) with the minimum possible consideration, as in Equation (7). The response lapses in indistinguishable information leave the remaining vehicle processing to extract training data based on perfect recognition.
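The Case 1 behaviour described above (retained availability when no requirements are extracted, lapse-free consecutive extraction otherwise) can be sketched as follows. The function name, the increment step of 0.25 per requirement, and the tuple return are all illustrative assumptions, not the paper's formulation:

```python
# Sketch of Case 1 detection: distinguishable data are matched directly,
# so the previously detected availability is retained when nothing new
# is extracted, and no response lapse accumulates either way.
# The 0.25 per-requirement increment is an arbitrary illustrative step.

def case1_detect(prev_availability: float,
                 requirements: list) -> tuple[float, float]:
    """Return (availability, response_lapse) for a Case 1 vehicle."""
    if not requirements:
        # No user requirements extracted: retain prior availability.
        return prev_availability, 0.0
    # Each distinguishable requirement is extracted consecutively,
    # raising availability (capped at 1.0) with zero lapse.
    availability = min(1.0, prev_availability + 0.25 * len(requirements))
    return availability, 0.0
```

The key property mirrored here is that the lapse component is always zero for Case 1, consistent with the remaining/lapse vehicles being zero in this detection case.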
3.6. Detection for Case 2 Vehicles
The remaining data that are not toned under Case 1 are detected as distinguishable to prevent response lapses and prolong the training rate. The difference is assigned to the next instance, which is first computed from the previous detection, as in Equation (9).
The number of remaining user requirements is assigned sequentially, where a series of detections is addressed and responded to from various instances. Therefore, the voice input detection relies on multiple instances to achieve lapse-less responses with distributed processing. Rather than consecutive processing, which makes a request wait for the next available instance, the concurrent process depends on whether vehicles are detectable, confining the additional response time for training data.
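Since the assignment rule of Equation (9) is not recoverable from this text, the sketch below uses a hypothetical round-robin policy purely to illustrate the idea of distributing the remaining requirements over the concurrently detectable vehicles instead of queueing them behind one instance:

```python
# Hypothetical round-robin distribution of remaining user requirements
# across detectable vehicles. The paper's actual assignment rule in
# Equation (9) is not recoverable; this policy is an assumption.

def assign_remaining(requirements: list, vehicles: list) -> dict:
    """Distribute pending requirements across available vehicles so
    that no single instance accumulates the whole response lapse."""
    if not vehicles:
        raise ValueError("at least one detectable vehicle is required")
    plan = {v: [] for v in vehicles}
    for i, req in enumerate(requirements):
        # Concurrent processing: requirement i goes to vehicle i mod V.
        plan[vehicles[i % len(vehicles)]].append(req)
    return plan

print(assign_remaining(["r1", "r2", "r3"], ["v1", "v2"]))
# {'v1': ['r1', 'r3'], 'v2': ['r2']}
```

Any policy that bounds the per-vehicle queue length would serve the same purpose of confining the additional response time.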
This voice input detection process, as mentioned above, depends on the available data without incurring additional response lapses, based on two concurrent detection processes. The data matched in the previous detection are reused, and the detection follows a shared process over both cases. Here, the response time of the AVs is the sum over two or more instances, which does not increase the overall lapse. Therefore, the response rate is shared between the conditions and the driving user (without increasing the uncertainty), reducing the load beyond training. The remaining data are served in this manner, reducing the response lapse of pending users.
Figure 6 illustrates the time requirements and classification factors for varying inputs.
In Figure 6, the analysis of the time and classification factors for varying inputs is presented. As the input increases, the accepting time increases and, hence, the response time increases as well, which in turn permits further responses. The regressive process outperforms independent processing, such that detection is performed within the interval. Contrarily, if the data availability is high, the response time is reduced while the detection rate is high. This behaviour is due to the training iterations performed in validating the inputs such that they are classified. Based on the process suggested for handling both cases, the result is verified. This verification increases the detection rate compared to the pre-training state, in which the lapse is high before training and data availability improve. Therefore, the training iterations improve data availability for the match and, hence, the distinguishable sequences increase. In Figure 7, the detection percentage for varying training iterations and inputs is presented.
In Figure 7, the analysis over varying training iterations and inputs is presented. The proposed model increases the detection rate based on the matched and trained data. Regressive learning generates additional instances such that detection (n) is performed from the regressive classification. Therefore, as the iterations increase, the inputs and matches increase, with matching and training as the corresponding operations. The joint matching and training achieves a high detection rate. An analysis of data availability and uncertainty for varying training iterations and inputs is presented in Figure 8.
The identified instances are validated such that the process is extended across both cases; hence, the data are augmented. Availability is therefore maximized, and the remaining unclassifiable inputs carry reduced uncertainty. As the iterations increase, the discriminant validates the available detection (n) for further improvement. The uncertainty thus ceases for classified instances, and availability is ensured.
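The qualitative claim above (more training iterations raise availability and drive uncertainty toward zero) can be illustrated with a minimal sketch. The update rate of 0.2 and the starting values are arbitrary assumptions chosen only to show the monotone trend, not the paper's actual learning rule:

```python
# Illustrative iteration loop for the availability/uncertainty trend.
# The 0.2 update rate and initial values are arbitrary assumptions.

def train(iterations: int, availability: float = 0.4,
          uncertainty: float = 0.6) -> tuple[float, float]:
    """Simulate repeated training iterations: each pass augments the
    classified data (raising availability) and shrinks the share of
    unclassifiable inputs (lowering uncertainty)."""
    for _ in range(iterations):
        # Availability approaches 1.0 as classified data accumulate.
        availability = min(1.0, availability + 0.2 * (1.0 - availability))
        # Uncertainty decays geometrically as instances are classified.
        uncertainty = max(0.0, uncertainty * (1.0 - 0.2))
    return availability, uncertainty
```

Running more iterations strictly improves both quantities in this sketch, mirroring the trend reported for Figure 8.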