Autonomous driving is a mission-critical operation in the sense that on-time processing is essential for driving a car in a real-world environment. Specifically, autonomous driving is not a single simple task but rather a set of tasks consisting of various jobs, including image processing, machine learning, and motor control. For autonomous vehicles to drive safely in complex environments, they should guarantee deadlines for the sensor systems and car-controlling algorithms, including machine learning modules, which are known to be very computationally intensive.
However, the deadlines and runtimes of the above modules are far more complicated in the real world. For example, the required deadline of each algorithm in an autonomous vehicle can vary across scenarios. When driving in a downtown area with a low speed limit, a slower but more accurate prediction might be required than on a high-speed highway. Also, there are huge gaps between the worst-case and average runtimes of each module, especially in real-world dynamic environments. This implies that current realtime systems focusing on static and strict deadlines might not be suitable for such dynamic and fast-moving autonomous driving environments.
To address this issue, we first (1) identify the realtime factors hindering level 5 self-driving and examine their effects in detail, and then (2) propose an environment-driven solution to balance the trade-off between runtime accuracy and end-to-end delay.
3.2. Proposed Solution
In this section, we propose a solution to address the issues mentioned in the previous section. In traditional realtime scheduling, the runtime of each module is estimated and the deadlines for the operations are given by the system. In this paper, we propose a new framework that reverses the direction of this process, i.e., environment-driven control of autonomous cars. For safe level 5 driving, we first examine the environmental factors that we cannot control at all, and we then set up the internal factors (sensing frequency, image resolution, prediction rate, car speed, and so on). By considering the trade-off between runtime accuracy and delay, we can set up feasible deadlines and the internal parameters of the sensors and algorithms. We call the proposed framework an environment-driven harmonized approach for autonomous driving.
Specifically, we formulate the observations from the previous section as follows. The objective of the following formulation is to obtain the allowable speed of a target self-driving car and to identify the sensing/prediction frequencies and image resolution levels. To obtain these driving parameters, we need to determine the maximum allowable end-to-end response time of the driving operation. As shown in Figure 1, the end-to-end response delay includes the execution times of the sensing, prediction, and motor control processes. Since these processes might be interrupted and preempted by other operating system processes and communications, the end-to-end response delay can be dynamically extended. Considering these dynamic realtime issues, we set up the desired end-to-end delay value for each environmental level. Note that, in this paper, we consider a simple camera-based autonomous driving car, much like a Tesla Autopilot car [6], as the target system. This work can easily be extended to other types of self-driving cars equipped with additional sensing modules such as LiDAR, GPS, and so on.
The worst case end-to-end delay of an operation flow, from the input factors captured by sensors to motor control via the driving algorithms shown in Figure 1, can be obtained from the following mechanism. This mechanism was originally developed for estimating the worst case end-to-end delay of sensor networks [21,22]; we adopt this verified theory to estimate the end-to-end delay of the autonomous driving operation flow. We consider the scenario in which, at each operation module, multiple inputs of the flow may be waiting to be processed. In this case, the worst case time to flush the first q inputs is calculated as follows:
This equation is understood as follows: the time required for a module to process q images is determined by the execution slot time assigned to that module, i.e., by the number of such slots needed to process the q images. Hence, the worst case time to finish processing the first q images is obtained by adding these per-module times along the processing pipeline.
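For concreteness, one plausible form of this flush-time bound is sketched below; the symbols are assumed for exposition and may differ from the paper's original notation:

$$ t(q) \;=\; \sum_{i=1}^{n} \left\lceil \frac{q}{b_i} \right\rceil T_i , $$

where $T_i$ is the execution slot time assigned to module $i$ in a pipeline of $n$ modules and $b_i$ is the number of images the module can process per slot (with $b_i = 1$ in the simplest case).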
Now, let us look at the example in Figure 3. In this example, we assume a scenario in which data are transmitted through each module in turn. Each module processes inputs only in its own periodic execution slots, so a module processes the first image input in its first slot, but the second image cannot be processed in the immediately following slot because other operations are processed in between; the second image is therefore delayed until a later slot. In this way, the first image is finished earliest, while the second and third images are finished correspondingly later, as shown in Figure 3. Following the same pattern, the completion time of the qth image can be determined, and hence its end-to-end delay can be calculated as follows:
So, the worst case end-to-end delay can be considered the largest value among the delays of all possible values of q. Note that, in this formulation, it has already been proved [21] that a maximum value Q exists, where Q is the first integer that satisfies the termination condition of the analysis; therefore, only the first Q values of q need to be checked. The worst case delay can then be calculated as follows:
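As a complementary illustration, the following minimal sketch enumerates q and takes the maximum per-input delay. The flush-time model and the termination test are simplifying assumptions standing in for the paper's formulation, not its exact equations.

```python
# Illustrative sketch only: the flush-time model and the termination test below
# are simplifying assumptions standing in for the paper's equations.
def flush_time(q, slot_times):
    """Worst-case time to flush the first q inputs through the pipeline,
    assuming each module spends one execution slot of length slot_times[i]
    per input."""
    return sum(q * t for t in slot_times)

def delay_of_qth(q, period, slot_times):
    """End-to-end delay of the q-th input, assuming inputs are released once
    per sensing period: completion time minus release time."""
    return flush_time(q, slot_times) - (q - 1) * period

def worst_case_delay(period, slot_times, q_limit=10_000):
    """Largest per-input delay over q = 1..Q, where Q is taken to be the first
    q whose flush time fits within q sensing periods (assumed stopping test)."""
    worst = 0.0
    for q in range(1, q_limit + 1):
        worst = max(worst, delay_of_qth(q, period, slot_times))
        if flush_time(q, slot_times) <= q * period:  # assumed condition for Q
            break
    return worst

# Example: 50 ms sensing period, three modules with 10/20/5 ms slots.
print(worst_case_delay(period=0.050, slot_times=[0.010, 0.020, 0.005]))
```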
Now, using the above formulas, we design an admission control module with the following parameter constraints. These constraints allow us to control internal factors such as image resolution and detection period and to determine whether a given parameter setting is acceptable for supporting the end-to-end deadlines in the current environmental scenario, while maintaining the accuracy of autonomous driving. For example, maintaining a sufficiently high level of autonomy for self-driving on rainy days may require higher-resolution images, which can increase processing times and end-to-end delays. At the same time, driving in the rain may require a shorter period of operation for quick reaction, which can reduce end-to-end delays. Additionally, driving on rainy days requires a longer marginal delay considering the braking distance. Overall, in a given environment, these controllable internal factors and the driving speed must be considered in a harmonized way. To address these issues, the following constraints are developed:
Based on these constraints, we can also pursue various objectives reflecting our system design goals. For example, we can minimize the total utilization to keep capacity available for urgent tasks, or we can maximize the image resolution for higher accuracy.
Note that the above constraints in Equation (4) act as parameter filters, which incur only polynomial computational complexity. However, the additional optimization in Equation (5) subject to these constraints becomes an ILP problem, and ILP, as used in the proposed admission control module, is known to be NP-complete [23,24]. It has also been proved, however, that such problems can be solved by a pseudo-polynomial algorithm [25,26] when minimum and maximum values of the variables are given. The proposed method uses such a heuristic ILP solver [27] to reduce the search space.
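To make the admission control concrete, the following is a minimal sketch; the candidate levels, cost model, and constraint forms are assumed for illustration and do not reproduce the paper's exact Equations (4) and (5). It enumerates the discrete internal-factor levels within given bounds, filters out combinations that violate the deadline and utilization constraints, and keeps the best feasible one according to a simple objective (here, maximizing image resolution).

```python
from itertools import product

# Candidate levels for the internal factors (assumed example values).
RESOLUTIONS = [240, 480, 720, 1080]        # image resolution levels
SENSING_PERIODS = [0.02, 0.05, 0.10]       # seconds
PREDICTION_PERIODS = [0.05, 0.10, 0.20]    # seconds
UTILIZATION_BOUND = 0.69                   # assumed ECU utilization bound

def exec_time(resolution, base=0.005):
    """Assumed model: execution time grows linearly with image resolution."""
    return base * (resolution / 240)

def admissible(res, p_sense, p_pred, d_max, d_margin):
    """Parameter filter in the spirit of Equation (4) (illustrative only)."""
    c_sense = exec_time(res)
    c_pred = exec_time(res) * 4                        # prediction assumed heavier
    utilization = c_sense / p_sense + c_pred / p_pred
    e2e_delay = c_sense + c_pred + p_sense + p_pred    # crude delay estimate
    return utilization <= UTILIZATION_BOUND and e2e_delay + d_margin <= d_max

def admit(d_max, d_margin):
    """Pick the admissible combination with the highest resolution."""
    best = None
    for res, ps, pp in product(RESOLUTIONS, SENSING_PERIODS, PREDICTION_PERIODS):
        if admissible(res, ps, pp, d_max, d_margin):
            if best is None or res > best[0]:
                best = (res, ps, pp)
    return best  # None means no parameter setting is admissible

print(admit(d_max=0.5, d_margin=0.1))
```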
The main notations used in Equation (4) are presented in Table 1. Here, we introduce three types of delay notations: the end-to-end response time under the current internal parameters; the desired maximum response delay to support autonomous driving for the given environmental factors; and a marginal delay to accommodate the driving speed. The braking distance, also called the stopping distance, is the distance a vehicle covers from the time of the full application of its brakes until it has stopped moving. Considering the braking distance, we set up additional delay factors for each driving speed level [28].
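As a rough, standard-kinematics illustration (not a value taken from [28]) of why this margin must shrink the allowable response time as speed grows, the stopping distance can be written as

$$ d_{stop} \;\approx\; v\,t_{resp} + \frac{v^{2}}{2\mu g}, $$

where $v$ is the vehicle speed, $t_{resp}$ the end-to-end response delay, $\mu$ the tire-road friction coefficient (lower in rain), and $g$ the gravitational acceleration. For a fixed safe stopping distance $d_{stop}$, the admissible $t_{resp}$ decreases with higher $v$ and lower $\mu$, which is why a larger delay margin is reserved at higher speed levels and in bad weather.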
In the above equations, the proposed approach keeps the utilization of the ECU below its utilization bound [29] so that critical and urgent situations (such as the sudden appearance of pedestrians, cars, or anything dangerous) can be handled quickly with the highest priority and without scheduling problems. So, depending on the current core utilization, the execution time and period (=rate) of a task can be bounded to some extent. Also, the desired maximum speed level of a car is defined as 1 (=low), 2 (=medium), 3 (=high), or 4 (=very high); these levels can easily be extended to accommodate real-world driving scenarios. There can also be more internal factors affecting execution time; for simplicity of explanation, we here examine image resolution and sensing frequency. The periods (i.e., the inverses of the frequencies) of the sensing, prediction, and motor control operations are denoted separately for each process, and the execution time of each process is bounded depending on the image resolution required to satisfy the desired accuracy. Environmental factors affect the required image resolution, which in turn affects the expected execution time of each process. Note that the execution times of operations can be further delayed by preemption from OS-related high-priority processes. In this study, VxWorks [30,31] is adopted as the realtime operating system in the developed RC car. VxWorks is one of the main embedded realtime operating systems (RTOSs). In VxWorks, periodic tasks such as sensing operations are scheduled by round-robin scheduling, while non-periodic tasks are scheduled by static priority-based preemptive scheduling. To reflect this VxWorks scheduling-related additional delay, the execution times calculated in this study implicitly include the potential preemption delay as well.
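Putting these pieces together, constraints of the kind described above can be sketched in the following illustrative form; the symbols are assumed for exposition and need not match the notation of Table 1 or the exact form of Equation (4):

$$
\sum_{i \in \{\mathrm{sense},\,\mathrm{pred},\,\mathrm{ctrl}\}} \frac{C_i(r)}{P_i} \;\le\; U_{\mathrm{bound}}, \qquad
D_{\mathrm{e2e}}(r, P_{\mathrm{sense}}, P_{\mathrm{pred}}, P_{\mathrm{ctrl}}) + D_{\mathrm{margin}}(s) \;\le\; D_{\mathrm{max}}(E), \qquad
r_{\min}(E) \le r \le r_{\max}, \quad 1 \le s \le S_{\max},
$$

where $r$ is the image resolution, $P_i$ and $C_i(r)$ are the period and resolution-dependent execution time of process $i$, $s$ is the speed level, and $E$ denotes the environmental factors.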
The procedure of the proposed algorithm is illustrated in Algorithm 1. In the following, we explain the details of each step.
Algorithm 1: Environment-driven harmonized approach.
1:  procedure ProposedAlgorithm
2:      Collect the environmental factors
3:      Set up the desired end-to-end delay based on the environmental factors
4:      Run the admission control described in Equation (4)
5:      Set the internal factors: maximum allowable speed, image resolution, and sensing/prediction frequencies
6:      while driving do
7:          Monitor environmental changes and autonomy levels
8:          if the environment changes then continue from line 2
9:          end if
10:         if the autonomy level falls below a threshold then adjust the internal factors
11:         end if
12:     end while
13: end procedure
<line 2>: Collect environmental factors such as weather, town complexity, road conditions, and so on.
<line 3>: As a proactive operation, we first set up a desired end-to-end response (delay) time, considering the collected environmental information. Specifically, the desired delay is determined in this paper by reflecting the effects of bad weather and crowded, complex driving circumstances.
<line 4>: Run the admission control described in Equation (4).
<line 5>: Using the output values of the above optimizing operation, set up the maximum allowable speed, image resolution, and sensing and prediction frequencies. Note that the optimization calculates multiple values or bounds for each internal factor; we pick the median value at first and tune it based on driving feedback.
<line 7–9>: During driving, the self-driving car keeps gathering information about environmental factors. When it detects meaningful changes, the procedure is repeated from line 2.
<line 10>: If the self-driving level has fallen below a threshold, the internal factors are adjusted so that the configured end-to-end delay from the sensors to the controlled motor continues to support safe autonomous driving.
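The overall loop of Algorithm 1 can be summarized by the following minimal sketch. It assumes a hypothetical car interface; all function and parameter names (collect_environment, admission_control, autonomy_level, AUTONOMY_THRESHOLD, and so on) are placeholders for exposition, not taken from the paper.

```python
# Illustrative sketch of the control loop in Algorithm 1 (assumed interface).
import time

AUTONOMY_THRESHOLD = 0.9   # assumed threshold on the self-driving autonomy level

def drive_loop(car):
    env = car.collect_environment()                 # line 2: weather, traffic, road
    deadline = car.desired_e2e_delay(env)           # line 3: desired end-to-end delay
    params = car.admission_control(env, deadline)   # line 4: Equation (4) filter/optimizer
    car.apply_internal_factors(params)              # line 5: speed, resolution, frequencies

    while car.is_driving():                         # line 6
        new_env = car.collect_environment()         # line 7: monitor changes
        if car.environment_changed(env, new_env):   # lines 8-9: restart from line 2
            env = new_env
            deadline = car.desired_e2e_delay(env)
            params = car.admission_control(env, deadline)
            car.apply_internal_factors(params)
        elif car.autonomy_level() < AUTONOMY_THRESHOLD:   # lines 10-11: adjust factors
            params = car.adjust_internal_factors(params)
            car.apply_internal_factors(params)
        time.sleep(car.monitor_period())            # pacing of the monitoring loop
```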