1. Introduction
Connected and autonomous vehicle (CAV) systems consist of a set of vehicles that can coordinate their routing automatically, owing to vehicle-to-vehicle communications and a central scheduler (computer). The vehicles are controlled either by a centralized scheduler residing in the network (e.g., a base station in the case of cellular systems) or by a distributed scheduler that is autonomously selected by the vehicles. Such systems can coordinate the vehicles in order to speed up traffic, reduce traffic jams, and also address environmental aspects by reducing emissions.
Vehicle routing and scheduling problems have been investigated intensively over the last decades. A huge number of papers deal with particular aspects or cover relationships with related research fields; see, for instance, refs. [1,2,3,4,5], to name only a few. A complete special issue with 16 papers on different aspects of vehicle routing and scheduling has been edited by Repoussis and Gounaris [6]. For surveys in this field, the reader is referred to the papers [7,8,9]. During the last years, new challenges have arisen in connection with the autonomous driving of vehicles.
Vehicle-to-vehicle (V2V) communications can be used as a potential solution to many problems arising in the area of traffic control. Several approaches have been developed, including the modeling of a complete autonomous driving system as a multi-agent system in which the vehicles interact to ensure autonomous functionality such that emergency braking and traffic jams are avoided as much as possible. Nowadays, vehicle systems are being developed towards fully connected and fully autonomous systems. Vehicular communication technologies have been considered, for instance, in the papers [10,11].
In [12], the authors dealt with the optimization of the departure times, travel routes, and longitudinal trajectories of connected and autonomous vehicles, as well as with the control of the signal timings at the intersections, in order to obtain a stable traffic flow in which the vehicles do not need to stop before entering an intersection. In addition, the vehicle queues in front of the intersections should not become too large. In that paper, the departure times, travel routes, and signal timings are determined by a central controller, while the trajectories of the vehicles are fixed by distributed roadside processors; together, these components constitute a hierarchical traffic management scheme.
In [13], the authors considered the control of the lane changes required by a system of autonomous vehicles on a road segment with two lanes before the vehicles arrive at a given critical position. The paper presents an algorithm that realizes the lane change of an individual vehicle in the shortest possible time. This algorithm is then applied iteratively to manage all lane changes that are necessary on the considered road segment in such a way that traffic safety is maintained.
In [14], the problem of scheduling a CAV that crosses an intersection was considered with the goal of optimizing the traffic flow at the intersection, and a solution algorithm was presented.
In [15], the problem of scheduling the time phases of a traffic light was considered with the objective of improving the traffic flow by reducing the waiting times of the traveling vehicles at the considered road intersections.
To the best of our knowledge, this paper is the first to formulate simplified CAV problems as classical scheduling problems. The resulting scheduling algorithms can be used as building blocks of solution algorithms for real-world CAV problems. In the future, more sophisticated NP-hard scheduling models can be considered.
The aim of the paper is to encourage researchers to consider CAV problems as scheduling problems and to use classical methods of scheduling theory to investigate their complexity and to solve them.
Although the presented solution algorithms have a large running time and are therefore not yet practical, we have shown that some CAV problems are polynomially solvable. The solution algorithms can be improved in future research.
In this paper, we consider four scheduling problems that arise in connection with CAVs. The remainder of this paper is organized as follows. Each of Sections 2–5 considers one of these problems. The problem with a road of two lanes and one lane closure is considered in Section 2. Section 3 deals with the case of a turn onto a main road. Section 4 considers the case of a road with three lanes and a lane closure on the middle lane. Section 5 deals with a crossroad having dividing lines. For each of these cases, an appropriate scheduling problem is formulated and a solution algorithm is given. Finally, Section 6 gives a few concluding remarks.
2. A Road with Two Lanes and One Lane Closure
In this section, we consider a road with two lanes, where two sets of CAVs, denoted by $N_1$ and $N_2$, are given. The CAVs from the set $N_1$ go on lane 1, and the CAVs from the set $N_2$ go on lane 2. Both lanes have the same direction. On lane 2, there is a lane closure, and the CAVs from the set $N_2$ have to move to lane 1; see Figure 1.
We have to find a sequence in which the CAVs from the sets $N_1$ and $N_2$ pass the lane closure in order to minimize a given objective function, e.g., the total passing time.
We assume that
the maximal feasible speed of the CAVs is given; the CAVs either drive at the maximal feasible speed or brake in order to let another CAV change the lane;
acceleration is not taken into account;
the time needed to change the lane is not taken into account, i.e., it is equal to zero;
all CAVs have the same length;
the safe distance between two CAVs is the same for all vehicles.
Since the problems investigated in this paper have not been considered in the literature so far, these assumptions are made for the first time in order to obtain simplified models. In real-world problems, these assumptions may not hold; nevertheless, solution algorithms for the simplified models can be used to solve sub-problems or to obtain lower or upper bounds.
The same problem arises, e.g., on railway sections and in automated warehouses of logistics companies with autonomous robot transporters. This simplified problem can be formulated as a single machine scheduling problem as follows.
We are given a set $N$ of $n$ jobs that have to be processed on a single machine from time 0 on. For each job $j$, a processing time $p_j$, a release date $r_j$, a due date $d_j$, and a weight $w_j$ are given. The machine can process no more than one job at a time. All processing times are equal to a common value $p$, which can be computed from the maximal feasible speed and the length of a CAV. The value $r_j$ corresponds to the earliest time when the processing of vehicle $j$ can start, resulting from the position of the CAV $j$ on the road.
For example, let the speed of each car be 60 km per hour and the length of the closed road section be 100 m. Thus, a car needs 6 s to pass the road closure. Therefore, we assume $p = 6$ (in seconds). Let a car $j$ be 0 m from the beginning of the road closure and a car $i$ be 200 m behind it. So, we assume $r_j = 0$ and $r_i = 12$ according to their speed and their distances to the closure. Moreover, let the starting time of any schedule computed be 05:00:00 a.m., which we consider as time 0. Let the due date for car $i$ be 05:00:15; then we assume $d_i = 15$. It is obvious that the tardiness of car $i$ is no less than $r_i + p - d_i = 12 + 6 - 15 = 3$ seconds in any feasible schedule.
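For illustration, the parameters of this example can be derived as follows; this is a minimal sketch in Python (ours, not part of the paper), assuming that speeds are given in km/h, distances in metres, and times in seconds from the start of the schedule.

```python
# Sketch (assumptions: speeds in km/h, distances in metres, times in seconds).

def passing_time(speed_kmh: float, closure_length_m: float) -> float:
    """Processing time p: time a CAV needs to pass the closed section."""
    speed_ms = speed_kmh * 1000.0 / 3600.0   # km/h -> m/s
    return closure_length_m / speed_ms

def release_date(speed_kmh: float, distance_to_closure_m: float) -> float:
    """Release date r_j: earliest time at which CAV j reaches the closure."""
    speed_ms = speed_kmh * 1000.0 / 3600.0
    return distance_to_closure_m / speed_ms

p = passing_time(60.0, 100.0)        # 6.0 s, as in the example
r_j = release_date(60.0, 0.0)        # 0.0 s: car j is already at the closure
r_i = release_date(60.0, 200.0)      # 12.0 s: car i is 200 m behind
d_i = 15.0                           # due date of car i (05:00:15 is 15 s after time 0)
print(max(0.0, r_i + p - d_i))       # 3.0 s: lower bound on the tardiness of car i
```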
We call a feasible schedule active if one cannot reduce the objective function value by shifting a single job to an earlier time without violating the constraints. Without loss of generality, we consider only active schedules in this paper.
A schedule is uniquely determined by a permutation $\pi$ of the CAVs of the set $N = N_1 \cup N_2$. Let $S_j(\pi)$ be the starting time of job $j$ in the schedule $\pi$. Then, $C_j(\pi) = S_j(\pi) + p$ is the completion time of job $j$ in the schedule $\pi$. A precedence relation can be defined as follows. For the jobs from the set $N_1 = \{i_1, i_2, \ldots, i_{n_1}\}$, we have $i_1 \rightarrow i_2 \rightarrow \cdots \rightarrow i_{n_1}$, where $j \rightarrow i$ means that the processing of job $j$ precedes the processing of job $i$. Thus, there is a chain of jobs on lane 1. Analogously, a chain $j_1 \rightarrow j_2 \rightarrow \cdots \rightarrow j_{n_2}$ of jobs is defined for the set $N_2 = \{j_1, j_2, \ldots, j_{n_2}\}$.
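To make these definitions concrete, the following sketch (ours) computes the completion times of an active schedule for a given permutation, assuming equal processing times $p$; the variable names and the small instance are chosen for illustration only.

```python
from typing import Sequence

def completion_times(order: Sequence[int], r: Sequence[float], p: float) -> list[float]:
    """Completion times C_j for the permutation `order` (indices into r), assuming an
    active schedule: each job starts as early as the machine and its release date allow."""
    C = [0.0] * len(r)
    machine_free = 0.0
    for j in order:
        start = max(machine_free, r[j])
        C[j] = start + p
        machine_free = C[j]
    return C

def total_completion_time(order: Sequence[int], r: Sequence[float], p: float) -> float:
    return sum(completion_times(order, r, p))

# Hypothetical instance: chain 0 -> 1 on lane 1 and chain 2 -> 3 on lane 2.
r = [0.0, 2.0, 1.0, 3.0]
print(total_completion_time([0, 2, 1, 3], r, p=6.0))   # 60.0
```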
For the single machine scheduling problem of minimizing the total completion time, the goal is to find an optimal schedule $\pi^*$ that minimizes $\sum_{j \in N} C_j(\pi)$. Here, the completion time of a job is equal to the time when the vehicle passes the closed lane segment. We denote this problem by $1 \mid r_j, p_j = p, \mathrm{chains} \mid \sum C_j$ according to the traditional three-field notation $\alpha \mid \beta \mid \gamma$ for scheduling problems proposed by [16], where $\alpha$ describes the machine environment, $\beta$ gives the job characteristics and further constraints, and $\gamma$ describes the optimization criterion.
Let $T_j(\pi) = \max\{0, C_j(\pi) - d_j\}$ be the tardiness of job $j$ in the schedule $\pi$. If $C_j(\pi) > d_j$, then job $j$ is tardy and we have $T_j(\pi) > 0$; otherwise, $T_j(\pi) = 0$.
Subsequently, we consider also the following objective functions:
It is known that the problems and with an arbitrary number of chains are NP-hard; see [17]. This has been proven by a reduction from the 3-partition problem. In [18], polynomial time dynamic programming algorithms have been presented to solve the problems and .
In an optimal schedule for the problem $1 \mid r_j, p_j = p, \mathrm{chains} \mid \sum C_j$, the jobs are processed in non-decreasing order of the values $r_j$. This can easily be proven by contradiction. Assume that we have an optimal schedule $\pi$ in which a job $i$ is processed immediately before a job $j$ although $r_i > r_j$. Then, for the schedule $\pi'$ obtained by interchanging the jobs $i$ and $j$, we have $\sum_{k \in N} C_k(\pi') \le \sum_{k \in N} C_k(\pi)$. For an illustration of the concepts introduced above, we consider the following small Example 1.
Example 1. Let . Moreover, the values , and are given. For the chosen job sequence , we obtain and . Thus, we obtain . For the job sequence , we obtain .
We note that there exists a set of possible completion times of all jobs with since:
without loss of generality, we consider only active schedules, where no job can be processed earlier without loss of feasibility;
there are no more than $n$ different values $r_j$;
all processing times are equal to $p$ and thus, for any job $j \in N$, its completion time is equal to $r_i + l \cdot p$ for some job $i \in N$ and some integer $l$ with $1 \le l \le n$ (a small sketch enumerating this set of candidate completion times follows this list).
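The candidate set of completion times can be enumerated directly; the following sketch (ours) assumes, as argued above, that it consists of all values $r_i + l \cdot p$ with $1 \le l \le n$, i.e., it contains at most $n^2$ elements.

```python
def candidate_completion_times(r: list[float], p: float) -> list[float]:
    """All values r_i + l*p that a completion time can take in an active schedule
    with equal processing times p (at most n^2 distinct values)."""
    n = len(r)
    return sorted({r_i + l * p for r_i in r for l in range(1, n + 1)})

# With the release dates of the hypothetical instance above:
print(candidate_completion_times([0.0, 2.0, 1.0, 3.0], p=6.0))
```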
The problems can be solved by a dynamic program (DP). In the DP, we consider the jobs of the set $N_2$ one by one, where $k = 1, 2, \ldots, n_2$. Thus, at each stage $k$ of the dynamic program, we consider a single job $j_k$. Moreover, at each stage we consider all states $(F, C, l)$ and the corresponding best partial solutions (sequences of jobs) stored at the previous stage. The meaning of this triplet is as follows. Here, $F$ is the value of the considered objective function for the partial solution, $C$ denotes the completion time of job $j_{k-1}$ in the corresponding partial solution, and $l$ describes the position of this job, meaning that job $j_{k-1}$ is processed between the jobs $i_l$ and $i_{l+1}$ of the chain $N_1$. For each job $j_k$ and a state $(F, C, l)$, we compute new states $(F', C', l')$, where $l' \in \{l, l+1, \ldots, n_1\}$, and $C'$ is the completion time of job $j_k$ in the new partial solution, in which job $j_k$ is scheduled after job $i_{l'}$. If, at any stage, there are two states $(F_1, C_1, l')$ and $(F_2, C_2, l')$ with $F_1 \le F_2$ and $C_1 \le C_2$, we keep only the state $(F_1, C_1, l')$. After the last stage, we have to select the best found complete solution among all states generated.
Let us explain a state by means of Example 1. In Algorithm 1, at stage we have . Thus, we consider the partial schedule and the corresponding state . For the objective function , we have . For the objective function , we have .
Algorithm 1. A pseudo-code of the dynamic program is presented below.
1. Initialize the set of states $\Omega_0$ and set $k := 1$;
2. FOR EACH job $j_k \in N_2$ DO
2.1 $\Omega_k := \emptyset$;
2.2 FOR EACH state $(F, C, l) \in \Omega_{k-1}$ DO
2.2.1 Let ;
2.2.2 FOR EACH position $l' \in \{l, l+1, \ldots, n_1\}$ DO
2.2.2.1 Calculate $F'$ and $C'$ for the resulting partial solution if job $j_k$ is processed after job $i_{l'}$, according to the partial solution corresponding to the state $(F, C, l)$;
2.2.2.2 Add $(F', C', l')$ to $\Omega_k$. If in $\Omega_k$ there is a state $(F'', C'', l')$ with $F'' \le F'$ and $C'' \le C'$, then exclude the state $(F', C', l')$ from $\Omega_k$;
2.2.2.3 If $j_k$ is the last job in the set $N_2$, then schedule all unscheduled jobs from the set $N_1$ at the earliest possible time;
2.3 $k := k + 1$;
3. Select the best found complete solution among all states generated.
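The following is a minimal sketch (ours) of an equivalent merge-type dynamic program for the total completion time objective: the jobs of the two chains are merged, and for each pair (number of scheduled lane-1 jobs, number of scheduled lane-2 jobs) only Pareto-nondominated pairs of (completion time of the last job, objective value) are kept. The state encoding and the variable names follow our reading of the description above; they are not taken verbatim from the paper.

```python
from itertools import product

def merge_chains_dp(r1: list[float], r2: list[float], p: float) -> float:
    """Merge two chains of jobs (release dates r1 and r2 in chain order, equal
    processing time p) on one machine, minimizing the total completion time.
    states[(a, b)] holds Pareto-nondominated pairs (C_last, F)."""
    n1, n2 = len(r1), len(r2)
    states = {(0, 0): {(0.0, 0.0)}}
    for a, b in product(range(n1 + 1), range(n2 + 1)):
        for C, F in states.get((a, b), set()).copy():
            if a < n1:                              # schedule next job of chain 1
                C1 = max(C, r1[a]) + p
                _add(states, (a + 1, b), (C1, F + C1))
            if b < n2:                              # schedule next job of chain 2
                C2 = max(C, r2[b]) + p
                _add(states, (a, b + 1), (C2, F + C2))
    return min(F for _, F in states[(n1, n2)])

def _add(states: dict, key: tuple, new: tuple) -> None:
    """Insert `new` = (C, F) unless it is dominated; drop pairs it dominates."""
    pool = states.setdefault(key, set())
    if any(C <= new[0] and F <= new[1] for C, F in pool):
        return
    states[key] = {(C, F) for C, F in pool if not (new[0] <= C and new[1] <= F)}
    states[key].add(new)

# Hypothetical instance: lane-1 chain with r = [0, 2], lane-2 chain with r = [1], p = 6.
print(merge_chains_dp([0.0, 2.0], [1.0], 6.0))      # 36.0
```

Under these assumptions, the Pareto pruning plays the role of the dominance rule in step 2.2.2.2 of Algorithm 1.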
Theorem 1. The problems can be solved in time by a dynamic program.
Proof. Dynamic programming is a mathematical optimization method in which a complicated problem is split into simpler sub-problems in a recursive manner.
In Algorithm 1, each state $(F, C, l)$ (here we skip the upper index for simplicity of the notation), calculated at stage $k$, divides the problem into two sub-problems. In the first sub-problem, all jobs $j_1, \ldots, j_k$ and all jobs $i_1, \ldots, i_l$ are considered. In the second sub-problem, all jobs $j_{k+1}, \ldots, j_{n_2}$ and all jobs $i_{l+1}, \ldots, i_{n_1}$ are considered. Let $\pi_1$ be an optimal solution for the first sub-problem and $\pi_2$ be an optimal solution for the second one. Then, the concatenation $(\pi_1, \pi_2)$ is an optimal solution corresponding to the state $(F, C, l)$.
The proof of optimality of Algorithm 1 can be performed by induction. Let, for an instance of the problem, $\pi^*$ be an optimal solution and $F^*$ be the optimal value of the considered objective function. Moreover, let $l_k$ denote the position of job $j_k$ in the job sequence $\pi^*$. This means that job $i_{l_k}$ is processed before job $j_k$ and job $i_{l_k+1}$ is processed after it, where $l_k = 0$ means that job $j_k$ is processed before job $i_1$ and $l_k = n_1$ means that job $j_k$ is processed after job $i_{n_1}$. Let $F^*_k$ be the objective function value for the job sub-sequence of $\pi^*$ ending with job $j_k$.
Next, we prove the following Property (*).
Property (*): At each stage $k$ of Algorithm 1, for each job $j_k$, a state $(F_k, C_k, l_k)$ and the corresponding partial job sequence will be considered, where $F_k \le F^*_k$ and $C_k \le C_{j_k}(\pi^*)$.
For stage 1, Property (*) holds since we consider and save in the set of states all possible positions for job $j_1$. Let Property (*) hold for stage $k-1$, i.e., a state $(F_{k-1}, C_{k-1}, l_{k-1})$ with $F_{k-1} \le F^*_{k-1}$ and $C_{k-1} \le C_{j_{k-1}}(\pi^*)$ is contained in the set of states.
Then, according to step [2.2.2] of Algorithm 1, a state $(F_k, C_k, l_k)$ with $F_k \le F^*_k$ and $C_k \le C_{j_k}(\pi^*)$ will be considered and stored in the set of states.
So, according to step [3.] of Algorithm 1, a job sequence with an objective function value not larger than $F^*$ will be constructed, i.e., the problems can be solved by a dynamic program.
There are stages and states at each stage, since with , and . At the next stage, for each state generated at the previous stage in step [2.2.2], we need to consider new states.
To perform all steps [2.2.2.1] in [2.2.2], we need operations. In step [2.2.2.1], to construct a partial solution, we need to schedule the jobs into the partial solution, i.e., to calculate their starting times according to , the release dates, and the processing time. We need operations to calculate the starting time of each job, and we calculate the starting time for a job only once in step [2.2.2], e.g., the starting time of job is calculated only once in [2.2.2]. So, to perform all steps [2.2.2.1] in the cycle [2.2.2], we need operations, and to perform all steps [2.2.2.1] in Algorithm 1, we need operations.
To perform step [2.2.2.2] in time, we additionally keep the states in a 2-dimensional array with rows corresponding to all possible values of $C$ and columns corresponding to all possible values of $l$. We initiate the array in step [2.1]. So, to check a dominated value, we need time. Thus, we need operations to perform all steps [2.2.2.2] in Algorithm 1.
To perform all steps [2.2.2.3] in Algorithm 1, we need time, since this step is performed only for the last job in the set $N_2$.
So, the running time of Algorithm 1 is .
The theorem has been proven. □
We conjecture that there exist other solution algorithms with a running time less than . Our motivation here is only to show that the problems are polynomially solvable. For real-world problems, there are more parameters and constraints that have to be taken into consideration, and such problems can possibly be solved by methods other than dynamic programming.
5. A Crossroad with Dividing Lines
In this section, we consider a crossroad with dividing lines and four sets of CAVs. They share four sectors of the crossroad, denoted as shown in Figure 5. We have to find an optimal sequence of passing these sectors.
We can formulate the following job shop scheduling problem with four machines. There are four sets of jobs and four machines corresponding to the four sectors. Each job $j$ consists of two operations. For each job of the first set, its first operation has to be processed on machine and its second one has to be processed on machine . For each job of the second set, its first operation has to be processed on machine and its second one has to be processed on machine . For each job of the third set, its first operation has to be processed on machine and its second one has to be processed on machine . For each job of the fourth set, its first operation has to be processed on machine and its second one has to be processed on machine . The processing times of all operations are equal to $p$. Precedence relations can be given as chains of jobs.
If the lengths of the dividing lines are equal to 0, then the second operation of a job $j$ has to be processed immediately after its first one. Otherwise, there are four buffers of limited capacity, one for each of the four sets of jobs: at any moment, for each set, there can be at most a given number of jobs (the buffer capacity of this set) for which the first operation is completed and the second one has not yet been started. We denote these problems by .
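As a data-model illustration (ours), each job can be represented by its CAV set, its release date, and a two-machine route; the routing table below is hypothetical, since the actual sector sequence of each set depends on the crossroad geometry in Figure 5.

```python
from dataclasses import dataclass

# Hypothetical routing: jobs of set k cross sector k first and sector (k + 1) mod 4 second.
ROUTES = {k: (k, (k + 1) % 4) for k in range(4)}

@dataclass
class Job:
    job_set: int       # index 0..3 of the CAV set the job belongs to
    release: float     # release date r_j of the CAV

    @property
    def machines(self) -> tuple[int, int]:
        """Machines (sectors) of the first and second operation of this job."""
        return ROUTES[self.job_set]

jobs = [Job(0, 0.0), Job(0, 2.0), Job(1, 1.0), Job(2, 0.5), Job(3, 3.0)]   # example instance
```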
The problems can be solved by a branch-and-bound (B&B) algorithm. The search (rooted) tree is constructed by the following branching rule. For any node of the tree, we consider the following 8 possible branches:
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time. If there is no such operation, skip this branch (the same rule applies to each of the following branches).
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Schedule the first unscheduled possible operation for a job on machine at the earliest possible starting time.
Thus, there are up to 8 branches for each node to be considered. Since there are $2n$ operations, where $n$ denotes the total number of jobs, there are no more than $2n$ levels in the search tree. Thus, we have no more than $O(8^{2n})$ nodes to be considered. If some of the buffer capacities are equal to 0, we have fewer nodes. As an example, if each of them is equal to 0, then we have only nodes.
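A single branching step of this scheme could be sketched as follows (ours): from a node that stores a partial schedule, up to eight children are generated by scheduling the next ready first or second operation of one of the four sets at its earliest possible start. The node representation and the routing table are hypothetical, and the buffer constraints are omitted for brevity.

```python
import copy

def children(node: dict, jobs: list, routes: dict, p: float) -> list[dict]:
    """node = {'machine_free': [t0..t3], 'next_op': per-job 0/1/2, 'op_end': per-job time}.
    `jobs` is a list of (job_set, release) pairs given in chain order per set."""
    kids = []
    for job_set in range(4):
        for stage in (0, 1):                     # first or second operation of the set
            cand = next((idx for idx, (s, _) in enumerate(jobs)
                         if s == job_set and node['next_op'][idx] == stage), None)
            if cand is None:
                continue                         # no such operation: skip this branch
            machine = routes[job_set][stage]
            ready = jobs[cand][1] if stage == 0 else node['op_end'][cand]
            start = max(node['machine_free'][machine], ready)
            child = copy.deepcopy(node)
            child['machine_free'][machine] = start + p
            child['op_end'][cand] = start + p
            child['next_op'][cand] += 1
            kids.append(child)
    return kids

# Small usage example with a hypothetical routing table and three jobs:
jobs = [(0, 0.0), (0, 2.0), (1, 1.0)]            # (set, release) pairs in chain order
root = {'machine_free': [0.0] * 4, 'next_op': [0] * 3, 'op_end': [0.0] * 3}
print(len(children(root, jobs, {k: (k, (k + 1) % 4) for k in range(4)}, 6.0)))   # 2
```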
Moreover, we can use the following trivial upper and lower bounds for the problem .
Upper bound. To construct a feasible solution, we use a list scheduling algorithm. In this algorithm, we consider the unscheduled operations one by one according to a non-decreasing order of the release dates of the corresponding jobs. We schedule the next unscheduled operation at the earliest possible starting time according to the current partial schedule. To order the set of jobs, we need $O(n \log n)$ operations. In addition, we need operations to construct a feasible solution.
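A sketch (ours) of this list scheduling heuristic is given below; it assumes that the chain order within each set coincides with non-decreasing release dates, uses a hypothetical routing table as above, ignores the buffer constraints, and returns the makespan of the constructed schedule as an example objective value.

```python
def list_schedule_makespan(jobs: list[tuple[int, float]], routes: dict, p: float) -> float:
    """Upper bound by list scheduling: jobs in non-decreasing order of release dates;
    for each job, its first and then its second operation are appended at the
    earliest possible starting time. `jobs` holds (job_set, release) pairs."""
    machine_free = [0.0] * 4
    makespan = 0.0
    for job_set, release in sorted(jobs, key=lambda jr: jr[1]):
        first, second = routes[job_set]
        s1 = max(machine_free[first], release)      # first operation
        machine_free[first] = s1 + p
        s2 = max(machine_free[second], s1 + p)      # second operation after the first
        machine_free[second] = s2 + p
        makespan = max(makespan, s2 + p)
    return makespan

print(list_schedule_makespan([(0, 0.0), (0, 2.0), (1, 1.0)],
                             {k: (k, (k + 1) % 4) for k in range(4)}, 6.0))   # 24.0
```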
Lower bound. Consider the set of unscheduled operations. For each of them, we calculate the earliest possible starting time according to the current partial schedule, without taking into account the other unscheduled operations. In this way, we obtain a schedule that may be infeasible. For a machine $q$, let $C_{\max}^{q}$ be the makespan (i.e., the maximal completion time of an operation assigned to the machine) for machine $q$, let $I_q$ be the idle time on machine $q$ between the operations of the set, and let $O_q$ be the total overlap time, during which more than one operation is processed on machine $q$ at the same time. Moreover, let $LB_q = C_{\max}^{q} + O_q - I_q$. Similarly, we can define $LB_q$ for each of the four machines. Then, $\max_q LB_q$ is a lower bound. It is easy to check that we need operations to calculate this bound.
If we use these upper and lower bounds, then the B&B algorithm requires operations.