Article

Connected and Autonomous Vehicle Scheduling Problems: Some Models and Algorithms

by Evgeny R. Gafarov 1,† and Frank Werner 2,*,†
1 V.A. Trapeznikov Institute of Control Sciences, Russian Academy of Sciences, Profsoyuznaya St. 65, Moscow 117997, Russia
2 Faculty of Mathematics, Otto-von-Guericke-Universität Magdeburg, PSF 4120, 39016 Magdeburg, Germany
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2024, 17(9), 421; https://doi.org/10.3390/a17090421
Submission received: 23 August 2024 / Revised: 15 September 2024 / Accepted: 20 September 2024 / Published: 21 September 2024
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

Abstract:
In this paper, we consider some problems that arise in connected and autonomous vehicle (CAV) systems. Their simplified variants can be formulated as scheduling problems. Therefore, scheduling solution algorithms can be used as a part of solution algorithms for real-world problems. For four variants of such problems, mathematical models and solution algorithms are presented. In particular, three polynomial algorithms and a branch-and-bound algorithm are developed. These CAV scheduling problems are considered here for the first time. More complicated NP-hard scheduling problems related to CAVs can be considered in the future.
MSC:
90B35; 90C27; 68Q25; 68W40
JEL Classification:
D8; H51

1. Introduction

Connected and autonomous vehicle systems contain a set of vehicles that can automatically coordinate their routing, owing to vehicle-to-vehicle communications and a centralized scheduler (computer). The vehicles are controlled either by a centralized scheduler residing in the network (e.g., a base station in the case of cellular systems) or by a distributed scheduler, which is autonomously selected by the vehicles. Such vehicle systems can coordinate the vehicles in order to speed up traffic and minimize traffic jams, and they can also take environmental aspects into account by reducing emissions.
Vehicle routing and scheduling problems have been intensively investigated over the last decades. There exists a huge number of papers dealing with particular aspects or also covering relationships with related research fields; see, for instance, refs. [1,2,3,4,5], to name only a few. A complete special issue with 16 papers on different aspects of vehicle routing and scheduling has been edited by Repoussis and Gounaris [6]. For surveys in this field, the reader is referred to the papers [7,8,9]. In recent years, new challenges have arisen in connection with the autonomous driving of vehicles.
Vehicle-to-vehicle (V2V) communications can be used as a potential solution to many problems arising in the area of traffic control. Several approaches have been developed, including the modeling of a complete autonomous driving system as a multi-agent system, where the vehicles interact to ensure autonomous functionality such that emergency braking and traffic jams are avoided as much as possible. Nowadays, vehicle systems are evolving towards fully connected and fully autonomous systems. Vehicular communication technologies have been considered, for instance, in the papers [10,11].
In [12], the authors dealt with the optimization of the departure times, travel routes, and longitudinal trajectories of connected and autonomous vehicles as well as the control of the signal timings at the intersections, with the aim of obtaining a stable traffic flow in such a way that the vehicles do not need to stop before they enter an intersection. In addition, the vehicle queues before the intersections should not become too large. In that paper, the departure times, travel routes, and signal timings are determined by a central controller, while the trajectories of the vehicles are fixed by distributed roadside processors, which together constitute a hierarchical traffic management scheme.
In [13], the authors considered the control of the required lane changes of a system of autonomous vehicles on a road segment with two lanes before the vehicles arrive at a given critical position. This paper presents an algorithm that realizes the lane change of an individual vehicle in the shortest possible time. Then, this algorithm is iteratively used to manage all the lane changes which are necessary on the road segment under consideration in such a way that traffic safety is maintained.
In [14], the problem of scheduling a CAV that crosses an intersection was considered with the goal of optimizing the traffic flow at the intersection. In addition, a solution algorithm was presented.
In [15], the problem of scheduling the time phases of a traffic light was considered with the objective to improve the traffic flow by reducing the waiting time of the traveling vehicles at the indicated road intersections.
To the best of our knowledge, in this paper, simplified CAV problems are formulated as classical scheduling problems for the first time. Scheduling solution algorithms can be used as a part of solution algorithms for real-world CAV problems. In the future, more sophisticated NP-hard scheduling models can be considered.
The aim of this paper is to encourage researchers to consider CAV problems as scheduling problems and to use classical methods of scheduling theory to investigate their complexity and to solve them.
Although the presented solution algorithms have a large running time and are therefore of limited practical use so far, we have shown that some CAV problems are polynomially solvable. The solution algorithms can be improved in future research.
In this paper, we consider four scheduling problems that arise in connection with CAVs. The remainder of this paper is organized as follows. In each of the Sections 2, 3, 4, and 5, we consider one of these problems. The problem with a road of two lanes and one lane closure is considered in Section 2. Section 3 deals with the case of a turn onto a main road. Section 4 considers the case of a road with three lanes and a lane closure on the middle lane. Section 5 deals with a crossroad having dividing lines. For each of these cases, an appropriate scheduling problem is formulated and a solution algorithm is given. Finally, Section 6 gives a few concluding remarks.

2. A Road with Two Lanes and One Lane Closure

In this section, we consider a road with two lanes, where two sets of CAVs, $N_1$ and $N_2$, are given. The CAVs from the set $N_1$ go on lane 1, and the CAVs from the set $N_2$ go on lane 2. Both lanes have the same direction. On lane 2, there is a lane closure, and the CAVs from the set $N_2$ have to move to lane 1, see Figure 1.
We have to find a sequence of passing the lane closure by the CAVs from the sets N 1 and N 2 in order to minimize a given objective function, e.g., the total passing time.
We assume that
  • A maximal feasible speed of the CAVs is given. The CAVs either go with the maximal feasible speed or brake in order to let another CAV change the lane.
  • Acceleration is not taken into account.
  • The time needed to change the lane is not taken into account, i.e., it is equal to zero.
  • All CAVs have the same length.
  • The safe distance between two CAVs is the same for all vehicles.
Since the problems investigated in this paper have not been considered in the literature so far, these assumptions are made for the first time to obtain simplified models. In real-world problems, these assumptions may not hold; nevertheless, solution algorithms for the simplified models can be used to solve sub-problems or to obtain lower or upper bounds.
The same problem arises, e.g., on railway sections and in automated warehouses of logistics companies with autonomous robot transporters. This simplified problem can be formulated as a single machine scheduling problem as follows.
Given a set $N = N_1 \cup N_2$ of $n$ jobs that have to be processed on a single machine from time 0 on. For each job $j$, a processing time $p_j = p > 0$, a release date $r_j \ge 0$, a due date $d_j \ge 0$, and a weight $w_j > 0$ are given. The machine can process no more than one job at a time. The processing time $p$ can be computed from the maximal feasible speed and the length of a CAV. The value $r_j$ corresponds to the earliest time when the processing of the vehicle can start, resulting from the position of CAV $j$ on the road.
For example, let the speed of each car be 60 km per hour and the length of the closed road section be 100 m. Thus, a car needs 6 s to pass the road closure. Therefore, we assume $p = 6$ (in seconds). Let a car $j$ be 0 m from the beginning of the road closure and a car $i$ be 200 m behind. So, we assume $r_j = 0$ and $r_i = 12$ according to their speed and the distances. Moreover, let the starting time of any schedule computed be 05:00:00 a.m., which we consider as time 0. Let the due date for car $i$ be 05:00:15; then we assume $d_i = 15$. It is obvious that the tardiness of car $i$ is no less than $6 + 12 - 15 = 3$ seconds in any feasible schedule.
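As a small illustration, the following Python fragment reproduces these numbers; the helper function and variable names are ours and serve only as an example.

```python
# Deriving p, r_j and d_j from the speeds and distances of this example.
# The helper function and variable names are illustrative, not from the paper.

def seconds_to_pass(length_m: float, speed_kmh: float) -> float:
    """Time (in seconds) needed to traverse a road section of the given length."""
    return length_m / (speed_kmh * 1000 / 3600)

speed = 60.0                                   # km per hour
p = seconds_to_pass(100.0, speed)              # closed section of 100 m -> p = 6 s

r_j = seconds_to_pass(0.0, speed)              # car j starts at the closure  -> r_j = 0
r_i = seconds_to_pass(200.0, speed)            # car i is 200 m behind        -> r_i = 12

d_i = 15.0                                     # due date 05:00:15 relative to time 0 = 05:00:00
min_tardiness_i = max(0.0, r_i + p - d_i)      # 12 + 6 - 15 = 3 s in any feasible schedule
print(p, r_i, min_tardiness_i)
```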
We call a feasible schedule active if one cannot reduce the objective function value by shifting a single job to an earlier time without violating the constraints. Without loss of generality, we consider only active schedules in this paper.
A schedule is uniquely determined by a permutation $\pi$ of the CAVs of the set $N$. Let $S_j(\pi)$ be the starting time of job $j$ in the schedule $\pi$. Then, $C_j(\pi) = S_j(\pi) + p$ is the completion time of job $j$ in the schedule $\pi$. A precedence relation can be defined as follows. For the jobs from the set $N_1 = \{j_1, j_2, \ldots, j_{n_1}\}$, we have $j_1 \to j_2 \to \ldots \to j_{n_1}$, where $n_1 = |N_1|$ and $j \to i$ means that the processing of job $j$ precedes the processing of job $i$. Thus, there is a chain of jobs on lane 1. Analogously, a chain of jobs is defined for the set $N_2 = \{i_1, i_2, \ldots, i_{n_2}\}$.
For the single machine scheduling problem of minimizing total completion time, the goal is to find an optimal schedule π * that minimizes
$\sum C_j = \sum_{j \in N} C_j(\pi).$
Here, the completion time of a job is equal to the time when the vehicle passes the closed lane segment. We denote this problem by $1 \mid 2\ chains, p_j = p, r_j \mid \sum C_j$ according to the traditional three-field notation $\alpha \mid \beta \mid \gamma$ for scheduling problems proposed by [16], where $\alpha$ describes the machine environment, $\beta$ gives the job characteristics and further constraints, and $\gamma$ describes the optimization criterion.
Let
$T_j(\pi) = \max\{0, C_j(\pi) - d_j\}$
be the tardiness of job $j$ in the schedule $\pi$. If $C_j(\pi) > d_j$, then job $j$ is tardy and we have $U_j = 1$; otherwise, $U_j = 0$.
Subsequently, we also consider the following objective functions:
$\sum w_j C_j = \sum w_j C_j(\pi)$ (total weighted completion time);
$\sum T_j = \sum T_j(\pi)$ (total tardiness);
$\sum w_j T_j = \sum w_j T_j(\pi)$ (total weighted tardiness);
$\sum w_j U_j = \sum w_j U_j(\pi)$ (weighted number of tardy jobs);
$C_{max} = \max C_j(\pi)$ (makespan).
It is known that the problems $1 \mid chains, p_j = p, r_j \mid \sum w_j C_j$ and $1 \mid chains, p_j = p, r_j \mid \sum w_j T_j$ with an arbitrary number of chains are NP-hard, see [17]. This has been proven by a reduction from the 3-partition problem.
In [18], polynomial time dynamic programming algorithms have been presented to solve the problems $1 \mid p_j = p, r_j \mid \sum T_j$ and $1 \mid p_j = p, r_j \mid \sum w_j U_j$.
In an optimal schedule for the problem $1 \mid 2\ chains, p_j = p, r_j \mid \sum C_j$, the jobs are processed in non-decreasing order of the values $r_j$. This can be easily proven by contradiction. Assume that we have an optimal schedule $\pi = (\ldots, j_2, j_1, \ldots)$, where $r_{j_1} < r_{j_2}$. Then, for the schedule $\pi' = (\ldots, j_1, j_2, \ldots)$, we have $\sum C_j(\pi') \le \sum C_j(\pi)$. For an illustration of the concepts introduced above, we consider the following small Example 1.
Example 1. 
Let $N_1 = \{1, 2\}$, $N_2 = \{3, 4\}$. Moreover, the values $p = 2$, $r_1 = 0$, $r_2 = 3$, $r_3 = 1$, $r_4 = 4$, and $d_1 = d_2 = 10$, $d_3 = 3$, $d_4 = 6$ are given. For the chosen job sequence $\pi = (1, 3, 2, 4)$, we obtain
$S_1(\pi) = 0$, $S_3(\pi) = 2$, $S_2(\pi) = 4$, $S_4(\pi) = 6$
and
$C_1(\pi) = 2$, $C_3(\pi) = 4$, $C_2(\pi) = 6$, $C_4(\pi) = 8$.
Thus, we obtain
$\sum_{j=1}^{4} C_j(\pi) = 20$ and $\sum_{j=1}^{4} T_j(\pi) = 1 + 2 = 3$.
For the job sequence $\pi' = (3, 4, 1, 2)$, we obtain
$\sum_{j=1}^{4} T_j(\pi') = 0$.
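The values of Example 1 can be verified with a few lines of Python. The rule used below (start every job at the maximum of the preceding completion time and its release date) corresponds to the active schedules defined above; the function and variable names are ours.

```python
# Evaluating a job sequence for 1 | 2 chains, p_j = p, r_j | f with the data of Example 1.
p = 2
r = {1: 0, 2: 3, 3: 1, 4: 4}
d = {1: 10, 2: 10, 3: 3, 4: 6}

def evaluate(sequence):
    """Return the completion times C_j and tardiness values T_j of the active schedule."""
    t, C, T = 0, {}, {}
    for j in sequence:
        C[j] = max(t, r[j]) + p      # start as early as possible
        T[j] = max(0, C[j] - d[j])
        t = C[j]
    return C, T

C, T = evaluate((1, 3, 2, 4))
print(sum(C.values()), sum(T.values()))   # 20 and 3, as computed in Example 1

C, T = evaluate((3, 4, 1, 2))
print(sum(T.values()))                    # 0
```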
We note that there exists a set $\Theta$ of possible completion times of all jobs with $|\Theta| \le n^2$ since:
  • without loss of generality, we consider only active schedules, where no job can be processed earlier without loss of feasibility;
  • there are no more than $n$ different values $r_j$;
  • all processing times are equal to $p$ and thus, for any job $j \in N$, its completion time is equal to $r_i + l \cdot p$ for some $i \in N$ and $l \le n$.
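For illustration, the candidate completion times can be enumerated directly; the following fragment does this for the data of Example 1 (names are ours).

```python
# Enumerating the candidate completion times Theta = { r_i + l*p : i in N, 1 <= l <= n }.
p = 2
r = {1: 0, 2: 3, 3: 1, 4: 4}
n = len(r)

theta = {r_i + l * p for r_i in r.values() for l in range(1, n + 1)}
print(sorted(theta), len(theta) <= n * n)   # at most n^2 distinct values
```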
The problems $1 \mid 2\ chains, p_j = p, r_j \mid f$, $f \in \{\sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, can be solved by a dynamic program (DP). In the DP, we consider the jobs $i_1, i_2, \ldots, i_{n_2} \in N_2$ one by one, where $i_1 \to i_2 \to \ldots \to i_{n_2}$. Thus, at each stage $k$ of the dynamic program, we consider a single job $i_k$, $k = 1, 2, \ldots, n_2$. Moreover, at each stage $k > 1$, we consider all states $(f^{k-1}, C_{max}^{k-1}, pos^{k-1})$ and the corresponding best partial solutions (sequences of jobs) stored at the previous stage. The meaning of this triplet is as follows. Here, $f^{k-1}$ is the value of the considered objective function for the partial solution. $C_{max}^{k-1} = C_{i_{k-1}} \in \Theta$ denotes the completion time of job $i_{k-1}$ in the corresponding partial solution. Finally, $pos^{k-1} \in \{0, 1, 2, \ldots, n_1\}$ describes the position of a job, meaning that job $i_{k-1}$ is processed between the jobs $j_{pos} \in N_1$ and $j_{pos+1} \in N_1$ for $0 < pos < n_1$. For each job $i_k$ and each state $(f^{k-1}, C_{max}^{k-1}, pos^{k-1})$, we compute new states $(f, C_{max}, pos)$, where $pos \ge pos^{k-1}$ and $C_{max}$ is the completion time of job $i_k$ in the new partial solution in which job $i_k$ is scheduled after job $j_{pos} \in N_1$. If, at any stage, there are two states $(f, C_{max}, pos)$ and $(f', C'_{max}, pos)$ with $f' \le f$ and $C'_{max} \le C_{max}$, we only keep the state $(f', C'_{max}, pos)$. After the last stage, we have to select the best found complete solution among all states generated.
Let us explain a state $(f^1, C_{max}^1, pos^1)$ by means of Example 1. In Algorithm 1, at stage $k = 1$, we have $i_k = 3$. Thus, we consider the partial schedule $\pi = (1, 3)$ and the corresponding state $(f^1, C_{max}^1, pos^1)$. For the objective function $\sum C_j$, we have
$(f^1, C_{max}^1, pos^1) = (C_1(\pi) + C_3(\pi), C_3(\pi), 1) = (6, 4, 1).$
For the objective function $\sum T_j$, we have
$(f^1, C_{max}^1, pos^1) = (T_1(\pi) + T_3(\pi), C_3(\pi), 1) = (1, 4, 1).$
Algorithm 1. A pseudo-code of Algorithm 1 is presented below.
1. $StatesSet = \{(0, 0, 0)\}$;
2. FOR EACH $i_k \in N_2$ DO
  2.1. $NewStatesSet = \{\}$;
  2.2. FOR EACH $(f^{k-1}, C_{max}^{k-1}, pos^{k-1}) \in StatesSet$ DO
    2.2.1. Let $PositionsList = \{pos^{k-1}, pos^{k-1}+1, \ldots, n_1\}$;
    2.2.2. FOR EACH $pos \in PositionsList$ DO
      2.2.2.1. Calculate $f$ for the resulting partial solution, if job $i_k$ is processed after $j_{pos}$, according to the partial solution corresponding to the state $(f^{k-1}, C_{max}^{k-1}, pos^{k-1})$;
      2.2.2.2. Add $(f, C_{max}, pos)$ to $NewStatesSet$. If $NewStatesSet$ contains a state $(f', C'_{max}, pos)$ with $f' \le f$ and $C'_{max} \le C_{max}$, then exclude the dominated state $(f, C_{max}, pos)$ from $NewStatesSet$;
      2.2.2.3. If $i_k$ is the last job in the set $N_2$, then schedule all unscheduled jobs from the set $N_1$ at the earliest possible time.
  2.3. $StatesSet := NewStatesSet$;
3. Select the best found complete solution among all states generated.
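As an illustration of Algorithm 1, the following Python sketch implements the dynamic program for the total weighted completion time objective. It follows the state description $(f, C_{max}, pos)$ given above and keeps, in the spirit of step 2.2.2.2, only the smallest $f$ for each pair $(pos, C_{max})$; all function and variable names are ours, and the code is an illustrative sketch under these assumptions rather than a reference implementation.

```python
# Sketch of Algorithm 1 for 1 | 2 chains, p_j = p, r_j | sum w_j C_j.
from math import inf

def solve(p, r1, w1, r2, w2):
    """r1, w1: release dates and weights of chain N1 = (j_1, ..., j_{n1});
       r2, w2: release dates and weights of chain N2 = (i_1, ..., i_{n2}).
       Returns the minimal total weighted completion time and one optimal sequence."""
    n1, n2 = len(r1), len(r2)

    def extend(cmax, f, jobs):
        # Schedule the given jobs one after another, each at the earliest possible time.
        seq = []
        for rel, w, lab in jobs:
            cmax = max(cmax, rel) + p
            f += w * cmax
            seq.append(lab)
        return cmax, f, seq

    # State key (pos, cmax): pos = number of N1 jobs already scheduled,
    # cmax = completion time of the last scheduled job; value = (f, partial sequence).
    states = {(0, 0): (0, [])}
    for k in range(n2):                                   # stage k: insert the next job of N2
        new_states = {}
        for (pos_prev, cmax_prev), (f_prev, seq_prev) in states.items():
            for pos in range(pos_prev, n1 + 1):           # place the N2 job after j_pos
                jobs = [(r1[q], w1[q], ('N1', q + 1)) for q in range(pos_prev, pos)]
                jobs.append((r2[k], w2[k], ('N2', k + 1)))
                cmax, f, seq = extend(cmax_prev, f_prev, jobs)
                if f < new_states.get((pos, cmax), (inf, None))[0]:
                    new_states[(pos, cmax)] = (f, seq_prev + seq)
        states = new_states

    # Step 2.2.2.3: append the remaining N1 jobs and select the best complete solution.
    best_f, best_seq = inf, None
    for (pos, cmax), (f, seq) in states.items():
        rest = [(r1[q], w1[q], ('N1', q + 1)) for q in range(pos, n1)]
        _, f2, seq2 = extend(cmax, f, rest)
        if f2 < best_f:
            best_f, best_seq = f2, seq + seq2
    return best_f, best_seq

# Data of Example 1 with unit weights: the minimal total completion time is reported.
print(solve(p=2, r1=[0, 3], w1=[1, 1], r2=[1, 4], w2=[1, 1]))
```

Called with the data of Example 1 and unit weights, the sketch returns the optimal value 20 together with one optimal job sequence.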
Theorem 1. 
The problems $1 \mid 2\ chains, p_j = p, r_j \mid f$, $f \in \{\sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, can be solved in $O(n^5)$ time by a dynamic program.
Proof. 
Dynamic programming is a mathematical optimization method, where a complicated problem is split into simpler sub-problems in a recursive manner.
In Algorithm 1, each state $(f, C_{max}, pos)$ (here, we skip the upper index for simplicity of the notation) calculated at stage $k-1$ divides the problem into two sub-problems. In the first sub-problem, all jobs $j \in \{j_1, j_2, \ldots, j_{pos}\}$ and all jobs $i \in \{i_1, \ldots, i_{k-2}\}$ are considered. In the second sub-problem, all jobs $j \in \{j_{pos+1}, \ldots, j_{n_1}\}$ and all jobs $i \in \{i_k, \ldots, i_{n_2}\}$ are considered. Let $\pi'$ be an optimal solution for the first sub-problem and $\pi''$ be an optimal solution for the second one. Then $\pi = (\pi', i_{k-1}, \pi'')$ is an optimal solution corresponding to the state $(f, C_{max}, pos)$.
The proof of the optimality of Algorithm 1 can be performed by induction. Let, for an instance of a problem, $\pi^* = (\pi_1, i_k, \pi_2)$ be an optimal solution and $f^*$ be the optimal value of the considered objective function. Moreover, let $pos_k \in \{0, 1, \ldots, n_1, n_1+1\}$, $k = 1, \ldots, n$, be the position of job $i_k \in N_2$ in the job sequence. This means that job $j_{pos_k}$ is processed before job $i_k$ and job $j_{pos_k+1}$ is processed after it, where $j_0$ means that job $i_k$ is processed before job $j_1$ and $j_{n_1+1}$ means that job $i_k$ is processed after job $j_{n_1}$. Let $f'$ be the objective function value for the job sub-sequence $(\pi_1, i_k)$.
Next, we prove the following Property (*).
Property (*): At each stage $k$ of Algorithm 1, for each job $i_k$, the state $(f^k, C_{i_k}, pos_k)$ and the corresponding partial job sequence $(\pi^k, i_k)$ will be considered, where $C_{i_k}(\pi^*) = C_{i_k}$ and $f' = f^k$.
For stage 1, Property (*) holds since we consider and save in the set of states all $n_1 + 1$ possible positions for job $i_1$. Let Property (*) hold for stage $k - 1$, i.e., the state $(f^{k-1}, C_{i_{k-1}}, pos_{k-1})$ is contained in the set of states.
Then, according to step [2.2.2] of Algorithm 1, the state $(f^k, C_{i_k}, pos_k)$ will be considered and stored in the set of states.
So, according to step [3.] of Algorithm 1, a job sequence $\pi$ will be constructed with the objective function value $f = f^*$, and the problems $1 \mid 2\ chains, p_j = p, r_j \mid f$, $f \in \{\sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, can be solved by a dynamic program.
There are $O(n)$ stages and $O(n^3)$ states $(f, C_{max}, pos)$ at each stage, since $C_{max} \in \Theta$ with $|\Theta| \le n^2$ and $pos \in \{0, 1, \ldots, n_1\}$. At the next stage, for each state generated at the previous stage, we need to consider $O(n)$ new states in step [2.2.2].
To perform all steps [2.2.2.1] in [2.2.2], we need $O(n)$ operations. In step [2.2.2.1], to construct a partial solution, we need to schedule the jobs $j_{pos^{k-1}+1}, j_{pos^{k-1}+2}, \ldots, j_{pos}$ into the partial solution, i.e., to calculate their starting times according to $C_{max}^{k-1}$, the release dates, and the processing time. We need $O(1)$ operations to calculate the starting time of each job, and we calculate the starting time of a job only once in step [2.2.2]; e.g., the starting time of job $j_{pos^{k-1}+1}$ is calculated only once in [2.2.2]. So, to perform all steps [2.2.2.1] in the cycle [2.2.2], we need $O(n)$ operations, and to perform all steps [2.2.2.1] in Algorithm 1, we need $O(n^5)$ operations.
To perform step [2.2.2.2] in $O(1)$ time, we additionally store the set $NewStatesSet$ in a two-dimensional array with $O(n^2)$ rows corresponding to all possible values $C_{max}$ and $O(n)$ columns corresponding to all possible values $pos$. We initialize the array in step [2.1]. So, to check for a dominated state, we need $O(1)$ time. Thus, we need $O(n^5)$ operations to perform all steps [2.2.2.2] in Algorithm 1.
To perform all steps [2.2.2.3] in Algorithm 1, we need $O(n^5)$ time, since this step is performed only for the last job in the set $N_2$.
So, the running time of Algorithm 1 is $O(n^5)$.
The theorem has been proven. □
We conjecture that there are other solution algorithms with a running time lower than $O(n^5)$. Here, our motivation is only to show that the problems are polynomially solvable. For real-world problems, there are more parameters and constraints that have to be taken into consideration, and they can possibly be solved by methods other than dynamic programming.

3. Turn to a Main Road

There is a set $N_1$ of CAVs going along a main road and a set $N_2$ of CAVs turning into the main road from a side road (see Figure 2). In contrast to the problems $1 \mid 2\ chains, p_j = p \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, we now have $p_j = p_1$ for $j \in N_1$ and $p_j = p_2$ for $j \in N_2$. We denote these problems by $1 \mid 2\ chains, p_j \in \{p_1, p_2\}, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$.
These problems can be solved by the same Algorithm 1, where the states are described in the same way: $(f, C_{max}, pos)$. For any job $j \in N$ in an active schedule for these problems, its completion time is equal to $r_i + l \cdot p_1 + v \cdot p_2$ for some $i \in N$, $l \le n$, $v \le n$. Thus, we have $|\Theta| \le n^3$, and the running time of Algorithm 1 is $O(n^6)$.
Figure 2. Turn to a main road.

4. A Road with Three Lanes and a Road Closure on the Middle Lane

In addition to the problems $1 \mid 2\ chains, p_j = p, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, there is now an additional lane 3 and a subset $N_3$ of jobs (see Figure 3). The jobs of the set $N_1$ should be processed on machine $M_1$, and the jobs of the set $N_3$ should be processed on machine $M_3$. The jobs of the set $N_2$ can be processed on any of these two machines. Precedence relations among the jobs of the set $N_3$ can be defined as a chain of jobs.
Figure 3. A road with three lanes and a closure on the middle lane.
We denote these problems by $P2 \mid dedicated, 3\ chains, p_j = p, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$. These problems can be solved by a modified Algorithm 1, where we consider the positions $pos$ between the jobs of the set $N_1$ and between the jobs of the set $N_3$.
We illustrate the dynamic programming algorithm for this problem by the following Example 2.
Example 2. 
Let $N_1 = \{1, 2\}$, $N_2 = \{3, 4\}$, $N_3 = \{5, 6\}$. Moreover, the values $p = 2$, $r_1 = 0$, $r_2 = 3$, $r_3 = 1$, $r_4 = 4$, $r_5 = 1$, $r_6 = 4$, and $d_1 = 2$, $d_2 = 5$, $d_3 = 3$, $d_4 = 6$, $d_5 = 3$, $d_6 = 8$ are given. We consider the minimization of the total tardiness, i.e., $f = \sum T_j$. Denote by $M_1$ the machine on which the jobs from the set $N_1$ have to be processed and by $M_3$ the machine on which the jobs from the set $N_3$ have to be processed. The initial positions of the jobs and an optimal schedule are presented in Figure 4.
In the modified Algorithm 1, at each stage we consider all states
$(pos_1, C_{max}^1, pos_3, C_{max}^3, f)$
stored at the previous stage. Here, $pos_1$ is the first possible position for the current job on machine $M_1$, $C_{max}^1$ denotes the completion time of the last job scheduled on machine $M_1$, $pos_3$ gives the first possible position for the current job on machine $M_3$, and $C_{max}^3$ denotes the completion time of the last job scheduled on machine $M_3$.
In the following Table 1, all states computed in the two stages for the jobs $j = 3$ and $j = 4$ are presented. In the first column $S_1$, the index numbers of the states are given. In the second column $S_2$, we present the original state from which the current state is computed. $P_1$ represents $pos_1$, $C_1$ denotes $C_{max}^1$, $P_3$ represents $pos_3$, and $C_3$ denotes $C_{max}^3$. $\pi_1$ gives the corresponding job sequence on machine $M_1$, and $\pi_3$ describes the corresponding job sequence on machine $M_3$. In the columns $C_j$, $j = 1, \ldots, 6$, the corresponding completion times are given. In the columns $T_j$, $j = 1, \ldots, 6$, the corresponding tardiness values are given. In the last column, the resulting total tardiness values $f = \sum T_j$ are presented.
An optimal solution is found in state 16. Let us consider an extended instance with an additional job 7 in the set $N_2$, where $3 \to 4 \to 7$. Then, the dynamic program has 3 stages. For state 15, we have $(pos_1, C_{max}^1, pos_3, C_{max}^3, f) = (1, 4, 0, 6, 2)$, $\pi_1 = (1, 3)$ and $\pi_3 = (4)$. For state 16, we have $(pos_1, C_{max}^1, pos_3, C_{max}^3, f) = (1, 4, 1, 6, 2)$, $\pi_1 = (1, 3)$ and $\pi_3 = (5, 4)$. So, we only keep state 16, since $pos_3$ is larger for state 16 and the other parameters of the two states are the same.
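One possible way to encode the dominance test used in this example is the componentwise comparison sketched below; the generalization beyond the case of otherwise equal parameters (larger positions and smaller completion times and objective value dominate) is our reading of the rule, and the function name is ours.

```python
# Dominance between states (pos1, cmax1, pos3, cmax3, f) of the modified dynamic program.
# State a dominates state b if a is at least as good in every component.

def dominates(a, b):
    pos1_a, cmax1_a, pos3_a, cmax3_a, f_a = a
    pos1_b, cmax1_b, pos3_b, cmax3_b, f_b = b
    return (pos1_a >= pos1_b and cmax1_a <= cmax1_b and
            pos3_a >= pos3_b and cmax3_a <= cmax3_b and f_a <= f_b)

state_15 = (1, 4, 0, 6, 2)
state_16 = (1, 4, 1, 6, 2)
print(dominates(state_16, state_15))   # True: only state 16 needs to be kept
```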

5. A Crossroad with Dividing Lines

In this section, we consider a crossroad with dividing lines and four sets, $N_1, N_2, N_3, N_4$, of CAVs. They share four sectors of the crossroad denoted by $M_1, M_2, M_3, M_4$ (see Figure 5). We have to find an optimal sequence of passing these sectors.
We can formulate the following job shop scheduling problem with four machines. There are four sets, $N_1, N_2, N_3, N_4$, of jobs and four machines corresponding to the sectors $M_1, M_2, M_3, M_4$. Each job $j$ consists of two operations. For each job $j \in N_1$, its first operation has to be processed on machine $M_1$ and its second one has to be processed on machine $M_2$. For each job $j \in N_2$, its first operation has to be processed on machine $M_2$ and its second one has to be processed on machine $M_4$. For each job $j \in N_3$, its first operation has to be processed on machine $M_3$ and its second one has to be processed on machine $M_1$. For each job $j \in N_4$, its first operation has to be processed on machine $M_4$ and its second one has to be processed on machine $M_3$. The processing times of the operations are equal to $p$. Precedence relations can be given as chains of jobs.
If the lengths of the dividing lines are equal to 0, then the second operation of a job $j$ should be processed immediately after the first one. Otherwise, there are four buffers of limited capacities $b_1, b_2, b_3, b_4$, one for each of the sets $N_1, N_2, N_3, N_4$. At any moment, for the set $N_1$, there can be up to $b_1$ jobs for which the first operation is completed and the second one is not yet started. We denote these problems by $J4 \mid 4\ chains, p_j = p, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$.
The problems $J4 \mid 4\ chains, p_j = p, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, can be solved by a branch-and-bound (B&B) algorithm. The search (rooted) tree is constructed by the following branching rule. For any node of the tree, we consider the following eight possible branches:
  • Schedule the first unscheduled possible operation for a job $j \in N_1$ on machine $M_1$ at the earliest possible starting time. If there is no such operation, skip this branch (the same applies to the branches below).
  • Schedule the first unscheduled possible operation for a job $j \in N_3$ on machine $M_1$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_1$ on machine $M_2$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_2$ on machine $M_2$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_3$ on machine $M_3$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_4$ on machine $M_3$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_2$ on machine $M_4$ at the earliest possible starting time.
  • Schedule the first unscheduled possible operation for a job $j \in N_4$ on machine $M_4$ at the earliest possible starting time.
Thus, there are up to $2^3 = 8$ branches for each node to be considered. Since there are $2n$ operations, where $n = |N_1 \cup N_2 \cup N_3 \cup N_4|$, there are no more than $2n$ levels in the search tree. Thus, we have no more than $(2^3)^{2n} = 2^{6n}$ nodes to be considered. If some of the values $b_1, b_2, b_3, b_4$ are equal to 0, we have fewer nodes. As an example, if each of them is equal to 0, then we have only $2^{3n}$ nodes.
Moreover, we can use the following trivial upper and lower bounds for the problem $J4 \mid 4\ chains, p_j = p, r_j \mid C_{max}$.
Upper bound. To construct a feasible solution, we use a list scheduling algorithm. In this algorithm, we consider the unscheduled operations one by one according to a non-decreasing order of the release dates of the corresponding jobs. We schedule the next unscheduled operation at the earliest possible starting time according to the current partial schedule. To order the set of jobs, we need $O(n \log n)$ operations. In addition, we need $O(n)$ operations to construct a feasible solution.
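A minimal sketch of this list scheduling rule is given below. It makes two simplifying assumptions of ours: the buffers are large enough to be ignored, and the chain order in each set coincides with non-decreasing release dates; the routes per set follow the job shop formulation above, and all names and the small test instance are illustrative.

```python
# List scheduling upper bound for J4 | 4 chains, p_j = p, r_j | C_max (simplified sketch).
ROUTES = {"N1": ("M1", "M2"), "N2": ("M2", "M4"),
          "N3": ("M3", "M1"), "N4": ("M4", "M3")}

def list_schedule(jobs, p):
    """jobs: list of (release_date, job_set) pairs. Returns the makespan of the feasible
       schedule obtained by scheduling operations at the earliest possible starting time."""
    machine_free = {m: 0 for m in ("M1", "M2", "M3", "M4")}
    prev_end = {(s, i): 0 for s in ROUTES for i in (0, 1)}   # chain precedence per set and operation
    cmax = 0
    for release, job_set in sorted(jobs):                    # non-decreasing release dates
        ready = release
        for i, machine in enumerate(ROUTES[job_set]):
            start = max(ready, machine_free[machine], prev_end[(job_set, i)])
            end = start + p
            machine_free[machine] = end
            prev_end[(job_set, i)] = end
            ready = end                                      # second operation after the first
        cmax = max(cmax, ready)
    return cmax

# A tiny hypothetical instance: one CAV per direction and p = 2.
print(list_schedule([(0, "N1"), (1, "N2"), (0, "N3"), (2, "N4")], p=2))
```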
Lower bound. Consider the set $N'$ of unscheduled operations. For each of them, we calculate the earliest possible starting time according to the current partial schedule without taking into account the other unscheduled operations. In such a way, we obtain a schedule $\pi$ that can be infeasible. Let $C_{M_1}(\pi)$ be the makespan (i.e., the maximal completion time of an operation assigned to the machine) for machine $M_1$, $IT_{M_1}(\pi)$ be the idle time on machine $M_1$ between the operations of the set $N'$, and $OT_{M_1}(\pi)$ be the total overlap time, where more than one operation is processed at the same time. Moreover, let
$C'_{M_1}(\pi) = C_{M_1}(\pi) + \max\{0, OT_{M_1}(\pi) - IT_{M_1}(\pi)\}.$
Similarly, we can define $C'_{M_j}(\pi)$ for $j = 2, 3, 4$. Then,
$LB_1 = \max\{C'_{M_1}(\pi), C'_{M_2}(\pi), C'_{M_3}(\pi), C'_{M_4}(\pi)\}$
is a lower bound. It is easy to check that we need $O(n)$ operations to calculate this bound.
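The idle and overlap times are described above in words only; the following fragment shows one plausible interval-based reading of the corrected value $C'_M(\pi)$ for a single machine, with hypothetical interval data and names of our choosing.

```python
# One plausible computation of C'_M = C_M + max(0, OT_M - IT_M) for a single machine,
# given the (possibly overlapping) earliest-start intervals of its unscheduled operations.

def corrected_makespan(intervals):
    """intervals: list of (start, end) pairs of the operations assigned to one machine."""
    intervals = sorted(intervals)
    c_m = max(end for _, end in intervals)                  # makespan on this machine
    union, last_end = 0, intervals[0][0]                    # length of the union of intervals
    for s, e in intervals:
        union += max(0, e - max(s, last_end))
        last_end = max(last_end, e)
    idle = (c_m - intervals[0][0]) - union                  # gaps between the operations
    overlap = sum(e - s for s, e in intervals) - union      # time covered more than once
    return c_m + max(0, overlap - idle)

# Two overlapping operations and one later operation: C_M = 8, OT = 1, IT = 3 -> C'_M = 8.
print(corrected_makespan([(0, 2), (1, 3), (6, 8)]))
```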
If we use an upper bound and a lower bound, then the B&B algorithm requires $O(n \cdot 2^{6n})$ operations.

6. Concluding Remarks

In this note, four models of scheduling problems for CAVs have been given. Three of them can be solved by a dynamic programming algorithm in polynomial time. For the fourth problem, a B&B algorithm has been presented. The investigations in this paper are only a first step in the development of scheduling algorithms for CAV problems. The presented algorithms can handle only very special problems. However, they can potentially be used as a part of more complex algorithms for more general CAV problems (e.g., considering more lanes, different speeds of the cars, or a closure of more lanes). For real-world problems related to the scheduling of CAVs, fast metaheuristics and online algorithms can be developed in the future. Finally, we give two specific research questions that are worth considering:
  • Are the problems $J4 \mid 4\ chains, p_j = p, r_j \mid f$, $f \in \{C_{max}, \sum w_j C_j, \sum w_j T_j, \sum w_j U_j\}$, NP-hard, or can they be solved in polynomial time?
  • Are there CAV scheduling problems with equal processing times and a fixed number of chains of jobs that are NP-hard?

Author Contributions

Conceptualization, E.R.G. and F.W.; investigation, E.R.G. and F.W.; visualization, E.R.G.; writing—original draft preparation, F.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available from the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ullrich, C. Integrated Machine Scheduling and Vehicle Routing with Time Windows. Eur. J. Oper. Res. 2012, 1, 152–165. [Google Scholar]
  2. Afifi, S.; Dang, D.-C.; Moukrim, A. Heuristic Solutions for the Vehicle Routing Problem with Time Windows and Synchronized Visits. Optim. Lett. 2016, 10, 511–525. [Google Scholar] [CrossRef]
  3. Norouzi, N.; Sadegh-Amalnick, M.; Tavakkoli-Moghaddam, R. Modified Particle Swarm Optimization in a Time-Dependent Vehicle Routing Problem: Minimizing Fuel Consumption. Optim. Lett. 2017, 11, 121–134. [Google Scholar] [CrossRef]
  4. Nasiri, M.M.; Rahbari, A.; Werner, F.; Karimi, R. Incorporating Supplier Selection and Order Allocation into the Vehicle Routing and Multi-Cross-Dock Scheduling Problem. Intern. J. Prod. Res. 2018, 56, 6527–6552. [Google Scholar] [CrossRef]
  5. Rahbari, A.; Nasiri, M.M.; Werner, F.; Musavi, M.; Jolai, F. The Vehicle Routing and Scheduling Problem with Cross-Docking for Perishable Products under Uncertainty: Two Robust Bi-objective Models. Appl. Math. Modell. 2018, 70, 605–625. [Google Scholar] [CrossRef]
  6. Repoussis, P.P.; Gounaris, C.E. Special Issue on Vehicle Routing and Scheduling: Recent Trends and Advances. Optim. Lett. 2013, 7, 1399–1403. [Google Scholar] [CrossRef]
  7. Bunte, S.; Kliewer, N. An Overview on Vehicle Scheduling Models. Public Transport. 2009, 1, 299–317. [Google Scholar] [CrossRef]
  8. Han, M.; Wang, Y. A Survey for Vehicle Routing Problems and its Derivatives. IOP Conf. Ser. Mater. Sci. Eng. 2018, 452, 042024. [Google Scholar] [CrossRef]
  9. Mor, A.; Speranza, M.G. Vehicle Routing Problems over Time: A Survey. 4OR 2020, 18, 129–149. [Google Scholar] [CrossRef]
  10. Bazzal, M.; Krawczyk, L.; Govindarajan, R.P.; Wolff, C. Timing Analysis of Car-to-Car Communication Systems Using Real-Time Calculus: A Case Study. In Proceedings of the 2020 IEEE 5th International Symposium on Smart and Wireless Systems within the Conferences on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS-SWS), Dortmund, Germany, 17–18 September 2020; pp. 1–8. [Google Scholar]
  11. Şahin, T.; Khalili, R.; Boban, M.; Wolisz, A. Reinforcement Learning Scheduler for Vehicle-to-Vehicle Communications Outside Coverage. In Proceedings of the 2018 IEEE Vehicular Networking Conference (VNC), Taipei, Taiwan, 5–7 December 2018; pp. 1–8. [Google Scholar]
  12. Qian, G.; Guo, M.; Zhang, L.; Wang, Y.; Hu, S.; Wang, D. Traffic scheduling and control in fully connected and automated networks. Transp. Res. Part C Emerg. Technol. 2021, 126, 103011. [Google Scholar] [CrossRef]
  13. Atagoziev, M.; Schmidt, E.G.; Schmidt, K.W. Lane change scheduling for connected and autonomous vehicles. Transp. Res. Part C Emerg. Technol. 2023, 147, 103985. [Google Scholar] [CrossRef]
  14. Ma, M. Optimal Scheduling of Connected and Autonomous Vehicles at a Reservation-Based Intersection. Ph.D. Thesis, University of Louisville, Louisville, KY, USA, 2022. [Google Scholar]
  15. Bani Younes, M.; Boukerche, A. An Intelligent Traffic Light scheduling algorithm through VANETs. In Proceedings of the 39th Annual IEEE Conference on Local Computer Networks Workshops, Edmonton, AB, Canada, 8–11 September 2014; pp. 637–642. [Google Scholar]
  16. Graham, R.L.; Lawler, E.L.; Lenstra, J.K.; Rinnooy Kan, A.H.G. Optimization and Approximation in Deterministic Machine Scheduling: A Survey. Ann. Discr. Math. 1979, 5, 287–326. [Google Scholar]
  17. Lenstra, J.K.; Rinnooy Kan, A.H.G. Complexity results for scheduling chains on a single machine. Eur. J. Oper. Res. 1980, 4, 270–275. [Google Scholar] [CrossRef]
  18. Baptiste, P. Scheduling equal-length jobs on identical parallel machines. Discret. Appl. Math. 2000, 103, 21–32. [Google Scholar] [CrossRef]
Figure 1. A road with two lanes and one lane closure.
Figure 4. The initial positions of the jobs and an optimal schedule for Example 2.
Figure 5. A crossroad with dividing lines.
Table 1. Calculations of Algorithm 1 for Example 2.
$r_1$ $r_2$ $r_3$ $r_4$ $r_5$ $r_6$ $d_1$ $d_2$ $d_3$ $d_4$ $d_5$ $d_6$
0 3 1 4 1 5 2 5 3 6 3 8
$S_1$ $S_2$ $P_1$ $C_1$ $P_3$ $C_3$ $\pi_1$ $\pi_3$ $C_1$ $C_2$ $C_3$ $C_4$ $C_5$ $C_6$ $T_1$ $T_2$ $T_3$ $T_4$ $T_5$ $T_6$ $f$
Stage j = 3
1 03 3 3 0000000
2 14 1, 3 2 4 0010001
3 27 1, 2, 3 257 0040004
4 03 3 3 0000000
5 57 5, 3 5 3 0020002
6 69 5, 6, 3 9 370060006
Stage j = 4
7106 3, 4, 1, 25, 6810363765000011
8117 3, 1, 4, 25, 65937373401008
9129 3, 1, 2, 45, 65739373203008
1010 053, 1, 24, 5, 6573681032005212
1110 563, 1, 25, 4, 65736383200005
1210 693, 1, 25, 6, 45739373203008
1321 1, 3, 4, 25, 62846370310004
1422 1, 3, 2, 45, 62648370112004
1521 061, 3, 24, 5, 626468100110529
1621 561, 3, 25, 4, 62646380110002
1721 691, 3, 25, 6, 42649370113005
1832 291, 2, 3, 45, 62579370043007
1932 051, 2, 34, 5, 62579111300438520
2032 561, 2, 35, 4, 6257931100430310
2132 691, 2, 35, 6, 42579370043007
22406 4, 1, 23, 5, 6810365765002013
23416 1, 4, 23, 5, 62836570300205
24427 1, 2, 43, 5, 62537570001203
254 061, 23, 4, 5, 625368100000527
264 571, 23, 5, 4, 62537590001214
274 691, 23, 5, 6, 42539570003205
285065 4, 1, 25, 3, 6911573776210016
295175 1, 4, 25, 3, 62957370421007
305275 1, 2, 45, 3, 62557370021003
315 5 1, 25, 3, 4, 62557390021014
325 6 1, 25, 3, 6, 42559370023005
3360116 4, 1, 25, 6, 31315911371110650032
346166 1, 4, 25, 6, 32139113708650019
3562116 1, 2, 45, 6, 3259113700650011
366 6111, 25, 6, 3, 4259113700650011
