1. Introduction
Coordination between different stages of a supply chain is an important issue affecting the overall efficiency of the manufacturing process. Poor coordination of the decisions taken at different stages of a supply chain, including the scheduling, batching and delivery stages, may result in a poor overall performance [1]. In a supply chain scheduling problem, the batching and delivery activities are to be combined. The orders are released by a supplier to a manufacturer over time and, once processed, they need to be delivered to a customer. To minimize the overall costs, two or more orders can be delivered together in a batch. For example, in some applications, a number of products can be processed by a single batch machine simultaneously (e.g., burn-in operations in semiconductor manufacturing, see Lee and Martin-Vega [2]), or they can be grouped and delivered in a single batch (e.g., in multistage flow shops where the products grouped into batches can be transported between the machines). A batch, formed in accordance with given restrictions, is processed by a batch machine, which can handle a number of jobs or orders simultaneously, in contrast to a machine in traditional scheduling problems, which can process at most one job at a time. In capacitated batch scheduling problems, only a constant number of orders may be handled by a batch machine simultaneously, unlike the uncapacitated version, in which a batch machine may handle an unlimited number of jobs. The latter assumption is realistic in practice, for example, for vehicles of large capacity when the size of the completed orders is negligible (these can be integrated chips). In the commonly studied models, either the number of batch machines is unlimited, or there is a single batch machine, or there are two or more parallel batch machines. In the first case, with a sufficient number of batch machines, each batch is completely specified by a set of orders and by its starting/delivery time. If there is a single batch machine, then each next batch may be assigned to the machine only when it becomes idle, i.e., when it completes the delivery of the last batch assigned to it. The case with parallel batch machines is an extension of the previous model in which, in addition, each batch is to be assigned to a particular batch machine.
The overall efficiency of the scheduling, batching and delivery decisions in supply chain scheduling problems can be measured in different ways. Here, the measure introduced by Hall and Potts [1] is considered. It combines two criteria in one objective function: the total batch delivery cost and a scheduling cost, which is determined by the total flow time of the orders before their delivery. There is a sufficient number of batch machines, and the batch delivery time is constant. The cost D of delivering a batch does not depend on the number of orders in the batch, so there is no restriction on the number of orders that can be assigned to a single batch. The orders are released by a supplier to the manufacturer over time and are to be delivered to a customer in batches. One or more orders can be released by the supplier simultaneously. In this model, the processing time of each order is negligible and the emphasis is placed on the batching and delivery activities, with the aim of minimizing the total flow time and the delivery cost of all the released orders. In Figure 1, the problem is represented schematically. A supplier releases the orders over time to a manufacturer. The manufacturer forms batches from the already released orders and delivers them to a customer (there is a single customer and a single manufacturer). Note that these activities are present in nearly all supply chain scheduling problems; therefore, it is of primary importance to arrange them in the most efficient way possible.
The purpose of this work is the development of an algorithmic framework leading to polynomial-time algorithms for the above basic supply chain scheduling problem that run faster than the existing ones. The proposed framework gives new insight into more complex supply chain scheduling problems and can be extended to the solution of these problems as well (see Section 5 and Section 6). Hall and Potts [1] have studied a wide class of related general supply chain scheduling problems allowing two or more suppliers, manufacturers and customers. The time complexity of the dynamic programming algorithms that they propose is polynomial in n, with the degree of the polynomial depending on the number of suppliers, manufacturers and customers. In a model from [1] that fits our setting, there are H customers and a single batch for each customer is created (Section 3.1 in [1]). The proposed algorithm can be adapted to the basic setting with a single customer but multiple deliveries (in [1], a single delivery for each customer is accomplished); with H deliveries, the resultant time complexity is a polynomial in n whose degree depends on H (hence the use of this algorithm for the basic problem would not be beneficial). More recently, Averbakh and Baysan [3] have studied a model similar to ours, but with the restriction that only one order is released at each order release time. The authors present an algorithm that also uses dynamic programming, verifying essentially all possible ways of splitting the solution into batches. This dynamic programming method can easily be adapted to our setting (which allows two or more simultaneously released orders), although the resulting algorithm remains slower than the ones proposed here. The authors in [3] also consider the online setting, in which, by each order release (assignment) time t, all orders released within the interval [t, t + T] are known. However, here it is shown that it suffices to consider only values of T not exceeding the delivery cost D; at the same time, for larger values of T, an optimal linear-time algorithm is proposed, which is better than the earlier known algorithm (see Section 2.2).
Recall that the time efficiency of a computational algorithm for a given problem P is measured in terms of its (worst-case) time complexity [4], which is a function representing the maximum number of elementary operations that the algorithm requires to generate a solution to any instance of problem P, where the argument of this function is the length of the instance (the number of computer words used in the representation of this instance, stored in binary encoding in computer memory). Polynomial-time algorithms (ones with polynomial time complexity) are the most time-efficient ones, and the fastest polynomial-time algorithms are those whose time complexity is a polynomial of lower degree.
O(n) computer words are required to represent an instance of our supply chain scheduling problem, where n is the total number of orders (each order has a single parameter, its release time, and we need to represent one additional parameter, the delivery cost D). Hence, O(n) is the length of the input in the time complexity expressions of the above-mentioned algorithms.
Besides the offline setting, the online setting of the basic supply chain scheduling problem is considered, similar to that from the above-mentioned reference [3]: at each assignment (order release) time t, the information on the next order released within the following T time units, i.e., within the interval [t, t + T], is known, where T is a constant. Our setting is similar to, but less restrictive than, that of [3], which assumes that by time t all the orders released within the interval [t, t + T] are known. It is shown that there is no benefit in waiting for more than D time units at any assignment time in the online setting, i.e., the potentially beneficial values of T do not exceed D, and three linear-time offline algorithms are proposed which also work in the online case. Sufficient conditions under which these algorithms are optimal, for both the offline and the above online settings, are established. In addition, an important real-life scenario with the so-called homogeneously released orders is studied. This problem naturally arises in many industries and hence has an independent practical significance. It addresses a typical situation when the same number of orders simultaneously become ready for delivery and the manufacturer repeatedly releases the orders within the same fixed time lapse. As a practical example that fits this setting, consider a food manufacturing process where a supplier dispatches raw products to the manufacturer twice a day, say at 8 a.m. (for the day shift) and at 8 p.m. (for the night shift). These products, after processing, are delivered to customers (food distributors or supermarkets). Although the delivery times are flexible, the manufacturer wishes to reduce the cost by delivering the finished products in batches.
In more formal terms, orders are released at each order release time and these times are evenly distributed, being separated from each other by a fixed magnitude, the distance between each neighboring pair of release times in the scheduling horizon. It is shown that, for the proposed model, the total delay of the orders can be calculated at an almost constant cost and, as a consequence, we arrive at a very fast algorithm that optimally solves this problem in steps, where is the maximum order release time (in the offline setting) and and are functions of and . Here, represents the number of deliveries (k is a fraction of ). Observe that for the construction of a feasible solution with deliveries (which is the number of deliveries in an optimal solution), at least steps are required, and hence the proposed algorithm has an almost best possible running time, which yields obvious potential tangible benefits. An implementation of the algorithm may also yield intangible benefits for the workers of the manufacturing process, since reduced delivery costs may give them a sense of satisfaction.
The paper is organized as follows. In the next section, the studied supply chain scheduling problem is described and a short overview of some related work is given. In Section 3, basic concepts, definitions and properties are introduced, based on which three algorithms are proposed; sufficient conditions under which they construct an optimal solution are given. In Section 4, a practical setting with homogeneously released orders is considered and a fast optimal solution method is presented. In Section 5, it is shown how the proposed framework can be adapted to more extended models, considering, as an example, a two-stage supply chain scheduling model. In Section 6, a final comparative analysis and concluding remarks are given.
3. Methodology
3.1. Materials
In this section, first, the general scheduling strategy that the proposed algorithms apply is described, and then the relevant definitions are introduced. Similar scheduling strategies can be applied to different supply chain scheduling problems (see Section 5). Recall that in these problems, a supplier releases the orders over time. We temporarily assign the orders released over time to the so-called pending batch. For example, in Figure 1, the orders released at the first five release times are temporarily assigned to the corresponding pending batches (correspondingly, five different pending batches are successively formed, where each newly formed one extends the previous pending batch). Once a decision is made to deliver the orders from the (last) pending batch all together in Batch 1, these orders are assigned to that batch and are dispatched; the latter pending batch disappears, and the orders released at the next order release time are assigned to a new pending batch. Note that since the orders from a pending batch are not delivered, a pending batch is not a normal batch: neither the delivery time of the orders assigned to that batch nor the subset(s) of the orders from that batch that will be delivered simultaneously in a single batch is defined.
At every decision point t (which is an order release time), there is a set of the already delivered orders and a set of the pending orders, ones from the current pending batch . We denote by , , all the distinct release times of the pending orders from batch (here ). We shall refer to each as a (potential) splitting point by time t. We denote by the last splitting point (which is the maximum order release time in the off-line setting).
At splitting point the current pending batch is said to be extended if the new pending batch is formed in which the orders of batch and the orders released at time are included.
Alternatively, pending batch is said to be split at point , , if the orders released by time are delivered in a (normal) batch at time and the orders released after that time up to (including) time are assigned to the new pending batch . So a splitting point is a time moment at which the current pending batch may potentially split into two distinct batches: the first of these batches is a normal newly created batch and the second one is the new pending batch.
If the actual splitting of pending batch occurs at some point (i.e., the algorithm under consideration makes such a decision), then that point is said to become active at time . In this case we split pending batch into two batches: the first one is a normal batch containing all the yet undelivered orders released by time and is delivered at time , and the second one is a new pending batch containing the remaining orders from batch (the ones released after time ) and the orders released at time .
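To make the pending-batch terminology concrete, the following minimal sketch (the class and method names are ours, not taken from the paper, and an order is assumed to be fully described by its release time) models a pending batch that can be extended with newly released orders or split at a splitting point, with the orders released by that point leaving as a normal batch.

```python
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class PendingBatch:
    # release times of the yet undelivered orders, in nondecreasing order
    releases: List[float] = field(default_factory=list)

    def extend(self, release_time: float, how_many: int) -> None:
        """Extend the pending batch with the orders released at `release_time`."""
        self.releases.extend([release_time] * how_many)

    def split(self, split_point: float) -> Tuple[List[float], "PendingBatch"]:
        """Split at `split_point`: the orders released by that time form a normal
        batch delivered at `split_point`; the remaining (later released) orders
        start a new pending batch, to which the caller may then add the orders
        released at the current decision time."""
        delivered = [r for r in self.releases if r <= split_point]
        remaining = [r for r in self.releases if r > split_point]
        return delivered, PendingBatch(remaining)
```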
Lemma 1. Every batch in an optimal solution is delivered at some splitting point and all the orders released at that point are delivered in the same batch.
Proof. Let B be a batch in solution , and r be the maximum release time of an order from that batch. Batch B cannot be delivered before time r, and if it is delivered after time r then the delivery cost can obviously be decreased by delivering this batch at time r. In particular, all the orders released at time r must be delivered in batch B as the total delay of these orders will then be 0. □
Let , where we will use for the length of interval , . Let, further, ( , respectively) be the number of the orders in pending batch having the release time no larger than (having the release time , respectively), and let be the number of the orders in pending batch released between times and , including ones released at these times; here , , and in the latter case, . is the total delay of the orders from the pending batch if they are delivered in a single batch at time . More generally, if we consider only the orders released from time , then will be the total delay of the orders from pending batch released between time moments and (including these time moments) if they are delivered in a single batch at time (note that for such deliveries the total delay of the orders released at time is 0).
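The delay bookkeeping just described can be illustrated by a small sketch (the function names are ours): given the release times of the pending orders, it counts the orders released within a closed interval and computes their total delay if they are all delivered together at a given time, the delay of an order being its delivery time minus its release time, in line with the flow-time measure described in the introduction.

```python
from typing import Sequence


def orders_between(releases: Sequence[float], t_i: float, t_j: float) -> int:
    """Number of pending orders released in the closed interval [t_i, t_j]."""
    return sum(1 for r in releases if t_i <= r <= t_j)


def total_delay(releases: Sequence[float], delivery_time: float) -> float:
    """Total delay of the given orders if all of them are delivered together at
    `delivery_time` (orders released exactly at that time contribute 0)."""
    return sum(delivery_time - r for r in releases if r <= delivery_time)
```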
Clearly, the last delivery occurs at the release time of the latest released order in an (offline) optimal solution . Besides this active splitting point in solution , we may have other predictable active splitting points. A splitting point is called terminal if either it is the last splitting point (in the offline setting) or the distance from it to the next splitting point is at least D.
Lemma 2. If is a terminal splitting point then there is a batch that delivers the orders released at time at time in an optimal solution (this batch may also contain orders released before time ).
Proof. By Lemma 1, all the orders released at some splitting point are to be delivered together in one batch. By contradiction, suppose the orders released at time are delivered at time or later. Since is a terminal point, the total delay of these orders will be no less than D (whereas it will be 0 if these orders are delivered at time ). The lemma obviously follows, since the cost of the delivery of a batch at time containing all the orders released at time is D. □
As a simple consequence of the above lemma, we obtain that, for a problem instance in which the distance between any two neighboring splitting points is at least D, a single batch in solution delivers all the orders released at each splitting point, and this applies to both the offline and the online settings.
Corollary 1. In the online version of the problem, there is no benefit in waiting for more than D time units for the next incoming order. Hence, for any , the online setting is reduced to the offline case. Therefore, the potentially interesting values for T in the online setting are ones that fall in the interval .
Proof. Immediately follows from Lemma 2 (which ensures that if the distance from splitting point r to the next splitting point is at least D, then all orders released at time r can be delivered at that time). □
3.2. Methods
In this section, we describe our algorithms and give the conditions when they provide an optimal solution to our problem.
Algorithm 0. According to Lemma 2, our first algorithm, Algorithm 0, forms our initial solution as follows. Iteratively, at each splitting point , it extends the pending batch with the orders released at time , forming in this way the new pending batch, unless is a terminal splitting point. In the latter case, the algorithm creates a batch that delivers at time the orders from pending batch and the orders released at time . In this way, Algorithm 0 naturally partitions the incoming orders into segments, with a corresponding terminal point associated with each segment. For each such segment, Algorithm 0 creates a single batch (the segment batch) delivering all the orders released within that segment at the corresponding terminal splitting point. Since a segment can be handled independently of the other segments, from here on we concentrate our attention on a single segment, to which we shall refer as the segment, and will denote by the splitting points from that segment ( being the terminal splitting point of that segment).
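A minimal sketch of Algorithm 0 is given below (the names are ours). Following the remark after Lemma 2 and the proof of Corollary 1, it treats a splitting point as terminal when it is the last release time or the next release is at least D time units away; this reading of the terminal condition is stated here as an assumption, not reproduced from the paper.

```python
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple


def algorithm0(releases: Sequence[float], D: float) -> List[Tuple[float, List[float]]]:
    """Sketch of Algorithm 0: scan the splitting points left to right, keep
    extending the pending batch, and deliver it whenever a terminal splitting
    point is reached.  Returns a list of (delivery time, release times of the
    delivered orders)."""
    # group the orders by their (distinct) release times, i.e., the splitting points
    by_point: Dict[float, List[float]] = defaultdict(list)
    for r in releases:
        by_point[r].append(r)
    points = sorted(by_point)

    batches: List[Tuple[float, List[float]]] = []
    pending: List[float] = []
    for idx, t in enumerate(points):
        pending.extend(by_point[t])
        last_point = idx == len(points) - 1
        terminal = last_point or points[idx + 1] - t >= D   # assumed terminal test
        if terminal:
            batches.append((t, pending))
            pending = []
    return batches
```

For instance, with release times (0, 1, 5, 6) and D = 3, the sketch delivers one batch at time 1 (the orders released at 0 and 1) and one at time 6 (the orders released at 5 and 6).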
Lemma 3.
(i) Algorithm 0 has a best possible time complexity of O(n).
(ii) Every active splitting point in solution σ is also an active splitting point in an optimal solution .
(iii) Solution σ is optimal if every splitting point is terminal. Otherwise, it remains optimal if for every terminal splitting point in it, where is the earliest splitting point (the minimum order release time) in the segment containing terminal splitting point .
Proof. Algorithm 0 works on iterations with a constant cost at each of these iterations (the orders released at each splitting point can be included into a batch at a constant cost, and condition (3) can also be verified in constant time at that point). This implies that the time complexity of the algorithm is O(n). This is also a best possible time complexity, as any feasible solution must contain the orders released at each of the splitting points, and hence all of them need to be considered in a linear fashion. Claim (ii) and the first statement of claim (iii) immediately follow from the construction of solution and Lemma 2.
We now show the second statement of claim (iii). Let . Note that inequality (4) essentially says that if the orders released up to point in the segment containing point in pending batch are delivered at time , then their total delay will be no greater than D, the cost of the creation of a new separate batch at or before time . In particular, the total delay of the orders from that segment of pending batch released between points and within the interval would also be no greater than D (in case the corresponding new batch is created). Consider a feasible solution obtained from solution by delivering a separate batch at point . If that solution is optimal, it delivers a batch also at time (claim (ii)). So in solution , the first batch delivers the orders released by time at that time, and the second batch delivers the orders released between times and at time . We have , where stands for the total delay of the orders from pending batch released between points and within the interval . We have . Thus, we have shown that the terminal splitting point is the only active splitting point of the corresponding segment of optimal solution . Since this holds for any segment of solution , it is optimal. □
From here on in this paper, we assume that none of the optimality conditions from (iii) in Lemma 3 is satisfied. Note that since all the active splitting points of the initial solution are also active splitting points in an optimal solution , if solution possesses additional active splitting points then it may contain two or more batches for the jobs of some segment.
Algorithm 1. Our next algorithm employs the concept of the relative total delay, which we define now. Consider the pending batch containing the orders released between points and (including these points, for some positive integer ). Here, we assume that the orders released before time are already delivered. These orders may be delivered in a single batch at time or, alternatively, they can be delivered together with the orders released at the next splitting point in a single batch at time . We may compare these alternatives employing the notion of the relative total delay.
Lemma 4. Suppose k is the minimum integer such that condition (5) is satisfied. Then, in an optimal solution , the orders released between points and are not delivered at time (i.e., they are delivered at one or more points from set in solution ).
Proof. By way of contradiction, suppose the orders released between points and are delivered at time in solution . Consider an alternative solution obtained from by delivering these orders in a single batch at time . It is easily seen that the reduction in the total delay of the latter orders in solution (compared to that in solution ) is , whereas the cost of the newly created batch that dispatches at time in solution is D, i.e., . Then, by condition (5), , and hence cannot be an optimal solution. □
Algorithm 1 relies on the above lemma and on Lemma 2. Iteratively, if the next splitting point is terminal, it creates a batch that delivers at that time the orders from pending batch and the orders released at time . Otherwise, it verifies condition (5): if the condition is not satisfied, the orders released at time are added to the current pending batch; otherwise, a new batch is created in which these orders (the ones released between points and ) are delivered at time . Importantly, Algorithm 1 retains the time complexity of Algorithm 0 (this can be seen similarly as in Lemma 3).
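Condition (5) is not reproduced in the text above; the sketch below (names are ours) therefore assumes that it compares the extra delay the already-pending orders would accumulate while waiting for the next splitting point against the delivery cost D, which matches the informal description of the relative total delay given before Lemma 4.

```python
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple


def algorithm1(releases: Sequence[float], D: float) -> List[Tuple[float, List[float]]]:
    """Sketch of Algorithm 1 under the assumption stated above: deliver the
    pending batch at the current splitting point whenever postponing it to the
    next splitting point would add at least D to its total delay."""
    by_point: Dict[float, List[float]] = defaultdict(list)
    for r in releases:
        by_point[r].append(r)
    points = sorted(by_point)

    batches: List[Tuple[float, List[float]]] = []
    pending: List[float] = []
    for idx, t in enumerate(points):
        pending.extend(by_point[t])
        if idx == len(points) - 1:            # last splitting point: deliver everything
            batches.append((t, pending))
            pending = []
            break
        gap = points[idx + 1] - t
        # assumed reading of condition (5): the extra delay the pending orders would
        # accumulate by waiting for the next splitting point is at least D; since the
        # pending batch is nonempty, this also covers the terminal test gap >= D
        if len(pending) * gap >= D:
            batches.append((t, pending))
            pending = []
    return batches
```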
Algorithm 2. By Lemma 4, if condition (5) for point is satisfied, the orders released between points and are delivered at one or more splitting points from set in solution . Algorithm 1 creates a single delivery, at point . An optimal solution, however, may contain more than one active splitting point from set . Our next algorithm considers this possibility, employing a different condition for creating each next active splitting point. Let be the release time of the earliest released yet undelivered order from the current pending batch . Algorithm 2 verifies whether the total delay of the (yet undelivered) orders from pending batch (released between times and ), if delivered at time , will be less than the batch delivery cost D.
If the condition is satisfied, then it is repeatedly verified for the next splitting point. Otherwise, we call point (the latest splitting point for which the inequality is satisfied) a semi-terminal splitting point.
Whenever splitting point is a semi-terminal or terminal point in pending batch , Algorithm 2 creates a new batch that delivers all the orders from batch at time . Thus, Algorithm 2 creates all the active splitting points that Algorithm 0 creates, and it also creates additional splitting points in each segment if condition (4) in Lemma 3 is not satisfied.
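A sketch of Algorithm 2 follows, under the same caveats as before: the exact inequality is not reproduced above, so the semi-terminal test below is our reading of it, namely that the pending batch keeps growing while its total delay, evaluated at the next splitting point, would still be below D, and otherwise the current point is treated as semi-terminal and the pending batch is delivered there.

```python
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple


def algorithm2(releases: Sequence[float], D: float) -> List[Tuple[float, List[float]]]:
    """Sketch of Algorithm 2 under the assumptions stated above."""
    by_point: Dict[float, List[float]] = defaultdict(list)
    for r in releases:
        by_point[r].append(r)
    points = sorted(by_point)

    def delay_at(orders: List[float], t: float) -> float:
        # total delay of the given orders if delivered together at time t
        return sum(t - r for r in orders)

    batches: List[Tuple[float, List[float]]] = []
    pending: List[float] = []
    for idx, t in enumerate(points):
        pending.extend(by_point[t])
        if idx == len(points) - 1:                       # last splitting point: deliver
            batches.append((t, pending))
            pending = []
            break
        nxt = points[idx + 1]
        terminal = nxt - t >= D                          # assumed terminal test
        semi_terminal = delay_at(pending, nxt) >= D      # waiting further would cost >= D
        if terminal or semi_terminal:
            batches.append((t, pending))
            pending = []
    return batches
```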
4. Applications
In this section, an important special case of our problem, which naturally arises in a number of industries and hence also has practical significance, is studied. First the problem is introduced, and then the performance of the proposed algorithm is analyzed. The special case addresses a typical situation when the same number of orders is released at each splitting point and these points are evenly spaced. We call such orders homogeneously released orders; more formally, there are positive integer constants such that the same number of orders is released at every splitting point and every two neighboring splitting points are the same fixed distance apart.
It is less costly to calculate the total order delay in a feasible solution to this problem. In particular, it is easy to see that the total delay of the orders released at k + 1 consecutive splitting points, if they are delivered in a single batch at the last of these points (for a non-negative integer k), admits a simple closed form; likewise for the relative total delay.
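With assumed names (not the paper's) ν for the number of orders per splitting point and δ for the spacing between neighboring splitting points, the total delay of the orders released at k + 1 consecutive splitting points, all delivered at the last of them, is ν·δ·(1 + 2 + ··· + k) = ν·δ·k(k + 1)/2; the snippet below checks this closed form against direct summation.

```python
def homogeneous_total_delay(nu: int, delta: float, k: int) -> float:
    """Closed-form total delay of the orders released at k + 1 consecutive,
    evenly spaced splitting points (nu orders per point, spacing delta) when
    all of them are delivered at the last of these points."""
    return nu * delta * k * (k + 1) / 2


def brute_force_delay(nu: int, delta: float, k: int) -> float:
    """Direct summation over the individual orders, for verification."""
    delivery = k * delta
    return sum(nu * (delivery - j * delta) for j in range(k + 1))


assert homogeneous_total_delay(3, 2.0, 5) == brute_force_delay(3, 2.0, 5)
```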
Lemma 5. If for an instance with the homogeneously released orders then all the splitting points are terminal. Hence, Algorithm 0 finds an optimal solution for that instance.
Proof. The first claim easily follows and the second one follows from Lemma 3. □
Suppose now that , let , and let . It is easy to see that Algorithm 1, applied to an instance with the homogeneously released orders, will deliver the orders released at points in a single batch at time , the orders released at points in a single batch at time , and so on, unless one of these points is terminal; in that case it will deliver all the orders from the current pending batch in a separate batch at that splitting point. Algorithm 2 works similarly; we just replace with in its description. Notice that the time complexities of Algorithms 1 and 2 are then reduced to and , respectively.
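Under the same assumed reading of condition (5) as in the Algorithm 1 sketch, the threshold number of splitting points per batch for homogeneously released orders can be computed directly; the name k_star and the formula below are our assumption, not taken from the paper.

```python
import math


def k_star(nu: int, delta: float, D: float) -> int:
    """Assumed threshold for homogeneously released orders: the smallest number k
    of consecutive splitting points such that postponing the pending batch by one
    more point would add at least D to its total delay (nu * k * delta >= D)."""
    return max(1, math.ceil(D / (nu * delta)))
```

With this threshold, the Algorithm 1 sketch simply delivers a batch every k_star splitting points until a terminal point is met, which is consistent with the reduced running times mentioned above.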
Since every segment can be dealt with independently of the other segments, without loss of generality we shall concentrate our attention on a single segment, with being the maximum order release time in that segment. Note that and hence for any .
Lemma 6. In an optimal solution to an instance with the homogeneously released orders, the number of batches is not less than and not more than .
Proof. By way of contradiction, suppose first that the number of batches in solution (in the segment) is less than . Then there must exist two neighboring active splitting points and in solution with . Hence, . However, then, clearly, solution can be improved by creating a new batch that delivers the orders released between points and (including these endpoints) at time , a contradiction. Suppose now that the number of batches in solution (in the segment) is more than . Then there must exist three neighboring active splitting points and in solution such that . Hence, . Let be the solution obtained from solution by eliminating the batch starting at time in and delivering at time the orders from the eliminated batch together with the orders from the batch starting at that time in solution . We have , where is the total delay of the orders released between times and within the interval in solution , which is precisely the increase in the total delay of the orders in solution compared to solution . Since (the first inequality is obvious and the second one follows from inequality (7)), holds, another contradiction. The lemma is proved. □
Let us now consider feasible solutions with a given number of batches, with , . The -canonical solution with batches is defined as follows: if , then, repeatedly, each batch from delivers the orders released at the next k successive splitting points: the first batch delivers the orders released at times at time , the second batch delivers the orders released at times at time , and so on, and the last batch delivers the orders released at times at time . If , then solution is defined similarly to the case , with the only difference that the first batches contain the orders released at the next successive splitting points and the batch delivery times are modified correspondingly. Note that we can construct the -canonical solution similarly as in Algorithm 1 (Algorithm 2), just replacing ( ) by k.
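A sketch of the construction just described, with assumed names: the κ + 1 evenly spaced splitting points (ν orders each, spacing δ) are partitioned into β consecutive groups whose sizes differ by at most one, and each group is delivered at its last splitting point. For evenly spaced homogeneous releases, the resulting cost depends only on the group sizes, not on where the larger groups are placed, so the sketch simply puts them first.

```python
from typing import List, Tuple


def canonical_cost(kappa: int, beta: int, nu: int, delta: float, D: float
                   ) -> Tuple[List[int], float]:
    """Sketch of the beta-canonical solution for kappa + 1 evenly spaced splitting
    points (nu orders per point, neighboring points delta apart): split the points
    into beta consecutive groups whose sizes differ by at most one and deliver each
    group at its last splitting point.  Returns the group sizes and the total cost
    (beta deliveries plus the total order delay)."""
    points = kappa + 1
    base, extra = divmod(points, beta)
    sizes = [base + 1] * extra + [base] * (beta - extra)
    # a group of s consecutive points delivered at its last point incurs a delay of
    # nu * delta * (s - 1) * s / 2 (see the closed form above)
    delay = sum(nu * delta * (s - 1) * s / 2 for s in sizes)
    return sizes, beta * D + delay
```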
Lemma 7. The β-canonical solution to an instance with the homogeneously released orders is constructed in time (with , ) and has the minimum cost among all feasible solutions with β batches.
Proof. From we immediately obtain that the cost of the construction of solution is (similarly to Algorithms 1 and 2). Now we turn to the optimality proof for the case . Let us consider two neighboring batches and , delivering at times and , respectively, the orders released at times and , respectively, in solution . Consider an alternative solution , which contains the same batches as solution except that the sets of orders in the ith and (i + 1)st batches are modified: the ith batch contains all the orders from batch except the ones released at time ; the latter orders are assigned to the (i + 1)st batch in solution together with the orders from batch . In this way, the delivery time of the first batch is and the starting time of the second one is in solution . Clearly, the total delay in the ith batch in the latter solution is decreased by compared to batch , and the total delay in the (i + 1)st batch in solution is increased by compared to batch , i.e., , and hence . The repeated application of the same interchange argument easily yields that a feasible solution containing two batches which differ by more than one in the number of the covered splitting points (the release times of the orders from these batches) cannot have a total cost less than , which proves the lemma for the case . In the case , there exists no solution in which each batch covers the same number of splitting points. Hence, in a minimum-cost solution with batches, at least two batches must extend to different numbers of splitting points. The interchange argument applied above for the case easily yields that there can be no benefit in creating batches that differ by more than one in the number of the covered splitting points. Hence, is the minimum possible total cost for any feasible solution with batches. □
Corollary 2. An optimal solution to an instance with the homogeneously released orders can be obtained in the number of steps bounded above by .
Proof. Based on Lemmas 6 and 7, we can easily construct an optimal solution by combining the procedure for the creation of the -canonical solution with binary search. We consider trial values for from the interval (see Lemma 6) using binary search: for each from this interval, we construct the -canonical solution by the above-mentioned procedure in time (see Lemma 7). The time complexity bound also follows, since the number of trial values is bounded above by . □
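A sketch of the procedure of Corollary 2, with assumed names: the cost of the β-canonical solution (β deliveries plus the total delay) is evaluated in closed form, and binary search locates the smallest β at which increasing β by one no longer decreases the cost. The sketch relies on this cost being convex (unimodal) in β, which is what makes the binary search of the corollary applicable, and for simplicity it searches the trivial range from 1 to κ + 1 batches rather than the tighter bounds of Lemma 6.

```python
def optimal_homogeneous_cost(kappa: int, nu: int, delta: float, D: float) -> float:
    """Sketch of the binary-search procedure of Corollary 2 for homogeneously
    released orders: return the cost of the best beta-canonical solution."""

    def cost(beta: int) -> float:
        # cost of the beta-canonical solution: beta deliveries plus total delay
        base, extra = divmod(kappa + 1, beta)
        sizes = [base + 1] * extra + [base] * (beta - extra)
        return beta * D + sum(nu * delta * (s - 1) * s / 2 for s in sizes)

    lo, hi = 1, kappa + 1               # at least one batch, at most one per splitting point
    while lo < hi:                       # find the smallest beta where cost stops decreasing
        mid = (lo + hi) // 2
        if cost(mid + 1) >= cost(mid):
            hi = mid
        else:
            lo = mid + 1
    return cost(lo)
```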
5. Extensions
In this section, a possible extension of the studied model is considered and it is shown how the basic notions and properties earlier introduced for our basic model can be adopted for the extended setting. Consider a two-stage supply chain scheduling model, in which we have a traditional machine and the batch machines with the delivery cost D. The processing requirement of order on the machine is . Order cannot be delivered until it is finished on the machine, i.e., it is processed for units of time on the machine once it is already released. A feasible solution specifies the starting time of each order (job) on the machine and the starting time of each batch (that may contain any number of the already finished and yet undelivered orders).
In the offline setting, a feasible solution can be constructed in two stages. At the first scheduling stage all the orders are scheduled on the machine, and at the second delivering stage the scheduled orders are distributed into the batches.
It can be easily observed that neither of the two claims of Lemma 1 holds (in general) for an optimal solution to this two-stage supply chain scheduling problem according to our definition of a splitting point. However, if this notion is redefined for the two-stage problem as an order completion time on the machine, then Lemma 1 can be reformulated as follows:
Lemma 8. Every batch in an optimal solution to the two-stage supply chain scheduling problem is delivered at the completion time of some order and that batch contains all the yet undelivered finished orders.
Proof. Similarly to Lemma 1, consider a feasible solution containing a batch that is delivered between the completion times of two successively performed orders i and j on the machine. Then the total cost of that solution can clearly be decreased by replacing this batch with another batch that delivers the orders from the former batch at the completion time of order i. Likewise, a batch that does not include all the orders released by the starting time of that batch can be replaced by a batch which contains all these orders without increasing the total cost. □
It is helpful to look at the problem in a two-dimensional scale, representing schematically the machine time and the delivery costs on the X-axis and Y-axis, respectively. The X-axis can be seen as the already formed sequence of all the orders with their starting times, constructed at the first scheduling stage. It may contain two kinds of points, an order release time and an order completion time; we shall refer to the first and the second kind as type (1) and type (2) splitting points, respectively. Note that an active splitting point can only be a type (2) splitting point, and that the same point may be both a type (1) and a type (2) splitting point (e.g., an order may be completed at the release time of another order). At the second, delivering stage, some type (2) splitting points are extended by vertical line segments of length D (so a complete solution can be represented by extending its every active splitting point with a vertical line segment of length D).
The X-axis may contain a gap, a machine idle time, which occurs in case all the released orders are already finished on the machine and there is no new order released at the completion time of the latest scheduled order, say i (so a gap is a time interval on the X-axis between the completion time of order i and the starting time of the next released order, in case such an order exists). Note that the left endpoint of a gap is a type (2) splitting point (an order completion time) and its right endpoint is a type (1) splitting point (an order release time).
A block is a sequence of orders such that the first one starts immediately after a gap (or at the earliest order release time), there is no gap between two successive orders, and the last scheduled order of the sequence is immediately succeeded by a gap (or there is no further order to be scheduled).
Call a sequence of all the simultaneously released orders on the X-axis a chain; a sub-chain of a chain consists of some successively scheduled orders of that chain. Note that to each (sub)chain C a particular order release time (a type (1) splitting point) corresponds, which will be referred to as the release time of that (sub)chain and denoted by .
Let the offset of (sub)chain C be the difference between the starting time of the earliest scheduled order of this (sub)chain and the release time of that (sub)chain, and let denote the completion time of the last scheduled job of (sub)chain C (a type (2) splitting point).
Property 1. The X-axis of an optimal solution to the two-stage supply chain scheduling problem is formed by a sequence of (sub)chains. Every block in that solution starts with a (sub)chain with 0 offset and is followed by zero or more (sub)chains with a positive offset. Furthermore, every delivery in that solution occurs at time , for some (sub)chain C (a type (2) splitting point).
Proof. The first part can be established using a simple interchange argument, and the second part follows from Lemma 8. □
For an already constructed X-axis, consider a (sub)chain and a type (2) splitting point , (a potential delivery time of a batch including the orders in C). We can calculate , the total delay of the k orders from (sub)chain C if they are delivered at time , as the sum of the delays of all these k orders.
Denote by and the starting and completion times, respectively, of block B. Suppose and are type (2) splitting points from block B (the interval may include more than one (sub)chain from block B), and let be another type (2) splitting point with (point does not necessarily belong to the interval of block B). The total delay of all the orders scheduled within the interval in block B, if they are delivered at time , can clearly be obtained by summing up the total delays of the (sub)chains from that interval.
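The per-chain and per-block delay expressions above are left unspecified in the text; a sketch of how they would be computed, under the assumption that an order's delay is its delivery time minus its release time (the same flow-time measure as in the basic model), is given below. The type and function names are ours.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SubChain:
    release: float        # common release time r(C) of the orders in the (sub)chain
    size: int             # number of orders in the (sub)chain
    completion: float     # completion time of its last scheduled order, a type (2) point


def chain_delay(c: SubChain, delivery_time: float) -> float:
    """Total delay of the orders of (sub)chain c if they are delivered together at
    `delivery_time`, assuming an order's delay is delivery time minus release time."""
    return c.size * (delivery_time - c.release)


def block_delay(chains_in_interval: List[SubChain], delivery_time: float) -> float:
    """Total delay of all the orders of the (sub)chains scheduled within a given
    interval of a block if they are delivered together at `delivery_time`, obtained
    by summing the per-chain delays as described above."""
    return sum(chain_delay(c, delivery_time) for c in chains_in_interval)
```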
Now, point is the terminal splitting point in block B if either or there exists a splitting point such that , where is the completion time of the order scheduled first after time (note that a block does not necessarily possess a terminal splitting point).
Note that the left-hand side of inequality (10) is a natural adaptation of the concept of the relative total delay for the extended model, applied to points and . Below we state a version of Lemma 2 for the extended model.
Lemma 9. Suppose block B possesses the terminal splitting point . Then there is a batch in solution that delivers (some) orders from block B at time .
Proof. We can use the dominance argument similar to that from Lemma 2. □
Now a segment is defined as a sequence of blocks ending with a block possessing the terminal splitting point, and Algorithm 0 is straightforwardly extended to the two-stage model. Algorithms 1 and 2 can similarly be extended based on the modified notions of the total delay and the relative total delay. Note that these algorithms now consist of two stages, the scheduling and the delivering stages.
6. Discussion and Concluding Remarks
A new algorithmic framework that allows one to build fast, efficient algorithms for the considered basic supply chain scheduling problems was proposed. The conditions under which the proposed algorithms for the basic supply chain scheduling problem provide an optimal solution were established. These algorithms serve as a basis for the construction of a very fast optimal algorithm for homogeneously released orders. Using similar construction ideas, it might be possible to extend this algorithm to the setting with non-homogeneously released orders.
As mentioned earlier in the introduction, although the problem was stated as a single-criterion optimization problem, it is essentially a bi-criteria problem. Indeed, note that, given a feasible solution S, its objective value is the sum of two different magnitudes: the total delivery cost of all batches (which is D multiplied by the number of batches in that solution) and the total delay of the orders in all the delivered batches. Minimizing the first magnitude will maximize the second one, and vice versa. Hence the two objectives, minimizing the total number of batches (and hence the total batch delivery cost) and minimizing the total order delay, are contradictory, which is precisely what makes the problem non-trivial. In this paper, the problem was treated as a single-criterion problem, but alternatively, different extensions of this problem with the same objective function can be studied as bi-criteria problems. Such a study may reveal new useful properties of these problems and may permit widening the class of supply chain scheduling problems by allowing other objective criteria. These multi-criteria optimization problems can be approached using traditional Pareto-optimality or, for example, a recently proposed threshold-optimality measure [25].
In the introduction, the dynamic programming algorithms from [1,3,7] proposed earlier for the related supply chain scheduling problems were mentioned. It is convenient to use dynamic programming for such problems because the two contradictory criteria hidden in the objective function can be evaluated together, as the sum of the two quantities (the total order delay and the total delivery cost), in the corresponding recurrence relations. However, these dynamic programming algorithms enumerate a considerable number of alternatives, which makes them not very time efficient. Unlike the dynamic programming algorithms, the algorithms proposed here are direct combinatorial algorithms based on a theoretical study of the structural properties of the problem. This makes them more time efficient but, at the same time, more difficult to construct, because of the two contradictory criteria in the objective function, which cannot be dealt with independently of each other. Nevertheless, we have succeeded in constructing an optimal algorithm for the case of homogeneously released orders. Moreover, in the previous section, it was shown how the proposed theoretical framework can be adapted to a two-stage supply chain scheduling problem, an immediate extension of the model studied here with two types of resources, a traditional machine and a batch machine (e.g., a vehicle). Alternatively, models with two batch machines, the second of which is a transportation vehicle, can be considered. These models can further be extended by allowing parallel batch machines. The adaptation of our framework to these models would be somewhat similar. In particular, the basic model studied here can gradually be extended to more complex real-life supply chain scheduling problems, and the proposed algorithms can be expanded accordingly. The efficient solution of the studied basic models might be essential for the development of fast solution methods for more complex real-life supply chain scheduling environments.