Article

Scheduling of Jobs with Multiple Weights on a Single Machine for Minimizing the Total Weighted Number of Tardy Jobs

1 School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China
2 Department of Logistics and Maritime Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(4), 1013; https://doi.org/10.3390/math11041013
Submission received: 7 January 2023 / Revised: 9 February 2023 / Accepted: 10 February 2023 / Published: 16 February 2023
(This article belongs to the Special Issue Advanced Optimization Methods and Applications)

Abstract:
We consider the scheduling of jobs with multiple weights on a single machine for minimizing the total weighted number of tardy jobs. In this setting, each job has m weights (or equivalently, the jobs have m weighting vectors), and thus we have m criteria, each of which is to minimize the total weighted number of tardy jobs under a corresponding weighting vector of the jobs. For this scheduling model, the feasibility problem aims to find a feasible schedule such that each criterion is upper bounded by its threshold value, and the Pareto scheduling problem aims to find all the Pareto-optimal points and for each one a corresponding Pareto-optimal schedule. Although the two problems have not been studied before, it is implied in the literature that both of them are unary NP-hard when m is an arbitrary number. We show in this paper that, in the case where m is a fixed number, the two problems are solvable in pseudo-polynomial time, the feasibility problem admits a dual-fully polynomial-time approximation scheme, and the Pareto-scheduling problem admits a fully polynomial-time approximation scheme.
MSC:
90B35; 90C27

1. Introduction

The number of tardy jobs is a fundamental due-date-related criterion in scheduling problems. It is critical for examining the efficiency of production and evaluation management and has a wide range of applications in practice. The weighted number of tardy jobs is a familiar criterion in scheduling research. The objective function ∑U_j was first studied by Moore [1]; ∑w_jU_j was then studied in many works, among which Lawler and Moore [2], Karp [3], and Sahni [4] are representative. Subsequently, more and more studies on these two objectives were presented, especially on primary–secondary scheduling problems and multi-agent scheduling problems, which can better meet the needs of customers and managers. In previous multi-criterion scheduling problems, each of the jobs to be scheduled generally had a single weight. However, there are cases in which the jobs have multiple weights.
Take information transmission as an example. In computer systems, information transmission is an important and common business. In practice, an information service company transmits n pieces of information (corresponding to n jobs J_1, J_2, …, J_n) to m customers (where m ≥ 2) every day. Each piece of information J_j has its own transmission time (corresponding to the processing time p_j of J_j), which may differ from day to day. Based on the usual cooperation experience, all m customers have an expected time for receiving each piece of information J_j (corresponding to the due date d_j of J_j). The transmission time p_j of each piece of information J_j on each workday is known in advance. In addition, the importance of each piece of information differs across customers, so each piece of information J_j has m weights (w_j^(1), w_j^(2), …, w_j^(m)), called the weight vector of J_j, where w_j^(i) is the weight of J_j with respect to the i-th customer and can also be understood as the degree of dissatisfaction of the i-th customer when J_j is tardy. The information service company certainly hopes that all the customers receive all the information on time, but sometimes this is impractical. Therefore, the decision maker needs to find a reasonable transmission scheme to minimize all the customers' degrees of dissatisfaction or, equivalently, to minimize the sum of the weight vectors of the tardy jobs, which is called the objective vector. Considering the conflicts of interest and inconsistencies between different customers, finding all the non-dominated solutions is an ideal choice for the problem. This motivates the study in this paper.
To understand the problem better, let us consider a concrete instance with n = 8 and m = 3. The data are given in Table 1.
In this instance, there are many ways to schedule the jobs, and the objective vectors may differ between schedules. In Figure 1, Figure 2 and Figure 3, we display three different schedules in which the early jobs are scheduled in nondecreasing order of due dates and the tardy jobs are scheduled after all the early jobs.
In schedule σ_1, jobs J_2 and J_3 are tardy. Then, the objective vector of σ_1 is
w_2 + w_3 = (w_2^(1), w_2^(2), w_2^(3)) + (w_3^(1), w_3^(2), w_3^(3)) = (4, 2, 3) + (1, 1, 2) = (5, 3, 5).
Similarly, the objective vectors of σ_2 and σ_3 are (7, 4, 4) and (7, 5, 3), respectively. As these three objective vectors show, the three schedules affect the customers differently.
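The componentwise summation behind an objective vector can be sketched as follows; the helper name `objective_vector` is ours, and the only data used are the weight vectors w_2 = (4, 2, 3) and w_3 = (1, 1, 2) of the tardy jobs of σ_1 quoted above.

```python
# Sketch: the objective vector of a schedule is the componentwise sum
# of the weight vectors of its tardy jobs.

def objective_vector(weight_vectors, tardy_jobs):
    """Sum the weight vectors of the tardy jobs componentwise."""
    m = len(next(iter(weight_vectors.values())))
    total = [0] * m
    for j in tardy_jobs:
        for i in range(m):
            total[i] += weight_vectors[j][i]
    return tuple(total)

weights = {2: (4, 2, 3), 3: (1, 1, 2)}  # w_2 and w_3 from the text
print(objective_vector(weights, [2, 3]))  # (5, 3, 5)
```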
Problem formulation: Suppose that we have n jobs J_1, J_2, …, J_n to be processed on a single machine. Each job J_j has a processing time p_j > 0, a due date d_j ≥ 0, and m weights w_j^(1), w_j^(2), …, w_j^(m), where m ≥ 2. Given i ∈ {1, 2, …, m}, we call w^(i) = (w_1^(i), w_2^(i), …, w_n^(i)) the i-th weighting vector of the jobs. Then, each job J_j has weight w_j^(i) under the i-th weighting vector. Given j ∈ {1, 2, …, n}, we call w_j = (w_j^(1), w_j^(2), …, w_j^(m)) the weight vector of job J_j. All the jobs are released at time 0 and preemption is not allowed. Then, a schedule σ can be denoted by a permutation σ = (J_σ(1), J_σ(2), …, J_σ(n)), where J_σ(x) is the x-th job in schedule σ. The following notation and terminology will be used in this article.
  • J = {J_1, J_2, …, J_n} is the set of all jobs and J_x = {J_1, J_2, …, J_x} is the set of the first x jobs of J. Note that J_0 = ∅ and J_n = J.
  • P_x = p_1 + p_2 + ⋯ + p_x is the total processing time of the first x jobs of J, x = 1, 2, …, n. P_n is also denoted by P.
  • W^(i) = w_1^(i) + w_2^(i) + ⋯ + w_n^(i) is the total i-th weight, i = 1, 2, …, m.
  • C_σ(x)(σ) = p_σ(1) + p_σ(2) + ⋯ + p_σ(x) is the completion time of job J_σ(x) in schedule σ, 1 ≤ x ≤ n.
  • J_j is called a tardy job in schedule σ if C_j(σ) > d_j and an early job if C_j(σ) ≤ d_j.
  • The tardy indicator number of job J_j in schedule σ is U_j(σ) = 0 if J_j is early in σ and U_j(σ) = 1 if J_j is tardy in σ.
  • ∑_{j=1}^n U_j(σ) is called the number of tardy jobs in schedule σ.
  • U_w^(i)(σ) = ∑_{j=1}^n w_j^(i) U_j(σ) is called the weighted number of tardy jobs with respect to the i-th weighting vector in schedule σ, 1 ≤ i ≤ m.
  • U_w(σ) = (U_w^(1)(σ), U_w^(2)(σ), …, U_w^(m)(σ)) is called the objective vector of schedule σ.
We summarize the above notation and terminology in the following Table 2:
For convenience, several related concepts are defined below.
Definition 1. 
Given a vector Q = (Q_1, Q_2, …, Q_m), the feasibility problem, denoted 1||U_w ≤ Q, asks for finding a feasible schedule σ such that U_w(σ) ≤ Q. We call Q the threshold vector and each Q_i, i = 1, 2, …, m, the threshold value of the i-th criterion U_w^(i).
Definition 2. 
A schedule σ of J is called Pareto-optimal if there is no schedule π of J such that U_w(π) ≤ U_w(σ) and U_w(π) ≠ U_w(σ). In this case, we also say that U_w(σ) is the Pareto-optimal point corresponding to the Pareto-optimal schedule σ. We use P(J) to denote the set of all Pareto-optimal points and call it the Pareto frontier.
Definition 3. 
The Pareto-scheduling problem for minimizing U_w = (U_w^(1), U_w^(2), …, U_w^(m)), denoted 1||P(U_w), i.e., 1||P(U_w^(1), U_w^(2), …, U_w^(m)), asks for determining the Pareto frontier P(J) and, for each Pareto-optimal point in P(J), a corresponding Pareto-optimal schedule. We use 1||P̃(U_w) to denote the weakened version, which only asks for finding a set of objective vectors that includes all the Pareto-optimal points of 1||P(U_w).
Definition 4. 
A dual-fully polynomial-time approximation scheme (DFPTAS) for the feasibility problem 1||U_w ≤ Q on instance J is a family of algorithms {A_ε : ε > 0} such that, for each ε > 0, the algorithm A_ε runs in time polynomial in 1/ε and the size of J, and moreover, A_ε either finds a schedule σ of J such that U_w(σ) ≤ (1 + ε)Q or correctly reports that the problem 1||U_w ≤ Q is infeasible.
For the Pareto-scheduling problem 1||P(U_w) on instance J, a family of algorithms {A_ε : ε > 0} is called a fully polynomial-time approximation scheme (FPTAS) if, for each positive number ε, the algorithm A_ε runs in time polynomial in 1/ε and the size of J, and moreover, A_ε generates a set P_ε of objective vectors such that, for each point (m-vector) U ∈ P(J), there is a vector v ∈ P_ε such that v ≤ (1 + ε)U.
In this paper, we study the feasibility problem 1||U_w ≤ Q, the Pareto-scheduling problem 1||P(U_w), and the weakened Pareto-scheduling problem 1||P̃(U_w).
Related known results: The weighted number of tardy jobs is a familiar criterion in scheduling research. Early on, Moore [1] presented an O(n log n)-time algorithm for the problem 1||∑U_j. For the problem 1|d_j = d|∑w_jU_j, Karp [3] showed its binary NP-hardness. For the problem 1||∑w_jU_j, Lawler and Moore [2] presented an O(nP)-time algorithm, and Sahni [4] presented an O(nW)-time algorithm, where P = ∑_{j=1}^n p_j and W = ∑_{j=1}^n w_j.
Now, we consider the primary–secondary scheduling problem 1||Lex(∑w_j^(1)U_j, ∑w_j^(2)U_j), which first minimizes the primary criterion ∑w_j^(1)U_j and then minimizes the secondary criterion ∑w_j^(2)U_j. This problem is binary NP-hard since the problem 1||∑w_jU_j is binary NP-hard. Lee and Vairaktarakis [5] showed that the problem 1||Lex(∑U_j, ∑w_j^(2)U_j) is binary NP-hard and is solvable in O(n²P) time. According to Lee and Vairaktarakis [5], the exact complexity (unary NP-hard or solvable in pseudo-polynomial time) of the problem 1||Lex(∑w_j^(1)U_j, ∑w_j^(2)U_j) is still open.
Our research is also related to multi-agent scheduling. The notation and terminology for multi-agent scheduling can be found in Agnetis et al. [6]. In particular, "CO" indicates that we consider competing agents, and "ND" indicates that we consider non-disjoint agents. Cheng et al. [7] studied the feasibility problem 1|CO|∑w_j^(i)U_j^(i) ≤ Q_i, 1 ≤ i ≤ m. They showed that the problem is unary NP-complete if m is an arbitrary number and is solvable in pseudo-polynomial time if m is a fixed number. This further implies that the Pareto-scheduling problem 1|CO|P(∑w_j^(1)U_j^(1), ∑w_j^(2)U_j^(2), …, ∑w_j^(m)U_j^(m)) is unary NP-hard. Note that the CO-agent model is a special version of the ND-agent model under the condition d_j^(k) = d_j (which means that the due dates of the jobs are independent of the agents). Then, the problem 1|ND, d_j^(k) = d_j|P(∑w_j^(1)U_j^(1), ∑w_j^(2)U_j^(2), …, ∑w_j^(m)U_j^(m)) is also unary NP-hard. For the constraint problem 1|ND, d_j^(k) = d_j|∑w_j^(1)U_j^(1) : ∑w_j^(i)U_j^(i) ≤ Q_i, 2 ≤ i ≤ m, Agnetis et al. [6] presented a pseudo-polynomial-time algorithm when m is a fixed number. The exact complexity of the constraint problem 1|ND|∑w_j^(1)U_j^(1) : ∑w_j^(2)U_j^(2) ≤ Q was posed as open in Agnetis et al. [6]. When the jobs have identical processing times, research on some double-weight bicriteria scheduling problems on uniform parallel machines can be found in Zou and Yuan [8].
Regarding multi-weight scheduling, Guo et al. [9] studied the problems 1||P(∑w_jU_j, ∑w_jY_j) and 1|d_j = d|P(∑w_jU_j, ∑w_jY_j) and presented pseudo-polynomial-time algorithms and FPTASs for both. They also provided computational complexity results for several special cases of the problem 1||P(∑w_jU_j, ∑w_jY_j). Chen et al. [10] provided a pseudo-polynomial-time algorithm for the preemptive scheduling problem 1|pmtn|#(∑w_j^(1)Y_j, ∑w_j^(2)U_j) and studied some subproblems. Guo et al. [11] provided a pseudo-polynomial-time algorithm and an FPTAS for the problem 1||P(Y_w).
Other recently related studies can be found in Zhang et al. [12], Masadeh et al. [13], Zhang et al. [14], Kim and Kim [15], Valouxis et al. [16], and Wang et al. [17] among many others.
Remark 1. 
Since 0-weights are allowed, a scheduling problem under multiple weights with criteria ∑w_j^(i)U_j, i = 1, 2, …, m, is equivalent to its corresponding problem under ND-agents, under the condition d_j^(k) = d_j, with criteria ∑w_j^(i)U_j^(i), i = 1, 2, …, m. Then, our problem 1||P(U_w^(1), U_w^(2), …, U_w^(m)) is equivalent to the ND-agent Pareto-scheduling problem 1|ND, d_j^(k) = d_j|P(∑w_j^(1)U_j^(1), ∑w_j^(2)U_j^(2), …, ∑w_j^(m)U_j^(m)). From the unary NP-hardness of the subproblem 1|CO|P(∑w_j^(1)U_j^(1), ∑w_j^(2)U_j^(2), …, ∑w_j^(m)U_j^(m)), where m is an arbitrary number, we conclude that our problem 1||P(U_w^(1), U_w^(2), …, U_w^(m)) is unary NP-hard if m is an arbitrary number. Hence, it is reasonable to present algorithms for this problem when m is a fixed number.
Yuan [18] showed that the problem 1|d̄_j|∑U_j is unary NP-hard. This implies that the open problem 1|ND|∑w_j^(1)U_j^(1) : ∑w_j^(2)U_j^(2) ≤ Q posed in Agnetis et al. [6] is unary NP-hard. This also means that our research in this paper cannot be extended to the ND-agent Pareto-scheduling problem 1|ND|P(∑w_j^(1)U_j^(1), ∑w_j^(2)U_j^(2), …, ∑w_j^(m)U_j^(m)), even when m is a fixed number.
We notice that the open problem 1||Lex(∑w_j^(1)U_j, ∑w_j^(2)U_j) posed in Lee and Vairaktarakis [5] is solvable in pseudo-polynomial time, since it can be reduced in pseudo-polynomial time to the constraint problem 1|ND, d_j^(k) = d_j|∑w_j^(2)U_j^(2) : ∑w_j^(1)U_j^(1) ≤ Q, which is solvable in pseudo-polynomial time by Agnetis et al. [6], where Q is the optimal value of the problem 1||∑w_j^(1)U_j.
Finally, we illustrate the differences between our work and the known ones.
In the known two-agent or multi-agent scheduling problems, each job contributes to just one of the objectives. In this paper, however, each job contributes to all of the objectives. Thus, our problems are indeed different from the multi-agent problems. Moreover, the problems in Guo et al. [9] and Chen et al. [10] differ from ours, first in the objective functions and then because the jobs have two weights in those problems but multiple weights in ours. Although the jobs in the problem studied in Guo et al. [11] have multiple weights, the objective function differs from the one studied in this paper.
Our contributions: Let m be a fixed positive integer and assume that all the parameters p j , d j , w j ( i ) , Q i are integer-valued. Our contributions can be listed as follows.
  • For the problem 1||U_w ≤ Q, we present a pseudo-polynomial O(nQ_1Q_2⋯Q_m)-time algorithm and a DFPTAS of running time O(n^(m+1)(1/ε)^m).
  • For the problem 1||P̃(U_w), we present a pseudo-polynomial O(nW^(1)W^(2)⋯W^(m))-time algorithm. By a further discussion, the problem 1||P(U_w) is also solvable in O(n ∏_{i=1}^m W^(i)) time.
  • For the problem 1||P(U_w), we present an FPTAS of running time O((n^(m+1)/ε^m) ∏_{i=1}^m log W^(i)).
Organization of the paper: This paper is organized as follows. In Section 2, we present some preliminaries. In Section 3, we study the feasibility problem 1||U_w ≤ Q. In Section 4, we study the Pareto-scheduling problems 1||P(U_w) and 1||P̃(U_w). In the last section, we conclude the paper and suggest future research topics.

2. Preliminaries

For convenience, we renumber the n jobs of J = { J 1 , J 2 , , J n } in the EDD (earliest due date first) order such that
d_1 ≤ d_2 ≤ ⋯ ≤ d_n.  (1)
We call ( J 1 , J 2 , , J n ) the EDD-schedule of J . Moreover, we renumber the m weighting vectors w ( i ) = ( w 1 ( i ) , w 2 ( i ) , , w n ( i ) ) , i = 1 , 2 , , m , in the non-decreasing order of their total weights such that
W^(1) ≤ W^(2) ≤ ⋯ ≤ W^(m).  (2)
Then, we keep the indexing in (1) and (2) throughout this paper.
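The two preliminary renumberings can be sketched as follows; the job data and weighting vectors below are hypothetical, for illustration only.

```python
# Sketch of the preliminary renumbering: jobs in EDD order (1) and
# weighting vectors in nondecreasing order of total weight (2).

jobs = [  # hypothetical (processing time p_j, due date d_j) pairs
    (3, 7), (2, 4), (4, 9),
]
# EDD order: sort by due date so that d_1 <= d_2 <= ... <= d_n.
edd = sorted(jobs, key=lambda job: job[1])

weighting_vectors = [  # hypothetical: one n-vector of weights per criterion
    (5, 1, 2), (1, 1, 1),
]
# Reorder so that the total weights satisfy W^(1) <= W^(2) <= ... <= W^(m).
by_total = sorted(weighting_vectors, key=sum)

print(edd)       # [(2, 4), (3, 7), (4, 9)]
print(by_total)  # [(1, 1, 1), (5, 1, 2)]
```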
Some other important definitions are used in the rest of our paper.
Definition 5. 
A schedule σ of J is called efficient if σ satisfies the following two conditions: (i) the early jobs in σ are consecutively scheduled from time 0 in increasing order of their job indices, and (ii) the tardy jobs in σ are scheduled after all the early jobs (in an arbitrary order). By a job-shifting argument, the following lemma can be easily verified.
Lemma 1. 
For the problems studied in this paper, it suffices to consider only the efficient schedules of J.
Definition 6. 
For an efficient schedule σ of J, the early-makespan of σ, denoted τ(σ), is defined to be the total processing time of the early jobs in σ. Then, τ(σ) = ∑{p_j : U_j(σ) = 0}. The early-makespan criterion will play an important role in this paper.
Because of the multiple criteria U_w = (U_w^(1), U_w^(2), …, U_w^(m)) studied in this paper, we need some knowledge about operations on vector sets.
Let k ≥ 2 be an integer. We use 0 to denote the all-zero k-vector, i.e., 0 = (0, 0, …, 0).
Definition 7. 
For two k-vectors u = (u_1, u_2, …, u_k) and v = (v_1, v_2, …, v_k), u ≤ v indicates that u_i ≤ v_i for i = 1, 2, …, k, and u < v indicates that u_i < v_i for i = 1, 2, …, k. The notation u ≥ v and u > v can be understood similarly. If u ≤ v, we say that u dominates v or, equivalently, that v is dominated by u.
Definition 8. 
Let V be a finite k-vector set. A vector v ∈ V is non-dominated if it is dominated by no vector of V∖{v}. We use V* to denote the set consisting of all the non-dominated vectors of V and call V* the non-dominated set of V. Moreover, we define V′ = {v′ : v ∈ V}, where v′ is the (k−1)-vector obtained from v by deleting the last component of v, i.e., v′ = (v_1, v_2, …, v_{k−1}) if v = (v_1, v_2, …, v_k).
Definition 9. 
The lexicographical order on V, denoted by ≺, is defined in the following way: for two vectors u = (u_1, u_2, …, u_k) and v = (v_1, v_2, …, v_k) of V, u is a predecessor of v, i.e., u ≺ v, if and only if there is an index i ∈ {1, 2, …, k} such that u_i < v_i and u_x = v_x for x = 1, 2, …, i − 1. The lexicographic sequence of V is the sequence (u_1, u_2, …, u_z) of all the vectors of V such that u_1 ≺ u_2 ≺ ⋯ ≺ u_z, where z = |V|. For convenience, we use (V, ≺) to denote the ordered set and also the lexicographic sequence of V.
Clearly, under the lexicographic order ≺ on V, every two vectors in V are comparable. Then, ≺ is a total order on V. As a consequence, (V, ≺) exists and is unique.
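The lexicographic order can be illustrated directly in Python, since Python compares tuples lexicographically; the vectors below are hypothetical.

```python
# Sketch: the lexicographic sequence (V, ≺) of a vector set.
# Sorting tuples in Python realizes the order ≺ of Definition 9.

V = {(2, 1, 5), (1, 4, 0), (2, 1, 3), (1, 2, 9)}
lex_sequence = sorted(V)  # u_1 ≺ u_2 ≺ ... ≺ u_z
print(lex_sequence)  # [(1, 2, 9), (1, 4, 0), (2, 1, 3), (2, 1, 5)]
```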
Definition 10. 
According to Zou and Yuan [8], a subset Ṽ ⊆ V is called a simplified version of V if V* ⊆ Ṽ and every two consecutive vectors in the lexicographic sequence (Ṽ, ≺) are non-dominated by each other.
Some properties of the non-dominated sets, lexicographic sequences, and simplified versions of vector sets are established in Zou and Yuan [8]. Among these properties, those used in this paper are described in the following lemma.
Lemma 2. 
Let k ≥ 2 be a fixed integer. Let U and V be two finite k-vector sets. Then, we have the following five statements:
(i)
(U ∪ V)* ⊆ U* ∪ V* and (U ∪ V)* = (U* ∪ V*)*.
(ii)
If U ⊆ V, then (U, ≺) is a subsequence of (V, ≺).
(iii)
If (U, ≺) = (u_1, u_2, …, u_z) and u_0 is a k-vector, then (U + u_0, ≺) = (u_1 + u_0, u_2 + u_0, …, u_z + u_0), where U + u_0 = {u + u_0 : u ∈ U}.
(iv)
Suppose that both (U, ≺) and (V, ≺) are given. Then, (U ∪ V, ≺) can be obtained from (U, ≺) and (V, ≺) in O(|U| + |V|) time (by operations similar to those for merging two sorted sequences of numbers).
(v)
Suppose that (V, ≺) is given. Then, a simplified version Ṽ of V, together with (Ṽ, ≺), can be obtained from (V, ≺) in O(|V|) time (by iteratively checking two consecutive vectors and deleting the dominated ones).
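Statements (iv) and (v) can be sketched as follows, assuming vectors are represented as Python tuples. The function names are ours and the vector data are hypothetical; note that in a lexicographic sequence a later vector can never dominate an earlier, distinct one, so the pruning in (v) only needs to test whether the kept predecessor dominates the current vector.

```python
# Sketch of Lemma 2 (iv) and (v): linear-time merge of two
# lexicographic sequences, then pruning by consecutive comparison.
import heapq

def merge_lex(U, V):
    """(U ∪ V, ≺) from (U, ≺) and (V, ≺) in O(|U| + |V|) time."""
    merged = list(heapq.merge(U, V))
    # Remove duplicates arising from U ∩ V.
    return [v for i, v in enumerate(merged) if i == 0 or v != merged[i - 1]]

def dominates(u, v):
    """u dominates v when u <= v componentwise (Definition 7)."""
    return all(a <= b for a, b in zip(u, v))

def simplify(seq):
    """A simplified version per Definition 10: delete a vector whenever
    its kept lexicographic predecessor dominates it (O(|V|) time)."""
    out = []
    for v in seq:
        if out and dominates(out[-1], v):
            continue  # v is dominated by the previously kept vector
        out.append(v)
    return out

U = [(1, 5, 2), (3, 1, 1)]
V = [(1, 5, 4), (2, 2, 2)]
print(simplify(merge_lex(U, V)))  # [(1, 5, 2), (2, 2, 2), (3, 1, 1)]
```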
The following lemma reveals a new property of the lexicographic sequences of simplified versions.
Lemma 3. 
Let V be a k-vector set, where k ≥ 2, and suppose that V is a simplified version of itself. Then, |V′| = |V| and (V′, ≺) can be obtained from (V, ≺) by directly deleting the last component of each vector of V.
Proof. 
Suppose that (V, ≺) = (v_1, v_2, …, v_z), where v_x = (v_{x,1}, v_{x,2}, …, v_{x,k}) for x = 1, 2, …, z. We only need to show that v′_1, v′_2, …, v′_z are z distinct (k−1)-vectors.
If possible, let x and x′ be two indices with 1 ≤ x < x′ ≤ z such that v′_x = v′_{x′}. Since v_x ≺ v_{x+1} ≺ ⋯ ≺ v_{x′}, from the meaning of "≺", we have v′_x = v′_{x+1} = ⋯ = v′_{x′}. Now, (v′_x, v_{x,k}) = v_x ≺ v_{x+1} = (v′_{x+1}, v_{x+1,k}) = (v′_x, v_{x+1,k}). From the meaning of "≺" again, we have v_{x,k} < v_{x+1,k}. This means that v_x dominates v_{x+1} in V, contradicting the assumption that V is a simplified version. Consequently, v′_1, v′_2, …, v′_z are z distinct (k−1)-vectors.
Finally, it is routine to see that V′ = {v′_1, v′_2, …, v′_z} and (V′, ≺) = (v′_1, v′_2, …, v′_z). The lemma follows. □
The following lemma, which was first established by Kung et al. [19], will also be used in our research.
Lemma 4. 
Let V be a finite k-vector set, where k ≥ 2, and let K = |V|. Then, V* can be determined in O(H_k(K)) time, where H_k(K) = K log K if k = 2, 3, and H_k(K) = K(log K)^(k−2) if k ≥ 4.
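For k = 2 the bound K log K can be realized by a single lexicographic sweep, in the spirit of the maxima-finding algorithm of Kung et al. [19]; here "u dominates v" means u ≤ v componentwise, so V* consists of the minimal vectors. The function name and input vectors below are ours.

```python
# Sketch: the non-dominated set V* for k = 2 in O(K log K) time.
# After sorting lexicographically, a vector is non-dominated exactly
# when its second component is smaller than that of every kept vector.

def nondominated_2d(V):
    best = []
    min_second = float("inf")
    for u in sorted(set(V)):      # lexicographic sweep
        if u[1] < min_second:     # no kept vector dominates u
            best.append(u)
            min_second = u[1]
    return best

print(nondominated_2d([(1, 4), (2, 2), (2, 3), (3, 1), (3, 5)]))
# [(1, 4), (2, 2), (3, 1)]
```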

3. The Feasibility Problem

In this section, we study the feasibility problem 1||U_w ≤ Q. In Section 3.1, we present a dynamic programming algorithm for the problem. In Section 3.2, we give a numerical example. In Section 3.3, we present a dual-fully polynomial-time approximation scheme (DFPTAS) for the problem. Recall that U_w = (U_w^(1), U_w^(2), …, U_w^(m)) and Q = (Q_1, Q_2, …, Q_m).
If Q_i ≤ 0 for some i ∈ {1, 2, …, m}, then the problem 1||U_w ≤ Q is either infeasible or all the jobs must be early in a feasible schedule. Then, the EDD-schedule σ = (J_1, J_2, …, J_n) can be used to directly check the feasibility of the problem. Hence, we assume in this section that Q > 0, i.e.,
Q_i > 0 for all i = 1, 2, …, m.

3.1. A Dynamic Programming Algorithm

For each j ∈ {1, 2, …, n} and each nonnegative m-vector v = (v_1, v_2, …, v_m) with v ≤ Q, we consider the feasibility problem 1||U_w ≤ v on the instance J_j = {J_1, J_2, …, J_j}. Let C(j : v) be the minimum early-makespan of a feasible efficient schedule for the problem, i.e., C(j : v) = min_σ τ(σ), where σ runs over all the feasible efficient schedules for the problem 1||U_w ≤ v on J_j. In the case where there is no feasible schedule for the problem, we define C(j : v) = +∞. Thus, the problem 1||U_w ≤ v on J_j is feasible if and only if C(j : v) < +∞.
In an efficient schedule σ of J_j which attains the early-makespan C(j : v), i.e., τ(σ) = C(j : v), the job J_j has the following two possibilities:
J_j may be a tardy job in σ. In this case, let σ′ be the schedule of J_{j−1} obtained from σ by deleting J_j. Then, U_w(σ′) = U_w(σ) − w_j. This means that σ′ is an efficient schedule of J_{j−1} attaining the early-makespan C(j − 1 : v − w_j). Clearly, σ and σ′ have the same early-makespan. Then, we have C(j : v) = C(j − 1 : v − w_j).
J_j may be an early job in σ. Since σ is an efficient schedule, J_j must be the last early job in σ. Let σ′ be the schedule of J_{j−1} obtained from σ by deleting J_j. Then, U_w(σ′) = U_w(σ). This means that σ′ is an efficient schedule of J_{j−1} attaining the early-makespan C(j − 1 : v). Clearly, τ(σ′) = τ(σ) − p_j. Then, we have C(j : v) = C(j − 1 : v) + p_j. It should be noted that J_j can be early only if C(j − 1 : v) + p_j ≤ d_j.
The intuition behind the following Algorithm 1 can be described as follows. We always schedule the early jobs in their index order (EDD order) from time 0 and schedule the tardy jobs after the early jobs, so we mainly consider the processing of the early jobs. Our algorithm considers the jobs J_1, J_2, …, J_n one by one. Suppose that, for some j with 1 ≤ j ≤ n, the first j − 1 jobs have been considered and the last early job completes at time τ. Then, job J_j is considered with two possibilities: either τ + p_j ≤ d_j and J_j is scheduled in the interval [τ, τ + p_j] as an early job, or J_j is taken as a tardy job. The task of our algorithm is to record the effects of these operations and remove the infeasible objective vectors.
According to the above discussion, we obtain the following dynamic programming algorithm, DP1, for solving the problem 1||U_w ≤ Q.
In the above recursion function, the vector v has O(Q_1Q_2⋯Q_m) choices, and the iteration index j has O(n) choices. Moreover, the calculation in each iteration needs constant time since m is a fixed number. Then, we conclude the following result.
Theorem 1. 
Algorithm DP1 solves the feasibility problem 1||U_w ≤ Q in O(nQ_1Q_2⋯Q_m) time.
Algorithm 1: For solving the feasibility problem 1||U_w ≤ Q
Set J = {J_1, J_2, …, J_n} with d_1 ≤ d_2 ≤ ⋯ ≤ d_n, set Q = (Q_1, Q_2, …, Q_m) with Q > 0, and set
C(j : v) = 0 if j = 0 and v is a nonnegative m-vector, and C(j : v) = +∞ if v is not a nonnegative m-vector.
[Algorithm 1 pseudocode box: for j = 1, 2, …, n and each nonnegative m-vector v ≤ Q, set C(j : v) = min{C(j − 1 : v − w_j), C(j − 1 : v) + p_j if C(j − 1 : v) + p_j ≤ d_j}; report that 1||U_w ≤ Q is feasible if and only if C(n : Q) < +∞.]
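The dynamic program can be sketched as runnable code as follows, assuming integer data and jobs already indexed in EDD order; the function name and encoding of the table C are ours, and the recursion follows the two cases discussed above (J_j tardy, or J_j early when C(j − 1 : v) + p_j ≤ d_j).

```python
# A minimal sketch of Algorithm DP1 for the feasibility problem
# 1 || U_w <= Q. The table C maps each nonnegative m-vector v <= Q
# to the minimum early-makespan C(j : v).
from itertools import product

def dp1_feasible(p, d, w, Q):
    """p, d: processing times and due dates in EDD order;
    w: w[j] is the m-weight vector of job j; Q: positive threshold vector."""
    n, m = len(p), len(Q)
    INF = float("inf")
    C = {v: 0 for v in product(*(range(q + 1) for q in Q))}  # C(0 : v) = 0
    for j in range(n):
        new_C = {}
        for v in C:
            # Case 1: J_j tardy, consuming w_j from the budget v.
            u = tuple(v[i] - w[j][i] for i in range(m))
            best = C.get(u, INF)  # INF when u is not a nonnegative vector
            # Case 2: J_j early, completing at C(j-1 : v) + p_j <= d_j.
            if C[v] + p[j] <= d[j]:
                best = min(best, C[v] + p[j])
            new_C[v] = best
        C = new_C
    return C[tuple(Q)] < INF  # feasible iff C(n : Q) < +infinity

# Hypothetical two-job illustration: one job can be early, the other
# tardy job fits the weight budget, so the instance is feasible.
print(dp1_feasible([2, 2], [2, 2], [[1], [1]], (1,)))  # True
```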

3.2. A Numerical Example

Next, we give a numerical example to explain how Algorithm DP1 works.
Instance I_2: The instance has three jobs, as described in Table 3, and the threshold vector is Q = (2, 3).
Then, we run Algorithm DP1 to solve the problem 1||U_w ≤ Q on Instance I_2.
Initial condition:
C(0 : v) = 0 if v is a nonnegative m-vector, and C(0 : v) = +∞ if v is not a nonnegative m-vector.
The values of the recursion function are given in Table 4.
Feasibility decision: Since C(n : Q) = 2 < +∞, the problem 1||U_w ≤ Q on Instance I_2 is feasible, and both of the schedules σ_1 = (J_2, J_1, J_3) and σ_2 = (J_2, J_3, J_1) are feasible.

3.3. A Dual-Fully Polynomial-Time Approximation Scheme

By rounding the multiple weights of the jobs, Algorithm DP1 enables us to present a DFPTAS for the problem 1||U_w ≤ Q.
Let ε be a positive constant. From the job instance J = {J_1, J_2, …, J_n} and the threshold vector Q > 0, we obtain a rounded instance J̃ = {J̃_1, J̃_2, …, J̃_n} by setting, for j = 1, 2, …, n,
p̃_j = p_j and d̃_j = d_j,  (3)
and, for i = 1, 2, …, m,
w̃_j^(i) = ⌈2n/ε⌉ + n + 1 if w_j^(i) > Q_i, and w̃_j^(i) = ⌈2n w_j^(i)/(ε Q_i)⌉ otherwise.  (4)
Moreover, we define a new threshold vector Q̃ = (Q̃_1, Q̃_2, …, Q̃_m) by setting Q̃_i = ⌈2n/ε⌉ + n for i = 1, 2, …, m, i.e.,
Q̃ = (⌈2n/ε⌉ + n, ⌈2n/ε⌉ + n, …, ⌈2n/ε⌉ + n).  (5)
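The rounding can be sketched as follows, assuming the ceiling operations reconstructed above match the intended scheme: weights exceeding their threshold are mapped strictly above the new threshold, and all other weights are scaled down to integers of size O(n/ε). The function name and the sample data are ours.

```python
# Sketch of the rounding in (4) and (5).
import math

def round_instance(w, Q, eps):
    """Return the rounded weights w~ and the rounded threshold Q~."""
    n, m = len(w), len(Q)
    big = math.ceil(2 * n / eps) + n + 1  # exceeds every Q~_i
    w_tilde = [
        [big if w[j][i] > Q[i] else math.ceil(2 * n * w[j][i] / (eps * Q[i]))
         for i in range(m)]
        for j in range(n)
    ]
    Q_tilde = tuple(math.ceil(2 * n / eps) + n for _ in range(m))
    return w_tilde, Q_tilde

# Hypothetical one-job example with m = 2: the second weight exceeds
# its threshold and is mapped to the oversized value.
print(round_instance([[3, 10]], (5, 5), 0.5))  # ([[3, 6]], (5, 5))
```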
For a schedule σ of J , we use σ ˜ to denote the schedule of J ˜ which is obtained from σ by replacing each job J j with the job J ˜ j . From (3), we know that σ is an efficient schedule of J if and only if σ ˜ is an efficient schedule of J ˜ .
The following two lemmas reveal some relations between the three feasibility problems 1||U_w ≤ Q on J, 1||U_w ≤ (1 + ε)Q on J, and 1||U_w ≤ Q̃ on J̃.
Lemma 5. 
If σ̃ is a feasible efficient schedule for the problem 1||U_w ≤ Q̃ on J̃, then σ is a feasible efficient schedule for the problem 1||U_w ≤ (1 + ε)Q on J.
Proof. 
Since σ̃ is a feasible efficient schedule for the problem 1||U_w ≤ Q̃ on J̃, we have U_w(σ̃) ≤ Q̃, i.e.,
U_w^(i)(σ̃) = ∑_{j=1}^n w̃_j^(i) U_j(σ̃) ≤ Q̃_i = ⌈2n/ε⌉ + n, i = 1, 2, …, m,  (6)
where the last equality in (6) follows from (5). Let I = {j ∈ {1, 2, …, n} : U_j(σ̃) = 1}. Then, ∑_{j∈I} w̃_j^(i) = U_w^(i)(σ̃). From (3) and from the relation between σ̃ and σ, we have I = {j ∈ {1, 2, …, n} : U_j(σ) = 1} and ∑_{j∈I} w_j^(i) = U_w^(i)(σ). From (4) and (6), for each j ∈ I and each i ∈ {1, 2, …, m}, we have w_j^(i) ≤ Q_i and w̃_j^(i) = ⌈2n w_j^(i)/(ε Q_i)⌉, and so,
w_j^(i) ≤ (ε Q_i / (2n)) · w̃_j^(i).  (7)
Consequently, from (6) and (7), for each i ∈ {1, 2, …, m}, we have
U_w^(i)(σ) = ∑_{j=1}^n w_j^(i) U_j(σ) = ∑_{j∈I} w_j^(i) ≤ (ε Q_i / (2n)) ∑_{j∈I} w̃_j^(i) ≤ (ε Q_i / (2n)) (⌈2n/ε⌉ + n) ≤ (ε Q_i / (2n)) (2n/ε + 1 + n) ≤ (1 + ε) Q_i.
It follows that σ is a feasible efficient schedule for the problem 1 | | U w ( 1 + ε ) Q on J . The lemma follows. □
Lemma 6. 
If the problem 1||U_w ≤ Q̃ on J̃ is infeasible, then the problem 1||U_w ≤ Q on J is also infeasible.
Proof. 
Suppose to the contrary that the problem 1||U_w ≤ Q on J is feasible, and let σ be a feasible efficient schedule for this problem. As in Lemma 5, by setting I = {j ∈ {1, 2, …, n} : U_j(σ) = 1}, we have ∑_{j∈I} w_j^(i) = U_w^(i)(σ) ≤ Q_i for i ∈ {1, 2, …, m}. Moreover, for j ∈ I and i ∈ {1, 2, …, m}, we have w_j^(i) ≤ Q_i and w̃_j^(i) = ⌈2n w_j^(i)/(ε Q_i)⌉ ≤ 2n w_j^(i)/(ε Q_i) + 1. Consequently, for each i ∈ {1, 2, …, m}, we have
U_w^(i)(σ̃) = ∑_{j=1}^n w̃_j^(i) U_j(σ̃) = ∑_{j∈I} w̃_j^(i) ≤ ∑_{j∈I} (2n w_j^(i)/(ε Q_i) + 1) = (2n/(ε Q_i)) ∑_{j∈I} w_j^(i) + |I| ≤ 2n/ε + |I| ≤ ⌈2n/ε⌉ + n = Q̃_i.
It follows that σ ˜ is a feasible efficient schedule for the problem 1 | | U w Q ˜ on J ˜ . This contradicts the assumption that the problem 1 | | U w Q ˜ on J ˜ is infeasible. The lemma follows. □
Based on Lemmas 5 and 6, we present the following DFPTAS for solving the problem 1 | | U w Q .
Algorithm DP1(ε): For the feasibility problem 1||U_w ≤ Q.
Input: A job instance J = {J_1, J_2, …, J_n} with d_1 ≤ d_2 ≤ ⋯ ≤ d_n, a positive threshold vector Q = (Q_1, Q_2, …, Q_m), and a constant ε > 0.
Step 1: Generate the rounded instance J̃ = {J̃_1, J̃_2, …, J̃_n} by using (3) and (4). Generate the rounded threshold vector Q̃ by using (5).
Step 2: Run Algorithm DP1 for the problem 1||U_w ≤ Q̃ on instance J̃.
Feasibility decision: If DP1 reports that 1||U_w ≤ Q̃ on J̃ is infeasible, then the problem 1||U_w ≤ Q on J is infeasible. If DP1 reports that 1||U_w ≤ Q̃ on J̃ is feasible and presents a feasible schedule σ̃ for this problem, then the problem 1||U_w ≤ (1 + ε)Q on J is feasible and the corresponding schedule σ is a feasible schedule of that problem.
The correctness of Algorithm DP1(ε) is guaranteed by Theorem 1 and Lemmas 5 and 6. Note that Q̃_i = O(n/ε) for each i ∈ {1, 2, …, m}. From Theorem 1, the time complexity of Algorithm DP1(ε) is O(n^(m+1)(1/ε)^m). Consequently, we have the following result.
Theorem 2. 
The feasibility problem 1||U_w ≤ Q admits a dual-fully polynomial-time approximation scheme (DFPTAS) of running time O(n^(m+1)(1/ε)^m) for any given constant ε > 0.

4. The Pareto-Scheduling Problem

In this section, we study the Pareto-scheduling problem 1||P(U_w). In Section 4.1, we present a pseudo-polynomial-time algorithm for the problems 1||P̃(U_w) and 1||P(U_w) and provide the time complexity of the algorithm. In Section 4.2, we present a fully polynomial-time approximation scheme (FPTAS) for the problem 1||P(U_w).

4.1. A Dynamic Programming Algorithm

We now consider the Pareto-scheduling problem 1 | | P(U_w) on the instance J = {J_1, J_2, …, J_n} with d_1 ≤ d_2 ≤ ⋯ ≤ d_n. Let j ∈ {1, 2, …, n} and let σ be an efficient schedule of the jobs of J_j = {J_1, J_2, …, J_j}. Recall that the early-makespan τ(σ) of σ is equal to the total processing time of the early jobs in σ. The (m + 1)-vector
(U_w(σ), τ(σ)) = (U_w^(1)(σ), U_w^(2)(σ), …, U_w^(m)(σ), τ(σ))
is called a state of J_j corresponding to the schedule σ. In this case, we also say that σ is a u-schedule, where u = (U_w(σ), τ(σ)). We use Γ_j to denote the set of all states of J_j, called the state set of J_j. For convenience, we also define Γ_0 = {(0, 0, …, 0)} = {0}. For each state u ∈ Γ_0 ∪ Γ_1 ∪ ⋯ ∪ Γ_n, we use t(u) to denote the last component of u. Hence, if u = (u_1, u_2, …, u_m, τ), then u = (u′, t(u)), where u′ = (u_1, u_2, …, u_m) and t(u) = τ. Moreover, we define
Γ′ = Γ′_n = {u′ : u ∈ Γ_n}.
Then, (Γ′)* = P(J) is the Pareto frontier, i.e., the set of the Pareto-optimal points, of the problem 1 | | P(U_w) on instance J.
From Lemma 1 and from the meanings of Γ j , j = 0 , 1 , , n , we have the following lemma, which provides a method to generate Γ j from Γ j 1 , j = 1 , 2 , , n .
Lemma 7. 
For j = 1, 2, …, n, we have Γ_j = Γ′_j ∪ Γ″_j, where
Γ′_j = {u + (w_j, 0) : u ∈ Γ_{j−1}}
and
Γ″_j = {u + (0, p_j) : u ∈ Γ_{j−1} and t(u) + p_j ≤ d_j}.
Proof. 
Let Γ_j^(1) be the set consisting of the states of Γ_j corresponding to the efficient schedules of J_j in which J_j is tardy. Let Γ_j^(2) be the set consisting of the states of Γ_j corresponding to the efficient schedules of J_j in which J_j is early. Then, we have Γ_j = Γ_j^(1) ∪ Γ_j^(2). The remaining issue is to show that Γ_j^(1) = Γ′_j and Γ_j^(2) = Γ″_j.
Given u ∈ Γ_{j−1}, let σ be a u-schedule of J_{j−1}. Let σ′ be the efficient schedule of J_j obtained from σ by adding J_j as a tardy job. We have U_w(σ′) = U_w(σ) + w_j and τ(σ′) = τ(σ). Then, the state corresponding to σ′ is given by u + (w_j, 0) ∈ Γ_j^(1). Consequently, we have
Γ′_j = {u + (w_j, 0) : u ∈ Γ_{j−1}} ⊆ Γ_j^(1). (8)
Given v ∈ Γ_j^(1), let σ′ be a v-schedule of J_j in which J_j is tardy. Let σ be the efficient schedule of J_{j−1} obtained from σ′ by deleting the tardy job J_j. Again, we have U_w(σ′) = U_w(σ) + w_j and τ(σ′) = τ(σ). Then, the state corresponding to σ is given by v − (w_j, 0) ∈ Γ_{j−1}. Consequently, we have
Γ′_j = {u + (w_j, 0) : u ∈ Γ_{j−1}} ⊇ Γ_j^(1). (9)
Given u ∈ Γ_{j−1} with t(u) + p_j ≤ d_j, let σ be a u-schedule of J_{j−1}. Let σ′ be the efficient schedule of J_j obtained from σ by adding J_j as the last early job. The construction of σ′ is valid since τ(σ′) = τ(σ) + p_j = t(u) + p_j ≤ d_j. We have U_w(σ′) = U_w(σ). Then, the state corresponding to σ′ is given by u + (0, p_j) ∈ Γ_j^(2). Consequently, we have
Γ″_j = {u + (0, p_j) : u ∈ Γ_{j−1}, t(u) + p_j ≤ d_j} ⊆ Γ_j^(2). (10)
Given v ∈ Γ_j^(2), let σ′ be a v-schedule of J_j in which J_j is an early job. Then, J_j is the last early job in σ′, implying that t(v) = τ(σ′) ≤ d_j. Let σ be the efficient schedule of J_{j−1} obtained from σ′ by deleting the last early job J_j. Again, we have U_w(σ′) = U_w(σ) and τ(σ) + p_j = τ(σ′) ≤ d_j. Then, the state corresponding to σ is given by v − (0, p_j) ∈ Γ_{j−1}. Consequently, we have
Γ″_j = {u + (0, p_j) : u ∈ Γ_{j−1}, t(u) + p_j ≤ d_j} ⊇ Γ_j^(2). (11)
From (8)–(11), we conclude that Γ_j^(1) = Γ′_j and Γ_j^(2) = Γ″_j. The lemma follows. □
Let T = {0, 1, …, P} and X_i = {0, 1, …, W^(i)} for i = 1, 2, …, m. Then, |X_1 × X_2 × ⋯ × X_m × T| = O(P ∏_{i=1}^m W^(i)), and all the state sets Γ_0, Γ_1, …, Γ_n are included in X_1 × X_2 × ⋯ × X_m × T. A direct application of Lemma 7 leads to a dynamic programming algorithm which generates Γ_0, Γ_1, …, Γ_n, Γ′ in O(n P ∏_{i=1}^m W^(i)) time. Since a simplified version of Γ′ already includes all the Pareto-optimal points, to reduce the time complexity, our algorithm instead generates a sequence of state subsets Γ̃_0, Γ̃_1, …, Γ̃_n, Γ̃′, where Γ̃_j is a simplified version of Γ_j for j = 0, 1, …, n, and Γ̃′ is a simplified version of Γ′. With the help of lexicographic sequences, the time complexity can be reduced to O(n ∏_{i=1}^m W^(i)).
The intuition behind the following Algorithm 2 can be described as follows. We always schedule the early jobs in their index order (EDD order) from time 0 and schedule the tardy jobs after the early jobs, so we mainly consider the processing of the early jobs. Our algorithm considers the jobs J_1, J_2, …, J_n one by one. Suppose that, for some j with 1 ≤ j ≤ n, the first j − 1 jobs have been considered and the last early job completes at time τ. Then, job J_j is handled in one of two ways: either τ + p_j ≤ d_j and J_j is scheduled in the interval [τ, τ + p_j] as an early job, or J_j is taken as a tardy job. The task of our algorithm is to record the effects of these operations and remove the dominated objective vectors.
According to the above discussion, we obtain the following dynamic programming Algorithm DP2 to generate the simplified versions Γ ˜ 0 , Γ ˜ 1 , , Γ ˜ n , Γ ˜ .
Algorithm 2: For generating the simplified versions Γ ˜ 0 , Γ ˜ 1 , , Γ ˜ n , Γ ˜
Set J = {J_1, J_2, …, J_n} with d_1 ≤ d_2 ≤ ⋯ ≤ d_n, set Γ̃_0 := {(0, 0, …, 0)} = {0}, and let (Γ̃_0, ⪯) := (0) be the lexicographic sequence of Γ̃_0.
[The iterative body of Algorithm 2, which generates (Γ̃_j, ⪯) from (Γ̃_{j−1}, ⪯) for j = 1, 2, …, n, is rendered as an image in the source.]
Determine Γ̃′_n and generate (Γ̃′_n, ⪯) from (Γ̃_n, ⪯). Then, generate a simplified version Γ̃′ of Γ̃′_n from (Γ̃′_n, ⪯).
Output the sets Γ̃_0, Γ̃_1, …, Γ̃_n, Γ̃′. The efficient schedules corresponding to the states in Γ̃′ can be generated by backtracking.
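The iteration above can be sketched in code. The sketch below (ours, not the paper's implementation) prunes dominated objective vectors only once at the end by pairwise comparison, instead of maintaining simplified state sets via lexicographic sequences at every iteration, so it illustrates the recursion of Lemma 7 but does not achieve the stated time bound:

```python
def pareto_dp(jobs, m):
    """jobs: list of (p, d, w) with w an m-tuple, sorted in EDD order.
    Returns the Pareto-minimal objective vectors (U_w^(1), ..., U_w^(m))."""
    states = {(0,) * (m + 1)}  # a state is (u_1, ..., u_m, tau)
    for p, d, w in jobs:
        nxt = set()
        for u in states:
            tau = u[-1]
            # Option 1: J_j is tardy -- add its weight vector, makespan unchanged.
            nxt.add(tuple(u[i] + w[i] for i in range(m)) + (tau,))
            # Option 2: J_j is early -- allowed only if it completes by its due date.
            if tau + p <= d:
                nxt.add(u[:m] + (tau + p,))
        states = nxt
    # Project away the early makespan and keep only non-dominated vectors.
    points = {u[:m] for u in states}
    return {v for v in points
            if not any(x != v and all(x[i] <= v[i] for i in range(m))
                       for x in points)}

# Instance I_3 from Table 5: p = (4, 3, 4), d = (3, 5, 7).
I3 = [(4, 3, (5, 2, 3)), (3, 5, (2, 3, 4)), (4, 7, (3, 3, 2))]
print(pareto_dp(I3, 3))  # -> {(5, 2, 3)}
```

On Instance I_3 this returns the single Pareto-optimal point (5, 2, 3), matching the worked example in the next subsection.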
Lemma 8. 
Algorithm DP2 runs in O(n ∏_{i=1}^m W^(i)) time.
Proof. 
Algorithm DP2 performs n iterations of Step 2 in total. We consider the implementation of Step 2 at iteration j, where j ∈ {1, 2, …, n}.
Step 2.1 clearly runs in O(|Γ̃_{j−1}|) time. In Step 2.2, u + u_j ⪯ u′ + u_j in (Γ̃′_j, ⪯) if and only if u ⪯ u′ in (Γ̃_{j−1}, ⪯), and v + v_j ⪯ v′ + v_j in (Γ̃″_j, ⪯) if and only if v ⪯ v′ in (Γ̃_{j−1}, ⪯). Thus, from Lemma 2(ii,iii), Step 2.2 runs in O(|Γ̃_{j−1}|) time. From Lemma 2(iv), Step 2.3 runs in O(|Γ̃′_j| + |Γ̃″_j|) time, which is also O(|Γ̃_{j−1}|) time since |Γ̃′_j| = |Γ̃_{j−1}| and |Γ̃″_j| ≤ |Γ̃_{j−1}|. From Lemma 2(v), Step 2.4 runs in O(|Γ̃′_j ∪ Γ̃″_j|) = O(|Γ̃_{j−1}|) time. Consequently, at iteration j, Step 2 runs in O(|Γ̃_{j−1}|) time. Since Γ̃_{j−1} ⊆ X_1 × X_2 × ⋯ × X_m × T is a simplified version of itself, from Lemma 3, we have |Γ̃_{j−1}| ≤ ∏_{i=1}^m |X_i| = O(∏_{i=1}^m W^(i)) for j = 1, 2, …, n. In fact, for the same reason, we also have |Γ̃_n| = |Γ̃′_n| = O(∏_{i=1}^m W^(i)).
Summing up the running times O(|Γ̃_{j−1}|) for j = 1, 2, …, n, we conclude that the running time of Algorithm DP2 for Step 2 over all n iterations is O(n ∏_{i=1}^m W^(i)).
Finally, from Lemma 3, Step 4 takes O(|Γ̃_n|) = O(∏_{i=1}^m W^(i)) time to generate Γ̃′_n and (Γ̃′_n, ⪯) from (Γ̃_n, ⪯), and from Lemma 2(v), Step 4 takes O(|Γ̃′_n|) = O(∏_{i=1}^m W^(i)) time to generate Γ̃′ from (Γ̃′_n, ⪯); both are dominated by the time complexity of Step 2. Consequently, the time complexity of Algorithm DP2 is O(n ∏_{i=1}^m W^(i)). The lemma follows. □
We show the correctness of Algorithm DP2 in the following lemma.
Lemma 9. 
Let Γ̃_0, Γ̃_1, …, Γ̃_n, Γ̃′ be the sets returned by Algorithm DP2. Then, we have the following three statements.
(i) 
For each j = 0 , 1 , , n , Γ ˜ j is a simplified version of Γ j .
(ii) 
Γ̃′ is a simplified version of Γ′.
(iii) 
|Γ̃′| = O(∏_{i=1}^{m−1} W^(i)).
Proof. 
We prove statement (i) by induction on j. From Algorithm DP2, we have Γ ˜ 0 = { 0 } = Γ 0 . Hence, statement (i) holds for j = 0 .
Inductively, we assume that j ≥ 1 and that statement (i) is correct up to index j − 1, i.e., Γ̃_{j−1} is a simplified version of Γ_{j−1}. Then, Algorithm DP2 generates the two sets Γ̃′_j = {u + u_j : u ∈ Γ̃_{j−1}} and Γ̃″_j = {u + v_j : u ∈ Γ̃_{j−1} and t(u) + p_j ≤ d_j}. From Lemma 7, by setting Γ′_j = {u + u_j : u ∈ Γ_{j−1}} and Γ″_j = {u + v_j : u ∈ Γ_{j−1} and t(u) + p_j ≤ d_j}, we have Γ_j = Γ′_j ∪ Γ″_j. Since Γ̃_{j−1} is a simplified version of Γ_{j−1}, we have Γ*_{j−1} ⊆ Γ̃_{j−1}. Then, the construction of Γ̃′_j and Γ̃″_j implies that (Γ′_j)* ⊆ Γ̃′_j and (Γ″_j)* ⊆ Γ̃″_j. Since Γ̃_j is a simplified version of Γ̃′_j ∪ Γ̃″_j, we further have (Γ̃′_j ∪ Γ̃″_j)* ⊆ Γ̃_j. From Lemma 2(i), we have Γ*_j = ((Γ′_j)* ∪ (Γ″_j)*)* ⊆ (Γ̃′_j ∪ Γ̃″_j)* ⊆ Γ̃_j. Consequently, Γ̃_j is a simplified version of Γ′_j ∪ Γ″_j = Γ_j.
From the above discussion, statement (i) follows from the principle of induction.
We next prove statement (ii). From statement (i), Γ̃_n is a simplified version of Γ_n. Then, we have Γ*_n ⊆ Γ̃_n ⊆ Γ_n and Γ*_n = Γ̃*_n. From the fact that Γ′ = (Γ_n)′, we have (Γ′)* ⊆ (Γ*_n)′ ⊆ (Γ̃_n)′ ⊆ (Γ_n)′ = Γ′, implying that ((Γ̃_n)′)* = (Γ′)*. Since Γ̃′ is a simplified version of Γ̃′_n = (Γ̃_n)′, we have (Γ′)* = (Γ̃′_n)* ⊆ Γ̃′ ⊆ Γ̃′_n ⊆ Γ′. It follows that Γ̃′ is a simplified version of Γ′. Thus, statement (ii) holds.
Statement (iii) follows from Lemma 3 by noting that Γ̃′ is a simplified version of itself and that Γ̃′ ⊆ X_1 × X_2 × ⋯ × X_m. This completes the proof. □
From Lemmas 8 and 9(ii), we have the following result.
Theorem 3. 
The problem 1 | | P̃(U) is solvable in O(n ∏_{i=1}^m W^(i)) time.
Let K = ∏_{i=1}^{m−1} W^(i). From Lemma 9(iii), we have |Γ̃′| = O(K). Note that (Γ̃′)* = (Γ′)* = P(J). From Lemma 4, an additional O(H_m(K)) time is needed to obtain P(J) from Γ̃′, where H_m(K) = K log K if m = 2, 3, and H_m(K) = K (log K)^{m−2} if m ≥ 4. In any case, we have H_m(K) ≤ K (log K)^m. Since W^(1) ≤ W^(2) ≤ ⋯ ≤ W^(m), we have log K = log ∏_{i=1}^{m−1} W^(i) ≤ (m − 1) log W^(m). Thus, using the fact that m is a fixed number, we can easily verify that
(log K)^m = O((log W^(m))^m) = O(W^(m)).
It follows that H_m(K) ≤ K (log K)^m = O(∏_{i=1}^m W^(i)), which is dominated by the time complexity O(n ∏_{i=1}^m W^(i)) used for generating Γ̃′. Thus, we have the following corollary.
Corollary 1. 
The Pareto-scheduling problem 1 | | P(U_w) is solvable in O(n ∏_{i=1}^m W^(i)) time.

4.2. A Numerical Example

Next, we present a numerical example to illustrate how Algorithm DP2 works.
Instance I_3: The instance has three jobs, as described in Table 5.
We now apply Algorithm DP2 to solve the problem 1 | | P(U_w) on instance I_3.
Initialization: Set Γ̃_0 = {(0, 0, …, 0)} = {0} and (Γ̃_0, ⪯) = (0).
Generating Γ̃_j from Γ̃_{j−1}:
When j = 1, u_1 = (5, 2, 3, 0) and v_1 = (0, 0, 0, 4).
Γ̃′_1 = {(5, 2, 3, 0)} and Γ̃″_1 = ∅.
Then, (Γ̃_1, ⪯) = ((5, 2, 3, 0)).
When j = 2, u_2 = (2, 3, 3, 0) and v_2 = (0, 0, 0, 3).
Γ̃′_2 = {(7, 5, 6, 0)} and Γ̃″_2 = {(5, 2, 3, 3)}.
Then, (Γ̃_2, ⪯) = ((5, 2, 3, 3), (7, 5, 6, 0)).
When j = 3, u_3 = (3, 4, 2, 0) and v_3 = (0, 0, 0, 4).
Γ̃′_3 = {(8, 6, 5, 3), (10, 9, 8, 0)} and Γ̃″_3 = {(5, 2, 3, 7), (7, 5, 6, 4)}.
Then, (Γ̃_3, ⪯) = ((5, 2, 3, 7), (7, 5, 6, 4), (8, 6, 5, 3), (10, 9, 8, 0)).
Output: Γ̃′_n = {(5, 2, 3), (7, 5, 6), (8, 6, 5), (10, 9, 8)} and the simplified version Γ̃′ = {(5, 2, 3)}. The efficient schedule corresponding to the state in Γ̃′ is σ = (J_2, J_3, J_1).

4.3. A Fully Polynomial-Time Approximation Scheme

Based on Algorithm DP2, by using the trimming technique, we now present an FPTAS for the Pareto-scheduling problem 1 | | P ( U w ) . Recall that X i = { 0 , 1 , , W ( i ) } , i = 1 , 2 , , m . The intuition of our FPTAS can be described as follows.
Given a constant ε > 0, we partition each interval [0, W^(i)] into O((n/ε) log W^(i)) subintervals, i = 1, 2, …, m. Then, the space [0, W^(1)] × [0, W^(2)] × ⋯ × [0, W^(m)] (corresponding to X_1 × X_2 × ⋯ × X_m) is naturally partitioned into O((n^m/ε^m) ∏_{i=1}^m log W^(i)) small boxes (of dimension m). In our algorithm, we generate a sequence of sets Γ̄_0, Γ̄_1, …, Γ̄_n such that, for each j ∈ {1, 2, …, n}, we have Γ̄_j ⊆ Γ_j, and, for each small box, at most one state u ∈ Γ̄_j has its projection u′ in this box. The sequence Γ̄_0, Γ̄_1, …, Γ̄_n is generated in n iterations. Initially, we set Γ̄_0 = {0}. At iteration j ∈ {1, 2, …, n}, we generate Γ̄_j from Γ̄_{j−1} in the following way: first generate two subsets Γ̄′_j and Γ̄″_j of Γ_j in a similar way to Algorithm DP2, and then obtain Γ̄_j from Γ̄′_j ∪ Γ̄″_j by eliminating every state v for which there is another state u ∈ Γ̄′_j ∪ Γ̄″_j such that u′ and v′ are in the same box and t(u) ≤ t(v). Finally, (Γ̄_n)′ acts as the (1 + ε)-approximation of the Pareto frontier P(J). Our algorithm borrows the approach used in Li and Yuan [20].
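The box-indexing and elimination operations described above can be sketched as follows (the helper names are ours, not the paper's; `box_index` computes the subinterval index r with (1+ε)^{(r−1)/n} ≤ x < (1+ε)^{r/n}):

```python
import math

def box_index(x, eps, n):
    """Index r of the subinterval I_r containing x:
    I_0 = [0, 1), I_r = [(1+eps)^((r-1)/n), (1+eps)^(r/n)) for r >= 1."""
    if x < 1:
        return 0
    return math.floor(n * math.log(x, 1 + eps)) + 1

def trim(states, eps, n, m):
    """Keep, for each box of projections u', only one state with minimum
    last component t(u) (the early makespan), as in the elimination step."""
    best = {}
    for u in states:
        key = tuple(box_index(u[i], eps, n) for i in range(m))
        if key not in best or u[-1] < best[key][-1]:
            best[key] = u
    return set(best.values())
```

For example, with ε = 1 and n = 4, the states (5, 2, 3, 3) and (5, 2, 3, 7) share the same box key, so `trim` keeps only the one with the smaller early makespan.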
Algorithm DP2( ε ): An FPTAS for the problem 1 | | P ( U w ) .
Input: A job instance J = {J_1, J_2, …, J_n} with d_1 ≤ d_2 ≤ ⋯ ≤ d_n and a constant ε > 0.
Step 0: For i = 1, 2, …, m, partition the interval [0, W^(i)] into ξ_i + 1 subintervals I_0^(i), I_1^(i), …, I_{ξ_i}^(i) such that I_0^(i) = [0, 1), I_r^(i) = [(1 + ε)^{(r−1)/n}, (1 + ε)^{r/n}) for 1 ≤ r ≤ ξ_i − 1, and I_{ξ_i}^(i) = [(1 + ε)^{(ξ_i−1)/n}, W^(i)], where ξ_i = ⌈n log_{1+ε} W^(i)⌉ = O((n/ε) log W^(i)). This results in ∏_{i=1}^m (ξ_i + 1) small m-dimensional boxes
I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m), r_i ∈ {0, 1, …, ξ_i} for i = 1, 2, …, m.
Step 1: Set Γ ¯ 0 : = { 0 } , and set j : = 1 . Go to Step 2.
Step 2: Let u j = ( w j , 0 ) and v j = ( 0 , p j ) . Do the following.
(2.1) Generate the two sets
Γ̄′_j = {u + u_j : u ∈ Γ̄_{j−1}}
and
Γ̄″_j = {u + v_j : u ∈ Γ̄_{j−1} and t(u) + p_j ≤ d_j}.
Then, determine Γ̄′_j ∪ Γ̄″_j.
(2.2) For any two states u and v of Γ̄′_j ∪ Γ̄″_j such that u′ and v′ fall within the same box I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m) with t(u) ≤ t(v), eliminate the state v from Γ̄′_j ∪ Γ̄″_j.
(2.3) Set Γ̄_j to be the set of all the states of Γ̄′_j ∪ Γ̄″_j which are not eliminated in Step 2.2.
Step 3: If j = n , go to Step 4. Otherwise, if j < n , then set j : = j + 1 and go back to Step 2.
Step 4: Set Γ̄′ := (Γ̄_n)′. For each point (u_1, u_2, …, u_m) ∈ Γ̄′, obtain the corresponding schedule by backtracking.
The small boxes I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m) with r_i ∈ {0, 1, …, ξ_i} for i = 1, 2, …, m play an important role. The following property of these boxes can be easily observed from their construction in Step 0 of Algorithm DP2(ε).
Lemma 10. 
Let I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m) be a small box constructed in Step 0 of Algorithm DP2(ε). For every two states u and v with u′ and v′ falling within this small box, we have u′ ≤ (1 + ε)^{1/n} v′.
Given the instance J and the constant ε > 0 , let Γ ˜ 0 , Γ ˜ 1 , , Γ ˜ n be the state sets generated by Algorithm DP2, and let Γ ¯ 0 , Γ ¯ 1 , , Γ ¯ n be the state sets generated by Algorithm DP2( ε ). Then, we have the following lemma.
Lemma 11. 
For each state u ∈ Γ̃_j, there exists a state ū ∈ Γ̄_j such that t(ū) ≤ t(u) and ū′ ≤ (1 + ε)^{j/n} u′, i.e., ū_i ≤ (1 + ε)^{j/n} u_i for i = 1, 2, …, m.
Proof. 
We prove the lemma by induction on j = 1, 2, …, n. Note that at most two states, (w_1^(1), w_1^(2), …, w_1^(m), 0) and (0, 0, …, 0, p_1) (the latter only when p_1 ≤ d_1), are generated in Γ̃_1 and Γ̄_1 in the first iteration, and neither is eliminated in Algorithm DP2(ε). So, we have Γ̃_1 = Γ̄_1. Then, the result holds for j = 1.
Inductively, suppose in the following that j ∈ {2, 3, …, n} and that the lemma holds for all indices up to j − 1. Let u ∈ Γ̃_j. Then, either u ∈ Γ̃′_j or u ∈ Γ̃″_j.
If u ∈ Γ̃′_j, then there is a state x ∈ Γ̃_{j−1} such that u = x + u_j, where u_j = (w_j, 0). By the induction hypothesis, there is a state x̄ ∈ Γ̄_{j−1} such that t(x̄) ≤ t(x) and x̄′ ≤ (1 + ε)^{(j−1)/n} x′. Note that x̄ + u_j ∈ Γ̄′_j. From Algorithm DP2(ε), there is a state ū ∈ Γ̄_j such that t(ū) ≤ t(x̄ + u_j) and such that (x̄ + u_j)′ and ū′ fall within a common small box I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m). From Lemma 10, we have ū′ ≤ (1 + ε)^{1/n} (x̄ + u_j)′. Collecting the above information, we have ū ∈ Γ̄_j, t(ū) ≤ t(x̄ + u_j) ≤ t(x + u_j) = t(u), and ū′ ≤ (1 + ε)^{1/n} (x̄′ + w_j) ≤ (1 + ε)^{1/n} ((1 + ε)^{(j−1)/n} x′ + w_j) ≤ (1 + ε)^{j/n} (x′ + w_j) = (1 + ε)^{j/n} u′, as required.
If u ∈ Γ̃″_j, then there is a state x ∈ Γ̃_{j−1} such that t(x) + p_j ≤ d_j and u = x + v_j, where v_j = (0, p_j). By the induction hypothesis, there is a state x̄ ∈ Γ̄_{j−1} such that t(x̄) ≤ t(x) and x̄′ ≤ (1 + ε)^{(j−1)/n} x′. Since t(x̄) + p_j ≤ t(x) + p_j ≤ d_j, we have x̄ + v_j ∈ Γ̄″_j. From Algorithm DP2(ε), there is a state ū ∈ Γ̄_j such that t(ū) ≤ t(x̄ + v_j) and such that (x̄ + v_j)′ = x̄′ and ū′ fall within a common small box I_{r_1}^(1) × I_{r_2}^(2) × ⋯ × I_{r_m}^(m). From Lemma 10, we have ū′ ≤ (1 + ε)^{1/n} x̄′. Collecting the above information, we have ū ∈ Γ̄_j, t(ū) ≤ t(x̄ + v_j) ≤ t(x + v_j) = t(u), and ū′ ≤ (1 + ε)^{1/n} x̄′ ≤ (1 + ε)^{1/n} · (1 + ε)^{(j−1)/n} x′ = (1 + ε)^{j/n} x′ = (1 + ε)^{j/n} u′, as required.
Thus, the lemma holds for the index j. This completes the proof. □
Theorem 4. 
The Pareto-scheduling problem 1 | | P(U_w) admits a fully polynomial-time approximation scheme (FPTAS) with running time O((n^{m+1}/ε^m) ∏_{i=1}^m log W^(i)) for any given constant ε > 0.
Proof. 
Recall that Γ′ = (Γ_n)′, that Γ̃_n is a simplified version of Γ_n, and that P(J) is the Pareto frontier of the problem 1 | | P(U_w) on instance J. Then, (Γ′)* = P(J). From Lemma 9(ii), Γ̃′, which is a simplified version of Γ̃′_n returned by Algorithm DP2, is also a simplified version of Γ′. This means that
P(J) = (Γ′)* ⊆ Γ̃′ ⊆ Γ̃′_n. (12)
We first show that the point set Γ̄′, which is returned by Algorithm DP2(ε), is a (1 + ε)-approximation of P(J). For each point v ∈ P(J), from (12), we have (v_1, v_2, …, v_m) ∈ Γ̃′_n. Thus, there is a non-negative number t such that (v, t) ∈ Γ̃_n. From Lemma 11, there is a state u ∈ Γ̄_n such that u′ ≤ (1 + ε)^{n/n} v = (1 + ε) v. Since Γ̄′ = (Γ̄_n)′, we have u′ ∈ Γ̄′. Thus, each point in P(J) has a (1 + ε)-approximation in Γ̄′. Consequently, Γ̄′ is a (1 + ε)-approximation of P(J).
Next, we discuss the time complexity of Algorithm DP2(ε). In Step 0, it takes O(∏_{i=1}^m n log_{1+ε} W^(i)) = O((n^m/ε^m) ∏_{i=1}^m log W^(i)) time to partition the intervals and form the small boxes. Step 1 needs O(1) time for initialization. Note that at the end of each iteration (for performing Step 2), at most one state is left in each small box. Then, there are at most O((n^m/ε^m) ∏_{i=1}^m log W^(i)) distinct states in Γ̄_j for j = 1, 2, …, n. Moreover, out of each state in Γ̄_{j−1}, at most one new state is generated in each of Γ̄′_j and Γ̄″_j. Therefore, the construction of Γ̄′_j, Γ̄″_j, and Γ̄′_j ∪ Γ̄″_j in Steps 2.1 and 2.2 requires at most O((n^m/ε^m) ∏_{i=1}^m log W^(i)) time, which is also the time needed for the elimination procedure in Step 2.3. Step 4 requires at most O((n^m/ε^m) ∏_{i=1}^m log W^(i)) time, and Step 2 is repeated n times. Then, the overall running time of Algorithm DP2(ε) is O((n^{m+1}/ε^m) ∏_{i=1}^m log W^(i)). □
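As a sanity check (ours, not from the paper), the trimmed dynamic program can be run against the exact frontier of Instance I_3 from the numerical example: for any ε > 0, every exact Pareto point v must have a counterpart u ≤ (1 + ε)v componentwise among the surviving projections.

```python
import math

def trimmed_pareto_dp(jobs, m, eps):
    """Sketch of Algorithm DP2(eps): after each iteration, keep at most one
    state per box of weight projections, preferring a smaller early makespan.
    Returns the projections of the surviving states after the last iteration."""
    n = len(jobs)

    def box(x):  # subinterval index, as in Step 0
        return 0 if x < 1 else math.floor(n * math.log(x, 1 + eps)) + 1

    states = {(0,) * (m + 1)}
    for p, d, w in jobs:  # jobs given in EDD order
        nxt = set()
        for u in states:
            nxt.add(tuple(u[i] + w[i] for i in range(m)) + (u[-1],))  # tardy
            if u[-1] + p <= d:                                        # early
                nxt.add(u[:m] + (u[-1] + p,))
        best = {}
        for u in nxt:  # elimination step: one state per box, minimum t(u)
            key = tuple(box(u[i]) for i in range(m))
            if key not in best or u[-1] < best[key][-1]:
                best[key] = u
        states = set(best.values())
    return {u[:m] for u in states}

# Instance I_3: the exact Pareto frontier is {(5, 2, 3)}.
I3 = [(4, 3, (5, 2, 3)), (3, 5, (2, 3, 4)), (4, 7, (3, 3, 2))]
approx = trimmed_pareto_dp(I3, 3, eps=0.5)
assert any(all(u[i] <= 1.5 * v for i, v in enumerate((5, 2, 3))) for u in approx)
```

On this tiny instance no two distinct states share a box, so the trimmed output coincides with the exact state sets; the savings appear only when many states crowd into the same boxes.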

5. Conclusions

In this paper, we study the feasibility problem 1 | | U_w ≤ Q, the Pareto-scheduling problem 1 | | P(U_w), and its weakened version 1 | | P̃(U_w), where m is a fixed number. We present a pseudo-polynomial algorithm for the problem 1 | | U_w ≤ Q, together with a DFPTAS obtained via a rounding technique. We then provide a pseudo-polynomial algorithm for the problems 1 | | P(U_w) and 1 | | P̃(U_w), together with an FPTAS for the problem 1 | | P(U_w).
The scheduling of jobs with multiple weights applies to various production lines and machine environments. In future research, we suggest extending the problems studied in this paper and in Guo et al. [11] to other scheduling models, such as parallel-machine scheduling, serial-batch scheduling, and parallel-batch scheduling.

Author Contributions

Conceptualization, S.G.; methodology, S.G., H.L. and H.Z.; validation, S.G., H.L. and H.Z.; writing—original draft preparation, S.G.; software, S.G., H.L. and H.Z.; visualization, S.G., H.L. and H.Z.; writing—review and editing, S.G., H.L. and H.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Moore, J.M. An n job, one machine sequencing algorithm for minimizing the number of late jobs. Manag. Sci. 1968, 15, 102–109. [Google Scholar] [CrossRef]
  2. Lawler, E.L.; Moore, J.M. A functional equation and its applications to resource allocation and sequencing problems. Manag. Sci. 1969, 16, 77–84. [Google Scholar] [CrossRef]
  3. Karp, R.M. Reducibility among combinatorial problems. In Complexity of Computer Computations; Plenum Press: New York, NY, USA, 1972; pp. 85–103. [Google Scholar]
  4. Sahni, S. Algorithms for scheduling independent tasks. J. Assoc. Comput. Mach. 1976, 23, 116–127. [Google Scholar] [CrossRef]
  5. Lee, C.Y.; Vairaktarakis, G. Complexity of single machine hierarchical scheduling: A survey. In Complexity in Numerical Optimization; Pardalos, P.M., Ed.; World Scientific: River Edge, NJ, USA, 1993; pp. 269–298. [Google Scholar]
  6. Agnetis, A.; Billaut, J.C.; Gawiejnowicz, S.; Pacciarelli, D.; Soukhal, A. Multiagent Scheduling: Models and Algorithms; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  7. Cheng, T.C.E.; Ng, C.T.; Yuan, J.J. Multi-agent scheduling on a single machine to minimize total weighted number of tardy jobs. Theor. Comput. Sci. 2006, 362, 273–281. [Google Scholar] [CrossRef] [Green Version]
  8. Zou, J.; Yuan, J.J. Single-machine scheduling with maintenance activities and rejection. Discret. Optim. 2020, 38, 100609. [Google Scholar] [CrossRef]
  9. Guo, S.E.; Lu, L.F.; Yuan, J.J.; Ng, C.; Cheng, T.C.E. Pareto-scheduling with double-weighted jobs to minimize the weighted number of tardy jobs and total weighted late work. Nav. Res. Logist. 2022, 69, 816–837. [Google Scholar] [CrossRef]
  10. Chen, R.B.; He, R.Y.; Yuan, J.J. Preemptive scheduling to minimize total weighted late work and weighted number of tardy jobs. Comput. Ind. Eng. 2022, 167, 107969. [Google Scholar] [CrossRef]
  11. Guo, S.E.; Geng, Z.C.; Yuan, J.J. Single-machine Pareto-scheduling with multiple weighting vectors for minimizing the total weighted late works. J. Ind. Manag. Optim. 2023, 19, 456–471. [Google Scholar] [CrossRef]
  12. Zhang, Y.; Geng, Z.C.; Yuan, J.J. Two-Agent Pareto-Scheduling of Minimizing Total Weighted Completion Time and Total Weighted Late Work. Mathematics 2020, 8, 2070. [Google Scholar] [CrossRef]
  13. Masadeh, R.; Alsharman, N.; Sharieh, A.; Mahafzah, B.A.; Abdulrahman, A. Task scheduling on cloud computing based on sea lion optimization algorithm. Int. J. Web Inf. Syst. 2021, 17, 99–116. [Google Scholar] [CrossRef]
  14. Zhang, Y.; Yuan, J.J.; Ng, C.T.; Cheng, T.C.E. Pareto-optimization of three-agent scheduling to minimize the total weighted completion time, weighted number of tardy jobs, and total weighted late work. Nav. Res. Logist. 2021, 68, 378–393. [Google Scholar] [CrossRef]
  15. Kim, Y.J.; Kim, B.S. Population-Based Meta-Heuristic Algorithms for Integrated Batch Manufacturing and Delivery Scheduling Problem. Mathematics 2022, 10, 4127. [Google Scholar] [CrossRef]
  16. Valouxis, C.; Gogos, C.; Dimitsas, A.; Potikas, P.; Vittas, A. A Hybrid Exact–Local Search Approach for One-Machine Scheduling with Time-Dependent Capacity. Algorithms 2022, 15, 450. [Google Scholar] [CrossRef]
  17. Wang, Y.C.; Wang, S.H.; Wang, J.B. Resource Allocation Scheduling with Position-Dependent Weights and Generalized Earliness–Tardiness Cost. Mathematics 2023, 11, 222. [Google Scholar] [CrossRef]
  18. Yuan, J.J. Unary NP-hardness of minimizing the number of tardy jobs with deadlines. J. Sched. 2017, 20, 211–218. [Google Scholar] [CrossRef]
  19. Kung, H.T.; Luccio, F.; Preparata, F.P. On finding the maxima of a set of vectors. J. Assoc. Comput. Mach. 1975, 22, 469–476. [Google Scholar] [CrossRef]
  20. Li, S.S.; Yuan, J.J. Single-machine scheduling with multi-agents to minimize total weighted late work. J. Sched. 2020, 23, 497–512. [Google Scholar] [CrossRef]
Figure 1. Schedule σ_1 corresponding to Instance I_1.
Figure 2. Schedule σ_2 corresponding to Instance I_1.
Figure 3. Schedule σ_3 corresponding to Instance I_1.
Table 1. Instance 1.

Job J_j | Processing Time p_j | Due Date d_j | Weight w_j^(1) | Weight w_j^(2) | Weight w_j^(3)
J_1 | 2 | 3 | 3 | 2 | 1
J_2 | 4 | 5 | 4 | 2 | 3
J_3 | 3 | 8 | 1 | 1 | 2
J_4 | 1 | 10 | 2 | 1 | 3
J_5 | 3 | 12 | 2 | 2 | 4
J_6 | 2 | 13 | 3 | 2 | 3
J_7 | 5 | 15 | 2 | 1 | 2
J_8 | 1 | 18 | 4 | 3 | 2
Table 2. The notation and terminology.
NotationTerminology
p j The processing time of job J j , 1 j n
d j The due date of job J j , 1 j n
w ( i ) = ( w 1 ( i ) , w 2 ( i ) , , w n ( i ) ) The i-th weighting vector, 1 i m
w j = ( w j ( 1 ) , w j ( 2 ) , , w j ( m ) ) The weight vector of job J j , 1 j n
J = { J 1 , J 2 , , J n } The set of all jobs
J x = { J 1 , J 2 , , J x } The set of the first x jobs of J
P x = p 1 + p 2 + + p x The total processing time of the first x jobs of J , x = 1 , 2 , , n
C σ ( x ) ( σ ) = p σ ( 1 ) + p σ ( 2 ) + + p σ ( x ) The completion time of job J σ ( x ) in schedule σ , 1 x n
U j ( σ ) The tardy indicator number of job J j in schedule σ
∑_{j=1}^n U_j(σ) The number of tardy jobs in schedule σ
U_w^(i)(σ) = ∑_{j=1}^n w_j^(i) U_j(σ) The weighted number of tardy jobs with respect to w^(i) in schedule σ
U w ( σ ) = ( U w ( 1 ) ( σ ) , U w ( 2 ) ( σ ) , , U w ( m ) ( σ ) ) The objective vector of schedule σ
Table 3. Job Instance I_2 for Algorithm DP1.

Jobs | J_1 | J_2 | J_3
processing times p_j | 3 | 2 | 4
due dates d_j | 2 | 4 | 5
weights w_j^(1) | 1 | 2 | 1
weights w_j^(2) | 2 | 1 | 1
Table 4. The values of the recursion function.

v | j = 1 | j = 2 | j = 3
v = (0, 0) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (0, 1) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (0, 2) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (0, 3) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (1, 0) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (1, 1) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (1, 2) | f(1 : v) = +∞, C(1 : v) = 0 | f(2 : v) = 0, C(2 : v) = 2 | f(3 : v) = +∞, C(3 : v) = +∞
v = (1, 3) | f(1 : v) = +∞, C(1 : v) = 0 | f(2 : v) = 0, C(2 : v) = 2 | f(3 : v) = +∞, C(3 : v) = 2
v = (2, 0) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (2, 1) | f(1 : v) = +∞, C(1 : v) = +∞ | f(2 : v) = +∞, C(2 : v) = +∞ | f(3 : v) = +∞, C(3 : v) = +∞
v = (2, 2) | f(1 : v) = +∞, C(1 : v) = 0 | f(2 : v) = 0, C(2 : v) = 2 | f(3 : v) = +∞, C(3 : v) = +∞
v = (2, 3) | f(1 : v) = +∞, C(1 : v) = 0 | f(2 : v) = 0, C(2 : v) = 2 | f(3 : v) = +∞, C(3 : v) = 2
Table 5. Job Instance I_3 for Algorithm DP2.

Jobs | J_1 | J_2 | J_3
processing times p_j | 4 | 3 | 4
due dates d_j | 3 | 5 | 7
weights w_j^(1) | 5 | 2 | 3
weights w_j^(2) | 2 | 3 | 4
weights w_j^(3) | 3 | 3 | 2

Guo, S.; Lang, H.; Zhang, H. Scheduling of Jobs with Multiple Weights on a Single Machine for Minimizing the Total Weighted Number of Tardy Jobs. Mathematics 2023, 11, 1013. https://doi.org/10.3390/math11041013