Article

Scheduling with Group Technology, Resource Allocation, and Learning Effect Simultaneously

1 School of Science, Shenyang Aerospace University, Shenyang 110136, China
2 College of Mathematics and Computer, Jilin Normal University, Siping 136000, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(7), 1029; https://doi.org/10.3390/math12071029
Submission received: 22 February 2024 / Revised: 25 March 2024 / Accepted: 26 March 2024 / Published: 29 March 2024
(This article belongs to the Special Issue Applied Computing and Artificial Intelligence, 2nd Edition)

Abstract: This paper studies a single-machine group scheduling problem with convex resource allocation and learning effects. The number of jobs in each group may differ, and so may the corresponding common due dates, where the processing time of a job is a convex function of the amount of resource allocated to it. Under common due date assignment, the objective is to minimize the weighted sum of earliness, tardiness, common due dates, resource consumption, and makespan. To solve the problem, we present heuristic, simulated annealing, and branch-and-bound algorithms. Computational experiments indicate that the proposed algorithms are effective.

1. Introduction

In industrial production, as competition among enterprises intensifies, jobs with similar characteristics are grouped together for processing to sustain an efficient production rhythm (Potts et al. [1], Pei et al. [2], Lee et al. [3]). This practice, known as group technology (GT), gives rise to the group scheduling problem in scheduling theory. Production also consumes large amounts of inputs such as energy and raw materials (e.g., steel), which can collectively be referred to as resources. Allocating more resources to a job speeds up its processing and thus shortens its processing time: the greater the input, the shorter the processing time, but the higher the corresponding cost. Striking a balance between resource consumption and cost growth has therefore become one of the most popular research topics. Besides resources, machine/operator learning also shortens processing times, which corresponds to scheduling with learning effects. On the other hand, when a customer submits an order, it carries a delivery date, also known as the due date, and the producer is penalized for finishing early or late relative to it. Hence, each group of jobs has a common due date, and arranging production reasonably so as to reduce such penalties also plays an important role in scheduling. Accordingly, determining the group sequence and the job sequences within groups so as to reduce costs, in a group environment with resource allocation, learning effects, and common due dates, is the main goal of this study. The relevant research on these four aspects (resource allocation, learning effects, common due dates, and groups) is reviewed below.
In reality, the processing times of jobs are alterable according to resource allocation (Shabtay and Steiner [4], Wang and Cheng [5], Kayvanfar et al. [6], Tian [7]) and learning effect (Wu et al. [8], Azzouz et al. [9], Zhao [10], Liu et al. [11]). The learning effect is the efficiency gained through repetitive production. Yin [12] studied single-machine resource allocation scheduling with the learning effect. Under common due window assignment, the researchers showed that some non-regular objectives are polynomially solvable. Zhao [13] studied the problems of no-wait flow shop scheduling with learning effect and convex resource allocation. Wang et al. [14] investigated single-machine scheduling with truncated learning effect and resource allocation. Jiang et al. [15] considered Seru production scheduling problems with resource allocation. For the total processing cost plus scheduling measure minimizations, they proved that the problems are polynomially solvable. Wang and Wang [16] addressed the single-machine scheduling with convex resource allocation and a time-dependent learning effect. For three versions stemming from the total resource consumption cost and the scheduling cost, they proposed some solution algorithms.
In order to achieve higher productivity, GT is often used in production and processing. For example, Shabtay et al. [17] studied scheduling problems under two types of resource allocation functions separately. Zhu et al. [18] studied single-machine scheduling with learning effects, resource allocation, and GT. Li et al. [19] considered a single-machine scheduling problem involving due date assignment in a GT environment. They provided O(n log n)-time algorithms for three due date assignment methods. Lu et al. [20] investigated a single-machine scheduling problem with decreasing time-dependent processing times under the group technology assumption. For makespan minimization, they showed that the problem can be solved in polynomial time. Sun et al. [21] introduced single-machine group scheduling with learning effects and resource allocation. Chen et al. [22] considered a single-machine group scheduling problem with due date assignment and resource allocation. Liang et al. [23] dealt with a single-machine resource allocation scheduling problem with deteriorating jobs and GT. Yan et al. [24] studied a single-machine problem with resource allocation and a deteriorating effect in a group technology environment. Liu and Wang [25] and Wang and Liu [26] considered single-machine GT scheduling with resource allocation and due date assignment.
In view of the importance of due date assignment (Shabtay [27], Shabtay et al. [28], Cheng [29], Li et al. [30], Zhang et al. [31], Lv and Wang [32]), most of the literature above studies special polynomial-time solvable cases of GT problems. In this paper, we integrate the case where processing times follow a convex function of resource allocation with common due date assignment and learning effects, where the number of jobs may differ from group to group. We study the objective of minimizing the weighted sum of earliness, tardiness, common due dates, resource consumption, and makespan. For the general case of the problem, we propose heuristic, simulated annealing, and branch-and-bound algorithms. The rest of this paper is organized as follows. In Section 2, the problem statement is presented. Section 3 offers basic properties. A special case is discussed in Section 4. Section 5 presents the solution algorithms for the problem. In Section 6, computational experiments are reported. Finally, conclusions are given in Section 7.

2. Problem Statement

There are n independent jobs grouped into r groups (i.e., $G_1, G_2, \ldots, G_r$) to be processed on a single machine. Each $G_i$ ($i = 1, 2, \ldots, r$) has an independent setup time $\hat{s}_i$, and the number of jobs allocated to $G_i$ is $\bar{n}_i$ (i.e., $\sum_{i=1}^{r} \bar{n}_i = n$).
If job $J_{ij}$ ($i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$) is scheduled at the $\hat{x}$th position ($\hat{x} = 1, 2, \ldots, \bar{n}_i$) of the group in position $\hat{l}$ ($\hat{l} = 1, 2, \ldots, r$), the actual processing time of job $J_{ij}$ is

$$\hat{p}_{ij} = \left( \frac{\hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2}}{\hat{u}_{ij}} \right)^{\theta}, \qquad (1)$$

where $\hat{u}_{ij}$ is the amount of resource allocated to $J_{ij}$ in $G_i$, $\hat{w}_{ij}$ is a positive parameter representing the workload of $J_{ij}$ in $G_i$, $\alpha_1 \le 0$ denotes the learning index of the job position, $\alpha_2 \le 0$ denotes the group learning index, and $\theta > 0$ is a given constant.
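As a quick illustration, the resource-dependent processing time of Equation (1) can be sketched in a few lines of Python (the parameter values below are illustrative, not taken from the paper):

```python
def actual_processing_time(w, x, l, u, alpha1=-0.1, alpha2=-0.2, theta=2.0):
    """Equation (1): p = (w * x**alpha1 * l**alpha2 / u)**theta, where x is
    the job's position within its group, l is the group's position, and u is
    the amount of resource allocated to the job (u > 0)."""
    return (w * x ** alpha1 * l ** alpha2 / u) ** theta
```

Because $\alpha_1, \alpha_2 \le 0$ and $\theta > 0$, allocating more resource or scheduling a job (or its group) later both shorten the actual processing time.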
We define the earliness and tardiness of job $J_{ij}$ as $\tilde{E}_{ij} = \max\{\tilde{d}_i - \hat{C}_{ij}, 0\}$ and $\tilde{T}_{ij} = \max\{\hat{C}_{ij} - \tilde{d}_i, 0\}$ for $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$, where $\tilde{d}_i$ is the common due date of group $G_i$ and $\hat{C}_{ij}$ is the completion time of $J_{ij}$. The objective is to find the optimal group sequence $\phi^* \in \Phi$, where $\Phi$ is the set of all group sequences; the optimal internal job sequence $\bar{\pi}_i^* \in \bar{\Pi}_i$ within $G_i$ for $i = 1, 2, \ldots, r$, where $\bar{\Pi}_i$ is the set of all internal job sequences; the optimal set of due dates $\Psi = (\tilde{d}_1^*, \tilde{d}_2^*, \ldots, \tilde{d}_r^*)$; and the optimal resource allocation matrix $\hat{u}^* = (\hat{u}_{ij}^*)$ for $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$, which together minimize an objective function that includes penalties due to earliness, tardiness, due date assignment, resource consumption, and makespan, as given below:
$$\check{Z} = \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \left( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \right) + \lambda \hat{C}_{\max} = \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \left( \sigma \tilde{d}_i + \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} + \lambda \hat{p}_{ij} \right), \qquad (2)$$

where $\hat{C}_{\max} = \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \hat{p}_{ij}$; $\sigma > 0$, $\rho > 0$, $\mu > 0$, and $\lambda > 0$ represent the per-unit costs of the due date, earliness penalties, tardiness penalties, and processing time, respectively; and $v_{ij}$ is the per-unit cost of the resource allocated to $J_{ij}$. Using the three-field notation, the problem can be denoted as

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

where $CON$ denotes the common due date assignment. A literature review of related $GT$ problems is given below (see Table 1).
As can be seen in Table 1, most of the studies in the literature on the group sequencing problem are limited to considering only the common due date or only the convex function of the processing time with respect to the resources. This study integrates the case where processing time follows a convex function of resource allocation under common due date allocation as well as learning effects.

3. Basic Properties

There exists an optimal schedule that starts at time zero and contains no machine idle time. From Shabtay et al. [17], we have the following lemmas (see Lemma 1 and Equation (20) in [17]; the only difference is the presence of the learning effect):
Lemma 1.
There exists an optimal due date for $G_i$ that is equal to the completion time of a job, namely $\tilde{d}_i = \hat{C}_{i[l_i]}$ (the job at position $l_i$ of the $i$th group), where

$$l_i = \max \left\{ \left\lceil \frac{\bar{n}_i \times (\mu - \sigma)}{\mu + \rho} \right\rceil, 0 \right\}, \qquad (3)$$

and $\hat{C}_{i[0]} = 0$ by definition.
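Lemma 1 transcribes directly into code (the ceiling is our reading of the formula; ties between adjacent positions are possible, in which case either position is optimal):

```python
import math

def due_date_position(n_bar, mu, sigma, rho):
    """Lemma 1 (sketch): position l_i whose completion time is an optimal
    common due date for a group of n_bar jobs."""
    return max(math.ceil(n_bar * (mu - sigma) / (mu + rho)), 0)
```

With the data of Examples 2 and 3 below ($\mu = 8$, $\sigma = \rho = 5$), groups of one and of two jobs both give $l_i = 1$.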
Lemma 2.
For the problem

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

the objective value of a given sequence is

$$\check{Z} = \eta \sum_{i=1}^{r} \bar{n}_{[i]} \times \left( \sum_{k=1}^{i-1} \hat{P}_{[k]} + \sum_{k=1}^{i} \hat{s}_{[k]} \right) + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \iota_{[i]j} \, \hat{p}_{[i][j]} + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} v_{[i][j]} \hat{u}_{[i][j]}, \qquad (4)$$

where $\hat{P}_k$ is the processing time of group $G_k$, i.e., $\hat{P}_k = \sum_{j=1}^{\bar{n}_k} \hat{p}_{kj}$, $\eta = \min(\sigma, \mu)$, and

$$\iota_{[i]j} = \begin{cases} \sigma \times \bar{n}_{[i]} + \rho \times (j-1) + \lambda, & \text{for } j \le l_i, \\ \mu \times (\bar{n}_{[i]} - j + 1) + \lambda, & \text{for } l_i < j \le \bar{n}_{[i]}. \end{cases}$$
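The positional weights $\iota_{[i]j}$ of Lemma 2 are straightforward to compute; the sketch below is checked against Step 2 of Example 1 ($\bar{n} = 3$, $l_i = 2$, $\sigma = \rho = \lambda = 2$, $\mu = 4$):

```python
def positional_weight(j, n_bar, l_i, sigma, rho, mu, lam):
    """Lemma 2: coefficient of the processing time of the job at internal
    position j in a group of n_bar jobs with due-date position l_i."""
    if j <= l_i:
        return sigma * n_bar + rho * (j - 1) + lam
    return mu * (n_bar - j + 1) + lam

# Example 1 data: n_bar = 3, l_i = 2, sigma = rho = lam = 2, mu = 4
weights = [positional_weight(j, 3, 2, 2, 2, 4, 2) for j in (1, 2, 3)]
# weights == [8, 10, 6], matching iota_{i1}, iota_{i2}, iota_{i3} in Step 2
```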
The objective function in Equation (4) can be rewritten as

$$\check{Z} = \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_{[i]}} \left( \eta \times \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right) \hat{p}_{[i][j]} + \eta \times \sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]} + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_{[i]}} v_{[i][j]} \hat{u}_{[i][j]}, \qquad (5)$$

which follows by interchanging the order of summation in the first term of Equation (4), since $\eta \sum_{i=1}^{r} \bar{n}_{[i]} \sum_{k=1}^{i-1} \hat{P}_{[k]} = \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_{[i]}} \big( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} \big) \hat{p}_{[i][j]}$.
By substituting Equation (1) into Equation (5), we have

$$\check{Z} = \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right) \times \left( \frac{\hat{w}_{[i][j]} \, j^{\alpha_1} i^{\alpha_2}}{\hat{u}_{[i][j]}} \right)^{\theta} + \eta \times \sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]} + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} v_{[i][j]} \hat{u}_{[i][j]}. \qquad (6)$$
Theorem 1.
For

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

the optimal resource allocation is

$$\hat{u}_{[i][j]}^* = \left( \frac{\theta \times \big( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \big)}{v_{[i][j]}} \right)^{\frac{1}{1+\theta}} \times \left( \hat{w}_{[i][j]} \, j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}}, \qquad (7)$$

and the optimal objective value is

$$\check{Z}^* = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i][j]} \times \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \times \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right)^{\frac{1}{1+\theta}} + \eta \sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]}, \qquad (8)$$

where $\delta_{[i][j]} = \big( \hat{w}_{[i][j]} \times v_{[i][j]} \big)^{\frac{\theta}{\theta+1}}$, for $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$.
Proof. 
Differentiating the objective function in Equation (6) with respect to $\hat{u}_{[i][j]}$ for $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$ gives

$$\frac{\partial \check{Z}}{\partial \hat{u}_{[i][j]}} = -\theta \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right) \times \frac{\big( \hat{w}_{[i][j]} \, j^{\alpha_1} i^{\alpha_2} \big)^{\theta}}{\hat{u}_{[i][j]}^{\theta+1}} + v_{[i][j]}.$$

Equating this to zero, we obtain Equation (7). Substituting Equation (7) into Equation (6) yields Equation (8), where $\delta_{[i][j]} = \big( \hat{w}_{[i][j]} \times v_{[i][j]} \big)^{\frac{\theta}{\theta+1}}$, $i = 1, 2, \ldots, r$, $j = 1, 2, \ldots, \bar{n}_i$.    □
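Theorem 1 can be checked numerically: the closed form below should be a stationary (and, by convexity in $\hat{u}$, minimum) point of the per-job cost $A \hat{p} + v \hat{u}$, where $A$ is the job's positional coefficient. This is a sketch with illustrative parameter values:

```python
def optimal_resource(w, v, j, i, iota, A_tail, theta, a1, a2):
    """Equation (7): optimal resource for the job at internal position j of
    the group in position i; A_tail = eta * (number of jobs in later groups)."""
    A = A_tail + iota  # positional coefficient of the job's processing time
    return (theta * A / v) ** (1 / (1 + theta)) * \
           (w * j ** a1 * i ** a2) ** (theta / (1 + theta))

def job_cost(u, w, v, j, i, iota, A_tail, theta, a1, a2):
    """Per-job contribution to Equation (6): A * p_hat + v * u."""
    A = A_tail + iota
    return A * (w * j ** a1 * i ** a2 / u) ** theta + v * u
```

Perturbing the returned $u^*$ in either direction should not decrease the per-job cost.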
Theorem 2.
The term

$$\sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]}$$

is minimized by sequencing the groups in non-increasing order of $\bar{n}_{[k]} / \hat{s}_{[k]}$.
Proof. 
Assume that $\phi^* = \{ \ldots, G_{[i]}, G_{[k]}, \ldots \}$ and $\phi = \{ \ldots, G_{[k]}, G_{[i]}, \ldots \}$, where the two groups occupy consecutive positions following the first $f$ groups. Then,

$$\check{Z}(\phi^*) - \check{Z}(\phi) = \bar{n}_{[i]} \big( \hat{s}_{[f+1]} + \cdots + \hat{s}_{[i-1]} + \hat{s}_{[i]} \big) + \bar{n}_{[k]} \big( \hat{s}_{[f+1]} + \cdots + \hat{s}_{[i-1]} + \hat{s}_{[i]} + \hat{s}_{[k]} \big) - \bar{n}_{[k]} \big( \hat{s}_{[f+1]} + \cdots + \hat{s}_{[i-1]} + \hat{s}_{[k]} \big) - \bar{n}_{[i]} \big( \hat{s}_{[f+1]} + \cdots + \hat{s}_{[i-1]} + \hat{s}_{[k]} + \hat{s}_{[i]} \big) = \bar{n}_{[k]} \hat{s}_{[i]} - \bar{n}_{[i]} \hat{s}_{[k]} = \hat{s}_{[i]} \hat{s}_{[k]} \left( \frac{\bar{n}_{[k]}}{\hat{s}_{[k]}} - \frac{\bar{n}_{[i]}}{\hat{s}_{[i]}} \right).$$

If $\bar{n}_{[i]} / \hat{s}_{[i]} > \bar{n}_{[k]} / \hat{s}_{[k]}$, then $\check{Z}(\phi^*) < \check{Z}(\phi)$.    □
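Theorem 2's ratio rule can be verified by brute force on a small instance (the group data below are hypothetical):

```python
from itertools import permutations

def setup_term(seq):
    """The term of Theorem 2: sum over groups of n_bar * (cumulative setup),
    for a sequence of (n_bar, s_hat) pairs in schedule order."""
    total, cum = 0, 0
    for n_bar, s_hat in seq:
        cum += s_hat
        total += n_bar * cum
    return total

def order_groups(groups):
    """Non-increasing order of n_bar / s_hat (Theorem 2)."""
    return sorted(groups, key=lambda g: g[0] / g[1], reverse=True)
```

Comparing against every permutation confirms that the ratio order attains the minimum.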
Theorem 3.
For a given group sequence of

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

let $\Omega_{ij} = \big( \iota_{ij} \, j^{\theta \alpha_1} \big)^{\frac{1}{1+\theta}}$. The optimal internal job sequence $\bar{\pi}_i^*$ ($i = 1, 2, \ldots, r$) is obtained by matching the smallest $\delta_{ij}$ to the position with the largest $\Omega_{ij}$, the second smallest $\delta_{ij}$ to the position with the second largest $\Omega_{ij}$, and so on.
Proof. 
For a given group sequence, from Equation (8), $\theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}}$, $\eta \sum_{k=i+1}^{r} \bar{n}_{[k]}$, and $\eta \sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]}$ are constants. For group $G_{[i]}$, the relevant part of the objective function is $\sum_{j=1}^{\bar{n}_i} \delta_{[i][j]} \times \big( i^{\alpha_2} \big)^{\frac{\theta}{1+\theta}} \times \big( \big( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \big) \, j^{\theta \alpha_1} \big)^{\frac{1}{1+\theta}}$, and minimizing this term amounts to minimizing $\sum_{j=1}^{\bar{n}_i} \delta_{[i][j]} \times \big( \iota_{[i]j} \, j^{\theta \alpha_1} \big)^{\frac{1}{1+\theta}}$. From Hardy et al. [33], this sum is minimized by matching the smallest $\delta_{ij}$ to the position with the largest $\Omega_{ij}$, the second smallest $\delta_{ij}$ to the position with the second largest $\Omega_{ij}$, and so on.    □
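The matching step in Theorem 3 is the classical rearrangement argument of Hardy, Littlewood, and Pólya: a sum of pairwise products is minimized by pairing the smallest factors of one sequence with the largest of the other. A sketch with a brute-force check on hypothetical values:

```python
from itertools import permutations

def min_pairing(deltas, omegas):
    """Minimal sum of delta * omega over all pairings: sort one sequence
    ascending and the other descending (rearrangement inequality)."""
    return sum(d * o for d, o in zip(sorted(deltas),
                                     sorted(omegas, reverse=True)))
```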

4. Special Case

For

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

the computational complexity remains an open problem, so a special case is discussed, i.e., $\hat{s}_i = \hat{s}$ and $\bar{n}_i = \bar{n}$ for $i = 1, 2, \ldots, r$.
Theorem 4.
For

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

if $\hat{s}_i = \hat{s}$ and $\bar{n}_i = \bar{n}$ for $i = 1, 2, \ldots, r$, the optimal group sequence $\phi^*$ can be found by solving an assignment problem (denoted by $\widetilde{AP}$).
Proof. 
The optimal internal sequence $\bar{\pi}^*$ of each $G_i$ can be determined by Theorem 3, and

$$\eta \sum_{i=1}^{r} \bar{n}_{[i]} \times \sum_{k=1}^{i} \hat{s}_{[k]}$$

is a constant when $\hat{s}_i = \hat{s}$ and $\bar{n}_i = \bar{n}$, $i = 1, 2, \ldots, r$. We define the binary variables

$$X_{i\hat{l}} = \begin{cases} 1, & \text{if } G_i \text{ is assigned to position } \hat{l}, \quad i, \hat{l} = 1, 2, \ldots, r, \\ 0, & \text{otherwise}, \end{cases}$$

and

$$\hat{Q}_{i\hat{l}} = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{j=1}^{\bar{n}} \delta_{i[j]} \times \left( j^{\alpha_1} \hat{l}^{\alpha_2} \right)^{\frac{\theta}{\theta+1}} \times \left( \eta \times (r - \hat{l}) \bar{n} + \iota_{[i]j} \right)^{\frac{1}{\theta+1}}, \qquad (11)$$

where

$$\iota_{[i]j} = \begin{cases} \sigma \times \bar{n} + \rho \times (j-1) + \lambda, & \text{for } j \le l_i, \\ \mu \times (\bar{n} - j + 1) + \lambda, & \text{for } l_i < j \le \bar{n}. \end{cases} \qquad (12)$$

The optimal group sequence is then obtained from the $\widetilde{AP}$:

$$\min \sum_{i=1}^{r} \sum_{\hat{l}=1}^{r} \hat{Q}_{i\hat{l}} X_{i\hat{l}} \qquad (13)$$

$$\text{s.t.} \quad \sum_{i=1}^{r} X_{i\hat{l}} = 1, \quad \hat{l} = 1, 2, \ldots, r, \qquad (14)$$

$$\sum_{\hat{l}=1}^{r} X_{i\hat{l}} = 1, \quad i = 1, 2, \ldots, r, \qquad (15)$$

$$X_{i\hat{l}} \in \{0, 1\}, \quad i = 1, 2, \ldots, r, \ \hat{l} = 1, 2, \ldots, r. \qquad (16)$$
   □
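Since the number of groups r is small relative to n, the assignment problem (13)–(16) can even be solved by plain enumeration for illustration (the Hungarian method would give the O(r³) bound used in Theorem 5). A sketch with hypothetical costs:

```python
from itertools import permutations

def solve_assignment(Q):
    """Assign each group i to one position l (constraints (14)-(16)),
    minimizing the total cost sum of Q[i][position of i]."""
    r = len(Q)
    best = min(permutations(range(r)),
               key=lambda pos: sum(Q[i][pos[i]] for i in range(r)))
    return list(best), sum(Q[i][best[i]] for i in range(r))
```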
Therefore, the special case (i.e., $\hat{s}_i = \hat{s}$ and $\bar{n}_i = \bar{n}$ for $i = 1, 2, \ldots, r$) of

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max}$$

can be solved optimally by the algorithm described below (Algorithm 1).
Algorithm 1 Special Case
  • Input: n, r, $\theta$, $\hat{w}_{ij}$, $\hat{u}_{ij}$, $\hat{s}_i$, $v_{ij}$, $\sigma$, $\rho$, $\mu$, $\lambda$
  • First step: Calculate $l_i$ by using Lemma 1;
  • Second step: Calculate $\iota_{ij}$ by using Lemma 2;
  • Third step: Determine the job sequence in each group by Theorem 3;
  • Fourth step: Calculate the values $\hat{Q}_{i\hat{l}}$ by Equation (11);
  • Fifth step: Determine the optimal group sequence by the $\widetilde{AP}$ (13)–(16);
  • Sixth step: Calculate the optimal resource allocation by Equation (7).
  • Output: optimal resource allocation, optimal sequence
Theorem 5.
For the special case $\hat{s}_i = \hat{s}$ and $\bar{n}_i = \bar{n}$ of

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max},$$

Algorithm 1 finds an optimal solution in $O(n^3)$ time.
Proof. 
The correctness of the algorithm follows directly from Theorems 2–4. Steps 1, 2, and 4 each require $O(n)$ time. In Step 3, sorting the internal job sequences needs $\sum_{i=1}^{r} O(\bar{n}_i \log \bar{n}_i) \subseteq O(n \log n)$ time. The $\widetilde{AP}$ in Step 5 needs $O(r^3) \subseteq O(n^3)$ time. Thus, the total complexity of Algorithm 1 is $O(n^3)$.    □
The following example illustrates the application of Algorithm 1 to

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max}.$$
Example 1.
There are n = 9 jobs and r = 3 groups, with $\bar{n}_1 = \bar{n}_2 = \bar{n}_3 = 3$, $\hat{s}_1 = \hat{s}_2 = \hat{s}_3 = 2$, $\alpha_1 = -0.1$, $\alpha_2 = -0.2$, $\theta = 2$, $\sigma = 2$, $\mu = 4$, $\rho = 2$, $\lambda = 2$. The $\hat{w}_{ij}$, $\hat{u}_{ij}$, $v_{ij}$ of each job are shown in Table 2.
Solution:
Step 1. From Lemma 1, we have $l_1 = l_2 = l_3 = 2$.
Step 2. From Lemma 2, we have $\iota_{11} = \iota_{21} = \iota_{31} = 8$, $\iota_{12} = \iota_{22} = \iota_{32} = 10$, $\iota_{13} = \iota_{23} = \iota_{33} = 6$.
Step 3. According to Theorem 3, the optimal internal job sequences are $\bar{\pi}_1^*: [J_{13} \to J_{12} \to J_{11}]$, $\bar{\pi}_2^*: [J_{22} \to J_{21} \to J_{23}]$, $\bar{\pi}_3^*: [J_{31} \to J_{33} \to J_{32}]$.
Step 4. The values of $\hat{Q}_{i\hat{l}}$ (by Equation (11)) are given in Table 3, $i = 1, 2, 3$, $\hat{l} = 1, 2, 3$.
Step 5. Solving the $\widetilde{AP}$ (13)–(16), it follows that $\phi^*: [G_1 \to G_3 \to G_2]$.
Step 6. From Equation (7), the $\hat{u}_{ij}^*$ can be calculated as follows: $\hat{u}_{11}^* = 11.6406$, $\hat{u}_{12}^* = 10.0096$, $\hat{u}_{13}^* = 7.2768$; $\hat{u}_{21}^* = 8.8501$, $\hat{u}_{22}^* = 9.6262$, $\hat{u}_{23}^* = 7.4218$; $\hat{u}_{31}^* = 12.6616$, $\hat{u}_{32}^* = 11.5276$, $\hat{u}_{33}^* = 8.5981$.

5. General Case

If the optimal internal job sequences $\bar{\pi}_k^*$ ($k = 1, 2, \ldots, r$) are given by Theorem 3, the term

$$\left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i][j]} \times \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \times \left( \eta \times \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right)^{\frac{1}{\theta+1}} \qquad (17)$$

cannot, in general, be minimized by sequencing the groups in non-increasing (LPT) or non-decreasing (SPT) order of $\bar{n}_k$, as the following two examples show.
Example 2.
There are three jobs belonging to two groups, $G_1 = \{J_{11}\}$ and $G_2 = \{J_{21}, J_{22}\}$, where $\alpha_1 = -0.3$, $\alpha_2 = -0.2$, $\theta = 1$, $\sigma = \rho = \lambda = 5$, $\mu = 8$. The $\hat{w}_{ij}$, $v_{ij}$ of each job are shown in Table 4.
According to Lemmas 1 and 2, we have $l_1 = l_2 = 1$ and $\iota_{11} = 5 \times 1 + 5 = 10$, $\iota_{21} = 5 \times 2 + 5 = 15$, $\iota_{22} = 8 \times 1 + 5 = 13$.
Then, the optimal internal job sequences are π ¯ 1 * : [ J 11 ] , π ¯ 2 * : [ J 21 J 22 ] .
According to the SPT order of $\bar{n}_k$ (i.e., $G_1 \to G_2$), the term can be calculated as follows:

$$2 \times \left[ (10 \times 2)^{1/2} \times \big( 1^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (10 + 10)^{1/2} + (10 \times 3)^{1/2} \times \big( 1^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 15^{1/2} + (12 \times 4)^{1/2} \times \big( 2^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 13^{1/2} \right] = 121.5964.$$
According to the LPT order of $\bar{n}_k$ (i.e., $G_2 \to G_1$), the term can be calculated as follows:

$$2 \times \left[ (10 \times 3)^{1/2} \times \big( 1^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (5 + 15)^{1/2} + (12 \times 4)^{1/2} \times \big( 2^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (5 + 13)^{1/2} + (10 \times 2)^{1/2} \times \big( 1^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 10^{1/2} \right] = 128.3624.$$
Therefore, the LPT order of n ¯ k is not an optimal group sequence.
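Examples 2 and 3 are easy to reproduce in a few lines; the helper below evaluates the group-sequencing term of Theorem 1 for a given group order (for $\theta = 1$, $\delta_{ij} = (w_{ij} v_{ij})^{1/2}$):

```python
def sequencing_term(groups, eta, theta, a1, a2):
    """Evaluate the variable part of the optimal objective (Theorem 1) for a
    given group order; groups is a list (in schedule order) of lists of
    (delta, iota) pairs, one pair per job in internal-sequence order."""
    const = theta ** (-theta / (theta + 1)) + theta ** (1 / (theta + 1))
    total = 0.0
    for i, jobs in enumerate(groups, start=1):
        tail = sum(len(g) for g in groups[i:])  # jobs in later groups
        for j, (delta, iota) in enumerate(jobs, start=1):
            total += (delta * (j ** a1 * i ** a2) ** (theta / (theta + 1))
                      * (eta * tail + iota) ** (1 / (theta + 1)))
    return const * total

# Example 2 data: delta = sqrt(w * v), iota from Lemma 2
J11, J21, J22 = (20 ** 0.5, 10), (30 ** 0.5, 15), (48 ** 0.5, 13)
spt = sequencing_term([[J11], [J21, J22]], eta=5, theta=1, a1=-0.3, a2=-0.2)
lpt = sequencing_term([[J21, J22], [J11]], eta=5, theta=1, a1=-0.3, a2=-0.2)
# spt ≈ 121.60 and lpt ≈ 128.36, matching the values computed in Example 2
```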
Example 3.
There are three jobs belonging to two groups, $G_1 = \{J_{11}\}$ and $G_2 = \{J_{21}, J_{22}\}$, where $\alpha_1 = -0.3$, $\alpha_2 = -0.2$, $\theta = 1$, $\sigma = \rho = \lambda = 5$, $\mu = 8$. The $\hat{w}_{ij}$, $v_{ij}$ of each job are shown in Table 5.
According to Lemmas 1 and 2, we have $l_1 = l_2 = 1$ and $\iota_{11} = 5 \times 1 + 5 = 10$, $\iota_{21} = 5 \times 2 + 5 = 15$, $\iota_{22} = 8 \times 1 + 5 = 13$.
Then, the optimal internal job sequences are π ¯ 1 * : [ J 11 ] , π ¯ 2 * : [ J 21 J 22 ] .
According to the SPT order of $\bar{n}_k$ (i.e., $G_1 \to G_2$), the term can be calculated as follows:

$$2 \times \left[ (10 \times 2)^{1/2} \times \big( 1^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (10 + 10)^{1/2} + (10 \times 1)^{1/2} \times \big( 1^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 15^{1/2} + (12 \times 1)^{1/2} \times \big( 2^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 13^{1/2} \right] = 83.8601.$$
According to the LPT order of $\bar{n}_k$ (i.e., $G_2 \to G_1$), the term can be calculated as follows:

$$2 \times \left[ (10 \times 1)^{1/2} \times \big( 1^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (5 + 15)^{1/2} + (12 \times 1)^{1/2} \times \big( 2^{-0.3} \times 1^{-0.2} \big)^{1/2} \times (5 + 13)^{1/2} + (10 \times 2)^{1/2} \times \big( 1^{-0.3} \times 2^{-0.2} \big)^{1/2} \times 10^{1/2} \right] = 81.1657.$$
Therefore, the SPT order of n ¯ k is not an optimal group sequence.

5.1. Heuristic Algorithm

According to Theorem 3, the optimal schedule of jobs within each group can be obtained, and the optimal resource allocation for a given schedule follows from Theorem 1. Since Examples 2 and 3 show that no simple ordering rule for the groups is optimal, the following heuristic algorithm (HA), which tries several group orders, is proposed (Algorithm 2).
Algorithm 2 HA
  • Input: $\bar{n}_i$, r, $\theta$, $\hat{w}_{ij}$, $\hat{u}_{ij}$, $\hat{s}_i$, $v_{ij}$, $\sigma$, $\rho$, $\mu$, $\lambda$
  • First step: Calculate $l_i$ by using Lemma 1;
  • Second step: Calculate $\iota_{ij}$ by using Lemma 2;
  • Third step: Determine the job sequence in each group by Theorem 3;
  • Fourth step: Schedule the groups in non-increasing order of $\bar{n}_i / \hat{s}_i$ and calculate the objective value by Equation (6);
  • Fifth step: Schedule the groups in non-increasing order of $\hat{s}_i$ and calculate the objective value by Equation (6);
  • Sixth step: Schedule the groups in non-increasing order of $\bar{n}_i$ and calculate the objective value by Equation (6);
  • Seventh step: Keep the best of the three schedules and calculate the optimal resource allocation by Equation (7).
  • Output: resource allocation, local optimal sequence

5.2. Simulated Annealing Algorithm

As in Lai et al. [34], simulated annealing (SA) (Algorithm 3) is essentially a greedy algorithm that introduces a stochastic element into the search process. This paper uses this classical algorithm as a comparison for the proposed HA (Algorithm 2). Because simulated annealing accepts, with some probability, a solution that is worse than the current one, it can jump out of a local optimum and continue searching for the global optimum, so SA is a good choice for

$$1 \,\big|\, GT, CON, \hat{p}_{ij} = \big( \hat{w}_{ij} \hat{x}^{\alpha_1} \hat{l}^{\alpha_2} / \hat{u}_{ij} \big)^{\theta} \,\big|\, \sum_{i=1}^{r} \sigma \tilde{d}_i + \sum_{i=1}^{r} \sum_{j=1}^{\bar{n}_i} \big( \rho \tilde{E}_{ij} + \mu \tilde{T}_{ij} + v_{ij} \hat{u}_{ij} \big) + \lambda \hat{C}_{\max}.$$
Algorithm 3 SA
  • Input: $\bar{n}_i$, r, $\theta$, $\hat{w}_{ij}$, $\hat{u}_{ij}$, $\hat{s}_i$, $v_{ij}$, $\sigma$, $\rho$, $\mu$, $\lambda$
  • First step: Set the internal job sequences as in the first three steps of Algorithm 2;
  • Second step: Use the pairwise interchange (PI) neighborhood generation method;
  • Third step: Calculate the objective value $\check{Z}$ of the original schedule;
  • Fourth step: Calculate the objective value $\check{Z}^*$ of the new schedule. If $\check{Z}^*$ is less than $\check{Z}$, it is accepted. If $\check{Z}^*$ is higher, it might still be accepted, with a probability that decreases as the process proceeds. This acceptance probability is determined by the exponential function $P(accept) = \exp(-\alpha \times \Delta\check{Z})$, where $\alpha$ is a parameter and $\Delta\check{Z}$ is the change in the objective function. In addition, $\alpha$ is changed in the kth iteration as $\alpha = k\delta$, where $\delta$ is a constant; after preliminary trials, $\delta = 1$ is used;
  • Fifth step: If $\check{Z}^*$ increases, the new sequence is accepted when $P(accept) > \beta$, where $\beta$ is randomly sampled from the uniform distribution on $[0, 1]$;
  • Sixth step: Stop after $1000r$ iterations, when the schedule is stable.
  • Output: resource allocation, local optimal sequence
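The acceptance rule in the Fourth and Fifth steps can be sketched as follows (with $\alpha = k\delta$ and $\delta = 1$, acceptance of worse solutions becomes increasingly unlikely as the iteration counter k grows):

```python
import math
import random

def accept_move(z_new, z_old, k, delta=1.0, rng=random):
    """SA acceptance test of Algorithm 3 (sketch): always accept an improving
    move; accept a worsening move with probability exp(-alpha * dZ), where
    alpha = k * delta grows with the iteration counter k."""
    d_z = z_new - z_old
    if d_z <= 0:
        return True
    return rng.random() < math.exp(-k * delta * d_z)
```

An improving move is always taken; a strongly worsening move late in the run is rejected almost surely.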

5.3. Lower Bound

We let $\phi = [\phi_{\bar{GS}}, \phi_{\bar{GU}}]$ be a group sequence, where $\phi_{\bar{GS}}$ ($\phi_{\bar{GU}}$) is the scheduled (unscheduled) part, and there are f groups in $\phi_{\bar{GS}}$. We have

$$\check{Z} = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=1}^{f} \sum_{j=1}^{\bar{n}_i} \delta_{ij} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{f} \bar{n}_k + \iota_{ij} \right)^{\frac{1}{\theta+1}} + \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right)^{\frac{1}{\theta+1}} + \eta \sum_{i=1}^{f} \bar{n}_i \sum_{k=1}^{i} \hat{s}_k + \eta \sum_{i=f+1}^{r} \bar{n}_{\langle i \rangle} \times \left( \sum_{k=1}^{f} \hat{s}_k + \sum_{k=f+1}^{i} \hat{s}_{(k)} \right).$$

From the above equation, the terms $\left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \sum_{i=1}^{f} \sum_{j=1}^{\bar{n}_i} \delta_{ij} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{f} \bar{n}_k + \iota_{ij} \right)^{\frac{1}{\theta+1}}$, $\eta \sum_{i=1}^{f} \bar{n}_i \sum_{k=1}^{i} \hat{s}_k$, and $\sum_{k=1}^{f} \hat{s}_k$ are known and can be treated as constants. The term $\eta \sum_{i=f+1}^{r} \bar{n}_{\langle i \rangle} \big( \sum_{k=1}^{f} \hat{s}_k + \sum_{k=f+1}^{i} \hat{s}_{(k)} \big)$ can be minimized by Theorem 2, and $\sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right)^{\frac{1}{\theta+1}} \ge \sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \iota_{[i]j} \right)^{\frac{1}{\theta+1}}$. Then, we have the following lower bound:

$$LB_1 = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=1}^{f} \sum_{j=1}^{\bar{n}_i} \delta_{ij} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{f} \bar{n}_k + \iota_{ij} \right)^{\frac{1}{\theta+1}} + \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \iota_{[i]j} \right)^{\frac{1}{\theta+1}} + \eta \sum_{i=1}^{f} \bar{n}_i \sum_{k=1}^{i} \hat{s}_k + \eta \sum_{i=f+1}^{r} \bar{n}_{\min} \times \left( \sum_{k=1}^{f} \hat{s}_k + \sum_{k=f+1}^{i} \hat{s}_{(k)} \right), \qquad (18)$$

where $\bar{n}_{\min} = \min\{ \bar{n}_{\langle f+1 \rangle}, \bar{n}_{\langle f+2 \rangle}, \ldots, \bar{n}_{\langle r \rangle} \}$ and $\hat{s}_{(f+1)} < \hat{s}_{(f+2)} < \cdots < \hat{s}_{(r)}$. We let $\hat{s}_{\min} = \min\{ \hat{s}_{(f+1)}, \hat{s}_{(f+2)}, \ldots, \hat{s}_{(r)} \}$; then, the second lower bound can be written as

$$LB_2 = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=1}^{f} \sum_{j=1}^{\bar{n}_i} \delta_{ij} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{f} \bar{n}_k + \iota_{ij} \right)^{\frac{1}{\theta+1}} + \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \iota_{[i]j} \right)^{\frac{1}{\theta+1}} + \eta \sum_{i=1}^{f} \bar{n}_i \sum_{k=1}^{i} \hat{s}_k + \eta \sum_{i=f+1}^{r} \bar{n}_{\langle i \rangle} \times \left( \sum_{k=1}^{f} \hat{s}_k + \sum_{k=f+1}^{i} \hat{s}_{\min} \right), \qquad (19)$$

where $\bar{n}_{\langle f+1 \rangle} > \bar{n}_{\langle f+2 \rangle} > \cdots > \bar{n}_{\langle r \rangle}$ by Theorem 2. Similarly,

$$LB_3 = \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=1}^{f} \sum_{j=1}^{\bar{n}_i} \delta_{[i][j]} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \eta \sum_{k=i+1}^{r} \bar{n}_{[k]} + \iota_{[i]j} \right)^{\frac{1}{\theta+1}} + \left( \theta^{-\frac{\theta}{\theta+1}} + \theta^{\frac{1}{\theta+1}} \right) \times \sum_{i=f+1}^{r} \sum_{j=1}^{\bar{n}_i} \delta_{[i](j)} \left( j^{\alpha_1} i^{\alpha_2} \right)^{\frac{\theta}{1+\theta}} \left( \iota_{[i]j} \right)^{\frac{1}{\theta+1}} + \eta \sum_{i=1}^{f} \bar{n}_{[i]} \sum_{k=1}^{i} \hat{s}_{[k]} + \eta \sum_{i=f+1}^{r} \bar{n}_{\langle i \rangle} \times \left( \sum_{k=1}^{f} \hat{s}_{[k]} + \sum_{k=f+1}^{i} \hat{s}_{(k)} \right), \qquad (20)$$

where $\bar{n}_{\langle f+1 \rangle} > \bar{n}_{\langle f+2 \rangle} > \cdots > \bar{n}_{\langle r \rangle}$ and $\hat{s}_{(f+1)} < \hat{s}_{(f+2)} < \cdots < \hat{s}_{(r)}$. Note that $\bar{n}_{\langle i \rangle}$ and $\hat{s}_{(i)}$ ($i = f+1, f+2, \ldots, r$) do not necessarily correspond to the same group. To make the lower bound tighter, we select the largest of the three as the lower bound, that is,

$$LB = \max\{ LB_1, LB_2, LB_3 \}.$$

5.4. Branch-and-Bound Algorithm

Based on the above lower bounds, a branch-and-bound (B&B) algorithm whose central idea is enumeration is proposed. This algorithm can be used as an exact procedure to solve the problem posed in this study, following the steps described below (Algorithm 4).
Algorithm 4  B & B
  • Input: $\bar{n}_i$, r, $\theta$, $\hat{w}_{ij}$, $\hat{u}_{ij}$, $\hat{s}_i$, $v_{ij}$, $\sigma$, $\rho$, $\mu$, $\lambda$
  • First step: Use Algorithm 2 to obtain an initial solution for the problem;
  • Second step: Calculate the lower bound for the node;
  • Third step: If the lower bound for an unfathomed partial schedule of groups is larger than or equal to the objective value of the initial solution, eliminate the node and all the nodes following it in the branch. Calculate the objective value of each completed schedule; if it is less than that of the incumbent solution, make it the new incumbent; otherwise, eliminate it.
  • Fourth step: Continue until all nodes are explored.
  • Output: resource allocation, optimal sequence
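The pruning logic of Algorithm 4 can be sketched generically: a depth-first search over group permutations that fathoms any partial sequence whose lower bound already reaches the incumbent. The cost and bound functions in the test are toy stand-ins, not the paper's Equations (8) and (18)–(20):

```python
from itertools import permutations

def branch_and_bound(r, lower_bound, full_cost, incumbent=float("inf")):
    """Depth-first B&B over group sequences. lower_bound(prefix) must
    underestimate full_cost of every completion of the partial sequence."""
    best_seq, best_cost = None, incumbent
    def dfs(prefix, remaining):
        nonlocal best_seq, best_cost
        if not remaining:
            c = full_cost(prefix)
            if c < best_cost:
                best_seq, best_cost = prefix, c
            return
        for g in sorted(remaining):
            nxt = prefix + (g,)
            if lower_bound(nxt) < best_cost:  # otherwise fathom the node
                dfs(nxt, remaining - {g})
    dfs((), frozenset(range(r)))
    return best_seq, best_cost
```

With nonnegative position costs, the cost of the fixed prefix is itself a valid lower bound, so the search returns the same optimum as full enumeration while visiting fewer nodes.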
Example 4.
We consider an example in which there are 14 jobs belonging to five groups, $G_1 = \{J_{11}, J_{12}, J_{13}\}$, $G_2 = \{J_{21}, J_{22}, J_{23}, J_{24}\}$, $G_3 = \{J_{31}, J_{32}, J_{33}\}$, $G_4 = \{J_{41}, J_{42}\}$, $G_5 = \{J_{51}, J_{52}\}$, where $\theta = 2$, $\sigma = 2$, $\mu = 4$, $\rho = 3$, $\lambda = 2$, $\alpha_1 = -0.1$, $\alpha_2 = -0.2$. The $\hat{w}_{ij}$, $\hat{u}_{ij}$, $v_{ij}$ of each job, the job number $\bar{n}_k$, and the setup time $\hat{s}_k$ of each group are shown in Table 6 and Table 7.
Solution:
From Algorithm 2, we have $\bar{\pi}_1^*: [J_{12} \to J_{13} \to J_{11}]$, $\bar{\pi}_2^*: [J_{23} \to J_{24} \to J_{21} \to J_{22}]$, $\bar{\pi}_3^*: [J_{31} \to J_{32} \to J_{33}]$, $\bar{\pi}_4^*: [J_{42} \to J_{41}]$, $\bar{\pi}_5^*: [J_{51} \to J_{52}]$. According to its Fourth to Sixth steps, the initial group sequences are $G_2 \to G_3 \to G_1 \to G_4 \to G_5$, $G_2 \to G_3 \to G_1 \to G_4 \to G_5$, and $G_2 \to G_3 \to G_4 \to G_1 \to G_5$, with corresponding objective values 1017.1409, 1017.1409, and 1022.2123. We select the smallest, 1017.1409, as the upper bound. According to the B&B algorithm, the search tree in Figure 1 is obtained. The numbers in Figure 1 represent the lower bounds, and $G_0$ is defined as Level 0. From Figure 1, the optimal group sequence is $G_2 \to G_3 \to G_1 \to G_4 \to G_5$. At Level 1, for group $G_1$, from Equations (18)–(20), the lower bounds are as follows.
$$LB_1(G_1) = 1.8899 \times \big[ 13.208 \times (1^{-0.1} \times 1^{-0.2})^{2/3} \times 8^{\frac{1}{3}} + 11.5 \times (2^{-0.1} \times 1^{-0.2})^{2/3} \times 10^{\frac{1}{3}} + 13.572 \times (3^{-0.1} \times 1^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 21.256 \times (1^{-0.1} \times 2^{-0.2})^{2/3} \times 4^{\frac{1}{3}} + 9.665 \times (2^{-0.1} \times 2^{-0.2})^{2/3} \times 10^{\frac{1}{3}} + 13.932 \times (3^{-0.1} \times 2^{-0.2})^{2/3} \times 9^{\frac{1}{3}} + 19.18 \times (4^{-0.1} \times 2^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 18.566 \times (1^{-0.1} \times 3^{-0.2})^{2/3} \times 8^{\frac{1}{3}} + 13.208 \times (2^{-0.1} \times 3^{-0.2})^{2/3} \times 10^{\frac{1}{3}} + 20.966 \times (3^{-0.1} \times 3^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 15.326 \times (1^{-0.1} \times 4^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 17.307 \times (2^{-0.1} \times 4^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 15.326 \times (1^{-0.1} \times 5^{-0.2})^{2/3} \times 6^{\frac{1}{3}} + 12.083 \times (1^{-0.1} \times 5^{-0.2})^{2/3} \times 6^{\frac{1}{3}} \big] + 2 \times 3 \times 6 + 2 \times (6+4) + 2 \times (6+4+4) + 2 \times (6+4+4+5) + 2 \times (6+4+4+5+8) = 977.1444.$$

The bracketed job term is identical in all three bounds; only the setup term changes:

$$LB_2(G_1) = 1.8899 \times \big[ \text{the same job term} \big] + 2 \times 3 \times 6 + 4 \times (6+4) + 2 \times (6+4+4) + 2 \times (6+4+4+4) + 2 \times (6+4+4+4+4) = 993.1444.$$

$$LB_3(G_1) = 1.8899 \times \big[ \text{the same job term} \big] + 2 \times 3 \times 6 + 4 \times (6+4) + 3 \times (6+4+4) + 2 \times (6+4+4+5) + 2 \times (6+4+4+5+8) = 1045.1444.$$

$$LB(G_1) = \max\{ LB_1(G_1), LB_2(G_1), LB_3(G_1) \} = 1045.1444.$$

6. Computational Experiments

6.1. Simulating Configuration

In this section, the performance of the proposed algorithms is evaluated. We use Visual Studio 2022 (v17.1.0) to program the HA (i.e., Algorithm 2), SA (i.e., Algorithm 3), and B&B (i.e., Algorithm 4) algorithms, running on a DELL personal computer equipped with an Intel® Core™ i5-10500U @ 3.10 GHz CPU, 8 GB RAM, and the Windows 10 operating system.

6.2. Simulating Parameters

To compare the efficiency of the proposed methods, the heuristic (HA), SA, and B&B algorithms are taken into account. The simulation parameters used in the C++ implementation are listed in Table 8. The three algorithms are tested with the number of jobs ranging from 100 to 300, and all parameters are drawn randomly from uniform distributions over the ranges in Table 8. To avoid contingency, we evaluate 10 randomly generated instances for each setting and cap the CPU time for each instance at 3600 s.
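The instance generation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the paper only specifies the uniform ranges of Table 8, so the function name, the container layout, and the choice of real-valued (rather than integer) draws are all assumptions of this sketch.

```python
import random

def generate_instance(n_bar=100, r=8, seed=None):
    """Draw one random instance following the uniform ranges of Table 8
    (illustrative sketch; real- vs. integer-valued draws are an assumption)."""
    rng = random.Random(seed)
    theta = rng.uniform(0.1, 0.5)                          # resource exponent
    jobs = {
        "w": [rng.uniform(1, 50) for _ in range(n_bar)],   # job weights w^_ij
        "u": [rng.uniform(1, 50) for _ in range(n_bar)],   # resource amounts u^_ij
        "v": [rng.uniform(1, 50) for _ in range(n_bar)],   # resource cost weights v_ij
    }
    setups = [rng.uniform(1, 50) for _ in range(r)]        # group setup times s^_i
    sigma, rho, mu, lam = (rng.uniform(1, 10) for _ in range(4))  # objective weights
    return theta, jobs, setups, (sigma, rho, mu, lam)
```

Fixing the `seed` makes each of the 10 replications per setting reproducible.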

6.3. Experimental Results

In the experiment, a total of 1500 instances were tested. The results are summarized in Table 9, Table 10 and Table 11 and report the CPU time, the node number of the B&B algorithm, and the relative error, where bold font indicates the best solution. The relative error of the solution obtained by HA (SA) is calculated as
Z ˇ ( X ) Z ˇ * Z ˇ * ,
where Z ˇ ( X ) is the objective function value generated by Algorithm X, X ∈ { H A , S A } , and Z ˇ * is the optimal value generated by the B&B algorithm.
From Table 9, Table 10 and Table 11, the CPU time of the HA algorithm is much shorter than that of the other algorithms, and its error remains within an acceptable range (the maximum error bound is 0.004046 < 0.5%). For instances with 11 or more groups (i.e., r ≥ 11), the CPU time of some runs reaches the 3600 s limit, at which point the B&B algorithm is stopped. In addition, both the HA and SA algorithms perform very well in terms of error: comparing Table 9, Table 10 and Table 11, the maximum error of the HA (resp. SA) algorithm is less than 0.5% (resp. 0.05%). When α 1 , α 2 range over (−0.5, −0.25), the SA algorithm performs particularly well. Moreover, the computational results also show that the HA algorithm has a shorter CPU time than the SA algorithm, but its error is larger than that of the SA algorithm.
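For reference, the SA comparisons above rest on the standard Metropolis acceptance rule; the sketch below shows that rule in isolation. Algorithm 3's actual neighborhood moves and cooling schedule are not reproduced here, so this is an assumption-laden illustration, not the paper's implementation.

```python
import math
import random

def accept(delta, temperature, rng=random.random):
    """Metropolis acceptance for minimization: always accept an improving
    neighbor (delta <= 0); accept a worsening one with probability
    exp(-delta / temperature), which shrinks as the temperature cools."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

Injecting `rng` makes the rule deterministic under test: a large worsening step at low temperature is almost never accepted, while any improving step always is.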

7. Conclusions

This paper investigated a single-machine group scheduling problem with convex resource allocation and learning effects. Under common due date assignment, the objective is to minimize the weighted sum of earliness, tardiness, due date assignment cost, resource consumption, and makespan, where the number of jobs may differ across groups. In a special case, the problem can be solved in polynomial time. For the general case, this paper presented a heuristic, a B&B, and a simulated annealing algorithm for comparison. The experimental simulations show that the B&B algorithm can obtain an optimal solution in a reasonable time frame for up to 11 groups. The CPU time of the heuristic algorithm is shorter than that of SA, and its error remains within a reasonable range (maximum error bound < 0.5%). Future research could explore group scheduling under a linear resource consumption function or with deterioration effects (Huang [35], Liu et al. [36], Sun et al. [37], Lv et al. [38], Wang et al. [39], Zhang et al. [40], and Oron [41]), or address these problems in a flow shop setting (Sun et al. [42], and Kovalev et al. [43]).

Author Contributions

Methodology, M.-H.L. and J.-B.W.; writing—original draft preparation, M.-H.L.; writing—review and editing, M.-H.L., D.-Y.L., Y.-Y.L. and J.-B.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science Research Foundation of Educational Department of Liaoning Province (LJKMZ20220527).

Data Availability Statement

The data used to support the findings of this paper are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Potts, C.N.; Strusevich, V.A. Fifty years of scheduling: A survey of milestones. J. Oper. Res. Soc. 2009, 60, S41–S68.
2. Pei, J.; Wang, X.; Fan, W.; Pardalos, P.M.; Liu, X. Scheduling step-deteriorating jobs on bounded parallel-batching machines to maximise the total net revenue. J. Oper. Res. Soc. 2019, 70, 1830–1847.
3. Lee, M.; Moon, K.; Lee, K.; Hong, J.; Pinedo, M. A critical review of planning and scheduling in steel-making and continuous casting in the steel industry. J. Oper. Res. Soc. 2023, 1–35.
4. Shabtay, D.; Steiner, G. A survey of scheduling with controllable processing times. Discret. Appl. Math. 2007, 155, 1643–1666.
5. Wang, X.; Cheng, T.C.E. Single machine scheduling with resource dependent release times and processing times. Eur. J. Oper. Res. 2005, 162, 727–739.
6. Kayvanfar, V.; Mahdavi, I.; Komaki, G.H.M. Single machine scheduling with controllable processing times to minimize total tardiness and earliness. Comput. Ind. Eng. 2013, 65, 166–175.
7. Tian, Y. Single-machine due-window assignment scheduling with resource allocation and generalized earliness/tardiness penalties. Asia-Pac. J. Oper. Res. 2022, 39, 2150041.
8. Wu, C.-C.; Yin, Y.; Cheng, S.-R. Single-machine and two-machine flowshop scheduling problems with truncated position-based learning functions. J. Oper. Res. Soc. 2013, 64, 147–156.
9. Azzouz, A.; Ennigrou, M.; Said, L.B. Scheduling problems under learning effects: Classification and cartography. Int. J. Prod. Res. 2018, 56, 1642–1661.
10. Zhao, S. Scheduling jobs with general truncated learning effects including proportional setup times. Comput. Appl. Math. 2022, 41, 146.
11. Liu, W.G.; Wang, X.Y.; Li, L.; Dai, W. Due-window assignment scheduling with job-rejection, truncated learning effects and setup times. J. Ind. Manag. Optim. 2024, 20, 313–324.
12. Yin, N. Single-machine due-window assignment resource allocation scheduling with job-dependent learning effect. J. Appl. Math. Comput. 2018, 56, 715–725.
13. Zhao, S. Resource allocation flowshop scheduling with learning effect and slack due window assignment. J. Ind. Manag. Optim. 2021, 17, 2817–2835.
14. Wang, X.Y.; Liu, W.G.; Li, L.; Zhao, P.Z.; Zhang, R.F. Resource dependent scheduling with truncated learning effects. Math. Biosci. Eng. 2022, 19, 5957–5967.
15. Jiang, Y.; Zhang, Z.; Song, X.; Yin, Y. Scheduling controllable processing time jobs in seru production system with resource allocation. J. Oper. Res. Soc. 2022, 73, 2551–2571.
16. Wang, Y.-C.; Wang, J.-B. Study on convex resource allocation scheduling with a time-dependent learning effect. Mathematics 2023, 11, 3179.
17. Shabtay, D.; Itskovich, Y.; Yedidsion, L.; Oron, D. Optimal due date assignment and resource allocation in a group technology scheduling environment. Comput. Oper. Res. 2010, 37, 2218–2228.
18. Zhu, Z.G.; Sun, L.Y.; Chu, F.; Liu, M. Single-machine group scheduling with resource allocation and learning effect. Comput. Ind. Eng. 2011, 60, 148–157.
19. Li, S.S.; Ng, C.T.; Yuan, J.J. Group scheduling and due date assignment on a single machine. Int. J. Prod. Econ. 2011, 130, 230–235.
20. Lu, Y.-Y.; Wang, J.-J.; Wang, J.-B. Single machine group scheduling with decreasing time-dependent processing times subject to release dates. Appl. Math. Comput. 2014, 234, 286–292.
21. Sun, L.; Yu, A.J.; Wu, B. Single machine common flow allowance group scheduling with learning effect and resource allocation. Comput. Ind. Eng. 2020, 139, 106126.
22. Chen, Y.; Ma, X.L.; Zhang, G.Q.; Cheng, Y.X. On optimal due date assignment without restriction and resource allocation in group technology scheduling. J. Comb. Optim. 2023, 45, 64.
23. Liang, X.-X.; Liu, M.; Feng, Y.-B.; Wang, J.-B.; Wen, L.-S. Solution algorithms for single-machine resource allocation scheduling with deteriorating jobs and group technology. Eng. Optim. 2020, 52, 1184–1197.
24. Yan, J.-X.; Ren, N.; Bei, H.-B.; Bao, H.; Wang, J.-B. Scheduling with resource allocation, deteriorating effect and group technology to minimize total completion time. Mathematics 2022, 10, 2983.
25. Liu, W.G.; Wang, X.Y. Group technology scheduling with due-date assignment and controllable processing times. Processes 2023, 11, 1271.
26. Wang, X.Y.; Liu, W.G. Optimal different due-dates assignment scheduling with group technology and resource allocation. Mathematics 2024, 12, 436.
27. Shabtay, D. Optimal restricted due date assignment in scheduling. Eur. J. Oper. Res. 2016, 252, 79–89.
28. Shabtay, D.; Mosheiov, G.; Oron, D. Single machine scheduling with common assignable due date/due window to minimize total weighted early and late work. Eur. J. Oper. Res. 2022, 303, 66–77.
29. Cheng, T.C.E. Optimal slack due-date determination and sequencing. Eng. Costs Prod. Econ. 1986, 10, 305–309.
30. Li, S.S.; Ng, C.T.; Yuan, J.J. Scheduling deteriorating jobs with CON/SLK due date assignment on a single machine. Int. J. Prod. Econ. 2011, 131, 747–751.
31. Zhang, X.G.; Wang, Y. Single-machine scheduling CON/SLK due window assignment problems with sum-of-processed times based learning effect. Appl. Math. Comput. 2015, 250, 628–635.
32. Lv, D.-Y.; Wang, J.-B. No-idle flow shop scheduling with deteriorating jobs and common due date under dominating machines. Asia-Pac. J. Oper. Res. 2024, 2450003.
33. Hardy, G.H.; Littlewood, J.E.; Polya, G. Inequalities, 2nd ed.; Cambridge University Press: Cambridge, UK, 1967.
34. Lai, K.; Hsu, P.H.; Ting, P.H.; Wu, C.C. A truncated sum of processing-times-based learning model for a two-machine flowshop scheduling problem. Hum. Factors Ergon. Manuf. Serv. Ind. 2014, 24, 152–160.
35. Huang, X. Bicriterion scheduling with group technology and deterioration effect. J. Appl. Math. Comput. 2019, 60, 455–464.
36. Liu, F.; Yang, J.; Lu, Y.-Y. Solution algorithms for single-machine group scheduling with ready times and deteriorating jobs. Eng. Optim. 2019, 51, 862–874.
37. Sun, X.Y.; Liu, T.; Geng, X.-N.; Hu, Y.; Xu, J.-X. Optimization of scheduling problems with deterioration effects and an optional maintenance activity. J. Sched. 2023, 26, 251–266.
38. Lv, Z.-G.; Zhang, L.-H.; Wang, X.-Y.; Wang, J.-B. Single machine scheduling proportionally deteriorating jobs with ready times subject to the total weighted completion time minimization. Mathematics 2024, 12, 610.
39. Wang, J.-B.; Wang, Y.-C.; Wan, C.; Lv, D.-Y.; Zhang, L. Controllable processing time scheduling with total weighted completion time objective and deteriorating jobs. Asia-Pac. J. Oper. Res. 2024, 2350026.
40. Zhang, L.-H.; Geng, X.-N.; Xue, J.; Wang, J.-B. Single machine slack due window assignment and deteriorating jobs. J. Ind. Manag. Optim. 2024, 20, 1593–1614.
41. Oron, D. Scheduling controllable processing time jobs in a deteriorating environment. J. Oper. Res. Soc. 2014, 64, 49–56.
42. Sun, X.; Geng, X.-N.; Liu, F. Flow shop scheduling with general position weighted learning effects to minimise total weighted completion time. J. Oper. Res. Soc. 2021, 72, 2674–2689.
43. Kovalev, S.; Chalamon, I.; Becuwe, A. Single machine scheduling with resource constraints: Equivalence to two-machine flow-shop scheduling for regular objectives. J. Oper. Res. Soc. 2023, 1–4.
Figure 1. Search tree of Example 4 (× denotes pruning).
Table 1. Problems with group technology.
Problem | Complexity | Ref.
1 G T , C O N i = 1 r σ d ˜ i + i = 1 r j = 1 n ¯ i ρ E ˜ i j + μ T ˜ i j O ( n log n ) Shabtay et al. [17]
1 G T , C O N , p ^ i j = p i j u ^ i j θ i = 1 r σ d ˜ i + i = 1 r j = 1 n ¯ i ρ E ˜ i j + μ T ˜ i j + v i j u ^ i j
+ λ C ^ m a x , where n ¯ i = n ¯ O ( max { n log n , r n , r 3 } ) Shabtay et al. [17]
1 G T , C O N , p ^ i j = p i j b i j u ^ i j i = 1 r σ d ˜ i + i = 1 r j = 1 n ¯ i ρ E ˜ i j + μ T ˜ i j + v i j u ^ i j
+ λ C ^ m a x , where n ¯ i = n ¯ O ( n 3 ) Shabtay et al. [17]
1 G T , R A L E , p ^ i j = p i j x ^ α 1 l ^ α 2 u ^ i j θ δ i = 1 r j = 1 n ¯ i v i j u ^ i j + λ C ^ m a x O ( n 3 ) Zhu et al. [18]
1 G T , R A L E , p ^ i j = p i j x ^ α 1 l ^ α 2 b i j u ^ i j δ i = 1 r j = 1 n ¯ i v i j u ^ i j + λ C ^ m a x O ( n log n ) Zhu et al. [18]
1 G T , C O N i = 1 r j = 1 n ¯ i σ i d ˜ i j + ρ i E ˜ i j + μ i T ˜ i j + λ i C ^ i j O ( n log n ) Li et al. [19]
1 G T , S L K i = 1 r j = 1 n ¯ i σ i d ˜ i j + ρ i E ˜ i j + μ i T ˜ i j + λ i C ^ i j O ( n log n ) Li et al. [19]
1 G T , D I F i = 1 r j = 1 n ¯ i σ i d ˜ i j + ρ i E ˜ i j + μ i T ˜ i j + λ i C ^ i j O ( n log n ) Li et al. [19]
1 G T , p ^ i j = α i j ( 1 b t ) , s ^ i = β i ( 1 b t ) C ^ m a x O ( n log n ) Lu et al. [20]
1 G T , D I F , p ^ i j = w ^ i j u ^ i j θ i = 1 r j = 1 n ¯ i σ i j d ˜ i j + ρ i j E ˜ i j + μ T ˜ i j + v i j u ^ i j O ( n 3 ) Chen et al. [22]
1 G T , D I F , p ^ i j = p i j b i j u ^ i j i = 1 r j = 1 n ¯ i σ i j d ˜ i j + ρ i j E ˜ i j + μ T ˜ i j + v i j u ^ i j O ( n 3 ) Chen et al. [22]
1 G T , p ^ i j = p i j u ^ i j θ + a t , s ^ i = s i u ^ i θ + b t
δ i = 1 r j = 1 n ¯ i v i j u ^ i j + γ i = 1 r ω i u ^ i + λ C ^ m a x O ( n log n ) Liang et al. [23]
1 G T , p ^ i j = w ^ i j u ^ i j θ + a t , s ^ i = w ^ i u ^ i η + b t , i = 1 r j = 1 n ¯ i V ˜ , h = 1 m ˜ U ˜ C i j O ( n 3 ) Yan et al. [24]
1 G T , C O N , p ^ i j = w ^ i j x ^ α 1 l ^ α 2 u ^ i j θ i = 1 r σ d ˜ i + i = 1 r j = 1 n ¯ i ρ E ˜ i j + μ T ˜ i j + v i j u ^ i j
+ λ C ^ m a x , where s ^ i = s ^ and n ¯ i = n ¯ O ( n 3 ) this paper
1 G T , C O N , p ^ i j = w ^ i j x ^ α 1 l ^ α 2 u ^ i j θ i = 1 r σ d ˜ i + i = 1 r j = 1 n ¯ i ρ E ˜ i j + μ T ˜ i j + v i j u ^ i j Heuristic and
+ λ C ^ m a x branch-and-bound
algorithmsthis paper
Note: RALE denotes resource allocation and learning effect; SLK (resp. DIF) denotes slack (resp. different) due date; a and b are the deterioration rates; t is the start time; V ˜ and U ˜ are given constants; W ij is the wait time.
Table 2. The values of w ^ i j , v i j .
 | G 1 | G 2 | G 3
 | J 11 | J 12 | J 13 | J 21 | J 22 | J 23 | J 31 | J 32 | J 33
w ^ i j | 10 | 12 | 13 | 13 | 14 | 14 | 15 | 16 | 16
v i j | 5 | 4 | 3 | 4 | 6 | 7 | 2 | 5 | 3
Table 3. The values of Q ^ i l ^ .
Q ^ i l ^ | l ^ = 1 | l ^ = 2 | l ^ = 3
i = 1 | 173.8464 | 140.6598 | 110.2995
i = 2 | 244.4035 | 197.6642 | 154.8192
i = 3 | 184.2365 | 148.8115 | 116.1668
Table 4. The values of w ^ i j , v i j .
 | J 11 | J 21 | J 22
w ^ i j | 10 | 10 | 12
v i j | 2 | 3 | 4
Table 5. The values of w ^ i j , v i j .
 | J 11 | J 21 | J 22
w ^ i j | 10 | 10 | 12
v i j | 2 | 1 | 1
Table 6. The values of n ¯ k , s ^ k .
 | n ¯ k | s ^ k
G 1 | 3 | 6
G 2 | 4 | 4
G 3 | 3 | 4
G 4 | 2 | 5
G 5 | 2 | 8
Table 7. Parameters of Example 4.
 | J 11 | J 12 | J 13 | J 21 | J 22 | J 23 | J 24 | J 31 | J 32 | J 33 | J 41 | J 42 | J 51 | J 52
w ^ i j | 10 | 12 | 13 | 13 | 14 | 14 | 15 | 16 | 16 | 16 | 15 | 18 | 12 | 14
v i j | 5 | 4 | 3 | 4 | 6 | 7 | 2 | 5 | 3 | 6 | 4 | 4 | 5 | 3
Table 8. Numerical parameters.
Parameter | Value
n ¯ | 100, 150, 200, 250, 300
r | 8, 9, 10, 11, 12
θ | [0.1, 0.5]
w ^ i j | [1, 50]
u ^ i j | [1, 50]
s ^ i | [1, 50]
v i j | [1, 50]
σ | [1, 10]
ρ | [1, 10]
μ | [1, 10]
λ | [1, 10]
Table 9. Results of the algorithms for α 1 , α 2 ∈ ( −0.25 , 0 ) .
 | | HA- cpu ^ (s) | B&B- cpu ^ (s) | SA- cpu ^ (s) | Error Bound of HA | Error Bound of SA | Node Number of B&B
n ¯ | r | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max
 | 8 | 0.0007 | 0.001 | 4.103 | 4.602 | 0.375 | 0.383 | 0.000175 | 0.000209 | 0.000159 | 0.000394 | 62,352.1 | 69,280
 | 9 | 0.0007 | 0.002 | 41.007 | 41.158 | 0.464 | 0.476 | 0.000187 | 0.000327 | 0.000170 | 0.000318 | 623,529 | 623,529
100 | 10 | 0.0009 | 0.002 | 372.402 | 421.762 | 0.572 | 0.593 | 0.000146 | 0.000277 | 0.000140 | 0.000269 | 5,611,770.1 | 6,235,300
 | 11 | 0.001 | 0.003 | 3442 | 3600 | 0.676 | 0.704 | 0.000222 | 0.000367 | 0.000210 | 0.000397 | 54,676,968.2 | 55,272,022
 | 12 | 0.0011 | 0.003 | 3600 | 3600 | 0.793 | 0.800 | 0.000185 | 0.000387 | 0.000148 | 0.000454 | 49,878,822 | 55,886,769
 | 8 | 0.0005 | 0.001 | 6.693 | 6.829 | 0.543 | 0.558 | 0.000222 | 0.000393 | 0.000216 | 0.000379 | 69,280 | 69,280
 | 9 | 0.0002 | 0.001 | 62.585 | 66.767 | 0.692 | 0.737 | 0.000101 | 0.000171 | 0.000094 | 0.000109 | 623,529 | 623,529
150 | 10 | 0.0011 | 0.002 | 616.894 | 642.112 | 0.845 | 0.887 | 0.000202 | 0.000264 | 0.000111 | 0.000272 | 6,235,300 | 6,235,300
 | 11 | 0.0006 | 0.003 | 3460.2 | 3600 | 1.015 | 1.053 | 0.000132 | 0.000253 | 0.000123 | 0.000206 | 36,584,581.6 | 36,680,876
 | 12 | 0.001 | 0.002 | 3600 | 3600 | 1.193 | 1.211 | 0.000113 | 0.000323 | 0.000106 | 0.000148 | 36,881,159.4 | 37,411,051
 | 8 | 0.0004 | 0.001 | 8.946 | 8.993 | 0.723 | 0.732 | 0.000105 | 0.000182 | 0.000089 | 0.000101 | 69,280 | 69,280
 | 9 | 0.0007 | 0.001 | 72.484 | 80.719 | 0.904 | 0.918 | 0.000101 | 0.000323 | 0.000096 | 0.000185 | 561,176.2 | 623,529
200 | 10 | 0.001 | 0.004 | 810.209 | 841.706 | 1.115 | 1.178 | 0.000181 | 0.000387 | 0.000177 | 0.000286 | 6,235,300 | 6,235,300
 | 11 | 0.0004 | 0.001 | 3452 | 3600 | 1.327 | 1.341 | 0.000189 | 0.000302 | 0.000185 | 0.000300 | 27,596,209.3 | 27,954,841
 | 12 | 0.0006 | 0.003 | 3600 | 3600 | 1.577 | 1.590 | 0.000104 | 0.000227 | 0.000068 | 0.000110 | 27,761,932.9 | 27,958,200
 | 8 | 0.0005 | 0.001 | 10.090 | 11.241 | 0.903 | 0.930 | 0.000100 | 0.000201 | 0.000059 | 0.000104 | 69,130.3 | 69,379
 | 9 | 0.0016 | 0.007 | 103.353 | 105.023 | 1.147 | 1.156 | 0.000099 | 0.000125 | 0.000073 | 0.000105 | 623,529 | 623,529
250 | 10 | 0.0013 | 0.004 | 933.733 | 1075.240 | 1.148 | 1.240 | 0.000138 | 0.000304 | 0.000134 | 0.000401 | 5,611,770.1 | 6,235,300
 | 11 | 0.0004 | 0.001 | 3480 | 3600 | 1.715 | 2.088 | 0.000099 | 0.000138 | 0.000097 | 0.000131 | 21,181,984 | 21,812,984
 | 12 | 0.001 | 0.008 | 3600 | 3600 | 1.952 | 1.987 | 0.000211 | 0.000237 | 0.000153 | 0.000343 | 22,329,658.5 | 22,672,452
 | 8 | 0.0008 | 0.002 | 12.147 | 13.550 | 1.096 | 1.104 | 0.000071 | 0.000146 | 0.000042 | 0.000161 | 62,352.1 | 69,280
 | 9 | 0.0006 | 0.001 | 128.441 | 128.929 | 1.448 | 1.632 | 0.000087 | 0.000201 | 0.000083 | 0.000191 | 623,529 | 623,529
300 | 10 | 0.0007 | 0.002 | 1218.592 | 1224.528 | 1.658 | 1.675 | 0.000065 | 0.000125 | 0.000062 | 0.000122 | 6,235,300 | 6,235,300
 | 11 | 0.0006 | 0.002 | 3240 | 3600 | 1.990 | 2.000 | 0.000120 | 0.000306 | 0.000040 | 0.000156 | 16,722,381.2 | 18,671,853
 | 12 | 0.0006 | 0.001 | 3600 | 3600 | 2.353 | 2.491 | 0.000071 | 0.000159 | 0.000246 | 0.000369 | 18,632,673.8 | 18,913,537
Table 10. Results of the algorithms for α 1 , α 2 ∈ ( −0.5 , −0.25 ) .
 | | HA- cpu ^ (s) | B&B- cpu ^ (s) | SA- cpu ^ (s) | Error Bound of HA | Error Bound of SA | Node Number of B&B
n ¯ | r | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max
 | 8 | 0.0004 | 0.001 | 4.518 | 4.529 | 0.366 | 0.369 | 0.000970 | 0.001171 | 0.000109 | 0.000230 | 69,248.2 | 69,280
 | 9 | 0.0007 | 0.002 | 36.605 | 40.885 | 0.457 | 0.462 | 0.000887 | 0.001078 | 0.000095 | 0.000162 | 561,176.2 | 623,529
100 | 10 | 0.0008 | 0.002 | 370.917 | 433.028 | 0.568 | 0.597 | 0.000463 | 0.000687 | 0.000207 | 0.000274 | 5,611,770.1 | 6,235,300
 | 11 | 0.0007 | 0.002 | 3240 | 3600 | 0.671 | 0.672 | 0.000864 | 0.001540 | 0.000221 | 0.000349 | 49,513,611.4 | 55,205,174
 | 12 | 0.0006 | 0.002 | 3600 | 3600 | 0.794 | 0.808 | 0.000176 | 0.000477 | 0.000139 | 0.000264 | 55,403,604 | 55,781,757
 | 8 | 0.0003 | 0.001 | 6.740 | 6.763 | 0.544 | 0.553 | 0.000941 | 0.001478 | 0.000100 | 0.000250 | 69,280 | 69,280
 | 9 | 0.0016 | 0.015 | 61.282 | 62.291 | 0.683 | 0.693 | 0.000161 | 0.000515 | 0.000118 | 0.000216 | 623,529 | 623,529
150 | 10 | 0.0003 | 0.001 | 613.477 | 618.656 | 0.838 | 0.849 | 0.001670 | 0.002650 | 0.000173 | 0.000267 | 6,235,300 | 6,235,300
 | 11 | 0.0003 | 0.001 | 3460 | 3600 | 1.025 | 1.065 | 0.001332 | 0.002713 | 0.000142 | 0.000277 | 36,547,401.6 | 36,702,050
 | 12 | 0.0009 | 0.004 | 3600 | 3600 | 1.206 | 1.224 | 0.000399 | 0.000988 | 0.000118 | 0.000242 | 33,032,464.2 | 37,030,175
 | 8 | 0.0006 | 0.002 | 8.099 | 9.015 | 0.725 | 0.731 | 0.000544 | 0.001159 | 0.000173 | 0.000272 | 62,352.1 | 69,280
 | 9 | 0.0004 | 0.001 | 81.815 | 83.232 | 0.911 | 0.943 | 0.000654 | 0.001032 | 0.000124 | 0.000209 | 623,529 | 623,529
200 | 10 | 0.0002 | 0.001 | 815.045 | 822.393 | 1.109 | 1.119 | 0.000426 | 0.000857 | 0.000149 | 0.000340 | 6,235,300 | 6,235,300
 | 11 | 0.0007 | 0.001 | 3600 | 3600 | 1.333 | 1.346 | 0.000361 | 0.000908 | 0.000140 | 0.000300 | 27,507,393.6 | 27,615,901
 | 12 | 0.0003 | 0.001 | 3600 | 3600 | 1.572 | 1.577 | 0.000381 | 0.000796 | 0.000080 | 0.000112 | 27,748,257.1 | 27,906,681
 | 8 | 0.0003 | 0.001 | 10.640 | 12.365 | 0.950 | 0.989 | 0.000748 | 0.001250 | 0.000100 | 0.000157 | 69,230 | 69,280
 | 9 | 0.0006 | 0.001 | 104.907 | 105.071 | 1.133 | 1.141 | 0.000735 | 0.001011 | 0.000087 | 0.000169 | 561,176.2 | 623,529
250 | 10 | 0.0005 | 0.001 | 1026.537 | 1033.245 | 1.816 | 2.245 | 0.000970 | 0.001250 | 0.000100 | 0.000277 | 6,232,529.9 | 6,235,300
 | 11 | 0.0004 | 0.001 | 2880 | 3600 | 1.715 | 2.088 | 0.000971 | 0.001361 | 0.000099 | 0.000138 | 21,181,984 | 21,229,111
 | 12 | 0.0005 | 0.002 | 3600 | 3600 | 1.995 | 2.010 | 0.000438 | 0.000899 | 0.000047 | 0.000114 | 22,067,736 | 22,254,928
 | 8 | 0.0004 | 0.001 | 12.877 | 12.928 | 1.044 | 1.094 | 0.000791 | 0.001532 | 0.000081 | 0.000154 | 69,221.8 | 69,280
 | 9 | 0.001 | 0.002 | 125.298 | 127.574 | 1.404 | 1.436 | 0.000732 | 0.001549 | 0.000075 | 0.000158 | 623,529 | 623,529
300 | 10 | 0.0003 | 0.001 | 1101.336 | 1281.591 | 1.668 | 1.744 | 0.000951 | 0.001671 | 0.000115 | 0.000268 | 5,611,770.1 | 6,235,300
 | 11 | 0.0005 | 0.001 | 3540 | 3600 | 1.910 | 1.918 | 0.000850 | 0.001425 | 0.000088 | 0.000145 | 19,325,090.6 | 19,368,360
 | 12 | 0.0005 | 0.002 | 3600 | 3600 | 2.376 | 2.494 | 0.000622 | 0.001090 | 0.000056 | 0.000137 | 18,469,754.3 | 18,865,239
Table 11. Results of the algorithms for α 1 , α 2 ∈ ( −0.5 , 0 ) .
 | | HA- cpu ^ (s) | B&B- cpu ^ (s) | SA- cpu ^ (s) | Error Bound of HA | Error Bound of SA | Node Number of B&B
n ¯ | r | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max | Mean | Max
 | 8 | 0.0006 | 0.001 | 4.519 | 4.544 | 0.368 | 0.375 | 0.001694 | 0.002206 | 0.000182 | 0.000266 | 69,280 | 69,280
 | 9 | 0.0008 | 0.002 | 40.753 | 40.836 | 0.456 | 0.460 | 0.001769 | 0.002893 | 0.000091 | 0.000147 | 623,529 | 623,529
100 | 10 | 0.0009 | 0.002 | 407.671 | 408.012 | 0.562 | 0.574 | 0.002245 | 0.003729 | 0.000102 | 0.000120 | 6,235,300 | 6,235,300
 | 11 | 0.001 | 0.003 | 3240 | 3600 | 0.668 | 0.675 | 0.001499 | 0.002119 | 0.000273 | 0.000435 | 49,585,780.1 | 55,544,160
 | 12 | 0.0024 | 0.019 | 3600 | 3600 | 0.796 | 0.827 | 0.001566 | 0.003150 | 0.000233 | 0.000458 | 49,736,852.6 | 56,374,479
 | 8 | 0.0007 | 0.002 | 6.126 | 6.909 | 0.551 | 0.565 | 0.001215 | 0.003140 | 0.000161 | 0.000317 | 62,352.1 | 69,280
 | 9 | 0.0008 | 0.001 | 60.986 | 61.143 | 0.680 | 0.688 | 0.001458 | 0.003030 | 0.000154 | 0.000307 | 623,529 | 623,529
150 | 10 | 0.0004 | 0.001 | 609.401 | 609.837 | 0.834 | 0.842 | 0.001863 | 0.004046 | 0.000193 | 0.000308 | 6,235,300 | 6,235,300
 | 11 | 0.0007 | 0.001 | 3240 | 3600 | 1.002 | 1.006 | 0.001831 | 0.002483 | 0.000197 | 0.000241 | 33,016,325.1 | 37,061,706
 | 12 | 0.0003 | 0.001 | 3600 | 3600 | 1.178 | 1.215 | 0.002269 | 0.002856 | 0.000180 | 0.000280 | 33,393,817.4 | 37,526,242
 | 8 | 0.0009 | 0.002 | 9.059 | 9.317 | 0.733 | 0.804 | 0.001546 | 0.003515 | 0.000159 | 0.000282 | 69,280 | 69,280
 | 9 | 0.0004 | 0.001 | 80.843 | 83.900 | 0.907 | 0.927 | 0.001300 | 0.002883 | 0.000135 | 0.000289 | 623,529 | 623,529
200 | 10 | 0.0005 | 0.002 | 810.323 | 814.603 | 1.111 | 1.121 | 0.000956 | 0.001523 | 0.000102 | 0.000253 | 6,235,300 | 6,235,300
 | 11 | 0.0004 | 0.001 | 3240 | 3600 | 1.331 | 1.339 | 0.000528 | 0.001684 | 0.000085 | 0.000171 | 24,968,877.5 | 27,784,387
 | 12 | 0.0004 | 0.001 | 3600 | 3600 | 1.564 | 1.590 | 0.000636 | 0.001519 | 0.000079 | 0.000154 | 28,033,053.1 | 28,342,718
 | 8 | 0.0005 | 0.001 | 10.097 | 11.317 | 0.910 | 0.917 | 0.000318 | 0.000719 | 0.000077 | 0.000127 | 69,303.1 | 69,489
 | 9 | 0.0005 | 0.001 | 80.960 | 101.675 | 1.127 | 1.140 | 0.000395 | 0.000817 | 0.000076 | 0.000176 | 497,052.3 | 623,529
250 | 10 | 0.001 | 0.002 | 1006.110 | 1015.896 | 1.204 | 1.316 | 0.000382 | 0.000619 | 0.000072 | 0.000143 | 6,235,274.2 | 6,235,300
 | 11 | 0.0006 | 0.002 | 2880 | 3600 | 1.646 | 1.669 | 0.000345 | 0.000779 | 0.000041 | 0.000087 | 22,213,440 | 22,273,369
 | 12 | 0.003 | 0.024 | 3600 | 3600 | 2.002 | 2.142 | 0.000332 | 0.000692 | 0.000032 | 0.000061 | 19,569,085.5 | 22,426,361
 | 8 | 0.0007 | 0.002 | 12.151 | 13.554 | 1.098 | 1.105 | 0.000891 | 0.001327 | 0.000092 | 0.000134 | 62,318.8 | 69,280
 | 9 | 0.0003 | 0.001 | 121.924 | 123.005 | 1.376 | 1.397 | 0.000958 | 0.001514 | 0.000098 | 0.000192 | 623,529 | 623,529
300 | 10 | 0.0007 | 0.002 | 1221.706 | 1233.344 | 1.679 | 1.695 | 0.000931 | 0.001891 | 0.000095 | 0.000193 | 6,235,300 | 6,235,300
 | 11 | 0.0009 | 0.002 | 3240 | 3600 | 2.031 | 2.109 | 0.000828 | 0.001188 | 0.000075 | 0.000121 | 16,620,628.5 | 18,735,159
 | 12 | 0.0012 | 0.004 | 3600 | 3600 | 2.427 | 2.661 | 0.000711 | 0.000740 | 0.000067 | 0.000187 | 16,218,855.7 | 18,472,247