Article

A Distributed Blocking Flowshop Scheduling with Setup Times Using Multi-Factory Collaboration Iterated Greedy Algorithm

1 School of Computer Science, Liaocheng University, Liaocheng 252059, China
2 School of Computer Science, Shandong Normal University, Jinan 252000, China
3 Macau Institute of Systems Engineering, Macau University of Science and Technology, Macau 999078, China
* Authors to whom correspondence should be addressed.
Mathematics 2023, 11(3), 581; https://doi.org/10.3390/math11030581
Submission received: 15 December 2022 / Revised: 19 January 2023 / Accepted: 19 January 2023 / Published: 22 January 2023
(This article belongs to the Special Issue Optimization Algorithms: Theory and Applications)

Abstract: As multi-factory production models become more widespread in modern manufacturing systems, this paper studies a distributed blocking flowshop scheduling problem (DBFSP) in which no buffers exist between adjacent machines and setup time constraints are considered. To address this problem, a mixed integer linear programming (MILP) model is first constructed, and its correctness is verified. Then, an iterated greedy algorithm blended with a multi-factory collaboration mechanism (mIG) is presented to optimize the makespan criterion. In the mIG algorithm, a rapid evaluation method is designed to reduce the time complexity, and two different iterative processes are selected with a certain probability. In addition, collaborative interactions between cross-factory and inner-factory operators are considered to further improve the exploitation and exploration of mIG. Finally, tests on 270 instances showed that the average makespan and RPI values of mIG are 1.93% and 78.35% better than those of the five comparison algorithms on average, respectively. Therefore, mIG is more suitable for solving the studied DBFSP_SDST.

1. Introduction

Industrial intellectualization and informatization are the frontier trends of manufacturing development. Manufacturing is the mainstay of the real economy and the lifeblood of the national economy, and its development is an essential reflection of a country’s comprehensive national power. Smart manufacturing is the main research focus of manufacturing systems at this stage. In the manufacturing industry, the flowshop scheduling problem (FSP) has long been a popular research topic of great practical importance. In FSP, jobs are processed on a series of machines according to a fixed process flow, and the ultimate goal is to find the scheduling sequence that optimizes the single (or multiple) objective function(s). In the context of globalization, collaborative production between companies is becoming more and more common, and traditional centralized production methods are no longer able to meet market demands. Thus, the centralized manufacturing model is gradually shifting to a distributed manufacturing model [1], which can break geographical restrictions and make full use of the resources of multiple enterprises or factories to achieve a rational allocation, optimal combination, and sharing of resources [2]. Due to these advantages of the distributed model, researchers have applied the distribution constraint to FSP and proposed the distributed permutation flowshop scheduling problem (DPFSP).
Many works on the DPFSP have been published. Naderi and Ruiz first constructed a MILP model and adopted constructive heuristics and a variable neighborhood descent method to address this problem [3]. Liu and Gao presented a hybrid variable neighborhood search combined with an electromagnetism-like mechanism to optimize the makespan criterion [4]. Since then, a number of constructive and metaheuristic algorithms have emerged, e.g., the improved variable neighborhood descent (VND) algorithm [5], the tabu search (TS) algorithm [6], the estimation of distribution algorithm (EDA) [7], the scatter search (SS) algorithm [8], and the bounded-search iterated greedy (BSIG) algorithm [9]. In addition, Komaki and Malakooti [10] presented a variable neighborhood search (VNS) to solve the DPFSP with a no-wait constraint. In recent years, new scheduling algorithms, e.g., a two-stage iterated greedy algorithm containing different local search operators [11] and a cooperative co-evolutionary algorithm (CCEA) [12], have been developed to optimize the DPFSP and have been successfully applied to the distributed robotic scheduling problem [13]. To optimize the total flowtime of the DPFSP, Fernandez-Viagas et al. discussed some properties of the problem and proposed eighteen constructive heuristics to obtain high-quality solutions [14].
Recently, researchers have also taken sequence-dependent setup times (SDSTs) into account in the DPFSP, forming the DPFSP_SDST, and some work has been done on this variant. To address this problem, an IG with a restart strategy (IGR) was presented [15]; the experimental results demonstrated that IGR performs best among the compared algorithms, i.e., chemical reaction optimization, differential evolution, an evolutionary algorithm, etc. Han et al. designed an iterated greedy (NIG) algorithm that includes swaps of single jobs and job blocks [16], which shows better performance than advanced algorithms. Li et al. also extended the DPFSP to a distributed hybrid flowshop with unrelated parallel machines (forming DHHFSP_SDST) [17]. Next, three heuristics based on problem-specific knowledge and a discrete artificial bee colony (DABC) algorithm were employed to solve the DPFSP_SDST [18]. The study in [19] proposed two mathematical models of the DPFSP_SDST, i.e., constraint programming (CP) and MILP; the authors also presented an evolution strategy algorithm based on a self-adaptive mechanism to quickly provide high-quality solutions. In [20], the authors considered the DPFSP_SDST with assembly constraints and presented a hyper-heuristic algorithm based on genetic programming.
The above scheduling problems assume that the buffers are infinite between any adjacent machines. However, due to cost constraints, temporary buffers may not be allowed between any adjacent machines. The current machine must be blocked with a job until the next machine is free. In this case, FSP with no buffer is transformed into a blocking flowshop scheduling problem (BFSP). Thus, our article simultaneously considers the above blocking, SDST and distributed constraints, and forms a distributed flowshop scheduling problem based on blocking and sequence-dependent setup times (DBFSP_SDST).
The DPFSP with more than two machines has been shown in the literature [12] to be NP-hard. DBFSP_SDST, as an extension of DPFSP, adds blocking and sequence-dependent setup time constraints and is therefore more complex than the permutation flowshop scheduling problem (PFSP). This is because (1) from a distributed perspective, PFSP is a single-factory problem, where the only issue is how to generate the optimal scheduling sequence, whereas in DBFSP_SDST two sub-issues must be taken into account: one is to assign the jobs to factories in a reasonable way, and the other is to arrange the job sequence within each factory [21,22]. (2) The DBFSP_SDST simultaneously considers blocking and SDST constraints in a distributed manufacturing environment, in addition to the constraints present in PFSP.
Regarding DBFSP, Companys and Ribas initially studied this problem and presented ten constructive heuristics based on typical heuristic rules [21]. Ying and Li constructed a MILP model of DBFSP and developed different hybrid IG algorithms [23]. Zhang et al. designed a discrete differential evolution (DDE) method based on problem features to optimize two different mathematical models [24]. Shao et al. employed a fruit fly optimization algorithm incorporating constructive heuristic initialization and an enhanced local search strategy [25]. Next, a mutation strategy combining crossover and insertion operators is employed to obtain a good solution [26]. Recently, Han et al. considered SDST and blocking constraints in DPFSP and developed a variable IG (VNIG) algorithm to optimize energy cost [27].
The iterated greedy algorithm (IGA) and its modifications have been successfully applied to many discrete scheduling problems. Ruiz and Stützle first proposed the IGA to address FSP with the makespan criterion [28]. Next, Lin et al. modified the IGA by improving the initialization, local search, and destruction and reconstruction strategies to optimize the DPFSP [29]. Pan and Ruiz proposed an effective IG to solve the mixed no-idle FSP [30]. The study in [31] presented a reference-based IG (IRG) algorithm to effectively solve the no-idle DFSP. Huang et al. designed an enhanced IGA to optimize the assembly DPFSP with total flowtime [32]. Mao et al. presented a multi-stage IGA to address the DPFSP with a preventive maintenance constraint [33]. For scheduling problems with the blocking constraint, IGA also shows superiority. Ribas et al. developed an efficient IGA for the blocking parallel flowshop scheduling problem with a total tardiness criterion [34]. Qin et al. considered an IG algorithm based on double-level mutation (IGDLM) for solving a hybrid BFSP [35]. For the DBFSP with makespan and total flowtime criteria, Chen et al. used constructive heuristics in the IGA [36] and a population-based IG [21], respectively, to minimize these two objectives. In addition, Öztop et al. employed four different IG algorithms for the hybrid flowshop scheduling problem to optimize total flowtime [37].
From the analysis above, it is found that (1) unlike other population-based algorithms, the iterated greedy algorithm (IGA) is an efficient meta-heuristic with a simple framework that can be coded and replicated easily; and (2) among the intelligent algorithms for the DPFSP discussed above, the IGA exhibits advanced performance. The advantages of the IG algorithm are attributed to the simplicity of its framework, its few parameters, its ease of integration, and its good intensification and local convergence performance. For the DBFSP_SDST, no existing research has attempted to solve this problem with an improved IGA. Therefore, to make the IGA more appropriate for the DBFSP_SDST, this article makes some adjustments according to the problem characteristics and designs a multi-factory collaborative iterated greedy algorithm.
Our main innovations are that (1) a MILP of DBFSP_SDST with makespan is constructed, and the Gurobi solver is adopted to verify the correctness of this model. (2) According to the characteristics of the problem, a new refresh acceleration calculation method based on job insertion is designed to speed up the calculation of the objective, thereby reducing the time complexity of the algorithm. (3) To enrich the diversity of solutions, iterative process I and iterative process II strategies are selected by a certain probability. (4) A collaborative strategy between cross-factory and inner-factory is presented.
The remaining parts are organized as follows. Section 2 formulates a MILP model of the DBFSP_SDST. Section 3 states the specific details of the mIG algorithm. Experimental results and statistical analyses are presented in Section 4. Section 5 summarizes the research on the problem and the algorithm and outlines future research directions.

2. Problem-Specific Characteristics

The DBFSP_SDST considered in this article can be characterized as follows. Assume that $F$ ($F \ge 2$) identical factories exist, each containing the same $M$ machines arranged as a flowshop, and $J$ jobs must be assigned to these factories and processed. All factories must satisfy the restrictions in the MILP. The constraints are as follows: (1) Each job is processed entirely in exactly one factory. (2) Each job is processed on one machine at a time according to the scheduled order. (3) Each machine can process only one job at a time. (4) No buffer exists between adjacent machines; the current machine remains blocked by a job until the next machine is free. (5) On each machine, sequence-dependent setup times are taken into account, and the first job on each machine requires an initial setup time. (6) Jobs cannot be interrupted during processing. Based on the above constraints, the optimization objective of the DBFSP_SDST is the makespan (unit: seconds).

2.1. Mathematical Model

Notations:
$F$: Number of factories.
$J$: Number of jobs.
$M$: Number of machines in each factory.
$j, j'$: Indices of jobs, $j, j' \in \{0, 1, \ldots, J\}$, where 0 denotes a dummy job that starts and ends the sequence of each factory.
$m$: Index of machines.
$p_{j,m}$: Processing time of job $j$ on machine $m$.
$s_{j,j',m}$: Setup time between adjacent jobs $j$ and $j'$ on machine $m$; $s_{0,j,m}$ is a predetermined value when $j$ is the first job on machine $m$.
$h$: A large positive number.
Decision Variables:
$C_{j,m}$: Completion time of job $j$ on machine $m$.
$D_{j,m}$: Departure time of job $j$ on machine $m$.
$x_{j,j'}$: Binary decision variable, equal to 1 if job $j'$ is the direct successor of job $j$, and 0 otherwise.
Objective:
Minimize $C_{\max}$ (1)
Constraints:
$\sum_{j=0, j \ne j'}^{J} x_{j,j'} = 1, \quad \forall j' \in \{1, 2, \ldots, J\}$ (2)
$\sum_{j'=0, j' \ne j}^{J} x_{j,j'} = 1, \quad \forall j \in \{1, 2, \ldots, J\}$ (3)
$\sum_{j=1}^{J} x_{0,j} \le F$ (4)
$\sum_{j=1}^{J} x_{j,0} \le F$ (5)
$\sum_{j=1}^{J} x_{0,j} = \sum_{j=1}^{J} x_{j,0}$ (6)
$D_{j,m} \ge C_{j,m}, \quad \forall j \in \{1, \ldots, J\}, \ m \in \{1, \ldots, M\}$ (7)
$C_{j,m} - p_{j,m} = D_{j,m-1}, \quad \forall j \in \{1, \ldots, J\}, \ m \in \{1, \ldots, M\}$ (8)
$C_{j',m} - p_{j',m} \ge D_{j,m} + s_{j,j',m} + (x_{j,j'} - 1)h, \quad \forall j, j' \in \{1, \ldots, J\}, \ j \ne j', \ m \in \{1, \ldots, M\}$ (9)
$C_{j,m} - p_{j,m} \ge s_{0,j,m} + (x_{0,j} - 1)h, \quad \forall j \in \{1, \ldots, J\}, \ m \in \{1, \ldots, M\}$ (10)
$C_{\max} \ge C_{j,M}, \quad \forall j \in \{1, \ldots, J\}$ (11)
Equation (1) is the makespan objective. Constraints (2) and (3) ensure that each job in the scheduling sequence has exactly one immediate predecessor and one immediate successor, respectively. Constraints (4) and (5) ensure that the dummy job has at most $F$ immediate successors and predecessors, respectively. The dummy job must have an equal number of immediate predecessors and successors, which is ensured by Constraint (6). Constraint (7) requires that the departure time of each job on each machine be no earlier than its completion time. According to Constraint (8), the departure time of each job from the previous machine equals the time at which it starts processing on the current machine. Constraint (9) states that, when job $j'$ directly follows job $j$, the start time of job $j'$ on machine $m$ is no earlier than the departure time of job $j$ from machine $m$ plus the setup time $s_{j,j',m}$. Constraint (10) enforces the initial setup time $s_{0,j,m}$ for the first job on machine $m$. Constraint (11) defines the makespan. No more than $2F$ dummy-job variables are used in the sequence-based formulation, and the job sequence within each factory starts and ends with the dummy job.
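For reproducibility, the following minimal sketch shows one way to express the above model with the gurobipy Python API (the same solver is used in Section 4.2). It is an illustration under stated assumptions rather than the authors' code: p[j][m] and s[j][jp][m] are assumed data structures indexed with the dummy job 0 and machines 1..M, h is a sufficiently large constant, and Constraint (8) is written for m >= 2.

import gurobipy as gp
from gurobipy import GRB

def build_dbfsp_sdst_milp(F, J, M, p, s, h):
    jobs, machines = range(1, J + 1), range(1, M + 1)
    mdl = gp.Model("DBFSP_SDST")
    x = mdl.addVars(range(J + 1), range(J + 1), vtype=GRB.BINARY, name="x")
    C = mdl.addVars(jobs, machines, lb=0.0, name="C")
    D = mdl.addVars(jobs, machines, lb=0.0, name="D")
    Cmax = mdl.addVar(lb=0.0, name="Cmax")
    # (2)-(3): exactly one immediate predecessor and successor per real job
    mdl.addConstrs(gp.quicksum(x[j, jp] for j in range(J + 1) if j != jp) == 1 for jp in jobs)
    mdl.addConstrs(gp.quicksum(x[j, jp] for jp in range(J + 1) if jp != j) == 1 for j in jobs)
    # (4)-(6): the dummy job opens and closes at most F sequences, in equal numbers
    mdl.addConstr(gp.quicksum(x[0, j] for j in jobs) <= F)
    mdl.addConstr(gp.quicksum(x[j, 0] for j in jobs) <= F)
    mdl.addConstr(gp.quicksum(x[0, j] for j in jobs) == gp.quicksum(x[j, 0] for j in jobs))
    # (7)-(8): blocking -- departure no earlier than completion; start on m equals departure from m-1
    mdl.addConstrs(D[j, m] >= C[j, m] for j in jobs for m in machines)
    mdl.addConstrs(C[j, m] - p[j][m] == D[j, m - 1] for j in jobs for m in machines if m >= 2)
    # (9)-(10): sequence-dependent setup times linked to x via the big constant h
    mdl.addConstrs(C[jp, m] - p[jp][m] >= D[j, m] + s[j][jp][m] + (x[j, jp] - 1) * h
                   for j in jobs for jp in jobs if j != jp for m in machines)
    mdl.addConstrs(C[j, m] - p[j][m] >= s[0][j][m] + (x[0, j] - 1) * h
                   for j in jobs for m in machines)
    # (11) and objective (1)
    mdl.addConstrs(Cmax >= C[j, M] for j in jobs)
    mdl.setObjective(Cmax, GRB.MINIMIZE)
    return mdl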

2.2. Example Instance

The described problem can be illustrated by an example with five jobs ($J = 5$), two machines ($M = 2$), and two factories ($F = 2$). Table 1 gives the processing times of the five jobs, and the SDSTs are shown in Table 2; both are in seconds. One possible solution is denoted as $x_{0,1} = 1$, $x_{1,4} = 1$, $x_{4,0} = 1$, $x_{0,5} = 1$, $x_{5,3} = 1$, $x_{3,2} = 1$, $x_{2,0} = 1$; the remaining decision variables are equal to 0. The solution corresponds to the sequence {0, 1, 4, 0, 5, 3, 2, 0}, where the dummy job 0 divides it into two sequences {1, 4} and {5, 3, 2}. This means that factory 1 processes jobs 1 and 4, and factory 2 processes jobs 5, 3, and 2. The makespan is 57, and the scheduling Gantt chart is shown in Figure 1.

2.3. Improved Rapid Evaluation Criteria

An acceleration method inspired by Taillard [38] is proposed to save computational effort by exploiting the characteristics of the studied problem. In the rapid evaluation process, forward and backward calculations are adopted. The forward calculation is as follows: (1) compute the departure time of the first job on the first machine, the second machine, and up to the last machine; (2) similarly, compute the departure times on each machine for the second job, the third job, and so on until the last job (see Figure 2a). The backward calculation is as follows: (1) calculate the departure time of the last job on the last machine, the penultimate machine, and up to the first machine; (2) similarly, calculate the departure times on each machine for the penultimate job, the antepenultimate job, and so on up to the first job (see Figure 2b).
Assume that factory $f$ contains $\eta_f$ jobs processed according to the sequence $\pi_f = \{\pi_{f,1}, \pi_{f,2}, \ldots, \pi_{f,j}, \ldots, \pi_{f,\eta_f}\}$, where $\pi_{f,j}$, $j \in \{1, 2, \ldots, \eta_f\}$, is the job at position $j$ in factory $f$, and $[j]$ denotes the index of the $j$th job. In the forward calculation, $jc_{[j],m}$ and $jd_{[j],m}$ denote the completion time and the departure time, respectively, of $\pi_{f,j}$ on machine $m$. In the backward calculation, $js_{[j],m}$ and $je_{[j],m}$ denote the corresponding completion time and departure time, respectively, of $\pi_{f,j}$ on machine $m$.
Refresh accelerated calculation for inserting job:
An attempt is made to insert $\eta_s$ jobs $\tau_{j_1}, \tau_{j_2}, \ldots, \tau_{j_t}, \ldots, \tau_{j_{\eta_s}}$, $j_t \in \{1, 2, \ldots, \eta_s\}$, sequentially into the job sequence $\pi_f$ so as to minimize the makespan of factory $f$.
Step 1: Set $t = 1$ and consider the insertion of job $\tau_{j_t}$.
Step 2: Forward calculate $jd_{[j],m}$ for each job $\pi_{f,j}$ on machine $m$ according to Equations (12)–(14); see Figure 2a.
$jc_{[j],0} = 0, \quad j = 1, 2, \ldots, \eta_f$ (12)
$jc_{[j],m} = \begin{cases} \max(s_{0,[j],m}, jc_{[j],m-1}) + p_{[j],m}, & j = 1, \ m = 1, 2, \ldots, M \\ \max(jd_{[j-1],m} + s_{[j-1],[j],m}, jc_{[j],m-1}) + p_{[j],m}, & j = 2, 3, \ldots, \eta_f, \ m = 1, 2, \ldots, M \end{cases}$ (13)
$jd_{[j],m} = \begin{cases} jc_{[j],m+1} - p_{[j],m+1}, & j = 1, 2, \ldots, \eta_f, \ m = 1, 2, \ldots, M-1 \\ jc_{[j],m}, & j = 1, 2, \ldots, \eta_f, \ m = M \end{cases}$ (14)
Step 3: Backward calculate $je_{[j],m}$ for each job $\pi_{f,j}$ on machine $m$ according to Equations (15)–(17); see Figure 2b.
$js_{[j],M+1} = 0, \quad j = \eta_f, \eta_f - 1, \ldots, 1$ (15)
$js_{[j],m} = \begin{cases} js_{[j],m+1} + p_{[j],m}, & j = \eta_f, \ m = M, M-1, \ldots, 1 \\ \max(je_{[j+1],m} + s_{[j],[j+1],m}, js_{[j],m+1}) + p_{[j],m}, & j = \eta_f - 1, \eta_f - 2, \ldots, 1, \ m = M, M-1, \ldots, 1 \end{cases}$ (16)
$je_{[j],m} = \begin{cases} js_{[j],m-1} - p_{[j],m-1}, & j = \eta_f, \eta_f - 1, \ldots, 1, \ m = M, M-1, \ldots, 2 \\ js_{[j],m}, & j = \eta_f, \eta_f - 1, \ldots, 1, \ m = 1 \end{cases}$ (17)
Step 4: The job sequence $\pi_f$ offers $\eta_f + 1$ candidate positions, and the job can be tested at each of them. Suppose job $\tau_{j_t}$ is inserted at the $q$th position, $q = 1, 2, \ldots, \eta_f + 1$. Then $jd_{j_t,m}$ can be calculated using Equations (18) and (19), as shown in Figure 2c.
$jd_{j_t,0} = 0$ (18)
$jd_{j_t,m} = \begin{cases} \max(s_{0,j_t,m}, jd_{j_t,m-1}) + p_{j_t,m}, & q = 1, \ m = 1, 2, \ldots, M \\ \max(jd_{[q-1],m} + s_{[q-1],j_t,m}, jd_{j_t,m-1}) + p_{j_t,m}, & q = 2, \ldots, \eta_f + 1, \ m = 1, 2, \ldots, M \end{cases}$ (19)
Step 5: From Equation (20), the makespan of factory $f$ after inserting job $\tau_{j_t}$ into the $q$th position of $\pi_f$, $C_{\max}(j_t, q)$, can be calculated, as shown in Figure 2c.
$C_{\max}(j_t, q) = \begin{cases} \max_{m = 1, 2, \ldots, M} \left( jd_{j_t,m} + s_{j_t,[q],m} + je_{[q],m} \right), & q = 1, 2, \ldots, \eta_f \\ jd_{j_t,M}, & q = \eta_f + 1 \end{cases}$ (20)
Step 6: Repeat Steps 4 and 5 until all positions have been considered. Let $q_{best}$ denote the best position at which job $\tau_{j_t}$ can be inserted.
Step 7: After job $\tau_{j_t}$ is inserted into position $q_{best}$, the values $jd_{[j],m}$ of the jobs before position $q_{best}$ and the values $je_{[j],m}$ of the jobs after position $q_{best}$ remain unchanged. Therefore, we only need to recalculate $jd_{[j],m}$ for the jobs after position $q_{best}$, according to Equations (12)–(14), and $je_{[j],m}$ for the jobs before position $q_{best}$, according to Equations (15)–(17). It is also necessary to calculate $je_{j_t,m}$ for job $\tau_{j_t}$, as shown in Figure 2d,e.
Step 8: Set $t = t + 1$ and $\eta_f = \eta_f + 1$.
Step 9: Repeat Steps 4–8 until all $\eta_s$ jobs have been considered.
With the above steps, the computational complexity of inserting the jobs into the sequence is reduced from $O\!\left(m\left(\eta_s \eta_f^2 + \sum_{t=1}^{\eta_s}(2t\eta_f + t^2)\right)\right) \approx O(mn^2)$ to $O\!\left(m\left((2\eta_s + 1)\eta_f + \sum_{t=1}^{\eta_s}(2t - 1)\right)\right) \approx O(mn)$. The computational cost savings are substantial when dealing with large-scale problems.
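The refresh accelerated calculation can be sketched in Python as follows. This is an illustrative transcription of Equations (12)–(14) and (18)–(20) under simplifying assumptions (0-based job positions; p[job][m] and s[a][b][m] address machines 1..M with the dummy job indexed 0); the backward pass that produces je via Equations (15)–(17) is symmetric and omitted for brevity.

def forward_pass(seq, p, s, M):
    # Eqs (12)-(14): completion times jc and departure times jd of the jobs already in seq.
    n = len(seq)
    jc = [[0.0] * (M + 1) for _ in range(n)]
    jd = [[0.0] * (M + 1) for _ in range(n)]
    for j, job in enumerate(seq):
        for m in range(1, M + 1):
            if j == 0:
                start = max(s[0][job][m], jc[j][m - 1])
            else:
                start = max(jd[j - 1][m] + s[seq[j - 1]][job][m], jc[j][m - 1])
            jc[j][m] = start + p[job][m]                                      # Eq (13)
        for m in range(1, M + 1):
            jd[j][m] = jc[j][m] if m == M else jc[j][m + 1] - p[job][m + 1]   # Eq (14)
    return jc, jd

def best_insertion(seq, jd, je, tau, p, s, M):
    # Eqs (18)-(20): evaluate every insertion position q for the new job tau in O(M) time each.
    n = len(seq)
    best_q, best_c = None, float("inf")
    for q in range(1, n + 2):
        jdt = [0.0] * (M + 1)                                                 # Eq (18)
        for m in range(1, M + 1):
            if q == 1:
                start = max(s[0][tau][m], jdt[m - 1])
            else:
                prev = seq[q - 2]
                start = max(jd[q - 2][m] + s[prev][tau][m], jdt[m - 1])
            jdt[m] = start + p[tau][m]                                        # Eq (19)
        if q == n + 1:
            cmax = jdt[M]                                                     # Eq (20), last position
        else:
            nxt = seq[q - 1]
            cmax = max(jdt[m] + s[tau][nxt][m] + je[q - 1][m] for m in range(1, M + 1))
        if cmax < best_c:
            best_q, best_c = q, cmax
    return best_q, best_c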

3. Proposed IG Algorithm for DBFSP_SDST

Unlike other population-based algorithms, the iterated greedy algorithm focuses on iterating a single solution and has a strong local search capability due to its greedy strategy. Its advantages are a simple framework, a small number of parameters, and ease of encoding and replication. Considering the multi-factory feature of the DBFSP_SDST and the need for solution diversity from a global perspective, we make some modifications to the IGA, such as designing iterative processes I and II to increase the diversity of solutions and focusing on the cooperation between global and local search. Thus, we propose a multi-factory collaborative iterated greedy algorithm, mIG, to solve the DBFSP_SDST.

3.1. Algorithm Description

Figure 3 shows the flow chart of mIG. It is well known that a high-quality initial solution can enhance the convergence of the algorithm. Thus, we first design an enhanced NEH heuristic, Refresh_NEH_en, to initialize the solution using the refresh accelerated calculation (see line 1 of Algorithm 1). Then, we adopt a multi-neighborhood structures search based on the variable neighborhood descent (mVND) method to improve the quality of this initial solution (see line 2 of Algorithm 1). Considering the multi-factory characteristic of the DBFSP_SDST and to enhance the diversity of solutions from a global perspective, we also design two iterative stages, called iterative process I and iterative process II, each of which is adopted with a certain probability (see lines 4–8 of Algorithm 1). After performing the above search strategy, a simulated annealing acceptance criterion is adopted to enhance the diversity of solutions: if the new solution is not better than the incumbent, it is still accepted when $r \le \exp\{-(C_{\max}(\pi_{current}) - C_{\max}(\pi_{origin}))/T\}$, where $r \in (0,1)$ is a uniform random number and the temperature is updated as $T = \lambda T$ with $\lambda \in (0,1)$; otherwise, the incumbent is retained (a minimal sketch of this rule is given after Algorithm 1). Furthermore, the proposed refresh accelerated calculation for job insertion is adopted throughout the algorithm.
Algorithm 1: The proposed mIG
Input: ρ is the probability value.
01: π = Refresh_NEH_en()
02: π_temp = mVND(π)
03: while (the current CPU time < termination time) do
04:     if rand(0,1) > ρ
05:         π = iterative process I(π_temp)
06:     else
07:         π = iterative process II(π_temp)
08:     end if
09:     π_best = AcceptanceCriterion(π)
10: end while
Output: BestSolution
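A minimal sketch of the acceptance criterion in line 09, assuming a geometric cooling schedule whose temperature T and cooling factor lam are carried in a small state dictionary (these names and the bookkeeping are our illustration, not the authors' implementation):

import math
import random

def acceptance_criterion(pi_new, pi_incumbent, cmax, state):
    # state carries the current temperature "T" and the cooling factor "lam" (assumed parameters).
    delta = cmax(pi_new) - cmax(pi_incumbent)
    if delta < 0 or random.random() <= math.exp(-delta / state["T"]):
        pi_incumbent = pi_new          # accept improving solutions, or worse ones with SA probability
    state["T"] *= state["lam"]         # T = lambda * T, lambda in (0, 1)
    return pi_incumbent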

3.2. Solution Representation

Regarding the solution encoding of the DBFSP_SDST, a solution is represented using a discrete integer encoding. That is, a solution $\pi$ can be expressed as $\{\pi_1, \pi_2, \ldots, \pi_f, \ldots, \pi_F\}$, with each $\pi_f$ consisting of $\{\pi_{f,1}, \pi_{f,2}, \ldots, \pi_{f,j}, \ldots, \pi_{f,\eta_f}\}$, in which $\pi_f$ refers to the job sequence of factory $f$ and $\eta_f$ refers to the number of jobs in factory $f$. For the example in Section 2, a solution can be expressed as $\pi = \{\pi_1, \pi_2\}$, where $\pi_1 = \{1, 4\}$, $\pi_2 = \{5, 3, 2\}$, $\eta_1 = 2$, and $\eta_2 = 3$. This means that factory 1 processes jobs 1 and 4 in the order 1 → 4. Similarly, factory 2 processes jobs 2, 3, and 5 in the order 5 → 3 → 2.
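In code, this encoding is simply a list of per-factory job lists; a tiny illustration for the example above (cmax_factory is a hypothetical helper that evaluates one factory's makespan):

# Nested-list encoding: one inner list of job indices per factory.
pi = [[1, 4], [5, 3, 2]]            # pi[0] is factory 1, pi[1] is factory 2
eta = [len(seq) for seq in pi]      # number of jobs per factory: [2, 3]
# The critical factory is the one whose sequence yields the largest makespan, e.g.
# critical = max(range(len(pi)), key=lambda f: cmax_factory(pi[f]))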

3.3. Initialization Solution

As mentioned above, the initial sequence is closely related to the convergence of the algorithm. Thus, initialization is performed using a heuristic method. According to the distributed characteristic of the DBFSP_SDST, two issues need to be addressed: one is the assignment of jobs to factories, and the other is the arrangement of a reasonable scheduling sequence within each factory. NEH2_en, presented in [11], has shown superior performance when optimizing a distributed flowshop scheduling problem and can address both issues. However, NEH2_en has a high time complexity because the objective function must be re-evaluated whenever a job is tried at every possible position of every factory. Considering the problem characteristics and the rapid evaluation method for job insertion designed in Section 2.3, we propose a rapid initialization strategy, Refresh_NEH_en, based on the refresh accelerated calculation.
Algorithm 2 shows the procedure of Refresh_NEH_en in detail; a condensed code sketch is given after Algorithm 2. First, the total processing time P_j is calculated for every job over all machines (see line 01), and a sequence τ is obtained by sorting the jobs in descending order of P_j (see line 02). Second, the first F jobs of τ are assigned to the factories one by one (see lines 03–05), which ensures a uniform allocation. The remaining jobs are then taken one after another, tested at all positions in all factories, and inserted at the best position found (see lines 06–12). After each insertion, we remove a job at position pos_{f*} − 1 or pos_{f*} + 1 from π_{f*}, try it at all positions in π_{f*}, and select the position with minimal makespan (see lines 13–16).
Algorithm 2: Refresh_NEH_en
Input: an initial solution π = ∅.
01: P_j = Σ_{m=1}^{M} p_{j,m}, j = 1, 2, ..., J
02: τ = {τ_1, τ_2, ..., τ_J} (sort jobs in descending order of P_j)
03: for j = 1 to F do       %% uniformly allocate the first F jobs to the factories
04:   Take job τ_j from the job set and assign it to π_j
05: end for
06: for j = F + 1 to J do
07:    for f = 1 to F do
08:       Insert τ_j at all positions of π_f and calculate the corresponding makespans using the refresh accelerated calculation
09:       C_f = min_{pos_f = 1..η_f+1} C_f(pos_f) and pos_f = argmin_{pos_f} C_f(pos_f)
10:    end for
11:    f* = argmin_{f = 1..F} C_f   %% pos_{f*} is the best position of the factory with minimal makespan
12:    Insert τ_j into position pos_{f*} of π_{f*}
13:    Randomly select a job j′ from position pos_{f*} − 1 or pos_{f*} + 1 of π_{f*}
14:    Measure job j′ at all positions using the refresh accelerated calculation
15:    Insert job j′ at the position with minimum makespan
16: end for
Output: the initial solution π
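A condensed Python sketch of Algorithm 2 (illustrative only; p[j] is assumed to be the list of processing times of job j, and evaluate_job(seq, j) is an assumed wrapper around the accelerated evaluation of Section 2.3 that returns the best insertion position and the resulting factory makespan):

import random

def refresh_neh_en(J, F, p, evaluate_job):
    order = sorted(range(1, J + 1), key=lambda j: -sum(p[j]))     # lines 01-02
    pi = [[] for _ in range(F)]
    for f in range(F):                                            # lines 03-05: seed each factory
        pi[f].append(order[f])
    for j in order[F:]:                                           # lines 06-12: best factory and position
        best_f, best_q, best_c = None, None, float("inf")
        for f in range(F):
            q, c = evaluate_job(pi[f], j)
            if c < best_c:
                best_f, best_q, best_c = f, q, c
        pi[best_f].insert(best_q - 1, j)
        # Lines 13-15: re-insert a random neighbour of the newly placed job inside the same factory.
        neighbours = [i for i in (best_q - 2, best_q) if 0 <= i < len(pi[best_f])]
        if neighbours:
            job2 = pi[best_f].pop(random.choice(neighbours))
            q2, _ = evaluate_job(pi[best_f], job2)
            pi[best_f].insert(q2 - 1, job2)
    return pi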

3.4. Multi-Neighborhood Structures Search

According to the distributed characteristic of the DBFSP_SDST, a variable neighborhood descent method based on multiple neighborhood structures (mVND) is adopted to further perturb the current solution. Since multiple parallel identical factories exist in the DBFSP_SDST, we use cross-factory and inner-factory neighborhood searches to explore the solution globally. In addition, the critical factory, i.e., the factory with the maximum makespan, determines the final makespan of the DBFSP_SDST. In view of this, two neighborhood search operators based on the critical and non-critical factories, namely Critical_cross_swap1(π) and Critical_inner_insert(π_f_critical), are designed. A code sketch of both operators is given after Algorithm 3.
Critical_cross_swap1(π) accomplishes the interaction between the critical factory and the secondary factory, called cross-factory interaction, where the secondary factory is the one with the second-highest makespan. The details are as follows. First, the critical factory is identified according to the current solution π (if more than one critical factory exists, one is chosen randomly). Second, a job is chosen from the critical factory. Third, another job is selected from the secondary factory. Next, the two selected jobs are swapped and the result is evaluated. If the objective value of the critical factory is reduced, the current solution is updated.
Critical_inner_insert(π_f_critical) accomplishes the interaction within the critical factory, called inner-factory interaction. First, a random job is selected from the critical factory. Second, the selected job is tried at all positions of π_f_critical, and the best position is selected.
Algorithm 3 gives the pseudocode of the multi-neighborhood structures search algorithm.
Algorithm 3: mVND(π)
Input: π is the initial solution.
01: Find the critical factory f_critical and the secondary factory f_secondary and record their scheduling sequences π_f_critical and π_f_secondary, respectively.
02: p_max = 2 and p = 1
03: do {
04:      if p = 1
05:          π_temp = Critical_cross_swap1(π)
06:      else
07:          π_temp = Critical_inner_insert(π_f_critical)
08:      end if
09:      if C_max is improved
10:          π = π_temp
11:          p = 1
12:      else
13:          p = p + 1
14:      end if
15: } while (p ≤ p_max)
Output: π
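The two operators of Algorithm 3 can be sketched in Python as follows (an illustration under the assumption that cmax_factory(seq) evaluates one factory's makespan with the accelerated calculation):

import random

def critical_cross_swap1(pi, cmax_factory):
    # Swap one random job of the critical factory with one random job of the secondary factory.
    order = sorted(range(len(pi)), key=lambda f: cmax_factory(pi[f]), reverse=True)
    fc, fs = order[0], order[1]
    a, b = random.randrange(len(pi[fc])), random.randrange(len(pi[fs]))
    before = cmax_factory(pi[fc])
    pi[fc][a], pi[fs][b] = pi[fs][b], pi[fc][a]
    if cmax_factory(pi[fc]) >= before:          # keep the swap only if the critical factory improves
        pi[fc][a], pi[fs][b] = pi[fs][b], pi[fc][a]
    return pi

def critical_inner_insert(seq, cmax_factory):
    # Remove a random job of the critical factory and re-insert it at its best position.
    job = seq.pop(random.randrange(len(seq)))
    best = min(range(len(seq) + 1), key=lambda q: cmax_factory(seq[:q] + [job] + seq[q:]))
    seq.insert(best, job)
    return seq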

3.5. Two Iterative Processes

As mentioned above, IGA is an efficient meta-heuristic algorithm with a simple framework. Because its structure is easy to reproduce, many good strategies can be ported to its framework to further improve the performance of IGA. In addition, considering the multiple factories characteristic of DBFSP_SDST and enhancing the diversity of solutions from a global perspective, two iterative processes are designed, called iterative process I and iterative process II, and each iterative process is adopted by a certain probability.
Iterative process I (see Algorithm 4) adopts the vDestruction_Reconstruction(π) (see Algorithm 5) and Critical_cross_swap1(π) (see Section 3.4) operators to perturb the current solution. The traditional destruction and reconstruction operators [11] are improved according to the distributed characteristics and abbreviated as vDestruction_Reconstruction(π); a compact code sketch follows Algorithm 5. The details are as follows. First, a parameter d is initialized with the random function randbetween(2, 6), which generates an integer between 2 and 6. Second, a sequence π_R with d jobs is obtained, in which d/2 jobs are extracted from the critical factory and the rest are randomly selected from the non-critical factories (see lines 2–9); these d jobs are simultaneously removed from the original sequence. Third, the jump reconstruction operator [39] is adopted to test the d jobs at possible positions and finally select the best position (see lines 13–21). It should be noted that (1) the difference between jump reconstruction and traditional reconstruction is that the former performs a jumpy insertion when a trial insertion does not improve the quality of the solution, which accelerates the insertion and reduces the time complexity; and (2) the refresh accelerated calculation is adopted when performing the insertions and evaluating the objective value. The proposed vDestruction_Reconstruction(π) further explores the deep neighborhood of the solution, increasing the diversity of solutions to prevent falling into a local optimum. Algorithm 5 displays the procedure of vDestruction_Reconstruction(π) in detail.
Algorithm 4: iterative process I
Input: π is the current primary solution; J is the total number of jobs in π.
01: Find the critical factory f_critical and the secondary factory f_secondary and record their scheduling sequences π_f_critical and π_f_secondary, respectively.
02: π = vDestruction_Reconstruction(π)    %% Algorithm 5
03: for cnt = 1 to J/2 do
04:    π_temp = Critical_cross_swap1(π)    %% Section 3.4
05:    if C_max is improved
06:        π = π_temp
07:    end if
08: end for
Output: π
Algorithm 5: vDestruction_Reconstruction(π)
Input: π is the current primary solution; d is the number of jobs removed from π; π_R = ∅
01: Find the critical factory f_critical and record its scheduling sequence π_f_critical
/* Destruction */
02: d = randbetween(2, 6)
03: for cnt = 1 to d/2 do
04:   Select a random job j from π_f_critical
05:   π_R ← π_R ∪ {j} and π_f_critical = π_f_critical \ {j}
06: end for
07: while |π_R| < d do    %% |π_R| refers to the number of jobs in π_R
08:    Randomly select a job j from π_f (π_f ≠ π_f_critical)   %% π_f is the sequence of factory f
09:    π_R ← π_R ∪ {j} and π_f = π_f \ {j}
10: end while
/* Reconstruction based on jumpy insertion and the refresh accelerated calculation */
11: for j = 1 to d do
12:   for f = 1 to F do
13:     pos = 0 and K = 1
14:     while pos ≤ |π_f| do
15:        Measure job j at position pos of π_f^temp using the refresh accelerated calculation
16:        if C_max is improved
17:            Insert job j at pos of π_f^temp, and K = 1
18:        else
19:            K = K + 1
20:        end if
21:        pos = pos + K
22:     end while
23:   end for
24: end for
Output: π
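A compact Python sketch of the destruction and jumpy reconstruction (our reading of Algorithm 5 and of the jumpy insertion in [39]: every removed job is finally placed at the best position found, and the step size K grows after each non-improving trial; cmax_factory is the assumed accelerated evaluator):

import random

def v_destruction_reconstruction(pi, cmax_factory):
    # Destruction: remove d jobs, half from the critical factory, the rest from the other factories.
    fc = max(range(len(pi)), key=lambda f: cmax_factory(pi[f]))
    d = random.randint(2, 6)
    removed = [pi[fc].pop(random.randrange(len(pi[fc]))) for _ in range(d // 2)]
    while len(removed) < d:
        f = random.choice([g for g in range(len(pi)) if g != fc and pi[g]])
        removed.append(pi[f].pop(random.randrange(len(pi[f]))))
    # Jumpy reconstruction: skip ahead by K positions after each non-improving trial.
    for job in removed:
        best = None                                   # (makespan, factory, position)
        for f in range(len(pi)):
            pos, K = 0, 1
            while pos <= len(pi[f]):
                c = cmax_factory(pi[f][:pos] + [job] + pi[f][pos:])
                if best is None or c < best[0]:
                    best, K = (c, f, pos), 1
                else:
                    K += 1
                pos += K
        _, f_best, pos_best = best
        pi[f_best].insert(pos_best, job)
    return pi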
Iterative process II (see Algorithm 6) accomplishes the interaction of cross-factory and inner-factory searches. Since the completion time of the critical factory directly affects the quality of the whole schedule, the critical factory must be scheduled appropriately. Combined with the distributed characteristic of the DBFSP_SDST, cross-factory and inner-factory strategies are designed, and the cooperation of the two strategies balances the exploitation and exploration of the proposed algorithm.
Multiple search strategies can improve the diversity of solutions. Therefore, in the cross-factory strategy, four disturbing operators are designed, i.e., vDestruction_Reconstruction(π), Critical_cross_swap1(π), Critical_min_swap(π), and Critical_cross_swap2(π), to improve the chance of obtaining potential solutions. To further increase the search efficiency of mIG, these four strategies are selected adaptively (see Algorithm 6). In the inner-factory strategy, an operator Critical_inner_swap(π_f_critical) is proposed to optimize the sequence within the critical factory.
Except for Critical_cross_swap1(π) and vDestruction_Reconstruction(π), which are described in Section 3.4 and Section 3.5, respectively, the operators Critical_cross_swap2(π), Critical_min_swap(π), and Critical_inner_swap(π_f_critical) are as follows.
Critical_min_swap(π) is the interaction between the two factories with the maximal and minimal makespan. First, one job is selected from each of these two factories. Second, the two selected jobs are swapped and the result is evaluated. If the objective value of the critical factory is reduced, the current solution is updated.
Critical_cross_swap2(π) performs the Critical_cross_swap1(π) operation twice to explore the space more deeply and facilitate improving the quality of the solution.
Critical_inner_swap(π_f_critical): Select two jobs at random from the sequence of the critical factory and swap them to obtain a new solution. The swap is applied only when it produces a sequence with a smaller makespan; that is, if the objective value of the critical factory is reduced, the current solution is updated.
In the self-adaptive strategy, two lists are defined, i.e., List and BestList. List contains sixty search strategies that are randomly selected from the above four operators, and BestList is initialized as empty. Each value of the parameter R represents one of the four operators, R ∈ {1, 2, 3, 4}. During the iteration, if the solution is improved, the corresponding operator index is saved to BestList. Finally, List is updated with the strategies stored in BestList, where the parameter ω determines how many entries of BestList are used for the update (see lines 17–22). The details of iterative process II, including the self-adaptive strategy, are described in Algorithm 6, and a code sketch of the adaptive cross-factory stage is given after Algorithm 6.
Algorithm 6: iterative process II
Input: the current solution π; counters c, cnt, i
01: Find the critical factory f_critical, the secondary factory f_secondary, and the factory with minimal makespan f_min. Record their scheduling sequences π_f_critical, π_f_secondary, and π_f_min, respectively.
/* cross-factory */
02: for c = 1 to |List| do         %% |List| is the length of List
03:    R = randbetween(1, 4)
04:      switch (R)
05:         case 1:  π_temp = vDestruction_Reconstruction(π)    %% Section 3.5
06:                  break;
07:         case 2:  π_temp = Critical_min_swap(π)
08:                  break;
09:         case 3:  π_temp = Critical_cross_swap1(π)    %% Section 3.4
10:                  break;
11:         case 4:  π_temp = Critical_cross_swap2(π)
12:                  break;
13:      if C_max is improved
14:          π = π_temp
15:          Record the value of R in BestList
16: end for
17: for i = 1 to min{ω × |List|, |BestList|} do
18:    List[i] = BestList[i]
19: end for
20: for i = min{ω × |List|, |BestList|} + 1 to |List| do
21:    List[i] = randbetween(1, 4)
22: end for
/* inner-factory */
23: for cnt = 1 to J/2 do
24:    π_temp = Critical_inner_swap(π_f_critical)
25:    if C_max is improved
26:        π = π_temp
27:    end if
28: end for
Output: π
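A sketch of the adaptive cross-factory stage (lines 02–22 of Algorithm 6). Following the pseudocode, the operator index R is redrawn uniformly in each iteration, while List records the operator indices to be reused in later calls; the four operator implementations and the whole-solution makespan evaluator cmax are assumed to be provided:

import random

def adaptive_cross_factory(pi, operators, op_list, omega, cmax):
    # operators: dict mapping the indices 1..4 to the four cross-factory operators.
    best_list = []
    for _ in range(len(op_list)):                       # lines 02-16
        r = random.randint(1, 4)
        candidate = operators[r]([list(seq) for seq in pi])
        if cmax(candidate) < cmax(pi):                  # keep the move and remember the operator
            pi = candidate
            best_list.append(r)
    k = min(int(omega * len(op_list)), len(best_list))
    op_list[:k] = best_list[:k]                         # lines 17-19: reuse successful operators
    for i in range(k, len(op_list)):                    # lines 20-22: refill the remainder at random
        op_list[i] = random.randint(1, 4)
    return pi, op_list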

3.6. The Computational Complexity of mIG

In mIG, we suppose that there are $n$ jobs, $f$ factories, and $m$ machines, and each factory contains $n_f$ jobs. The computational complexity of the mIG algorithm comprises the initialization, the multi-neighborhood structures search, iterative process I, and iterative process II. First, the time complexity of Refresh_NEH_en is $O(mn + n\log_2 n + f + n_f(mn + 2n_f + mn_f)) \approx O(n^2)$. Second, the time complexity of the multi-neighborhood structures search is $O(n_f + n_f \times n_f \times m) \approx O(n^2)$. In addition, assume that the numbers of iterations of iterative process I and iterative process II are $k_1$ and $k_2$, respectively. For iterative process I, the complexity is $O(k_1 \times n^2 \times m \times n_f \times 2) \approx O(n^2)$, and for iterative process II it is $O(k_2 \times n^2 \times m \times n_f) \approx O(n^2)$. In summary, the complexity of the whole mIG is $O(n^2)$.

4. Numerical Experiment and Analysis

This section gives the experimental design and analysis to demonstrate the effectiveness of mIG. The experiments are run on a PC with an Intel(R) Core(TM) i7 CPU @ 2.90 GHz and 8 GB of RAM. For the proposed MILP model, the Gurobi 9.1.2 solver is adopted. All the compared algorithms are coded in C++ in the Visual Studio 2019 environment and run on the Release x64 platform. In the algorithm tests, to ensure fairness, the maximum elapsed CPU time is adopted as the stopping criterion. In addition, an algorithm has practical significance only when it can solve the problem in an acceptable time. Therefore, the termination condition is set as TimeLimit = 5 × J × M milliseconds in this article, where J and M indicate the total numbers of jobs and machines in the test instance, respectively. Each instance is run 5 times independently.

4.1. Test Data and Performance Metric

The experimental data used in this article are taken from [15]. This article tests 270 instances defined by the combination $F \times M \times J \times Factor$, where $F \in \{2, 3, 4, 5, 6, 7\}$ is the number of factories, $M \in \{5, 8, 10\}$ is the number of machines, $J \in \{100, 200, 300, 400, 500\}$ is the number of jobs, and $Factor \in \{25, 50, 100\}$ is the influence factor used to generate different instances of the same problem size. This yields $6 \times 3 \times 5 \times 3 = 270$ combinations. Processing times are uniformly distributed within $[1, 99)$. The setup time of each job relative to every other job is calculated as $(1 + rand()\%99) \times Factor/100$, where $rand()$ generates a random integer.
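As an illustration of this instance generator (not the original generator; the random streams, seeding, and rounding are assumptions), the data can be produced as:

import random

def generate_instance(J, M, factor, seed=0):
    rng = random.Random(seed)
    # Processing times uniformly drawn from [1, 99)
    p = {j: [rng.randint(1, 98) for _ in range(M)] for j in range(1, J + 1)}
    # Setup times (1 + rand() % 99) * Factor / 100, including the dummy job 0 for initial setups
    s = {a: {b: [(1 + rng.randrange(99)) * factor // 100 for _ in range(M)]
             for b in range(J + 1)} for a in range(J + 1)}
    return p, s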
We adopt the relative percentage increase (RPI) as an evaluation indicator. RPI estimates the difference between the makespan obtained by an algorithm and the optimal makespan found so far. The equation to calculate the RPI is shown below:
$RPI = \dfrac{M_i - M_{best}}{M_{best}} \times 100\%$
where $M_{best}$ is the minimal makespan found by all compared algorithms over 5 independent runs for a test instance, and $M_i$ refers to the average makespan obtained by the $i$th algorithm over 5 independent runs for that instance. The index $i$ ranges over ES [40], DABC [18], IGR [15], EA [14], and DDE [24]. Because there are 3 different instances for each problem size, the average RPI over the 3 instances, called ARPI, is also calculated. Obviously, the smaller the RPI or ARPI, the better the result obtained by the algorithm.
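For completeness, the metric can be computed as follows (a trivial sketch):

def rpi(avg_makespan, best_makespan):
    # Relative percentage increase of an algorithm's average makespan over the best makespan found.
    return (avg_makespan - best_makespan) / best_makespan * 100.0

def arpi(rpi_values):
    # Average RPI over the three instances of one problem size.
    return sum(rpi_values) / len(rpi_values)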

4.2. Correctness Verification of MILP

The correctness of the presented MILP model is verified using 8 small-scale instances. The model is implemented in Python with the Gurobi solver. For the exact solver, the maximum termination criterion is set to 3600 s [41,42], while the termination criterion of mIG is set to TimeLimit = 5 × J × M milliseconds. Each instance is run 5 times independently to reduce the randomness of mIG. Table 3 lists the respective makespan and running time of the MILP and mIG; the makespan represents the best value found within the termination time. In addition, F_J_M denotes the numbers of factories, jobs, and machines, respectively.
Table 3 shows that the optimal solution can be found by the MILP in little time when the instance size is small, i.e., for the 2_2_2, 2_5_2, 2_8_2, 2_10_2, and 2_12_2 instances. Within the termination time, the makespan values obtained by Gurobi are good for 6 of the 8 instances, suggesting that the MILP is correct and can find optimal solutions for small-scale instances. As the instance size grows further, i.e., for the 2_35_2 and 2_40_2 instances, the MILP cannot generate a good solution even when the run time is extended to 3600 s. However, mIG obtains the best solution in a shorter time for all instances. Thus, mIG has a better capacity than the MILP to solve large-scale and complicated instances of the DBFSP_SDST.

4.3. Parameter Calibration

In the proposed mIG, two key parameters should be calibrated: the threshold value of the two iterative processes, ρ, and the proportion of |BestList| to |List|, ω. To obtain an intuitive view of the sensitivity of the two parameters, the Taguchi method of design of experiments (DOE) is used to determine the best combination of parameter values. For each parameter, the four levels illustrated in Table 4 are considered, and the 16 (4 × 4 = 16) parameter combinations are listed in Table 5. To investigate the sensitivity of these two parameters fairly, three different instances (F_J_M) are randomly selected, i.e., 2_100_5, 4_300_5, and 7_500_10. For each instance, the 16 combinations are run independently five times, and the average RPI values are obtained (see Table 5). Factor-level trends for each parameter are shown in Figure 4, and Table 6 indicates the significance level of the two parameters. The largest influence on the algorithm is exerted by the parameter ρ, followed by ω.
From Table 4, Table 5 and Table 6 and Figure 4, the parameter ρ has the greatest influence on the experimental results. It directly affects the global and local search balance of the two iterative processes. As can be seen in Figure 4, when ρ = 0 , iterative process I is invoked completely; iterative process II is not involved. At this time, the value of ARPI (1.282) is higher, suggesting that the IGA with only the iterative process I strategy easily falls into a local optimum. However, when ρ = 0.1 , the average RPI (1.221) is better than that of ρ = 0.2 and ρ = 0.3 . This can further illustrate the validity of our proposed iterative process II to increase the diversity of solutions and avoid local optima.
The parameter ω determines how many strategies in BestList are used to update List. If the value is too small, few good search strategies in BestList are used to update List, which may affect the convergence of the algorithm. On the contrary, if the value is too large, the diversity of strategies in List may be reduced. Thus, the performance of mIG is tested with ω set to 0.6, 0.7, 0.8, and 0.9, respectively. Based on the experimental results in Table 5 and Figure 4, the value of ω is set to 0.7.

4.4. Evaluation of the Proposed Problem-Specific mVND Operator

In this section, the proposed mVND strategy is investigated to demonstrate its contribution. mIG_NV refers to mIG without mVND. All instances were tested in the same experimental environment, and each instance was run 5 times with the same termination time of TimeLimit = 5 × J × M milliseconds. ANOVA is used to evaluate the RPI values of all instances as experimental results. From the results shown in Figure 5, the RPI values yielded by mIG with mVND are lower than those of mIG_NV. This suggests that the proposed multi-neighborhood structures search based on variable neighborhood descent increases the diversity of mIG and provides more opportunities to generate potential solutions. The good performance of mVND is due to the designed cross-factory and inner-factory neighborhood search strategies, which provide advantages for exploring the solution globally.

4.5. Evaluation of mIG with Other Efficient Algorithms

This section compares the mIG algorithm with five intelligent optimization algorithms for distributed flowshop problems: ES [40] and DDE [24], which were used to solve the DBFSP, and DABC [18], IGR [15], and EA [14], which were used to solve the DPFSP. For fairness of comparison, all algorithms are carefully implemented according to the characteristics of the studied problem under the same termination condition, TimeLimit = 5 × J × M milliseconds. The mIG without the refresh accelerated calculation, called mIG0, is also compared. In Table 7, J_M represents the scale with J jobs and M machines. In addition, we calculated the percentage values using $(P_{Comparing} - P_{mIG}) / P_{Comparing} \times 100\%$, where $P_{Comparing}$ and $P_{mIG}$ refer to the Avg or ARPI values obtained by the comparison algorithm and mIG, respectively. The calculated percentages represent how much better mIG is than the other algorithms, and the data are marked in bold. For Avg, in the different-size instances with F = 2, the percentages by which mIG is superior to EA, DDE, DABC, IGR, ES, and mIG0 are 1.11%, 1.64%, 4.25%, 2.04%, and 1.10%, respectively. Similarly, for F = 3, 4, 5, 6, 7, the percentages are better than those of the six comparison algorithms. For ARPI, the percentages by which mIG is superior to the comparison algorithms EA, DDE, DABC, IGR, ES, and mIG0 are 66.67%, 76.47%, 86.37%, 80.56%, 70.98%, and 50.88%, respectively. It is obviously the case that when F = 3, 4, 5, 6, 7, the percentages of mIG are still better than those of the other algorithms.
According to Table 7, (1) for most instances, the average makespan and RPI values obtained by mIG0 are smaller than those of EA, DDE, DABC, IGR, and ES, regardless of whether the number of factories is 2, 3, 4, 5, 6, or 7. The results demonstrate that mIG0 is effective and able to generate good makespan values. The advantages of mIG0 can be attributed to the fact that the proposed strategies, i.e., Refresh_NEH_en, mVND, and the two iterative processes, are designed based on the distributed multi-factory character of the DBFSP. (2) For all instances, the average makespan and RPI values obtained by mIG are better than those of mIG0, EA, DDE, DABC, IGR, and ES, regardless of the number of factories. The results demonstrate that mIG with the refresh accelerated calculation has a low time complexity and more opportunities to search for potential solutions. Therefore, mIG shows superior performance compared with all the other algorithms. The main advantage of mIG relative to mIG0 can be attributed to the fact that the proposed refresh accelerated calculation based on job insertion speeds up the calculation of the objective and reduces the time complexity of the algorithm.

4.6. Evolutionary Curves and Interactions for the Compared Algorithms

This section further verifies the convergence of the algorithms on two instances of different scales, i.e., 100_6_10 and 400_7_10. The evolution curves of the mIG, ES, DABC, IGR, EA, and DDE algorithms are plotted in Figure 6. The termination times for the two scales are Timelimit = 10 × J × M and Timelimit = 50 × J × M milliseconds, respectively. Different colors and symbols represent the convergence curves of the six algorithms. The abscissa is the execution time of the algorithm (in milliseconds), and the ordinate is the makespan value.
From Figure 6a, we can observe that ES and DDE have the fastest convergence speed, but their solutions stop improving appreciably as time increases. The evolutionary process of DABC and EA lasts a long time, and the final results obtained are mediocre. The convergence speed of IGR is slightly faster than that of EA, and its solution is only better than those of ES and DDE. Obviously, mIG has good convergence behavior, keeps improving as time increases, and is superior to the other algorithms. Similarly, for the large-scale instance, mIG still has the best convergence, as shown in Figure 6b. The reason why the convergence curve of mIG is lower than those of the compared algorithms may be that the proposed strategies, i.e., Refresh_NEH_en, mVND, and the two iterative processes, generate excellent solutions and effectively improve convergence.
Although the above experiments have shown the superiority and competitiveness of the proposed mIG, it is necessary to verify whether its superiority is statistically significant. In view of this, a multifactor ANOVA is performed using the different algorithms and the numbers of factories, jobs, and machines as influencing factors. From Figure 7a, the overall RPI values of all the compared algorithms are significantly different, and the proposed mIG algorithm remarkably outperforms the other algorithms, followed by mIG0, EA and ES, DABC, DDE, and IGR. Figure 7c,d shows that the RPI values obtained by mIG are better than those of the compared algorithms and that mIG remains stable as the numbers of factories, jobs, and machines increase. The ANOVA results plotted in Figure 7 show the significant difference between mIG and the other algorithms.

4.7. Friedman Tests

The Friedman test can verify whether multiple overall distributions are significantly different. Its null hypothesis is that all the algorithms involved in the comparison are not significantly different from each other. When the probability p-value is smaller than the given significance level of 0.05, the null hypothesis is rejected, and the algorithms are considered to be significantly different. Conversely, the null hypothesis cannot be rejected, and it is concluded that there are no significant differences between the compared algorithms.
Table 8 gives the values of rank (Ranks), the number of test instances (CN), mean of RPI, standard deviation (Std. Deviation), minimum value (Min), and maximum value (Max) of makespan, respectively. The p-value obtained by the Friedman test is equal to 0.000, and its confidence level α = 0.050. The values of Ranks, Mean, Std. Deviation, Min, and Max obtained by mIG are 1.04, 0.578, 0.2708, 0.11, and 1.53, and they are the smallest among all the compared algorithms. The proposed mIG performs very well in solving the DBFSP_SDST problem in general.

5. Conclusions and Future Research

There is very little literature about the DBFSP_SDST. In this paper, a MILP model is first constructed for the DBFSP_SDST, and the Gurobi solver is used to verify its correctness. Then, an efficient mIG algorithm is designed to optimize the formulated model. For the proposed mIG algorithm, this article makes the following modifications.
1.
A refresh acceleration calculation is proposed to reduce the complexity of the algorithm from $O(mn^2)$ to $O(mn)$.
2.
A rapid evaluation mechanism, Refresh_NEH_en, is designed to reduce the computational complexity of the initialization process.
3.
Iterative process I and II strategies are designed, and each iterative process is adopted by a certain probability to enhance the diversity of solutions from a global perspective.
4.
According to characteristics of the distributed pattern, cross-factory and inner-factory strategies are presented to allocate the appropriate number and sequence of jobs for each factory, which balance the exploration and exploitation of the proposed mIG algorithm.
5.
The proposed mIG algorithm obtains the best solutions over the 270 test instances when compared with five state-of-the-art algorithms. The average makespan and RPI values of mIG are 1.93% and 78.35% better than those of the five comparison algorithms on average, respectively. The comprehensive results prove that the proposed mIG offers both high solution quality and efficiency, making it more suitable for solving the DBFSP_SDST.
For future research, many issues of the DBFSP_SDST still need to be addressed. First, multiple objectives should be considered, e.g., makespan, energy consumption, total flowtime, tardiness [43], and earliness. Second, from a practical production perspective, many uncertain factors should be considered, such as machine breakdowns, uncertain processing times, erroneous operations, and changes in due dates. Last but not least, problem-specific operators or strategies should be designed according to the constraints and characteristics of the problems.

Author Contributions

C.Z.: Conceptualization, Methodology, Data curation, Software, Validation, Writing—original draft. Y.H.: Conceptualization, Methodology, Software, Validation, Writing—original draft. Y.W.: Conceptualization, Methodology, Supervision, Writing—original draft. J.L.: Conceptualization, Methodology, Visualization, Investigation. K.G.: Conceptualization, Methodology, Writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 61973203, 62106073, 62173216, and 62173356. We are also grateful for the support of the Guangyue Young Scholar Innovation Team of Liaocheng University under grant number LCUGYTD2022-03.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, L.; Shen, W. Process Planning and Scheduling for Distributed Manufacturing (Springer Series in Advanced Manufacturing); Springer: Berlin/Heidelberg, Germany, 2007.
2. Koen, P.A. The PDMA Handbook of New Product Development; Wiley: Hoboken, NJ, USA, 2005.
3. Naderi, B.; Ruiz, R. The Distributed Permutation Flowshop Scheduling Problem. Comput. Oper. Res. 2010, 37, 754–768.
4. Liu, H.; Gao, L. A Discrete Electromagnetism-Like Mechanism Algorithm for Solving Distributed Permutation Flowshop Scheduling Problem. In Proceedings of the 2010 International Conference on Manufacturing Automation, Hong Kong, China, 13–15 December 2010; pp. 156–163.
5. Gao, J.; Chen, R.; Deng, W.; Liu, Y. Solving Multi-Factory Flowshop Problems with a Novel Variable Neighbourhood Descent Algorithm. J. Comput. Inf. Syst. 2012, 8, 2025–2032.
6. Gao, J.; Chen, R.; Deng, W. An Efficient Tabu Search Algorithm for the Distributed Permutation Flowshop Scheduling Problem. Int. J. Prod. Res. 2012, 51, 641–651.
7. Wang, S.; Wang, L.; Liu, M.; Xu, Y. An Effective Estimation of Distribution Algorithm for Solving the Distributed Permutation Flow-Shop Scheduling Problem. Int. J. Prod. Econ. 2013, 145, 387–396.
8. Naderi, B.; Ruiz, R. A Scatter Search Algorithm for the Distributed Permutation Flowshop Scheduling Problem. Eur. J. Oper. Res. 2014, 239, 323–334.
9. Fernandez-Viagas, V.; Framinan, J.M. A Bounded-Search Iterated Greedy Algorithm for the Distributed Permutation Flowshop Scheduling Problem. Int. J. Prod. Res. 2015, 53, 1111–1123.
10. Komaki, M.; Malakooti, B. General Variable Neighborhood Search Algorithm to Minimize Makespan of the Distributed No-Wait Flow Shop Scheduling Problem. Prod. Eng. Res. Devel. 2017, 11, 315–329.
11. Ruiz, R.; Pan, Q.-K.; Naderi, B. Iterated Greedy Methods for the Distributed Permutation Flowshop Scheduling Problem. Omega 2019, 83, 213–222.
12. Pan, Q.-K.; Gao, L.; Wang, L. An Effective Cooperative Co-Evolutionary Algorithm for Distributed Flowshop Group Scheduling Problems. IEEE Trans. Cybern. 2022, 52, 5999–6012.
13. Li, W.; Chen, X.; Li, J.; Sang, H.; Han, Y.; Du, S. An Improved Iterated Greedy Algorithm for Distributed Robotic Flowshop Scheduling with Order Constraints. Comput. Ind. Eng. 2022, 164, 107907.
14. Fernandez-Viagas, V.; Perez-Gonzalez, P.; Framinan, J.M. The Distributed Permutation Flow Shop to Minimise the Total Flowtime. Comput. Ind. Eng. 2018, 118, 464–477.
15. Huang, J.-P.; Pan, Q.-K.; Gao, L. An Effective Iterated Greedy Method for the Distributed Permutation Flowshop Scheduling Problem with Sequence-Dependent Setup Times. Swarm Evol. Comput. 2020, 59, 100742.
16. Han, X.; Han, Y.; Chen, Q.; Li, J.; Sang, H.; Liu, Y.; Pan, Q.; Nojima, Y. Distributed Flow Shop Scheduling with Sequence-Dependent Setup Times Using an Improved Iterated Greedy Algorithm. Complex Syst. Model. Simul. 2021, 1, 198–217.
17. Li, Y.; Li, X.; Gao, L.; Zhang, B.; Pan, Q.-K.; Tasgetiren, M.F.; Meng, L. A Discrete Artificial Bee Colony Algorithm for Distributed Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup Times. Int. J. Prod. Res. 2021, 59, 3880–3899.
18. Huang, J.-P.; Pan, Q.-K.; Miao, Z.-H.; Gao, L. Effective Constructive Heuristics and Discrete Bee Colony Optimization for Distributed Flowshop with Setup Times. Eng. Appl. Artif. Intell. 2021, 97, 104016.
19. Karabulut, K.; Öztop, H.; Kizilay, D.; Tasgetiren, M.F.; Kandiller, L. An Evolution Strategy Approach for the Distributed Permutation Flowshop Scheduling Problem with Sequence-Dependent Setup Times. Comput. Oper. Res. 2022, 142, 105733.
20. Song, H.-B.; Lin, J. A Genetic Programming Hyper-Heuristic for the Distributed Assembly Permutation Flow-Shop Scheduling Problem with Sequence Dependent Setup Times. Swarm Evol. Comput. 2021, 60, 100807.
21. Companys, R.; Ribas, I. Efficient Constructive Procedures for the Distributed Blocking Flow Shop Scheduling Problem. In Proceedings of the 2015 International Conference on Industrial Engineering and Systems Management (IESM), Seville, Spain, 21–23 October 2015; pp. 92–98.
22. Zhang, G.; Liu, B.; Wang, L.; Yu, D.; Xing, K. Distributed Co-Evolutionary Memetic Algorithm for Distributed Hybrid Differentiation Flowshop Scheduling Problem. IEEE Trans. Evol. Comput. 2022, 26, 1043–1057.
23. Ying, K.-C.; Lin, S.-W. Minimizing Makespan in Distributed Blocking Flowshops Using Hybrid Iterated Greedy Algorithms. IEEE Access 2017, 5, 15694–15705.
24. Zhang, G.; Xing, K.; Cao, F. Discrete Differential Evolution Algorithm for Distributed Blocking Flowshop Scheduling with Makespan Criterion. Eng. Appl. Artif. Intell. 2018, 76, 96–107.
25. Shao, Z.; Pi, D.; Shao, W. Hybrid Enhanced Discrete Fruit Fly Optimization Algorithm for Scheduling Blocking Flow-Shop in Distributed Environment. Expert Syst. Appl. 2020, 145, 113147.
26. Zhao, F.; Zhao, L.; Wang, L.; Song, H. An Ensemble Discrete Differential Evolution for the Distributed Blocking Flowshop Scheduling with Minimizing Makespan Criterion. Expert Syst. Appl. 2020, 160, 113678.
27. Han, X.; Han, Y.; Zhang, B.; Qin, H.; Li, J.; Liu, Y.; Gong, D. An Effective Iterative Greedy Algorithm for Distributed Blocking Flowshop Scheduling Problem with Balanced Energy Costs Criterion. Appl. Soft Comput. 2022, 129, 109502.
28. Ruiz, R.; Stützle, T. A Simple and Effective Iterated Greedy Algorithm for the Permutation Flowshop Scheduling Problem. Eur. J. Oper. Res. 2007, 177, 2033–2049.
29. Lin, S.-W.; Ying, K.-C.; Huang, C.-Y. Minimising Makespan in Distributed Permutation Flowshops Using a Modified Iterated Greedy Algorithm. Int. J. Prod. Res. 2013, 51, 5029–5038.
30. Pan, Q.-K.; Ruiz, R. An Effective Iterated Greedy Algorithm for the Mixed No-Idle Permutation Flowshop Scheduling Problem. Omega 2014, 44, 41–50.
31. Ying, K.-C.; Lin, S.-W.; Cheng, C.-Y.; He, C.-D. Iterated Reference Greedy Algorithm for Solving Distributed No-Idle Permutation Flowshop Scheduling Problems. Comput. Ind. Eng. 2017, 110, 413–423.
32. Huang, Y.-Y.; Pan, Q.-K.; Huang, J.-P.; Suganthan, P.; Gao, L. An Improved Iterated Greedy Algorithm for the Distributed Assembly Permutation Flowshop Scheduling Problem. Comput. Ind. Eng. 2021, 152, 107021.
33. Mao, J.; Pan, Q.; Miao, Z.; Gao, L. An Effective Multi-Start Iterated Greedy Algorithm to Minimize Makespan for the Distributed Permutation Flowshop Scheduling Problem with Preventive Maintenance. Expert Syst. Appl. 2021, 169, 114495.
34. Ribas, I.; Companys, R.; Tort-Martorell, X. An Iterated Greedy Algorithm for Solving the Total Tardiness Parallel Blocking Flow Shop Scheduling Problem. Expert Syst. Appl. 2019, 121, 347–361.
35. Qin, H.; Han, Y.; Chen, Q.; Li, J.; Sang, H. A Double Level Mutation Iterated Greedy Algorithm for Blocking Hybrid Flow Shop Scheduling. Control Decis. 2022, 37, 2323–2332.
36. Chen, S.; Pan, Q.-K.; Gao, L. Production Scheduling for Blocking Flowshop in Distributed Environment Using Effective Heuristics and Iterated Greedy Algorithm. Robot. Comput. Integr. Manuf. 2021, 71, 102155.
37. Öztop, H.; Fatih Tasgetiren, M.; Eliiyi, D.T.; Pan, Q.-K. Metaheuristic Algorithms for the Hybrid Flowshop Scheduling Problem. Comput. Oper. Res. 2019, 111, 177–196.
38. Taillard, E. Some Efficient Heuristic Methods for the Flow Shop Sequencing Problem. Eur. J. Oper. Res. 1990, 47, 65–74.
39. Missaoui, A.; Ruiz, R. A Parameter-Less Iterated Greedy Method for the Hybrid Flowshop Scheduling Problem with Setup Times and Due Date Windows. Eur. J. Oper. Res. 2022, 303, 99–113.
40. Karabulut, K.; Kizilay, D.; Tasgetiren, M.F.; Gao, L.; Kandiller, L. An Evolution Strategy Approach for the Distributed Blocking Flowshop Scheduling Problem. Comput. Ind. Eng. 2022, 163, 107832.
41. Meng, L.; Zhang, C.; Ren, Y.; Zhang, B.; Lv, C. Mixed-Integer Linear Programming and Constraint Programming Formulations for Solving Distributed Flexible Job Shop Scheduling Problem. Comput. Ind. Eng. 2020, 142, 106347.
42. Meng, L.; Gao, K.; Ren, Y.; Zhang, B.; Sang, H.; Chaoyong, Z. Novel MILP and CP Models for Distributed Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup Times. Swarm Evol. Comput. 2022, 71, 101058.
43. Zhao, F.; Di, S.; Wang, L. A Hyperheuristic With Q-Learning for the Multiobjective Energy-Efficient Distributed Blocking Flow Shop Scheduling Problem. IEEE Trans. Cybern. 2022, 1–14.
Figure 1. The Gantt chart of the example instance.
Figure 2. Rapid evaluation criteria. (a) Calculate the time $jd_{[j],m}$. (b) Calculate the time $je_{[j],m}$. (c) Insert job $\tau_{j_t}$ into position 2. (d) Recalculate $jd_{[j],m}$ of the jobs after position 2. (e) Recalculate $je_{[j],m}$ of the jobs before position 2 and calculate $je_{j_t,m}$.
Figure 3. Flow chart of the mIG algorithm.
Figure 4. The trend of the parameter level.
Figure 5. Confidence interval for mIG and mIG_NV.
Figure 6. The evolutionary curves of compared algorithms. (a) 100_6_10. (b) 400_7_10.
Figure 7. Interactions for ES, DABC, IGR, EA, DDE, mIG0, and mIG. (a) Means of all the compared algorithms. (b–d) Interactions of the numbers of factories, jobs, and machines with the compared algorithms, respectively.
Table 1. Processing times $p_{j,m}$ of jobs.
p j , m J 1 J 2 J 3 J 4 J 5
M 1 11311129
M 2 25313517
Table 2. The SDSTs $s_{j,j',1}$ and $s_{j,j',2}$ of jobs.
s j , j , 1 J 1 J 2 J 3 J 4 J 5 s j , j , 2 J 1 J 2 J 3 J 4 J 5
7146215 241221210
J 1 -11161020 J 1 -1318320
J 2 12-12923 J 2 8-20191
J 3 05-2316 J 3 163-1823
J 4 4311-0 J 4 202215-17
J 5 152362- J 5 91375-
Table 3. Result for the MILP model.

F_J_M     MILP Makespan   MILP Time (s)   mIG Makespan   mIG Time (s)
2_2_2     115             0.00            115            0.02
2_5_2     135             0.02            135            0.05
2_8_2     198             0.14            198            0.08
2_10_2    214             2.43            214            0.10
2_12_2    243             23.04           243            0.12
2_20_2    424             3600            424            0.20
2_35_2    763             3600            742            0.35
2_40_2    879             3600            844            0.40
Best values are indicated in bold.
Table 4. Parameter level factor.

Parameters   Level 1   Level 2   Level 3   Level 4
ρ            0         0.1       0.2       0.3
ω            0.6       0.7       0.8       0.9
Table 5. Orthogonal array and ARPI value.

Experiment Number   ρ     ω     Response (ARPI)
1                   0     0.6   1.27
2                   0     0.7   1.33
3                   0     0.8   1.23
4                   0     0.9   1.30
5                   0.1   0.6   1.39
6                   0.1   0.7   1.08
7                   0.1   0.8   1.15
8                   0.1   0.9   1.26
9                   0.2   0.6   1.27
10                  0.2   0.7   1.34
11                  0.2   0.8   1.39
12                  0.2   0.9   1.34
13                  0.3   0.6   1.30
14                  0.3   0.7   1.25
15                  0.3   0.8   1.28
16                  0.3   0.9   1.31
Table 6. The average RPI response values.

Level   ρ       ω
1       1.282   1.310
2       1.221   1.249
3       1.335   1.260
4       1.285   1.304
Delta   0.114   0.061
Rank    1       2
Best values are indicated in bold.
Table 7. Average makespan and RPI values of ES, DABC, IGR, EA, DDE, mIG0, and mIG.

Factory   J_M   Time (s)   EA (Avg, ARPI)   DDE (Avg, ARPI)   DABC (Avg, ARPI)   IGR (Avg, ARPI)   ES (Avg, ARPI)   mIG0 (Avg, ARPI)   mIG (Avg, ARPI)
100_52.544641.8045062.6344782.0545724.2045373.4444972.4444160.62
100_8450302.0151163.7850602.6251384.2051133.6850792.9049760.90
100_10551701.4552412.9351841.7752683.4452553.1252112.2651390.83
200_5587751.8388392.5788332.5489073.3487972.1287101.1386880.75
200_8897261.3498442.5497992.1198913.0597611.6997011.0596470.47
200_101010,0571.0610,1341.8510,0991.5010,1932.4410,1131.5910,0701.1410,0030.45
300_57.513,1061.9913,1822.5913,3884.1413,2212.8713,0541.5912,9390.6612,9200.50
F = 2 300_81214,3731.5714,3881.6814,5953.0914,4532.1514,3521.4014,2670.8114,2450.61
300_101514,9291.8015,0052.3615,1543.2815,0702.7914,9051.5814,7810.7714,7710.68
400_51017,1951.6017,3002.1518,1356.9517,3712.5817,2511.8417,0730.8117,0410.61
400_81618,8641.7718,9352.1519,4835.0318,9892.4518,7661.2018,6650.6718,6080.35
400_102019,7711.6819,8682.1520,3004.3019,9532.5719,6971.2619,6020.7419,5250.35
500_512.521,2942.1221,3742.5223,00610.1321,4352.8221,2932.0920,9950.6620,9210.28
500_82023,5611.7323,6081.9624,6756.5023,6622.1923,4391.1823,2910.5323,2700.44
500_102524,5591.4224,6601.8425,5905.6124,7332.1224,5051.1424,3490.5224,3480.51
Mean-14,0581.6814,1332.3814,5184.1114,1902.8814,0561.9313,9491.1413,9010.56
Percentage-1.11%66.67%1.64%76.47%4.25%86.37%2.04%80.56%1.10%70.98%0.03%50.88%--
100_52.530723.2431224.8530602.8231465.6931174.6830743.2330010.88
100_8434252.1934643.3834192.0834884.0834643.4034252.1633640.37
100_10536302.5436503.1436192.2636753.8336753.8136282.4335620.63
200_5558522.2158742.6558542.1859253.5058752.5558281.7257660.60
200_8865392.4466153.6265232.1466333.9165482.5865061.8564280.59
200_101068831.9169683.1868791.8369913.5169272.5268811.8467930.49
300_57.587932.1988502.8788763.0488753.1487781.9587311.4086710.66
F = 3 300_81297341.6697591.9797762.0797932.2796971.2696580.8496250.45
300_101510,0701.5410,1222.0410,1021.8310,1572.3910,0651.4410,0261.0499710.47
400_51011,7051.9311,7492.3411,9413.8411,8072.8411,6711.5611,5960.9211,5290.30
400_81612,8121.9112,8322.0712,9853.2512,8842.4712,7611.4612,7001.0012,6420.51
400_102013,2631.6913,3132.1213,4082.7813,3532.4013,2421.5013,1801.0313,1040.40
500_512.514,4432.1114,4672.2714,9835.8114,5182.6214,3531.4814,2841.0014,2310.53
500_82015,8792.1315,8982.3116,2714.5415,9532.6415,7951.5315,7251.0515,6490.53
500_102516,4821.5216,5411.9416,8183.4916,5882.2116,4421.1716,3910.8916,3040.31
Mean-95052.0895482.7296342.9395863.1794942.1994421.4993760.51
Percentage-1.36%75.48%1.80%81.25%2.68%82.59%2.19%83.91%1.24%76.71%0.70%65.77%--
100_52.523533.3823874.9823252.1423975.3823874.8423372.6723011.09
100_8426743.4627024.5726532.6126964.3227034.5626482.4726070.87
100_10528243.0428514.1128092.5028604.3528504.0127972.0527580.66
200_5544732.6244732.6044542.3245143.5244862.8744622.2643890.57
200_8850402.5150923.5150302.2650933.5550542.7650212.0849530.66
200_101052531.8652902.5252511.8753323.3452802.4252441.7351830.53
300_57.566912.5867132.9166972.6167323.1766872.4766511.9065740.64
F = 4 300_81273731.6274072.1673911.7774292.4473781.6573561.2872990.44
300_101577031.6977522.3577101.7877732.6277271.9876901.4976150.45
400_51087552.1387692.2988352.9887942.5787071.5786721.1286090.33
400_81696812.0897202.4997302.5697602.9096421.6296351.5295560.65
400_102010,1621.8410,2012.2410,2102.2710,2272.5110,1421.5810,1331.5210,0300.44
500_512.510,7972.0110,8352.4211,0254.0410,8662.6910,7561.5810,7151.1410,6290.30
500_82011,9951.4712,0411.8512,1782.9112,0762.1411,9741.1911,9571.0911,8670.26
500_102512,4771.5912,4891.7112,5982.5512,5272.0112,4271.1612,4040.9512,3390.40
Mean-72172.2672482.8572602.4872723.1772132.4271821.6871140.55
Percentage-1.43%75.66%1.85%80.70%2.01%77.82%2.17%82.65%1.37%77.27%0.95%67.26%--
100_52.519284.6119455.5019023.1319465.6119344.9518942.7118621.00
100_8421823.7721894.1021522.3522004.6221984.5221502.2221251.07
100_10523293.4523574.6723052.2823594.7323564.6423032.2422700.80
200_5536503.0936563.3036302.5036753.7936462.9936172.0535680.69
200_8840502.7640703.3340242.1140733.3540572.9640262.1439600.49
200_101042892.4843133.0842631.8743293.4243112.9842722.0942070.46
300_57.552972.3253022.4353092.4953232.8153042.3952691.7052070.45
F = 5 300_81259492.6859663.0059132.0659873.3359382.4759071.9058260.45
300_101562392.1262933.0462291.9063113.3662442.1962251.8561320.29
400_51070305.9770592.4170802.6770802.7170071.6769911.3869280.41
400_81678072.0678262.3178162.1778452.5677781.6877571.4076870.42
400_102081531.5781932.0981881.9782172.3881411.3781301.2280560.26
500_512.587812.2787852.3288953.5488142.6387381.7986891.2186360.48
500_82096691.7297142.2797432.4297352.4896521.4896431.3895550.41
500_102510,1391.9110,1832.3710,1562.0610,1982.5310,1101.6110,0841.3610,0080.48
Mean-58332.8558573.0858402.3758733.3558282.6557971.7957350.54
Percentage-1.68%81.05%2.08%82.47%1.80%54.43%2.35%83.88%1.60%79.62%1.07%69.83%--
100_52.516325.1416375.4115872.2616365.4216325.1115932.6415640.75
100_8418513.8418513.8718111.6118584.2118614.3818182.0117950.75
100_10520364.3220535.2219972.3420474.9020484.9320052.7519701.01
200_5532233.2330813.4630492.3230913.8430723.2330462.2129980.62
200_8834343.6034564.2434012.5834774.8734473.9734032.5533521.06
200_101036382.9236593.5835981.7936703.8936503.2936092.1035530.51
300_57.544582.9544733.2844462.6244893.6544422.5644272.1843630.63
F = 6 300_81250332.4450512.8950252.2750633.1150332.4350121.9449370.45
300_101552422.0352672.4652331.8352902.9252482.1252281.6951660.52
400_51059082.5259282.8959292.7659393.1058922.1958551.5758110.70
400_81666002.1966152.4265751.7866362.7465801.8465671.6564980.49
400_102069172.2169452.6869152.1869532.8069042.0068731.5568090.56
500_512.573002.1973022.2173933.3673262.5372912.0172601.5371980.59
500_82081441.9481552.0981842.3781722.3181251.6880961.2980400.50
500_102584872.0585192.4885022.1985262.5584441.5184211.2583530.37
Mean-49272.9049333.2849102.2849453.5249112.8848811.9348270.63
Percentage-2.02%78.28%2.15%80.79%1.69%72.37%2.39%82.10%1.71%78.13%1.10%67.36%--
100_52.514064.2814094.4413812.3914134.8014185.1713751.9813590.83
100_8416304.5016425.2815912.0316364.9116314.6315942.1915720.82
100_10517813.8917954.6917431.7117854.1017904.4317461.9017280.89
200_5526703.5026723.6526402.2626854.0826553.0226392.2926010.81
200_8830013.2930123.7229722.2730203.9930153.7629722.2429240.64
200_101032012.9332123.3331531.4232083.1732063.0831681.8431260.54
300_57.538682.6438802.9438612.4338993.4538772.8838421.9437890.50
F = 7 300_81243292.7843403.0243152.3543523.3143152.4042941.8942320.41
300_101545732.4345922.9345521.9346033.1345592.1545401.6744830.38
400_51051202.6551312.8851352.8251362.9950912.0550721.6450240.59
400_81657252.4357452.8057162.1957532.9457062.0756841.6756150.36
400_102059702.2459822.4459632.1059902.5759592.0559301.5358720.50
500_512.563232.3163592.8363622.8363683.0062851.6962731.4662130.39
500_82069861.8870052.1770092.1570292.4969651.5369511.3168880.34
500_102573691.9074112.5373822.0774182.6173441.5373221.2372640.37
Mean-42632.9142793.3142522.2042863.4442542.8342271.7941790.56
Percentage-1.97%80.76%2.34%83.08%1.72%74.55%2.50%83.72%1.76%80.21%1.14%68.72%--
Best values are indicated in bold.
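To aid interpretation of the Avg and ARPI columns in Table 7, the relative percentage increase (RPI) is stated here in its conventional form for makespan comparison; this is an assumption for the reader's convenience rather than the authors' exact formula:

$$\mathrm{RPI} = \frac{C_{\max}^{\,alg} - C_{\max}^{\,best}}{C_{\max}^{\,best}} \times 100,$$

where $C_{\max}^{alg}$ is the makespan obtained by a given algorithm on an instance, $C_{\max}^{best}$ is the best makespan found for that instance among all compared algorithms, and ARPI averages the RPI over the instances grouped in each row.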
Table 8. Friedman test results (confidence level α = 0.050).

Algorithms   Ranks   N     Mean    Std. Deviation   Min    Max
EA           3.96    270   2.934   1.1850           0.49   6.39
DDE          5.36    270   3.469   1.3126           0.74   7.38
DABC         4.49    270   3.254   1.2517           1.13   11.29
IGR          6.49    270   3.789   1.2246           1.18   7.21
ES           4.24    270   3.014   1.3478           0.61   6.93
mIG0         2.41    270   2.160   0.8912           0.28   4.93
mIG          1.04    270   0.578   0.2708           0.11   1.53
p-value      0.000
Best values are indicated in bold.
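As a minimal illustrative sketch of how rank-based results such as those in Table 8 can be obtained (not necessarily the procedure used by the authors), a Friedman test over per-instance RPI values can be run with SciPy; the matrix layout, the placeholder data, and the use of SciPy are assumptions for illustration.

```python
# Friedman test over a 270 x 7 matrix of per-instance RPI values, one column
# per algorithm. rpi_matrix here is random placeholder data; in practice it
# would hold the RPI of each algorithm on each of the 270 instances.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

rng = np.random.default_rng(0)
rpi_matrix = rng.random((270, 7))  # placeholder: replace with real RPI values

# Friedman test: each argument is one algorithm's RPI values over all instances.
stat, p_value = friedmanchisquare(*[rpi_matrix[:, k] for k in range(7)])

# Average rank per algorithm (rank 1 = lowest RPI on an instance), comparable
# in spirit to the "Ranks" column of Table 8.
avg_ranks = rankdata(rpi_matrix, axis=1).mean(axis=0)
print(p_value, avg_ranks)
```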
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
