Article

A Hybrid Search Using Genetic Algorithms and Random-Restart Hill-Climbing for Flexible Job Shop Scheduling Instances with High Flexibility

by Nayeli Jazmin Escamilla-Serna, Juan Carlos Seck-Tuoh-Mora *,†, Joselito Medina-Marin, Irving Barragan-Vite and José Ramón Corona-Armenta

Área Académica de Ingeniería, Instituto de Ciencias Básicas e Ingeniería, Universidad Autónoma del Estado de Hidalgo, Carr. Pachuca-Tulancingo Km. 4.5, Pachuca 42184, Hidalgo, Mexico

* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2022, 12(16), 8050; https://doi.org/10.3390/app12168050
Submission received: 9 June 2022 / Revised: 22 July 2022 / Accepted: 7 August 2022 / Published: 11 August 2022
(This article belongs to the Topic Applied Metaheuristic Computing: 2nd Volume)

Abstract:
This work presents a novel hybrid algorithm called GA-RRHC, based on genetic algorithms (GAs) and random-restart hill-climbing (RRHC), for the optimization of the flexible job shop scheduling problem (FJSSP) with high flexibility (where every operation can be completed by a large number of machines). In particular, different GA crossover and simple mutation operators are used with a cellular automata (CA)-inspired neighborhood to perform the global search. This method is refined with a local search based on RRHC, making the computational implementation easy. The novelty lies in applying the CA-type neighborhood and hybridizing the two techniques in the GA-RRHC, which is simple to understand and implement. The GA-RRHC is tested on four benchmark sets widely used in the literature, and its results are compared with six recent algorithms using the relative percentage deviation (RPD) and Friedman tests. The experiments demonstrate that the GA-RRHC is a competitive method compared with other recent algorithms for instances of the FJSSP with high flexibility. The GA-RRHC was implemented in Matlab and is available on Github.

1. Introduction

Production planning is a priority factor in modern manufacturing systems [1], where a critical aspect is the scheduling of operations and resource allocation [2]. Each industry must find the best possible schedule for its production jobs in order to manufacture efficiently and on time, or to launch new products that meet market demands satisfactorily [3].
This paper analyses the topic of task scheduling with flexible machines, also known as the flexible job shop scheduling problem (FJSSP), especially concerning instances with high flexibility, where many machines can process the same operation. The FJSSP reflects the current problem faced by manufacturing industries in the allocation of limited but highly flexible resources to perform tasks in the shortest possible time [4].
In the FJSSP, the goal is to find the most appropriate job sequencing, with each job involving operations under precedence restrictions, in an environment where several machines can perform the same operation, quite possibly with different processing times. The FJSSP is thus an NP-hard combinatorial problem: the number of possible solutions grows exponentially with the number of jobs [5]. It is therefore impossible to enumerate all solutions in a suitable time to find the optimal schedule that minimizes the processing time of all operations [6].
Many mathematical methods have been proposed to resolve scheduling problems [7]; some works have approached the FJSSP using the branch and bound method [8], linear programming [9], or Lagrangian relaxation [10]. These methods ensure global convergence and have worked very well in solving small instances, but their computational time makes them impractical for problems with dozens of scheduling operations [6]. That is why many researchers have chosen to move toward hybrid heuristic and metaheuristic techniques.
Over the years, distinct metaheuristic methods have been applied to solve combinatorial problems, including evolutionary algorithms using the survival of the fittest; an example is genetic algorithms (GAs) [11,12]. Other metaheuristics are ant colony optimization (ACO) [13], particle swarm optimization (PSO) [14], tabu search (TS) [15], etc. These algorithms have been adapted to different programming problems, finding good solutions with low computational time.
The FJSSP is an extension of the classic job shop scheduling problem [16]. In the original problem, the optimal allocation of operations is sought on a set of fixed machines. In the FJSSP, several feasible machines can perform the same operation, often with distinct processing times.
Two problems must be considered to solve an instance of the FJSSP: the order of operations and the assignment of machines. In [17], the FJSSP is approached heuristically, applying dispatch rules and tabu search, and 15 instances with different numbers of tasks and machines are introduced. Since that work, different heuristics and metaheuristics, often in hybrid combinations, have been investigated to solve this problem.
An algorithm using TS and a simplified computation of the makespan are explained in [18], highlighting the importance of critical operations for local search. In [19], a hybrid algorithm (HA) using GA and TS is proposed to minimize the makespan. Its model maintains a good balance between exploitation and exploration. In [20], a multi-objective problem (MO-FJSP) is addressed applying the non-dominated sorting genetic algorithm II (NSGA-II) together with a bee evolutionary guide (BEG-NSGA-II) to minimize the makespan, the workload of the busiest machine, and the total workload.
In [21], a combination of a genetic algorithm and a variable neighborhood descent method is presented to optimize a multi-objective version that takes into account the makespan, the total workload, and the workload of the busiest machine, using two methods of local search. Another hybrid algorithm using PSO and TS is presented in [22], again for a multi-objective problem. The use of different variable neighborhoods (VNs) to refine the local search is proposed in [23] to minimize the makespan. Another algorithm combining TS and VNs is described in [24] for a different multi-target version of the FJSSP. In [25], the FJSSP is studied considering maintenance costs, where a hybrid genetic algorithm (HGA) uses a local search based on simulated annealing (SA). The combination of the Harmony Search (HS) algorithm with other heuristic and random techniques is analyzed in [26] to handle two discrete vectors, one for the sequence of operations and the other for machine allocation, to minimize the makespan. Another hybrid algorithm between artificial bees and TS is presented in [27], where the quality and diversity of solutions is rated with three metrics.
Task rescheduling is investigated in [28], for which another hybrid technique was proposed using a two-stage artificial bee colony algorithm with three rescheduling strategies. Another hybrid method is introduced in [29] by applying an artificial bee colony algorithm (ABC) and a TS, introducing new reprogramming processes. In [30], a hybrid algorithm is implemented using PSO with random-restart hill-climbing (RRHC), obtaining a competitive and straightforward algorithm compared with other techniques. Another hybrid algorithm based on GA is presented in [31] for a new specification of the FJSSP that considers the human factor as a multi-objective problem. In [32], a PSO-GA hybrid algorithm is proposed to minimize the workload and the makespan. The minimization of inventory costs and total workflow are studied in [33], which involves a hybrid algorithm between the ABC algorithm and the modified migrating birds optimization (MMBO) algorithm to obtain satisfactory results. Another hybrid method is put forth in [34] based on SA and saving heuristics to minimize the energy consumption of machines, taking into account their deterioration to determine the precise processing time. In [35], the FJSSP with fuzzy times is solved with a hybrid multi-verse optimization (HMVO) algorithm. A hybrid approach to general scheduling problems is discussed in [36], considering batch dimensions, rework, and shelf-life constraints for defective items. The algorithm applies late acceptance hill-climbing (LAHC) and analytic procedures to accelerate the process. The computational results show the benefits of using hybridization techniques. In [37], a two-stage GA (2SGA) is developed; the first stage is to choose the order of operations and the selection of machines simultaneously, and the second is to include new variants that avoid population stagnation. 
A hybrid algorithm that combines brain storming optimization (BSO) with LAHC is advanced in [38]; the BSO is adapted to the FJSSP to explore the search space by grouping the solutions, and the exploitation is performed with the LAHC. The hybridization of the human learning optimization (HLO) algorithm and the PSO is analyzed in [39], again applying the HLO for global search and an adaptation of the PSO to refine the local search.
The above-mentioned works suggest that hybrid approaches are still a developing research trend in solving the FJSSP and its variants, for which many of these works use GA in conjunction with another technique for local search.
However, to our knowledge, no work has applied GA operators based on a cellular automata-inspired neighborhood to explore solutions and their hybridization with the RRHC for the refinement of these solutions.
This article proposes a new hybrid technique called GA-RRHC that combines two metaheuristic techniques: the first for global search using genetic algorithm (GA) operators and a neighborhood based on concepts of cellular automata (CA) used mostly on the programming of the order of operations. As a second step, each solution is refined by a local search that applies random-restart hill-climbing (RRHC), in particular, to make the best selection of machines for critical operations, which is more convenient for problems with high flexibility. Restart is used as a simple strategy to avoid premature convergence of solutions.
The contribution of this research lies in the original use of two types of easy-to-implement operators to define a robust hybrid technique that finds satisfactory solutions to instances of the FJSSP for minimizing the processing time of all the jobs (or makespan). The GA-RRHC was implemented in Matlab and is available on Github https://github.com/juanseck/GA-RRHC (accessed on 5 August 2022).
The structure of this article is as follows: Section 2 provides the formal presentation of the FJSSP. Section 3 proposes the new GA-RRHC method, explaining the genetic operators used, the CA-inspired neighborhood for the evolution of the population of solutions, and the operation of the RRHC to refine each solution. Section 4 discusses the parameter tuning of the GA-RRHC and the comparison with six other recently published algorithms on four FJSSP datasets commonly used in the literature, with a statistical analysis based on the non-parametric Friedman test and the ranking by relative percentage deviation (RPD). Section 5 gives the concluding comments of this manuscript.

2. Description of the FJSSP

The flexible job shop scheduling problem (FJSSP) consists of a set $J = \{J_1, J_2, \ldots, J_n\}$ of $n$ jobs and a set $M = \{M_1, M_2, \ldots, M_m\}$ of $m$ machines. Each job $J_i$ has $n_i$ operations $O_{J_i} = \{O_{i,1}, O_{i,2}, \ldots, O_{i,n_i}\}$. Each operation $O_{i,j}$ can be performed by one machine from a set of feasible machines $M_{i,j} \subseteq M$, for $1 \le i \le n$ and $1 \le j \le n_i$. The processing time of $O_{i,j}$ on $M_k$ is represented by $p_{i,j,k}$, and $o = \sum_{i=1}^{n} n_i$ is the total number of operations.
It is necessary to carry out all operations to complete a job, respecting the operation precedence. The FJSSP has the following conditions: (1) At the start, all jobs and all machines are available. (2) Each operation can only be performed by one machine. (3) A machine cannot be interrupted when processing an operation. (4) Every machine can perform at most one operation at a time. (5) Once defined, the order of operations cannot be changed. (6) Machine breakdowns are not considered. (7) Operations of different jobs have no precedence restrictions between them. (8) The machines do not depend on each other. (9) The processing time includes the preparation of the machines and the transfer of operations.
One solution of an FJSSP instance includes two parts: a sequence of operations that respects the precedence constraints of each job, and the assignment of a feasible machine to each operation. The objective of this paper is to calculate the sequence of operations and the assignment of machines that minimize the makespan $C_{max}$, the total time needed to complete all jobs, as defined in Equation (1).

$$\min\{C_{max}\}, \quad \text{where } C_{max} = \max\{C_i\}, \ \text{for } 1 \le i \le n \qquad (1)$$

$C_i$ is the time when all operations in $J_i$ are completed, subject to:

$$\forall\, 1 \le i \le n,\ 1 \le j \le n_i,\ \exists\, 1 \le k \le m \ \text{such that}\ M_k \in M_{i,j},\ p_{i,j,k} > 0 \qquad (2)$$

$$s_{i,j,k} + p_{i,j,k} \le s_{i,j+1,k} \qquad (3)$$

$$X_{i,j,k} = \begin{cases} 1, & \text{if operation } O_{i,j} \text{ is processed on } M_k\\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

$$\sum_{k=1}^{m} X_{i,j,k} = 1 \qquad (5)$$

$$\sum_{i=1}^{n} \sum_{j=1}^{n_i} X_{i,j,k} = 1 \qquad (6)$$
In this formulation, the start time of operation $O_{i,j}$ on $M_k$ is $s_{i,j,k}$. Equation (2) requires the processing time of every operation to be greater than 0. The precedence between operations of the same job is considered in Equation (3). Equation (4) records the assignment of an operation to a valid machine; Equation (5) states that each operation is processed by only one machine. The constraint in Equation (6) guarantees that, at any time, every machine can process only one operation.
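To make these definitions concrete, the following sketch (a minimal Python rendition for illustration, not the authors' Matlab code; all names are assumptions of this example) computes the makespan of a candidate schedule by reading an operation sequence left to right and tracking when each job and each machine becomes free:

```python
def makespan(op_seq, machine_of, proc):
    """Greedy left-to-right decoding of an operation sequence.

    op_seq     : list of job ids; the j-th occurrence of job i is operation (i, j)
    machine_of : dict (i, j) -> machine id assigned to operation (i, j)
    proc       : dict (i, j, k) -> processing time p_{i,j,k}
    """
    job_ready = {}    # completion time of the last scheduled operation of each job
    mach_ready = {}   # time at which each machine becomes free
    op_count = {}     # occurrences of each job seen so far in op_seq
    for i in op_seq:
        j = op_count.get(i, 0)
        op_count[i] = j + 1
        k = machine_of[(i, j)]
        # an operation starts when both its job predecessor and its machine are free
        start = max(job_ready.get(i, 0), mach_ready.get(k, 0))
        end = start + proc[(i, j, k)]
        job_ready[i] = end
        mach_ready[k] = end
    return max(job_ready.values())
```

For instance, with two jobs of two operations each, job 0 on machines (0, 1) with times (2, 3) and job 1 on machines (1, 0) with times (2, 4), the sequence [0, 1, 0, 1] yields a makespan of 6.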
Table 1 provides an example of an FJSSP instance with three jobs, two operations per job, and three machines, where all machines can perform all operations. A possible solution to this problem is presented in the Gantt chart in Figure 1.

3. Genetic Algorithm and Random-Restart Hill-Climbing (GA-RRHC)

The idea of using a genetic algorithm with a random-restart hill-climbing (GA-RRHC) algorithm is to propose a method that is easy to understand and implement and simultaneously capable of obtaining competitive results compared with other techniques in the optimization of different FJSSP instances.
The GA-RRHC uses a genetic algorithm as the global search method, mainly using the job-based crossover (JBX) and the precedence operation crossover (POX) and, as mutation over the sequences of operations, the swapping of two operations and the random repositioning of operations of three different jobs. Two-point crossover and mutation by changing feasible machines at random are used for machine allocation. These operators have been previously used to solve instances of the FJSSP and have shown good results [19]. A contribution of this work is the method of applying these operators using a neighborhood inspired by cellular automata (CA), where each solution chooses several neighbors. Crossover and mutation are applied for each neighbor, and from all the neighboring solutions, the best one replaces the original. This idea has already been explored in the global–local neighborhood search (GLNSA) algorithm, although using different operators [40].
The local search in the GA-RRHC applies a random-restart hill-climbing (RRHC) algorithm to refine the machine selection for the critical operations of each solution. The RRHC has been used successfully in the FJSSP [30], although not in combination with a GA, which represents another contribution of this work. An advantage of RRHC is its easy implementation, unlike other techniques for discrete problems such as simulated annealing or tabu search [41].
In short, the GA-RRHC generates a random population of solutions. A neighborhood of new solutions produced with genetic operators is taken for each solution, and the best one is chosen. This new solution is refined with the RRHC. The optimization loop repeats until a limit of iterations is met, or a best solution is not calculated after a certain number of repetitions.

3.1. Encoding and Decoding Solutions

Initially, the GA-RRHC generates a random population of $S_n$ solutions called smart-cells. Each smart-cell comprises two sequences, one for the operations ($OS$) and the other for the machine assigned to each operation ($MS$). Both sequences have $o$ elements. The GA-RRHC uses the decoding described in [19].
In the sequence $OS$, each job $J_i$ appears $n_i$ times (a permutation with repetitions). The sequence $MS$ has $o$ elements and is divided into $n$ parts. The $i$th part holds the machines selected to process job $J_i$ and has $n_i$ elements. Figure 2 depicts the codification of the solution in the Gantt diagram of Figure 1.
To decode the solution, $OS$ is read from left to right. The $j$th occurrence of $J_i$ signifies that operation $O_{i,j}$ must be processed. This encoding guarantees that any permutation with repetitions represents a valid sequence $OS$. For each part $i$ of $MS$, the $j$th value represents the machine processing $O_{i,j}$.
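The decoding step above can be sketched as follows (an illustrative Python fragment under the assumptions of this section; function and variable names are not from the paper):

```python
def decode(OS, MS, n_ops):
    """Decode a smart-cell into an ordered list of (job, op, machine).

    OS    : permutation with repetition of job ids; the j-th occurrence
            of job i encodes operation (i, j)
    MS    : flat machine list partitioned by job; part i has n_ops[i] entries
    n_ops : number of operations of each job
    """
    # starting offset of job i's part inside MS
    offsets, s = [], 0
    for ni in n_ops:
        offsets.append(s)
        s += ni
    seen, order = {}, []
    for i in OS:
        j = seen.get(i, 0)        # j-th occurrence of job i seen so far
        seen[i] = j + 1
        order.append((i, j, MS[offsets[i] + j]))
    return order
```

For example, with two jobs of two operations each, `decode([1, 0, 1, 0], [0, 1, 1, 0], [2, 2])` schedules $O_{1,1}$ on machine 1 first, then $O_{0,1}$ on machine 0, and so on.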

3.2. Qualitative Description of the GA

Each iteration of the GA used in this work involves a selection stage to refine the population of smart-cells, favoring those with lower makespan. Inspired by the CA neighborhood concept [42], for each smart-cell, a neighborhood of new solutions is produced with different crossover and mutation operators. The one with the lowest makespan is chosen as the new smart-cell. The operators used for each part of the GA are described below.

3.2.1. Population Selection

Two types of selection are used in this stage: elitism and tournament. Elitism selects a proportion E p of the best smart-cells for the next iteration without change, guaranteeing that their information remains available to improve the rest of the population. Genetic operators and RRHC will not be applied in elite smart-cells. A tournament selection is used to select the rest of the smart-cells. Random pairs of smart-cells are chosen, and for each pair, the smart-cell with the lowest makespan is selected for the next iteration.
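The selection stage can be sketched as follows (a hedged Python illustration; the population here is any list of solutions and `fitness` returns the makespan, with all names being assumptions of this example):

```python
import random

def select_population(population, fitness, elite_prop):
    """Elitism plus binary tournament, as described above. Returns the
    elite smart-cells (kept unchanged) and the tournament winners that
    fill the rest of the next population."""
    ranked = sorted(population, key=fitness)
    n_elite = max(1, round(elite_prop * len(population)))
    elite = ranked[:n_elite]
    rest = []
    while len(elite) + len(rest) < len(population):
        a, b = random.sample(population, 2)     # random pair of smart-cells
        rest.append(min((a, b), key=fitness))   # lower makespan wins the tournament
    return elite, rest
```

Genetic operators and the RRHC would then be applied only to the non-elite part, as the text indicates.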

3.2.2. Crossover Operators

Smart-cell crossover uses two operators for the $OS$ sequences; each is applied with 50% probability. The first is the precedence operation crossover (POX), where the set of jobs is divided into two random subsets $J_A$ and $J_B$ such that $J_A \cup J_B = J$ and $J_A \cap J_B = \emptyset$. For two sequences $OS_1$ and $OS_2$, two new sequences $OS'_1$ and $OS'_2$ are obtained. The operations of jobs in $J_A$ are placed in $OS'_1$ at the same positions as in $OS_1$. The operations of $J_B$ fill the empty positions of $OS'_1$, keeping the left-to-right order (seriatim) in which they appear in $OS_2$. The analogous process is carried out to form $OS'_2$, first taking the operations of $J_A$ at the same positions as in $OS_2$; the empty spaces of $OS'_2$ are then filled with the operations of $J_B$ in $OS_1$ seriatim. Figure 3 exemplifies the POX crossover with three jobs and six operations.
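The POX step can be sketched as follows (an illustrative Python fragment, not the authors' implementation; `jobs_a` plays the role of $J_A$):

```python
def pox(os1, os2, jobs_a):
    """Precedence operation crossover (POX) sketch. Operations of jobs in
    jobs_a keep their positions from the first parent; the remaining slots
    are filled, left to right (seriatim), with the other parent's
    operations of the remaining jobs."""
    def child(keep_from, fill_from):
        kept = [g if g in jobs_a else None for g in keep_from]
        filler = (g for g in fill_from if g not in jobs_a)
        return [g if g is not None else next(filler) for g in kept]
    return child(os1, os2), child(os2, os1)
```

With $OS_1 = [0,1,2,0,1,2]$, $OS_2 = [2,1,0,2,1,0]$, and $J_A = \{0\}$, the children are $[0,2,1,0,2,1]$ and $[1,2,0,1,2,0]$: job 0 keeps its positions, and the other operations arrive seriatim from the opposite parent.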
The second operator is the job-based crossover (JBX), which also defines the subsets $J_A$ and $J_B$. From two sequences $OS_1$ and $OS_2$, $OS'_1$ is obtained in the same way. The difference lies in the specification of $OS'_2$, first taking the operations of $J_B$ at the same positions as in $OS_2$. Next, the empty spaces of $OS'_2$ are filled with the operations of $J_A$ in $OS_1$ seriatim. Figure 4 presents an example of a JBX crossover with three jobs and six operations.
A two-point crossover is used for the $MS$ sequences. For two sequences $MS_1$ and $MS_2$, two random positions $1 < a_1 < a_2 < o$ are chosen. A new sequence $MS'_1$ is obtained by taking the elements of $MS_1$ at positions $[1, a_1 - 1]$ and $[a_2 + 1, o]$ and of $MS_2$ at positions $[a_1, a_2]$. Similarly, a new sequence $MS'_2$ is formed by exchanging the roles of $MS_1$ and $MS_2$. Figure 5 presents a two-point crossover for three machines and six operations.
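A minimal sketch of the two-point crossover (illustrative Python with 0-based indexing; the optional `cuts` argument is an assumption of this example, added so the operator can be shown deterministically):

```python
import random

def two_point_crossover(ms1, ms2, cuts=None):
    """Two-point crossover for machine sequences: the segment between the
    two cut points is exchanged between the parents. If cuts is not given,
    the two positions are drawn at random, as in the paper."""
    o = len(ms1)
    a1, a2 = cuts if cuts is not None else sorted(random.sample(range(1, o - 1), 2))
    c1 = ms1[:a1] + ms2[a1:a2 + 1] + ms1[a2 + 1:]
    c2 = ms2[:a1] + ms1[a1:a2 + 1] + ms2[a2 + 1:]
    return c1, c2
```

For example, crossing `[0,0,0,0,0,0]` with `[1,1,1,1,1,1]` at cuts (1, 3) gives `[0,1,1,1,0,0]` and `[1,0,0,0,1,1]`.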

3.2.3. Mutation Operators

Smart-cell mutation uses two operators over the $OS$ sequences, and each type of mutation is applied with 50% probability. The first is swap mutation, where two positions of $OS$ are selected and their elements swapped to obtain $OS'$. An example is shown in Figure 6.
The second mutation is the random change of positions for operations of three different jobs. Three positions of $OS$ belonging to different jobs are selected, and their contents are randomly rearranged to obtain $OS'$. Operations from three different jobs are chosen because exchanging positions of operations of the same job would not generate a new solution. Moreover, a more significant perturbation is achieved by selecting three operations, which is suitable for the exploration stage and for escaping local minima, especially in cases with a larger number of jobs and operations. This type of mutation was proposed by [19], obtaining good results. An example is depicted in Figure 7.
For the sequence $MS$, a mutation of assigned machines is applied; $o/2$ random positions are chosen, and for those positions, feasible random machines, different from the initial selection, are chosen. One example is presented in Figure 8.
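The swap mutation and the machine mutation can be sketched together as follows (an illustrative Python fragment under the assumptions of this section; `feasible[p]` is a name introduced here for the feasible-machine set of the operation at position `p`):

```python
import random

def swap_mutation(os_seq):
    """Swap the elements at two random positions of OS."""
    s = list(os_seq)
    a, b = random.sample(range(len(s)), 2)
    s[a], s[b] = s[b], s[a]
    return s

def machine_mutation(ms_seq, feasible):
    """Pick o/2 random positions of MS and reassign each one to a feasible
    machine different from the current choice. feasible[p] lists the
    machines that can process the operation encoded at position p."""
    s = list(ms_seq)
    for p in random.sample(range(len(s)), len(s) // 2):
        options = [k for k in feasible[p] if k != s[p]]
        if options:
            s[p] = random.choice(options)
    return s
```

Both operators return a new sequence, leaving the original smart-cell untouched until the neighborhood comparison decides which solution survives.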

3.2.4. CA-Type Neighborhood to Apply Genetic Operators

According to Eiben and Smith [43], an evolutionary algorithm is an optimization strategy modelled around the evolution of different biological systems. If a system shows dynamic behaviors applicable in solving complex problems, it can be a source of inspiration to define new evolutionary algorithms, regardless of whether these systems are natural or artificial.
CAs are elemental discrete dynamical systems capable of generating complex global behaviors [42,44]. The simplest model, known as elementary CA, is discrete in states and time; it consists of a linear array of cells with an initial assigned state, which can be a number or color. Each cell keeps or changes its state depending on its present state and those of its neighbors on either side, having a whole neighborhood of three cells. This process is applied synchronously to update the state of all cells.
This CA-type neighborhood has been successfully applied in different optimization algorithms [45,46,47,48]. However, to our knowledge, this type of neighborhood has not yet been applied with genetic operators for the FJSSP. In this work, to explore the solution space, each smart-cell generates $l$ new neighboring solutions using crossover and mutation. The best of these $l$ solutions (with the smallest makespan) is selected to replace the original smart-cell. Figure 9 shows the CA-type neighborhood implemented in the proposed algorithm.
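One neighborhood update step can be sketched as follows (a hedged Python illustration; `offspring` stands for any crossover/mutation pipeline and `fitness` for the makespan evaluation, both names being assumptions of this example; here the neighbor only replaces the cell when it improves it, one plausible reading of the update rule):

```python
import random

def update_smart_cell(cell, population, l, offspring, fitness):
    """CA-type neighborhood step: generate l neighboring solutions of
    `cell` by applying genetic operators against random partners, then
    keep the neighbor with the smallest makespan if it improves the cell."""
    neighbors = [offspring(cell, random.choice(population)) for _ in range(l)]
    best = min(neighbors, key=fitness)
    return best if fitness(best) < fitness(cell) else cell
```

As in a cellular automaton, this rule is applied synchronously to every smart-cell of the population in each iteration.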
The previous genetic operators perform the global search mainly to improve the O S sequence of each smart-cell. Next, the local search method for optimizing the M S sequences of each smart-cell is described.

3.3. Random-Restart Hill-Climbing (RRHC)

A local search method is intended to exploit the information of the smart-cells to decrease their makespan. The general idea is to start from each smart-cell and make small changes to improve the makespan. The metaheuristic used is random-restart hill-climbing (RRHC) [30]. To escape local minima, the RRHC can restart the search, after a given number of steps, from a solution with a makespan worse than the current best. The whole process ends after a fixed number of steps.
To apply the RRHC, the critical operations of the smart-cell are first detected. These operations determine the makespan: they form a chain, linked by job or by machine, whose processing times sum from the beginning to the end to the makespan, with no idle times between them [18].
A record of the previous task of each operation is kept when calculating the makespan to know which are the critical operations of a smart-cell, where the initial operations of each job do not have a previous operation. This record allows a fast computation of the critical operations by simply taking one of the last operations with a completion time identical to the makespan. Subsequently, previous operations on the same machine and at the same job are analyzed, taking that with end time equal to the start time of the present operation. A random pick is made if the same completion time is held by both previous operations. The procedure is repeated until an operation with no preceding operation is reached. Figure 10 presents the critical path of the solution represented in the Gantt diagram of Figure 1.
From the set of critical operations, one is taken at random, and a different feasible machine is chosen to have a new sequence M S . Additionally, another critical operation with probability α c is selected, and is swapped with any other operation in O S to have a new sequence O S and generate a different solution. Figure 11 shows a makespan improvement changing the machine assignment of a critical operation.
The RRHC is applied for $H_n$ iterations, keeping a pile of $H_r < H_n$ new solutions generated through the described process. If one of these solutions improves the makespan, it replaces the smart-cell and the pile is emptied. If the makespan of the smart-cell has not been improved after $H_r$ iterations, a random solution is taken from the pile to restart the climbing. Since the RRHC focuses on improving machine allocation, it works best for instances with high flexibility.
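The RRHC loop can be sketched as follows (an illustrative Python fragment, not the authors' Matlab code; `perturb` abstracts the move that reassigns a feasible machine of a random critical operation, and all names are assumptions of this example):

```python
import random

def rrhc(start, perturb, fitness, Hn, Hr):
    """Random-restart hill-climbing sketch. Non-improving candidates are
    pushed onto a pile; after Hr steps without improvement the climb
    restarts from a random pile member, which may be worse than the
    current best, to escape local minima."""
    best = current = start
    pile, stall = [], 0
    for _ in range(Hn):
        cand = perturb(current)
        if fitness(cand) < fitness(best):
            best = current = cand   # improvement: accept and empty the pile
            pile.clear()
            stall = 0
        else:
            pile.append(cand)
            stall += 1
            if stall >= Hr:
                current = random.choice(pile)   # restart from the pile
                pile.clear()
                stall = 0
    return best
```

Compared with simulated annealing or tabu search, the only state this climber keeps is the pile and a stall counter, which is what makes it easy to implement.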

3.4. Integration of the GA-RRHC Algorithm

Algorithm 1 depicts the GA-RRHC pseudocode. Figure 12 illustrates the flowchart of the proposed method. After the GA-RRHC parameters are defined, the smart-cell population is generated, evaluated, and selected. GA operators are applied in a CA-like neighborhood for a global search to update every smart-cell. Next, each smart-cell is improved by a local search performed by the RRHC. Finally, the best smart-cell is returned.  
Algorithm 1: Pseudocode of the GA-RRHC

4. Results of Experiments

The GA-RRHC was coded in Matlab R2015a (TM) on an Intel Xeon W machine with 128 GB of RAM and running at 2.3 GHz. The source code is available on Github https://github.com/juanseck/GA-RRHC (accessed on 5 August 2022). Four datasets were taken to test the effectiveness of the GA-RRHC. These datasets have been widely used in the specialized literature, with instances having different degrees of flexibility. A flexibility rate between 0 and 1 is specified as β = (flexibility average/number of machines). A high value of β means that the same operation can be processed by more machines.
The first experiment takes the Kacem dataset [49], with five instances, of which four have a value of β = 1 (full flexibility). The BRdata dataset is used for the second experiment and consists of 10 instances, from 10 to 20 jobs and 4 to 15 machines, with partial flexibility (β ≤ 0.35) [17]. The third and fourth datasets are the Rdata and Vdata datasets, each with 43 problems going from 6 to 30 jobs and 6 to 15 machines. The Rdata set has a maximum value of β ≤ 0.4, while all instances of the Vdata set have a rate of β = 0.5, which means that half of the machines can perform each operation [50]. These datasets are available at https://people.idsia.ch/~monaldo/fjsp.html (accessed on 5 August 2022).
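The flexibility rate can be computed directly from the feasible-machine sets of an instance (a small illustrative Python helper; the function name is an assumption of this example):

```python
def flexibility_rate(feasible_sets, m):
    """beta = (average number of feasible machines per operation) / m.
    beta = 1 means every machine can process every operation (full
    flexibility); beta = 0.5 means half of the machines can."""
    avg = sum(len(s) for s in feasible_sets) / len(feasible_sets)
    return avg / m
```

For example, four operations that can each run on both of two machines give β = 1.0, while one operation restricted to a single machine out of two lowers the average accordingly.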

4.1. GA-RRHC Parameter Tuning

The GA-RRHC has nine parameters that control its operation. The total number of iterations is $G_n$; $G_b$ is the limit of stagnation iterations; $S_n$ is the number of smart-cells; $E_p$ is the proportion of elite smart-cells; and $l$ is the number of neighbors of each smart-cell generated by the genetic operators. Further, $\alpha_m$ is the probability of mutation of a solution, $H_n$ is the number of RRHC iterations, $H_r$ is the number of iterations before restarting the RRHC, and $\alpha_c$ is the probability of moving a critical operation in the RRHC.
For the first three parameters ($G_n = 250$, $G_b = 50$, $S_n = 100$), the values used in [19,38] were employed as a reference, since these works obtained good results in minimizing the makespan. These parameter values are typically used in specialized publications and are comparable to those utilized by the methods employed in the following sections to compare the performance of the GA-RRHC.
To adjust the other parameters, different levels of each parameter were tested on the mt20 instance of the Vdata dataset, and the best combination of parameters was chosen.
For $E_p$, the values 0.02 and 0.04 were tested; the parameter $l$ was tested at 2 and 3; and the values 0.1 and 0.2 were tested for $\alpha_m$. For the RRHC, the values 80 and 100 and the values 30 and 40 were taken to tune $H_n$ and $H_r$, respectively. For $\alpha_c$, the values 0.025 and 0.05 were tested. In this way, 64 different combinations of parameters were evaluated; for each combination, 30 independent runs were performed, selecting the set of parameters with the least average makespan. Table 2 shows the GA-RRHC parameters used to analyze its results in the rest of the instances.

4.2. Comparison with Other Methods

Six algorithms published between 2016 and 2021 were used to compare the GA-RRHC performance. These algorithms include the global–local neighborhood search algorithm (GLNSA) [40], the hybrid algorithm (HA) [19], the greedy randomized adaptive search procedure (GRASP) [51], the hybrid brain storm optimization and late acceptance hill-climbing (HBSO-LAHC) [38], the improved Jaya algorithm (IJA) [52], and the two-level PSO (TlPSO) [53].
Comparing the execution times of the different algorithms is not appropriate, since they were implemented on different architectures, in different languages, and with different programming skills. Therefore, this work compares the algorithms by their computational complexity with respect to the total number of operations, $O(o)$. To use a common notation for all algorithms, the number of solutions is represented by $X$, the number of iterations by $G_n$, the number of local search iterations by $H_n$, and the number of machines by $m$. The CA-inspired neighborhood algorithms (GA-RRHC and GLNSA) satisfy $X = S_n \cdot l$, where $l$ is the number of neighbors in the global search.
The GLNSA uses elitism to select solutions and generates $l$ neighbors with insertion, swapping, and path-relinking operators, a machine mutation, and a tabu search on $S_n$ solutions. The GRASP calculates a Gantt chart and then applies a greedy local search that is quadratic with respect to $o$. The HA applies four genetic operations for each solution and then a TS to select different machines for the critical operations. The HBSO-LAHC uses a clustering of solutions, which requires calculating the distance between them, and then applies four strategies and three neighborhoods for each solution, one of them on critical operations. For the local search, a hill-climbing algorithm with late acceptance for $H_n$ steps is applied to each solution. The IJA is a modified Jaya algorithm that applies three exchange procedures to each solution and a local search with the random exchange of blocks of critical operations. Finally, the TlPSO uses two modifications of the PSO: the first is applied to improve the order of operations, and in each iteration of the first PSO, another PSO is used for the allocation of machines.
Table 3 presents the algorithms ordered from least to most complex, with $S_n < X < H_n$, since $H_n$ is usually set to a high value to obtain a better local search. The table shows that the GA-RRHC has a computational complexity comparable to recently proposed state-of-the-art algorithms.
This analysis only considers the complexity inherent in modifying the order of operations and the machine assignment in a solution. The computational complexity for calculating the makespan or tracking the critical operations is not considered since all the algorithms use them. Thus, the analysis only focuses on the computational processes that make each method different.

4.3. Kacem Dataset

In every experiment described in this work, the selected methods were tested with the same datasets as those reported in their references, making the presented analysis reliable. For each instance, 30 independent runs were executed and the smallest makespan obtained was taken. For the other algorithms, the best values reported in their respective papers were used.
For the Kacem dataset, the HBSO-LAHC does not report results, and only the GLNSA and IJA report complete results. Table 4 presents the outcomes of the GA-RRHC, where n indicates the number of jobs, m the number of machines, and β the flexibility rate of each instance. These problems have total flexibility, ranging from a small number of jobs and machines to high-dimensional instances.
Figure 13 presents the number of best results obtained by each algorithm in this dataset. Table 4 shows that the GA-RRHC matches the best makespan values, as do the GLNSA and IJA, confirming the satisfactory operation of the GA-RRHC on instances with high flexibility.

4.4. Brandimarte Dataset

For the rest of the experiments, the relative percentage deviation ( R P D ) and Friedman’s non-parametric test were employed to compare the GA-RRHC with the other methods [54]. The R P D is defined in Equation (7), where B O V is the best makespan obtained by each algorithm and B K V is the best-known makespan for each problem.
R P D = (B O V − B K V)/B O V × 100    (7)
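A direct transcription of Equation (7) in Python (the function name and the worked example are ours):

```python
def rpd(bov: float, bkv: float) -> float:
    """Relative percentage deviation: bov is the best-obtained makespan
    of an algorithm, bkv the best-known makespan of the instance."""
    return (bov - bkv) / bov * 100.0

# Instance MK01 of BRdata (Table 5): BKV = 36, obtained makespan 40.
assert abs(rpd(40, 36) - 10.0) < 1e-9
assert rpd(36, 36) == 0.0  # matching the best-known value gives RPD = 0
```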
Table 5 provides the results of the GA-RRHC compared with the other methods for the BRdata dataset. In each case, the smallest obtained makespan is marked with *.
It can be seen in Table 5 that the GA-RRHC obtains the smallest makespan in eight cases, behind the HA and IJA and with the same number of optimal results as the GRASP. Figure 14 shows the number of best results obtained by each algorithm in this dataset.
Table 6 shows the ranking of each method based on its average R P D , as well as the p-value of the non-parametric Friedman pairwise test comparing the GA-RRHC with each of the other algorithms. The GA-RRHC secured second place among the compared algorithms, obtaining p < 0.05 against the BSO-LAHC and GRASP.
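The ranking column of Table 6 follows directly from sorting the average RPD values; a quick check (the pairwise p-values come from the Friedman test and are not recomputed here):

```python
# Average RPD values copied from Table 6; rank 1 = smallest average RPD.
avg_rpd = {
    "BSO-LAHC": 11.3881, "GA-RRHC": 10.6710, "GLNSA": 11.0121,
    "GRASP": 11.4890, "HA": 10.5289, "IJA": 11.2665, "TlPSO": 11.2017,
}
ranking = sorted(avg_rpd, key=avg_rpd.get)          # ascending average RPD
rank_of = {alg: i + 1 for i, alg in enumerate(ranking)}

# Reproduces the ranks reported in Table 6:
assert rank_of["HA"] == 1 and rank_of["GA-RRHC"] == 2
assert rank_of["GLNSA"] == 3 and rank_of["GRASP"] == 7
```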
Figure 15 gives the signed difference between the average RPD of the GA-RRHC and that of each other method. A negative difference indicates inferior performance relative to the GA-RRHC.
This analysis indicates that the GA-RRHC is statistically competitive with the other four best algorithms for optimizing the BRdata dataset.

4.5. Rdata Dataset

This experiment takes the 43 instances of the Rdata dataset, with a flexibility rate β ranging from 0.13 to 0.4 . The results generated by the GA-RRHC are compared in Table 7 with those obtained by the GLNSA, HA, and IJA, the methods that report results for this dataset. Figure 16 depicts the number of best results obtained by each algorithm, where the GA-RRHC obtained the best value in 23 instances, behind the HA and IJA.
Table 8 ranks each algorithm by its average R P D and shows the comparative Friedman test. The results show that the GA-RRHC ranked third overall, as might be expected for instances with low flexibility. The statistical analysis shows no significant difference with the IJA.
Figure 17 shows the difference in average RPD between the GA-RRHC and the other methods. A negative value again indicates inferior performance compared to the GA-RRHC.
This experiment verifies that the GA-RRHC performs comparably to the IJA for problems with low flexibility and outperforms the GLNSA.

4.6. Vdata Dataset

This experiment takes the 43 instances of the Vdata dataset, all of which have a flexibility rate β = 0.5 . The results generated by the GA-RRHC have been compared again with those obtained by the GLNSA, HA, and IJA (Table 9).
Figure 18 presents the number of best results obtained by each algorithm in this dataset, where the GA-RRHC obtained the best value in all 43 instances, the same as the HA and superior to the GLNSA and IJA.
Table 10 shows the ranking, the average R P D , and the non-parametric Friedman pairwise test values for each algorithm. The GA-RRHC ranked first with respect to the average RPD over the 43 problems, and the results indicate a significant difference with the IJA.
Figure 19 depicts the difference of the average RPD between the GA-RRHC and the other methods.

4.7. Generated Large Dataset

The previous experiments proved the correct performance of the GA-RRHC. However, the largest instance considers only 30 jobs, 10 machines, and 300 operations, which can be insufficient for real-world cases. Consequently, three randomly generated large instances were studied using the parameters established by [17,55]. These instances, named VL01, VL02, and VL03, consider from 50 to 80 jobs, from 20 to 50 machines, and from 704 to 2773 operations, with a flexibility rate β = 0.75. Since these instances have not been used in previous studies, only the GLNSA is applied for comparison, whose code is available on GitHub https://github.com/juanseck/GLNSA-FJSP-2020 (accessed on 5 August 2022).
Table 11 shows that the GA-RRHC performs much better than the GLNSA, with the advantage growing as the problem dimension increases. Figure 20 presents three examples, one for each VL instance, of the optimization process achieved by the GA-RRHC. In the Gantt charts, it is notable how the idle times decrease from the initial random solutions as the search continuously converges to the minimum makespan.
These results corroborate that the GA-RRHC exhibits a performance comparable to recent algorithms recognized for their robustness in optimizing this type of problem, mainly for instances with high flexibility, and with competitive computational complexity.

5. Conclusions

This paper has described a hybrid algorithm that applies a global search with genetic crossover and mutation operators in a CA-like neighborhood. These operators mainly optimize the sequence of operations. Random-restart hill-climbing performs a local search to allocate the best machine to each critical operation. This GA-RRHC feature makes it suitable for FJSSP instances with high flexibility.
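The released implementation is in Matlab; the following Python sketch is only a schematic illustration of the random-restart hill-climbing idea applied to machine assignment. It uses the maximum machine workload as a simplified stand-in for the makespan and moves any operation, whereas the actual GA-RRHC recomputes the makespan and restricts moves to critical operations; all names and parameter values here are illustrative.

```python
import random

def rrhc_assign(proc_times, iters=100, restart_every=30, seed=0):
    """Toy random-restart hill-climbing for machine assignment.
    proc_times[i][k] is the processing time of operation i on machine k;
    the simplified objective is the maximum machine workload."""
    rng = random.Random(seed)
    n_ops, n_mach = len(proc_times), len(proc_times[0])

    def load(assign):  # max workload induced by an assignment
        w = [0] * n_mach
        for i, k in enumerate(assign):
            w[k] += proc_times[i][k]
        return max(w)

    current = [rng.randrange(n_mach) for _ in range(n_ops)]
    cost = load(current)
    best, best_cost = current[:], cost
    for it in range(1, iters + 1):
        if it % restart_every == 0:  # random restart to escape local optima
            current = [rng.randrange(n_mach) for _ in range(n_ops)]
            cost = load(current)
        cand = current[:]
        cand[rng.randrange(n_ops)] = rng.randrange(n_mach)  # single move
        cand_cost = load(cand)
        if cand_cost <= cost:  # accept non-worsening moves (hill-climbing)
            current, cost = cand, cand_cost
        if cost < best_cost:   # keep the overall best across restarts
            best, best_cost = current[:], cost
    return best, best_cost

# Processing times of the six operations of Table 1 (machines M1..M3):
pt = [[3, 4, 4], [1, 2, 1], [2, 3, 3], [3, 3, 2], [3, 3, 3], [2, 2, 1]]
assignment, workload = rrhc_assign(pt)
assert len(assignment) == 6 and workload >= 4  # 4 is a simple lower bound
```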
The CA-like neighborhood allows the concurrent application of genetic operations, which gives the GA-RRHC a better exploration capability. The hill-climbing stage uses a number of iterations similar to that of other algorithms and is easy to implement, yielding a satisfactory computational complexity. Four popular datasets (covering 101 problems) were used for the numerical experimentation of the GA-RRHC. The results show good performance compared to the recent algorithms taken as reference, especially for instances with high flexibility.
The GA-RRHC opens a new way to apply CA-like neighborhoods that concurrently apply the exploration and exploitation operators to solve task scheduling problems; for instance, the flowshop, the job shop, or the open shop cases.
As possible future work, it is suggested to use another type of exploitation technique, such as simulated annealing, similar to the one proposed in [56], to reduce the complexity of the local search. Other variants of hill-climbing algorithms can be tested for optimizing the sequences of operations to solve FJSSP instances with low flexibility, in addition to extending this methodology to the optimization of multi-objective problems.
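For reference, the acceptance rule that distinguishes simulated annealing from plain hill-climbing is the Metropolis criterion; a generic textbook sketch follows (not the specific variant of [56]):

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Metropolis criterion: improvements (delta <= 0) are always accepted;
    a worsening move of size delta is accepted with probability
    exp(-delta / temperature), which vanishes as the system cools."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)

assert sa_accept(-5.0, 1.0)                        # improvement: accepted
assert not sa_accept(50.0, 1e-9, rng=lambda: 0.5)  # cold system rejects worsening
```

Replacing the non-worsening acceptance test of hill-climbing with this rule yields a simulated annealing local search at essentially the same cost per move.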

Author Contributions

Conceptualization, N.J.E.-S. and J.C.S.-T.-M.; methodology, N.J.E.-S., J.C.S.-T.-M. and J.M.-M.; software, N.J.E.-S. and J.C.S.-T.-M.; validation, J.M.-M., I.B.-V. and J.R.C.-A.; formal analysis, N.J.E.-S., J.C.S.-T.-M. and J.M.-M.; investigation, N.J.E.-S., J.C.S.-T.-M. and I.B.-V.; resources, J.M.-M. and J.C.S.-T.-M.; data curation, N.J.E.-S. and J.C.S.-T.-M.; writing—original draft preparation, J.C.S.-T.-M., I.B.-V. and J.R.C.-A.; writing—review and editing, J.C.S.-T.-M., I.B.-V. and J.R.C.-A.; visualization, N.J.E.-S., J.C.S.-T.-M. and J.M.-M.; supervision, J.M.-M. and J.R.C.-A.; project administration, J.M.-M. and J.C.S.-T.-M.; funding acquisition, J.M.-M. and J.C.S.-T.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This study was supported by the Autonomous University of Hidalgo (UAEH) and the National Council for Science and Technology (CONACYT) with project numbers CB-2017-2018-A1-S-43008 and F003/320109. Nayeli Jazmin Escamilla Serna was supported by CONACYT grant number 1013175.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The GA-RRHC source code is available on GitHub https://github.com/juanseck/GA-RRHC (accessed on 5 August 2022). The datasets taken to test the GA-RRHC are available at https://people.idsia.ch/~monaldo/fjsp.html (accessed on 5 August 2022).

Conflicts of Interest

The authors declare that they have no known competing financial interest or personal relationships that could have appeared to influence the work reported in this paper.

Abbreviations

The following abbreviations were used in this research:
FJSSP   Flexible job shop scheduling problem
GA      Genetic algorithm
RRHC    Random-restart hill-climbing algorithm
CA      Cellular automata
POX     Precedence operation crossover
JBX     Job-based crossover
OS      Operation sequence
MS      Machine sequence

References

  1. Chen, H.; Ihlow, J.; Lehmann, C. A genetic algorithm for flexible job-shop scheduling. IEEE Int. Conf. Robot. Autom. 1999, 2, 1120–1125. [Google Scholar] [CrossRef]
  2. Pinedo, M.L. Scheduling Theory, Algorithms, and Systems, 5th ed.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–670. [Google Scholar] [CrossRef]
  3. Amjad, M.K.; Butt, S.I.; Kousar, R.; Ahmad, R.; Agha, M.H.; Faping, Z.; Anjum, N.; Asgher, U. Recent research trends in genetic algorithm based flexible job shop scheduling problems. Math. Probl. Eng. 2018, 2018, 9270802. [Google Scholar] [CrossRef]
  4. Pezzella, F.; Morganti, G.; Ciaschetti, G. A genetic algorithm for the flexible job-shop scheduling problem. Comput. Oper. Res. 2008, 35, 3202–3212. [Google Scholar] [CrossRef]
  5. Garey, M.; Johnson, D.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  6. Qing-dao-er ji, R.; Wang, Y. A new hybrid genetic algorithm for job shop scheduling problem. Comput. Ind. Eng. 2012, 1, 2291–2299. [Google Scholar] [CrossRef]
  7. Yu, Y. A research review on job shop scheduling problem. E3S Web Conf. 2021, 253, 02024. [Google Scholar] [CrossRef]
  8. Morrison David, R.; Jacobson, S.H.; Sauppe, J.J.; Sewell, E.C. Branch-and-bound algorithms: A survey of recent advances in searching, branching, and pruning. Discret. Optim. 2016, 19, 79–102. [Google Scholar] [CrossRef]
  9. Mallia, B.; Das, M.; Das, C. Fundamentals of transportation problem. Int. J. Eng. Adv. Technol. (IJEAT) 2021, 10, 90–103. [Google Scholar] [CrossRef]
  10. Che, P.; Tang, Z.; Gong, H.; Zhao, X. An improved Lagrangian relaxation algorithm for the robust generation self-scheduling problem. Math. Probl. Eng. 2018, 2018, 6303596. [Google Scholar] [CrossRef]
  11. Holland, J.H. Adaptation in Natural and Artificial Systems; MIT Press Ltd.: Cambridge, MA, USA, 1975; p. 232. [Google Scholar]
  12. Goldberg, D.E. Genetic Algorithms in Search, Optimization and Machine Learning; Addison-Wesley: Reading, MA, USA, 1989. [Google Scholar]
  13. Dorigo, M. Optimization, Learning and Natural Algorithms. Ph.D. Thesis, Politecnico di Milano, Milano, Italy, 1992. [Google Scholar]
  14. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948. [Google Scholar] [CrossRef]
  15. Glover, F. Tabu search—Part I. ORSA J. Comput. 1989, 1, 190–206. [Google Scholar] [CrossRef]
  16. Brucker, P.; Schlie, R. Job-shop scheduling with multi-purpose machines. Computing 1990, 45, 369–375. [Google Scholar] [CrossRef]
  17. Brandimarte, P. Routing and scheduling in a flexible job shop by tabu search. Ann. Oper. Res. 1993, 41, 157–183. [Google Scholar] [CrossRef]
  18. Mastrolilli, M.; Gambardella, L.M. Effective neighbourhood functions for the flexible job shop problem. J. Sched. 2000, 3, 3–20. [Google Scholar] [CrossRef]
  19. Li, X.; Gao, L. An effective hybrid genetic algorithm and tabu search for flexible job shop scheduling problem. J. Prod. Econ. 2016, 174, 93–110. [Google Scholar] [CrossRef]
  20. Deng, Q.; Gong, G.; Gong, X.; Zhang, L.; Liu, W.; Ren, Q. A bee evolutionary guiding nondominated sorting genetic algorithm II for multiobjective flexible job shop scheduling. Comput. Intell. Neurosci. 2017, 27, 5232518. [Google Scholar] [CrossRef]
  21. Gao, J.; Sun, L.; Gen, M. A hybrid genetic and variable neighborhood descent algorithm for flexible job shop scheduling problems. Comput. Oper. Res. 2008, 35, 2892–2907. [Google Scholar] [CrossRef]
  22. Zhang, G.; Shao, X.; Li, P.; Gao, L. An effective hybrid particle swarm optimization algorithm for multi-objective flexible job-shop scheduling problem. Comput. Ind. Eng. 2009, 56, 1309–1318. [Google Scholar] [CrossRef]
  23. Amiri, M.; Zandieh, M.; Yazdani, M.; Bagheri, A. A variable neighbourhood search algorithm for the flexible job-shop scheduling problem. Int. J. Prod. Res. 2010, 48, 5671–5689. [Google Scholar] [CrossRef]
  24. Li, J.q.; Pan, Q.k.; Liang, Y.C. An effective hybrid tabu search algorithm for multi-objective flexible job-shop scheduling problems. Comput. Ind. Eng. 2010, 59, 647–662. [Google Scholar] [CrossRef]
  25. Dalfard, V.M.; Mohammadi, G. Two meta-heuristic algorithms for solving multi-objective flexible job-shop scheduling with parallel machine and maintenance constraints. Comput. Math. Appl. 2012, 64, 2111–2117. [Google Scholar] [CrossRef]
  26. Yuan, Y.; Xu, H.; Yang, J. A hybrid harmony search algorithm for the flexible job shop scheduling problem. Appl. Soft Comput. 2013, 13, 3259–3272. [Google Scholar] [CrossRef]
  27. Li, J.Q.; Pan, Q.K.; Tasgetiren, M.F. A discrete artificial bee colony algorithm for the multi-objective flexible job-shop scheduling problem with maintenance activities. Appl. Math. Model. 2014, 38, 1111–1132. [Google Scholar] [CrossRef]
  28. Gao, K.Z.; Suganthan, P.N.; Chua, T.J.; Chong, C.S.; Cai, T.X.; Pan, Q.K. A two-stage artificial bee colony algorithm scheduling flexible job-shop scheduling problem with new job insertion. Expert Syst. Appl. 2015, 42, 7652–7663. [Google Scholar] [CrossRef]
  29. Li, X.; Peng, Z.; Du, B.; Guo, J.; Xu, W.; Zhuang, K. Hybrid artificial bee colony algorithm with a rescheduling strategy for solving flexible job shop scheduling problems. Comput. Ind. Eng. 2017, 113, 10–26. [Google Scholar] [CrossRef]
  30. Rodriguez Kato, E.R.; de Aguiar Aranha, G.D.; Tsunaki, R.H. A new approach to solve the flexible job shop problem based on a hybrid particle swarm optimization and random-restart hill climbing. Comput. Ind. Eng. 2018, 125, 178–189. [Google Scholar] [CrossRef]
  31. Gong, G.; Deng, Q.; Gong, X.; Liu, W.; Ren, Q. A new double flexible job-shop scheduling problem integrating processing time, green production, and human factor indicators. J. Clean. Prod. 2018, 174, 560–576. [Google Scholar] [CrossRef]
  32. Sreekara Reddy, M.; Ratnam, C.; Rajyalakshmi, G.; Manupati, V. An effective hybrid multi objective evolutionary algorithm for solving real time event in flexible job shop scheduling problem. Measurement 2018, 114, 78–90. [Google Scholar] [CrossRef]
  33. Meng, T.; Pan, Q.K.; Sang, H.Y. A hybrid artificial bee colony algorithm for a flexible job shop scheduling problem with overlapping in operations. Int. J. Prod. Res. 2018, 56, 5278–5292. [Google Scholar] [CrossRef]
  34. Wu, X.; Shen, X.; Li, C. The flexible job-shop scheduling problem considering deterioration effect. Comput. Ind. Eng. 2019, 135, 1004–1024. [Google Scholar] [CrossRef]
  35. Lin, J.; Zhu, L.; Wang, Z.J. A hybrid multi-verse optimization for the fuzzy flexible job-shop scheduling problem. Comput. Ind. Eng. 2019, 127, 1089–1100. [Google Scholar] [CrossRef]
  36. Goerler, A.; Lalla-Ruiz, E.; Voß, S. Late acceptance hill-climbing matheuristic for the general lot sizing and scheduling problem with rich constraints. Algorithms 2020, 13, 138. [Google Scholar] [CrossRef]
  37. Defersha, F.M.; Rooyani, D. An efficient two-stage genetic algorithm for a flexible job-shop scheduling problem with sequence dependent attached/detached setup, machine release date and lag-time. Comput. Ind. Eng. 2020, 147, 106605. [Google Scholar] [CrossRef]
  38. Alzaqebah, M.; Jawarneh, S.; Alwohaibi, M.; Alsmadi, M.K.; Almarashdeh, I.; Mohammad, R.M.A. Hybrid brain storm optimization algorithm and late acceptance hill climbing to solve the flexible job-shop scheduling problem. J. King Saud Univ. Comput. Inf. Sci. 2020, 34, 2926–2937. [Google Scholar] [CrossRef]
  39. Ding, H.; Gu, X. Hybrid of human learning optimization algorithm and particle swarm optimization algorithm with scheduling strategies for the flexible job-shop scheduling problem. Neurocomputing 2020, 414, 313–332. [Google Scholar] [CrossRef]
  40. Escamilla-Serna, N.; Seck-Tuoh-Mora, J.C.; Medina-Marin, J.; Hernandez-Romero, N.; Barragan-Vite, I.; Corona Armenta, J.R. A global-local neighborhood search algorithm and tabu search for flexible job shop scheduling problem. PeerJ Comput. Sci. 2021, 7, e574. [Google Scholar] [CrossRef] [PubMed]
  41. Jacobson, S.H.; Yücesan, E. Analyzing the performance of generalized hill climbing algorithms. J. Heuristics. 2004, 10, 387–405. [Google Scholar] [CrossRef]
  42. McIntosh, H.V. One Dimensional Cellular Automata; Luniver Press: Bristol, UK, 2009. [Google Scholar]
  43. Eiben, A.E.; Smith, J.E. Introduction to evolutionary computing. Nat. Comput. Ser. 2015, 2, 287. [Google Scholar] [CrossRef]
  44. Kari, J. Theory of cellular automata: A survey. Theor. Comput. Sci. 2005, 334, 3–33. [Google Scholar] [CrossRef]
  45. Shi, Y.; Liu, H.; Gao, L.; Zhang, G. Cellular particle swarm optimization. Inf. Sci. 2011, 181, 4460–4493. [Google Scholar] [CrossRef]
  46. Lagos-Eulogio, P.; Seck-Tuoh-Mora, J.C.; Hernandez-Romero, N.; Medina-Marin, J. A new design method for adaptive IIR system identification using hybrid CPSO and DE. Nonlinear Dyn. 2017, 88, 2371–2389. [Google Scholar] [CrossRef]
  47. Seck-Tuoh-Mora, J.C.; Medina-Marin, J.; Martinez-Gomez, E.S.; Hernandez-Gress, E.S.; Hernandez-Romero, N.; Volpi-Leon, V. Cellular particle swarm optimization with a simple adaptive local search strategy for the permutation flow shop scheduling problem. Arch. Control Sci. 2019, 29, 205–226. [Google Scholar]
  48. Hernández-Gress, E.S.; Seck-Tuoh-Mora, J.C.; Hernández-Romero, N.; Medina-Marín, J.; Lagos-Eulogio, P.; Ortíz-Perea, J. The solution of the concurrent layout scheduling problem in the job-shop environment through a local neighborhood search algorithm. Expert Syst. Appl. 2020, 144, 113096. [Google Scholar] [CrossRef]
  49. Kacem, I.; Hammadi, S.; Borne, P. Approach by localization and multiobjective evolutionary optimization for flexible job-shop scheduling problems. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2002, 32, 1–13. [Google Scholar] [CrossRef]
  50. Hurink, J.; Jurisch, B.; Thole, M. Tabu search for the job-shop scheduling problem with multi-purpose machines. Oper. Res. Spektrum 1994, 15, 205–215. [Google Scholar] [CrossRef]
  51. Baykasoğlu, A.; Madenoğlu, F.S.; Hamzadayı, A. Greedy randomized adaptive search for dynamic flexible job-shop scheduling. J. Manuf. Syst. 2020, 56, 425–451. [Google Scholar] [CrossRef]
  52. Caldeira, R.H.; Gnanavelbabu, A. Solving the flexible job shop scheduling problem using an improved Jaya algorithm. Comput. Ind. Eng. 2019, 137, 106064. [Google Scholar] [CrossRef]
  53. Zarrouk, R.; Bennour, I.E.; Jemai, A. A two-level particle swarm optimization algorithm for the flexible job shop scheduling problem. Swarm Intell. 2019, 13, 145–168. [Google Scholar] [CrossRef]
  54. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18. [Google Scholar] [CrossRef]
  55. Sun, L.; Lin, L.; Li, H.; Gen, M. Large scale flexible scheduling optimization by a distributed evolutionary algorithm. Comput. Ind. Eng. 2019, 128, 894–904. [Google Scholar] [CrossRef]
  56. Sajid, M.; Jafar, A.; Sharma, S. Hybrid Genetic and Simulated Annealing Algorithm for Capacitated Vehicle Routing Problem. In Proceedings of the 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 6–8 November 2020; pp. 131–136. [Google Scholar]
Figure 1. Gantt diagram of one possible solution for the FJSSP in Table 1.
Figure 2. Coding of the solution in the Gantt diagram of Figure 1 in sequences OS and MS.
Figure 3. Example of a precedence operation crossover (POX).
Figure 4. Example of a job-based crossover (JBX).
Figure 5. A two-point crossover example.
Figure 6. Example of swapping mutation.
Figure 7. Example of three job mutation.
Figure 8. Example of machine mutation.
Figure 9. CA-type neighborhood used in the GA-RRHC. Colors represent the selection and modification of different smart-cells.
Figure 10. Critical path of the solution described in Figure 1.
Figure 11. New solution obtained by selecting another random machine of a critical operation.
Figure 12. GA-RRHC flowchart.
Figure 13. Number of best results per algorithm for the Kacem dataset.
Figure 14. Number of best results per algorithm for the BRdata dataset.
Figure 15. Comparison of the RPD difference per algorithm for the BRdata dataset.
Figure 16. Number of best results per algorithm for the Rdata dataset.
Figure 17. Comparison of the RPD difference per algorithm for the Rdata dataset.
Figure 18. Number of best results per algorithm for the Vdata dataset.
Figure 19. Comparison of the RPD difference per algorithm for the Vdata dataset.
Figure 20. Examples of the optimization process of the GA-RRHC for the generated VL instances.
Table 1. Example of an FJSSP with 3 jobs, 2 operations per job, and 3 machines.
Job   Op.     M1   M2   M3
J1    O1,1    3    4    4
      O1,2    1    2    1
J2    O2,1    2    3    3
      O2,2    3    3    2
J3    O3,1    3    3    3
      O3,2    2    2    1
Table 2. Parameters selected for the execution of the GA-RRHC.
Parameter   Description                                              Value
Gn          total iterations of the algorithm                        250
Gb          limit of stagnation iterations                           50
Sn          number of smart-cells                                    100
Ep          proportion of elite smart-cells                          0.02
l           neighbors of each smart-cell                             3
αm          mutation probability                                     0.1
Hn          iterations of the RRHC                                   100
Hr          iterations to restart the RRHC                           30
αc          probability of moving a critical operation in the RRHC   0.05
Table 3. Computational complexity of the methods used in the experiments.
Method      Complexity                     Rank
TlPSO       O(o(Gn(2X + Gn2 X)))           1
GA-RRHC     O(o(Gn(Sn + X + Sn Hn m)))     2
GLNSA       O(o(Gn(Sn + X + Sn Hn m)))     2
IJA         O(o(Gn(3X + X Hn m)))          3
HA          O(o(Gn(4X + X Hn m)))          4
HBSO-LAHC   O(o(Gn(4X + X Hn m)))          4
GRASP       O(o²(Gn Hn))                   5
Table 4. Kacem dataset results.
Instancen × m β GA-RRHCGLNSAGRASPHAIJATlPSO
K14 × 51.11111111
K28 × 80.81141414141414
K310 × 71111111
K410 × 101777777
K515 × 1011111111111
Table 5. BRdata dataset results.
Instance  n × m    β     BKV   BSO-LAHC  GA-RRHC  GLNSA   GRASP   HA      IJA     TlPSO
MK01      10 × 6   0.2   36    40 *      40 *     40 *    40 *    40 *    40 *    40 *
MK02      10 × 6   0.35  24    26 *      26 *     26 *    26 *    26 *    27      26 *
MK03      15 × 8   0.3   204   204 *     204 *    204 *   204 *   204 *   204 *   204 *
MK04      15 × 8   0.2   48    60 *      60 *     60 *    60 *    60 *    60 *    60 *
MK05      15 × 4   0.15  168   173       172 *    173     172 *   172 *   172 *   173
MK06      10 × 15  0.3   33    61        58       58      64      57 *    57 *    60
MK07      20 × 5   0.3   133   141       139 *    139 *   139 *   139 *   139 *   139 *
MK08      20 × 10  0.15  523   523 *     523 *    523 *   523 *   523 *   523 *   523 *
MK09      20 × 10  0.3   299   307 *     307 *    307 *   307 *   307 *   307 *   307 *
MK10      20 × 15  0.2   165   204       198      205     205     197 *   197 *   205
Table 6. Algorithm ranking and Friedman test value for BRdata dataset.
Algorithm   Average RPD   Rank   p-value
BSO-LAHC    11.3881       6      0.0455
GA-RRHC     10.6710       2      ~
GLNSA       11.0121       3      0.1573
GRASP       11.4890       7      0.0435
HA          10.5289       1      0.1643
IJA         11.2665       5      0.0803
TlPSO       11.2017       4      0.0833
Table 7. Rdata dataset results, best makespan values are marked with *.
Instance  n × m    β     BKV    GA-RRHC  GLNSA   HA      IJA
mt06      6 × 6    0.33  47     47 *     47 *    47 *    47 *
mt10      10 × 10  0.2   686    686 *    686 *   686 *   686 *
mt20      20 × 5   0.4   1022   1022 *   1022 *  1024    1024
la01      10 × 5   0.4   570    571      571     570 *   571
la02      10 × 5   0.4   529    530 *    530 *   530 *   530 *
la03      10 × 5   0.4   477    477 *    477 *   477 *   477 *
la04      10 × 5   0.4   502    502 *    502 *   502 *   502 *
la05      10 × 5   0.4   457    457 *    457 *   457 *   457 *
la06      15 × 5   0.4   799    799 *    799 *   799 *   799 *
la07      15 × 5   0.4   749    749 *    749 *   749 *   749 *
la08      15 × 5   0.4   765    765 *    765 *   765 *   765 *
la09      15 × 5   0.4   853    853 *    853 *   853 *   853 *
la10      15 × 5   0.4   804    804 *    804 *   804 *   804 *
la11      20 × 5   0.4   1071   1071 *   1071 *  1071 *  1071 *
la12      20 × 5   0.4   936    936 *    936 *   936 *   936 *
la13      20 × 5   0.4   1038   1038 *   1038 *  1038 *  1038 *
la14      20 × 5   0.4   1070   1070 *   1070 *  1070 *  1070 *
la15      20 × 5   0.4   1089   1089 *   1089 *  1090    1090
la16      10 × 10  0.2   717    717 *    717 *   717 *   717 *
la17      10 × 10  0.2   646    646 *    646 *   646 *   646 *
la18      10 × 10  0.2   666    666 *    666 *   666 *   666 *
la19      10 × 10  0.2   647    700 *    700 *   700     702
la20      10 × 10  0.2   756    756 *    756 *   756 *   760
la21      15 × 10  0.2   808    850      852     835 *   854
la22      15 × 10  0.2   737    770      774     760 *   760 *
la23      15 × 10  0.2   816    850      854     840 *   852
la24      15 × 10  0.2   775    810      826     806 *   806 *
la25      15 × 10  0.2   752    800      803     789 *   803
la26      20 × 10  0.2   1056   1070     1075    1061 *  1061 *
la27      20 × 10  0.2   1085   1100     1109    1089 *  1109
la28      20 × 10  0.2   1075   1090     1096    1079 *  1081
la29      20 × 10  0.2   993    999      1008    997 *   997 *
la30      20 × 10  0.2   1068   1088     1096    1078 *  1078 *
la31      30 × 10  0.2   1520   1521 *   1527    1521 *  1521 *
la32      30 × 10  0.2   1657   1667     1667    1659 *  1659 *
la33      30 × 10  0.2   1497   1500     1504    1499 *  1499 *
la34      30 × 10  0.2   1535   1539     1540    1536 *  1536 *
la35      30 × 10  0.2   1549   1553     1555    1550 *  1555
la36      15 × 15  0.13  1016   1050     1053    1028 *  1050
la37      15 × 15  0.13  989    1092     1093    1074 *  1092
la38      15 × 15  0.13  943    995      999     960 *   995
la39      15 × 15  0.13  966    1030     1034    1024 *  1031
la40      15 × 15  0.13  955    998      997     970 *   993
Table 8. Algorithm ranking and Friedman test value for Rdata dataset.
Algorithm   Average RPD   Rank   p-value
GA-RRHC     1.5761        3      ~
GLNSA       1.7768        4      0.0001
HA          1.0872        1      0.0001
IJA         1.5155        2      0.9195
Table 9. Vdata dataset results, best makespan values are marked with *.
Instance  n × m    β    BKV    GA-RRHC  GLNSA   HA      IJA
mt06      6 × 6    0.5  47     47 *     47 *    47 *    47 *
mt10      10 × 10  0.5  655    655 *    655 *   655 *   655 *
mt20      20 × 5   0.5  1022   1022 *   1022 *  1022 *  1024
la01      10 × 5   0.5  570    570 *    570 *   570 *   571
la02      10 × 5   0.5  529    529 *    529 *   529 *   529 *
la03      10 × 5   0.5  477    477 *    477 *   477 *   477 *
la04      10 × 5   0.5  502    502 *    502 *   502 *   502 *
la05      10 × 5   0.5  457    457 *    457 *   457 *   457 *
la06      15 × 5   0.5  799    799 *    799 *   799 *   799 *
la07      15 × 5   0.5  749    749 *    749 *   749 *   749 *
la08      15 × 5   0.5  765    765 *    765 *   765 *   765 *
la09      15 × 5   0.5  853    853 *    853 *   853 *   853 *
la10      15 × 5   0.5  804    804 *    804 *   804 *   804 *
la11      20 × 5   0.5  1071   1071 *   1071 *  1071 *  1071 *
la12      20 × 5   0.5  936    936 *    936 *   936 *   936 *
la13      20 × 5   0.5  1038   1038 *   1038 *  1038 *  1038 *
la14      20 × 5   0.5  1070   1070 *   1070 *  1070 *  1070 *
la15      20 × 5   0.5  1089   1089 *   1089 *  1089 *  1089 *
la16      10 × 10  0.5  717    717 *    717 *   717 *   717 *
la17      10 × 10  0.5  646    646 *    646 *   646 *   646 *
la18      10 × 10  0.5  663    663 *    663 *   663 *   665
la19      10 × 10  0.5  617    617 *    617 *   617 *   618
la20      10 × 10  0.5  756    756 *    756 *   756 *   758
la21      15 × 10  0.5  800    804 *    806     804 *   806
la22      15 × 10  0.5  733    737 *    737 *   738     738
la23      15 × 10  0.5  809    813 *    813 *   813 *   813 *
la24      15 × 10  0.5  773    777 *    777     777 *   778
la25      15 × 10  0.5  751    754 *    754     754 *   754 *
la26      20 × 10  0.5  1052   1053 *   1054    1053 *  1054
la27      20 × 10  0.5  1084   1085 *   1085 *  1085 *  1085 *
la28      20 × 10  0.5  1069   1070 *   1070 *  1070 *  1070 *
la29      20 × 10  0.5  993    994 *    994 *   994 *   994 *
la30      20 × 10  0.5  1068   1069 *   1069 *  1069 *  1069 *
la31      30 × 10  0.5  1520   1520 *   1520 *  1520 *  1521
la32      30 × 10  0.5  1657   1658 *   1658 *  1658 *  1658 *
la33      30 × 10  0.5  1497   1497 *   1497 *  1497 *  1497 *
la34      30 × 10  0.5  1535   1535 *   1535 *  1535 *  1535 *
la35      30 × 10  0.5  1549   1549 *   1549 *  1549 *  1549 *
la36      15 × 15  0.5  948    948 *    948 *   948 *   950
la37      15 × 15  0.5  986    986 *    986 *   986 *   986 *
la38      15 × 15  0.5  943    943 *    943 *   943 *   943 *
la39      15 × 15  0.5  922    922 *    922 *   922 *   922 *
la40      15 × 15  0.5  955    955 *    955 *   955 *   956
Table 10. Algorithm ranking and Friedman test value for Vdata dataset.
Algorithm   Average RPD   Rank   p-value
GA-RRHC     0.0693        1      ~
GLNSA       0.0772        3      0.1573
HA          0.0724        2      0.3173
IJA         0.1177        4      0.0005
Table 11. Experiment with large instances.
Instance  n × m    o      GLNSA Best  GLNSA Avg.  GA-RRHC Best  GA-RRHC Avg.
VL01      50 × 20  704    592         617.3       551           570.9
VL02      60 × 30  1246   759         781.3       705           717.8
VL03      80 × 50  2773   1155        1183.1      1041          1058.4