Article

A New Frequency Analysis Operator for Population Improvement in Genetic Algorithms to Solve the Job Shop Scheduling Problem †

by Monique Simplicio Viana 1,*, Rodrigo Colnago Contreras 2,3 and Orides Morandin Junior 1

1 Department of Computing, Federal University of Sao Carlos, Sao Carlos 13565-905, SP, Brazil
2 Department of Computer Science and Statistics, Institute of Biosciences, Letters and Exact Sciences, Sao Paulo State University, Sao Jose do Rio Preto 15054-000, SP, Brazil
3 Department of Applied Mathematics and Statistics, Institute of Mathematical and Computer Science, University of Sao Paulo, Sao Carlos 13566-590, SP, Brazil
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Viana, M.S.; Contreras, R.C.; Morandin, O., Jr. A new genetic improvement operator based on frequency analysis for Genetic Algorithms applied to Job Shop Scheduling Problem. In Proceedings of the Artificial Intelligence and Soft Computing (ICAISC 2021), Zakopane, Poland, 20–24 June 2021.
Sensors 2022, 22(12), 4561; https://doi.org/10.3390/s22124561
Submission received: 14 May 2022 / Revised: 10 June 2022 / Accepted: 12 June 2022 / Published: 17 June 2022

Abstract:
Job Shop Scheduling is currently one of the most addressed planning and scheduling optimization problems in the field. Due to its complexity, as it belongs to the NP-Hard class of problems, meta-heuristics are one of the most commonly used approaches in its resolution, with Genetic Algorithms being one of the most effective methods in this category. However, it is well known that this meta-heuristic is affected by phenomena that worsen the quality of its population, such as premature convergence and population concentration in regions of local optima. To circumvent these difficulties, we propose, in this work, the use of a guidance operator responsible for modifying ill-adapted individuals using genetic material from well-adapted individuals. We also propose, in this paper, a new method of determining the genetic quality of individuals using genetic frequency analysis. Our method is evaluated over a wide range of modern GAs and considers two case studies defined by well-established JSSP benchmarks in the literature. The results show that the use of the proposed operator assists in managing individuals with poor fitness values, which improves the population quality of the algorithms and, consequently, leads to obtaining better results in the solution of JSSP instances. Finally, the use of the proposed operator in the most elaborate GA-like method in the literature was able to reduce its mean relative error from 1.395% to 0.755%, representing an improvement of 45.88%.

1. Introduction

Combinatorial optimization problems (COPs) consist of situations in which it is necessary to determine, through permutations of elements of a finite set, the configuration of parameters that is most advantageous [1]. Due to their high degree of applicability, many researchers have been addressing COPs in different contexts, for example, in logistics [2], vehicle routing [3] and railway transport control [4], among other current problems [5]. In particular, one of the most addressed COPs in the literature is production scheduling [6], which, according to Groover [7], is part of the Production Planning and Control activities and is responsible for determining the design of the operations that will be conducted, such as the environment in which products are processed, what resources are used and the start and end times for each production order.
Academic research and the development of solution methodologies have focused on a limited number of classic planning and production scheduling problems, one of the most researched being the variation known as the Job Shop Scheduling Problem (JSSP) [8], in which a finite set of jobs must be processed by a finite set of machines. In this category of problems, the objective is usually to determine a configuration of the processing order of a set of jobs or tasks to minimize, for example, the time of resource usage [9]. In this case, several performance measures are useful to evaluate how satisfactory a given configuration is for a JSSP, with the makespan [10], which corresponds to the total time needed to finish the production of a set of jobs, being one of the most used.
Belonging to the well-known NP-Hard class of problems, the JSSP presents itself as a computational challenge, since it is not a trivial task to develop an approach that determines exact solutions with an adequate performance measure within a reasonable time, even for small and moderate cases [11]. From this need, algorithms that present approximate results in a feasible computational time were developed and applied to the JSSP. The main methods used are those based on meta-heuristics [12], mainly the Evolutionary Algorithm (EA) known as the Genetic Algorithm (GA) [13,14,15,16,17]. Even so, the JSSP remains an open class of problems [18], with many instances still unsolved in the well-known benchmarks of the area [19]. This is because the existing methods do not have the efficiency necessary to guarantee their practical use.
More specifically, it is possible to highlight certain disadvantages in the use of GAs for solving COPs [20,21]. In particular, it is common for this set of techniques to stagnate [22] during their iterations in solutions that are local minima, a phenomenon known as premature convergence [23]. Furthermore, GAs may require high computational time [24] to obtain good solutions to this type of problem. Therefore, for complex problems, the GA needs to be combined with problem-specific routines to make the approach effective. Hybridization can be a highly effective way to improve the performance of these techniques. The most common forms of hybridization are the combination of GAs with local search strategies and the incorporation of domain-specific knowledge in the search process [25].
In the latter category are genetic improvement operators that act through manipulations of specific genes on a chromosome. Their main objective is to provide reinforcement, coming from one or more individuals who have been successful in the adaptation process, to individuals who are not able to stand out in a population. In other words, these operators direct the worst individuals in a population to areas known to be good in the search space.
The authors do Amaral and Hruschka Jr. [26,27] presented an operator in this line of reasoning, entitled the transgenic operator, which simulates the process of genetic improvement. To conduct such a procedure, in one of the stages of the GA, the population is replicated into four parallel sub-populations, and in each of these four populations, the best individuals transfer up to four genes, based on historical information, to selected individuals. Then, only the best individuals among the four sub-populations remain. Viana, Morandin Junior and Contreras [15] proposed an adaptation of the transgenic operator of do Amaral and Hruschka Jr. [27] to solve a JSSP with a GA. The authors propose identifying the relevance of the genes used in the transgenic process through a preprocessing step. However, such preprocessing is computationally time-consuming and may not be viable in large JSSPs.
In this work, we propose a new population guidance operator for GAs: the Genetic Improvement based on Frequency Analysis (GIFA) Operator. Our method consists of a new way to determine the genetic relevance based on the frequency analysis of the genes of individuals who have good fitness values in the population. We also propose the construction of a representative individual that represents this group of good individuals and that is used in the process of genetic manipulation to guide the worst individuals towards good solutions and, potentially, that these become positive highlights in the population.
This paper is an extended version of our preliminary work [28]. In this manuscript, we add a literature review section, and we consider more testing instances in our experimental evaluations. Furthermore, all steps of the method are outlined and detailed in the form of algorithms that simplify the reproducibility of the technique. This work is divided into six sections. Specifically, we discuss, in Section 2, works related to ours. In Section 3, we describe the formulation of the JSSP. We present, in Section 4, the details of the proposed GIFA operator and the requirements that a GA needs to satisfy to use it. Experimental results on different GAs using GIFA and the advancement of the state of the art of JSSPs are presented in Section 5. The work is finished in Section 6 with conclusions about the developments as well as future projections for improving the method and possible applications.

2. Related Works

Several meta-heuristics have been proposed in the literature to treat the JSSP, such as GA [11,13,15,20,29,30,31,32,33,34,35]; Simulated Annealing [36,37]; Hybrid Social Spider Optimization [38]; the Harris Hawk Optimizer [39]; Grey Wolf Optimization [40]; the Bat Algorithm [41]; Chicken Swarm Optimization [42]; the Single Seekers Society [43]; and Particle Swarm Optimization [44]. However, GAs remain one of the most common approaches used in resolving JSSPs. In the following paragraphs, we discuss certain works in the literature that deal with the JSSP production scheduling problem through meta-heuristics. These works were chosen because they have a great impact on the specialized literature and/or represent the state of the art. The works detailed in the following are those authored by Ombuki and Ventresca [29], Watanabe et al. [30], Asadzadeh [20], Jorapur et al. [31], Wang et al. [11], Wang et al. [36], Dao et al. [41], Jiang [40], Semlali et al. [42] and Kurdi [32].
The authors Ombuki and Ventresca [29] proposed the Local Search Genetic Algorithm (LSGA) meta-heuristic to treat the JSSP. The proposed LSGA is a Genetic Algorithm (GA) with local search, which has an operator similar to mutation that is focused on local search, with the aim of further improving the quality of the solution. The LSGA of Ombuki and Ventresca [29] is a hybrid strategy that uses a GA with the addition of a Tabu Search (TS) routine. The LSGA was one of the first works to incorporate a more elaborate local search strategy, which proved to be efficient among the GA-like methods of the time. However, the technique was not able to find the optimal values of medium-difficulty instances, such as FT10 [45].
Watanabe et al. [30] proposed a meta-heuristic based on a modified GA with search area adaptation (GSA). The proposed GSA has an adaptation of the search area with the ability to adapt to the structure of the solution space and to control the balance between global and local searches. The crossover operation of the GSA consists of performing the crossover several times on all pairs of parents each time a new cutoff point is drawn. The crossover is repeated until a child better than the worst individual in the population is found or until a certain number of iterations is reached. The GSA mutation operation consists of executing perturbations several times on all children and performing several swaps in their genes. The mutation is repeated until a mutant child better than the worst individual in the population is found or until a certain number of iterations is reached. As it was one of the first methods in this sense, the GSA was evaluated in a few instances and presented results far below the most recent methods.
On the same theme, Asadzadeh [20] presented the meta-heuristic Local Search Genetic Algorithm (aLSGA) with the inclusion of intelligent agents. The method is composed of a multi-agent system, in which each agent has a specialized behavior to implement the local search. The aLSGA combines local search heuristics with crossover and mutation operators. The use of multiple mutation functions expands the search power of the method; however, aLSGA considers only one function for its more elaborate search strategy.
The authors Jorapur et al. [31] proposed the Promising Initial Population-Based Genetic Algorithm (IPBGA) meta-heuristic. The IPBGA is a combination of a GA with a new job-based modeling for the construction of the initial population. The objective of the work was to present an alternative population modeling for GAs and also to show the impact that this type of alteration can have on the effectiveness of a GA. However, the IPBGA achieved the best-known solution in only a few instances; in the others, the results obtained were significantly far from the optimal solution.
The Adaptive Multi-population Genetic Algorithm (AMGA) meta-heuristic was proposed by Wang et al. [11]. The idea of AMGA is based on a GA that uses multiple populations and has adaptive probabilities of crossover and mutation, intending to expand the scope of the search and improve its performance. The work has some points that differ from other works that deal with the JSSP through GAs. The first point is the insertion of multiple populations in the GA, the second is the adaptive probability of crossover and mutation and the third is that the elite individuals (individuals with better fitness) from each population evolve directly into the next generation.
AMGA was tested on 39 instances, and it was able to find the best-known solution in 38 of those instances. The computational results showed that the AMGA can produce optimal or near-optimal values in almost all the benchmark instances tested; however, in the instances of Lawrence [46], not all were tested, and the instances that were left without evaluation are precisely the instances that present a greater complexity.
Jiang [40] developed the Hybrid Gray Wolf Optimization (HGWO) meta-heuristic. The HGWO is composed of the combination of the GWO algorithm with the local VNS algorithm as well as the addition of genetic operators (crossover and mutation) to balance the capacity of local and global exploration of the algorithm. In the proposal, three neighborhood structures were used: Swap, Insert and Inverse. The proposed algorithm obtained competitive results when compared with relevant works in the literature; however, of the 40 instances proposed by [46], only the 20 smallest were considered, and thus there is no way to evaluate the behavior of the algorithm in instances with greater complexity.
Wang et al. [36] proposed the TSAUN meta-heuristic, which is a hybrid local search algorithm. TSAUN is composed of the combination of the Simulated Annealing (SA) method and the Tabu Search (TS) method. The TSAUN structure runs an SA core and applies the TS technique to a local search. This hybrid algorithm takes advantage of stochastic SA to escape local minima and, at the same time, improves the search performance through TS. TSAUN did not achieve the best results in the tested instances; however, the method proved to be competitive with other works present in the state of the art. The work presents a contribution in the area of hybrid algorithms with the insertion of local search techniques, and the results obtained reinforce the improvement that these combinations of techniques can achieve.
In the article of Dao et al. [41], the meta-heuristic Parallel Bat Algorithm (PBA) was proposed, which is composed of the meta-heuristic Bat Algorithm (BA) with the inclusion of parallel processing. The objective of adding parallel processing to BA was that, with communication strategies, it is possible to correlate individuals in each cluster and share information among them. Communications provide improved diversity and accelerate the search for satisfactory solutions. Neighborhood operators of the types Swap, Insert and Inverse were also included in the proposal. It is clear from the work that the BA with the inclusion of parallel processing can achieve better solutions in JSSPs than can a basic BA.
Semlali et al. [42] proposed the meta-heuristic Memetic Chicken Swarm Optimization (MeCSO). The method integrates the Chicken Swarm Optimization (CSO) algorithm with local search method 2-opt of Croes [47]. The CSO algorithm was established by Meng et al. [48] and was inspired by the behavior of a swarm of chickens while looking for food. The algorithm had good efficiency in instances of smaller sizes; however, in larger instances, the method presented a great deal of difficulty. In this case, in observing the results, it is possible to notice that the algorithm has a tendency to become stuck in local optima and cannot go beyond certain points when considering larger instances.
Kurdi [32] investigated the impacts of selecting the genetic materials exchanged during the crossover with prior information about the critical paths that exist in the domain rather than randomly selecting them. Through the presented results, the author was able to present the impact that this area of study brings. According to the author, the basic proposed idea for the identification of the genes that hold the most important characteristics is a promising area of research and deserves further investigation since it produces significant improvements when applied in the JSSP.
Hamzadayı et al. [43] proposed to adapt some components of the Single Seekers Society (SSS) metaheuristic to deal with combinatorial optimization problems. The proposed SSS was applied in two types of production scheduling: the Flow Shop Scheduling Problem and the JSSP. The SSS algorithm consists of a metaheuristic that allows cooperation between different search heuristics. In this case, SSS incorporates several metaheuristics, such as Simulated Annealing, Threshold Accepting, Greedy Search (GS), and all information from each method works in an integrated way. To generate new solutions, SSS shares information via crossover and handles the search by integrating the information via neighborhood structure. SSS was not the method that obtained the best results for JSSP instances; however, it obtained competitive results and was able to find the best known value in 14 instances of the 20 instances that were tested. The method proved to be able to maintain its effectiveness in similar combinatorial optimization problems, obtaining satisfactory results in both the production and scheduling problems evaluated.
Yu et al. [44] proposed an improved hybrid PSO with non-linear inertia weight and Gaussian mutation (NGPSO) to solve the JSSP. The non-linear inertia weight was added to the method in order to improve the local search capability, and the Gaussian mutation was added in order to improve the global search capability. The method seeks to maintain a balance between local search and ensuring population diversity, thereby reducing the probability of the algorithm falling into a locally optimal solution. The experimental results indicated that the NGPSO algorithm had satisfactory performance and high capacity in JSSP resolution, finding the best known value in 38 of 62 instances. The techniques added to improve local and global search significantly improved the PSO; however, for more complex instances, a better balance between searches is needed.
A hybrid discrete Cuckoo Search (CS) method with Simulated Annealing, called DCSA, was proposed by Alkhateeb et al. [49] to handle JSSP instances. DCSA incorporates the SA optimization operators into the CS search algorithm. A combination of VNS and Lévy flight methods is used for a better exploration of the search space. In the evaluations performed, the DCSA presented a faster convergence than the other compared methods and was able to find the best known solution in 29 of the 34 instances selected for testing. The DCSA also presented a lower computational cost than the other methods. The authors attributed the improvement to the integration of SA into CS and to the use of different exploration methods, such as VNS and Lévy flight. However, the work did not consider the instances regarded as more difficult, in which most of the methods in the literature usually become stuck in local minima.
In the works of Viana et al. [13] and Viana et al. [14], a new GA approach with improved local search and multi-crossover techniques (mXLSGA) was proposed. Three operators specialized in local search were proposed: one built into the mutation operator; one with massive behavior; and another with multi-crossover routines. Viana et al. [15] proposed a genetic algorithm with the inclusion of an operator called “Transgenic”. This operator is based on the idea of genetically modified organisms and with the proposal to guide individuals, who have the worst fitness values in the population, to a region of the search space that would be more favorable for solving the problem.
This operator selects significant genes from individuals that have been well evaluated and inserts those genes into the worst individuals through a preprocessing step in the form of JSSP resolution simulations. In this work, we propose an alternative to the transgenic operator in the sense that a preprocessing step, which is usually expensive, is unnecessary, since the calculation of gene importance is performed during each generation of the method.
We can see through the bibliographic review that JSSP has attracted the attention of several researchers due to having combinatorial behavior and being classified as NP-Hard. Several approaches using meta-heuristics applied in JSSP have been proposed, and some have included intelligent agents, parallel populations or the hybridization of meta-heuristics with other techniques. It appears, through the works reported, that hybridization is an effective way to improve the performance and effectiveness of several meta-heuristics. Some forms of hybridization successfully applied in the literature are the union of local search strategies and the incorporation of specific knowledge of the domain in the search process.

3. Formulation of the Job Shop Scheduling Problem

We can define JSSP as a COP that has a set of N jobs that must be processed on a set of M machines. Furthermore, each job has a script that determines the order in which it must pass through the machines for its process to be completed. Each job processing per machine represents an operation and the objective of a JSSP can be interpreted as being the challenge of determining the optimal sequencing of operations with one or more performance measures as a guide. The components of this problem follow certain restrictions [9]:
  • Each job can be processed on a single machine at a time.
  • Each machine can process only one job at a time.
  • Operations are considered non-preemptive, i.e., they cannot be interrupted.
  • Configuration times are included in the processing times and are independent of the sequencing decisions.
In this work, we adopted makespan (MKS) as a performance measure. The MKS is the total time that a JSSP instance takes to complete the processing of a set of jobs on a set of machines considering a given operation sequence.
Mathematically, let us assume the following components of a JSSP:
  • J = {J_1, J_2, …, J_N} is the set of jobs.
  • M = {m_1, m_2, …, m_M} is the set of machines.
  • O = (O_1, O_2, …, O_{N·M}) is an operation sequence that sets the priority order for processing the set of jobs on the set of machines.
  • T_i(O) represents the time taken by job J_i to be processed by all machines in its script according to the operation sequence defined in O.
Then, according to [13,14,15], the MKS can be defined as the total time that all jobs take to be processed according to a given operation sequence, as presented in Equation (1).
MKS = max_i T_i(O).    (1)
It is worth mentioning that, in this work, a more intuitive notation was adopted for modeling the JSSP constraints and measures. However, mathematically elaborate formulations involving constrained optimization can be found in the specialized literature. For that, we suggest the survey of Xiong et al. [50] to the interested reader.

4. A New Genetic Improvement Operator Based on Frequency Analysis for GA Applied to JSSP

In this section, we present in detail how the proposed method works. We specify the idea of determining genetic relevance by analyzing the frequency of genes that represent good characteristics in individuals with adequate fitness values in the population, and, with that, we intend to contribute the following three innovations:
  • A new strategy for defining genetic relevance in GA chromosomes.
  • A new genetic improvement operator that is versatile and can be used in GA variations.
  • Improving the state of the art of JSSP benchmark results.

4.1. Genetic Representation

Our operator was developed to operate in all GA-like methods with minor modifications. Nevertheless, we base its formulation on a specific encoding. In this case, we use the "coding by operation order" [51]. In this representation [13], the feasible space of a JSSP instance defined by N jobs and M machines is formed by chromosomes c ∈ ℕ^(N·M) such that exactly M coordinates of c are equal to i (representing the job index i), for every i ∈ {1, 2, …, N}.
This encoding determines in the chromosome the operation priority with respect to machine allocation. For example, as in [14], let us assume c = (2, 1, 2, 2, 1, 1) as a feasible solution in a JSSP instance of dimension 2 × 3 (N = 2 and M = 3). Thus, according to the operations defined in c, the following actions must be conducted, each performed in parallel or as soon as its preceding actions have been completed:
  • (First) Job 2 must be processed by the first machine of its script.
  • (Second) Job 1 must be processed by the first machine of its script.
  • (Third) Job 2 must be processed by the second machine of its script.
  • (Fourth) Job 2 must be processed by the third machine of its script.
  • (Fifth) Job 1 must be processed by the second machine of its script.
  • (Sixth) Job 1 must be processed by the third machine of its script.
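To make the encoding concrete, the sketch below decodes an operation-order chromosome and computes its makespan. This is a minimal illustrative implementation, not the authors' code; the job scripts and processing times are hypothetical data (not from any benchmark), and job indices are shifted to 0-based for Python.

```python
# Hypothetical 2x3 instance (N = 2 jobs, M = 3 machines):
# scripts[i][k] -> machine of the k-th operation of job i
# times[i][k]   -> processing time of that operation
scripts = [[0, 1, 2],   # job 1 visits machines m1, m2, m3
           [1, 0, 2]]   # job 2 visits machines m2, m1, m3
times   = [[3, 2, 2],
           [2, 4, 3]]

def makespan(chromosome):
    """Decode a job-index chromosome greedily and return its makespan."""
    next_op = [0] * len(scripts)       # next operation index per job
    job_ready = [0] * len(scripts)     # completion time of each job's last op
    mach_ready = [0] * 3               # time at which each machine is free
    for job in chromosome:
        k = next_op[job]
        m = scripts[job][k]
        start = max(job_ready[job], mach_ready[m])   # job and machine free
        finish = start + times[job][k]
        job_ready[job] = mach_ready[m] = finish
        next_op[job] = k + 1
    return max(job_ready)              # MKS = max_i T_i(O)

# The example chromosome c = (2, 1, 2, 2, 1, 1), 0-based:
c = [1, 0, 1, 1, 0, 0]
print(makespan(c))   # -> 12 for this hypothetical data
```

Each gene consumes the next operation of the corresponding job, so any chromosome with exactly M occurrences of each job index decodes to a feasible schedule.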

4.2. Fitness Function

The encoding used makes it natural to define the fitness function of the problem as the makespan of a JSSP instance given according to the stipulated operation sequence—that is, the fitness function [15] used is given according to Equation (2):
F : 𝒪 → ℝ,  O ↦ F(O) := max_i T_i(O),    (2)
in which 𝒪 is the set of all possible operation sequences for the defined JSSP instance.
In this way, for this fitness function, the MKS of the JSSP instance is calculated according to a given operation sequence, and then the meta-heuristic must look for an operation sequence in which the MKS is as small as possible and, consequently, the set of jobs must be processed by the set of machines taking the shortest possible time.

4.3. Proposed Genetic Improvement Based on Frequency Analysis Operator

In this work, we propose a new genetic improvement operator for evolutionary algorithms: the GIFA operator. The operator is based on a frequency analysis matrix calculated during the iterations of each GA. GIFA aims to determine which genes on a chromosome can direct individuals with poor fitness values to better solutions and better regions of the search space. GIFA has two main stages: the first is the construction of the representative individual, that is, an individual determined by the configuration of the most frequent genes among the best individuals in the population; the second is the use of the representative individual in the transgenic process, that is, genetic manipulation through the insertion of specific genes of the representative individual into genes of the worst individuals in the population. Below, we present these steps in detail.
Stage 1: Composition of the representative individual. Initially, the portion of the population that presents the best fitness values is selected. Specifically, we select the N_Top individuals considered to be good examples of solutions in the population. This selection is made according to an ordering based on the fitness values of the individuals in the population, as presented in Algorithm 1.
Algorithm 1 Defining the N_Top best individuals.
Input: N_Pop          Number of chromosomes in the population
P = {p_1, p_2, …, p_{N_Pop}}   Population
F             Fitness function
N_Top          Number of good individuals
1: 𝓕 := {}
2: for i = 1 to N_Pop do
3:   ω_i := F(p_i)
4:   𝓕 := 𝓕 ∪ {ω_i}
5: end for
6: (ω_{i_1}, ω_{i_2}, …, ω_{i_{N_Pop}}) := f_sort(𝓕)    ▹ The function f_sort(·) sorts the elements of a given set in ascending order. In this case, i_1 is the index of the lowest fitness, i_2 is the index of the second-lowest fitness and so on, up to i_{N_Pop}, which is the index of the highest fitness value.
7: (p_{i_1}, p_{i_2}, …, p_{i_{N_Pop}})    ▹ We arrange individuals according to their fitness, from the best to the worst.
8: for j = 1 to N_Top do
9:   c_j := p_{i_j}    ▹ We define the N_Top best individuals.
10: end for
Output: c_1, c_2, …, c_{N_Top}    N_Top best individuals
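Algorithm 1 amounts to sorting the population by ascending fitness and keeping the first N_Top individuals. A minimal Python sketch follows; the toy population and its fitness values are illustrative stand-ins, not data from the paper.

```python
def select_best(population, fitness, n_top):
    """Sort individuals by ascending fitness and return the n_top best
    (lowest makespan is best, as in the JSSP fitness function)."""
    ranked = sorted(population, key=fitness)   # plays the role of f_sort
    return ranked[:n_top]

# Toy usage: individuals carry a precomputed fitness in position 0.
pop = [(5, 'p1'), (2, 'p2'), (8, 'p3'), (3, 'p4')]
best = select_best(pop, fitness=lambda p: p[0], n_top=2)
# best -> [(2, 'p2'), (3, 'p4')]
```

In a real GA loop, `fitness` would be the makespan decoder applied to each chromosome, evaluated once per individual and cached, since sorting calls the key function for every element.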
In the sequence, a frequency vector v_i ∈ ℝ^(N·M) is associated with each job index i, in which each coordinate stores the number of occurrences of job i at exactly that position in the chromosomes selected for comparison. In Algorithm 2, the construction of the vectors v_i is detailed.
In Figure 1, an example of the calculation of the frequency vectors v_i is presented when considering four individuals c_1, c_2, c_3 and c_4 with the best fitness values in a JSSP instance of dimension 3 × 2.
Algorithm 2 Calculating the genetic frequency of the best adapted individuals.
Input: c_1, c_2, …, c_{N_Top}   N_Top best individuals
N × M        Dimension of the JSSP instance
1: for i = 1 to N do
2:   v_i := 0_{N·M}    ▹ Initializing the frequency vectors. In this case, 0_{N·M} is the null vector with N·M coordinates.
3: end for
4: for i = 1 to N do
5:   for j = 1 to N·M do    ▹ The value N·M is the number of coordinates of the chromosomes.
6:     for k = 1 to N_Top do
7:       if c_{k,j} = i then    ▹ Is job i in the j-th coordinate of c_k?
8:         v_{i,j} := v_{i,j} + 1    ▹ If so, add 1 to the j-th coordinate of the frequency vector v_i.
9:       end if
10:     end for
11:   end for
12: end for
Output: v_1, v_2, …, v_N   Frequency vectors
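Algorithm 2 can be sketched compactly in Python. The instance and the set of "best individuals" below are hypothetical toy data used only to illustrate the counting.

```python
def frequency_vectors(best, n_jobs, n_coords):
    """For each job index i (1-based, as in the text) and each chromosome
    coordinate j, count how often job i occupies coordinate j among the
    N_Top best individuals."""
    v = {i: [0] * n_coords for i in range(1, n_jobs + 1)}
    for chrom in best:
        for j, job in enumerate(chrom):
            v[job][j] += 1          # v_{i,j} := v_{i,j} + 1
    return v

# Toy 2x2 instance (N·M = 4 coordinates) with three good individuals:
best = [[1, 2, 1, 2],
        [1, 1, 2, 2],
        [2, 1, 1, 2]]
v = frequency_vectors(best, n_jobs=2, n_coords=4)
# v[1] -> [2, 2, 2, 0]; v[2] -> [1, 1, 1, 3]
```

Iterating once over the chromosomes replaces the triple loop of the pseudocode, but produces the same counts in O(N_Top · N·M) time.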
Once the vector v_i has been made for every job index i, the representative individual is defined as a chromosome whose coordinates are each determined by the job with the highest frequency at that coordinate. That is, each gene (coordinate) of the representative individual is defined as the job index that is most present at this coordinate in the best individuals in the population. It is also possible to establish an order of genetic relevance according to the frequency vectors v_i.
That is, it is possible to define which genes of the representative individual are more suitable to be transferred in the process of genetic improvement. Such relevance is also defined according to the frequency with which the jobs appear in each coordinate of the best individuals, so that genes presenting the same job in many good individuals characterize a "trend" that leads to good fitness values. Therefore, these genes must be considered relevant, since they describe a positive characteristic in several individuals that stand out in the population. Mathematically, the representative individual and its genetic relevance are constructed according to the following procedure:
  1. Let c be the representative individual and w a vector that assigns a score to each of its coordinates, both initially null. In the following items, the coordinates of c and w are constructed.
  2. We define I 1 as arg max i { v i , 1 } . That is, I 1 is the index of the job that has the highest frequency in the first coordinate of the best individuals. Therefore, the first coordinate of the representative individual is defined as I 1 . Mathematically,
     c 1 : = I 1 .
     In addition, a score w 1 , defined as the maximum frequency observed in the first coordinate of the best individuals, is associated with the first coordinate of c . That is,
     w 1 : = max i { v i , 1 } = v I 1 , 1 .
  3. Assign the value 2 to j.
  4. We define I j as arg max i { v i , j } ; that is, I j is the most frequent job index at coordinate j among the N Top individuals. However, in order to guarantee the feasibility of the representative individual, it is necessary to establish two more restrictions:
     4.1 If the job I j does not yet occupy M coordinates of c , then I j is assigned to the j-th coordinate of the representative individual. That is,
     c j : = I j .
     In this case, the score associated with the j-th coordinate of the representative individual is the maximum frequency observed in the j-th coordinate of the best individuals. That is,
     w j : = max i { v i , j } = v I j , j .
     4.2 Otherwise, to guarantee the feasibility of c , the frequencies of the job index I j are disregarded, since it already occupies M coordinates of c and, therefore, cannot occupy any more of them. To do so, we cancel its frequency vector, that is,
     v I j : = 0 .
     Then, to make a new attempt, we return to item 4.
  5. If j < N · M , then j : = j + 1 and we return to item 4. Otherwise, the procedure is finished, and we have the representative individual and its respective genetic score, i.e., the pair ( c , w ) .
It is not necessary to project the representative individual onto the feasible space of the problem since, due to its construction (item 4 above), it is already feasible. In Figure 2, an example of the calculation of the representative individual ( c ) and the relevance of its genes (w) in a JSSP instance of dimension 4 × 3 is presented, taking, as the best individuals, the  N Top = 5 individuals with the lowest fitness values available in the population.
The details of the construction of the representative individual and the vector of relevance of its genes are presented in Algorithm 3.
Algorithm 3 Calculating the representative individuals and their genetic relevance.
Input: v 1 , v 2 , … , v N    Frequency vectors
N × M        Dimension of JSSP instance
1:  c = 0 N · M         ▹ Initialize the representative individual as being null.
2:  I 1 : = arg max i v i , 1 ▹ Most frequent job in the first coordinate of the best individuals.
3:  w 1 : = max i v i , 1   ▹ Number of times that the most frequent Job appears in the first coordinate of the best individuals.
4:  c 1 : = I 1     ▹ Job I 1 occupies the first coordinate of the representative individual.
5:  j = 2   ▹ Concerning the next coordinates, we proceed similarly but guaranteeing feasibility.
6: while j ≤ N · M do
7:   I j : = arg max i v i , j   ▹ Let us calculate the most recurring job in the j-th coordinate of the frequency vectors.
8:   countJob : = 0 ▹ To guarantee feasibility, each job must be in only M coordinates of the representative individual.
9:  for  k = 1 to N · M  do
10:   if  c k = I j  then
11:     countJob : = countJob + 1
12:   end if
13:  end for
14:  if  countJob < M then ▹ In case of feasibility, then we define the j-th coordinate of c .
15:    w j : = max i v i , j
16:    c j : = I j
17:    j = j + 1
18:  else ▹ Otherwise, the next most recurring job in the j-th coordinate of the frequency vectors must be evaluated.
19:    v I j = 0 N · M ▹ Since I j makes c unfeasible, then we must disregard it for the next calculations.
20:  end if
21: end while
 Output: c   Representative individual
w  Genetic relevance vector
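A Python sketch of Algorithm 3, under the same assumptions and encoding as before, might read as follows. Restricting the argmax to jobs not yet used M times is equivalent to zeroing the frequency vector of an exhausted job, since an exhausted job can never be selected again:

```python
from typing import List, Tuple

def representative(v: List[List[int]], N: int, M: int) -> Tuple[List[int], List[int]]:
    """Algorithm 3: build the representative individual c and its
    gene-relevance vector w from the frequency vectors v, where
    v[i - 1] is the frequency vector of job i."""
    L = N * M
    c, w = [0] * L, [0] * L
    used = [0] * N                       # coordinates already taken by each job
    for j in range(L):
        candidates = [i for i in range(N) if used[i] < M]  # feasible jobs only
        i_best = max(candidates, key=lambda i: v[i][j])    # most frequent at j
        c[j] = i_best + 1                # job indices are 1-based in the paper
        w[j] = v[i_best][j]
        used[i_best] += 1
    return c, w
```

By construction, the returned chromosome contains each job exactly M times, so no projection step is needed, as stated in the text.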

4.3.1. Stage 2: Use of the Representative Individual in Genetic Improvement

Once the representative individual and the relevance of each of its genes are calculated, we propose that its most relevant genes be transferred to the worst individuals in the population, thus simulating a mechanism of genetic improvement, or transgenics. For this, we take P Worst : = { x 1 , x 2 , … , x N Worst } as the set of the N Worst worst individuals in a population. Subsequently, the N Genes most relevant genes of the representative individual are transferred to all individuals in P Worst , maintaining their original positions. This procedure can generate infeasible solutions.
Thus, it is necessary to conduct a correction, or projection, process on the individuals resulting from this operation. For this, we carry out the projection through the Hamming distance [52], modifying only the genes that were not received from the representative individual. In this way, the individuals generated in this procedure are projected onto the feasible set of the problem, giving rise to the genetically improved individuals P Improved = { x ^ 1 , x ^ 2 , … , x ^ N Worst } .
It is also necessary to establish how many genes will be transferred from the representative individual to the individuals of P Worst . For this, we follow a procedure similar to that of Viana, Morandin Junior and Contreras [15], which empirically determines that an adequate number of genes for the genetic improvement process is given by the square root of the number of coordinates of the chromosome. In this way, the process remains advantageous and does not cause premature convergence in the population. Thus, in this work, N Genes is defined as round ( √( N · M ) ) . In Figure 3, we present an example of the determination of the most significant genes of a representative individual c , given the scores w of its genes, for a JSSP instance of dimension 4 × 3 .
In Algorithm 4, we present with comments all the steps of the proposed population improvement method.
Algorithm 4 Population improvement using representative individual and genetic relevance.
Input: c                   Representative individual
w                 Genetic relevance vector
P Worst = { x 1 , x 2 , … , x N Worst }    The N Worst worst individuals
N × M               The JSSP dimension
1:  N Genes : = round ( √( N · M ) )         ▹ Number of genes to be transferred.
2: for i = 1 to N Genes  do
3:   J i : = arg max k w k      ▹ Coordinates of c that represent its most important genes.
4:   w J i : = 0
5: end for
6: for i = 1 to N Genes  do
7:  for  k = 1 to N Worst  do
8:    x k , J i : = c J i      ▹ Transferring the best genes from c , at their original coordinates, to individuals in P Worst .
9:  end for
10: end for
11:  P Improved : = { }
12: for k = 1 to N Worst  do
13:   x ^ k : = proj Hamming x k ▹ Correcting infeasible solutions with Hamming projection.
14:   P Improved : = P Improved ∪ { x ^ k }
15: end for
 Output:   P Improved = { x ^ 1 , x ^ 2 , … , x ^ N Worst }   Improved individuals
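Under the same assumptions, Algorithm 4 might be sketched as below. Since we do not reproduce the Hamming-distance projection of [52] here, a simple counting repair, which likewise modifies only non-transferred genes, stands in for it:

```python
import math
from typing import List

def improve(c: List[int], w: List[int], worst: List[List[int]],
            N: int, M: int) -> List[List[int]]:
    """Algorithm 4: transfer the N_Genes most relevant genes of the
    representative individual c (relevance w) to every individual in
    `worst`, keeping each gene at its original coordinate, and then
    repair feasibility (each job must occur exactly M times)."""
    L = N * M
    n_genes = round(math.sqrt(L))        # empirical rule following [15]
    top = sorted(range(L), key=lambda j: w[j], reverse=True)[:n_genes]
    improved = []
    for x in worst:
        y = x[:]
        for j in top:
            y[j] = c[j]                  # transfer, preserving the coordinate
        count = {i: y.count(i) for i in range(1, N + 1)}
        deficit = [i for i in range(1, N + 1)
                   for _ in range(max(0, M - count[i]))]
        for j in range(L):               # repair: only non-transferred genes
            if j not in top and count[y[j]] > M:
                count[y[j]] -= 1
                y[j] = deficit.pop()     # swap a surplus job for a missing one
        improved.append(y)
    return improved
```

Because Python's `sorted` is stable, ties in relevance are broken by coordinate order; the paper does not specify a tie-breaking rule, so this is an assumption of the sketch.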
Assuming N Worst = 3 and P Worst = { x 1 , x 2 , x 3 } as the set of the 3 worst individuals in a population, Figure 4 shows the genetic improvement process that transfers the N Genes best genes from the representative individual c to all individuals in the set P Worst .
The genetic improvement procedure must be performed after the standard operators of the GA, or of the GA-like method used, right after the generation of a new population. Thus, the set P Worst must be formed by individuals of the new population of the method. In addition, after applying genetic improvement, the affected individuals are evaluated for improvement or worsening, so that the genetic changes are only kept for individuals whose fitness improved. That is, only individuals who gained an advantage in the genetic improvement process are replaced in the population; the other individuals are discarded and replaced by new individuals generated randomly, as detailed in the next subsection.

4.3.2. Generating New Individuals with the Lévy Flight Strategy

The proposed genetic improvement strategy was developed to be as versatile as possible, in the sense that it can be attached to any GA-type method. Thus, the proposed operator (GIFA) must be used after the execution of the original operators of the considered algorithm in order to guide the solutions that were not able to stand out using such operators. In addition, the proposed genetic improvement operator must be used together with a strategy for maintaining genetic diversity, so as not to contribute to premature genetic stagnation of the population.
One of the most commonly used routines in the literature for this purpose is the replacement of individuals from the population with new individuals generated from the Lévy distribution. Intuitively, in [53] this distribution was associated with random walks whose step lengths follow a power-law tail, that is, Lévy ( s ) ∼ | s | − 1 − β , with β ∈ ( 0 , 2 ] . Mathematically, as in [54], a random number generated by a Lévy distribution obeys the following density:
$$\text{Lévy}(s, \gamma, \mu) = \begin{cases} \sqrt{\dfrac{\gamma}{2\pi}}\, \dfrac{e^{-\frac{\gamma}{2(s-\mu)}}}{(s-\mu)^{3/2}}, & 0 < \mu < s < \infty, \\[4pt] 0, & \text{otherwise}, \end{cases}$$
where μ is the minimum step of the random walk and γ is a scale factor.
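For sampling, one convenient route (an assumption of ours, not stated in the paper) is the classical identity that if Z is standard normal, then μ + γ/Z² follows exactly the density above:

```python
import random

def levy_step(gamma: float, mu: float) -> float:
    """Draw one sample from the Lévy(s, gamma, mu) density above:
    mu is the minimum step and gamma the scale factor."""
    z = random.gauss(0.0, 1.0)
    while z == 0.0:                      # guard against division by zero
        z = random.gauss(0.0, 1.0)
    return mu + gamma / z ** 2           # heavy-tailed: mostly small steps,
                                         # occasionally very large ones
```

The heavy tail is what makes the distribution attractive for diversity maintenance: most perturbations are small, but occasional very large ones help escape local optima.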
In this operator, to generate new individuals, a function f shuffle : O → O is used, which is defined as an index shuffling operator whose random numbers are drawn from a Lévy distribution. Specifically, it is necessary to evaluate the individuals who should receive genetic improvement before and after the procedure, and those who do not show improvement should be replaced by new individuals generated with f shuffle . In detail, the steps that define the genetic improvement operator are presented in Algorithm 5 below.
Algorithm 5 Population improvement with diversity maintenance.
Input: c                   Representative individual
w                 Genetic relevance vector
P Worst = { x 1 , x 2 , … , x N Worst }    The N Worst worst individuals
N × M               The JSSP dimension
F                  Fitness function
1:  P Improved : = Algorithm 4 ( c , w , P Worst , N , M ) ▹ The set of individuals improved by the genetic improvement process is obtained through Algorithm 4.
2: for i = 1 to N Worst do ▹ All improved individuals x ^ i should be evaluated to ensure that the fitness has improved.
3:   F no improvement : = F ( x i )              ▹ x i is the original individual.
4:   F improvement : = F ( x ^ i )               ▹ x ^ i is the improved individual.
5:  if  F no improvement ≤ F improvement then    ▹ In case there is no improvement, the individual in question must be replaced by a new individual generated from the random permutation, with this being defined by the Lévy distribution, of a feasible solution.
6:     P Improved : = P Improved ∖ { x ^ i } .   ▹ The individual from P Improved is removed.
7:     x ^ i : = f shuffle ( ( 1 , 1 , … , 1 , 2 , 2 , … , 2 , … , N , N , … , N ) ) ▹ The new individual is generated using the Lévy distribution.
8:     P Improved : = P Improved ∪ { x ^ i } . ▹ The generated individual assumes its position in P Improved .
9:  end if
10: end for
 Output:   P Improved = { x ^ 1 , x ^ 2 , … , x ^ N Worst }   Improved individuals
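The acceptance step of Algorithm 5 can be sketched as follows; plain `random.shuffle` stands in for the Lévy-driven shuffler f shuffle of the paper, and the fitness function (makespan, lower is better) is passed in as a callback:

```python
import random
from typing import Callable, List

def select_survivors(pop_worst: List[List[int]], improved: List[List[int]],
                     fitness: Callable[[List[int]], float],
                     N: int, M: int) -> List[List[int]]:
    """Algorithm 5 acceptance step: keep an improved individual x_hat
    only if its fitness beats that of the original x; otherwise replace
    it with a random permutation of the base chromosome
    (1, ..., 1, 2, ..., 2, ..., N, ..., N)."""
    result = []
    for x, x_hat in zip(pop_worst, improved):
        if fitness(x_hat) < fitness(x):
            result.append(x_hat)               # improvement: keep it
        else:                                  # no improvement: fresh individual
            fresh = [i for i in range(1, N + 1) for _ in range(M)]
            random.shuffle(fresh)              # stand-in for f_shuffle
            result.append(fresh)
    return result
```

A freshly shuffled base chromosome is feasible by construction, since it contains each job exactly M times.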

4.4. Scheme of Use for Proposed Operators: Algorithm Structure

The proposed genetic improvement strategy was developed to be as versatile as possible in the sense that it can be attached to any GA-like method. Thus, the proposed operator must be used after the execution of the original operators of the method considered in order to guide solutions that were not able to stand out through the traditional strategies defined in the method. In other words, to use the proposed operator in a given GA-like method, we must obey the following steps:
  • Define the initial parameters and specifics of the chosen GA-like method.
  • Execute the operators that make up the GA-like method, for example, the crossover, mutation, local search and new-population-creation operators.
  • At the end of an iteration involving the traditional operators of the selected GA-like method, we make a sub-population P Worst with the worst N Worst individuals in the current population.
  • At the same time, we select the best N Top individuals in the population to compose the representative individual.
  • Build the representative individual using the strategy described in Stage 1 of Section 4.3.
  • Determine a relevance scale to the genes of the representative individual.
  • Conduct the genetic improvement of the P Worst individuals using the most relevant N Genes genes of the representative individual.
  • Replace in the current population of the method all individuals who obtained an improvement in the fitness value in the process of genetic improvement and return to the execution of the original operators of the considered GA-like method. Those who have not improved should be replaced by new individuals randomly generated according to the Lévy distribution, following the procedure of Al-Obaidi and Hussein [55].
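The scheme above amounts to a thin wrapper around the base method's iteration, which might be organized as follows (all callbacks are placeholders for the chosen GA-like method; lower fitness, i.e., makespan, is better):

```python
from typing import Callable, List

def run_with_gifa(init_pop: Callable[[], List[list]],
                  ga_step: Callable[[List[list]], List[list]],
                  gifa: Callable[[List[list], List[list]], List[list]],
                  fitness: Callable[[list], float],
                  n_top: int, n_worst: int, generations: int) -> list:
    """After each iteration of the base GA-like method (ga_step:
    crossover, mutation, local search, etc.), the GIFA operator
    rewrites the n_worst worst individuals using the n_top best."""
    pop = init_pop()
    for _ in range(generations):
        pop = ga_step(pop)                 # the method's own operators
        pop.sort(key=fitness)              # best individuals first
        best = pop[:n_top]
        pop[-n_worst:] = gifa(best, pop[-n_worst:])  # guide the stragglers
    return min(pop, key=fitness)
```

Here `gifa` would bundle Algorithms 2 through 5: frequency analysis of `best`, construction of the representative individual, gene transfer, and Lévy-based replacement of non-improving individuals.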
In Figure 5, we present a flowchart that illustrates the sequence of steps of the proposed genetic improvement process.

5. Implementation and Experimental Results

5.1. Experimental Environment

To conduct the experiments, we considered two different situations: in the first, we evaluated the impact that the proposed operator causes on five GA-like methods, all of which were obtained using the framework of Viana, Morandin Junior and Contreras [13], on eight JSSP instances of varying complexity; in the second, we compared the ability of the proposed operator to find good solutions with recent methods in the literature on 58 JSSP instances that compose the benchmark of the area, with 3 from Fisher and Thompson (FT) [45], 40 from Lawrence (LA) [46], 10 from Applegate and Cook (ORB) [56] and 5 from Adams, Balas and Zawack (ABZ) [57]. In detail, in this second situation, we consider relevant and recent methods that address the JSSP on the same instances and, where available, that were presented in papers published in the last three years. In all, we consider the following methods for comparison: mXLSGA [13], NGPSO [44], SSS [43], GA-CPG-GT [32], DWPA [58], MeCSO [42], GWO [59], IPB-GA [31], NIMGA [60], aLSGA [20], PaGA [61]. The proposed algorithm is coded in MATLAB, and we performed the evaluations on a computer with a 2.4 GHz Intel(R) Core i7 CPU and 16 GB of RAM.

5.2. Results and Comparison with Other Algorithms

For the first testing situation, we consider five variations of the Viana, Morandin Junior and Contreras [13] framework: a basic GA (GA), GA with Search Area Adaptation (GSA) [30], GA with Local Search (LSGA) [29], GA with Elite Local Search and agent adjustment (aLSGA) [20] and GA with multi-crossover and massive local search (mXLSGA) [13]. To each of these versions, which represent the state of the art in GA-type techniques for the JSSP, the proposed genetic improvement operator, GIFA, is added, and evaluations were performed on eight JSSP instances of varying complexity that compose the benchmark of the area, namely 1 from Fisher and Thompson (FT) [45] and 7 from Lawrence (LA) [46]: FT 06, of dimension 6 × 6 and best known solution (BKS) equal to 55; LA 01, of dimension 10 × 5 and BKS equal to 666; LA 06, of dimension 15 × 5 and BKS equal to 926; LA 11, of dimension 20 × 5 and BKS equal to 1222; LA 16, of dimension 10 × 10 and BKS equal to 945; LA 23, of dimension 15 × 10 and BKS equal to 1032; LA 26, of dimension 20 × 10 and BKS equal to 1218; and LA 31, of dimension 30 × 10 and BKS equal to 1784. Thus, each GA-like method considered has a version with the proposed operator, represented by the acronym GIFA next to its standard acronym.
Our main purpose was to evaluate the impact of using GIFA in each of the GA-like methods; therefore, we kept the best possible configuration of each of the methods available in the original works, with the exception that every method had 100 individuals in its population and was run for 100 generations. Furthermore, we added to each of them the configuration referring to GIFA, which is defined as follows: N Top = N Worst = 10 . It is worth noting that the choice of these last two parameters is not arbitrary. We experimentally verified that this would be the fairest possible common configuration when considering all the GA-like methods mentioned here. For that, we analyzed some specific performance metrics. In detail, let S Φ ( N Top , N Worst ) be the solution obtained by the method Φ without using the proposed operator and S Φ+GIFA ( N Top , N Worst ) the solution obtained using the GIFA operator, both of which use the same values for N Top and N Worst . Furthermore, we define Imp Φ ( N Top , N Worst ) as the improvement that using the GIFA operator gives to the Φ method considering N Top and N Worst :
$$\operatorname{Imp}_{\Phi}(N_{\text{Top}}, N_{\text{Worst}}) := \max\left\{ \frac{S_{\Phi}(N_{\text{Top}}, N_{\text{Worst}}) - S_{\Phi+\mathrm{GIFA}}(N_{\text{Top}}, N_{\text{Worst}})}{S_{\Phi}(N_{\text{Top}}, N_{\text{Worst}})},\ 0 \right\}.$$
The objective is to analyze an average of improvement values Imp Φ ( N Top , N Worst ) in several executions of the method on the same specific instance and on the same parameter configuration. Furthermore, we consider an average value for this improvement measure depending on the methods considered and the configuration given for the GIFA parameters: the value AvgImp ( N Top , N Worst ) , defined in Equation (5):
$$\operatorname{AvgImp}(N_{\text{Top}}, N_{\text{Worst}}) := \frac{1}{5} \sum_{\Phi \in \{\text{GA},\, \text{GSA},\, \text{LSGA},\, \text{aLSGA},\, \text{mXLSGA}\}} \frac{1}{N_{\text{run}}} \sum_{i=1}^{N_{\text{run}}} \operatorname{Imp}_{\Phi}^{i}(N_{\text{Top}}, N_{\text{Worst}}),$$
where each method is executed N run times and Imp Φ i ( N Top , N Worst ) represents the GIFA improvement with respect to i-th execution of that method.
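The two measures can be computed directly from recorded makespans; a sketch (function and variable names are ours):

```python
from typing import Dict, List, Tuple

def imp(s_phi: float, s_phi_gifa: float) -> float:
    """Imp: relative makespan improvement GIFA brings to method Phi,
    clipped at zero (a worsening counts as no gain)."""
    return max((s_phi - s_phi_gifa) / s_phi, 0.0)

def avg_imp(runs_by_method: Dict[str, List[Tuple[float, float]]]) -> float:
    """AvgImp: mean over methods of the mean improvement over N_run
    independent runs; maps a method name to (S_Phi, S_Phi+GIFA) pairs."""
    per_method = [
        sum(imp(a, b) for a, b in runs) / len(runs)
        for runs in runs_by_method.values()
    ]
    return sum(per_method) / len(per_method)
```

For instance, a method whose makespan drops from 1000 to 950 under GIFA contributes an improvement of 0.05 to its per-method average.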
In Figure 6, we represent through heatmaps the values of AvgImp ( N Top , N Worst ) calculated considering N run = 35 runs of each GA-like method and with respect to three example instances: LA01, which is a simple instance; and instances LA21 and LA25, which are considered highly difficult. In addition, the following set of possible N Top and N Worst configurations was considered for the creation of these images: { 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 20 , 30 , 40 , 50 } . Under these conditions, we noticed that the operator contributed more intensely on the complex instances than on the simple one.
This is because the GA-like methods can find good solutions for instance LA01 without the help of the operator; however, they have great difficulty finding good solutions for instances LA21 and LA25. Therefore, we noticed that the influence of the proposed operator is greater for more complex instances. Furthermore, we noticed a tendency for the operator not to positively influence the GA-like method when the values chosen for N Top and N Worst are either too small (close to 1) or too large (greater than 15). This is because very low values for these parameters reduce the functionality and influence of the operator: a low N Top gives it only a small population sample to learn from, and a low N Worst limits its influence over poorly adapted individuals.
Furthermore, if the values assigned to these parameters are too large, the diversity of the population is compromised, since a large portion of the population, defined by N Worst individuals, receives the genes defined by the other portion of the population, formed by N Top individuals. For this reason, we see a concentration of higher AvgImp ( N Top , N Worst ) averages in the central regions of the heatmaps, that is, when N Top and N Worst assume values close to 10 individuals. Therefore, we consider this configuration for the following analyses.
In this case, the best value, the worst value, the mean and the standard deviation (SD) of the makespan values calculated at 35 independent executions of each method on the eight JSSP instances considered are presented in Table 1. The number of times the method reached the best known solution is also presented (Number of optima); the number of iterations (Iteration of the Optimum) required to reach the best known solution; and the average time (Time (s)) in seconds that the technique takes to perform 100 iterations.
In Table 1, it is possible to observe that the operator made all methods more stable, reducing the mean and standard deviation in all situations in which improvement was possible. Furthermore, in most cases, the addition of the GIFA operator resulted in a decrease in the worst makespan value found. In fact, the operator was unable to improve this indicator only with respect to the mXLSGA method and only on three instances: LA 23, LA 26 and LA 31. An analogous phenomenon can be observed with respect to the best value obtained by each technique since, in most cases, the use of the GIFA operator makes the original technique able to reach a value closer to the best known solution for the evaluated instance. It is also possible to observe that the use of the GIFA operator increased the number of best known solutions found by the techniques in all instances.
This fact is observed mainly in instances of lesser complexity. However, in more complex instances, specifically from instance LA 16 onward, the proposed operator was able to help a base technique find the best known solution only in the cases of the aLSGA and mXLSGA techniques, the latter being able to find these values even without the genetic improvement operator. This serves as an indication that the proposed operator offers a considerable increase in the stability of the method; however, the ability to explore the search space still depends strongly on the original technique. This occurs because GIFA guides individuals with bad makespan values toward regions where individuals with good fitness values are known to exist, in order to increase local exploration and, therefore, find good solutions; however, it is up to the original technique to indicate good search regions. In addition, it is worth noting that the improvement the GIFA operator provides to a base technique adds little to the computational time required for its execution, which stays between 0.2 and 0.3 s, unlike the transgenic operator of [15], which requires an expensive preprocessing and simulation step to determine genetic relevance.
Thus, the second situation considered serves as an experiment in this sense, allowing us to evaluate the ability of the proposed operator to increase the search and exploration power of a given technique. For this, we added the proposed GIFA operator to a technique already known to be effective in finding good solutions in the JSSP instances that compose the current benchmark: the mXLSGA [13]. In this case, we evaluate GIFA-mXLSGA on 3 FT instances, 40 LA instances, 10 ORB instances and 5 ABZ instances. In Table 2, Table 3, Table 4 and Table 5, we present the results derived from 10 independent executions of our method on each instance. The columns indicate, respectively, the instance that was tested, the instance size (number of Jobs × number of Machines), the optimal solution of each instance, the results achieved by each method considering all the executions (best solution found and error percentage, Equation (6)) and the mean of the error with respect to each benchmark (MErr).
$$E_{\%} = 100 \times \frac{\text{Best} - \text{BKS}}{\text{BKS}},$$
in which E % is the relative error, "BKS" is the best known solution and "Best" is the best value obtained by executing the algorithm 10 times for each instance.
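In code, the relative error reads simply as follows (for example, LA 01 has BKS 666, so a best-found makespan of 678 gives an error of about 1.8%):

```python
def relative_error(best: int, bks: int) -> float:
    """E% of Equation (6): percentage gap between the best makespan
    found by the algorithm and the best known solution (BKS)."""
    return 100.0 * (best - bks) / bks
```

A value of 0 means the best known solution was reached; MErr is then the mean of this quantity over all instances of a benchmark.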
Analyzing Table 2, Table 3, Table 4 and Table 5, we can see that the proposed genetic improvement operator was able to improve the search capability of the mXLSGA method (the GA-like method with the best results for JSSP). Specifically, considering only the LA instances, the use of the proposed operator was able to reduce the magnitude of the mean relative error by 0.12, which corresponds to a reduction of 19.67% of its value. In other words, the GIFA operator made the mXLSGA method able to find the best known makespan in 72.5% of the LA instances, obtaining a mean relative error of 0.50, the lowest among all methods. With respect to FT instances, the proposed operator GIFA did not compromise the search capability of mXLSGA, causing the best known solutions to be found in all instances. In the case of ORB instances, GIFA improved the performance of mXLSGA in ORB05 instance and made the method capable of finding the best known solution in ORB06, reducing the average error of the technique from 0.54 to 0.46. Furthermore, with respect to the ABZ instances, the average error of GIFA-mXLSGA is less than half the error of mXLSGA, since the proposed genetic improvement operator improved the results of the base technique in the ABZ07 and ABZ09 instances. In summary, some points can be highlighted when analyzing the results referring to Table 2, Table 3, Table 4 and Table 5:
  • There was no worsening of the results in any instance with the use of the GIFA operator;
  • The GIFA-mXLSGA method had the smallest relative error;
  • The GIFA operator made mXLSGA able to find the best known solution in instance LA 22;
  • GIFA operator improved mXLSGA results in 7 LA instances;
  • The GIFA operator made mXLSGA able to find the best known solution in instance ORB06 and improved the solution obtained in ORB05;
  • The GIFA operator reduced the average error of mXLSGA by 53% in ABZ instances.
With these results, it can be seen that in the tested JSSP instances, the proposed genetic improvement operator is effective in increasing the efficiency of the mXLSGA base technique in finding good solutions.

6. Conclusions

To obtain advances in the solution of instances of the well-known JSSP, this work proposed the development of a genetic improvement operator based on the analysis of the frequency of genes present in well-adapted individuals of the population: the GIFA operator. This operator was proposed in a versatile way so that it can be easily integrated into any GA-like method. In this work, its performance was proven in 58 well-known JSSP instances of different complexities. The considered instances were FT [45], LA [46], ORB [56] and ABZ [57]. GIFA results were compared with other approaches in related works: mXLSGA [13], NGPSO [44], SSS [43], GA-CPG-GT [32], DWPA [58], GWO [59], IPB-GA [31], aLSGA [20], among others.
To evaluate GIFA's performance, the operator was attached to five different GA-like metaheuristics that represent the state of the art in the specialized literature, mXLSGA among them. All techniques and their versions with GIFA were executed 35 times, and some facts were observed. In this case, during the evaluations, we found that GIFA made all the metaheuristics more stable, since it reduced the mean and standard deviation in all cases where this was possible.
In addition, the worst value presented by each technique during its executions was also reduced in most cases. Something similar occurred with the indicator of the best solution found by each technique. These facts corroborate the assumption that GIFA helps GA-like metaheuristics find better solutions. However, the operator is strongly influenced by the search capability of the GA-like method itself, as the operator guides poorly adapted individuals to good regions of the search space, while it is up to the base technique to detect these regions. In the second situation, the ability of the proposed method to compute good solutions for JSSP instances was evaluated and, for that, the results obtained were compared with metaheuristics of the most varied types and inspirations.
Thus, it was possible to observe that mXLSGA with GIFA presents a competitive search power compared to the works that comprise the state of the art on the subject. In this case, the method presented the smallest mean relative error in most situations considered, having surpassed all the techniques on which its components were based. This also serves as confirmation for the assumption that GIFA is capable of helping GA-like metaheuristics increase their search power.
Numerically, the GIFA operator was able to improve the relative error MErr of the mXLSGA method from 0.61% to 0.49% in the case of the LA instances, from 0.54% to 0.46% in the case of the ORB instances and from 4.43% to 2.07% in the case of the ABZ instances, representing, respectively, improvements of 19.67%, 14.81% and 53.27% for that measure. Consequently, the average of the MErr of the mXLSGA over all four test benchmarks, corresponding to 1.395%, was reduced to 0.755% with the use of the proposed operator, a reduction of 45.88% from the original value. Thus, we conclude that the proposed method is robust, with the ability to obtain good results in instances of varied complexities, since GIFA-mXLSGA presented better or at least competitive results when compared with the other methods present in the specialized literature.
For future advances and developments, we intend to consider deep-learning techniques, mainly reinforcement-learning methods, to detect genetic influences on chromosomes from a GA-like method population. Furthermore, we intend to expand the developed material to other problems in the same field of application, such as Flexible Job Shop Scheduling [62] and to other classes of problems that demand combinatorial optimization, such as pseudo-colorization problems in graphs [21,22].

Author Contributions

Conceptualization, M.S.V. and O.M.J.; methodology, M.S.V. and O.M.J.; software, M.S.V.; validation, M.S.V. and O.M.J.; formal analysis, M.S.V., O.M.J. and R.C.C.; investigation, M.S.V. and O.M.J.; writing—original draft preparation, M.S.V.; writing—review and editing, O.M.J. and R.C.C.; visualization, M.S.V. and R.C.C.; supervision, O.M.J.; project administration, O.M.J. All authors have read and agreed to the published version of the manuscript.

Funding

The Article Processing Charges (APC) was funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior—Brasil (CAPES). This study was financed in part by the CAPES—Finance Code 001, by the Brazilian National Council for Scientific and Technological Development, process #381991/2020-2 and by the São Paulo Research Foundation (FAPESP), process #2022/05186-4.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Pardalos, P.M.; Du, D.Z.; Graham, R.L. Handbook of Combinatorial Optimization; Springer: Berlin/Heidelberg, Germany, 2013.
  2. Sbihi, A.; Eglese, R.W. Combinatorial optimization and green logistics. Ann. Oper. Res. 2010, 175, 159–175.
  3. James, J.; Yu, W.; Gu, J. Online vehicle routing with neural combinatorial optimization and deep reinforcement learning. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3806–3817.
  4. Matyukhin, V.; Shabunin, A.; Kuznetsov, N.; Takmazian, A. Rail transport control by combinatorial optimization approach. In Proceedings of the 2017 IEEE 11th International Conference on Application of Information and Communication Technologies (AICT), Moscow, Russia, 20–22 September 2017; pp. 1–4.
  5. Ehrgott, M.; Gandibleux, X. Multiobjective combinatorial optimization—Theory, methodology, and applications. In Multiple Criteria Optimization: State of the Art Annotated Bibliographic Surveys; Springer: Boston, MA, USA, 2003; pp. 369–444.
  6. Parente, M.; Figueira, G.; Amorim, P.; Marques, A. Production scheduling in the context of Industry 4.0: Review and trends. Int. J. Prod. Res. 2020, 58, 5401–5431.
  7. Groover, M.P. Fundamentals of Modern Manufacturing: Materials, Processes, and Systems; John Wiley & Sons: Hoboken, NJ, USA, 2007.
  8. Hart, E.; Ross, P.; Corne, D. Evolutionary scheduling: A review. Genet. Program. Evolvable Mach. 2005, 6, 191–220.
  9. Xhafa, F.; Abraham, A. Metaheuristics for Scheduling in Industrial and Manufacturing Applications; Springer: Berlin/Heidelberg, Germany, 2008; Volume 128.
  10. Wu, Z.; Sun, S.; Yu, S. Optimizing makespan and stability risks in job shop scheduling. Comput. Oper. Res. 2020, 122, 104963.
  11. Wang, L.; Cai, J.C.; Li, M. An adaptive multi-population genetic algorithm for job-shop scheduling problem. Adv. Manuf. 2016, 4, 142–149.
  12. Mhasawade, S.; Bewoor, L. A survey of hybrid metaheuristics to minimize makespan of job shop scheduling problem. In Proceedings of the 2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), Chennai, India, 1–2 August 2017; pp. 1957–1960.
  13. Viana, M.S.; Morandin Junior, O.; Contreras, R.C. A Modified Genetic Algorithm with Local Search Strategies and Multi-Crossover Operator for Job Shop Scheduling Problem. Sensors 2020, 20, 5440.
  14. Viana, M.S.; Morandin Junior, O.; Contreras, R.C. An Improved Local Search Genetic Algorithm with Multi-Crossover for Job Shop Scheduling Problem. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 12–14 October 2020; Springer: Zakopane, Poland, 2020; pp. 464–479.
  15. Viana, M.S.; Morandin Junior, O.; Contreras, R.C. Transgenic Genetic Algorithm to Minimize the Makespan in the Job Shop Scheduling Problem. In Proceedings of the 12th International Conference on Agents and Artificial Intelligence—Volume 2: ICAART, Valletta, Malta, 22–24 February 2020; SciTePress: Setubal, Portugal, 2020; pp. 463–474.
  16. Lu, Y.; Huang, Z.; Cao, L. Hybrid immune genetic algorithm with neighborhood search operator for the Job Shop Scheduling Problem. IOP Conf. Ser. Earth Environ. Sci. 2020, 474, 052093.
  17. Milovsevic, M.; Lukic, D.; Durdev, M.; Vukman, J.; Antic, A. Genetic algorithms in integrated process planning and scheduling—A state of the art review. Proc. Manuf. Syst. 2016, 11, 83–88.
  18. Çaliş, B.; Bulkan, S. A research survey: Review of AI solution strategies of job shop scheduling problem. J. Intell. Manuf. 2015, 26, 961–973.
  19. Demirkol, E.; Mehta, S.; Uzsoy, R. Benchmarks for shop scheduling problems. Eur. J. Oper. Res. 1998, 109, 137–141.
  20. Asadzadeh, L. A local search genetic algorithm for the job shop scheduling problem with intelligent agents. Comput. Ind. Eng. 2015, 85, 376–383.
  21. Contreras, R.C.; Morandin Junior, O.; Viana, M.S. A New Local Search Adaptive Genetic Algorithm for the Pseudo-Coloring Problem. In Advances in Swarm Intelligence; Springer: Cham, Switzerland, 2020; pp. 349–361.
  22. Viana, M.S.; Morandin Junior, O.; Contreras, R.C. An Improved Local Search Genetic Algorithm with a New Mapped Adaptive Operator Applied to Pseudo-Coloring Problem. Symmetry 2020, 12, 1684.
  23. Zang, W.; Ren, L.; Zhang, W.; Liu, X. A cloud model based DNA genetic algorithm for numerical optimization problems. Future Gener. Comput. Syst. 2018, 81, 465–477.
  24. Li, X.; Gao, L. An effective hybrid genetic algorithm and tabu search for flexible job shop scheduling problem. Int. J. Prod. Econ. 2016, 174, 93–110.
  25. Sastry, K.; Goldberg, D.; Kendall, G. Genetic algorithms. In Search Methodologies; Springer: Boston, MA, USA, 2005; pp. 97–125.
  26. do Amaral, L.R.; Hruschka, E.R. Transgenic, an operator for evolutionary algorithms. In Proceedings of the 2011 IEEE Congress of Evolutionary Computation (CEC), New Orleans, LA, USA, 5–8 June 2011; pp. 1308–1314.
  27. do Amaral, L.R.; Hruschka, E.R., Jr. Transgenic: An evolutionary algorithm operator. Neurocomputing 2014, 127, 104–113.
  28. Viana, M.S.; Contreras, R.C.; Junior, O.M. A New Genetic Improvement Operator Based on Frequency Analysis for Genetic Algorithms Applied to Job Shop Scheduling Problem. In Proceedings of the International Conference on Artificial Intelligence and Soft Computing, Zakopane, Poland, 20–24 June 2021; Springer: Zakopane, Poland, 2021; pp. 434–450.
  29. Ombuki, B.M.; Ventresca, M. Local search genetic algorithms for the job shop scheduling problem. Appl. Intell. 2004, 21, 99–109.
  30. Watanabe, M.; Ida, K.; Gen, M. A genetic algorithm with modified crossover operator and search area adaptation for the job-shop scheduling problem. Comput. Ind. Eng. 2005, 48, 743–752.
  31. Jorapur, V.S.; Puranik, V.S.; Deshpande, A.S.; Sharma, M. A promising initial population based genetic algorithm for job shop scheduling problem. J. Softw. Eng. Appl. 2016, 9, 208.
  32. Kurdi, M. An effective genetic algorithm with a critical-path-guided Giffler and Thompson crossover operator for job shop scheduling problem. Int. J. Intell. Syst. Appl. Eng. 2019, 7, 13–18.
  33. Liang, X.; Du, Z. Genetic Algorithm with Simulated Annealing for Resolving Job Shop Scheduling Problem. In Proceedings of the 2020 IEEE 8th International Conference on Computer Science and Network Technology (ICCSNT), Dalian, China, 20–22 November 2020; pp. 64–68.
  34. Anil Kumar, K.R.; Das, E.R. Genetic Algorithm and Particle Swarm Optimization in Minimizing MakeSpan Time in Job Shop Scheduling. In Proceedings of ICDMC 2019; Yang, L.J., Haq, A.N., Nagarajan, L., Eds.; Springer: Singapore, 2020; pp. 421–432.
  35. Zhang, J.; Cong, J. Research on job shop scheduling based on ACM-GA algorithm. In Proceedings of the 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), Chongqing, China, 12–14 March 2021; Volume 5, pp. 490–495.
  36. Wang, B.; Wang, X.; Lan, F.; Pan, Q. A hybrid local-search algorithm for robust job-shop scheduling under scenarios. Appl. Soft Comput. 2018, 62, 259–271.
  37. Frausto-Solis, J.; Hernández-Ramírez, L.; Castilla-Valdez, G.; González-Barbosa, J.J.; Sánchez-Hernández, J.P. Chaotic Multi-Objective Simulated Annealing and Threshold Accepting for Job Shop Scheduling Problem. Math. Comput. Appl. 2021, 26, 8.
  38. Zhou, G.; Zhou, Y.; Zhao, R. Hybrid social spider optimization algorithm with differential mutation operator for the job-shop scheduling problem. J. Ind. Manag. Optim. 2021, 17, 533–548.
  39. Liu, C. An improved Harris hawks optimizer for job-shop scheduling problem. J. Supercomput. 2021, 77, 14090–14129.
  40. Jiang, T. A Hybrid Grey Wolf Optimization for Job Shop Scheduling Problem. Int. J. Comput. Intell. Appl. 2018, 17, 1850016.
  41. Dao, T.K.; Pan, T.S.; Pan, J.S. Parallel bat algorithm for optimizing makespan in job shop scheduling problems. J. Intell. Manuf. 2018, 29, 451–462.
  42. Semlali, S.C.B.; Riffi, M.E.; Chebihi, F. Memetic chicken swarm algorithm for job shop scheduling problem. Int. J. Electr. Comput. Eng. 2019, 9, 2075.
  43. Hamzadayı, A.; Baykasoğlu, A.; Akpınar, Ş. Solving combinatorial optimization problems with single seekers society algorithm. Knowl.-Based Syst. 2020, 201, 106036.
  44. Yu, H.; Gao, Y.; Wang, L.; Meng, J. A Hybrid Particle Swarm Optimization Algorithm Enhanced with Nonlinear Inertial Weight and Gaussian Mutation for Job Shop Scheduling Problems. Mathematics 2020, 8, 1355.
  45. Fisher, C.; Thompson, G. Probabilistic learning combinations of local job-shop scheduling rules. In Industrial Scheduling; Prentice-Hall: Englewood Cliffs, NJ, USA, 1963; pp. 225–251.
  46. Lawrence, S. Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement); Graduate School of Industrial Administration, Carnegie-Mellon University: Pittsburgh, PA, USA, 1984.
  47. Croes, G.A. A method for solving traveling-salesman problems. Oper. Res. 1958, 6, 791–812.
  48. Meng, X.; Liu, Y.; Gao, X.; Zhang, H. A new bio-inspired algorithm: Chicken swarm optimization. In Proceedings of the International Conference in Swarm Intelligence, Hefei, China, 17–20 October 2014; Springer: Cham, Switzerland, 2014; pp. 86–94.
  49. Alkhateeb, F.; Abed-alguni, B.H.; Al-rousan, M.H. Discrete hybrid cuckoo search and simulated annealing algorithm for solving the job shop scheduling problem. J. Supercomput. 2022, 78, 4799–4826.
  50. Xiong, H.; Shi, S.; Ren, D.; Hu, J. A survey of job shop scheduling problem: The types and models. Comput. Oper. Res. 2022, 142, 105731.
  51. Bierwirth, C.; Mattfeld, D.C.; Kopfer, H. On permutation representations for scheduling problems. In Proceedings of the International Conference on Parallel Problem Solving from Nature, Berlin, Germany, 22–26 September 1996; Springer: Berlin/Heidelberg, Germany, 1996; pp. 310–318.
  52. Wegner, P. A Technique for Counting Ones in a Binary Computer. Commun. ACM 1960, 3, 322.
  53. Yang, X.S.; Deb, S. Multiobjective cuckoo search for design optimization. Comput. Oper. Res. 2013, 40, 1616–1624.
  54. Yang, X.S. Engineering Optimization: An Introduction with Metaheuristic Applications; John Wiley & Sons: Hoboken, NJ, USA, 2010.
  55. Al-Obaidi, A.T.S.; Hussein, S.A. Two improved cuckoo search algorithms for solving the flexible job-shop scheduling problem. Int. J. Perceptive Cogn. Comput. 2016, 2.
  56. Applegate, D.; Cook, W. A computational study of the job-shop scheduling problem. ORSA J. Comput. 1991, 3, 149–156.
  57. Adams, J.; Balas, E.; Zawack, D. The shifting bottleneck procedure for job shop scheduling. Manag. Sci. 1988, 34, 391–401.
  58. Wang, F.; Tian, Y.; Wang, X. A Discrete Wolf Pack Algorithm for Job Shop Scheduling Problem. In Proceedings of the 2019 5th International Conference on Control, Automation and Robotics (ICCAR), Beijing, China, 19–22 April 2019; pp. 581–585.
  59. Jiang, T.; Zhang, C. Application of grey wolf optimization for solving combinatorial problems: Job shop and flexible job shop scheduling cases. IEEE Access 2018, 6, 26231–26240.
  60. Kurdi, M. An effective new island model genetic algorithm for job shop scheduling problem. Comput. Oper. Res. 2016, 67, 132–142.
  61. Asadzadeh, L.; Zamanifar, K. An agent-based parallel approach for the job shop scheduling problem with genetic algorithms. Math. Comput. Model. 2010, 52, 1957–1965.
  62. Xie, J.; Gao, L.; Peng, K.; Li, X.; Li, H. Review on flexible job shop scheduling. IET Collab. Intell. Manuf. 2019, 1, 67–77.
Figure 1. Calculating the frequency vectors (v_i) of the three jobs in each coordinate of the four best chromosomes in the population.
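The counting step that Figure 1 illustrates can be sketched in a few lines of Python. This is a minimal illustration assuming the operation-based JSSP encoding (chromosomes are sequences of repeated job indices); the function and variable names are ours, not the paper's.

```python
from collections import Counter

def frequency_vectors(top_chromosomes, n_jobs):
    """For each gene position, count how often each job occurs among
    the best chromosomes (operation-based JSSP encoding)."""
    length = len(top_chromosomes[0])
    return [
        [Counter(ch[pos] for ch in top_chromosomes).get(j, 0)
         for j in range(n_jobs)]
        for pos in range(length)
    ]

# four best chromosomes of a toy 3-job instance, as in Figure 1
top = [[0, 1, 2, 0], [0, 2, 1, 0], [0, 1, 2, 2], [1, 0, 2, 0]]
print(frequency_vectors(top, 3))
# [[3, 1, 0], [1, 2, 1], [0, 1, 3], [3, 0, 1]]
```

Each inner list v_i reads "job 0 occurred x times, job 1 y times, job 2 z times" at position i.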
Figure 2. Computation of the representative individual (c) and its genetic relevance (w).
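Following Figure 2, one plausible sketch of extracting the representative individual and its genetic relevance from the frequency vectors is shown below; the names and the normalization of w by the sample size are our assumptions for illustration.

```python
def representative(vectors):
    """Build the representative individual: the most frequent job at each
    position, with relevance w = winning frequency / sample size."""
    n_samples = sum(vectors[0])  # number of chromosomes that were counted
    genes, relevance = [], []
    for v in vectors:
        best_job = max(range(len(v)), key=v.__getitem__)
        genes.append(best_job)
        relevance.append(v[best_job] / n_samples)
    return genes, relevance

# frequency vectors of the Figure 1 toy example
genes, w = representative([[3, 1, 0], [1, 2, 1], [0, 1, 3], [3, 0, 1]])
print(genes)  # [0, 1, 2, 0]
print(w)      # [0.75, 0.5, 0.75, 0.75]
```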
Figure 3. Determination of the most significant genes of a representative individual.
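The selection of Figure 3 can be read as keeping the positions whose winning job dominated the sample of best chromosomes. The threshold value below is an assumed illustration, not the paper's setting.

```python
def significant_positions(relevance, threshold=0.75):
    """Positions of the representative whose relevance reaches the
    threshold; these genes are treated as the 'most significant'."""
    return [i for i, r in enumerate(relevance) if r >= threshold]

print(significant_positions([0.75, 0.5, 0.75, 0.75]))  # [0, 2, 3]
```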
Figure 4. Genetic improvement proposed. The genes highlighted on a black background are the most relevant, while the genes highlighted with the red sectioned circle are those that need correction.
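The correction step of Figure 4 can be sketched as overwriting the significant positions of an ill-adapted chromosome with the representative's genes and then repairing job counts so the result is still a valid operation-based encoding (each job appears once per machine). The repair strategy shown is one simple possibility, not necessarily the paper's.

```python
from collections import Counter

def guided_correction(worst, rep_genes, sig_positions, n_jobs, n_machines):
    """Copy the representative's significant genes into a weak chromosome,
    then swap surplus jobs for missing ones to restore a valid encoding."""
    child = list(worst)
    fixed = set(sig_positions)
    for i in fixed:
        child[i] = rep_genes[i]
    counts = Counter(child)
    # jobs missing occurrences, one list entry per missing slot
    deficit = [j for j in range(n_jobs)
               for _ in range(max(0, n_machines - counts[j]))]
    for i in range(len(child)):
        if i in fixed:
            continue
        if counts[child[i]] > n_machines:   # surplus job: swap it out
            counts[child[i]] -= 1
            child[i] = deficit.pop()
    return child

# 3 jobs x 2 machines: guide a weak chromosome with the representative
print(guided_correction([2, 2, 1, 1, 0, 0], [0, 1, 2, 0, 1, 2],
                        [0, 1], n_jobs=3, n_machines=2))
# [0, 1, 2, 1, 2, 0]
```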
Figure 5. Flow chart of the proposed Genetic Improvement operator for Genetic Algorithms.
Figure 6. Experiments with respect to the N_Top and N_Worst settings. Each heatmap shows the average improvement values AvgImp(N_Top, N_Worst) for N_Top and N_Worst varying over the grid defined by the set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 30, 40, 50}. For the computation of these values, N_run = 35 executions of each method were considered. (a) LA01. (b) LA21. (c) LA25.
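The grid evaluation behind Figure 6 can be sketched as follows; `evaluate` is a hypothetical callback that runs one experiment for a given (N_Top, N_Worst) pair and returns an improvement percentage.

```python
import itertools

# grid used in Figure 6
GRID = list(range(1, 16)) + [20, 30, 40, 50]

def avg_improvement(evaluate, grid=GRID, n_runs=35):
    """Average the improvement reported by `evaluate` over n_runs
    executions for every (N_Top, N_Worst) pair in the grid."""
    results = {}
    for n_top, n_worst in itertools.product(grid, grid):
        runs = [evaluate(n_top, n_worst) for _ in range(n_runs)]
        results[(n_top, n_worst)] = sum(runs) / n_runs
    return results
```

Each heatmap in Figure 6 is then just this dictionary rendered over the 19 × 19 grid.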
Table 1. GA-like methods statistics for 35 executions of each method. A "-" in "Iteration of the Optimum" means the optimal makespan was not reached in any run.

| Instance | Method | Best | Worst | Mean | SD | Number of Optima | Iteration of the Optimum | Time (s) |
|---|---|---|---|---|---|---|---|---|
| FT06 | GA | 55 | 57 | 55.45 | 0.85 | 27 | 52 | 2.4 |
| | GIFA-GA | 55 | 56 | 55.14 | 0.35 | 30 | 38 | 2.63 |
| | GSA | 55 | 55 | 55 | 0 | 35 | 8 | 39.78 |
| | GIFA-GSA | 55 | 55 | 55 | 0 | 35 | 8 | 40.02 |
| | LSGA | 55 | 59 | 57.68 | 1.43 | 6 | 27 | 7.95 |
| | GIFA-LSGA | 55 | 56 | 55.83 | 0.38 | 6 | 20 | 8.17 |
| | aLSGA | 55 | 55 | 55 | 0 | 35 | 11 | 11.51 |
| | GIFA-aLSGA | 55 | 55 | 55 | 0 | 35 | 6 | 13.01 |
| | mXLSGA | 55 | 55 | 55 | 0 | 35 | 5 | 22.11 |
| | GIFA-mXLSGA | 55 | 55 | 55 | 0 | 35 | 5 | 22.37 |
| LA01 | GA | 666 | 712 | 679.02 | 9.98 | 6 | 76 | 2.65 |
| | GIFA-GA | 666 | 678 | 669.37 | 5.17 | 15 | 26 | 2.94 |
| | GSA | 666 | 715 | 677.8 | 13.61 | 13 | 10 | 51.79 |
| | GIFA-GSA | 666 | 687 | 672.34 | 7.43 | 17 | 10 | 51.96 |
| | LSGA | 666 | 726 | 697 | 16.65 | 1 | 61 | 12.15 |
| | GIFA-LSGA | 666 | 707 | 688.67 | 15.59 | 8 | 23 | 12.42 |
| | aLSGA | 666 | 666 | 666 | 0 | 35 | 11 | 20.07 |
| | GIFA-aLSGA | 666 | 666 | 666 | 0 | 35 | 6 | 20.29 |
| | mXLSGA | 666 | 666 | 666 | 0 | 35 | 5 | 38.06 |
| | GIFA-mXLSGA | 666 | 666 | 666 | 0 | 35 | 4 | 38.34 |
| LA06 | GA | 926 | 938 | 927.4 | 2.87 | 24 | 79 | 3.34 |
| | GIFA-GA | 926 | 936 | 926.86 | 2.26 | 30 | 54 | 3.57 |
| | GSA | 926 | 935 | 926.31 | 1.54 | 33 | 7 | 67.31 |
| | GIFA-GSA | 926 | 926 | 926 | 0 | 35 | 7 | 67.56 |
| | LSGA | 926 | 970 | 935.8 | 13.15 | 17 | 55 | 19.87 |
| | GIFA-LSGA | 926 | 952 | 932.28 | 9.08 | 20 | 41 | 20.13 |
| | aLSGA | 926 | 926 | 926 | 0 | 35 | 3 | 38.07 |
| | GIFA-aLSGA | 926 | 926 | 926 | 0 | 35 | 3 | 38.31 |
| | mXLSGA | 926 | 926 | 926 | 0 | 35 | 2 | 71.98 |
| | GIFA-mXLSGA | 926 | 926 | 926 | 0 | 35 | 2 | 72.21 |
| LA11 | GA | 1222 | 1256 | 1235.97 | 10.81 | 5 | 58 | 3.97 |
| | GIFA-GA | 1222 | 1253 | 1233.48 | 9.52 | 6 | 51 | 4.18 |
| | GSA | 1222 | 1276 | 1232.14 | 14.94 | 20 | 19 | 81.59 |
| | GIFA-GSA | 1222 | 1263 | 1231.24 | 13.56 | 26 | 15 | 81.7 |
| | LSGA | 1222 | 1299 | 1251.6 | 19.62 | 2 | 32 | 31.33 |
| | GIFA-LSGA | 1222 | 1278 | 1250.17 | 11.09 | 4 | 26 | 31.59 |
| | aLSGA | 1222 | 1222 | 1222 | 0 | 35 | 5 | 60.82 |
| | GIFA-aLSGA | 1222 | 1222 | 1222 | 0 | 35 | 4 | 61.03 |
| | mXLSGA | 1222 | 1222 | 1222 | 0 | 35 | 3 | 116.57 |
| | GIFA-mXLSGA | 1222 | 1222 | 1222 | 0 | 35 | 3 | 116.84 |
| LA16 | GA | 982 | 1100 | 1045.6 | 26.4 | 0 | - | 2.89 |
| | GIFA-GA | 982 | 1061 | 1022.89 | 20.51 | 0 | - | 3.12 |
| | GSA | 994 | 1110 | 1046.77 | 26.37 | 0 | - | 55.31 |
| | GIFA-GSA | 994 | 1021 | 1017.38 | 15.49 | 0 | - | 55.63 |
| | LSGA | 1016 | 1148 | 1084.25 | 32.27 | 0 | - | 20.62 |
| | GIFA-LSGA | 1016 | 1077 | 1037.11 | 26.62 | 0 | - | 20.83 |
| | aLSGA | 959 | 985 | 980.51 | 4.48 | 0 | - | 38.75 |
| | GIFA-aLSGA | 956 | 982 | 975.12 | 2.36 | 0 | - | 38.98 |
| | mXLSGA | 945 | 982 | 972.25 | 13.32 | 9 | 6 | 66.25 |
| | GIFA-mXLSGA | 945 | 979 | 959.93 | 6.37 | 7 | 49 | 66.51 |
| LA23 | GA | 1189 | 1336 | 1271.71 | 34.44 | 0 | - | 3.78 |
| | GIFA-GA | 1151 | 1324 | 1269.51 | 30.95 | 0 | - | 4 |
| | GSA | 1148 | 1347 | 1214.08 | 43.85 | 0 | - | 73.19 |
| | GIFA-GSA | 1121 | 1339 | 1191.25 | 30.27 | 0 | - | 73.42 |
| | LSGA | 1214 | 1419 | 1295.34 | 43.7 | 0 | - | 38.39 |
| | GIFA-LSGA | 1115 | 1369 | 1278.36 | 34.79 | 0 | - | 38.63 |
| | aLSGA | 1035 | 1115 | 1078 | 16.34 | 0 | - | 75.62 |
| | GIFA-aLSGA | 1032 | 1098 | 1067.58 | 15.06 | 2 | 75 | 75.79 |
| | mXLSGA | 1032 | 1093 | 1060.45 | 17.96 | 1 | 51 | 123.64 |
| | GIFA-mXLSGA | 1032 | 1093 | 1039.8 | 16.54 | 2 | 34 | 123.87 |
| LA26 | GA | 1525 | 1699 | 1619.51 | 39.5 | 0 | - | 4.44 |
| | GIFA-GA | 1485 | 1667 | 1614.11 | 32.59 | 0 | - | 4.65 |
| | GSA | 1433 | 1586 | 1512.22 | 36.47 | 0 | - | 91.67 |
| | GIFA-GSA | 1420 | 1523 | 1503.73 | 17.99 | 0 | - | 91.94 |
| | LSGA | 1517 | 1665 | 1597.94 | 36.67 | 0 | - | 60.84 |
| | GIFA-LSGA | 1492 | 1537 | 1534.11 | 28.74 | 0 | - | 61.07 |
| | aLSGA | 1302 | 1384 | 1343.28 | 19.69 | 0 | - | 124.93 |
| | GIFA-aLSGA | 1273 | 1382 | 1332.68 | 13.28 | 0 | - | 125.12 |
| | mXLSGA | 1218 | 1371 | 1300.85 | 41.88 | 6 | 85 | 203.95 |
| | GIFA-mXLSGA | 1218 | 1371 | 1258.51 | 32.32 | 11 | 67 | 204.3 |
| LA31 | GA | 2120 | 2326 | 2223.14 | 48.45 | 0 | - | 6.23 |
| | GIFA-GA | 2111 | 2322 | 2197.31 | 45.19 | 0 | - | 6.51 |
| | GSA | 1943 | 2142 | 2050.17 | 63.09 | 0 | - | 130.65 |
| | GIFA-GSA | 1919 | 2101 | 1964.2 | 56.94 | 0 | - | 130.89 |
| | LSGA | 2005 | 2336 | 2177 | 66.29 | 0 | - | 123.23 |
| | GIFA-LSGA | 1986 | 2301 | 2119.88 | 62.49 | 0 | - | 123.57 |
| | aLSGA | 1808 | 1897 | 1843.51 | 21.17 | 0 | - | 258.53 |
| | GIFA-aLSGA | 1784 | 1861 | 1841.25 | 19.13 | 2 | 80 | 258.81 |
| | mXLSGA | 1784 | 1845 | 1807.71 | 19.2 | 5 | 80 | 424.56 |
| | GIFA-mXLSGA | 1784 | 1845 | 1805.98 | 18.96 | 7 | 71 | 424.77 |
Table 2. Comparison of computational results between GIFA-mXLSGA and other algorithms for FT. The symbol “-” means “not evaluated in that instance”.
Each method column shows Best (E%).

| Instance | Size | BKS | GIFA-mXLSGA | mXLSGA | NGPSO | SSS | GA-CPG-GT | GWO | IPB-GA | aLSGA |
|---|---|---|---|---|---|---|---|---|---|---|
| FT06 | 6 × 6 | 55 | 55 (0.00) | 55 (0.00) | 55 (0.00) | 55 (0.00) | 55 (0.00) | 55 (0.00) | 55 (0.00) | 55 (0.00) |
| FT10 | 10 × 10 | 930 | 930 (0.00) | 930 (0.00) | 930 (0.00) | 936 (0.64) | 935 (0.53) | 940 (1.07) | 960 (3.22) | 930 (0.00) |
| FT20 | 20 × 5 | 1165 | 1165 (0.00) | 1165 (0.00) | 1210 (3.86) | 1165 (0.00) | 1180 (1.28) | 1178 (1.11) | 1192 (2.31) | 1165 (0.00) |
| MErr | | | 0.00 | 0.00 | 1.28 | 0.21 | 0.60 | 0.73 | 1.84 | 0.00 |
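The relative error columns in Tables 2–5 follow the usual definition E% = 100 · (Best − BKS)/BKS, and MErr averages E% over the evaluated instances. For example, for NGPSO on FT20:

```python
def relative_error(best, bks):
    """Relative error E% of a best-found makespan against the
    best-known solution (BKS)."""
    return 100.0 * (best - bks) / bks

# NGPSO on FT20 (Table 2): BKS = 1165, best found = 1210
print(round(relative_error(1210, 1165), 2))  # 3.86
```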
Table 3. Comparison of computational results between GIFA-mXLSGA and other algorithms for LA. The symbol “-” means “not evaluated in that instance”.
Each method column shows Best (E%).

| Instance | Size | BKS | GIFA-mXLSGA | mXLSGA | NGPSO | SSS | GA-CPG-GT | DWPA | GWO | IPB-GA | aLSGA |
|---|---|---|---|---|---|---|---|---|---|---|---|
| LA01 | 10 × 5 | 666 | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) | 666 (0.00) |
| LA02 | 10 × 5 | 655 | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) | 655 (0.00) |
| LA03 | 10 × 5 | 597 | 597 (0.00) | 597 (0.00) | 597 (0.00) | 597 (0.00) | 597 (0.00) | 614 (2.84) | 597 (0.00) | 599 (0.33) | 606 (1.50) |
| LA04 | 10 × 5 | 590 | 590 (0.00) | 590 (0.00) | 590 (0.00) | 590 (0.00) | 590 (0.00) | 598 (1.35) | 590 (0.00) | 590 (0.00) | 593 (0.50) |
| LA05 | 10 × 5 | 593 | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) | 593 (0.00) |
| LA06 | 15 × 5 | 926 | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) | 926 (0.00) |
| LA07 | 15 × 5 | 890 | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) | 890 (0.00) |
| LA08 | 15 × 5 | 863 | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) | 863 (0.00) |
| LA09 | 15 × 5 | 951 | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) | 951 (0.00) |
| LA10 | 15 × 5 | 958 | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) | 958 (0.00) |
| LA11 | 20 × 5 | 1222 | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) | 1222 (0.00) |
| LA12 | 20 × 5 | 1039 | 1039 (0.00) | 1039 (0.00) | 1039 (0.00) | - | 1039 (0.00) | 1039 (0.00) | 1039 (0.00) | 1039 (0.00) | 1039 (0.00) |
| LA13 | 20 × 5 | 1150 | 1150 (0.00) | 1150 (0.00) | 1150 (0.00) | - | 1150 (0.00) | 1150 (0.00) | 1150 (0.00) | 1150 (0.00) | 1150 (0.00) |
| LA14 | 20 × 5 | 1292 | 1292 (0.00) | 1292 (0.00) | 1292 (0.00) | - | 1292 (0.00) | 1292 (0.00) | 1292 (0.00) | 1292 (0.00) | 1292 (0.00) |
| LA15 | 20 × 5 | 1207 | 1207 (0.00) | 1207 (0.00) | 1207 (0.00) | - | 1207 (0.00) | 1273 (5.46) | 1207 (0.00) | 1207 (0.00) | 1207 (0.00) |
| LA16 | 10 × 10 | 945 | 945 (0.00) | 945 (0.00) | 945 (0.00) | 947 (0.21) | 946 (0.10) | 993 (5.07) | 956 (1.16) | 946 (0.10) | 946 (0.10) |
| LA17 | 10 × 10 | 784 | 784 (0.00) | 784 (0.00) | 794 (1.27) | - | 784 (0.00) | 793 (1.14) | 790 (0.76) | 784 (0.00) | 784 (0.00) |
| LA18 | 10 × 10 | 848 | 848 (0.00) | 848 (0.00) | 848 (0.00) | - | 848 (0.00) | 861 (1.53) | 859 (1.29) | 853 (0.58) | 848 (0.00) |
| LA19 | 10 × 10 | 842 | 842 (0.00) | 842 (0.00) | 842 (0.00) | - | 842 (0.00) | 888 (5.46) | 845 (0.35) | 866 (2.85) | 852 (1.18) |
| LA20 | 10 × 10 | 902 | 902 (0.00) | 902 (0.00) | 908 (0.66) | - | 907 (0.55) | 934 (3.54) | 937 (3.88) | 913 (1.21) | 907 (0.55) |
| LA21 | 15 × 10 | 1046 | 1052 (0.57) | 1059 (1.24) | 1183 (13.09) | 1076 (2.86) | 1090 (4.20) | 1105 (5.64) | 1090 (4.20) | 1081 (3.34) | 1068 (2.10) |
| LA22 | 15 × 10 | 927 | 927 (0.00) | 935 (0.86) | 927 (0.00) | - | 954 (2.91) | 989 (6.68) | 970 (4.63) | 970 (4.63) | 956 (3.12) |
| LA23 | 15 × 10 | 1032 | 1032 (0.00) | 1032 (0.00) | 1032 (0.00) | - | 1032 (0.00) | 1051 (1.84) | 1032 (0.00) | 1032 (0.00) | 1032 (0.00) |
| LA24 | 15 × 10 | 935 | 940 (0.53) | 946 (1.17) | 968 (3.52) | - | 974 (4.17) | 988 (5.66) | 982 (5.02) | 1002 (7.16) | 966 (3.31) |
| LA25 | 15 × 10 | 977 | 984 (0.71) | 986 (0.92) | 977 (0.00) | - | 999 (2.25) | 1039 (6.34) | 1008 (3.17) | 1023 (4.70) | 1002 (2.55) |
| LA26 | 20 × 10 | 1218 | 1218 (0.00) | 1218 (0.00) | 1218 (0.00) | - | 1237 (1.55) | 1303 (6.97) | 1239 (1.72) | 1273 (4.51) | 1223 (0.41) |
| LA27 | 20 × 10 | 1235 | 1261 (2.10) | 1269 (2.75) | 1394 (12.87) | 1256 (1.70) | 1313 (6.31) | 1346 (8.98) | 1290 (4.45) | 1317 (6.63) | 1281 (3.72) |
| LA28 | 20 × 10 | 1216 | 1239 (1.89) | 1239 (1.89) | 1216 (0.00) | - | 1280 (5.26) | 1291 (6.16) | 1263 (3.86) | 1288 (5.92) | 1245 (2.38) |
| LA29 | 20 × 10 | 1152 | 1190 (3.29) | 1201 (4.25) | 1280 (11.11) | - | 1247 (8.24) | 1275 (10.67) | 1244 (7.98) | 1233 (7.03) | 1230 (6.77) |
| LA30 | 20 × 10 | 1355 | 1355 (0.00) | 1355 (0.00) | 1355 (0.00) | - | 1367 (0.88) | 1389 (2.50) | 1355 (0.00) | 1377 (1.62) | 1355 (0.00) |
| LA31 | 30 × 10 | 1784 | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) | 1784 (0.00) |
| LA32 | 30 × 10 | 1850 | 1850 (0.00) | 1850 (0.00) | 1850 (0.00) | - | 1850 (0.00) | 1850 (0.00) | 1850 (0.00) | 1851 (0.05) | 1850 (0.00) |
| LA33 | 30 × 10 | 1719 | 1719 (0.00) | 1719 (0.00) | 1719 (0.00) | - | 1719 (0.00) | 1719 (0.00) | 1719 (0.00) | 1719 (0.00) | 1719 (0.00) |
| LA34 | 30 × 10 | 1721 | 1721 (0.00) | 1721 (0.00) | 1721 (0.00) | - | 1725 (0.23) | 1788 (3.89) | 1721 (0.00) | 1749 (1.62) | 1721 (0.00) |
| LA35 | 30 × 10 | 1888 | 1888 (0.00) | 1888 (0.00) | 1888 (0.00) | - | 1888 (0.00) | 1947 (3.125) | 1888 (0.00) | 1888 (0.00) | 1888 (0.00) |
| LA36 | 15 × 15 | 1268 | 1295 (2.12) | 1295 (2.12) | 1408 (11.04) | 1304 (2.83) | 1308 (3.15) | 1388 (9.46) | 1311 (3.39) | 1334 (5.20) | - |
| LA37 | 15 × 15 | 1397 | 1407 (0.71) | 1415 (1.28) | 1515 (8.44) | - | 1489 (6.58) | 1486 (6.37) | - | 1467 (5.01) | - |
| LA38 | 15 × 15 | 1196 | 1246 (4.18) | 1246 (4.18) | 1196 (0.00) | - | 1275 (6.60) | 1339 (11.95) | - | 1278 (6.85) | - |
| LA39 | 15 × 15 | 1233 | 1258 (2.02) | 1258 (2.02) | 1662 (34.79) | - | 1290 (4.62) | 1334 (8.19) | - | 1296 (5.10) | - |
| LA40 | 15 × 15 | 1222 | 1243 (1.71) | 1243 (1.71) | 1222 (0.00) | 1252 (2.45) | 1252 (2.45) | 1347 (10.22) | - | 1284 (5.07) | - |
| MErr | | | 0.49 | 0.61 | 2.42 | 0.59 | 1.50 | 3.52 | 1.27 | 1.99 | 0.80 |
Table 4. Comparison of computational results between GIFA-mXLSGA and other algorithms for ORB. The symbol “-” means “not evaluated in that instance”.
Each method column shows Best (E%).

| Instance | Size | BKS | GIFA-mXLSGA | mXLSGA | IPB-GA | NIMGA | aLSGA | PaGA | LSGA |
|---|---|---|---|---|---|---|---|---|---|
| ORB01 | 10 × 10 | 1059 | 1068 (0.85) | 1068 (0.85) | 1099 (3.78) | 1059 (0) | 1092 (3.12) | 1149 (8.5) | 1088 (2.74) |
| ORB02 | 10 × 10 | 888 | 889 (0.11) | 889 (0.11) | 906 (2.03) | 890 (0.23) | 894 (0.68) | 929 (4.62) | 921 (3.72) |
| ORB03 | 10 × 10 | 1005 | 1023 (1.79) | 1023 (1.79) | 1056 (5.07) | 1026 (2.09) | 1029 (2.39) | 1129 (12.34) | 1041 (3.58) |
| ORB04 | 10 × 10 | 1005 | 1005 (0) | 1005 (0) | 1032 (2.69) | 1019 (1.39) | 1016 (1.09) | 1062 (5.67) | 1052 (4.68) |
| ORB05 | 10 × 10 | 887 | 887 (0) | 889 (0.23) | 909 (2.48) | 893 (0.68) | 901 (1.58) | 936 (5.52) | 903 (1.8) |
| ORB06 | 10 × 10 | 1010 | 1013 (0.3) | 1019 (0.89) | 1038 (2.77) | 1012 (0.2) | 1028 (1.78) | 1060 (4.95) | 1062 (5.15) |
| ORB07 | 10 × 10 | 397 | 397 (0) | 397 (0) | 411 (3.53) | 397 (0) | 405 (2.02) | 416 (4.79) | 408 (2.77) |
| ORB08 | 10 × 10 | 899 | 907 (0.89) | 907 (0.89) | 917 (2) | 909 (1.11) | 914 (1.67) | 1010 (12.35) | 908 (1) |
| ORB09 | 10 × 10 | 934 | 940 (0.64) | 940 (0.64) | - | 942 (0.86) | 943 (0.96) | 994 (6.42) | 980 (4.93) |
| ORB10 | 10 × 10 | 944 | 944 (0) | 944 (0) | - | - | - | - | - |
| MErr | | | 0.46 | 0.54 | 3.04 | 0.73 | 1.7 | 7.24 | 3.37 |
Table 5. Comparison of computational results between GIFA-mXLSGA and other algorithms for ABZ. The symbol “-” means “not evaluated in that instance”.
Each method column shows Best (E%).

| Instance | Size | BKS | GIFA-mXLSGA | mXLSGA | GA-CPG-GT | MeCSO | IPB-GA |
|---|---|---|---|---|---|---|---|
| ABZ05 | 10 × 10 | 1234 | 1234 (0) | 1234 (0) | 1238 (0.32) | 1236 (0.16) | 1241 (0.57) |
| ABZ06 | 10 × 10 | 943 | 943 (0) | 943 (0) | 947 (0.42) | 949 (0.64) | 964 (2.23) |
| ABZ07 | 20 × 15 | 656 | 657 (0.15) | 695 (5.95) | - | - | 719 (9.6) |
| ABZ08 | 20 × 15 | 648 | 713 (10.03) | 713 (10.03) | - | - | 738 (13.89) |
| ABZ09 | 20 × 15 | 679 | 680 (0.15) | 721 (6.19) | - | - | 742 (9.28) |
| MErr | | | 2.07 | 4.43 | 0.37 | 0.4 | 7.11 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Viana, M.S.; Contreras, R.C.; Morandin Junior, O. A New Frequency Analysis Operator for Population Improvement in Genetic Algorithms to Solve the Job Shop Scheduling Problem. Sensors 2022, 22, 4561. https://doi.org/10.3390/s22124561

