Article

Chaotic Multi-Objective Simulated Annealing and Threshold Accepting for Job Shop Scheduling Problem

by Juan Frausto-Solis 1,*, Leonor Hernández-Ramírez 1, Guadalupe Castilla-Valdez 1, Juan J. González-Barbosa 1 and Juan P. Sánchez-Hernández 2

1 Graduate Program Division, Tecnológico Nacional de México/Instituto Tecnológico de Ciudad Madero, Cd. Madero 89440, Mexico
2 Dirección de Informática, Electrónica y Telecomunicaciones, Universidad Politécnica del Estado de Morelos, Boulevard Cuauhnáhuac 566, Jiutepec 62574, Mexico
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2021, 26(1), 8; https://doi.org/10.3390/mca26010008
Submission received: 26 September 2020 / Revised: 7 January 2021 / Accepted: 8 January 2021 / Published: 12 January 2021
(This article belongs to the Special Issue Numerical and Evolutionary Optimization 2020)

Abstract: The Job Shop Scheduling Problem (JSSP) has enormous industrial applicability. The problem consists of a set of jobs that must be processed in a specific order on a set of machines. For the single-objective JSSP, Simulated Annealing is among the best algorithms. However, for the Multi-Objective JSSP (MOJSSP), algorithms of this family have barely been analyzed, and a Threshold Accepting algorithm has not been published for this problem. It is worth mentioning that researchers in this area have not reported studies with more than three objectives, and they typically evaluate performance with no more than two or three metrics. In this paper, we present two MOJSSP metaheuristics based on Simulated Annealing: Chaotic Multi-Objective Simulated Annealing (CMOSA) and Chaotic Multi-Objective Threshold Accepting (CMOTA). We developed these algorithms to minimize three objective functions and compared them, using the HV metric, with the recently published algorithms MOMARLA, MOPSO, CMOEA, and SPEA. The best algorithm is CMOSA (HV of 0.76), followed by MOMARLA and CMOTA (HV of 0.68) and MOPSO (HV of 0.54). In addition, we present a complexity comparison of these algorithms, showing that CMOSA, CMOTA, and MOMARLA fall in a similar complexity class, followed by MOPSO.

1. Introduction

The Job Shop Scheduling Problem (JSSP) has enormous industrial applicability. The problem consists of a set of jobs, formed by operations, which must be processed on a set of machines subject to precedence and resource-capacity constraints. Finding the optimal solution is computationally hard; the problem belongs to the NP-hard class [1,2]. On the other hand, the JSSP foundations provide a theoretical background for developing efficient algorithms for other significant sequencing problems with many production-system applications [3]. Furthermore, designing and evaluating new algorithms for JSSP is relevant not only because it represents a big challenge but also because of its high industrial applicability [4].
There are several JSSP taxonomies; one of them distinguishes single-objective from multi-objective optimization. The single-objective version has been widely studied for many years, and Simulated Annealing (SA) [5] is among the best algorithms. The Threshold Accepting (TA) algorithm, from the same family, is also very efficient in this area [6]. In contrast, for Multi-Objective Optimization Problems (MOOPs), JSSP implementations of both algorithms, and comparisons between them, are scarce.
Published JSSP algorithms for MOOP include only a few objectives, and only a few performance metrics are reported. However, it is common for the industrial scheduling requirements to have several objectives, and then the Multi-Objective JSSP (MOJSSP) becomes an even more significant challenge. Thus, many industrial production areas require the multi-objective approach [7,8].
In single-objective optimization, the goal is to find the optimal feasible solution of an objective function; in other words, to find the best values of the variables that fulfill all the constraints of the problem. For MOJSSP, in contrast, the problem is to optimize a set of objective functions $f_1(x), f_2(x), \ldots, f_n(x)$ depending on a set of variables x and subject to a set of constraints defined by these variables. Finding a single solution that optimizes all the objectives is usually impossible because improving some objective functions may worsen the others. In MOOP, a preference relation or Pareto dominance relation produces a set of solutions commonly called the Pareto optimal set [9]. The Decision Makers (DMs) should select from the Pareto set the solution that satisfies their preferences, which can be subjective, based on experience, or influenced by the industrial environment's needs [10]. Therefore, the DM needs a Pareto front that contains multiple representative compromise solutions, exhibiting both good convergence and diversity [11].
In the study of single-objective JSSP, many algorithms have been applied; some of the most common are SA, Genetic Algorithms (GAs), Tabu Search (TS), and Ant Systems (ASs) [12]. For MOJSSP, however, few works in the literature solve instances with more than two objectives or apply more than two metrics to evaluate their performance. The works of Zhao [14] and Mendez [8] are exceptions because these authors presented implementations with two or three significant objective functions and two performance metrics. Moreover, SA and TA have proven very efficient for solving NP-hard problems. Thus, this paper's motivation is to develop new efficient SA-based algorithms for MOJSSP with two or more objective functions and a larger number of performance metrics.
The first adaptation of SA to MOOP was an algorithm proposed in 1992, known as MOSA [16]. An essential part of this algorithm is that it applies the Boltzmann criterion for accepting bad solutions, commonly used in single-objective JSSP, while combining several objective functions. Single-objective SA and the multi-objective MOSA differ in several aspects: how the energy functions are determined, how new solutions are generated and used, and how their quality is measured; as is well known, these energy functions are required in the acceptance criterion. Multiple versions of MOSA have been proposed over the years. One of them, published in 2008, is AMOSA, which surpassed other MOOP algorithms of its time [17]. In this work, we adapt this algorithm for MOJSSP. TA [6] is an algorithm for single-objective JSSP that is very similar to Simulated Annealing. The two algorithms have the same structure, both use a temperature parameter, and both accept some bad solutions to escape from local optima. In addition, they are among the best JSSP algorithms, and their performance is very similar. Nevertheless, a TA algorithm for MOJSSP has not been published, and so, for obvious reasons, it has not been compared with the multi-objective SA version.
MOJSSP has been commonly solved using IMOEA/D [14], NSGA-II [18], SPEA [19], MOPSO [20], and CMOEA [21]; the latter was renamed CMEA in [8]. Nevertheless, the number of objectives and performance metrics of these algorithms remains too small. The Evolutionary Algorithm based on decomposition proposed in 2016 by Zhao in [14] was considered the best algorithm [22]. The Multi-Objective Q-Learning algorithm (MOQL) for JSSP was published in 2017 [23]; this approach uses several agents to solve JSSP. An extension of MOQL is MOMARLA, which was proposed in 2019 by Mendez [8]. This MOJSSP algorithm uses two objective functions: makespan and total tardiness. MOMARLA overcomes the classical multi-objective algorithms SPEA [19], CMOEA [21], and MOPSO [20].
The two new algorithms presented in this paper for JSSP are Chaotic Multi-Objective Simulated Annealing (CMOSA) and Chaotic Multi-Objective Threshold Accepting (CMOTA). The first algorithm is inspired by the classic MOSA algorithm [17]. However, CMOSA is different in three aspects: (1) for the first time it is designed specifically for MOJSSP, (2) it uses an analytical tuning of the cooling scheme parameters, and (3) it uses chaotic perturbations for finding new solutions and for escaping from local optima. This process allows the search to continue from a different point in the solution space and it contributes to a better diversity of the generated solutions. Furthermore, CMOTA is based on CMOSA and Threshold Accepting, and it does not require the Boltzmann distribution. Instead, it uses a threshold strategy for accepting bad solutions to escape from local optima. In addition, a chaotic perturbation function is applied.
In this paper, we present two new alternatives for MOJSSP, and we consider three objective functions: makespan, total tardiness, and total flow time. The first objective is very relevant for production management applications [7], while the other two are critical for enhancing client attention service [23]. In addition, we use six metrics to evaluate these algorithms: Mean Ideal Distance (MID), Spacing (S), Hypervolume (HV), Spread (Δ), Inverted Generational Distance (IGD), and Coverage (C). We also apply an analytical parameter-tuning method to these algorithms. Finally, we compare the achieved results with those obtained with the JSSP algorithms of [8,14].
The rest of the paper is organized as follows. In Section 2, we make a qualitative comparison of related MOJSSP works. In Section 3, we present MOJSSP concepts and the performance metrics that were applied. Section 4 presents the formulation of MOJSSP with three objectives. The proposed algorithms, their tuning method, and the chaotic perturbation are also shown in Section 5. Section 6 shows the application of the proposed algorithms to a set of 70, 58, and 15 instances. Finally, the results are shown and compared with previous works. In Section 7, we present our conclusions.

2. Related Works

As mentioned above, in single-objective optimization, the JSSP community has broadly investigated the performance of the different solution methods. However, the situation is entirely different for MOJSSP, where the number of published works is small. In 1994, an analysis of SA-family algorithms for JSSP was presented [24]; two of them were SA and TA, which we briefly explain next. These algorithms suppose that the solutions define a set of macrostates of a set of particles, while the objective functions' values represent their energy, and both algorithms have a Metropolis cycle where the neighborhood of solutions is explored. In single-objective optimization, for the set of instances used to evaluate JSSP algorithms, SA obtained better results than TA. In SA, a solution better than the previous one is always accepted, while a worse solution may be accepted depending on the Boltzmann distribution criterion. This distribution is related to the current temperature value and the increment or decrement of energy (associated with the objective functions) at that temperature. In the TA case, a worse solution may be accepted using a criterion that tries to emulate the Boltzmann distribution: a worse solution is accepted when the increment of energy is smaller than a threshold value that depends on the temperature and a parameter γ very close to one. At the beginning of the process, the threshold values are large because they depend on the temperatures. Subsequently, the temperature parameter is gradually decreased until a value close to zero is reached, at which point the threshold is very small.
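For illustration, the two acceptance rules described above can be sketched as follows; this is a minimal sketch, and the function names and the threshold form `gamma * t` are our simplification of the temperature-dependent threshold described in [24]:

```python
import math
import random

def sa_accepts(delta_e, t, rng):
    # Boltzmann criterion (SA): always accept improvements;
    # accept a deterioration with probability e^(-delta_e / t).
    return delta_e < 0 or math.exp(-delta_e / t) > rng.random()

def ta_accepts(delta_e, t, gamma=0.95):
    # Threshold Accepting: deterministically accept any move whose
    # deterioration is below the temperature-dependent threshold.
    return delta_e < gamma * t
```

At high temperatures both rules accept nearly every move; as the temperature approaches zero, both reduce to accepting only improvements.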
In 2001, a Multi-Objective Genetic Algorithm was proposed to minimize the makespan, total tardiness, and the total idle time [25]. The proposed methodology for JSSP was assessed with 28 benchmark problems. In this publication, the authors randomly weighted the different fitness functions to determine their results.
In 2006, SA was used for two objectives: the makespan and the mean flow time [26]. This algorithm was called Pareto Archived Simulated Annealing (PASA), which used the Simulated Annealing algorithm with an overheating strategy to escape from local optima and to improve the quality of the results. The performance of this algorithm was evaluated with 82 instances taken from the literature. Unfortunately, this method has not been updated for three or more objective functions.
In 2011, a two-stage genetic algorithm (2S-GA) was proposed for JSSP with three objectives to minimize the makespan, total weighted earliness, and total weighted tardiness [13]. In the first stage, a parallel GA found the best solution for each objective function. Then, in the second stage, the GA combined the populations, which evolved using the weighted aggregating objective function.
Researchers from the Contemporary Design and Integrated Manufacturing Technology (CDIMT) laboratory proposed an algorithm named Improved Multi-Objective Evolutionary Algorithm based on Decomposition (IMOEA/D) to minimize the makespan, tardiness, and total flow time [14]. The authors experimented with 58 benchmark instances and used the performance metrics Coverage [27] and Mean Ideal Distance (MID) [28] to evaluate their algorithm. As Table 1 shows, several studies consider two or three objectives but do not report any metric. IMOEA/D stands out from the rest of the literature not only because the authors reported good results but also because they considered a more significant number of objectives and applied two metrics.
In 2008, the AMOSA algorithm, based on SA for several objectives, was proposed [17]. The authors reported that AMOSA performed better than some MOEA algorithms, among them NSGA-II [29]. They presented the main Boltzmann rules for accepting bad solutions. Unfortunately, a MOJSSP implementation of AMOSA with more than two objectives has not been published.
In 2017, a hybrid algorithm between an NSGA-II and a linear programming approach was proposed [15]; it was used to solve the FT10 instance of Taillard [30]. This algorithm minimized the weighted tardiness and energy costs. To evaluate the performance, the authors only used the HV metric.
In 2019, MOMARLA was proposed, a new algorithm based on Q-Learning to solve MOJSSP [8]. This work provided flexibility to use decision-maker preferences; each agent represented a specific objective and used two action selection strategies to find a diverse and accurate Pareto front. In Table 1, we present the last related studies for MOJSSP and the proposed algorithms.
This paper analyzes our algorithms CMOSA and CMOTA, as follows: (a) comparing CMOSA and CMOTA versus IMOEA/D [14], (b) comparing our algorithms with the results published for MOMARLA, MOPSO, CMOEA, and SPEA, and (c) comparing CMOSA versus CMOTA.

3. Multi-Objective Optimization

In a single-objective problem, the algorithm finishes its execution when it finds the solution that optimizes the objective function, or one very close to it. For Multi-Objective Optimization, however, the situation is more complicated, since several objectives must be optimized simultaneously. It is then necessary to find a set of solutions, as the objectives can be conflicting: the best solution for one objective function may not be the best for the others.

3.1. Concepts

Definitions of some concepts of Multi-Objective Optimization are shown below.
Pareto Dominance: In general, for any optimization problem, solution A dominates another solution B if the following conditions are met [31]: A is strictly better than B on at least one objective, and A is not worse than B for any objective function.
Non-dominated set: In a set of P solutions, the non-dominated subset P1 consists of solutions that satisfy the following conditions [31]: any pair of solutions in P1 must be mutually non-dominated, and any solution that does not belong to P1 is dominated by at least one member of P1.
Pareto optimal set: The set of non-dominated solutions of the total search space.
Pareto front: The graphic representation of the non-dominated solutions of the multi-objective optimization problem.
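These definitions translate directly into code. The following is a minimal sketch for a minimization problem, with objective vectors represented as tuples (the helper names are ours):

```python
def dominates(a, b):
    # a Pareto-dominates b (minimization): a is no worse on every
    # objective and strictly better on at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    # Keep only the solutions not dominated by any other solution.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]
```

For example, `non_dominated([(1, 5), (2, 2), (3, 1), (4, 4)])` discards (4, 4), which is dominated by (2, 2).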

3.2. Performance Metrics

In an experimental comparison of different optimization techniques or algorithms, it is always necessary to have a notion of performance. In Multi-Objective Optimization, the definition of quality is much more complicated than for single-objective problems because the multi-objective optimization criterion itself consists of multiple objectives, of which the most important are:
  • To minimize the distance of the resulting non-dominated set to the true Pareto front.
  • To achieve an adequate (for instance, uniform) distribution of the solutions.
  • To maximize the extension of the non-dominated front for each of the objectives. In other words, a wide range of values must be covered by non-dominated solutions.
In general, it is difficult to find a single performance metric that encompasses all of the above criteria. In the literature, a large number of performance metrics can be found. The most popular performance metrics were used in this research and are described below:
Mean Ideal Distance: Evaluates the closeness of the calculated Pareto front ( P F c a l c ) solutions with an ideal point, which is usually (0, 0) [28].
$$\mathrm{MID} = \frac{\sum_{i=1}^{Q} c_i}{Q} \tag{1}$$
where $c_i = \sqrt{f_{1,i}^2 + f_{2,i}^2 + f_{3,i}^2}$; $f_{1,i}$, $f_{2,i}$, $f_{3,i}$ are the values of the i-th non-dominated solution for the first, second, and third objective functions; and $Q$ is the number of solutions in the $PF_{calc}$.
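A direct implementation of this metric; the function name `mid` and the use of the origin as the ideal point are illustrative:

```python
import math

def mid(front):
    # Mean Ideal Distance: average Euclidean distance from each solution
    # in the calculated Pareto front to the ideal point (the origin).
    return sum(math.sqrt(sum(f * f for f in sol)) for sol in front) / len(front)
```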
Spacing: Evaluates the distribution of non-dominated solutions in the P F c a l c . When several algorithms are evaluated with this metric, the best is that with the smallest S value [32].
$$S = \sqrt{\frac{\sum_{i=1}^{Q} \left(d_i - \bar{d}\right)^2}{Q}} \tag{2}$$
where $d_i$ measures the distance in the space of the objective functions between the i-th solution and its nearest neighbor, the j-th solution, in the $PF_{calc}$; $Q$ is the number of solutions in the $PF_{calc}$; $\bar{d}$ is the average of the $d_i$, that is, $\bar{d} = \frac{1}{Q}\sum_{i=1}^{Q} d_i$; and $d_i = \min_{j}\left(|f_{1,i}(x) - f_{1,j}(x)| + |f_{2,i}(x) - f_{2,j}(x)| + \cdots + |f_{M,i}(x) - f_{M,j}(x)|\right)$, where $f_{1,i}, f_{2,i}$ are the values of the i-th non-dominated solution for the first and second objective functions, $f_{1,j}, f_{2,j}$ are the corresponding values of the j-th non-dominated solution, M is the number of objective functions, and $i, j = 1, \ldots, Q$.
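The Spacing metric can be sketched as follows, using the Manhattan nearest-neighbor distance defined above (helper name ours):

```python
import math

def spacing(front):
    # Nearest-neighbor distance of each solution, using the Manhattan
    # distance in objective space.
    d = [min(sum(abs(fi - fj) for fi, fj in zip(a, b))
             for b in front if b is not a)
         for a in front]
    d_bar = sum(d) / len(d)
    # Root-mean-square deviation of the d_i around their mean.
    return math.sqrt(sum((di - d_bar) ** 2 for di in d) / len(d))
```

A perfectly evenly spaced front yields S = 0.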
Hypervolume: Calculates the volume in the objective space that is covered by all members of the non-dominated set [33]. The HV metric is measured based on a reference point (W), which can be found simply by constructing a vector with the worst values of the objective functions.
$$HV = \mathrm{volume}\left(\bigcup_{i=1}^{|Q|} v_i\right) \tag{3}$$
where $v_i$ is a hypercube constructed with the reference point W and the solution i as the diagonal corners of the hypercube [31]. An algorithm that obtains the largest HV value is better. To perform the calculation, the data should be normalized to the range [0, 1] for each objective separately.
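Exact hypervolume computation requires specialized algorithms; for intuition only, the following Monte Carlo sketch estimates HV for a minimization problem, assuming the objectives are already normalized so that the ideal point is the origin and W bounds the dominated region:

```python
import random

def hv_estimate(front, ref, samples=20000, seed=1):
    # Sample points uniformly in the box [0, ref]; count the fraction
    # dominated by at least one front member (minimization).
    rng = random.Random(seed)
    m = len(ref)
    hits = 0
    for _ in range(samples):
        p = [rng.uniform(0.0, ref[k]) for k in range(m)]
        if any(all(s[k] <= p[k] for k in range(m)) for s in front):
            hits += 1
    box = 1.0
    for r in ref:
        box *= r
    return box * hits / samples
```

For a single solution at (0.5, 0.5) with W = (1, 1), the estimate converges to the true hypervolume of 0.25.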
Spread: This metric was proposed to have a more precise coverage value and considers the distance to the (extreme points) of the true Pareto front ( P F t r u e ) [29].
$$\Delta = \frac{\sum_{k=1}^{M} d_k^{e} + \sum_{i=1}^{Q} \left|d_i - \bar{d}\right|}{\sum_{k=1}^{M} d_k^{e} + Q\, \bar{d}} \tag{4}$$
where $d_k^{e}$ measures the distance between the "extreme" point of the $PF_{true}$ for the k-th objective function and the nearest point of $PF_{calc}$; $d_i$ is the distance between the i-th solution of the $PF_{calc}$ and its nearest neighbor; $\bar{d}$ is the average of the $d_i$; and M is the number of objectives.
Inverted Generational Distance: It is an inverted indicator version of the Generational Distance (GD) metric, where all the distances are measured from the P F t r u e to the P F c a l c [1].
$$IGD(Q) = \frac{\left(\sum_{j=1}^{|T|} \hat{d}_j^{\,p}\right)^{1/p}}{|T|} \tag{5}$$
where $T = \{t_1, t_2, \ldots, t_{|T|}\}$ is the set of solutions in the $PF_{true}$ and $|T|$ is the cardinality of T; p is an integer parameter (in this paper, p = 2); and $\hat{d}_j$ is the Euclidean distance from $t_j$ to its nearest objective vector q in Q, according to (6).
$$\hat{d}_j = \min_{q \in Q} \sqrt{\sum_{m=1}^{M} \left(f_m(t_j) - f_m(q)\right)^2} \tag{6}$$
where $f_m(t_j)$ is the m-th objective function value of the member $t_j$ of T, and M is the number of objectives.
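A direct sketch of IGD with p = 2 (function name ours; `math.dist` computes the Euclidean distance between two points):

```python
import math

def igd(true_front, calc_front, p=2):
    # Distance from each true-front point to its nearest calculated point.
    d = [min(math.dist(t, q) for q in calc_front) for t in true_front]
    # p-norm of those distances, averaged over the true front.
    return sum(di ** p for di in d) ** (1.0 / p) / len(true_front)
```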
Coverage: Represents the dominance between set A and set B [27]. It is the ratio of the number of solutions in set B that are dominated by solutions in set A to the total number of solutions in set B. The C metric is defined by (7).
$$C(A, B) = \frac{\left|\{\, b \in B \mid \exists\, a \in A : a \preceq b \,\}\right|}{|B|} \tag{7}$$
When $C(A, B) = 1$, all solutions in B are dominated by or equal to solutions in A. Conversely, $C(A, B) = 0$ represents the situation in which no solution in B is dominated by any solution in A. The higher the value of $C(A, B)$, the more solutions in B are dominated by solutions in A. Both $C(A, B)$ and $C(B, A)$ should be considered, since $C(B, A)$ is not necessarily equal to $1 - C(A, B)$.
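The Coverage metric can be sketched as follows (minimization; weak dominance, i.e., "dominated or equal", as in the definition above):

```python
def coverage(A, B):
    # Fraction of solutions in B weakly dominated (dominated or equal)
    # by at least one solution in A.
    def weakly_dominates(a, b):
        return all(x <= y for x, y in zip(a, b))
    return sum(1 for b in B if any(weakly_dominates(a, b) for a in A)) / len(B)
```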

4. Multi-Objective Job Shop Scheduling Problem

In JSSP, there is a set of n different jobs, consisting of operations, that must be processed on m different machines. There is a set of precedence constraints for these operations, and there are also resource-capacity constraints ensuring that each machine processes only one operation at a time. The processing time of each operation is known in advance. The objective of JSSP is to determine the sequence of the operations on each machine (the start and finish time of each operation) to minimize certain objective functions subject to the constraints mentioned above. The most common objective is the makespan, which is the total time in which all the problem operations are processed. Nevertheless, real scheduling problems are multi-objective, and several objectives should be considered simultaneously.
The three objectives that are addressed in the present paper are:
Makespan: the maximum completion time of all jobs.
Total tardiness: the sum of the positive differences between each job's completion time and its due date.
Total flow time: the sum of the completion times of all jobs.
The formal MOJSSP model can be formulated as follows [34,35]:
$$\begin{aligned} \text{Optimize} \quad & F(x) = [f_1(x), f_2(x), \ldots, f_q(x)] \\ \text{Subject to:} \quad & x \in S \end{aligned} \tag{8}$$
where q is the number of objectives, x is the vector of decision variables, and S represents the feasible region, defined by the following precedence and capacity constraints, respectively:
$$t_j \geq t_i + p_i \quad \text{for all } i, j \in O \text{ when } i \text{ precedes } j$$
$$t_j \geq t_i + p_i \ \text{ or } \ t_i \geq t_j + p_j \quad \text{for all } i, j \in O \text{ when } M_i = M_j$$
where
  • $t_i$, $t_j$ are the starting times of operations i, $j \in O$.
  • $p_i$ and $p_j$ are the processing times of operations i, $j \in O$.
  • $J = \{J_1, J_2, J_3, \ldots, J_n\}$ is the set of jobs.
  • $M = \{M_1, M_2, M_3, \ldots, M_m\}$ is the set of machines.
  • O is the set of operations O j , i (operation i of the job j).
The objective functions of makespan, total tardiness, and total flow time, are defined by Equations (9)–(11), respectively.
$$f_1 = \min \max_{j=1,\ldots,n} C_j \tag{9}$$
where $C_j$ is the completion time of job j.
$$f_2 = \min \sum_{j=1}^{n} T_j = \min \sum_{j=1}^{n} \max(0, C_j - D_j) \tag{10}$$
where $T_j = \max(0, C_j - D_j)$ is the tardiness of job j, and $D_j$ is the due date of job j, calculated as $D_j = \tau \sum_{i=1}^{m} p_{j,i}$ [36], where $p_{j,i}$ is the time required to process job j on machine i. In this case, the due date of job j is the sum of the processing times of all its operations on all machines, multiplied by a narrowing factor $\tau$ in the range $1.5 \leq \tau \leq 2.0$ [14,36].
$$f_3 = \min \sum_{j=1}^{n} C_j \tag{11}$$
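Given per-job completion times $C_j$ and due dates $D_j$, the three objectives reduce to a few lines. This is a sketch only: computing the completion times from an operation sequence is the hard part, and it is what the metaheuristics search over.

```python
def objectives(completion, due):
    # completion[j] = C_j, due[j] = D_j for each job j.
    makespan = max(completion)                               # Eq. (9)
    total_tardiness = sum(max(0, c - d)
                          for c, d in zip(completion, due))  # Eq. (10)
    total_flow_time = sum(completion)                        # Eq. (11)
    return makespan, total_tardiness, total_flow_time
```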

5. Multi-Objective Proposed Algorithms

The two multi-objective algorithms presented in this section for solving JSSP are Chaotic Multi-Objective Simulated Annealing and Chaotic Multi-Objective Threshold Accepting. We describe these algorithms in this section after analyzing the single-objective optimization algorithms for JSSP.

5.1. Simulated Annealing

The algorithm SA proposed by Kirkpatrick et al. comes from a close analogy with the metal annealing process [5]. This process consists of heating and progressively cooling metal. As the temperature decreases, the molecules’ movement slows down and tends to adopt a lower energy configuration. Kirkpatrick et al. proposed this algorithm for combinatorial optimization problems and to escape from local minima. It starts with an initial solution and generates a new solution in its neighborhood. If the new solution is better than the old solution, then it is accepted. Otherwise, SA applies the Boltzmann distribution, which determines if a bad solution can be taken as a strategy for escaping from local optima. This process is repeated many times until an equilibrium condition is accomplished.
The SA algorithm is shown in Algorithm 1. Line 1 receives the parameters: the initial ( T_initial ) and final ( T_final ) temperatures, the alpha value ( α ) for decreasing the temperature, and beta ( β ) for increasing the length of the Metropolis cycle. The current temperature ( T_k ) is set in line 2. An initial solution ( s_current ) is generated randomly in line 3. The stop criterion is evaluated in line 4; this main cycle is repeated while the current temperature ( T_k ) is higher than the final temperature ( T_final ). The Metropolis cycle starts in line 5, and a neighboring solution ( s_new ) is generated in line 6. In line 7, the increment ΔE of the objective function between the current solution ( s_current ) and the new one ( s_new ) is determined. When this increment is negative (line 8), the new solution is better, and it replaces the current solution (line 9). Otherwise, the Boltzmann criterion is applied (lines 11 and 12); this criterion allows the algorithm to escape from local optima depending on the current temperature and delta values. Finally, line 16 increases the length of the Metropolis cycle, and in line 17, the cooling function is applied to reduce the current temperature.
Algorithm 1 Classic Simulated Annealing algorithm
1: procedure SA( T_initial , T_final , α , β , L_k )
2:   T_k ← T_initial
3:   s_current ← RandomInitialSolution()
4:   while T_k ≥ T_final do
5:     for 1 to L_k do
6:       s_new ← perturbation( s_current )
7:       ΔE ← E( s_new ) − E( s_current )
8:       if ΔE < 0 then
9:         s_current ← s_new
10:      else
11:        if e^(−ΔE/T_k) > random(0, 1) then
12:          s_current ← s_new
13:        end if
14:      end if
15:    end for
16:    L_k ← β × L_k
17:    T_k ← α × T_k
18:  end while
19:  return s_current
20: end procedure
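A minimal executable sketch of Algorithm 1, with the energy and perturbation functions passed in as parameters (the parameter values below are illustrative defaults, not the tuned values used in the paper):

```python
import math
import random

def simulated_annealing(energy, perturb, s0,
                        t_initial=100.0, t_final=0.01,
                        alpha=0.9, beta=1.05, l0=20, seed=0):
    # Geometric cooling with a Metropolis cycle that grows by beta
    # at every temperature, as in Algorithm 1.
    rng = random.Random(seed)
    t_k, l_k, s = t_initial, float(l0), s0
    while t_k >= t_final:
        for _ in range(int(l_k)):
            s_new = perturb(s, rng)
            delta = energy(s_new) - energy(s)
            # Accept improvements; accept deteriorations with the
            # Boltzmann probability e^(-delta / T_k).
            if delta < 0 or math.exp(-delta / t_k) > rng.random():
                s = s_new
        l_k *= beta   # lengthen the Metropolis cycle
        t_k *= alpha  # geometric cooling
    return s

# Toy usage: minimize x^2 over the integers with +/-1 moves.
best = simulated_annealing(lambda x: x * x,
                           lambda x, rng: x + rng.choice((-1, 1)),
                           s0=40)
```

For JSSP, the solution would instead be an operation sequence and the perturbation a neighborhood move over that sequence.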

5.2. Analytical Tuning for Simulated Annealing

The parameter-tuning process for the SA algorithm used in this paper is based on a method proposed in [37]. This method establishes that the initial and final temperatures are functions of the maximum and minimum cost deteriorations $\Delta E_{max}$ and $\Delta E_{min}$, respectively. These quantities appear in the Boltzmann distribution criterion, which states that a bad solution is accepted at a temperature T when $random(0,1) \leq e^{-\Delta E / T}$. For JSSP, $\Delta E$ is obtained from the makespan. For this tuning method, $\Delta E_{max}$ and $\Delta E_{min}$ are obtained from the neighborhoods of different randomly generated solutions; a set of preliminary SA executions must be carried out to obtain them. These values are then used in the Boltzmann distribution to determine the initial and final temperatures, after which the remaining parameters of the Metropolis cycle are determined. The process is detailed in the next paragraphs.
Initial temperature ( T_initial ): the temperature value from which the search process begins. At high temperatures, the probability of accepting a new solution is almost 1, even when its cost deterioration is maximal. The initial temperature is therefore associated with the maximum allowed deterioration and its defined acceptance probability. Let $s_i$ be the current solution, $s_j$ a new proposed solution, and $E(s_i)$ and $E(s_j)$ their associated costs; the maximum and minimum deteriorations are $\Delta E_{max}$ and $\Delta E_{min}$. Then $P(\Delta E_{max})$, the probability of accepting a solution with the maximum deterioration, is calculated with (12), and the initial temperature ( T_initial ) is obtained from (13).
$$P(\Delta E_{max}) = e^{-\Delta E_{max} / T_{initial}} \tag{12}$$
$$T_{initial} = \frac{-\Delta E_{max}}{\ln\left(P(\Delta E_{max})\right)} \tag{13}$$
Final temperature ( T_final ): the temperature value at which the search stops. In the same way, the final temperature is determined with (14) according to $P(\Delta E_{min})$, the probability of accepting a solution with the minimum deterioration.
$$T_{final} = \frac{-\Delta E_{min}}{\ln\left(P(\Delta E_{min})\right)} \tag{14}$$
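Equations (13) and (14) can be applied directly once $\Delta E_{max}$ and $\Delta E_{min}$ have been sampled; the acceptance probabilities 0.95 and 0.01 below are illustrative choices, not the paper's tuned values:

```python
import math

def tune_temperatures(delta_e_max, delta_e_min, p_max=0.95, p_min=0.01):
    # Eq. (13): temperature at which the maximum deterioration is
    # accepted with probability p_max.
    t_initial = -delta_e_max / math.log(p_max)
    # Eq. (14): temperature at which the minimum deterioration is
    # accepted with probability p_min.
    t_final = -delta_e_min / math.log(p_min)
    return t_initial, t_final
```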
Alpha value ( α ): the temperature decrease factor. This parameter determines how fast the temperature decreases; values near 0.7 are usually used for fast cooling and values near 0.99 for slow cooling.
Cooling scheme: This function specifies how the temperature is decreased. In this case, the current temperature ( T_k ) follows the geometric scheme (15).
$$T_{k+1} = \alpha\, T_k \tag{15}$$
Length of the Markov chain, or iterations in the Metropolis cycle ( L_k ): the number of iterations of the Metropolis cycle performed at each temperature k; this number can be constant or variable. It is well known that at high temperatures only a few iterations are required, since stochastic equilibrium is rapidly reached [37]. However, at low temperatures a much more exhaustive level of exploration is required, so a larger L_k value must be used. If L_min is the value of L_k at the initial temperature and L_max is the value at the final temperature, then Formula (16) is used.
$$L_{k+1} = \beta\, L_k \tag{16}$$
where β is the increment coefficient of L k . Since the Functions (15) and (16) are applied successively in SA from the initial to the final temperature, T f i n a l and L m a x are calculated with (17) and (18).
$$T_{final} = \alpha^{n}\, T_{initial} \tag{17}$$
$$L_{max} = \beta^{n}\, L_{min} \tag{18}$$
In (17) and (18) n is the number of steps from T i n i t i a l to T f i n a l , then (19) and (20) are obtained.
$$n = \frac{\ln(T_{final}) - \ln(T_{initial})}{\ln(\alpha)} \tag{19}$$
$$\beta = e^{\left(\frac{\ln(L_{max}) - \ln(L_{min})}{n}\right)} \tag{20}$$
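A small helper applying Equations (19) and (20) (name and arguments are ours):

```python
import math

def tune_schedule(t_initial, t_final, alpha, l_min, l_max):
    # Eq. (19): number of temperature steps from t_initial to t_final.
    n = (math.log(t_final) - math.log(t_initial)) / math.log(alpha)
    # Eq. (20): growth factor so that l_min reaches l_max in n steps.
    beta = math.exp((math.log(l_max) - math.log(l_min)) / n)
    return n, beta
```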
The probability of selecting the solution s j from N random samples in the neighborhood V s i is given by (21); and from this equation, the N value is obtained in (22), where the exploration level C is defined in Equation (23).
$$P(S_j) = 1 - e^{-N / |V_{s_i}|} \tag{21}$$
$$N = -|V_{s_i}| \ln\left(1 - P(S_j)\right) = C\, |V_{s_i}| \tag{22}$$
$$C = -\ln\left(1 - P(S_j)\right) \tag{23}$$
The length of the Markov chain, or the number of iterations of the Metropolis cycle, is defined by (24).
$$L_{max} = N = C\, |V_{s_i}| \tag{24}$$
To guarantee a good exploration level, the C value determined by (23) must be established between 1 C 4.6 [38].
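Equations (22) and (23) in code; the acceptance probability 0.98 in the usage below is an illustrative choice that keeps C inside the recommended range:

```python
import math

def sample_size(neighborhood_size, p_accept):
    # Eq. (23): exploration level C; Eq. (22): samples N = C * |V_si|.
    c = -math.log(1.0 - p_accept)
    return c * neighborhood_size, c

# Usage: reaching a given neighbor with probability 0.98 requires
# C ~ 3.9, inside the recommended range 1 <= C <= 4.6.
N, C = sample_size(100, 0.98)
```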

5.3. Chaotic Multi-Objective Simulated Annealing (CMOSA)

As previously mentioned, the AMOSA algorithm was proposed in [17]. However, that algorithm was designed for general purposes. In this work, we adapt AMOSA to JSSP to include the following features: (1) the mathematical constraints of MOJSSP, and (2) the objective functions makespan, total tardiness, and total flow time.
CMOSA has the features described above plus the following three elements: (1) a new structure, (2) a chaotic perturbation, and (3) dominance-based selection of solutions. These elements are described in the next subsections.

5.3.1. CMOSA Structure

The CMOSA algorithm uses a chaotic phase to improve the quality of the solutions considering the three objectives. Algorithm 2 receives its parameters in line 1: initial temperature (T_initial), final temperature (T_final), alpha (α), beta (β), the number of Metropolis iterations in every cycle (L_k), and the initial solution (s_current) to be improved. In lines 2 and 3, the variables of the algorithm are initialized. In line 4, s_current is processed to obtain the values of the three objectives as output. In line 5, the initial temperature is established as the current temperature (T_k). Then, the main cycle begins in line 6; this cycle is repeated as long as the current temperature is greater than or equal to the final temperature. In line 7, the Metropolis cycle begins. Subsequently, the algorithm verifies whether it is stagnant in line 8. If that is the case, lines 9 to 20 are executed. The number of iterations of the local search is established in line 10; this value is the number of tasks of the instance multiplied by an experimentally tuned parameter (in this case, timesLS = 10).
Algorithm 2 Chaotic Multi-Objective Simulated Annealing (CMOSA)
1: procedure CMOSA(T_initial, T_final, α, β, L_k, s_current)
2:     MAXSTAGNANT ← 10, counterTrapped ← 0, isCaught ← FALSE
3:     iterationsLocalSearch ← tasks × timesLS, verifyCaught ← TRUE, countCaught ← 0
4:     mks_current, tds_current, flt_current ← calculateValues(s_current) ▹ mks: makespan, tds: tardiness, flt: flow time
5:     T_k ← T_initial
6:     while T_k ≥ T_final do
7:         for i ← 0 to L_k do
8:             if isCaught = TRUE then
9:                 isCaught ← FALSE
10:                for k ← 0 to iterationsLocalSearch do
11:                    if k = 0 then
12:                        s_new ← chaoticPerturbation(s_current) ▹ See Algorithm 4
13:                    else
14:                        s_new ← regularPerturbation(s_current) ▹ Exchange of two operations
15:                    end if
16:                    mks_new, tds_new, flt_new ← calculateValues(s_new)
17:                    if (mks_new < mks_current) AND (tds_new < tds_current) AND (flt_new < flt_current) then
18:                        s_current ← s_new
19:                    end if
20:                end for
21:            else
22:                s_new ← regularPerturbation(s_current)
23:                mks_new, tds_new, flt_new ← calculateValues(s_new)
24:            end if
25:            if (mks_new ≠ mks_current) AND (tds_new ≠ tds_current) AND (flt_new ≠ flt_current) then
26:                verifyDominanceCMOSA(T_k, s_new, s_current) ▹ See Algorithm 5
27:            end if
28:        end for
29:        if verifyCaught = TRUE then
30:            if caught(s_current, counterTrapped) = TRUE then ▹ See Algorithm 3
31:                countCaught ← countCaught + 1
32:                if countCaught = MAXSTAGNANT then
33:                    verifyCaught ← FALSE
34:                end if
35:            end if
36:        end if
37:        L_k ← β × L_k
38:        T_k ← α × T_k
39:    end while
40:    return s_current
41: end procedure
In line 11, a local search begins. In the first iteration of this search, a chaotic perturbation (explained in Algorithm 4) is applied to s_current (line 12) to restart the search process from another point of the solution space. In further iterations, a regular perturbation is applied (line 14), which consists only of exchanging the positions of two operations in the solution, always verifying that the generated solution is feasible. In line 16, s_new is processed to obtain the values of the three objectives. Subsequently, and only if the new solution dominates the current solution in the three objectives, the new solution is used to continue the search process (lines 17 and 18). When the algorithm is not stagnant, a regular perturbation is applied, and the flow continues (line 22). If the current and the new solutions are different, we proceed with the dominance verification process to determine which solution is used to continue the search (line 26); this process is explained in Algorithm 5. Finally, from lines 29 to 36, a process is applied to limit the number of times the algorithm is stagnant (see Algorithm 3). The algorithm is determined to be stagnant if, after some iterations, it fails to generate a new non-dominated solution; here, the stagnation is limited to 10 iterations. In line 37, the number of repetitions of the Metropolis cycle (L_k) is increased by multiplying its previous value by the β parameter, and in line 38 the current temperature (T_k) is decreased by multiplying it by α. Finally, in line 40, the stored solution (s_current) is returned as the output of the algorithm.
Algorithm 3 shows the process that is carried out to verify the stagnation mentioned in line 30 of Algorithm 2.
Algorithm 3 Caught
1: procedure caught(s_current, counterTrapped)
2:     isCaught ← FALSE, timesDominated ← 0, maxTrapped ← 10
3:     timesDominated ← countTimesDominated(s_current)
4:     if timesDominated = 0 then
5:         F ← F ∪ {s_current}
6:     end if
7:     if timesDominated ≥ 1 then
8:         counterTrapped ← counterTrapped + 1
9:     end if
10:    if counterTrapped = maxTrapped then
11:        isCaught ← TRUE
12:        counterTrapped ← 0
13:    end if
14:    return isCaught
15: end procedure
In Algorithm 3, the current solution (s_current) and the counter of the times it has been trapped (counterTrapped) are received as input. In line 2, the variables used are initialized. Then, the number of times the current solution is dominated by at least one solution from the non-dominated front is counted (line 3). If the current solution is non-dominated (line 4), it is stored in the front of non-dominated solutions (line 5). If the current solution is dominated by at least one solution (line 7), then counterTrapped is incremented (line 8). When counterTrapped equals the maximum number of traps allowed (line 10), isCaught is set to TRUE (line 11), and the trap counter is reset to zero in line 12.

5.3.2. Chaotic Perturbation

The logistic equation, or logistic map, is a well-known mathematical model of the biologist Robert May for a simple demographic problem [39]. It gives the population of the n-th generation as a function of the size of the previous generation; this value is obtained with the popular logistic model mathematically expressed as:
x_{n+1} = r x_n (1 − x_n)    (25)
In Equation (25), the variable x_n takes values between zero and one. This variable represents the fraction of individuals in a specific situation (for instance, inside a territory or with a particular feature) at a given instant n. The parameter r is a positive number representing the combined ratio of reproduction and mortality. Although we are not interested in demographic or similar problems in this paper, we note that this variable changes very quickly, so it can be taken as a chaotic variable. Thus, we use it to build a chaotic perturbation function, which may help our CMOTA and CMOSA algorithms escape from local optima.
The chaotic function used is very sensitive to changes in the initial conditions, and this characteristic is exploited to perturb the solution and escape from local optima. The chaotic perturbation is thus a process carried out to restart the search from another point of the solution space.
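A minimal C sketch of the logistic map (25) follows; with r = 4 the orbit is chaotic, so two nearby seeds separate quickly. The function name is illustrative, not taken from the published implementation.

```c
#include <math.h>

/* Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n), Equation (25),
 * for a given number of steps and return the final value. */
double logistic_iterate(double x0, double r, int steps) {
    double x = x0;
    for (int i = 0; i < steps; i++)
        x = r * x * (1.0 - x);
    return x;
}
```

For r = 4 the orbit stays in [0, 1]; note the fixed point x = 0.75 (since 4 · 0.75 · 0.25 = 0.75), while almost every other seed wanders chaotically over the interval.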
Algorithm 4 can be explained in three steps. First, the feasible operations (operations that can be performed without violating any restriction) are searched (line 4). Second, if there is only one feasible operation (line 5), it is the last one, and it is selected (line 6). When there is more than one feasible operation, a chaotic function is applied to select among them; in this case, the logistic function is used (lines 8-19), which selects an operation when the chaotic value exceeds the 0.5 threshold (i.e., falls in the range (0.5, 1]). Finally, the selected operation is added to the new solution (line 21). This process is repeated until all the operations have been selected.
Algorithm 4 Chaotic perturbation
1: procedure chaoticPerturbation(s_current)
2:     feasibleTasksNumber ← 0, r ← 4, repeat ← TRUE, X_n ← 0, X_n1 ← 0
3:     while counter < tasks do
4:         feasibleTasksNumber ← searchFeasibleTasks()
5:         if feasibleTasksNumber = 1 then
6:             index ← 0
7:         else
8:             while repeat = TRUE do
9:                 X_n ← random(0, 1)
10:                for i ← 0 to feasibleTasksNumber do
11:                    X_n1 ← (r × X_n) × (1.0 − X_n)
12:                    if X_n1 > 0.5 then
13:                        index ← i
14:                        repeat ← FALSE
15:                        break
16:                    end if
17:                    X_n ← X_n1
18:                end for
19:            end while
20:        end if
21:        s_new ← addTask(index)
22:        counter ← counter + 1
23:    end while
24:    return s_new
25: end procedure

5.3.3. Applying Dominance to Select Solutions

In Algorithm 5, the current solution (s_current) is compared with the new solution (s_new) to determine which solution is used to continue the search. In this comparison, there are three cases:
  • If s_new dominates s_current, then s_new is used to continue the search (lines 3 to 6).
  • If s_new is dominated by s_current, then the differences between the two solutions are calculated separately for each objective and summed to obtain the parameter Δ, which is used to determine whether s_new continues the search according to the condition in line 12. In this case, s_current is added to the non-dominated front (F), and s_new replaces s_current (lines 13 and 14).
  • If the two solutions do not dominate each other, then the current solution s_current is added to the non-dominated front (F), and the search continues with s_new (lines 18 to 21).
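The Pareto dominance test underlying these three cases can be sketched in C as follows; the struct and function names are ours, for illustration only.

```c
#include <stdbool.h>

/* Three minimized objectives: makespan, total tardiness, total flow time. */
typedef struct {
    double mks, tds, flt;
} Objectives;

/* a dominates b when a is no worse in every objective
 * and strictly better in at least one. */
bool dominates(Objectives a, Objectives b) {
    bool no_worse = a.mks <= b.mks && a.tds <= b.tds && a.flt <= b.flt;
    bool strictly_better = a.mks < b.mks || a.tds < b.tds || a.flt < b.flt;
    return no_worse && strictly_better;
}
```

When neither `dominates(a, b)` nor `dominates(b, a)` holds, the two solutions are mutually non-dominated, which is the third case above.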
Algorithm 5 Verify dominance CMOSA
1: procedure verifyDominanceCMOSA(T_k, s_new, s_current, mks_new, tds_new, flt_new, mks_current, tds_current, flt_current)
2:     newDominateCurrent ← FALSE, currentDominateNew ← FALSE
3:     if s_new dominates s_current then
4:         s_current ← s_new
5:         newDominateCurrent ← TRUE
6:     end if
7:     if s_current dominates s_new then
8:         Δ_MKS ← mks_new − mks_current
9:         Δ_TDS ← tds_new − tds_current
10:        Δ_FLT ← flt_new − flt_current
11:        Δ ← Δ_MKS + Δ_TDS + Δ_FLT
12:        if random(0, 1) < e^(−Δ/T_k) then
13:            F ← F ∪ {s_current}
14:            s_current ← s_new
15:        end if
16:        currentDominateNew ← TRUE
17:    end if
18:    if (newDominateCurrent = FALSE) AND (currentDominateNew = FALSE) then
19:        F ← F ∪ {s_current}
20:        s_current ← s_new
21:    end if
22:    return s_current
23: end procedure

5.4. Chaotic Multi-Objective Threshold Accepting (CMOTA)

In 1990, Dueck et al. proposed the TA algorithm as a general-purpose algorithm for the solution of combinatorial optimization problems [6]. TA has a simpler structure than SA and is very efficient for solving many problems, but it has never been applied to MOJSSP. The difference between SA and TA lies basically in the criterion for accepting bad solutions: TA accepts every new configuration that is not much worse than the old one, whereas SA accepts worse solutions only with small probabilities. An apparent advantage of TA is its simplicity, since it is not necessary to compute probabilities or to make decisions based on a Boltzmann probability distribution.
Algorithm 6 shows the CMOTA algorithm, which has the same structure as CMOSA. Both algorithms have a temperature cycle and, within it, a Metropolis cycle. In these algorithms, a perturbation is applied to the current solution; then, the dominance relation between the two solutions is verified to determine which of them is used to continue the search process (Algorithm 7). Finally, the variable that controls the iterations of the Metropolis cycle is increased, the temperature is reduced, and the counter of the number of temperatures (line 39) is incremented.
Algorithm 6 Chaotic Multi-Objective Threshold Accepting (CMOTA)
1: procedure CMOTA(T_initial, T_final, α, β, L_k, s_current)
2:     counter ← 1, MAXSTAGNANT ← 10, counterTrapped ← 0, isCaught ← FALSE
3:     iterationsLocalSearch ← tasks × timesLS, verifyCaught ← TRUE, countCaught ← 0
4:     mks_current, tds_current, flt_current ← calculateValues(s_current) ▹ mks: makespan, tds: tardiness, flt: flow time
5:     T_k ← T_initial
6:     while T_k ≥ T_final do
7:         for i ← 0 to L_k do
8:             if isCaught = TRUE then
9:                 isCaught ← FALSE
10:                for k ← 0 to iterationsLocalSearch do
11:                    if k = 0 then
12:                        s_new ← chaoticPerturbation(s_current) ▹ See Algorithm 4
13:                    else
14:                        s_new ← regularPerturbation(s_current) ▹ Exchange of two operations
15:                    end if
16:                    mks_new, tds_new, flt_new ← calculateValues(s_new)
17:                    if (mks_new < mks_current) AND (tds_new < tds_current) AND (flt_new < flt_current) then
18:                        s_current ← s_new
19:                    end if
20:                end for
21:            else
22:                s_new ← regularPerturbation(s_current)
23:                mks_new, tds_new, flt_new ← calculateValues(s_new)
24:            end if
25:            if (mks_new ≠ mks_current) AND (tds_new ≠ tds_current) AND (flt_new ≠ flt_current) then
26:                verifyDominanceCMOTA(counter, T_k, s_new, s_current) ▹ See Algorithm 7
27:            end if
28:        end for
29:        if verifyCaught = TRUE then
30:            if caught(s_current, counterTrapped) = TRUE then ▹ See Algorithm 3
31:                countCaught ← countCaught + 1
32:                if countCaught = MAXSTAGNANT then
33:                    verifyCaught ← FALSE
34:                end if
35:            end if
36:        end if
37:        L_k ← β × L_k
38:        T_k ← α × T_k
39:        counter ← counter + 1
40:    end while
41:    return s_current
42: end procedure
In Algorithm 7, the dominance of the two solutions is verified to determine which continues with the search. It has the same three cases used in CMOSA (Algorithm 5). The main differences are the following:
  • At the beginning, while the temperature counter (counter) is less than the value of bound (line 4), the threshold T takes the value of T_k (line 5), which is very large; this implies that at high temperatures the new solution (s_new) is often accepted to continue the search. That is, during the processing of 95% of the temperatures (parameter limit = 0.95; the number of temperatures is obtained with Equation (19) during the tuning process), the parameter γ used to obtain the threshold T is equal to 1, which means that T keeps the value of T_k. For the remaining five percent of the temperatures, γ takes the value γ_reduced (0.978). This parameter is tuned experimentally (line 12), and it is established to make the acceptance criterion more restrictive at the end of the process.
  • CMOTA includes a verification process for accepting bad solutions slightly different from that of CMOSA. To determine whether the search continues using a dominated solution, CMOTA does not use the Boltzmann criterion to accept it as the current solution. Instead, CMOTA uses a threshold defined by the T parameter value (line 19), which is updated in line 29. In other words, it is no longer necessary to calculate the decrements of the objective functions. This modification makes CMOTA much more straightforward than CMOSA or any other AMOSA-based algorithm. Moreover, because the parameter γ is usually very close to one, it is unnecessary to calculate probabilities for the Boltzmann distribution for bad solutions.
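The threshold schedule described above can be sketched as follows; this is an illustrative C fragment under our own naming, not the published implementation.

```c
/* CMOTA-style threshold update (illustrative): while the temperature
 * counter is below the bound, the threshold simply follows the current
 * temperature T_k (gamma = 1); afterwards, it decays by gamma_reduced,
 * making the acceptance criterion more restrictive. */
double cmota_threshold(double t_k, double t_prev, int counter,
                       int bound, double gamma_reduced) {
    if (counter < bound)
        return t_k;                /* threshold tracks the temperature */
    return t_prev * gamma_reduced; /* restrictive final phase */
}
```

With bound = 0.95 × NumberOfTemperatures and gamma_reduced = 0.978, the threshold tracks the cooling schedule for 95% of the run and then shrinks geometrically.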
Algorithm 7 Verify dominance CMOTA
1: procedure verifyDominanceCMOTA(counter, T_k, s_new, s_current)
2:     γ ← 1, γ_reduced ← 0.978, setT ← 1, bound ← NumberOfTemperatures × limit
3:     newDominateCurrent ← FALSE, currentDominateNew ← FALSE
4:     if counter < bound then
5:         T ← T_k
6:     end if
7:     if (counter = bound) AND (setT = 1) then
8:         setT ← 0
9:         T ← T_k
10:    end if
11:    if setT = 0 then
12:        γ ← γ_reduced
13:    end if
14:    if s_new dominates s_current then
15:        s_current ← s_new
16:        newDominateCurrent ← TRUE
17:    end if
18:    if s_current dominates s_new then
19:        if random(0, 1) < T then
20:            F ← F ∪ {s_current}
21:            s_current ← s_new
22:        end if
23:        currentDominateNew ← TRUE
24:    end if
25:    if (newDominateCurrent = FALSE) AND (currentDominateNew = FALSE) then
26:        F ← F ∪ {s_current}
27:        s_current ← s_new
28:    end if
29:    T ← T × γ
30: end procedure

6. Main Methodology for CMOSA and CMOTA

Figure 1 shows the main module for each of the two proposed algorithms CMOSA and CMOTA, which may be considered the main processes in any high-level language.
In this main module, the instance to be solved is read, and then the tuning process is performed. The due date, an essential element for calculating the tardiness, is computed. The set of initial solutions (S) is generated randomly as follows: first, the collection of feasible operations is determined; then, one of them is randomly selected and added to the solution, until all the job operations have been added.
Once the set of initial solutions has been generated, an algorithm (CMOSA or CMOTA) is applied to improve each initial solution, and the resulting solution is stored in a set of final solutions (F). To obtain the set of non-dominated solutions, also called the zero front (f_0), from the set of final solutions, we apply the fast non-dominated sorting algorithm [29]. To assess the quality of the non-dominated set obtained, the MID, Spacing, HV, Spread, IGD, and Coverage metrics are calculated. The calculation of Spread and IGD requires the true Pareto front (PF_true); however, PF_true has not been published for all the instances used in this paper. For this reason, the calculation was made using an approximate Pareto front (PF_approx), which we obtained from the union of the fronts calculated in previous executions of the two algorithms presented here (CMOSA and CMOTA).
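As an illustration, the zero front f_0 can be extracted from the set of final solutions with a simple O(|F|²) sweep. This is a sketch with names of our own, not the full fast non-dominated sorting of [29], which additionally ranks the dominated fronts.

```c
#include <stdbool.h>
#include <stddef.h>

/* Three minimized objectives per solution. */
typedef struct { double mks, tds, flt; } Sol;

/* Pareto dominance: a is no worse everywhere and strictly better somewhere. */
static bool dom(Sol a, Sol b) {
    return a.mks <= b.mks && a.tds <= b.tds && a.flt <= b.flt &&
           (a.mks < b.mks || a.tds < b.tds || a.flt < b.flt);
}

/* Copy into f0 every solution of f that no other solution dominates;
 * returns the size of the zero front. */
size_t zero_front(const Sol *f, size_t n, Sol *f0) {
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        bool dominated = false;
        for (size_t j = 0; j < n && !dominated; j++)
            if (j != i && dom(f[j], f[i])) dominated = true;
        if (!dominated) f0[count++] = f[i];
    }
    return count;
}
```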

6.1. Computational Experimentation

A set of 70 instances of different authors was used to evaluate the performance of the algorithms, including: (1) FT06, FT10, and FT20 proposed by [40]; (2) ORB01 to ORB10 proposed by [41]; (3) LA01 to LA40 proposed by [42]; (4) ABZ5, ABZ6, ABZ7, ABZ8, and ABZ9 proposed by [43]; (5) YN1, YN2, YN3, and YN4 proposed by [44], and (6) TA01, TA11, TA21, TA31, TA41, TA51, TA61, and TA71 proposed by [30].
As already explained, to perform the analytical tuning, some previous executions of the algorithm are necessary. The parameters used for those previous executions are shown in Table 2, and the parameters used in the final experimentation for each instance are shown in Table 3.
The execution of the algorithms was carried out on one of the terminals of the Ehecatl cluster at TecNM/IT Ciudad Madero, with the following characteristics: Intel® Xeon® processor at 2.30 GHz, 64 GB of memory (4 × 16 GB DDR4-2133), and CentOS Linux; the C language was used for the implementation. We developed CMOSA (https://github.com/DrJuanFraustoSolis/CMOSA-JSSP.git) and CMOTA (https://github.com/DrJuanFraustoSolis/CMOTA-JSSP.git), and we tested the software using the three data sets reported in this paper, taken from the literature.
In the first experiment, the algorithms CMOSA and CMOTA were compared with the AMOSA algorithm using the 70 described instances and six performance metrics. In the second experiment, we compared CMOSA and CMOTA with the IMOEA/D algorithm on the 58 instances used by Zhao [14], using the same MID metric of that publication. The third experiment was based on the 15 instances reported in [8], where the results of the following MOJSSP algorithms are published: SPEA, CMOEA, MOPSO, and MOMARLA. In that publication, the authors used two objective functions and two metrics (HV and Coverage); they determined that the best algorithm is MOMARLA, followed by MOPSO. We executed CMOSA and CMOTA on the instances of this data set and compared our results, using the HV metric, with those published in [8]. However, a comparison using the Coverage metric was impossible because the Pareto fronts of those methods have not been reported [8]. In our case, we show in Appendix A the fronts of non-dominated solutions obtained for the 70 instances.

6.2. Results

The average values over 30 runs of the six metrics obtained by CMOSA and CMOTA for the complete data set of 70 instances are shown in Table 4 and Table 5. We observe that CMOSA obtained the best values for the MID and IGD metrics, whereas CMOTA obtained the best results for Spacing and Spread. For the HV metric, both algorithms achieved the same result (0.42). We also observe in Table 5 that CMOSA obtained the best Coverage result.
A two-tailed Wilcoxon test was performed with a significance level of 5% (last column in Table 4); it shows that there are no significant differences between CMOSA and CMOTA except in the MID and IGD metrics.
Table 6 shows the comparison of CMOSA and AMOSA. We observed that CMOSA obtains the best performance in all the metrics evaluated. In addition, the Wilcoxon test indicates that there are significant differences in most of them; thus, CMOSA overtakes AMOSA. We compared CMOTA and AMOSA in Table 7. In this case, CMOTA also obtains the best average results in all the metrics; however, according to the Wilcoxon test, there are significant differences in only two metrics.
We compare in Table 8 the CMOSA and CMOTA algorithms with the IMOEA/D algorithm using the 58 common instances published in [14], where the MID metric was measured. The table shows the average MID value for the non-dominated solution sets of CMOSA and CMOTA. The results show that CMOSA and CMOTA obtain better performance than IMOEA/D: both algorithms achieved smaller MID values, which indicates that their Pareto fronts are closer to the reference point (0,0,0). The Wilcoxon test confirms that CMOSA and CMOTA surpass IMOEA/D.
The results of CMOSA and CMOTA were compared with the SPEA, CMOEA, MOPSO, and MOMARLA algorithms [8]. In the last reference, only two objective functions were reported, the makespan and total tardiness. The experimentation was carried out with 15 instances and the average HV values were calculated to perform the analysis of the results, which are shown in Table 9. We notice that MOMARLA surpassed SPEA, CMOEA, and MOPSO. We can observe that CMOSA obtained a better performance than MOMARLA and the other algorithms. Comparing CMOTA and MOMARLA, we notice that both algorithms obtained the same HV average results.

6.3. CMOSA-CMOTA Complexity and Run Time Results

In this section, we present the complexity of the algorithms analyzed in this paper. The algorithms' complexity is presented in Table 10; it was taken directly when explicitly published or determined from the algorithms' pseudocodes. In this table, M is the number of objectives, Γ is the population size, T is the neighborhood size, n is the number of iterations (temperatures for AMOSA, CMOSA, and CMOTA), and p is the problem size. The latter is equal to j·m, where j and m are the number of jobs and machines, respectively. Because the algorithms with the best quality metrics are CMOSA, CMOTA, MOMARLA, and MOPSO, their complexities are compared in this section.
It is well known that the complexity of classical SA is O(p² log p) [45]. However, we notice from Table 10 that CMOSA and CMOTA have a different complexity even though they are based on SA. This is because these new algorithms apply a chaotic perturbation and an additional local search (see Algorithms 2 and 6, lines 10-20).
The temporal functions of MOMARLA, CMOSA, and CMOTA belong to O(Mnp). For MOMARLA, n is the number of iterations, a variable set at the beginning of the algorithm. For CMOSA and CMOTA, n is the number of temperatures, also set at the beginning; in any case, the difference is only a constant.
We note that AMOSA and MOPSO have similar complexity class expressions, O(nΓ²) and O(MΓ²), respectively. However, MOPSO overtakes AMOSA because M is in general lower than n. We observe that CMOSA, CMOTA, and MOMARLA belong to the O(Mnp) complexity class, while MOPSO belongs to O(MΓ²) [46]. Thus, the ratio between them is np/Γ², which in general is lower than one; hence, CMOSA, CMOTA, and MOMARLA have a lower complexity than MOPSO. Moreover, CMOSA, CMOTA, and MOMARLA achieve better HV metric quality, as shown in Table 9.
In the next paragraph, we present a comparative analysis of the execution times of the algorithms implemented in this paper.
Table 11 shows the execution time, expressed in seconds, of the three algorithms implemented in this paper (CMOSA, CMOTA, and AMOSA) for the three data sets (70, 58, and 15 instances). We emphasize that the AMOSA algorithm was the base used to design the other two; in fact, all of them have the same structure, except that CMOSA and CMOTA apply chaotic perturbations when they detect a possible stagnation. Thus, all of them have similar worst-case complexity measures. Table 11 shows the percentage of time saved by these two algorithms with respect to AMOSA. For these data sets, CMOSA saved 2.1, 19.87, and 42.48 percent of the AMOSA run time, while CMOTA saved 55, 68.89, and 46.73 percent. Thus, both of the proposed algorithms, CMOSA and CMOTA, are significantly more efficient than AMOSA. Unfortunately, we do not have the tools to compare the execution times of these algorithms against those of the other algorithms in Table 1. Nevertheless, we made the quality comparisons using the previously published metrics.

7. Conclusions

This paper presents two multi-objective algorithms for JSSP, named CMOSA and CMOTA, evaluated with three objectives and six metrics. The objective functions of these algorithms are makespan, total tardiness, and total flow time. Regarding the comparison of CMOSA and CMOTA with AMOSA, we observe that both algorithms obtained a well-distributed Pareto front, close to the origin and close to the approximate Pareto front, as indicated by the Spacing, MID, and IGD metrics, respectively. Thus, using these metrics, we found that CMOSA and CMOTA surpass the AMOSA algorithm. Regarding the volume covered by the front, calculated with the HV metric, both CMOSA and CMOTA have the same performance; however, CMOSA has a higher convergence than CMOTA. In addition, the proposed algorithms surpass IMOEA/D when the MID metric is used. Moreover, we used HV to compare the proposed algorithms with SPEA, CMOEA, MOPSO, and MOMARLA; we found that CMOSA outperforms these algorithms, followed by CMOTA, MOMARLA, and MOPSO.
We observe that CMOSA and CMOTA have complexity similar to that of the best algorithms in the literature. In addition, we showed that CMOSA and CMOTA surpass AMOSA when compared by execution time on three data sets; CMOTA is, on average, 50 percent faster than AMOSA and CMOSA. Finally, we conclude that CMOSA and CMOTA have temporal complexity similar to that of the best algorithms in the literature, while the quality metrics show that the proposed algorithms outperform them.

Author Contributions

Conceptualization: J.F.-S., L.H.-R., G.C.-V.; Methodology: J.F.-S., L.H.-R., G.C.-V., J.J.G.-B.; Investigation: J.F.-S., L.H.-R., G.C.-V., J.J.G.-B.; Software: J.F.-S., L.H.-R., G.C.-V., J.J.G.-B.; Formal Analysis: J.F.-S., G.C.-V.; Writing original draft: J.F.-S., L.H.-R., G.C.-V.; Writing review and editing: J.F.-S., J.J.G.-B., J.P.S.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to express their gratitude to CONACYT and TecNM/IT Ciudad Madero. In addition, the authors acknowledge the support from Laboratorio Nacional de Tecnologías de la Información (LaNTI) for the access to the cluster.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Non-Dominated Front Obtained

The non-dominated solutions obtained by CMOSA algorithm for the 70 instances used are shown in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6, and the non-dominated solutions obtained by CMOTA algorithm for the same instances are shown in Table A7, Table A8, Table A9, Table A10, Table A11 and Table A12. In these tables, MKS is the makespan, TDS is the total tardiness and FLT is the total flow time. For each instance, the best value for each objective function is highlighted with an asterisk (*) and in bold type.
Table A1. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [40].
FT06FT10FT20
MKSTDSFLTMKSTDSFLTMKSTDSFLT
155 *30.0305993 *1768.592341224 *8960.016614
25538.03019941609.0912112278809.016375
35637.030410041495.0906212298793.016359
45629.030810061083.0858412358774.016340
55723.530510361053.08406 *12438455.5 *16119 *
65727.029710371009.0 *8437
75726.0298
8589.5280
96011.0279 *
10628.5285
11698.0 *291
Table A2. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [41].
ORB1ORB2ORB3ORB4ORB5
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11142 *1539.09245925 *767.583391104 *1874.094481063 *1186.09175966 *1192.58279
211431517.09223927781.5828511111548.0939210731108.592709711180.58296
311441522.09135931722.5816011121816.0931810781059.59128975859.57648
411501381.59219951542.5805611231462.093061107917.59234978752.58016
511611355.5 *9469958331.0 *774211271806.092881111978.09199980758.58011
611721508.09214958339.07730 *11621579.092001134944.59221984708.57961
711741521.09134 * 11641562.091831140795.59111984706.57970
8 11801492.589841156843.59083998822.07784
9 11871475.5 *8967 *1200733.5 *90491001746.57869
10 1230919.089691001834.07620 *
11 1232983.588131013689.0 *7765
12 1277995.58735 *1017795.07713
13 1032798.07659
14 1049771.07678
ORB6ORB7ORB8ORB9ORB10
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11097 *1318.09573423 *207.53663963 *1804.08439987 *1193.58912991 *835.08482
211001199.59505424167.037319681412.582049881362.58860993843.08465
311001267.59434431161.0 *36439701387.082159931220.088981020798.58785
411051225.09434439295.036209881514.580999961072.588441029742.58691
511051227.09412449207.536259971587.0807810061002.085381043608.58659
611101255.09409453230.5361610011239.0791210191017.585231044493.5 *8522
711131220.59452455204.5363610441120.0 *7617 *10351100.584931072774.58455 *
811141078.59287459213.03577 10391043.58430
911411153.09109 *461216.03509 1048887.0 *8348 *
1011711097.09194461203.03545
1111911018.59145461186.53572
121233988.0 *9225466202.53547
13 466171.03561
14 470184.53504 *
Table A3. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [42].
LA01LA02LA03LA04LA05
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1666 *1194.05436655 *1207.05123615 *1492.55000590 *1252.04900593 *1159.54451
26661237.553626561161.050776221400.548965951235.049485931088.04455
36671382.553576651222.049946261484.548815981250.048575941053.04399
46681068.553286651203.050506271467.048895981226.549106101099.54386
56681074.053096711042.049046281343.548665991167.049156151129.54351 *
66701269.553006731094.548796301357.548036031154.54895631999.5 *4371
76721152.55260681938.547996301339.548506051089.047376481036.04359
86881145.55247695927.548646331226.547506141034.047826591032.04355
97001120.55297695930.547966381183.046496151047.54756
107061081.55241696910.548376411178.547136181042.54705
117061179.05225714997.547766461173.047186221038.54705
127131065.55203715936.547206551088.544826291006.04710
137181025.55235736925.048126621062.045956291020.54695
147271056.55138741993.04716 *6621081.54591631982.54697
157341046.05184771909.5 *47866681015.04522637981.04576
167431089.05101 669981.54523638961.54667
17751951.0 *5115 683979.54516640962.04566
188251098.05099 * 6881087.54481643930.04525 *
19 6981055.04504648927.04531
20 741955.54382650895.54558
21 744891.04375655908.04537
22 744914.04372663888.5 *4551
23 750896.54323 *663906.04543
24 757867.0 *4325
LA06LA07LA08LA09LA10
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1926 *4185.510,142890 *4006.59554863 *3717.59455951 *3925.010,297958 *4439.510,441
29274183.010,1718904044.094968633792.594249513916.510,3119694476.510,437
39294062.010,0508943974.595228653723.593879543908.0 *10,2809714313.010,343
49314122.010,0418963646.592648703685.593499743944.510,195 *9764298.010,328
59383911.098709043684.092488713649.59284 9824121.010,151
69403827.0 *9786 *9063615.092198763602.59340 10524083.0 *10,113 *
7 9103652.092168853598.59309
8 9673595.0 *9199 *8953596.09266
9 8963410.5 *9045 *
LA11LA12LA13LA14LA15
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11222 *9157.517,1841039 *7218.014,2291150 *8436.516,2081292 *10,017.018,0361207 *9447.517,581
212258947.516,85310417203.01416711538333.516,10512999986.018,00512089249.517,383
312418879.516,78510437198.01419611548310.516,07913289992.517,99012139175.017,314
412428862.516,76810497164.01416211558247.515,95313529810.5 *17,80812209149.017,284
512438860.516,76610507126.01412411618175.015,95413529867.017,797 *12299014.017,149
612568811.516,79811347114.0 *14,112 *11628210.515,916 12329013.017,148
712578725.516,712 11828057.015,836 12348991.017,126
812588765.516,671 11838013.015,792 12518915.517,062
912658650.5 *16,637 * 11847994.015,773 12718947.517,040
10 11857989.015,768 12738703.516,871
11 11897978.0 *15,757 * 12818651.516,819
12 12838638.516,802
13 12898603.516,767
14 12978601.5 *16,765 *
LA16LA17LA18LA19LA20
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1968 *983.58777796 *799.07502865 *488.07765884 *538.07950934 *665.58354
2982904.08754796784.07509866468.57743889288.07945939599.58409
3988898.58608810855.07492868439.57853891495.07821948631.58393
4992882.08752811783.07555873419.57687900406.07916957542.08423
5994816.58669813702.07458878396.57755905279.07846957556.08302
61000873.08570813745.07450882404.57732935327.07730964658.08232
71003900.08565816693.07458883429.57648953335.57726966403.08032
81003908.08545820630.07395893411.07671953259.5 *7806967408.08028
91003942.08474823670.57334923394.57802979304.57673 *971408.08001
101008493.08205824633.57240927368.57885 972419.07975
111016553.58063831623.57321928351.57882 1009390.58094
121040459.58232833625.57320939353.07691 1067422.07927
131050352.07997835717.57203939300.57860 1084424.07908 *
141066345.58285836596.57291940345.07827 1100383.58292
151071341.58068836611.57284945332.57845 1115382.58065
161073401.07980840597.07267946305.07629 1142335.57915
171095326.5 *7908 *840612.07260952267.0 *7778 1142334.07998
18 842612.07194978476.07614 1148262.5 *8205
19 849522.07208982455.07519 * 1168302.58204
20 849521.57232984439.07626
21 864531.07135998361.57603
22 864530.57159
23 864521.57169
24 899535.07114
25 914509.07034
26 927470.0 *7098
27 931475.07000 *
LA21LA22LA23LA24LA25
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11124 *3229.515,0301013 *2968.513,7741077 *2292.014,2221000 *2145.513,230 *1071 *3161.014,387
211243233.515,00210182916.513,72210782253.514,19810082137.513,47410723060.014,275
311273180.514,88310202906.513,71210782249.514,23810082120.513,60610893002.014,096
411283137.514,86810342738.513,55210802173.514,15210772010.513,45811002756.513,951
511293015.514,71810372660.013,63810912231.514,14910791981.513,39011042764.513,940
611372998.514,40010382774.513,54810952243.514,14710881976.5 *13,38511182721.013,962
711412892.514,63610392648.013,61110972071.014,011 11182768.013,938
811442821.514,56510452811.013,52811021939.0 *13,867 * 11212802.513,829
911462939.014,34610472696.513,510 11232618.513,658
1011502543.014,34410502614.513,445 11312584.513,845
1111502639.514,31610682565.513,396 11342536.513,577
1211572557.514,24710762544.513,375 11342529.013,770
1311582545.514,22210822462.513,253 11542517.513,535
1411642511.514,18810872392.513,169 11592457.013,654
1511792393.514,20410992332.5 *13,109 * 11602451.513,666
1611822331.514,165 11732530.013,470
1711822355.514,153 11752445.013,385
1811832454.514,131 11872435.013,481
1912272328.014,238 11892315.0 *13,255 *
2012472225.0 *14,161
2112582561.513,967
2212722527.513,963
2312852465.513,871 *
2412902305.014,103
LA26LA27LA28LA29LA30
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11281 *6921.022,5761332 *6555.022,8031318 *7579.023,5471293 *7971.522,8021434 *9177.025,172
212826811.022,46613346495.022,74313217403.023,42612947963.522,78614378132.024,056
313046708.522,43413406399.022,64713296603.022,62613177799.522,69314458064.023,991
413236643.522,41613466280.022,52813626683.522,57813197796.522,69014487996.023,923 *
513256629.522,40213586228.0 *22,476 *13676552.022,57513277770.522,66415407980.0 *24,000
613286741.522,254 13786469.022,45413337738.522,632
713296560.522,333 13856465.022,38913347711.522,605
813386616.522,129 13936480.522,36013397507.522,314
913406510.522,276 14136443.022,32013407446.522,253
1013776307.0 *21,940 * 14166439.022,31613687411.522,218
11 14546429.022,29813757398.522,289
12 14766239.022,01313767464.522,182
13 14776141.0 *21,915 *13767374.522,268
14 13797018.521,912
15 13897011.5 *21,905 *
LA31LA32LA33LA34LA35
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11784 *20,830.543,6171850 *20,861.545,7151719 *20,933.543,3871743 *22,605.545,6171898 *24,225.547,233
2179420,718.543,505186720,860.545,714172118,798.541,252174721,475.544,487189923,434.546,652
3179620,390.543,177187120,686.545,540172318,528.540,982175521,271.544,283190022,784.546,012
4179720,066.542,842188120,563.545,417172518,137.540,591175621,211.544,223190122,724.545,952
5179820,009.542,785188920,059.544,913173818,109.5 *40,563 *175921,041.544,037190322,684.545,912
6180019,919.5 *42,695 *190020,049.5 *44,903 * 177120,916.043,916192022,481.545,709
7 177420,787.043,787194722,677.045,695
8 178120,736.043,736195022,442.545,670
9 179120,693.543,705195322,454.045,665
10 180120,505.543,517195822,327.545,555
11 183720,476.543,488201822,311.5 *45,539 *
12 183920,356.543,368
13 184020,305.543,317
14 184320,298.543,310
15 185020,072.543,084
16 190619,880.5 *42,892 *
LA36LA37LA38LA39LA40
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11453 *3131.020,5751569 *3065.021,4441400 *1586.018,1711444 *2371.019,4471436 *2617.519,260
214713030.520,30915713077.021,43614191578.5 *18,20014522056.019,21514432017.018,689
314742834.520,12515743043.021,40214212057.518,11914981770.518,66214501806.018,391
414752936.520,08515743025.021,40414392092.518,06714991731.518,60714581719.018,303
514762847.520,09415803009.021,30114681753.518,10315041473.518,40414711433.5 *18,431
614762949.520,05415843002.021,29414731736.518,08616211422.518,57914951549.518,287 *
714872633.519,88915902331.520,75514961744.518,044 *18171902.0 *18,191 *
814982474.519,69415932289.520,748
915052492.519,67516082247.520,585
1015212604.019,67116142384.020,153
1115212379.019,84016142414.020,101
1215292459.519,67916182374.020,143
1315302420.019,66816212418.020,077 *
1415342335.519,81216492234.520,600
1515342472.519,65016502237.520,587
1615482278.519,75516502241.520,557
1715632015.5 *19,23717002222.520,453
1815732532.519,231 *17002205.020,517
19 17072187.520,418
20 17812012.020,554
21 17811964.520,634
22 17901835.5 *20,309
Table A4. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [43].
ABZ5ABZ6ABZ7ABZ8ABZ9
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11250 *145.011,006967 *324.08453746 *2420.013,274763 *2317.013,696805 *3296.514,426
21250134.0 *11,025974256.585247532403.013,2577632332.013,6888073127.014,287
31252139.010,998974251.585507932305.0 *13,137 *7732336.013,6758082941.014,094
41289141.010,984979204.08464 7732326.013,6888222846.013,820
51289142.010,946 *997258.58357 7752294.013,6338332770.013,840
6 999202.08553 7792236.5 *13,591 *8422733.513,888
7 1001172.08484 8432740.513,845
8 1009164.08589 8452727.513,832
9 1016164.58532 8462706.513,811
10 1018134.08692 8472696.513,801
11 1019126.08275 * 8852806.013,800
12 107435.58583 8862737.013,762
13 107736.58525 8892726.013,720
14 107749.58459 8962708.513,703
15 108025.58550 8972684.5 *13,679 *
16 108229.58488
17 108240.58472
18 10851.5 *8423
Table A5. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [44].
YN01YN02YN03YN04
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11103 *2485.019,8191133 *2178.019,4291083 *2025.519,3461210 *2864.520,633
211052442.019,77611372205.019,42410842015.519,33612212814.0 *20,552
311052465.519,75311402050.019,29910842012.519,33712972915.520,525
411062418.519,70611402067.019,28610892003.519,32813002910.520,520 *
511062395.019,72911482059.019,27810901987.5 *19,308
611081901.019,12911502023.0 *19,276 *11382179.519,219
711111859.019,068 12032157.518,751 *
811171867.519,013 *
911261756.5 *19,265
1011311772.519,247
Table A6. Non-dominated front obtained by CMOSA for the JSSP instances proposed by [30].
TA01TA11TA21TA31
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11412 *1821.518,7161603 *6409.527,9032048 *7261.537,0392083 *20,557.054,457
214121641.518,74916076365.527,85920506184.536,322209120,504.054,404
314141809.518,70416196051.527,72220516184.536,290209620,448.054,348
414331753.518,648 *17506387.027,63520746023.536,129209720,112.054,012
514431733.518,73917536307.027,555 *20786017.536,123209920,099.053,999
614481625.0 *18,76517666293.027,57220916031.036,050210619,879.053,779
7 18596088.0 *27,67922745393.0 *35,462 *210919,860.053,760
8 211919,857.053,757
9 212119,802.053,702
10 212519,782.053,682
11 213218,670.552,157
12 213918,657.5 *52,144 *
TA41TA51TA61TA71
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
12530 *18,610.565,5293121 *77,760.0134,6373437 *71,924.0148,3706050 *368,519.5519,856
2255318,589.565,508312474,125.0131,002344571,162.0147,6086063368,491.5519,828
3273118,298.065,157312574,113.0130,990356170,685.0147,1316097367,933.5519,270
4273318,257.065,116312774,028.0130,905356770,550.0 *146,996 *6098367,927.5519,264
5273618,228.065,087313472,636.0129,513 6129366,149.5517,486
6274318,197.065,056318672,624.0129,501 6165365,118.5516,455
7283218,128.565,047318871,884.0128,761 6166365,116.5516,453
8294917,853.5 *64,772 *318971,849.0128,726 6168365,090.5516,427
9 320270,643.0127,520 6215361,891.5 *513,228 *
10 320470,623.0 *127,500 *
Table A7. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [40].
FT06FT10FT20
MKSTDSFLTMKSTDSFLTMKSTDSFLT
155 *30.03051021 *1759.594071234 *9571.017,132
25538.030110291721.0912212408914.516,578
35629.030810631711.0935812438934.016,526
45723.530510651697.0928012498898.516,562
55726.029810671562.5922612588959.516,480
65727.029710881650.58859 *12598930.516,451
7589.528010891614.5903112708831.516,352
8608.5 *276 *10911619.5901812778782.516,303
9 11091468.0904613278768.016,365
10 11251459.0889013518768.516,289 *
11 11461361.0 *900313598738.0 *16,335
Table A8. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [41].
ORB1ORB2ORB3ORB4ORB5
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11180 *1853.09764964 *985.584211124 *2307.510,1571094 *1727.59897945 *1006.08032
211901714.59619983971.5867211341901.0957911041720.510,062980975.07992
311921721.59585985913.5860112081842.5977011091695.510,117994747.0 *7966
412371787.59440986975.5859312121795.5972111111600.59865999751.07950
512381714.596169871009.0834712171829.5969811181507.098181053979.57944 *
612491799.59423988980.0830312181791.5971711301626.09704
712531771.59428991857.5854512191875.0953111321588.59768
812551582.09459996918.0842712401516.5 *9349 *11331595.59760
912611581.093871011842.08630 11381548.59713
1013361415.593031015854.58526 11431487.09798
1113391372.5 *9260 *1020625.58251 11531626.09674
12 1047625.0 *8288 11551472.59645
13 1081753.08059 * 11651452.59625
14 1209721.58224 11651440.09645
15 11661428.09633
16 11731424.09621
17 11821454.09404 *
18 11831310.09506
19 11891279.09481
20 12021303.09252
21 12661249.59639
22 12841198.5 *9588
ORB6ORB7ORB8ORB9ORB10
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11090 *1382.59489433 *226.038131016 *1919.584651009 *1646.594021055 *1366.59211
210911284.59341437225.0377010251635.58181 *10131595.093311065790.58899
311341078.09177439271.5370710471617.0845710161534.092511108843.08834
411531059.09182453220.0374211481575.0831910271644.091871114686.5 *8810
51168969.09030 *465236.0369711501564.0831210361669.091301115687.58795
61204945.09072471173.5 *3620 *11761565.0829410431479.0920612461080.08747 *
71221907.0 *9034 11841502.0 *830110631360.08975
8 10641355.0 *8966
9 10661378.08942
10 10731358.58956
11 10831426.08885 *
12 10921417.08914
Table A9. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [42].
LA01LA02LA03LA04LA05
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1666 *1416.05550663 *1327.55145617 *1807.55353598 *1396.05096593 *1241.54601
26661367.055616771284.050536241516.048905981414.050945931240.54604
36661444.05500685925.0 *4805 *6301444.049826021181.048425931290.04516
46661325.55577 6301511.549776101049.047305961277.04583
56671465.55488 6331383.548166441083.54726 *5971242.04537
66681269.05403 6371345.548206601014.0 *47436001233.54546
76721245.55468 6501147.5 *46736601027.547376001273.04499
86741246.05396 6731164.04632 * 6001190.54553
96761313.05348 6031162.04571
107021229.55438 6071154.54518
117061099.55177 6071185.04497
127261072.55210 6081176.54502
137641001.0 *5176 * 6101133.54502
14 6131093.0 *4502
15 6141130.54494
16 6221164.04459
17 6481209.04424
18 6501198.04413 *
LA06LA07LA08LA09LA10
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1926 *4193.510,151890 *4398.010,014863 *3719.59421951 *4212.510,607958 *4562.010,536
29274150.510,1088934494.099088703644.593469544387.010,6019584558.510,587
39434104.010,0288944092.596518963401.5 *9139 *9604284.510,5869604507.010,481
49644061.599789043890.5 *9452 * 9664077.0 *10,411 *9654277.010,251
59924034.5 *9951 * 9884271.0 *10,245 *
LA11LA12LA13LA14LA15
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11222 *9579.517,6061039 *7550.014,5641150 *8618.016,3971292 *9927.517,9401207 *9792.517,960
212349317.517,34410457514.014,52811508641.516,37712929966.017,84712099679.517,847
312389222.5 *17,249 *10507498.014,51211528608.016,38712989919.517,85712179644.517,812
4 10817318.0 *14,332 *11538459.516,16013219697.0 *17,716 *12179692.517,769
5 11827884.015,577 12189628.517,705
6 11897811.0 *15,504 * 12199312.5 *17,336 *
LA16LA17LA18LA19LA20
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
1982 *909.58738825 *1045.07819872 *609.57920901 *569.08258938 *749.08616
21008771.085678301016.07782874560.57836904398.08071967697.08549
31065613.585038481001.07698905555.08017916375.08146967695.08561
41082603.08227850969.07569908555.57880916422.07972969674.08498
51091490.5 *8311854983.07557922549.08056921342.07903972645.58578
61107524.08130 *856883.57656930549.07866929276.0 *7766972647.58470
7 865845.57612933472.07797 *931325.07765978558.08318
8 873758.07517933468.5 *7824953488.07759 *1010531.0 *8291
9 883764.57500 1025662.58277
10 894752.07539 1041612.08069 *
11 911758.07448
12 918723.07415
13 927775.07336 *
14 981760.07384
15 995770.07373
16 1009730.07368
17 1176720.0 *7605
LA21LA22LA23LA24LA25
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11154 *3406.515,3291041 *3315.014,2651115 *2616.514,4581047 *2511.014,0811073 *3252.014,388
211723329.515,08410503118.014,06811182599.514,44110522477.014,04710873217.014,315
311743035.514,83510533035.014,00011582459.014,47610542870.514,00110883143.014,241
411773059.514,60710702994.013,97511602457.014,43610602613.513,86011102638.013,761
512023044.514,76310792754.013,62511602722.514,38910702593.513,91811472633.013,793
612043024.514,74310812699.0 *13,562 *11632437.014,41610732598.513,87411482682.513,742 *
712203032.514,609 11722761.514,37010792547.513,85911482623.5 *13,764
812382881.514,783 11782408.0 *14,38410802473.014,063
912392877.514,666 12102595.514,37310802546.513,858 *
1012532832.514,696 12162562.514,340 *10872368.0 *13,911
1113472973.514,634
1213492883.014,507
1313562943.014,494
1413932936.014,419
1513932929.514,489
1614032766.5 *14,412 *
LA26LA27LA28LA29LA30
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11300 *7356.523,1291374 *8083.024,3311325 *7440.023,4631328 *8518.023,2911455 *9085.025,105
213367171.522,94413777946.024,19413267315.023,33813378513.023,28614579071.025,091
313377077.522,85013787660.023,87513407233.023,25613458501.023,27414659211.525,064
413437047.522,82013807641.023,85613547185.023,17613538534.023,27314779196.525,049
513446971.522,74413947645.523,85413577096.023,08713588464.023,20314798374.524,204
613536947.5 *22,72013987494.023,74213607056.023,04713608091.522,98514818348.524,178
713967083.022,66614017438.023,68613756997.022,88513638064.522,95815198280.524,220
814547072.522,660 *14027374.023,62213846906.022,79413688062.522,95615438227.524,167
9 14057408.523,58613966674.522,67213898208.022,93915848391.524,097
10 14127327.023,57514126568.522,56614037990.522,83615988090.523,796
11 14467265.023,51314176518.522,50914327971.522,86516577980.5 *23,686 *
12 14547367.023,50014366491.5 *22,482 *14487972.022,776
13 14697264.523,511 14537805.022,609
14 14767228.023,476 14757733.522,627
15 14837185.023,433 15257664.5 *22,558 *
16 15027226.523,352
17 16027109.5 *23,312 *
LA31LA32LA33LA34LA35
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11784 *21,944.544,7311850 *22,413.047,1111719 *22,284.544,7381768 *23,263.546,2751899 *24,702.547,930
2180021,424.544,211185022,411.547,265172021,944.544,398177422,903.545,915190824,515.547,743
3180721,363.544,150185722,085.546,939172221,802.544,256177522,881.545,893190923,489.546,717
4184220,988.543,775185922,074.546,928172321,777.544,190177622,657.545,669191723,481.546,709
5184320,814.5 *43,601 *188121,988.546,842173421,723.544,177179222,656.545,668191923,379.546,607
6 188421,985.546,839174321,447.543,901179622,150.545,162192323,368.5 *46,596
7 189621,958.546,812174621,446.543,900180322,109.545,121202923,393.546,568 *
8 189721,509.546,363175021,134.543,508181321,889.544,901
9 191621,481.546,335175521,040.543,414181721,797.544,809
10 205121,401.546,255177121,024.543,478182021,749.544,761
11 206821,362.546,216177620,995.543,449182321,740.5 *44,752 *
12 208421,294.5 *46,148177720,945.543,399
13 214821,372.546,059 *178320,842.543,296
14 178520,778.543,232
15 178720,722.543,176
16 178920,358.042,706
17 179620,310.042,658
18 180020,044.042,360
19 180119,567.041,883
20 180519,558.0 *41,874 *
LA36LA37LA38LA39LA40
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11467 *3203.020,6491652 *2988.521,5401446 *2646.019,0431474 *2876.020,0771438 *2444.019,398
215033180.020,62616532988.521,53614722601.019,15914942872.020,07315312369.019,333
315153076.020,42016562912.521,46014732060.518,32215132385.519,21615612336.0 *19,300 *
415193024.020,25416913256.021,32314912000.5 *18,262 *15972396.019,175
515962988.520,59716922894.021,493 16032362.019,101
616162948.520,55716963233.021,300 16052254.0 *18,993 *
716222868.520,47717052757.021,254
816322884.520,16317512798.521,208
916782903.520,10617562888.521,064
1017042958.020,03717572850.021,005 *
1117092869.019,94818392670.521,086
1217352654.019,51018832578.5 *21,291
1317382650.0 *19,506 *
Table A10. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [43].
ABZ5ABZ6ABZ7ABZ8ABZ9
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11296 *565.011,621991 *587.58826796 *3124.014,127821 *3504.014,883837 *3263.014,378
21306692.511,581999460.586587972923.513,9068233447.014,8268452996.514,126
31321683.511,5721013300.087538032805.513,8268243428.014,8078482967.514,097
41322523.011,8011021469.58543 *8762684.513,6088253423.014,8028532936.514,066
51333507.012,0161037407.587198902636.5 *13,556 *8352786.0 *14,1118562900.5 *14,030 *
61334407.511,7861037439.08674 8472817.014,086 *
71334403.011,8611045235.58614
81337574.011,6041089197.5 *8812
91338566.011,5341115203.58768
101351533.511,768
111356557.511,750
121383745.011,520
131385759.511,401
141386679.511,336
151387475.011,545
161397468.011,538
171409407.0 *11,374 *
Table A11. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [44].
YN01YN02YN03YN04
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11160 *3154.520,4701155 *3592.021,1121138 *2732.519,9411225 *4078.022,098
211662654.019,80811593545.021,10511542543.019,83912283780.021,449
311882618.019,92911653569.021,08911582457.019,75312313475.021,490
411932617.019,77111663537.021,05712042394.519,43812323460.021,465
511972399.519,91211693491.021,01112232370.519,414 *12333745.021,414
612002220.519,74511883171.520,60612772194.0 *19,46212453530.021,431
712012114.0 *19,570 *12113068.020,216 12473254.521,188
8 12123055.020,203 * 12733236.521,170
9 12803024.0 *20,592 12863233.521,167
10 13253169.0 *20,977 *
Table A12. Non-dominated front obtained by CMOTA for the JSSP instances proposed by [30].
TA01TA11TA21TA31
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
11469 *2284.019,0271649 *7293.028,8722098 *8414.538,5342126 *21,558.055,423
215022201.019,46116557264.028,84321037979.038,146212721,553.055,453
315151792.518,79116727049.028,69621137971.038,138213521,552.055,417
415191783.518,80116737045.028,69221257247.537,366215621,540.055,405
515301713.0 *18,75016776903.528,43121287153.037,398216121,416.055,316
615321725.018,714 *16966383.528,05421376999.037,244217321,109.055,009
7 18096347.5 *28,018 *21396974.037,209217721,052.054,952
8 21486820.537,028218719,966.053,866
9 21506802.537,021220519,963.0 *53,863 *
10 22146550.036,679
11 22386539.036,668
12 23726316.036,317
13 23736190.0 *36,191 *
TA41TA51TA61TA71
MKSTDSFLTMKSTDSFLTMKSTDSFLTMKSTDSFLT
12632 *21,027.567,9043128 *73,001.0129,8783420 *74,932.0151,3786094 *366,221.5517,558
2265020,910.567,829313272,689.0129,566342173,956.0150,4026095365,726.5517,063
3266620,826.567,745313772,651.0129,528342373884.0150,3306098365,546.5516,883
4267220,766.567,685319270,022.5126,809346169,778.0146,2246174365,320.5 *516,657 *
5277120,304.567,222324969,935.5 *126,722 *346269,767.0146,213
6277620,265.5 *67,183 * 347869,754.0 *146,200 *

References

  1. Coello, C.; Cruz, N. Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. Evolvable Mach. 2005, 6, 163–190. [Google Scholar] [CrossRef]
  2. Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  3. Ojstersek, R.; Brezocnik, M.; Buchmeister, B. Multi-objective optimization of production scheduling with evolutionary computation: A review. Int. J. Ind. Eng. Comput. 2020, 11, 359–376. [Google Scholar] [CrossRef]
  4. Pinedo, M. Scheduling: Theory, Algorithms, and Systems; Springer-Verlag: New York, NY, USA, 2008. [Google Scholar]
  5. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680. [Google Scholar] [CrossRef] [PubMed]
  6. Dueck, G.; Scheuer, T. Threshold Accepting: A General Purpose Algorithm Appearing Superior to Simulated Annealing. J. Comput. Phys. 1990, 90, 161–175. [Google Scholar] [CrossRef]
  7. Scaria, A.; George, K.; Sebastian, J. An artificial bee colony approach for multi-objective job shop scheduling. Procedia Technol. 2016, 25, 1030–1037. [Google Scholar] [CrossRef] [Green Version]
  8. Méndez-Hernández, B.; Rodriguez Bazan, E.D.; Martinez, Y.; Libin, P.; Nowe, A. A Multi-Objective Reinforcement Learning Algorithm for JSSP. In Proceedings of the 28th International Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; pp. 567–584. [Google Scholar] [CrossRef]
  9. López, A.; Coello, C. Study of Preference Relations in Many-Objective Optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’ 2009), Montreal, QC, Canada, 8–12 July 2009; pp. 611–618. [Google Scholar] [CrossRef]
  10. Blasco, X.; Herrero, J.; Sanchis, J.; Martínez, M. Decision Making Graphical Tool for Multiobjective Optimization Problems; Springer: Berlin/Heidelberg, Germany, 2007; Volume 4527, pp. 568–577. [Google Scholar] [CrossRef]
  11. García-León, A.; Dauzère-Pérès, S.; Mati, Y. An Efficient Pareto Approach for Solving the Multi-Objective Flexible Job-Shop Scheduling Problem with Regular Criteria. Comput. Oper. Res. 2019, 108. [Google Scholar] [CrossRef]
  12. Qiu, X.; Lau, H.Y.K. An AIS-based hybrid algorithm for static job shop scheduling problem. J. Intell. Manuf. 2014, 25, 489–503. [Google Scholar] [CrossRef] [Green Version]
  13. Kachitvichyanukul, V.; Sitthitham, S. A two-stage genetic algorithm for multi-objective job shop scheduling problems. J. Intell. Manuf. 2011, 22, 355–365. [Google Scholar] [CrossRef]
  14. Zhao, F.; Chen, Z.; Wang, J.; Zhang, C. An improved MOEA/D for multi-objective job shop scheduling problem. Int. J. Comput. Integr. Manuf. 2016, 30, 616–640. [Google Scholar] [CrossRef]
  15. González, M.; Oddi, A.; Rasconi, R. Multi-objective optimization in a job shop with energy costs through hybrid evolutionary techniques. In Proceedings of the Twenty-Seventh International Conference on Automated Planning and Scheduling, Pittsburgh, PA, USA, 18–23 June 2017; pp. 140–148. [Google Scholar]
  16. Serafini, P. Simulated Annealing for Multi Objective Optimization Problems. In Proceedings of the Tenth International Conference on Multiple Criteria Decision Making, Taipei, Taiwan, 19–24 July 1992. [Google Scholar]
  17. Bandyopadhyay, S.; Saha, S.; Maulik, U.; Deb, K. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA. Evol. Comput. IEEE Trans. 2008, 12, 269–283. [Google Scholar] [CrossRef] [Green Version]
  18. Liu, Y.; Dong, H.; Lohse, N.; Petrovic, S.; Gindy, N. An Investigation into Minimising Total Energy Consumption and Total Weighted Tardiness in Job Shops. J. Clean. Prod. 2013, 65, 87–96. [Google Scholar] [CrossRef]
19. Zitzler, E.; Thiele, L. Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Trans. Evol. Comput. 1999, 3, 257–271.
20. Wisittipanich, W.; Kachitvichyanukul, V. An Efficient PSO Algorithm for Finding Pareto-Frontier in Multi-Objective Job Shop Scheduling Problems. Ind. Eng. Manag. Syst. 2013, 12, 151–160.
21. Lei, D.; Wu, Z. Crowding-measure-based multiobjective evolutionary algorithm for job shop scheduling. Int. J. Adv. Manuf. Technol. 2006, 30, 112–117.
22. Kurdi, M. An Improved Island Model Memetic Algorithm with a New Cooperation Phase for Multi-Objective Job Shop Scheduling Problem. Comput. Ind. Eng. 2017, 111, 183–201.
23. Méndez-Hernández, B.; Ortega-Sánchez, L.; Rodriguez Bazan, E.D.; Martinez, Y.; Fonseca-Reyna, Y. Bi-objective Approach Based in Reinforcement Learning to Job Shop Scheduling. Revista Cubana de Ciencias Informáticas 2017, 11, 175–188.
24. Aarts, E.H.L.; van Laarhoven, P.J.M.; Lenstra, J.K.; Ulder, N.L.J. A Computational Study of Local Search Algorithms for Job Shop Scheduling. INFORMS J. Comput. 1994, 6, 118–125.
25. Ponnambalam, S.G.; Ramkumar, V.; Jawahar, N. A multiobjective genetic algorithm for job shop scheduling. Prod. Plan. Control 2001, 12, 764–774.
26. Suresh, R.K.; Mohanasundaram, M. Pareto archived simulated annealing for job shop scheduling with multiple objectives. Int. J. Adv. Manuf. Technol. 2006, 29, 184–196.
27. Zitzler, E.; Deb, K.; Thiele, L. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evol. Comput. 2000, 8, 173–195.
28. Karimi, N.; Zandieh, M.; Karamooz, H. Bi-objective group scheduling in hybrid flexible flowshop: A multi-phase approach. Expert Syst. Appl. 2010, 37, 4024–4032.
29. Deb, K.; Agrawal, S.; Pratap, A.; Meyarivan, T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In International Conference on Parallel Problem Solving from Nature; Springer: Berlin/Heidelberg, Germany, 2000; Volume 1917.
30. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
31. Deb, K. Multiobjective Optimization Using Evolutionary Algorithms; Wiley: New York, NY, USA, 2001.
32. Schott, J.R. Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization. Master's Thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA, 1995.
33. Veldhuizen, D.A.V. Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations. Ph.D. Thesis, Air Force Institute of Technology, Wright-Patterson AFB, Dayton, OH, USA, 1999.
34. Sawaragi, Y.; Nakayama, H.; Tanino, T. Theory of Multi-Objective Optimization; Springer: Boston, MA, USA, 1985.
35. Bakuli, D.L. A Survey of Multi-Objective Scheduling Techniques Applied to the Job Shop Problem (JSP). In Applications of Management Science: In Productivity, Finance, and Operations; Emerald Group Publishing Limited: Bingley, UK, 2015; pp. 51–62.
36. Baker, K.R. Sequencing rules and due-date assignments in job shop. Manag. Sci. 1984, 30, 1093–1104.
37. Sanvicente, S.H.; Frausto, J. A method to establish the cooling scheme in simulated annealing like algorithms. In International Conference on Computational Science and Its Applications; Springer: Berlin/Heidelberg, Germany, 2004; pp. 755–763.
38. Solís, J.F.; Sánchez, H.S.; Valenzuela, F.I. ANDYMARK: An analytical method to establish dynamically the length of the Markov chain in simulated annealing for the satisfiability problem. Lect. Notes Comput. Sci. 2006, 4247, 269–276.
39. May, R. Simple Mathematical Models with Very Complicated Dynamics. Nature 1976, 261, 459–467.
40. Fisher, H.; Thompson, G.L. Probabilistic learning combinations of local job-shop scheduling rules. Ind. Sched. 1963, 1, 225–251.
41. Applegate, D.; Cook, W. A computational study of the job-shop scheduling problem. ORSA J. Comput. 1991, 3, 149–156.
42. Lawrence, S. Resource Constrained Project Scheduling: An Experimental Investigation of Heuristic Scheduling Techniques (Supplement); Graduate School of Industrial Administration, Carnegie-Mellon University: Pittsburgh, PA, USA, 1984.
43. Adams, J.; Balas, E.; Zawack, D. The shifting bottleneck procedure for job shop scheduling. Manag. Sci. 1988, 34, 391–401.
44. Yamada, T.; Nakano, R. A genetic algorithm applicable to large-scale job-shop problems. In Proceedings of the Second International Conference on Parallel Problem Solving from Nature, Brussels, Belgium, 28–30 September 1992; pp. 281–290.
45. Hansen, P.B. Simulated Annealing. In Electrical Engineering and Computer Science-Technical Reports; School of Computer and Information Science, Syracuse University: Syracuse, NY, USA, 1992.
46. Tripathi, P.K.; Bandyopadhyay, S.; Pal, S.K. Multi-Objective Particle Swarm Optimization with time variant inertia and acceleration coefficients. Inf. Sci. 2007, 177, 5033–5049.
Figure 1. Main module for CMOSA and CMOTA.
Table 1. Related Works.

Algorithm | Objectives | Metrics
SA [16] | Makespan | *
SA and TA [24] | Makespan | *
Hybrid GA [25] | Makespan, total tardiness, and total idle time | *
PASA [26] | Makespan, mean flow time | *
2S-GA [13] | Makespan, total weighted earliness, and total weighted tardiness | *
IMOEA/D [14] | Makespan, total flow time, and tardiness time | C, MID
Hybrid GA/LS/LP [15] | Weighted tardiness and energy costs | HV
MOMARLA [8] | Makespan, total tardiness | HV
CMOSA and CMOTA (this paper) | Makespan, total tardiness, and total flow time | MID, S, HV, Δ, IGD, and C
* Not reported.
Table 2. Tuning parameters for CMOSA/CMOTA.

Number of Executions | Initial Temperature | Final Temperature | Alpha | L_k
50 | 100 | 0.1 | 0.98 | 100
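As an illustration of what the Table 2 settings imply, the sketch below assumes a standard geometric cooling scheme (T ← alpha·T each level, which is one common choice for SA-type algorithms and not necessarily the paper's exact scheme); with T0 = 100, Tf = 0.1, and alpha = 0.98 it yields about 342 temperature levels, each of which would run a Markov chain of length L_k = 100:

```python
import math

def geometric_cooling(t_initial=100.0, t_final=0.1, alpha=0.98):
    """Yield the temperature sequence T0, alpha*T0, alpha^2*T0, ... while above t_final."""
    t = t_initial
    while t > t_final:
        yield t
        t *= alpha

# Temperature levels implied by the Table 2 settings.
levels = sum(1 for _ in geometric_cooling())

# Closed form: smallest k with alpha^k * T0 <= Tf.
k_min = math.ceil(math.log(0.1 / 100.0) / math.log(0.98))
print(levels, k_min)  # 342 342
```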
Table 3. General parameters for CMOSA/CMOTA.

Number of Executions | Initial Solutions | Alpha | Stagnant Number
30 | 30 | 0.98 | 10
Table 4. Results obtained by the metrics for 70 instances.

Metric | CMOSA | CMOTA | Significant Difference CMOSA-CMOTA
MID | 30,680.19 * | 31,233.15 | Yes
SPACING | 28,445.62 | 28,183.17 * | No
SPREAD | 24,969.31 | 23,401.88 * | No
HV | 0.42 * | 0.42 * | No
IGD | 1666.25 * | 1870.94 | Yes
* Best result.
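MID (mean ideal distance) averages the Euclidean distance from each member of the obtained front to the ideal point, so smaller values are better. The following is a minimal sketch of that computation; the toy objective vectors and the origin-as-ideal-point choice are illustrative, not the paper's data:

```python
import math

def mean_ideal_distance(front, ideal=None):
    """Average Euclidean distance from each objective vector to the ideal point."""
    ideal = ideal if ideal is not None else (0.0,) * len(front[0])
    return sum(math.dist(p, ideal) for p in front) / len(front)

toy_front = [(3.0, 4.0), (6.0, 8.0)]   # illustrative objective vectors
print(mean_ideal_distance(toy_front))  # distances 5 and 10 -> 7.5
```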
Table 5. Results obtained by the coverage metric.

Coverage (CMOSA, CMOTA) | Coverage (CMOTA, CMOSA)
0.854 * | 0.063
* Best result.
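The coverage metric C(A, B) is the fraction of front B dominated by at least one member of front A, so C(CMOSA, CMOTA) = 0.854 means CMOSA dominates most of CMOTA's front, while the reverse fraction is only 0.063. A minimal sketch for minimization objectives (the toy fronts are illustrative only):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(front_a, front_b):
    """C(A, B): fraction of B's members dominated by some member of A. Not symmetric."""
    return sum(any(dominates(a, b) for a in front_a) for b in front_b) / len(front_b)

A = [(10, 5), (8, 7)]            # toy objective vectors
B = [(11, 6), (7, 9), (12, 4)]
print(coverage(A, B))            # only (11, 6) is dominated -> 1/3
```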
Table 6. Comparison between CMOSA and AMOSA.

Metric | CMOSA | AMOSA [17] | Significant Difference CMOSA-AMOSA
MID | 30,680.19 * | 32,138.19 | Yes
SPACING | 28,445.62 * | 30,129.36 | Yes
SPREAD | 24,969.31 * | 26,625.04 | No
HV | 0.42 * | 0.37 | No
IGD | 1666.25 * | 2209.96 | Yes
* Best result.
Table 7. Comparison between CMOTA and AMOSA.

Metric | CMOTA | AMOSA [17] | Significant Difference CMOTA-AMOSA
MID | 31,233.15 * | 32,138.19 | No
SPACING | 28,183.17 * | 30,129.36 | Yes
SPREAD | 23,401.88 * | 26,625.04 | No
HV | 0.42 * | 0.37 | No
IGD | 1870.94 * | 2209.96 | Yes
* Best result.
Table 8. CMOSA, CMOTA, and IMOEA/D results obtained using the MID metric.

CMOSA | CMOTA | IMOEA/D [14] | Significant Difference CMOSA-IMOEA/D | Significant Difference CMOTA-IMOEA/D
15,729.65 * | 16,567.07 | 18,727.04 | Yes | Yes
* Best result.
Table 9. Comparison among SPEA, CMOEA, MOPSO, CMOSA, CMOTA, and MOMARLA using HV.

Instance | SPEA [8] | CMOEA [8] | MOPSO [8] | MOMARLA [8] | CMOSA | CMOTA
FT06 | 0.07 | 0.07 | 0.50 | 0.65 | 0.64 | 0.75 *
FT10 | 0.17 | 0.26 | 0.87 | 0.96 | 0.71 | 0.69
FT20 | 0.20 | 0.20 | 0.21 | 0.25 | 0.57 * | 0.77 *
ABZ5 | 0.34 | 0.33 | 0.36 | 0.40 | 0.85 * | 0.56 *
ABZ6 | 0.22 | 0.36 | 0.31 | 0.42 | 0.60 * | 0.81 *
ABZ7 | 0.51 | 0.45 | 1.00 | 1.00 | 0.79 | 0.51
ABZ8 | 0.88 | 0.36 | 0.99 | 0.99 | 0.69 | 0.66
LA26 | 0.33 | 0.39 | 0.47 | 0.47 | 0.91 * | 0.70 *
LA27 | 0.58 | 0.56 | 0.41 | 0.60 | 0.71 * | 0.93 *
LA28 | 0.48 | 0.42 | 0.48 | 0.54 | 0.92 * | 0.44
ORB01 | 0.62 | 0.74 | 0.59 | 0.80 | 0.87 * | 0.63
ORB02 | 0.20 | 0.04 | 0.30 | 0.53 | 0.88 * | 0.77 *
ORB03 | 0.69 | 0.31 | 0.85 | 0.86 | 0.76 | 0.80
ORB04 | 0.63 | 0.28 | 0.52 | 0.79 | 0.76 | 0.81 *
ORB05 | 0.00 | 0.023 | 0.22 | 0.90 | 0.74 | 0.32
Mean HV | 0.39 | 0.32 | 0.54 | 0.68 | 0.76 * | 0.68
* Best result.
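HV (hypervolume) measures the objective-space volume dominated by a front, bounded by a reference point, so larger values indicate a front that is both closer to the true front and better spread. The paper's problems are three-objective; the two-objective minimization sketch below (toy front and reference point, illustrative only) shows the idea:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front w.r.t. reference point ref.
    Assumes every point is dominated by ref; sorts by the first objective and
    sums the rectangular slices each point adds."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        if f2 < prev_f2:                  # skip dominated or duplicate points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

toy_front = [(1, 3), (2, 2), (3, 1)]      # toy non-dominated set
print(hypervolume_2d(toy_front, ref=(4, 4)))  # 3 + 2 + 1 = 6.0
```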
Table 10. Complexity of the algorithms.

AMOSA | IMOEA/D | SPEA | MOPSO | MOMARLA | CMOSA | CMOTA
O(nΓ²) | O(MΓT) | O(MΓ) | O(MΓ²) | O(Mnp) | O(Mnp) | O(Mnp)
Table 11. Runtimes for CMOSA, CMOTA, and AMOSA.

Algorithm | CMOSA | CMOTA | AMOSA [17]
Data set of 70 instances
Average execution time | 495.22 | 229.42 * | 505.84
% time saved vs. AMOSA | 2.1 | 55 * | 0
Data set of 58 instances
Average execution time | 111.68 | 41.97 * | 139.39
% time saved vs. AMOSA | 19.87 | 69.89 * | 0
Data set of 15 instances
Average execution time | 81.24 | 75.24 * | 141.25
% time saved vs. AMOSA | 42.48 | 46.73 * | 0
* Best result.
