Article

An Effective Decomposition-Based Stochastic Algorithm for Solving the Permutation Flow-Shop Scheduling Problem

by
Mehrdad Amirghasemi
SMART Infrastructure Facility, Faculty of Engineering and Information Sciences, University of Wollongong, Wollongong, NSW 2522, Australia
Algorithms 2021, 14(4), 112; https://doi.org/10.3390/a14040112
Submission received: 27 February 2021 / Revised: 26 March 2021 / Accepted: 29 March 2021 / Published: 30 March 2021
(This article belongs to the Special Issue Stochastic Algorithms and Their Applications)

Abstract
This paper presents an effective stochastic algorithm that embeds a large neighborhood decomposition technique into a variable neighborhood search for solving the permutation flow-shop scheduling problem. The algorithm first constructs a permutation as a seed using a recursive application of the extended two-machine problem. In this method, the jobs are recursively decomposed into two separate groups, and, for each group, an optimal permutation is calculated based on the extended two-machine problem. Then the overall permutation, obtained by integrating the sub-solutions, is improved through the application of a variable neighborhood search technique. Like the first technique, this one is based on the decomposition paradigm and can find an optimal arrangement for a subset of jobs. In the employed large neighborhood search, the concept of the critical path is used to help the decomposition process avoid unfruitful computation and re-arrange only promising contiguous parts of the permutation. In this fashion, the algorithm leaves those parts of the permutation which already have high-quality arrangements and concentrates on modifying other parts. The results of computational experiments on the benchmark instances indicate that the procedure works effectively, demonstrating that solutions within a very short distance of the best-known solutions are calculated within seconds on a typical personal computer. In terms of the running time required to reach a high-quality solution, the procedure outperforms some well-known metaheuristic algorithms in the literature.

1. Introduction

The endeavor of developing a suitable neighborhood scheme is a key factor in the success of any local search algorithm. The reason is the critical role that the size of the neighborhood plays in striking a balance between effectiveness and computational time [1,2]. To strike such a balance in solving the permutation flow-shop scheduling problem, this paper presents a two-level decomposition-based, variable neighborhood search stochastic algorithm that embeds a large neighborhood scheme into a small one.
The algorithm, called Refining Decomposition-based Integrated Search (RDIS), refines a solution obtained by its small neighborhood search through its large neighborhood search, and uses the notion of decomposition in the two levels of (i) producing an initial solution, and (ii) performing large neighborhood search on the initial solution. Both neighborhood schemes employed are k-exchange [3].
The k-exchange schemes, the most common type of neighborhood scheme, consider two candidate solutions to be neighbors if they differ in at most k solution elements [3]. Larger neighborhood schemes contain more, and potentially better, candidate solutions, but their size grows exponentially with k [2]. In effect, when k is as large as the size of the problem, the corresponding neighborhood scheme makes locally optimal solutions globally optimal; because of their impracticality, however, such neighborhood schemes are not used in practice. On the other hand, smaller values of k require lower computational time at the cost of generating fewer candidate solutions [1,3].
In combining small and large neighborhood schemes, the RDIS has three synergetic characteristics: (i) generating an initial solution with a construction method built upon the decomposition of jobs and the extended two-machine solution strategy, (ii) improving the initial solution with a small neighborhood search, and (iii) enhancing the result produced by the small neighborhood search with a large neighborhood search built on a decomposition technique.
The construction method for generating initial solutions is based on the recursive application of the extended 2-machine problem [4], described in Section 2. Aimed at generating high-quality solutions, and inspired by the famous two-machine problem [4], this recursive algorithm repeatedly decomposes the jobs and finds the best permutation for the decomposed jobs, with the goal of improving the quality of initial solutions.
Although the employed large neighborhood search, like the construction method, is based on decomposition, it performs decomposition iteratively rather than recursively. This iterative decomposition technique starts with the top k jobs in the permutation and ends with the bottom k jobs, ignoring any unfruitful contiguous parts in the middle. In each round, it also keeps the results of its calculations in memory to prevent repeated calculations in later rounds. In optimizing small chunks, this effective use of memory makes the optimization process as fast as the simple decoding of a permutation into a solution. The main contributions of this paper are outlined in the following:
  • The effective embedding of a large neighborhood decomposition technique into a variable neighborhood search for the permutation flow-shop scheduling problem;
  • Efficient decomposition of the main problem into subproblems, and finding optimal permutations for those smaller subproblems; and
  • Employing the concept of the critical path, as described in Section 4, in the employed large neighborhood search.
The paper is structured as follows. In Section 2, the related work is discussed, and Section 3 presents the formulation of the permutation flow-shop scheduling problem. Section 4 presents the RDIS and provides a stepwise description of the corresponding algorithm. In Section 5, the results of computational experiments are discussed. The concluding remarks, as well as the suggestions for the further enhancement of the stochastic algorithm, are discussed in Section 6.

2. Related Work

In manufacturing systems, scheduling is about the allocation of resources to operations over time to somehow optimize the efficiency of the corresponding system. Quite interestingly, because of their complexity, scheduling problems have been among the first categories of problems that have promoted the development of the notion of NP-completeness [5].
The Flow-shop Scheduling Problem (FSP), in which jobs should pass all machines in the same order, is one of the easy-to-state but hard-to-solve problems in scheduling. Because of these contrasting features, the FSP is one of the first problems in scheduling which captured the attention of pioneers in scheduling [4,6], and later became an attractive topic for a significant number of researchers.
Interestingly, one of the most elegant procedures in the area of algorithmic design is an efficient exact algorithm [4] that has been presented for the two special cases of this problem, which are a two-machine problem and a restricted case of the three-machine problem. Although this algorithm has been extended [6,7] to tackle some less restricted cases of the three-machine problem, it has never been extended to solve problems with a larger number of machines to optimality.
Several variations and extensions of the FSP, with multiple objective function criteria, have been studied in the literature [8,9,10]. Among the variants of the FSP, the Permutation Flow-shop Scheduling Problem (PFSP), in which jobs have the same sequence on all machines, plays a key role. The PFSP best suits multi-stage production systems in which conveyor belts are used to perform material handling between machines. Both the FSP and the PFSP have been shown to be NP-hard [11,12]. An interesting variant of the PFSP is the distributed permutation flow-shop scheduling problem (DPFSP), which has been studied in [13]; in [14], energy consumption is considered as one of its multi-objective criteria. The rest of this section briefly surveys the related literature with a focus on the PFSP.
Extensive literature surveys on the early efficient algorithms presented for the FSP can be found in [15,16]. A more recent review and classification of heuristics for the PFSP has been presented in [17]. In the proposed framework of [17], heuristics are analyzed in the three phases of (i) index development, (ii) solution construction and (iii) solution improvement. In the following, some related work is briefly reviewed.
The most closely related method is Johnson's method; because the RDIS depends on it entirely, it is described as part of the RDIS in Section 4. The Palmer method, developed in [18], can be considered another related work. The author provides no reference to Johnson's paper but mentions that the rationale behind the procedure is to minimize the makespan based on the notion of the lower bound of the single-machine problem, which is obtained as the head time plus the total processing time plus the tail time. It is these heads and tails that the Palmer algorithm tries to minimize.
Advancing Palmer's index method, an algorithm has been presented in [19] that works based on a slope-matching mechanism. First, assuming that only one job exists, the starting time and ending time of each job on each machine are calculated, and, by running a regression, a starting slope and an ending slope are calculated for each job.
The first method, against which they compared their two techniques, is the method presented in [20], which finds the actual slope of the jobs; this actual slope is calculated by simply running the regression for each job.
The other method they compared against is based on the priority index presented in [21], which is obtained from Johnson's rule.
In comparing these four methods with one another, they found the following ranking: (i) TSP-approximation, (ii) slope-matching mechanism, (iii) Gupta's method, and (iv) Palmer's method. These ranking criteria can be used as the basis of any construction method.
As another example, in [22] a heuristic algorithm for n-job m-machine sequencing problem has been presented that, based on the two-machine problem, builds a sequence by decomposing total processing times into three parts.
Among other effective heuristics developed for the permutation flow-shop problem, TB [23], SPIRIT [24], FSHOPH [25], NEH [26], and NEHLJP1 [27] can be mentioned. It is worth noting that NEH is one of the earliest effective heuristics and it is incorporated in many other metaheuristics. For instance, an iterated greedy algorithm is presented in [28], in which a solution is improved through repeated removal and re-insertion of the jobs, from a given solution, using the NEH heuristic.
Some effective metaheuristics developed are Ogbu and Smith's simulated annealing [29], Taillard's tabu search [30], Nowicki and Smutnicki's fast tabu search [31], Reeves's genetic algorithm [32], Nowicki and Smutnicki's scatter search [33], Iyer and Saxena's genetic algorithm [34], Yagmahan and Yenisey's ant colony algorithm [35], Lian et al.'s particle swarm algorithm [36], the artificial bee colony algorithm of [37], the discrete differential evolution algorithm of [38], the algebraic decomposition-based evolutionary algorithm of [8], and the Re-blocking Adjustable Memetic Procedure (RAMP) of [39]. A comprehensive review and comparison of heuristics and metaheuristics for the PFSP can be found in [40].

3. Problem Formulation

Assuming that there are n jobs and m machines, and that all jobs must be processed on machines 1, 2, 3, ..., m, one after another, the PFSP aims at finding a permutation π of jobs, identical for all machines, that minimizes the completion time of the last job on the last machine, referred to as the makespan.
Given a permutation π = (π_1, π_2, …, π_n), in which π_k denotes the job placed at position k of the sequence, and denoting the processing time of job j on machine i as t_{j,i}, the makespan of the problem can be calculated by rearranging the jobs based on π. In effect, by denoting the earliest completion time of the job at position k on machine i as c_{k,i}, the value of c_{n,m} gives the exact makespan. Hence, the objective can be stated as finding a permutation π over the set of all permutations, Π, so that c_{n,m} is minimized. A simple formulation of the PFSP, based on the formulation presented in [41], is as follows.
min_{π ∈ Π} c_{n,m}    (1)
where,
c_{1,1} = t_{π_1,1}    (2)
c_{1,i} = c_{1,i−1} + t_{π_1,i},    i = 2, 3, …, m    (3)
c_{k,1} = c_{k−1,1} + t_{π_k,1},    k = 2, 3, …, n    (4)
c_{k,i} = max(c_{k,i−1}, c_{k−1,i}) + t_{π_k,i},    i = 2, 3, …, m;  k = 2, 3, …, n    (5)
It is worth noting that Formulas (2)–(5) also show how, for a given permutation, the makespan can be calculated in O(nm) time.
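To make the recurrence concrete, the following Python sketch implements Formulas (2)–(5) directly (the paper's implementation is in C++; this snippet and its 2-job, 2-machine processing-time matrix are purely illustrative):

```python
def makespan(t, perm):
    """Makespan of permutation `perm`, where t[j][i] is the processing
    time of job j on machine i (Formulas (2)-(5))."""
    n, m = len(perm), len(t[0])
    # c[k][i]: earliest completion time of the job in position k on machine i
    c = [[0] * m for _ in range(n)]
    for k, job in enumerate(perm):
        for i in range(m):
            ready_machine = c[k][i - 1] if i > 0 else 0  # job done on machine i-1
            ready_job = c[k - 1][i] if k > 0 else 0      # machine i freed by previous job
            c[k][i] = max(ready_machine, ready_job) + t[job][i]
    return c[n - 1][m - 1]

# Hypothetical 2-job, 2-machine instance:
t = [[3, 2],   # job 0
     [1, 4]]   # job 1
print(makespan(t, [0, 1]))  # 9
print(makespan(t, [1, 0]))  # 7
```

Filling each entry of the c matrix takes constant time, which yields the O(nm) bound stated above.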
Figure 1 shows a sample 4-job 4-machine PFSP instance, in which the permutation π_1 = (1, 2, 3, 4) of jobs leads to the solution value of 30. The arrows originating from each circle in Figure 1 show the processing time of the corresponding job on the corresponding machine, and the sum of the processing times on the bold arrows indicates the makespan. Figure 2 shows how changing the order from π_1 = (1, 2, 3, 4) to π_2 = (1, 4, 3, 2) reduces the makespan to 29, which is the optimal solution.

4. RDIS

A key justification for the design of the RDIS is an effective integration of decomposition, converting a non-tractable problem into tractable subproblems, with variable neighborhood search techniques, synergetically striking a balance between diversification and intensification elements of the algorithm.
Starting with the notion of decomposition, the RDIS works in the two levels of constructing an initial solution, and improving this solution through local searches. For this purpose, it first employs the recursive application of the extended two-machine problem as a seed-construction technique and constructs an initial permutation as a seed. This construction technique recursively decomposes the jobs into two groups and calculates the optimal solutions of each group based on the extended two-machine problem.
After such an initial seed is constructed, a variable neighborhood search is activated. This variable neighborhood search consists of two local searches, one with the small neighborhood and the other with the large neighborhood. The large neighborhood local search uses a decomposition technique for finding an optimal arrangement for different subsets of jobs. In effect, both the seed-construction technique and the employed large neighborhood search are based on the notion of decomposition.
The small neighborhood search component uses both the insertion and swapping operations as the basis of its moves. In both the small and large neighborhood searches, the concept of the critical path plays a crucial role in avoiding unfruitful changes. Whereas in the small neighborhood, the critical path is used to avoid unfruitful swaps or interchanges, in the large neighborhood search, it is used to re-arrange only promising contiguous parts of the permutation, leaving those parts whose re-arrangement is not beneficial. Figure 3 shows the c-type pseudocode of the RDIS, and Figure 4 shows the flowchart of the key steps of the RDIS.
In describing different components of this pseudocode, first, we describe the way in which the seed-construction technique uses decomposition to create the initial solutions. For creating this technique, we have modified the Johnson algorithm [4] and called the result the Recursive-Johnson Procedure (RJP). For tackling any PFSP problem, the RJP decomposes it into smaller problems and continues this in different rounds until the decomposed problem includes only one job or one machine.
The decomposition process in the RJP proceeds in several consecutive rounds. In the first round, the number of columns (machines) of the decomposed problems is divided into two equal parts, with the first machines comprising the first part and the last machines comprising the second part. All machines located in the first part comprise auxiliary machine 1, and those in the second part comprise auxiliary machine 2.
Then the corresponding two-machine problem is solved with Johnson’s algorithm and a permutation for the given jobs is determined. This permutation is only a preliminary one and the goal of further rounds is to further improve this permutation. Because of the recursive nature of the algorithm, further rounds are similar to the first round and their only difference is that they operate on a portion of jobs.
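A minimal sketch of this first round is given below, assuming the aggregation is a plain sum of the processing times in each half of the machines; the function names and the five-job example are hypothetical, not taken from the paper:

```python
def johnson_order(a, b):
    """Johnson's rule for the two-machine flow shop: jobs with a[j] < b[j]
    come first in ascending order of a; the rest follow in descending order of b."""
    jobs = range(len(a))
    head = sorted((j for j in jobs if a[j] < b[j]), key=lambda j: a[j])
    tail = sorted((j for j in jobs if a[j] >= b[j]), key=lambda j: -b[j])
    return head + tail

def first_round(t):
    """First decomposition round: the machines are split into two halves,
    each half is summed into an auxiliary machine, and Johnson's rule is
    applied to the resulting two-machine problem."""
    m = len(t[0])
    aux1 = [sum(row[:m // 2]) for row in t]  # auxiliary machine 1
    aux2 = [sum(row[m // 2:]) for row in t]  # auxiliary machine 2
    return johnson_order(aux1, aux2)

# A hypothetical five-job two-machine instance:
print(johnson_order([3, 5, 1, 6, 7], [6, 2, 2, 6, 5]))  # [2, 0, 3, 4, 1]
```

Further rounds would recursively re-apply the same procedure to portions of this preliminary permutation.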
The employed large neighborhood search decomposes the problem into several distinct parts and optimizes each part separately. Based on a technique initially proposed in [30], forward and backward matrices are used together to reduce the cost of the optimization.
Whereas the forward matrix stores the distance of each operation from the first performed operation, the backward matrix stores the distance of each operation from the last performed operation. Moreover, a preliminary evaluation renders many of these combinations unnecessary. Thanks to these forward and backward matrices, when the positions of two jobs are not far from one another in the permutation, their swap can be evaluated very quickly.
The solutions provided by the RJP undergo both small and large neighborhood searches. As is seen in the pseudocode presented in Figure 3, the large neighborhood search is called at line 11. The pseudocode presented in Figure 5 describes how this large neighborhood search operates. As is seen in this pseudocode, the main loop starts at line 3 and is aimed at improving the given solution, shown with permutation π , through modifying different parts of this permutation.
The substrings selected for modification are called chunks, and all have the same size, chunkSize. For the purpose of modification, a subproblem of the given problem is solved to optimality by evaluating all possible orderings of the jobs in a given chunk, keeping the other jobs fixed in their current positions. In particular, it is line 9 of the pseudocode that calculates the makespan values of all chunkSize! permutations. For instance, if the value of chunkSize is 6, line 9 has to evaluate 6! = 720 different permutations to find the optimal one.
As is shown in line 5, the number of chunks to be optimized for a given solution is controlled through a parameter named sweepRatio, which lies between 0 and 1. For instance, setting sweepRatio to 0.5 and chunkSize to 6 causes the first chunk, which starts at position 1 and ends at position 6, to be optimized, after which the starting position of the next chunk is set to (1 + 6 × 0.5) = 4.
In other words, around 50% of jobs for the optimized chunk are fixed and the remaining jobs appear in the first half of the next chunk to be optimized. After re-arranging the chunk starting at position 4, through the employed optimization process, and ending at position 9, positions 4, 5, and 6 are fixed and the next chunk starting from position 7 and ending in position 12 is selected for being re-arranged.
Fixing half of the re-arranged jobs in a chunk and carrying the other half with the next chunk continues until a chunk is selected whose ending point is the last position in the permutation, signaling the end of a cycle. As indicated in lines 3 and 15 of the pseudocode, a new cycle can again resume from position 1 if the current cycle has been able to improve the solution.
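The sweep just described can be sketched as follows. This simplified Python version (with hypothetical names, and without the critical-path skipping and memoization that the RDIS adds) optimizes each chunk by brute force and repeats cycles as long as they keep improving:

```python
from itertools import permutations

def makespan(t, perm):
    # t[j][i]: processing time of job j on machine i
    n, m = len(perm), len(t[0])
    c = [[0] * m for _ in range(n)]
    for k, job in enumerate(perm):
        for i in range(m):
            c[k][i] = max(c[k][i - 1] if i else 0,
                          c[k - 1][i] if k else 0) + t[job][i]
    return c[n - 1][m - 1]

def chunk_sweep(t, perm, chunk_size=4, sweep_ratio=0.5):
    """Brute-force optimization of overlapping chunks; a new cycle starts
    only if the previous cycle improved the solution."""
    best, best_cost = list(perm), makespan(t, perm)
    step = max(1, int(chunk_size * sweep_ratio))
    improved = True
    while improved:
        improved = False
        for start in range(0, len(best) - chunk_size + 1, step):
            head, tail = best[:start], best[start + chunk_size:]
            # Find the optimal arrangement of this chunk, other jobs fixed
            for cand in permutations(best[start:start + chunk_size]):
                cost = makespan(t, head + list(cand) + tail)
                if cost < best_cost:
                    best, best_cost = head + list(cand) + tail, cost
                    improved = True
    return best, best_cost

# Hypothetical 8-job, 3-machine instance; the result is never worse than the seed:
t = [[5, 3, 7], [2, 8, 1], [6, 2, 4], [4, 4, 4],
     [1, 6, 2], [7, 1, 5], [3, 5, 3], [2, 2, 8]]
best, cost = chunk_sweep(t, list(range(8)))
```

Because a cycle is repeated only on strict improvement, the loop is guaranteed to terminate.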
The quality of the obtained solutions depends on the values of chunkSize and sweepRatio. Whereas increasing the value of sweepRatio increases the exploration power and reduces computation time, decreasing it can produce better solutions at the expense of higher computation time. Since optimizing a chunk involves evaluating chunkSize! permutations, the execution time is highly dependent on the chunk size.
This implies that a combination of selecting a proper value for this parameter and a mechanism for the recognition of unfruitful chunks can greatly improve the efficiency of the stochastic algorithm. Values between 6 and 9 for chunkSize can lead to improving solution quality, without increasing execution time significantly.
With respect to having a necessary mechanism for the recognition of unfruitful chunks, line 7 detects such chunks based on the critical path properties of the current solution. This is done based on the fact that if all of the critical operations in a chunk belong to the same machine, optimizing such a chunk cannot reduce the makespan, and therefore the chunk can be skipped. In other words, regardless of the order of jobs in such an unfruitful chunk, the previous critical path persists in the new solution and this makes any decrease in the makespan impossible.
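One way to implement this check (an illustrative sketch under stated assumptions, not the paper's implementation) is to backtrack a critical path through the completion-time matrix and test whether the critical operations inside a chunk all lie on a single machine; note that the sketch recovers only one critical path, whereas a full implementation would account for all critical operations:

```python
def completion_times(t, perm):
    # c[k][i]: earliest completion time of the job in position k on machine i
    n, m = len(perm), len(t[0])
    c = [[0] * m for _ in range(n)]
    for k, job in enumerate(perm):
        for i in range(m):
            c[k][i] = max(c[k][i - 1] if i else 0,
                          c[k - 1][i] if k else 0) + t[job][i]
    return c

def critical_path(t, perm):
    """Backtrack one critical path from the last operation;
    returns (position, machine) pairs."""
    c = completion_times(t, perm)
    k, i = len(perm) - 1, len(t[0]) - 1
    path = [(k, i)]
    while k or i:
        # Step to whichever predecessor determined c[k][i]
        if k == 0 or (i > 0 and c[k][i - 1] >= c[k - 1][i]):
            i -= 1
        else:
            k -= 1
        path.append((k, i))
    return path

def chunk_is_unfruitful(t, perm, start, size):
    """True if every critical operation at positions start..start+size-1
    lies on one machine, so re-arranging the chunk cannot reduce the makespan."""
    machines = {i for (k, i) in critical_path(t, perm)
                if start <= k < start + size}
    return len(machines) <= 1
```

For a single-machine instance, for example, every chunk is trivially unfruitful, since the whole critical path runs down one column.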
Setting the values of chunkSize and sweepRatio to 4 and 0.5, respectively, Figure 6 illustrates how the application of just one cycle of the steps presented in the pseudocode improves an initial solution for a sample 8-job 8-machine instance from 1057 to 942. In this instance, the processing times of the operations have been selected as integers between 0 and 100 with a uniform distribution.
As is seen in the figure, each solution has been shown with a matrix, in which the permutation of jobs has been represented in its first column. In each matrix, the critical operations corresponding to the solution are highlighted and its makespan, which is equal to the sum of the highlighted processing times, has also been shown. Figure 6a shows the initial solution, π 0 = { 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 } with the cost of 1057.
In Figure 6b, the first chunk [ 1 , 2 , 3 , 4 ] is optimized and is changed to [ 3 , 1 , 2 , 4 ] . This alone reduces the makespan by 38 units and changes it to 1019. Since the sweepRatio is 0.5, the next chunk to be optimized starts from the 3rd row and is [ 2 , 4 , 5 , 6 ] which is optimized to [ 2 , 6 , 5 , 4 ] , shown in Figure 6c.
This procedure continues accordingly and optimizes all corresponding chunks. As mentioned, Figure 6 shows the result of applying the main loop of the pseudocode for just one cycle; as lines 3, 4 and 15 of the pseudocode show, in general, the optimization of chunks continues for consecutive cycles until no improvement occurs in a cycle.

5. Computational Experiments

The RDIS has been implemented in C++ and compiled via the Visual C++ compiler on a PC with a 2.2 GHz CPU. Before testing the algorithm on the benchmark instances, the parameters of the construction and large neighborhood components have been set. With respect to the construction component, the algorithm has two parameters, namely NRepeats and SwapProb, indicating the frequency and the probability of performing swap moves in the RJP method. Based on the preliminary experiments, we observed that setting NRepeats and SwapProb to 1 and 0.4, respectively, yields the best results by providing a balance between the quality and diversity of the initial solutions generated by the construction component.
With respect to the large neighborhood component, the RDIS has also two parameters, namely ChunkSize, and SweepRatio. Setting ChunkSize to a small value (≤5) will increase the speed of the procedure at the expense of lower solution quality. On the other hand, setting ChunkSize to any value larger than 10 will dramatically increase the running time. Moreover, the computational experiments show that setting the value of SweepRatio close to 1 or 0 degrades the performance. Due to these observations, and based on further preliminary experiments, ChunkSize and SweepRatio have been set to 7 and 0.3, respectively.
We have employed Taillard's repository of benchmark instances [42], available on the website maintained by the University of Applied Sciences of Western Switzerland, to test the performance of the RDIS. The repository contains 120 instances, and the RDIS has been tested on all of them. These instances fall into 12 classes of 10 instances each, with the numbers of jobs and machines ranging over 20–500 and 5–20, respectively.
Before presenting the performance of the stochastic algorithm on the benchmark instances and comparing the results with the best available solutions in the literature, we present the internal operations and convergence of the algorithm in solving the instances ta001 and ta012. Figure 7 shows the solution value versus the iteration number over 5000 iterations for the instance ta001. As is seen, a high-quality solution can follow a low-quality one, and vice versa.
In general, the quality of the best solution obtained improves as the number of iterations increases. Figure 8a shows how this has happened in the first 100 iterations of the ta001 problem instance. As is seen, before iteration 20, the quality of the best-obtained solution has improved from 1413 to 1336 and for the next 60 iterations, no improvement has occurred. Between iterations 80 and 100, two other improvements have occurred and 1336 has improved to 1297. A similar trend can be seen in other instances as well; Figure 8b shows the convergence pattern, in just over 2000 iterations for a larger problem instance, ta012.

Comparison with Other Metaheuristics

The RDIS is compared with a variety of metaheuristics in terms of solution quality and computation time. In particular, as presented in Table 1, the RDIS is compared with the Re-blocking Adjustable Memetic Procedure (RAMP) of [39], NEGAVNS of [43], the particle swarm optimization algorithm PSOVNS of [44], the simulated annealing algorithm SAOP of [45], two ant colony optimization algorithms, PACO and M-MMAS, due to [46], the iterated local search (ILS) of [47], and the hybrid genetic algorithm HGA_RMA of [48].
SAOP, PACO, M-MMAS, ILS and HGA_RMA are all run on a CPU with 2.6 GHz clock speed and the results are reported in [48]. However, RAMP and NEGAVNS are run on a 2.2 GHz and a 2.4 GHz CPU respectively, and PSOVNS is run on a 2.8 GHz CPU. Furthermore, CPU clock speeds are presented in Table 2 and a scaling factor has been calculated. However, since CPU times depend on many factors such as developer skills, compiler efficiency, implementation details, and CPU architecture, any running time comparison should be made with caution.
Table 3 shows the comparison of the real and scaled maximum allowed running time for RDIS, NEGAVNS, PSOVNS, and HGA_RMA. The maximum allowed running time of the other approaches in Table 3 is equal to that of HGA_RMA, since they are re-implemented and compared in [48]. Furthermore, since only NEGAVNS and PSOVNS have reported the average time needed to reach their best solution, a comparison is presented in Table 4.
In addition, based on the fact that NEGAVNS, RAMP, and RDIS allow similar maximum running times (ref. Table 3), statistical testing has been performed to further analyze their differences. For this purpose, these methods are first ranked based on their average deviation from the best-known solution, %DEVavg, for each instance group, as shown in Table 5. In the case of tied %DEVavg values, each tied method receives the average of the ranks that would have been assigned had there been no ties. The mean ranks for NEGAVNS, RAMP, and RDIS are 1.5, 1.667 and 2.833, respectively. A Friedman test performed on the ranked data set shows a statistically significant difference in the ranks, with χ²(2) = 15.2, p = 0.0005.
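For illustration, the tie-handling used here (each tied value receives the average of the rank positions it spans) can be computed as follows; the sample %DEVavg triple is hypothetical, not a row of Table 5:

```python
def average_ranks(values):
    """Rank values ascending (1 = best); tied values share the average
    of the rank positions they occupy."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    pos = 0
    while pos < len(order):
        end = pos
        # Extend over the run of values tied with the one at `pos`
        while end + 1 < len(order) and values[order[end + 1]] == values[order[pos]]:
            end += 1
        avg = (pos + 1 + end + 1) / 2  # average of positions pos+1 .. end+1
        for idx in order[pos:end + 1]:
            ranks[idx] = avg
        pos = end + 1
    return ranks

# Two algorithms tied for best on a hypothetical instance group:
print(average_ranks([0.12, 0.12, 0.45]))  # [1.5, 1.5, 3.0]
```

Averaging these per-group ranks over all instance groups yields mean ranks of the kind reported above, on which the Friedman test operates.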
Further, for each pair of algorithms, post-hoc analysis using Wilcoxon signed-rank tests has been conducted with a significance level of p < 0.05/3 ≈ 0.017 (Bonferroni correction). No significant difference was found between the ranks of NEGAVNS and RAMP (p = 0.482), but there were significant differences between the ranks of NEGAVNS and RDIS (p < 0.001) as well as between RAMP and RDIS (p < 0.001). This could indicate that, of the three compared methods, NEGAVNS and RAMP are the best-performing ones and the RDIS falls in second place. It is worth noting that these results should be interpreted with caution, as they are based solely on average deviation; other factors such as best deviation, average and best running times, and implementation details have not been considered. The interested reader is referred to [49] for a tutorial on the nonparametric statistical tests used for comparing metaheuristics.
For a more fine-grained analysis, an instance-by-instance performance comparison is presented in Table 6 and Table 7. In these tables, the performance of the RDIS on these instances is shown and compared with that of the NEGAVNS presented in [43]. The NEGAVNS, which is a genetic algorithm (GA) that uses variable neighborhood search (VNS), employs the NEH heuristic presented in [26] for constructing its initial solutions, and that is why it is named NEGAVNS. NEH, itself named after the initials of its authors, is a well-known algorithm, and its performance has also been provided in the tables for further information.
In line with [43] (Zobolas, Tarantilis et al. 2009), we have set the running time of the RDIS to (n × m)/10 seconds for each run. Moreover, to remove the effect of the random seed and in line with the other algorithms, the RDIS has been run 10 times for each instance, with different random seeds.
In Table 6 and Table 7, columns 2 and 3 represent the number of jobs and machines, respectively, and columns 4 and 5 show the lower and upper bound of each instance. The value of the upper bound is the best available makespan for the corresponding instance in the literature, and in the cases where upper and lower bounds are equal, the best available makespan in the literature is equal to the optimal solution.
Columns 6, 7, and 10 provide the values of the makespan produced by NEH, RDIS, and NEGAVNS, respectively. The value of Tbest shows the shortest time taken, in seconds, for the best solution to be obtained, and the value of %DEVavg shows the average percentage deviation from the best available makespan. The percentage deviation from the best available makespan is calculated as 100 × (s − UB)/UB, in which s is the corresponding calculated makespan and UB is the best available makespan, which is also an upper bound for the corresponding instance. The wide spectrum of information provided in Table 6 and Table 7, based on 120 benchmark instances, indicates that the RDIS is highly competitive and produces comparatively high-quality solutions.
In all 120 instances, the RDIS has obtained a better solution value than the NEH heuristic. In terms of best solution value, the RDIS has obtained an equal or better value in 61 out of 120 problems. Moreover, in 46 out of 120 instances the average percentage deviation has been smaller than or equal to that of the NEGAVNS, and in 100 out of 120 instances, the average shortest time to find the best solution is less than or equal to that of the NEGAVNS. Moreover, on total average, the RDIS reaches its best solution more than 44 times faster than the NEGAVNS.

6. Concluding Remarks

In the area of algorithm design, the notion of decomposition is an umbrella term for solving a problem by breaking it down into subproblems, all of which are handled by the same procedure. The RDIS is a stochastic algorithm based on this notion, in the sense that it translates intractable problems into tractable subproblems and obtains optimal solutions to these subproblems.
Integrating the optimal sub-solutions of these subproblems does not, except for small-sized problems, lead to an overall optimal solution. However, computational experiments have shown that the RDIS is able to produce solutions within one percent of the best-known solutions in the literature. This success is not the result of one factor but of three major factors.
First, the combination of small and large neighborhood schemes strikes a delicate balance between speed and quality. In this respect, it should be noted that although multi-exchanges in local searches usually lead to better results than pair-exchanges, they also require far longer execution times.
In other words, in the RDIS, the size of the k-exchange neighborhood is not O(n^k), as is the case with general k-exchange neighborhoods, but only a tiny fraction of it. The reason is that the computational burden has been placed on the small neighborhood scheme, and the large neighborhood scheme is used only for promising partitions; this takes an immense bulk of possible k-exchanges out of consideration and contributes to the efficiency of the stochastic algorithm.
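The size argument can be made concrete with a small counting sketch; here a restriction to contiguous windows stands in for the critical-path-based selection of promising parts described above (the window rule is an illustrative simplification, not the paper's exact criterion):

```python
import math

def full_k_exchange_size(n, k):
    # Choose k of the n positions and rearrange them (excluding the identity
    # arrangement): C(n, k) * (k! - 1), which grows as O(n^k) for fixed k.
    return math.comb(n, k) * (math.factorial(k) - 1)

def windowed_k_exchange_size(n, k):
    # Restricting rearrangement to contiguous windows of length k, in the
    # spirit of a decomposition-based large neighborhood, leaves only
    # (n - k + 1) * (k! - 1) candidates -- linear in n for fixed k.
    return (n - k + 1) * (math.factorial(k) - 1)
```

For n = 100 jobs and k = 4, the full neighborhood has about 90 million candidates, while the windowed one has just over two thousand.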
Second, the RJP, as a construction method, is able to successively rearrange jobs through informed revisions, which gives the local searches high-quality initial solutions. To further emphasize the role of such informed decisions in raising solution quality, we can point to similar algorithms, such as ant colony optimization, which are built on construction methods alone and yet can compete with local searches.
Third, varying the initial permutation by randomly swapping neighboring jobs, each with probability SwapProb, not only diversifies the search but also avoids losing the accumulated knowledge embodied in the best permutation initially obtained by the RJP. The point is not simply that random swaps diversify the permutation produced by the RJP and allow different starting points to be generated. The deeper point is that, since the RJP does not care about the relative position of two neighboring jobs and mainly concentrates on grouping the jobs into two partitions, these random swaps, despite their contribution to diversification, cannot significantly degrade the quality of the original permutation proposed by the RJP.
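A minimal sketch of this diversification step, assuming each adjacent pair is swapped independently with probability SwapProb (the function name and the exact perturbation rule are illustrative; the RDIS may apply the rule differently):

```python
import random

def perturb(perm, swap_prob, rng=random):
    """Diversify a seed permutation by swapping each pair of neighboring
    jobs with probability swap_prob. Because only adjacent jobs move, the
    partition structure built by the construction phase is largely kept."""
    perm = list(perm)
    for i in range(len(perm) - 1):
        if rng.random() < swap_prob:
            perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm
```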
The RDIS can be improved in several directions as outlined in the following:
  • Trivial as well as sophisticated implementation of parallelization techniques;
  • Efficient estimation, instead of exact evaluation, of the makespan; and
  • Integration with population-based techniques.
For enhancing the algorithm through parallelization, this concept can be employed in a range from a trivial implementation to a fully integrated structure. In the trivial implementation, several threads of the same procedure start on a number of processors with different initial solutions.
In its sophisticated structure, however, parallelization can be employed for the entire algorithm as an integrated entity. In this case, two groups of threads need to be developed. In the first group, each thread can calculate the makespan of one neighbor on a separate processor, regardless of whether a small or large neighborhood is in place.
On the other hand, in the second group, each thread can be assigned to solving a two-machine problem. Since nearly the entire time of local searches is spent on calculating the values of makespan and the entire time of the RJP is spent on solving the two-machine problems, this form of parallelization speeds up the algorithm and enables the enhanced procedure to tackle larger instances more efficiently.
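The first group of threads can be sketched as below, assuming a simple adjacent-swap neighborhood and a thread pool (the helper names are hypothetical; note that in CPython, thread-based parallelism of pure-Python code is limited by the GIL, so in practice a process pool or a compiled makespan kernel would be used):

```python
from concurrent.futures import ThreadPoolExecutor

def makespan(perm, p):
    """Exact PFSP makespan for job order perm; p[j][k] is the processing
    time of job j on machine k (standard completion-time recursion)."""
    m = len(p[0])
    c = [0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def best_adjacent_swap(perm, p, workers=4):
    """Evaluate all adjacent-swap neighbors in parallel; return the best."""
    neighbors = []
    for i in range(len(perm) - 1):
        nb = list(perm)
        nb[i], nb[i + 1] = nb[i + 1], nb[i]
        neighbors.append(nb)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        values = list(pool.map(lambda nb: makespan(nb, p), neighbors))
    best = min(range(len(neighbors)), key=values.__getitem__)
    return neighbors[best], values[best]
```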
The other promising direction for further enhancing the RDIS is the development of novel evaluation functions. Considering that a large percentage of computation time in a typical local search is spent on calculating exact makespan values, novel evaluation functions that can efficiently estimate such values with high precision would reduce the search effort and improve search efficiency. With this ease of calculation, obtained at the cost of negligible precision, the local search can probe larger parts of the search space and, if conducted properly, can obtain high-quality solutions.
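One way to picture this trade-off is an evaluation routine that computes either the exact makespan or a cheaper, optimistic estimate obtained by skipping some machines. This surrogate is purely illustrative (as is the data) and is not the estimation scheme proposed here:

```python
def makespan(perm, p, machines=None):
    """Exact PFSP makespan when machines is None; passing a subset of machine
    indices skips the others, giving a cheaper optimistic estimate (a lower
    value, since dropped machines remove both processing time and waiting)."""
    ms = list(machines) if machines is not None else list(range(len(p[0])))
    c = [0] * len(ms)
    for j in perm:
        c[0] += p[j][ms[0]]
        for k in range(1, len(ms)):
            c[k] = max(c[k], c[k - 1]) + p[j][ms[k]]
    return c[-1]

perm = [0, 1, 2, 3]
p = [[2, 3, 2], [4, 1, 3], [3, 2, 2], [1, 4, 1]]  # made-up 4-job, 3-machine data
exact = makespan(perm, p)                         # uses all three machines
estimate = makespan(perm, p, machines=[0, 2])     # skips the middle machine
```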
The final suggestion for enhancing the RDIS is the development of a population-based mechanism that can work with the point-based local searches currently employed. Such a mechanism can govern how computation time is devoted to manipulating higher-quality solutions, thereby preventing lower-quality solutions from seizing computational resources. The proposed mechanism can also spread the characteristics of high-quality solutions to other individuals of the population. Since the genetic diversity among individuals declines as the population evolves, making the population homogeneous, the design of proper mutation operators exploiting the structure of the PFSP is of key importance in implementing such a population-based mechanism.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ahuja, R.; Jha, K.; Orlin, J.; Sharma, D. Very large-scale neighborhood search for the quadratic assignment problem. INFORMS J. Comput. 2007, 19, 646–657.
  2. Hansen, P.; Mladenović, N. Variable Neighborhood Search. In Handbook of Metaheuristics; Glover, F., Kochenberger, G.A., Eds.; Springer: Boston, MA, USA, 2003; pp. 145–184.
  3. Ahuja, R.K.; Ergun, Ö.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102.
  4. Johnson, S. Optimal two and three machine production scheduling with set up times included. Nav. Res. Logist. 1954, 1, 61–68.
  5. Garey, M.; Johnson, D.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129.
  6. Jackson, J. An extension of Johnson’s results on job lot scheduling. Nav. Res. Logist. Q. 1956, 3, 201–203.
  7. Burns, F.; Rooker, J. Technical Note—Three-Stage Flow-Shops with Recessive Second Stage. Oper. Res. 1978, 26, 207–208.
  8. Baioletti, M.; Milani, A.; Santucci, V. MOEA/DEP: An Algebraic Decomposition-Based Evolutionary Algorithm for the Multiobjective Permutation Flowshop Scheduling Problem. In Evolutionary Computation in Combinatorial Optimization; Liefooghe, A., López-Ibáñez, M., Eds.; Springer International Publishing: Cham, Switzerland, 2018; pp. 132–145.
  9. Shao, Z.; Shao, W.; Pi, D. Effective heuristics and metaheuristics for the distributed fuzzy blocking flow-shop scheduling problem. Swarm Evol. Comput. 2020, 59, 100747.
  10. Caldeira, R.H.; Gnanavelbabu, A.; Vaidyanathan, T. An effective backtracking search algorithm for multi-objective flexible job shop scheduling considering new job arrivals and energy consumption. Comput. Ind. Eng. 2020, 149, 106863.
  11. Rinnooy Kan, A.H.G. Machine Scheduling Problems: Classification, Complexity and Computation; Martinus Nijhoff: The Hague, The Netherlands, 1976.
  12. Röck, H. The three-machine no-wait flow shop is NP-complete. JACM 1984, 31, 336–345.
  13. Naderi, B.; Ruiz, R. The distributed permutation flowshop scheduling problem. Comput. Oper. Res. 2010, 37, 754–768.
  14. Wang, G.; Gao, L.; Li, X.; Li, P.; Tasgetiren, M.F. Energy-efficient distributed permutation flow shop scheduling problem using a multi-objective whale swarm algorithm. Swarm Evol. Comput. 2020, 57, 100716.
  15. Dudek, R.; Panwalkar, S.; Smith, M. The lessons of flowshop scheduling research. Oper. Res. 1992, 40, 7–13.
  16. Blazewicz, J.; Domschke, W.; Pesch, E. The job shop scheduling problem: Conventional and new solution techniques. Eur. J. Oper. Res. 1996, 93, 1–33.
  17. Framinan, J.M.; Gupta, J.N.D.; Leisten, R. A review and classification of heuristics for permutation flow-shop scheduling with makespan objective. J. Oper. Res. Soc. 2004, 55, 1243–1255.
  18. Palmer, D. Sequencing jobs through a multi-stage process in the minimum total time—A quick method of obtaining a near optimum. J. Oper. Res. Soc. 1965, 16, 101–107.
  19. Bonney, M.; Gundry, S. Solutions to the constrained flowshop sequencing problem. Oper. Res. Q. 1976, 27, 869–883.
  20. Pajak, Z. The slope-index technique for scheduling in machine shop. In Current Production Scheduling Practice Conference; Tate, T., Ed.; University of London: London, UK, 1971.
  21. Gupta, J.N. A general algorithm for m*n flowshop scheduling problem. Int. J. Prod. Res. 1969, 7, 241.
  22. Campbell, H.; Dudek, R.; Smith, M. A heuristic algorithm for the n job, m machine sequencing problem. Manag. Sci. 1970, 16, 630–637.
  23. Turner, S.; Booth, D. Comparison of heuristics for flow shop sequencing. Omega 1987, 15, 75–78.
  24. Widmer, M.; Hertz, A. A new heuristic method for the flow shop sequencing problem. Eur. J. Oper. Res. 1989, 41, 186–193.
  25. Moccellin, J.V. A New Heuristic Method for the Permutation Flow Shop Scheduling Problem. J. Oper. Res. Soc. 1995, 46, 883–886.
  26. Nawaz, M.; Enscore, E.; Ham, I. A heuristic algorithm for the m-machine n-job flow-shop sequencing problem. Omega 1983, 11, 91–95.
  27. Liu, W.; Jin, Y.; Price, M. A new improved NEH heuristic for permutation flowshop scheduling problems. Int. J. Prod. Econ. 2017, 193, 21–30.
  28. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049.
  29. Ogbu, F.; Smith, D. The application of the simulated annealing algorithm to the solution of the n/m/Cmax flowshop problem. Comput. Oper. Res. 1990, 17, 243–253.
  30. Taillard, E. Some efficient heuristic methods for the flow shop sequencing problem. Eur. J. Oper. Res. 1990, 47, 65–74.
  31. Nowicki, E.; Smutnicki, C. A fast tabu search algorithm for the permutation flow-shop problem. Eur. J. Oper. Res. 1996, 91, 160–175.
  32. Reeves, C. A genetic algorithm for flowshop sequencing. Comput. Oper. Res. 1995, 22, 5–13.
  33. Nowicki, E.; Smutnicki, C. Some aspects of scatter search in the flow-shop problem. Eur. J. Oper. Res. 2006, 169, 654–666.
  34. Iyer, S.; Saxena, B. Improved genetic algorithm for the permutation flowshop scheduling problem. Comput. Oper. Res. 2004, 31, 593–606.
  35. Yagmahan, B.; Yenisey, M. A multi-objective ant colony system algorithm for flow shop scheduling problem. Expert Syst. Appl. 2010, 37, 1361–1368.
  36. Lian, Z.; Gu, X.; Jiao, B. A novel particle swarm optimization algorithm for permutation flow-shop scheduling to minimize makespan. Chaos Solitons Fractals 2008, 35, 851–861.
  37. Liu, Y.F.; Liu, S.Y. A hybrid discrete artificial bee colony algorithm for permutation flowshop scheduling problem. Appl. Soft Comput. 2013, 13, 1459–1463.
  38. Santucci, V.; Baioletti, M.; Milani, A. Solving permutation flowshop scheduling problems with a discrete differential evolution algorithm. AI Commun. 2016, 29, 269–286.
  39. Amirghasemi, M.; Zamani, R. An Effective Evolutionary Hybrid for Solving the Permutation Flowshop Scheduling Problem. Evol. Comput. 2017, 25, 87–111.
  40. Fernandez-Viagas, V.; Ruiz, R.; Framinan, J.M. A new vision of approximate methods for the permutation flowshop to minimise makespan: State-of-the-art and computational evaluation. Eur. J. Oper. Res. 2017, 257, 707–721.
  41. Nowicki, E.; Smutnicki, C. A fast taboo search algorithm for the job shop problem. Manag. Sci. 1996, 42, 797–813.
  42. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285.
  43. Zobolas, G.; Tarantilis, C.D.; Ioannou, G. Minimizing makespan in permutation flow shop scheduling problems using a hybrid metaheuristic algorithm. Comput. Oper. Res. 2009, 36, 1249–1267.
  44. Tasgetiren, M.F.; Liang, Y.C.; Sevkli, M.; Gencyilmaz, G. A particle swarm optimization algorithm for makespan and total flowtime minimization in the permutation flowshop sequencing problem. Eur. J. Oper. Res. 2007, 177, 1930–1947.
  45. Osman, I.H.; Potts, C.N. Simulated annealing for permutation flow-shop scheduling. Omega 1989, 17, 551–557.
  46. Rajendran, C.; Ziegler, H. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. Eur. J. Oper. Res. 2004, 155, 426–438.
  47. Stützle, T. Applying Iterated Local Search to the Permutation Flow Shop Problem; Technical Report AIDA-98-04; FG Intellektik, TU Darmstadt: Darmstadt, Germany, 1998.
  48. Ruiz, R.; Maroto, C.; Alcaraz, J. Two new robust genetic algorithms for the flowshop scheduling problem. Omega 2006, 34, 461–476.
  49. Derrac, J.; García, S.; Molina, D.; Herrera, F. A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 2011, 1, 3–18.
Figure 1. A sample 4-job, 4-machine Permutation Flow-shop Scheduling Problem (PFSP) instance, with the job permutation (1,2,3,4) leading to the solution value of 30.
Figure 2. Changing the order of (1,2,3,4) to (1,4,3,2) in the sample problem and obtaining the optimal solution with the makespan of 29.
Figure 3. The c-type pseudocode of the Refining Decomposition-based Integrated Search (RDIS) algorithm.
Figure 4. The flowchart of the main steps of the RDIS algorithm.
Figure 5. The c-type pseudocode of largeNeighbourhoodSearch.
Figure 6. Changing the order of (1,2,3,4,5,6,7,8) to (3,1,2,6,4,8,7,5) in a sample 8 × 8 problem in 3 steps, with each step optimizing a chunk of size 4.
Figure 7. Solution value versus iteration in the benchmark instance ta001.
Figure 8. So-far-Best solution versus iteration in the benchmark instances ta001 (a) and ta012 (b).
Table 1. Comparison of average percent deviation with the best methods in the literature.

| Group | Instances | Size | SAOP | ILS | M-MMAS | PACO | HGA_RMA | PSOVNS | NEGAVNS | RAMP | RDIS |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ta001–ta010 | 20 × 5 | 1.05 | 0.33 | 0.04 | 0.18 | 0.04 | 0.03 | 0.00 | 0.00 | 0.00 |
| 2 | ta011–ta020 | 20 × 10 | 2.60 | 0.52 | 0.07 | 0.24 | 0.02 | 0.02 | 0.01 | 0.03 | 0.03 |
| 3 | ta021–ta030 | 20 × 20 | 2.06 | 0.28 | 0.06 | 0.18 | 0.05 | 0.05 | 0.02 | 0.04 | 0.04 |
| 4 | ta031–ta040 | 50 × 5 | 0.34 | 0.18 | 0.02 | 0.05 | 0.00 | 0.00 | 0.00 | 0.00 | 0.01 |
| 5 | ta041–ta050 | 50 × 10 | 3.50 | 1.45 | 1.08 | 0.81 | 0.72 | 0.57 | 0.82 | 0.37 | 0.95 |
| 6 | ta051–ta060 | 50 × 20 | 4.66 | 2.05 | 1.93 | 1.41 | 0.99 | 1.36 | 1.08 | 0.61 | 1.88 |
| 7 | ta061–ta070 | 100 × 5 | 0.30 | 0.16 | 0.02 | 0.02 | 0.01 | 0.00 | 0.00 | 0.00 | 0.01 |
| 8 | ta071–ta080 | 100 × 10 | 1.34 | 0.64 | 0.39 | 0.29 | 0.16 | 0.18 | 0.14 | 0.06 | 0.38 |
| 9 | ta081–ta090 | 100 × 20 | 4.49 | 2.42 | 2.42 | 1.93 | 1.30 | 1.45 | 1.40 | 1.76 | 2.35 |
| 10 | ta091–ta100 | 200 × 10 | 0.94 | 0.50 | 0.30 | 0.23 | 0.14 | 0.18 | 0.16 | 0.15 | 0.32 |
| 11 | ta101–ta110 | 200 × 20 | 3.67 | 2.07 | 2.15 | 1.82 | 1.26 | 1.35 | 1.25 | 2.00 | 2.30 |
| 12 | ta111–ta120 | 500 × 20 | 2.20 | 1.20 | 1.02 | 0.85 | 0.69 | — | 0.71 | 1.20 | 1.32 |
Table 2. Scaling factor computed for each CPU.

| Algorithm | Clock Speed (GHz) | Scaling Factor |
|---|---|---|
| RDIS | 2.2 | 1.00 |
| RAMP | 2.2 | 1.00 |
| NEGAVNS | 2.4 | 1.09 |
| HGA_RMA | 2.6 | 1.18 |
| ILS | 2.6 | 1.18 |
| SAOP | 2.6 | 1.18 |
| M-MMAS | 2.6 | 1.18 |
| PACO | 2.6 | 1.18 |
| PSOVNS | 2.8 | 1.27 |
Table 3. Comparison of maximum allowed running times for different metaheuristics.

| Group | Instances | Size | HGA_RMA Real | HGA_RMA Scaled | PSOVNS Real | PSOVNS Scaled | NEGAVNS Real | NEGAVNS Scaled | RDIS/RAMP Real | RDIS/RAMP Scaled |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | ta001–ta010 | 20 × 5 | 4.5 | 5.31 | 300 | 381 | 10 | 10.9 | 10 | 10 |
| 2 | ta011–ta020 | 20 × 10 | 9 | 10.62 | 300 | 381 | 20 | 21.8 | 20 | 20 |
| 3 | ta021–ta030 | 20 × 20 | 18 | 21.24 | 300 | 381 | 40 | 43.6 | 40 | 40 |
| 4 | ta031–ta040 | 50 × 5 | 11.3 | 13.334 | 300 | 381 | 25 | 27.25 | 25 | 25 |
| 5 | ta041–ta050 | 50 × 10 | 22.5 | 26.55 | 300 | 381 | 50 | 54.5 | 50 | 50 |
| 6 | ta051–ta060 | 50 × 20 | 45 | 53.1 | 300 | 381 | 100 | 109 | 100 | 100 |
| 7 | ta061–ta070 | 100 × 5 | 22.5 | 26.55 | 600 | 762 | 50 | 54.5 | 50 | 50 |
| 8 | ta071–ta080 | 100 × 10 | 45 | 53.1 | 600 | 762 | 100 | 109 | 100 | 100 |
| 9 | ta081–ta090 | 100 × 20 | 90 | 106.2 | 600 | 762 | 200 | 218 | 200 | 200 |
| 10 | ta091–ta100 | 200 × 10 | 90 | 106.2 | 600 | 762 | 200 | 218 | 200 | 200 |
| 11 | ta101–ta110 | 200 × 20 | 180 | 212.4 | 600 | 762 | 400 | 436 | 400 | 400 |
| 12 | ta111–ta120 | 500 × 20 | 450 | 531 | — | — | 1000 | 1090 | 1000 | 1000 |
Table 4. Comparison of average time needed to reach the best solution for PSOVNS, NEGAVNS, RAMP, and RDIS.

| Group | Instances | Size | PSOVNS Real | PSOVNS Scaled | NEGAVNS Real | NEGAVNS Scaled | RDIS/RAMP Real | RDIS/RAMP Scaled |
|---|---|---|---|---|---|---|---|---|
| 1 | ta001–ta010 | 20 × 5 | 13.5 | 17.145 | 2.2 | 2.4 | 0.0 | 0.0 |
| 2 | ta011–ta020 | 20 × 10 | 26.3 | 33.401 | 12.2 | 13.3 | 0.3 | 0.3 |
| 3 | ta021–ta030 | 20 × 20 | 69.3 | 88.011 | 29.2 | 31.8 | 13.5 | 13.5 |
| 4 | ta031–ta040 | 50 × 5 | 2.8 | 3.556 | 8.2 | 8.9 | 0.1 | 0.1 |
| 5 | ta041–ta050 | 50 × 10 | 79.8 | 101.346 | 32.3 | 35.2 | 19.9 | 19.9 |
| 6 | ta051–ta060 | 50 × 20 | 168.1 | 213.487 | 55 | 60.0 | 34.3 | 34.3 |
| 7 | ta061–ta070 | 100 × 5 | 52.6 | 66.802 | 30.8 | 33.6 | 1.5 | 1.5 |
| 8 | ta071–ta080 | 100 × 10 | 211 | 267.97 | 58.7 | 64.0 | 39.3 | 39.3 |
| 9 | ta081–ta090 | 100 × 20 | 310.8 | 394.716 | 122.7 | 133.7 | 75.0 | 75.0 |
| 10 | ta091–ta100 | 200 × 10 | 191.3 | 242.951 | 134.5 | 146.6 | 49.8 | 49.8 |
| 11 | ta101–ta110 | 200 × 20 | 438.7 | 557.149 | 271.7 | 296.2 | 170.0 | 170.0 |
| 12 | ta111–ta120 | 500 × 20 | — | — | 523.4 | 570.5 | 482.3 | 482.3 |
Table 5. NEGAVNS, RAMP, and RDIS ranked based on %DEVavg for each problem group.

| Group | Instances | NEGAVNS | RAMP | RDIS |
|---|---|---|---|---|
| 1 | ta001–ta010 | 2.00 | 2.00 | 2.00 |
| 2 | ta011–ta020 | 1.00 | 2.50 | 2.50 |
| 3 | ta021–ta030 | 1.00 | 2.50 | 2.50 |
| 4 | ta031–ta040 | 1.50 | 1.50 | 3.00 |
| 5 | ta041–ta050 | 2.00 | 1.00 | 3.00 |
| 6 | ta051–ta060 | 2.00 | 1.00 | 3.00 |
| 7 | ta061–ta070 | 1.50 | 1.50 | 3.00 |
| 8 | ta071–ta080 | 2.00 | 1.00 | 3.00 |
| 9 | ta081–ta090 | 1.00 | 2.00 | 3.00 |
| 10 | ta091–ta100 | 2.00 | 1.00 | 3.00 |
| 11 | ta101–ta110 | 1.00 | 2.00 | 3.00 |
| 12 | ta111–ta120 | 1.00 | 2.00 | 3.00 |
Table 6. Comparing the performance of the RDIS with that of NEGAVNS for all individual instances.
RDISNEGAVNS
InstancenmLBUBNEHBest%DEVavg Tbest Best%DEVavg Tbest
ta00120512781278128612780.00012780.001
ta00220513591359136513590.00013590.002
ta00320510811081115910810.00010810.002
ta00420512931293132512930.00012930.002
ta00520512351235130512350.00012350.002
ta00620511951195122811950.00011950.003
ta00720512391239127812390.00012390.003
ta00820512061206122312060.00012060.002
ta00920512301230129112300.00012300.001
ta01020511081108115111080.00011080.004
ta011201015821582168015820.00015820.0010
ta012201016591659172916590.00016590.029
ta013201014961496155714960.02214960.0012
ta014201013771377143913770.04013770.0517
ta015201014191419150214190.00014190.0011
ta016201013971397145313970.00013970.0015
ta017201014841484156214840.00014840.0011
ta018201015381538160915380.24015380.0010
ta019201015931593164715930.00015930.0013
ta020201015911591165315910.00115910.0014
ta021202022972297241022970.051422970.0026
ta022202020992099215020990.063620990.0133
ta023202023262326241123260.092423260.0232
ta024202022232223226222230.03222230.0022
ta025202022912291239722910.103422910.0421
ta026202022262226234922260.091822260.0335
ta027202022732273236222730.00222730.0736
ta028202022002200224922000.00322000.0025
ta029202022372237232022370.00022370.0023
ta030202021782178227721780.01121780.0239
ta03150527242724273327240.00027240.006
ta03250528342834284328360.09028340.005
ta03350526212621264026210.00026210.001
ta03450527512751278227510.00027510.0011
ta03550528632863286828630.01128630.008
ta03650528292829285028290.00028290.005
ta03750527252725275827250.00027250.007
ta03850526832683272126830.00026830.006
ta03950525522552257625520.00025520.0012
ta04050527822782279027820.00027820.0021
ta041501029912991313530251.342130211.0338
ta042501028672867303229111.67229021.2841
ta043501028392839298628711.25528711.1436
ta044501030633063319830640.09530700.2829
ta045501029762976316030051.182329980.8125
ta046501030063006317830120.682330240.6832
ta047501030933093327731261.234331220.9819
ta048501030373037312330420.282530630.9329
ta049501028972897300229050.43829140.6539
ta050501030653065325730991.374330760.4435
ta051502037713850408239171.927338740.7722
ta052502036683704392137571.777037341.0287
ta053502035913640392736992.261136881.3956
ta054502036353720396937811.86137591.1439
ta055502035533610383536732.08836441.0348
ta056502036673681391437161.356037171.0774
ta057502036723704395237691.833937280.7942
ta058502036273691393837592.254037301.1828
ta059502036453743395237901.76537791.1090
ta060502036963756407938091.723638011.3564
Table 7. Comparing the performance of the RDIS with that of NEGAVNS for all individual instances (continued).
RDISNEGAVNS
InstancenmLBUBNEHBest%DEVavg Tbest Best%DEVavg Tbest
ta06110055493549355195493 0.00054930.0034
ta062100552685268534852680.041352680.0026
ta063100551755175521951750.00051750.0036
ta064100550145014502350140.03150140.0033
ta065100552505250526652500.00052500.0012
ta066100551355135513951350.00051350.0042
ta067100552465246525952460.00052460.0050
ta068100550945094512050940.00050940.0031
ta069100554485448548954480.00054480.0025
ta070100553225322534153220.03153220.0019
ta0711001057705770584657790.211957700.0449
ta0721001053495349545353530.229953580.2378
ta0731001056765676582456790.05056760.0965
ta0741001057815781592958080.636557920.2322
ta0751001054675467567954830.627154670.0681
ta0761001053035303537553080.09153110.2072
ta0771001055955595570455990.08456050.2254
ta0781001056175617576056460.717256170.0564
ta0791001058715871603259180.905758770.1929
ta0801001058455845591858500.30558450.0973
ta0811002061066202654163692.852563031.6985
ta0821002061836183652363032.091462661.4575
ta0831002062526271663963852.02463511.32145
ta0841002062546269655763641.796363601.49129
ta0851002062626314669564632.643064081.57163
ta0861002063026364666464872.171664531.50108
ta0871002061846268663264192.5717463321.1094
ta0881002063156401673965512.8120164821.49112
ta0891002062046275667764022.358363431.15169
ta0901002064046434667765622.2214165061.26147
ta0912001010,86210,86210,94210,8850.253010,8850.2489
ta0922001010,48010,48010,71610,5030.457610,4950.19125
ta0932001010,92210,92211,02510,9650.4913510,9410.21169
ta0942001010,88910,88911,05710,8930.092710,8890.04158
ta0952001010,52410,52410,64510,5280.113010,5240.03192
ta0962001010,32910,32910,45810,3370.163110,3460.2191
ta0972001010,85410,85410,98910,8830.432210,8660.17124
ta0982001010,73010,73010,82910,7690.441210,7410.15112
ta0992001010,43810,43810,57410,4650.26910,4510.19138
ta1002001010,67510,67510,80710,7220.4812510,6840.14147
ta1012002011,15211,19511,59411,3992.0522411,3391.52222
ta1022002011,14311,20311,67511,4822.7330211,3441.47268
ta1032002011,28111,28111,85211,5352.5216511,4451.45385
ta1042002011,27511,27511,80311,5012.2711811,4341.49154
ta1052002011,25911,25911,68511,4491.8113011,3691.06300
ta1062002011,17611,17611,62911,4132.4014811,2921.01254
ta1072002011,33711,36011,83311,5552.0911411,4811.11269
ta1082002011,30111,33411,91311,5542.124911,4421.03311
ta1092002011,14511,19211,67311,4732.6434211,3131.22326
ta1102002011,28411,28811,86911,5172.3410911,4241.14228
ta1115002026,04026,05926,67026,4481.6034426,2280.73311
ta1125002026,50026,52027,23226,9381.7045326,6880.77552
ta1135002026,37126,37126,84826,7301.4734826,5220.71448
ta1145002026,45626,45627,05526,7211.1660026,5860.54269
ta1155002026,33426,33426,72726,6061.1422026,5410.82396
ta1165002026,46926,47726,99226,7291.1428426,5820.49682
ta1175002026,38926,38926,79726,6431.1075926,6601.12559
ta1185002026,56026,56027,13826,8581.3297026,7110.61814
ta1195002026,00526,00526,63126,3481.493626,1480.61592
ta1205002026,45726,45726,98426,6981.0880826,6110.67611
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
