Article

Minimizing the Late Work of the Flow Shop Scheduling Problem with a Deep Reinforcement Learning Based Approach

Department of Software, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(5), 2366; https://doi.org/10.3390/app12052366
Submission received: 10 January 2022 / Revised: 4 February 2022 / Accepted: 22 February 2022 / Published: 24 February 2022
(This article belongs to the Special Issue Learning Based Methods for Industrial Applications)

Abstract

In the field of industrial manufacturing, assembly line production is the most common production process, and it can be modeled as a permutation flow shop scheduling problem (PFSP). Minimizing the late work criterion (the parts of jobs that remain unprocessed after their due dates arrive) in production planning can effectively reduce production costs and allow for faster product delivery. In this article, a novel learning-based approach is proposed to minimize the late work of the PFSP using deep reinforcement learning (DRL) and a graph isomorphism network (GIN), an innovative combination of combinatorial optimization and deep learning. The problem considered is the well-known permutation flow shop problem in which each job additionally carries a release date constraint. In this work, the PFSP is defined as a Markov decision process (MDP) that can be solved by reinforcement learning (RL). A complete graph is introduced to describe a PFSP instance. The proposed policy network combines the graph representation of the PFSP with the sequence information of the jobs to predict a distribution over candidate jobs, and it is invoked repeatedly until a complete sequence is obtained. To further improve the quality of the solutions obtained by reinforcement learning, an improved iterated greedy (IG) algorithm is proposed to search the solutions locally. The experimental results show that the proposed RL method and the combined RL + IG method obtain better solutions than other excellent heuristic and meta-heuristic algorithms in a short time.

1. Introduction

The flow shop scheduling problem (FSSP) plays an important role in manufacturing systems. Optimizing multiple criteria of the FSSP can help reduce manufacturing costs and improve the manufacturing efficiency of enterprises. In recent years, many variants of the FSSP have emerged and many methods have been proposed to optimize their criteria. The permutation flow shop scheduling problem (PFSP) is a classical form of the FSSP, first introduced and formulated by Johnson [1]. The problem with three or more machines has been shown to be NP-hard [2]. The goal of this problem is to schedule operations on the machines so as to optimize one or more performance criteria, such as minimizing the makespan, mean tardiness, total late work, or total flow time of all jobs. A rational scheduling algorithm can not only improve the efficiency and performance of the system but also reduce machine costs. Over the past decades, various exact and heuristic algorithms for solving scheduling problems have been suggested [3]. Researchers have generalized the PFSP into multiple variants to simulate real production scenarios, such as the no-wait flow shop, blocking flow shop, no-idle flow shop, and energy-efficient flow shop.
To solve problems of different scales, many effective exact methods, heuristics, and meta-heuristic algorithms have been proposed to optimize various criteria of the PFSP. Exact methods based on enumeration with an integer programming formulation are usually employed to find the optimal solution of the PFSP. When dealing with large-scale problems, however, the solution space grows rapidly and exact algorithms suffer from combinatorial explosion, so the computation time is usually unacceptable. The most commonly applied approaches for large-scale problems are heuristics such as NEH [4] and CDS [5], which are capable of real-time decision making. However, handcrafted heuristic algorithms only consider limited information, which leads to unstable performance. Meta-heuristic algorithms such as the genetic algorithm (GA) [6], the tabu search algorithm (TS) [7], and the particle swarm optimization algorithm (PSO) [8,9] are a class of problem-independent algorithmic frameworks. The performance of these algorithms depends on adequate parameter tuning and good initial solutions. Moreover, they converge slowly on problems with high computational complexity. With the development of deep learning, reinforcement learning, and high-performance search algorithms, methods applying these techniques have achieved better performance than traditional algorithms on some complex combinatorial optimization problems.
The late work criterion is important in production planning from the perspective of both the customer and the manager. Customers are often concerned about the completion of their orders, because tasks that are not completed by the due dates need to be processed additionally. Managers are also interested in minimizing late work, as delays may cause financial losses. The term late work was first introduced in [10], where it is defined as the amount of tardy job units and denoted by the symbol Y. Blazewicz et al. describe the difference between late work and other performance criteria such as makespan, tardiness, and lateness, and prove that problems with a late work objective function are at least as difficult as problems with a maximum delay criterion [11]. Since then, many exact and heuristic algorithms have been proposed for single machine [12], parallel machine [13], and dedicated machine problems [14] with late work objective functions. The relevance of minimizing late work has been demonstrated in many fields such as chip manufacturing [15], computer integrated manufacturing (CIM) [14], supply chain management [16], and software development processes [17]. The flow shop problem with the objective of minimizing late work was first studied in [18] and solved using a genetic algorithm. Gerstl et al. studied the properties of the problem and extended it to the proportionate shop problem [19], but no high-performance optimization methods for late work have been proposed in recent years.
The FSSP is a branch of combinatorial optimization problems, and many studies applying reinforcement learning to combinatorial optimization have appeared in recent years. Many combinatorial optimization problems can be transformed into multi-stage decision-making problems, in which a sequence of decisions needs to be made to maximize or minimize the objective function. Therefore, some researchers have proposed RL-based agents suited to these problems [20,21,22], and these methods can match or surpass existing algorithms. The deep reinforcement learning (DRL) methods for these problems can broadly be divided into two categories: those that construct the solution in an end-to-end way, and those that improve on existing feasible solutions.
In construction-based methods, many studies [20,21,23] are based on the pointer network (PtrNet) [24]. The PtrNet handles variable-size output dictionaries with a sequence-to-sequence model. In [20], the PtrNet was trained with an Actor-Critic algorithm to obtain a distribution over all nodes and was used to solve problems such as the knapsack problem. In [25], a simplified version of the PtrNet is presented that can handle routing problems in both static and dynamic environments: at each time step, the embeddings of the static elements are fed into the RNN decoder, the RNN output together with the embeddings of the dynamic elements is passed to an attention mechanism, and the decision is made from the resulting distribution over available destinations.
Different from construction heuristics, some studies [26,27,28] focus on improving existing solutions. Chen et al. designed a model called NeuRewriter and applied it to several domains [26]: first, a region is selected by a region-selection policy; then a rewrite rule is obtained by a rule-selection policy; the locally rewritten solution replaces the original one to obtain an improved solution, and the process is repeated until convergence. Another work [28] used only one policy network to select a solution within a neighborhood structured by pairwise local operators, surpassing [26] in both solution quality and generalization capability on routing problems.
Most of these approaches use graph-independent sequence-to-sequence mappings and do not make full use of the graph structure of graph-based problems. To exploit this structure, graph embeddings and graph neural networks (GNNs) have been introduced for graph-based combinatorial optimization problems [29,30]; they can take into account nodes, edges, and their accompanying labels, attributes, text, and other information in a network, enabling better use of the network structure for modeling and reasoning.
The traditional methods mentioned above struggle to achieve a trade-off between computation time and solution quality when solving the PFSP. In shop scheduling and, more generally, combinatorial optimization problems, the optimization objective is usually to minimize some criterion such as total cost, total completion time, distance traveled, or late work; the smaller the criterion value, the better the current solution. Construction-based reinforcement learning methods can obtain good solutions in a short time, but their solution quality generally cannot exceed that of meta-heuristic methods, while improvement-based methods require hand-crafted features and are difficult to train. This article proposes an end-to-end reinforcement learning method and an improved iterated greedy method to minimize the late work of the PFSP. The contributions of this paper include the following three aspects.
(1)
The proposed approach generates high-quality solutions using an end-to-end architecture based on reinforcement learning. The models can be trained without expert knowledge and labeled data, and the trained models can automatically extract features from the problem.
(2)
The PFSP is innovatively regarded as a complete graph. Two multi-layer GINs are used to encode the constraint features and the processing time features of the PFSP. The GINs efficiently aggregate each node's own features with those of its neighbors to obtain a contextual representation of each node.
(3)
An improved iterative greedy method is proposed. The RL model is able to obtain high-quality initial solutions in a short time and the IG method is used to improve the initial solutions. Experimental results show that the RL + IG method surpasses many excellent heuristic and meta-heuristic algorithms.
The rest of this article is organized as follows. Section 2 first describes the PFSP of minimizing the late work objective and models it as a sequential decision process, then describes the deep reinforcement learning architecture used and the training method of the model to generate an initial solution to the problem. Section 3 proposes a hybrid iterative greedy algorithm to further improve the generated initial solution. Section 4 illustrates the experimental setup of this article and shows the results of comparing the proposed algorithm with other methods. Section 5 concludes the article and presents several directions for future work.

2. Generate Initial Solutions to PFSP Using a Deep Reinforcement Learning Method

2.1. The Formulations of PFSP

The PFSP consists of m machines M = {M_1, M_2, ..., M_m} and n jobs J = {J_1, J_2, ..., J_n}. The n jobs are to be scheduled on the m machines in the same technological order. The schedule must satisfy the following assumptions: the jobs must be processed in the same order on each machine, each job must be processed on all machines, a machine can process only one job at a time, and a job can be processed only after its release date. Other specific assumptions for the problem can be found in [31,32]. The symbols and definitions used to formulate the PFSP as an RL problem are described below. The processing times of a job's operations on the machines are used as the feature vector of that job, x_i = {r_i, d_i, o_i^s, o_{i1}, ..., o_{im}}, where x_i is the feature vector of the i-th job, o_{im} is the processing time of the operation of the i-th job on the m-th machine, o_i^s = Σ_{k=1}^{m} o_{ik}, and r_i and d_i are the release date and due date of job i, respectively. The late work can be calculated using the following formulas:
$$C(\pi(1), i) = r_{\pi(1)} + \sum_{p=1}^{i} o_{\pi(1)p}, \qquad i = 1, 2, \ldots, m \tag{1}$$
$$C(\pi(k), 0) = r_{\pi(k)}, \qquad k = 1, 2, \ldots, n \tag{2}$$
$$C(\pi(k), i) = \max\{C(\pi(k), i-1),\, C(\pi(k-1), i)\} + o_{\pi(k)i}, \qquad k = 2, 3, \ldots, n;\; i = 1, 2, \ldots, m \tag{3}$$
$$Y_{\pi(k)} = \sum_{p=1}^{m} \min\{\max\{C(\pi(k), m) - d_{\pi(k)},\, 0\},\, o_{\pi(k)p}\} \tag{4}$$
$$\text{Objective function:}\quad Y = \min\Big\{\sum_{k=1}^{n} Y_{\pi(k)}\Big\} \tag{5}$$
where π = {π(1), ..., π(n)} is the scheduling order of the jobs, C(π(k), i) represents the completion time of job J_{π(k)} on machine M_i, and C(π(k), 0) is the completion time of J_{π(k)} on a virtual machine 0, which corresponds to the release time of the job. Equation (1) gives the completion times of the first scheduled job on consecutive machines. Equation (2) indicates that a job can only be processed after its release date has arrived. Equation (3) is the recursive calculation of the completion times. Equation (4) shows how the late work of each job is calculated over its operations. Late work refers to the parts of tasks that remain unprocessed after the due date has arrived. The objective function of this problem is Y = min{Σ_{k=1}^{n} Y_{π(k)}}, i.e., minimizing the total late work over all operations of all jobs. In the three-field notation, the problem can be expressed as F|r_i|Y [18]. To illustrate the PFSP and the late work objective more clearly, Table 1 gives an example of the problem. Figure 1 shows the Gantt chart for the scheduling sequence 1-2-3, where the shaded parts represent the late work after the due dates have arrived: Y_1 = 0, Y_2 = 5, Y_3 = 2. A total of three operations are late, and the total late work objective value is 7.
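For concreteness, the recursion in Equations (1)-(4) can be computed as in the following Python sketch. It is illustrative only (not the authors' code); variable names are chosen freely, and Equation (4) is applied exactly as written above, using the job's final completion time.

```python
def total_late_work(perm, proc, release, due):
    """Total late work Y of a scheduling sequence (Equations (1)-(5)).

    perm    : scheduling sequence, e.g. [0, 1, 2] (0-based job indices)
    proc    : proc[j][i] = processing time of job j on machine i
    release : release dates r_j
    due     : due dates d_j
    """
    m = len(proc[0])
    prev_row = None                 # completion times of the previously scheduled job
    total = 0
    for job in perm:
        row = [release[job]]        # Equation (2): completion on the virtual machine 0
        for i in range(1, m + 1):
            ready = max(row[i - 1], prev_row[i] if prev_row else 0)
            row.append(ready + proc[job][i - 1])      # Equations (1) and (3)
        # Equation (4): per-operation late work, truncated at each operation's length
        tardiness = max(row[m] - due[job], 0)
        total += sum(min(tardiness, proc[job][p]) for p in range(m))
        prev_row = row
    return total                    # Equation (5): the objective to be minimized
```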
Since the scheduling order of the jobs on each machine must be the same in the PFSP, the scheduling result can be abbreviated as a sequence of jobs. A job whose position has been determined is called a scheduled job, and a job whose position has not yet been determined is called an unscheduled job. Similar to the traveling salesman problem (TSP), unscheduled jobs (nodes) are appended one by one until all jobs are scheduled. In other words, given a set of jobs represented as a sequence of n feature vectors s = {x_i}_{i=1}^{n}, the goal is to find a permutation π of all jobs that minimizes the late work. The late work of the job scheduling sequence determined by a permutation π is accumulated in a chain fashion as:
$$L(\pi \mid s) = \sum_{i=1}^{n} \left( Y_{\pi(i)} - Y_{\pi(i-1)} \right) \tag{6}$$
According to the chain rule, the probability of a job scheduling sequence can be factorized into the following form:
$$p(\pi \mid s) = \prod_{i=1}^{n} p\big(\pi(i) \mid \pi(<i), s\big) \tag{7}$$
where π(i) denotes the job selected at the i-th time step, π(0) is null since it corresponds to the initial state, and 1 ≤ i ≤ n. In summary, the flow shop problem is modeled as a sequential decision-making problem, in which each decision step outputs a probability distribution over the candidate jobs. This distribution is obtained from the output of a policy network; in this study, a neural network is employed as the policy model to parameterize p(π|s).

2.2. Policy Network Architecture

The PFSP can be viewed as a sequence-to-sequence problem [33], where the input is a random sequence of all jobs and the output is a carefully constructed sequence that makes the late work as small as possible. This is similar to the machine translation task, in which the input and output are sequences of word vectors. The encoder encodes the input sequence into an intermediate vector, and the decoder then decodes this intermediate vector step by step to generate the output sequence. Since encoding and decoding a sequence must account for the order of its elements, long short-term memory (LSTM) networks are generally used as encoders and decoders. The structure of the encoder-decoder model is shown in Figure 2. During decoding, the output at a given time step may depend on specific parts of the input sequence rather than the whole sequence, so attention mechanisms are introduced to learn which parts of the input are most valuable, which in turn improves the effectiveness of the model. The attention mechanism aggregates the decoder's hidden vector at the current time step with the encoder's encoding of each job, and the output of the next time step is obtained by a weighted aggregation of the encoded vectors of all jobs.
The encoder-decoder network based on the attention mechanism has performed well on the sequence-to-sequence problem. However, directly applying it to solve the PFSP problem suffers from the following two issues.
(1)
The output of the PFSP is heavily dependent on the input: the output of each step is one of the inputs, unlike the machine translation problem, where the output vocabulary is completely different from the input.
(2)
In the PFSP, it is inaccurate to treat the model input as an ordered sequence of jobs. In fact, the order of the jobs in the input should have no effect on the output of the model. However, the encoder is an LSTM, so the input jobs are entered and encoded one by one, which introduces unnecessary positional relationships.
Therefore, this study designs a policy network based on the idea of the PtrNet: the weights produced by the attention mechanism are turned directly into a probability distribution by a softmax layer, and this distribution points to elements of the input sequence, which solves the first issue. The network structure of the PtrNet is shown in Figure 3. To address the second issue, the network structure is redesigned by incorporating a graph neural network. The improved network structure is shown in Figure 4; it consists of two encoders (a job encoder and a graph encoder) and a decoder with an attention mechanism. The two encoders encode the input information into intermediate vectors, which are then decoded by the decoder into a probability distribution over the input elements.

2.2.1. Job Encoder

For the job encoder, two linear transformations are used to embed each job feature vector x_i = {x_i^c, x_i^p}, where x_i^c = {r_i, d_i, o_i^s} and x_i^p = {o_{i1}, ..., o_{im}}. The vector x_i^c contains constraint attributes of each job, such as the release date, due date, and total processing time, which help optimize the schedule from a macroscopic point of view; x_i^p consists of the processing times of the job's operations, which help to fine-tune the scheduling sequence. The two embedded vectors x̃_i^c and x̃_i^p are concatenated into a higher-dimensional vector x̃_i ∈ ℝ^d, where d can be 128, 256, etc. The weights of the two linear transformations are shared among all jobs. Extending the feature vector in this way combines all the operations of a job and allows the network to learn combinatorial relationships among them. Since x_i^c and x_i^p are of different orders of magnitude, they are normalized separately to avoid unbalanced feature values. The job encoder is an LSTM; the vector x̃_i of the job selected at the current time step is encoded by the job encoder. The hidden state output by the LSTM is fed into the decoder and then back into the job encoder at the next time step. More specifically, the job encoder encodes the currently known (partial) scheduling sequence and outputs a hidden vector x̃_i^h for the current time step. x̃_i^h is input to the decoder to determine which job should be selected next. The vector x̃_j of the selected job j is then input to the encoder along with x̃_i^h to obtain x̃_j^h at the next time step. This cycle continues until the complete sequence is obtained.
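A minimal sketch of the job encoder is given below, assuming PyTorch (the article only states that the code is written in Python); the embedding dimension and the split of x_i into x_i^c and x_i^p follow the description above, and all names are illustrative.

```python
import torch
import torch.nn as nn

class JobEncoder(nn.Module):
    """Embeds one selected job and updates the LSTM state over the partial schedule."""

    def __init__(self, m, embed_dim=128):
        super().__init__()
        half = embed_dim // 2
        self.embed_c = nn.Linear(3, half)      # x_i^c = (r_i, d_i, o_i^s), weights shared over jobs
        self.embed_p = nn.Linear(m, half)      # x_i^p = (o_i1, ..., o_im), weights shared over jobs
        self.lstm = nn.LSTMCell(embed_dim, embed_dim)

    def forward(self, x_c, x_p, state=None):
        """x_c: (batch, 3), x_p: (batch, m), state: (h, c) of the LSTM or None at the first step."""
        x_tilde = torch.cat([self.embed_c(x_c), self.embed_p(x_p)], dim=-1)
        h, c = self.lstm(x_tilde, state)       # h is fed to the decoder at this time step
        return h, (h, c)
```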

2.2.2. Graph Encoder

A complete graph G = ( V ,   E ) is introduced to describe the PFSP, as shown in Figure 5. V = { x 1 , ... , x n } is the set of nodes and E is the set of edges. All elements of the adjacency matrix are equal to one because any two jobs are related to each other, which means that any job can point to all other jobs.
Originally proposed by [34], GNNs extend existing neural network models to process data with graph or topological structure, and they have achieved good performance in graph node classification, regression problems, and other fields. In the proposed network structure, job context information is obtained by encoding all job nodes with a graph isomorphism network (GIN) [35,36]. The GIN learns the contextual relations between one job and the other jobs to obtain a high-dimensional embedded representation of each job. The graph encoder learns how messages are passed between jobs; that is, it updates the representation (feature vector) of each job using information from the entire problem. Each layer of the improved GIN network is expressed as:
$$x_i^{(l)} = \gamma^{(l)} \omega^{(l)} x_i^{(l-1)} + \left(1 - \gamma^{(l)}\right) \varphi^{(l)}\!\left( \frac{1}{|Adj(i)|} \big\{ x_j^{(l-1)} \big\}_{j \in Adj(i) \cup \{i\}} \right), \quad l \in \{1, \ldots, L\}; \qquad x_i^{(0)} = x_i \tag{8}$$
where x_i^{(l)} ∈ ℝ^{d_l} is the graph-encoded vector of job i at the l-th layer, l ∈ {1, ..., L}; the eigenvalues of the weight matrix ω^{(l)} ∈ ℝ^{d_{l-1}×d_l} are regularized using a trainable variable γ^{(l)}; Adj(i) denotes the set of neighboring nodes of node i; and the information between two layers is aggregated by the function φ^{(l)}: ℝ^{d_{l-1}} → ℝ^{d_l} [37]. The second part of Equation (8) defines the input layer of the GIN, where each node receives its feature vector (the basic information about the job). The first part indicates that in the other layers each node updates its own features by aggregating them with those of its neighboring nodes; the aggregation function and the weight matrix are trainable. The aggregation function is implemented as a neural network that passes information from nodes in the lower layer of the GIN to the next layer. The deeper the layer, the larger the range of neighboring nodes that is aggregated. The aggregation mechanism for a node vector in the GIN is shown schematically in Figure 6, which illustrates the aggregation and update process for L = 2: the information of the one-hop and two-hop neighbors of job 1 is aggregated. Each update of a node combines the node's own information with that of its neighbors, and both the aggregation function and the weights of the summation are trainable parameters.
Since the PFSP graph structure is defined as a complete graph, each layer in the GIN can in turn be represented as:
$$X^{(l)} = \gamma^{(l)} \omega^{(l)} X^{(l-1)} + \left(1 - \gamma^{(l)}\right) \varphi^{(l)}\!\left( \frac{X^{(l-1)}}{|Adj(i)|} \right) \tag{9}$$
where X^{(l)} ∈ ℝ^{n×d_l}, φ^{(l)}: ℝ^{n×d_{l-1}} → ℝ^{n×d_l}, and n is the total number of jobs. In the actual network, a fully connected layer is used to play the role of the aggregation function; X is replaced by X̃^c and X̃^p obtained from the linear transformations described for the job encoder, and X̃^c and X̃^p are fed to two separately trained graph encoders. The graph encoders actually used are expressed as:
$$\tilde{X}^{c/p,(l)} = \gamma\, \omega \cdot \tilde{X}^{c/p,(l-1)} + (1 - \gamma)\, \mathrm{ReLU}\!\left( \tilde{X}^{c/p,(l-1)} W + b \right) \tag{10}$$
$$\tilde{x}_i^{(L)} = \mathrm{concat}\!\left( \tilde{x}_i^{c,(L)},\, \tilde{x}_i^{p,(L)} \right) \tag{11}$$
where ReLU is the activation function, W ∈ ℝ^{d_{l-1}×d_l}, and b ∈ ℝ^{n×d_l}. Equations (9) and (10) are simplifications of Equation (8). Equation (11) indicates that two GIN networks with the same structure are used to embed X̃^c and X̃^p (introduced in Section 2.2.1), respectively, and the outputs of the two networks are then concatenated job by job.
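A PyTorch-style sketch of the graph encoder layers in Equations (9)-(11) is given below (the framework, layer sizes, and initial value of γ are assumptions). The fully connected layer plays the role of the aggregation function φ, and two such encoders embed X̃^c and X̃^p separately before concatenation.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One graph-encoder layer following Equation (10):
    X_out = gamma * omega(X) + (1 - gamma) * ReLU(agg(X / |Adj(i)|))."""

    def __init__(self, dim):
        super().__init__()
        self.omega = nn.Linear(dim, dim, bias=False)   # weight matrix omega^(l)
        self.agg = nn.Linear(dim, dim)                 # aggregation function phi^(l), i.e. (W, b)
        self.gamma = nn.Parameter(torch.tensor(0.5))   # trainable mixing coefficient gamma^(l)

    def forward(self, X):
        # X: (n_jobs, dim); the PFSP graph is complete, so |Adj(i)| = n_jobs - 1 for every node
        scaled = X / max(X.shape[0] - 1, 1)
        return self.gamma * self.omega(X) + (1 - self.gamma) * torch.relu(self.agg(scaled))

class GraphEncoder(nn.Module):
    """Stacks L layers; one encoder is used for X^c and another for X^p (Equation (11))."""

    def __init__(self, dim, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([GINLayer(dim) for _ in range(num_layers)])

    def forward(self, X):
        for layer in self.layers:
            X = layer(X)
        return X
```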

2.2.3. Decoder

The decoder consists of an attention layer and a softmax layer. The pointer vector u computed by the attention mechanism is passed through the softmax layer to generate a probability distribution over the candidate jobs. Through training, the attention mechanism learns how much attention each element of the input sequence should receive. Similar to the pointer network [24], the attention mechanism is defined as Equation (12); its process is shown in Figure 7.
$$u_i = \begin{cases} v^{T} \cdot \tanh\!\left( W_{ref} \cdot r_i + W_q \cdot q \right) & \text{if } i \neq \pi(k),\ \forall k < i \\ -\infty & \text{otherwise} \end{cases} \tag{12}$$
where u_i is the i-th entry of the vector u, W_ref and W_q are trainable matrices, v ∈ ℝ^d is an attention vector, q is the query vector, and r_i is a reference vector from the reference set. The output of the attention mechanism is obtained by computing the similarity between q and each element of the reference set. In this article, q = x̃^h and r_i = x̃_i^{(L)}, where x̃^h is the hidden state of the job encoder and x̃_i^{(L)} is the contextual embedding of a job from the graph encoder, so the reference set contains the contextual information of every job. The logits of jobs that already appear in the scheduled sequence are set to −∞, as shown in Equation (12), ensuring that the model only points to jobs that have not yet been scheduled and thus generates a feasible scheduling sequence. The policy distribution over all candidate jobs can be expressed as Equation (13).
$$p_\theta(a_i \mid s_i) = p\big(\pi(i) \mid \pi(<i), s\big) = A\!\left(r_i, q; W_{ref}, W_q, v\right) \stackrel{\mathrm{def}}{=} \mathrm{softmax}\!\left( C \tanh(u) \right) \tag{13}$$
where C is a hyperparameter that controls the range of the logits and hence the entropy of p_θ(a_i|s_i), and i is the current time step. The next job to be scheduled, a_i = x_{π(i)}, is predicted by sampling from or greedily choosing according to the policy p_θ(a_i|s_i). a_i and s_i can be regarded as an action and a state of the RL formulation introduced in the next section.
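A PyTorch-style sketch of the decoder in Equations (12) and (13) follows (the framework and the value of C are assumptions). The clipping C·tanh(·) is applied before the mask so that scheduled jobs receive exactly zero probability.

```python
import torch
import torch.nn as nn

class PointerDecoder(nn.Module):
    """Computes u_i = v^T tanh(W_ref r_i + W_q q), masks scheduled jobs, returns a policy."""

    def __init__(self, dim, clip_c=10.0):
        super().__init__()
        self.w_ref = nn.Linear(dim, dim, bias=False)   # W_ref
        self.w_q = nn.Linear(dim, dim, bias=False)     # W_q
        self.v = nn.Parameter(torch.randn(dim))        # attention vector v
        self.clip_c = clip_c                           # logit-clipping constant C (value assumed)

    def forward(self, refs, query, scheduled_mask):
        """refs: (n_jobs, dim) graph-encoded jobs; query: (dim,) job-encoder hidden state;
        scheduled_mask: (n_jobs,) bool, True for jobs already in the sequence."""
        u = torch.tanh(self.w_ref(refs) + self.w_q(query)) @ self.v   # Equation (12)
        u = self.clip_c * torch.tanh(u)                               # logit clipping with C
        u = u.masked_fill(scheduled_mask, float('-inf'))              # forbid scheduled jobs
        return torch.softmax(u, dim=-1)                               # Equation (13)
```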

2.2.4. Decode Strategies

Greedy: After the model is trained, the policy network outputs a probability distribution over candidate jobs at each decision step. The most common strategy is to select the job with the highest probability at each step and feed it back into the network to obtain the distribution for the next step.
Sampling: Since the model is trained on a large number of randomized problems, the single solution generated by greedy search is not necessarily well suited to the test set. Therefore, candidate jobs are sampled from each output distribution to produce more diverse solutions. In this study, sampleSize is set to 1280, and the solution with the smallest objective function value among the 1280 generated solutions is taken as the final solution.
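The two strategies can be summarized in the following sketch, which assumes a hypothetical `policy` callable that returns the Equation (13) distribution over jobs (with already-scheduled jobs masked out) for the current partial sequence, and an `evaluate` function that returns the late work of a complete permutation.

```python
import torch

def decode(policy, n_jobs, evaluate, strategy="greedy", sample_size=1280):
    """Roll out the policy into one (greedy) or many (sampling) complete sequences."""
    num_rollouts = 1 if strategy == "greedy" else sample_size
    best_perm, best_obj = None, float("inf")
    for _ in range(num_rollouts):
        perm = []
        for _ in range(n_jobs):
            probs = policy(perm)                        # distribution over unscheduled jobs
            if strategy == "greedy":
                job = int(torch.argmax(probs))          # pick the most probable job
            else:
                job = int(torch.multinomial(probs, 1))  # sample for more diverse solutions
            perm.append(job)
        obj = evaluate(perm)
        if obj < best_obj:                              # keep the best of the generated solutions
            best_perm, best_obj = perm, obj
    return best_perm, best_obj
```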

2.3. Reinforcement Learning for PFSP

In RL, the agent learns through an iterative trial-and-error process by interacting with the environment and observing the resulting reward signals. The RL problem is modeled as a Markov decision process (MDP), which provides a mathematical framework for modeling sequential decisions under uncertainty [38]. It consists of four main elements, Agent, State, Action, and Reward, and the goal is to obtain the highest cumulative reward. The process of RL is shown in Figure 8.
Let S and A denote the state space and the action space, respectively.
State: Each state s_t ∈ S mainly includes two parts: the graph encoding of all jobs, X̃^(L), and the LSTM encoding of the jobs that have been scheduled up to time step t.
Action: The action a_t ∈ A is defined as the next selected job, that is, a_t = x_{π(t)}; the action is chosen from the jobs that have not yet been selected.
Policy: The policy is expressed as p_θ(a_t | s_t), a distribution over the candidate jobs a_t. Given a set of scheduled jobs, the policy returns a probability distribution over the candidate jobs that have not yet been chosen, and the next job to be scheduled is selected greedily or sampled according to this distribution. The policy is implemented by a neural network with trainable weights θ.
Reward: Since a gradient-based algorithm is used to train the network, the reward is set to the negative cost of taking action a_t in state s_t, i.e., r(s_t, a_t) = −(Latework_{π(t)} − Latework_{π(t−1)}). The DRL method can optimize various objectives such as late work, makespan, total completion time, and delay time by gradient descent without changing the network structure. The expected reward is therefore defined as follows, where Obj denotes the objective function to be optimized.
$$\mathbb{E}_{(s_t, a_t) \sim p_\theta(s_t, a_t)}\!\left[ \sum_{t=1}^{n} r(s_t, a_t) \right] = \mathbb{E}_{\pi \sim p_\theta(\Gamma)}\!\left[ \sum_{i=1}^{n} \left( Obj_{\pi(i)} - Obj_{\pi(i-1)} \right) \right] = \mathbb{E}_{\pi \sim p_\theta(\Gamma)}\!\left[ L(\pi, s) \right] \tag{14}$$
where all possible permutations π over s = {x_i}_{i=1}^{n} constitute the space Γ, and p_θ(Γ) is the distribution over Γ predicted by the policy network. When a complete solution is obtained, the cumulative reward is Y_{π(n)} − Y_{π(0)}, which corresponds to the late work of the current solution. The policy network must learn to minimize the expected objective value. The policy gradient algorithm [39] is employed to train the network to maximize the obtained reward, as presented below.

2.4. Training Method

The objective function of the policy is J(θ | s) = E_{π∼p_θ(Γ)}[L(π, s)]. Based on the REINFORCE algorithm [39], the gradient of the policy is expressed as:
$$\nabla_\theta J(\theta \mid s) = \mathbb{E}_{\pi \sim p_\theta(\cdot \mid s)}\!\left[ \left( L(\pi \mid s) - b(s) \right) \nabla_\theta \log p_\theta(\pi \mid s) \right] \tag{15}$$
where b ( s ) denotes a baseline function that estimates the late work of the expected scheduling sequence to reduce the variance of the gradients. In actual training, based on Monte Carlo sampling [20], the gradient can also be approximated as:
$$\nabla_\theta J(\theta) = \frac{1}{B} \sum_{i=1}^{B} \left[ \left( \sum_{t=1}^{n} r(s_{i,t}, a_{i,t}) - b_i \right) \times \left( \sum_{t=1}^{n} \nabla_\theta \log p_\theta(a_{i,t} \mid s_{i,t}) \right) \right] \tag{16}$$
where B is the batch size, r is the reward function, b_i is the baseline for a problem instance in the current batch (the late work value obtained by the baseline), Σ_{t=1}^{n} r(s_{i,t}, a_{i,t}) equals the late work value of the solution, and p_θ(a_{i,t} | s_{i,t}) is the probability, output by the policy network, that each action in the action sequence is selected. On the basis of Equation (16), the parameters θ are optimized with the update rule θ ← θ + α∇_θ J(θ).
An algorithm similar to self-critic [40] is used as the baseline. The main idea is to use the result produced by the model in test (greedy) mode as the estimated scheduling sequence. During the training phase, actions are sampled from the generated probability distribution in order to increase the model's exploration ability and avoid falling into local optima; the sampling operation allows the model to trade off between exploitation and exploration. In the testing phase, the action with the highest probability at each step is selected until the complete scheduling sequence is obtained. It is also possible to sample from the probability distribution to search for a better solution, at the expense of some computation time. The self-critic baseline b_i is expressed as:
$$b_i = \sum_{t=1}^{n} r(\tilde{s}_{i,t}, \tilde{a}_{i,t}) + \left[ \frac{1}{B} \sum_{j=1}^{B} \sum_{t=1}^{n} \left( r(s_{j,t}, a_{j,t}) - r(\tilde{s}_{j,t}, \tilde{a}_{j,t}) \right) \right] \tag{17}$$
where the action ã_{i,t} ∼ Greedy(p_θ) is chosen greedily from the policy. The term to the right of the plus sign in Equation (17) is the gap between the rewards obtained by sampling and by greedy decoding. If a sampled result is better than the baseline, the gradient of the corresponding better actions increases; eventually, the probability of a good action being selected is increased, while the probability of a poor action being selected is decreased. The final optimization process is shown in Algorithm 1. In order to further improve the quality of the solutions obtained by RL, an improved hybrid iterated greedy method is proposed in the next section.
Algorithm 1 Policy Gradient Optimization
Input: training set T, training steps S, batch size B, learning rate α
  •     Initialize network parameters θ
  •     for s = 1 to S do
  •           x_i = Sample(T) for i ∈ {1, ..., B}
  •           a_{i,t} = Sample(p_θ(· | s_{i,t}))
  •           ã_{i,t} = Greedy(p_θ(· | s̃_{i,t}))
  •          Calculate J(θ) and ∇_θ J(θ)
  •           θ ← θ + α∇_θ J(θ)
  •     return p_θ
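For clarity, a minimal PyTorch-style sketch of one step of Algorithm 1 is shown below. It uses the plain greedy rollout of the same policy as the self-critic baseline, which is a simplification of Equation (17); `policy_net.rollout` and `instance.late_work` are hypothetical helpers, not part of the described architecture.

```python
import torch

def train_step(policy_net, optimizer, batch):
    """One REINFORCE step: sampled rollout vs. greedy baseline on the same instances."""
    losses = []
    for instance in batch:
        # sampled rollout: keep the log-probabilities of the chosen actions
        perm_s, log_probs = policy_net.rollout(instance, sample=True)
        late_work_s = instance.late_work(perm_s)

        # greedy rollout of the same policy acts as the baseline b_i (no gradient needed)
        with torch.no_grad():
            perm_g, _ = policy_net.rollout(instance, sample=False)
            baseline = instance.late_work(perm_g)

        advantage = late_work_s - baseline              # L(pi|s) - b(s) in Equation (15)
        losses.append(advantage * torch.stack(log_probs).sum())

    loss = torch.stack(losses).mean()                   # 1/B sum over the batch, Equation (16)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```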

3. A Hybrid Iterated Greedy Method to Improve the Initial Solutions

Iterated greedy (IG) methods are mainly used to solve flow shop problems minimizing the makespan; the method was first proposed in [41] and remains one of the most effective meta-heuristics for this class of problems. The algorithm first uses a heuristic (such as NEH) to generate an initial solution Π, which is improved by a local search based on the insertion neighborhood. Then, d jobs are removed from the sequence (destruction phase) and reinserted one by one at the positions that minimize the resulting objective value (construction phase), producing a new sequence Π′. Finally, a local search based on the insertion neighborhood is applied to Π′, and a probability similar to simulated annealing is used to decide whether to accept Π′ before starting the next iteration. A well-known variant of the IG algorithm [42] applies local search to both complete and partial solutions to speed up the search and uses the NEH algorithm with local search to produce the initial solution; the IG framework is shown in Algorithm 2.
Algorithm 2 The IG framework with local search
  •    input: a PFSP problem instance P, the number of jobs removed d, simulated annealing parameter T.
  •    output: The best solution found Π * .
  •     Π = NEH(P); // replaced by DRL(P)
  •     Π = LocalSearch(Π);
  •     Π* = Π;
  •    while termination criteria not met do
  •              Randomly remove d jobs from Π (destruction);
  •              (Let ΠR be the remaining sequence and ΠD be the extracted jobs);
  •               Π_R = LocalSearch(Π_R);
  •               Π′ = Construction(Π_R, Π_D);
  •               Π′ = LocalSearch(Π′);
  •               Π = AcceptanceCriterion(Π′, T);
  •              if f(Π) < f(Π*) then
  •                    Π* = Π;
  •         return Π *
The proposed deep reinforcement learning architecture (DRL for short) can replace the combination of NEH and the insertion neighborhood local search as a high-performance initial solution generator. In the destruction and construction phases, an adaptive local search strategy is proposed, which consists of an insertion operator (LS1) and a swap operator (LS2). The weights of the two operators are weight_1 and weight_2, respectively, with corresponding probabilities p_1 and p_2. Both weights are initialized to 1 and both probabilities to 1/2. As the number of iterations grows, the probability of the operator that improves the current solution more is increased. In order to improve the effectiveness of the local search, a tie-breaking mechanism is added: if several insertion or swap positions yield the same late work value, the one whose resulting schedule has the smallest idle time is taken as the final result. The loopSize of the local search is set to half the number of jobs to improve search efficiency. The late work objective is very sensitive to the permutation of the jobs, especially when the problem has release date constraints, and small changes can lead to a dramatic deterioration of the late work value. Therefore, the inverse operator, which would be destructive to the current solution, is not used in the improved IG algorithm. The p_1 and p_2 of the next iteration are calculated from weight_1 and weight_2. The update and calculation of these parameters are shown in Equations (18)–(20).
$$p_i = \frac{weight_i}{weight_1 + weight_2}, \qquad i = 1, 2 \tag{18}$$
$$\mu_i = \begin{cases} 1, & f_{new} < f_{best} \\ 0, & f_{new} \geq f_{best} \end{cases} \tag{19}$$
$$weight_i = weight_i + \theta\, \mu_i \tag{20}$$
where f_new is the fitness after the local search update, and f_best is the fitness of the best solution found so far (the solution before the local search). In the same iteration, only one of μ_1 and μ_2 equals 1, and the other is 0. θ is the learning factor, generally set to 0.2. The pseudocode of the local search framework is shown in Algorithm 3, and the pseudocode of the operators used in it is shown in Algorithm 4.
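Before turning to the pseudocode, the adaptive operator-selection update of Equations (18)-(20) can be summarized in the following small Python sketch; the function and variable names are illustrative.

```python
def update_operator_weights(weights, chosen_op, f_new, f_best, theta=0.2):
    """Reward the operator (0 = insert, 1 = swap) that improved the best-known solution."""
    if f_new < f_best:                       # Equation (19): mu_i = 1 only on improvement
        weights[chosen_op] += theta          # Equation (20)
    total = sum(weights)
    probs = [w / total for w in weights]     # Equation (18)
    return weights, probs

# usage: both weights start at 1, both probabilities at 1/2
weights, probs = update_operator_weights([1.0, 1.0], chosen_op=0, f_new=42.0, f_best=50.0)
```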
Algorithm 3 The local search framework
  •    take the permutation Π or Π_R as input Π_input, set l = 0;
  •    generate a random number r ∈ (0, 1);
  •    while l < loopSize do:
  •              if r ≤ p_1 then
  •                     Π_LS = chooseOperator(Π_input, op = insert);
  •              else if p_1 < r ≤ p_1 + p_2 then
  •                     Π_LS = chooseOperator(Π_input, op = swap);
  •              if f(Π_LS) < f(Π*) then
  •                     Π* = Π_LS
  •              if f(Π_LS) < f(Π_input) then
  •                     Π_input = Π_LS
  •              update the weights and probabilities of each operator if necessary
  •               l = l + 1
  •               Π = Π_input
  •       return Π
Algorithm 4 The process of two operators
  •    randomly select a position a of the input Π_input; let b = 1, j = job number, bestPosFit = +∞, and op = insert or swap;
  •    while b < j do
  •               if a ≠ b then
  •                   if op == insert then
  •                         insert job Π_a into the b-th position of the permutation (LS1);
  •                   if op == swap then
  •                        swap the job π_a in position a and the job π_b in position b (LS2);
  •                   a candidate solution Π_cand is obtained;
  •                   if f(Π_cand) < bestPosFit then
  •                          bestPosFit = f(Π_cand);
  •                          bestCand = Π_cand
  •                   else if f(Π_cand) == bestPosFit then
  •                         if the tie-breaking conditions are met then
  •                              bestPosFit = f(Π_cand);
  •                              bestCand = Π_cand
  •                b = b + 1;
  •         return bestCand, bestPosFit
This method uses the same acceptance criterion as the IG_RS method, which applies the idea of simulated annealing to decide whether to accept the candidate solution Π′ obtained in each iteration. If the candidate solution Π′ is better than or equal to the current best solution Π*, then Π* is directly replaced by Π′. If Π′ is worse, the candidate is accepted with a probability given by e^{(f(Π) − f(Π′))/T}, where T is calculated as in Equation (21) and T_p is a hyperparameter that can be adjusted.
$$T = T_p \times \frac{\sum_{i=1}^{m} \sum_{j=1}^{n} p_{ij}}{n \cdot m \cdot 10} \tag{21}$$
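A short sketch of the acceptance criterion and the temperature of Equation (21) is given below; `proc` is assumed to be the matrix of all operation processing times, and the function names are illustrative.

```python
import math
import random

def temperature(proc, t_p):
    """Equation (21): T = T_p * (sum of all processing times) / (n * m * 10)."""
    n, m = len(proc), len(proc[0])
    return t_p * sum(sum(row) for row in proc) / (n * m * 10)

def accept(f_candidate, f_incumbent, T):
    """Accept an equal-or-better candidate directly; accept a worse one with
    probability exp(-(f_candidate - f_incumbent) / T), as in simulated annealing."""
    if f_candidate <= f_incumbent:
        return True
    return random.random() < math.exp(-(f_candidate - f_incumbent) / T)
```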

4. Experiments

4.1. Experiment Setup

The multi-layer GIN with L = 3 is used as the graph encoder if the problem size n ≤ 50; for larger problems, L = 5. The (m + 3) × n problem matrix of the PFSP is the main training data, where the processing time of an operation is generated randomly from a U[0, 100) distribution. In order to enable most jobs to be completed before their due dates, r_i and d_i are generated from U[0, 50 × n] and U[r_i + 2 × m × p_i, r_i + 0.4 × n × p_i], respectively. To ensure that a problem instance is reasonable, one job is randomly selected and its release date is set to 0. The proposed method is compared with other methods on the Taillard [43] benchmark instances. The Taillard benchmark does not include due date and release date constraints, so these are also generated from the above distributions. At the beginning of each training step, the training data are regenerated; the size of the training set is BatchSize, and the validation set is generated only once for the entire training process. The hyperparameter configuration of the proposed method is shown in Table 2. All code is written in Python, and the calculation of the objective function is accelerated with Cython. The experiments are run in an environment with an AMD 5600X CPU, an RTX TITAN GPU, and 16 GB of memory. It should be noted that the training and validation sets of the reinforcement learning model are randomly generated, the algorithms are compared on the Taillard [43] benchmarks, and the release dates and due dates are generated only once to ensure that the problem instances used to evaluate each method are identical.
To tune the performance of the proposed hybrid iterated greedy method (IG_h for short), orthogonal experiments are conducted on two hyperparameters, d and T_p. The factor levels are given in Table 3. The average improvement percentage (AIP) is used to evaluate the performance of the algorithm under different parameter combinations: AIP = (Y_ini − Y_res)/(m × n × 10) × 100%, where Y_ini is the initial late work value and Y_res is the late work value after the algorithm is executed. The results of the orthogonal tests are shown in Table 4, and the effect of the parameters on the performance of the algorithm is plotted in Figure 9a,b. Finally, d is set to 4 and T_p to 0.7, since the algorithm performs best with this parameter setting.

4.2. Experiment Results

The proposed method consists of two parts: RL is used to generate high-quality initial solutions, and IG_h is used to optimize the solutions obtained by RL. The role of the RL method is similar to that of a heuristic, so RL is compared with classical heuristics such as NEH [4], earliest due date first (EDD) [19], and smallest late work insertion (SLW) [18]. A comparison of these methods is shown in Table 5, which lists a description of the problem instances, the average relative difference percentage (ARDP) values, and the average computation time (ACT) of each method. The RDP is calculated as in Equation (22):
$$RDP = \frac{Obj_x - Obj_{min}}{m \times n \times 10} \times 100\% \tag{22}$$
where Obj_x is the objective function value of the solution obtained by method x, and Obj_min is the minimum objective function value obtained by any of the compared methods. The ARDP indicates the difference between the objective values obtained by the methods on the same set of benchmarks, and a value of 0 means that the current method obtains the smallest late work among them. In short, the smaller the ARDP value, the better the performance of the algorithm and the higher the quality of the obtained solutions. As can be seen from Table 5, the RL algorithm using the sampling decoding strategy performs better than the other heuristics on problems of all sizes. Since EDD uses the simplest dispatching rule, its computation time is negligible, but it obtains the largest ARDP value of 100.54. The NEH algorithm is a widely used heuristic; the computation time of RL-Greedy is slightly higher than that of NEH, but its ARDP value is reduced by about half (from 16.79 to 9.9). The box plots in Figure 10 visualize the distribution and skewness of the achieved RDPs by displaying the data quartiles (or percentiles) and averages; a box plot generally includes the five-number summary of a set of data: the minimum, first quartile, median, third quartile, and maximum [44]. As can be seen in Figure 10, the RL-based method is significantly better than the other heuristics.
Furthermore, the combination of RL-Sample and IG_h is compared with other meta-heuristic algorithms, namely the hybrid cuckoo search algorithm (HCS) [45], the discrete artificial bee colony algorithm (DABC) [46], and the NEH + IG algorithm [42]. The hyperparameters of the other algorithms use the default settings of the corresponding articles. The running times of the IG-based algorithms for problems with 20, 50, 100, and 200 jobs are 2 s, 4 s, 8 s, and 16 s, respectively; the number of iterations of the other algorithms is fixed at 500. The experimental results of the meta-heuristics are shown in Table 6, where BR indicates the best RDP value and AR the average RDP value. The BR value represents the optimization ability of an algorithm, and the AR represents its stability. Table 6 shows that the RL + IG_h algorithm achieves the smallest BR and AR values and runs in much less time than the HCS and DABC algorithms (especially for problems with 50 or more jobs); RL + IG_h shows a 42% improvement in AR compared to NEH + IG, and the BR obtained by RL + IG_h outperforms the NEH + IG method on benchmarks of all sizes. In order to compare the performance difference between RL + IG_h and the other algorithms, a Wilcoxon test (with a 95% confidence interval) was performed on the obtained BR and AR values. Table 7 shows the p-values obtained by comparing the proposed method with the other methods; a p-value below 0.05 indicates a statistically significant difference. From Table 7 and Figure 11, it can be seen that the proposed method is significantly better than NEH + IG, HCS, and DABC.
The method is also compared with the original PtrNet [24,47], whose structure contains no GIN graph encoding module. Before the job feature vector x_i is input into the network, the proposed method expands its dimension through a simple linear transformation layer, while the PtrNet uses a simple graph embedding with node aggregation. The major difference between the two methods is that the encoder and decoder of the PtrNet are both LSTM structures, and the output of each encoder step is used as a reference vector r_i. Table 8 shows the objective function values of the two methods on several problem instances; both methods use the greedy decoding strategy. The performance of the PtrNet deteriorates rapidly as the number of jobs increases, and when the number of jobs exceeds 50, the objective value of the obtained solution is worse than that of NEH. The PtrNet model converges during training but ends up in a local optimum. This indicates that the GIN can learn the contextual information of the jobs and greatly improves the model's performance; the proposed RL method achieves a 6.7% performance improvement over the PtrNet.
The result of the attention mechanism is visualized as a heat map in Figure 12. Each color block represents the probability of selecting a job at time step i; the brighter the color, the greater the probability. Figure 12a shows the heat map of the attention output before training, and Figure 12b shows the result after training. The color blocks are disordered before training, whereas the probability distribution is sharp after training. Following the greedy decoding strategy, the final job scheduling sequence can easily be read off the figure: 3-17-15-8-9-14-11-13-5-7-4-19-6-16-1-18-2-12-10-20.

5. Conclusions

In this study, the PFSP is innovatively modeled as a sequential decision process, and a reinforcement learning (RL) method applying graph neural networks is proposed to minimize the late work of the PFSP. In addition, a hybrid iterated greedy algorithm (IG_h) with a tie-breaking mechanism is proposed to improve the solutions obtained by the RL method. The experimental results show that the improved RL outperforms some classical heuristics and pointer-network-based reinforcement learning methods and is able to obtain high-quality solutions in a short time. The combination of RL and IG_h also outperforms excellent meta-heuristics such as HCS, DABC, and NEH + IG. In summary, reinforcement learning is highly competitive with traditional methods for solving problems such as flow shop scheduling.
Future work will focus on applying reinforcement learning methods to scheduling problems with more complex constraints and to dynamic scheduling problems; the performance and efficiency of reinforcement learning models on large-scale problems also need to be improved.

Author Contributions

Conceptualization, T.R. and Z.D.; methodology, Z.D.; software, Z.D. and J.W.; validation, F.Q., X.W.; formal analysis, F.Q.; investigation, F.Q.; resources, T.R.; data curation, J.W.; writing—original draft preparation, Z.D.; writing—review and editing, Z.D. and T.R.; visualization, J.W.; supervision, T.R.; project administration, T.R.; funding acquisition, T.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Fundamental Research Funds for the Central Universities (N181706001, N2017009, N2017008, N182608003, N181703005), National Natural Science Foundation of China (61902057), Joint Fund of Science & Technology Department of Liaoning Province and State Key Laboratory of Robotics, China (2020-KF-12-11).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Johnson, S.M. Optimal two-and three-stage production schedules with setup times included. Nav. Res. Logist. Q. 1954, 1, 61–68. [Google Scholar] [CrossRef]
  2. Garey, M.R.; Johnson, D.S.; Sethi, R. The complexity of flowshop and jobshop scheduling. Math. Oper. Res. 1976, 1, 117–129. [Google Scholar] [CrossRef]
  3. Ruiz, R.; Maroto, C. A comprehensive review and evaluation of permutation flowshop heuristics. Eur. J. Oper. Res. 2005, 165, 479–494. [Google Scholar] [CrossRef]
  4. Nawaz, M.; Enscore, E.E., Jr.; Ham, I. A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem. Omega 1983, 11, 91–95. [Google Scholar] [CrossRef]
  5. Campbell, H.G.; Dudek, R.A.; Smith, M.L. A heuristic algorithm for the n job, m machine sequencing problem. Manag. Sci. 1970, 16, B-630–B-637. [Google Scholar] [CrossRef] [Green Version]
  6. Tseng, L.-Y.; Lin, Y.-T. A hybrid genetic algorithm for no-wait flowshop scheduling problem. Int. J. Prod. Econ. 2010, 128, 144–152. [Google Scholar] [CrossRef]
  7. Nowicki, E.; Smutnicki, C. A fast tabu search algorithm for the permutation flow-shop problem. Eur. J. Oper. Res. 1996, 91, 160–175. [Google Scholar] [CrossRef]
  8. Pan, Q.-K.; Tasgetiren, M.F.; Liang, Y.-C. A discrete particle swarm optimization algorithm for the no-wait flowshop scheduling problem. Comput. Oper. Res. 2008, 35, 2807–2839. [Google Scholar] [CrossRef]
  9. Tasgetiren, M.F.; Sevkli, M.; Liang, Y.-C.; Gencyilmaz, G. Particle Swarm Optimization Algorithm for Permutation Flowshop Sequencing Problem. In Proceedings of the International Workshop on Ant Colony Optimization and Swarm Intelligence, Brussels, Belgium, 5–8 September 2004; pp. 382–389. [Google Scholar]
  10. Potts, C.N.; van Wassenhove, L.N. Single machine scheduling to minimize total late work. Oper. Res. 1992, 40, 586–595. [Google Scholar] [CrossRef]
  11. Błażewicz, J.; Pesch, E.; Sterna, M.; Werner, F. Total late work criteria for shop scheduling problems. In Proceedings of the Operations Research Proceedings 1999, Magdeburg, Germany, 1–3 September 1999; pp. 354–359. [Google Scholar]
  12. Chen, R.; Yuan, J.; Ng, C.; Cheng, T. Single-machine scheduling with deadlines to minimize the total weighted late work. Nav. Res. Logist. 2019, 66, 582–595. [Google Scholar] [CrossRef]
  13. Chen, X.; Sterna, M.; Han, X.; Blazewicz, J. Scheduling on parallel identical machines with late work criterion: Offline and online cases. J. Sched. 2016, 19, 729–736. [Google Scholar] [CrossRef] [Green Version]
  14. Leung, J. Minimizing Total Weighted Error for Imprecise Computation Tasks and Related Problems. In Handbook of Scheduling: Algorithms, Models, and Performance Analysis; CRC Press: Boca Raton, FL, USA, 2004; p. 34. [Google Scholar]
  15. Ren, J.; Zhang, Y.; Sun, G. The NP-hardness of minimizing the total late work on an unbounded batch machine. Asia-Pac. J. Oper. Res. 2009, 26, 351–363. [Google Scholar] [CrossRef]
  16. Ren, J.; Du, D.; Xu, D. The complexity of two supply chain scheduling problems. Inf. Processing Lett. 2013, 113, 609–612. [Google Scholar] [CrossRef]
  17. Sterna, M. A survey of scheduling problems with late work criteria. Omega 2011, 39, 120–129. [Google Scholar] [CrossRef]
  18. Pesch, E.; Sterna, M. Late work minimization in flow shops by a genetic algorithm. Comput. Ind. Eng. 2009, 57, 1202–1209. [Google Scholar] [CrossRef]
  19. Gerstl, E.; Mor, B.; Mosheiov, G. Scheduling on a proportionate flowshop to minimise total late work. Int. J. Prod. Res. 2019, 57, 531–543. [Google Scholar] [CrossRef]
  20. Bello, I.; Pham, H.; Le, Q.V.; Norouzi, M.; Bengio, S. Neural combinatorial optimization with reinforcement learning. arXiv 2016, arXiv:1611.09940. [Google Scholar]
  21. Hu, H.; Zhang, X.; Yan, X.; Wang, L.; Xu, Y. Solving a new 3d bin packing problem with deep reinforcement learning method. arXiv 2017, arXiv:1708.05930. [Google Scholar]
  22. Kool, W.; van Hoof, H.M.; Welling, M. Attention, Learn to Solve Routing Problems! In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April 2018. [Google Scholar]
  23. Zhang, R.; Prokhorchuk, A.; Dauwels, J. Deep Reinforcement Learning for Traveling Salesman Problem with Time Windows and Rejections. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 28 September 2020; pp. 1–8. [Google Scholar]
  24. Vinyals, O.; Fortunato, M.; Jaitly, N. Pointer networks. Adv. Neural Inf. Processing Syst. 2015, 28, 1–9. [Google Scholar]
  25. Nazari, M.; Oroojlooy, A.; Snyder, L.; Takáč, M. Reinforcement learning for solving the vehicle routing problem. Adv. Neural Inf. Processing Syst. 2018, 31, 1–11. [Google Scholar]
  26. Chen, X.; Tian, Y. Learning to perform local rewriting for combinatorial optimization. Adv. Neural Inf. Processing Syst. 2019, 32, 6281–6292. [Google Scholar]
  27. Lu, H.; Zhang, X.; Yang, S. A Learning-Based Iterative Method for Solving Vehicle Routing Problems. In Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  28. Wu, Y.; Song, W.; Cao, Z.; Zhang, J.; Lim, A. Learning improvement heuristics for solving the travelling salesman problem. arXiv 2019, arXiv:1912.05784. [Google Scholar]
  29. Khalil, E.; Dai, H.; Zhang, Y.; Dilkina, B.; Song, L. Learning combinatorial optimization algorithms over graphs. Adv. Neural Inf. Processing Syst. 2017, 30, 1–11. [Google Scholar]
  30. Lederman, G.; Rabe, M.N.; Seshia, S.A. Learning heuristics for automated reasoning through deep reinforcement learning. arXiv 2018, arXiv:1807.08058. [Google Scholar]
  31. Gupta, J.N.; Stafford, E.F., Jr. Flowshop scheduling research after five decades. Eur. J. Oper. Res. 2006, 169, 699–711. [Google Scholar] [CrossRef]
  32. Baker, K.R.; Trietsch, D. Principles of Sequencing and Scheduling; John Wiley & Sons: New York, NY, USA, 2013. [Google Scholar]
  33. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to sequence learning with neural networks. Adv. Neural Inf. Processing Syst. 2014, 27, 1–9. [Google Scholar]
  34. Scarselli, F.; Gori, M.; Tsoi, A.C.; Hagenbuchner, M.; Monfardini, G. The graph neural network model. IEEE Trans. Neural Netw. 2008, 20, 61–80. [Google Scholar] [CrossRef] [Green Version]
  35. Ma, Q.; Ge, S.; He, D.; Thaker, D.; Drori, I. Combinatorial optimization by graph pointer networks and hierarchical reinforcement learning. arXiv 2019, arXiv:1911.04936. [Google Scholar]
  36. Xu, K.; Hu, W.; Leskovec, J.; Jegelka, S. How Powerful Are Graph Neural Networks? In Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada, 30 April 2018. [Google Scholar]
  37. Kipf, T.N.; Welling, M. Semi-supervised classification with graph convolutional networks. arXiv 2016, arXiv:1609.02907. [Google Scholar]
  38. Bennett, C.C.; Hauser, K. Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach. Artif. Intell. Med. 2013, 57, 9–19. [Google Scholar] [CrossRef] [Green Version]
  39. Williams, R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 1992, 8, 229–256. [Google Scholar] [CrossRef] [Green Version]
  40. Rennie, S.J.; Marcheret, E.; Mroueh, Y.; Ross, J.; Goel, V. Self-Critical Sequence Training for Image Captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7008–7024. [Google Scholar]
  41. Ruiz, R.; Stützle, T. A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem. Eur. J. Oper. Res. 2007, 177, 2033–2049. [Google Scholar] [CrossRef]
  42. Dubois-Lacoste, J.; Pagnozzi, F.; Stützle, T. An iterated greedy algorithm with optimization of partial solutions for the makespan permutation flowshop problem. Comput. Oper. Res. 2017, 81, 160–166. [Google Scholar] [CrossRef]
  43. Taillard, E. Benchmarks for basic scheduling problems. Eur. J. Oper. Res. 1993, 64, 278–285. [Google Scholar] [CrossRef]
  44. Ivković, N.; Jakobović, D.; Golub, M. Measuring performance of optimization algorithms in evolutionary computation. Int. J. Mach. Learn. Comput. 2016, 6, 167–171. [Google Scholar] [CrossRef]
  45. Wang, H.; Wang, W.; Sun, H.; Cui, Z.; Rahnamayan, S.; Zeng, S. A new cuckoo search algorithm with hybrid strategies for flow shop scheduling problems. Soft Comput. 2017, 21, 4297–4307. [Google Scholar] [CrossRef]
  46. Ince, Y.; Karabulut, K.; Tasgetiren, M.F.; Pan, Q.-K. A Discrete Artificial Bee Colony Algorithm for the Permutation Flowshop Scheduling Problem with Sequence-Dependent Setup Times. In Proceedings of the 2016 IEEE congress on Evolutionary computation (CEC), Vancouver, BC, Canada, 24–29 July 2016; pp. 3401–3408. [Google Scholar]
  47. Wang, X.; Ren, T.; Bai, D.; Ezeh, C.; Zhang, H.; Dong, Z. Minimizing the sum of makespan on multi-agent single-machine scheduling with release dates. Swarm Evol. Comput. 2021, 69, 100996. [Google Scholar] [CrossRef]
Figure 1. The Gantt chart with a job permutation of 1-2-3.
Figure 2. The sequence-to-sequence network structure.
Figure 3. The PtrNet structure (with attention mechanism).
Figure 4. The proposed policy network structure (scheduling sequence: 1, 3, 4, 2).
Figure 5. A complete graph representation of the PFSP problem.
Figure 6. The information aggregation of GIN.
Figure 7. The calculation process of the attention mechanism.
Figure 8. RL learning environment interaction.
Figure 9. Effect of parameter settings on algorithm performance: (a) orthogonal test results for d; (b) orthogonal test results for T_p.
Figure 10. Box plot of RL and several heuristic algorithms.
Figure 11. Box plot of RL + IGh and other meta-heuristic algorithms.
Figure 12. The heat map of the probability distribution of attention: (a) the matrix before training; (b) the matrix after training.
Table 1. An example of a PFSP problem with the late work objective function.

Job No. (i) | r_i | d_i | o_i^s | o_i1 | o_i2 | o_i3
1 | 0 | 14 | 12 | 3 | 4 | 5
2 | 4 | 12 | 12 | 2 | 6 | 4
3 | 4 | 20 | 12 | 4 | 3 | 5
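For readers who want to trace the example by hand, the minimal sketch below schedules the Table 1 instance with the permutation 1-2-3 of Figure 1 (standard flow shop recurrence, respecting release dates) and evaluates one common late-work definition, namely the portion of each operation processed after the job's due date. The function and variable names are illustrative, and the late-work formula is an assumption rather than the authors' exact implementation.

```python
# Minimal sketch (not the authors' code): build the permutation schedule for the
# Table 1 instance and evaluate an assumed late-work definition, i.e., the amount
# of processing performed after each job's due date.

def total_late_work(perm, r, d, p):
    """perm: job indices in processing order; r/d: release and due dates;
    p[i][k]: processing time of job i on machine k."""
    m = len(p[0])
    machine_free = [0] * m            # time at which each machine becomes idle
    late = 0
    for i in perm:
        prev_finish = r[i]            # the first operation cannot start before r_i
        for k in range(m):
            start = max(machine_free[k], prev_finish)
            finish = start + p[i][k]
            machine_free[k] = finish
            prev_finish = finish
            # late part of this operation (assumed definition of late work)
            late += min(p[i][k], max(0, finish - d[i]))
    return late

# Table 1 data (jobs 1, 2, 3 of the table are indexed 0, 1, 2 here)
r = [0, 4, 4]
d = [14, 12, 20]
p = [[3, 4, 5], [2, 6, 4], [4, 3, 5]]
print(total_late_work([0, 1, 2], r, d, p))   # prints 7 under this assumed definition
```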
Table 2. Hyperparameter configuration.

Parameter | Value
Batch size | 128
Epochs | 100
Steps per epoch | 3000
Learning rate | 1 × 10^-3
Learning rate decay | 0.975
Decay step | 3000
Hidden size | 128
Validation set size | 1000
Optimizer | Adam
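To make the configuration concrete, the snippet below shows one way the settings of Table 2 could be wired up in PyTorch. The placeholder network and the reading of "Learning rate decay" and "Decay step" as a StepLR schedule (multiply the rate by 0.975 every 3000 optimizer steps) are our assumptions; the paper's GIN-based policy network (Figures 4-7) is not reproduced here.

```python
# Sketch of the Table 2 training configuration in PyTorch. The policy network is
# replaced by a placeholder module; the StepLR mapping is an assumption.
import torch
import torch.nn as nn

BATCH_SIZE = 128          # Table 2: batch size
EPOCHS = 100              # Table 2: number of epochs
STEPS_PER_EPOCH = 3000    # Table 2: steps per epoch
HIDDEN_SIZE = 128         # Table 2: hidden size
VALIDATION_SIZE = 1000    # Table 2: validation set size

policy_net = nn.Sequential(              # placeholder for the GIN-based policy network
    nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE),
    nn.ReLU(),
    nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE),
)

optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
# multiply the learning rate by 0.975 every 3000 optimizer steps (assumed reading
# of "learning rate decay" and "decay step")
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3000, gamma=0.975)
```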
Table 3. Factor levels of IGh.

Level | d | T_p
1 | 2 | 0.3
2 | 4 | 0.5
3 | 6 | 0.7
4 | 8 | 0.9
Table 4. Results of the orthogonal test.

No. | d (level) | T_p (level) | AIP
1 | 1 | 1 | 30.23
2 | 1 | 2 | 30.42
3 | 1 | 3 | 31.34
4 | 1 | 4 | 27.65
5 | 2 | 1 | 33.87
6 | 2 | 2 | 37.54
7 | 2 | 3 | 40.18
8 | 2 | 4 | 39.11
9 | 3 | 1 | 35.54
10 | 3 | 2 | 37.29
11 | 3 | 3 | 38.93
12 | 3 | 4 | 33.74
13 | 4 | 1 | 29.76
14 | 4 | 2 | 33.66
15 | 4 | 3 | 35.85
16 | 4 | 4 | 29.54
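As a reading aid for Table 4 (and Figure 9), the sketch below averages the AIP values per factor level, which is the standard way to summarize a full factorial calibration. Treating AIP as a quantity to be maximized is our assumption, as are the helper names; the level-to-value mapping comes from Table 3.

```python
# Sketch: per-level mean AIP from Table 4, assuming a higher AIP is better.
# Rows are (d_level, tp_level, AIP); level values follow Table 3.
from collections import defaultdict

rows = [
    (1, 1, 30.23), (1, 2, 30.42), (1, 3, 31.34), (1, 4, 27.65),
    (2, 1, 33.87), (2, 2, 37.54), (2, 3, 40.18), (2, 4, 39.11),
    (3, 1, 35.54), (3, 2, 37.29), (3, 3, 38.93), (3, 4, 33.74),
    (4, 1, 29.76), (4, 2, 33.66), (4, 3, 35.85), (4, 4, 29.54),
]
d_values = {1: 2, 2: 4, 3: 6, 4: 8}
tp_values = {1: 0.3, 2: 0.5, 3: 0.7, 4: 0.9}

d_mean, tp_mean = defaultdict(list), defaultdict(list)
for d_lvl, tp_lvl, aip in rows:
    d_mean[d_values[d_lvl]].append(aip)
    tp_mean[tp_values[tp_lvl]].append(aip)

for name, table in (("d", d_mean), ("T_p", tp_mean)):
    for value, aips in sorted(table.items()):
        print(f"{name} = {value}: mean AIP = {sum(aips) / len(aips):.2f}")
```

Under that assumption, d = 4 and T_p = 0.7 give the highest per-level means, consistent with run 7 being the single best setting in Table 4.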
Table 5. Comparison of RL and multiple heuristic algorithms.

Problem Instances | RL-Sample ARDP | RL-Sample ACT(s) | RL-Greedy ARDP | RL-Greedy ACT(s) | NEH ARDP | NEH ACT(s) | SLW ARDP | SLW ACT(s) | EDD ARDP
20 × 5 | 0 | 0.17 | 11.60 | 0.11 | 17.62 | 0.003 | 95.57 | 0.002 | 90.39
20 × 10 | 0 | 0.18 | 19.65 | 0.12 | 33.23 | 0.003 | 138.2 | 0.002 | 84.29
20 × 20 | 0 | 0.18 | 17.75 | 0.11 | 20.25 | 0.003 | 108.6 | 0.002 | 125.72
50 × 5 | 0 | 0.24 | 7.00 | 0.14 | 20.20 | 0.022 | 65.78 | 0.010 | 184.57
50 × 10 | 0 | 0.29 | 10.26 | 0.14 | 23.17 | 0.023 | 91.85 | 0.012 | 130.32
50 × 20 | 0 | 0.30 | 20.27 | 0.13 | 47.62 | 0.023 | 151.7 | 0.015 | 114.15
100 × 5 | 0 | 0.55 | 3.56 | 0.21 | 4.42 | 0.083 | 38.28 | 0.040 | 101.34
100 × 10 | 0 | 0.61 | 6.50 | 0.22 | 7.04 | 0.084 | 52.42 | 0.042 | 92.54
100 × 20 | 0 | 0.63 | 5.25 | 0.21 | 7.26 | 0.088 | 58.51 | 0.042 | 84.33
200 × 10 | 0 | 1.41 | 1.62 | 0.32 | 0.45 | 0.360 | 34.43 | 0.210 | 44.31
200 × 20 | 0 | 1.49 | 5.49 | 0.33 | 3.45 | 0.400 | 41.75 | 0.220 | 54.05
Average | 0 | 0.55 | 9.90 | 0.19 | 16.79 | 0.099 | 79.74 | 0.054 | 100.54
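For readers unfamiliar with the ARDP column, the helper below illustrates one plausible definition: the relative deviation from a reference (e.g., best-found) late-work value in percent, averaged over instances. The function name, the exact formula, and the sample numbers are ours, not the paper's.

```python
# Sketch of an assumed ARDP computation: percentage deviation from reference
# late-work values, averaged over a set of instances.
def ardp(algorithm_values, reference_values):
    """Assumes all reference values are positive."""
    deviations = [
        100.0 * (alg - ref) / ref
        for alg, ref in zip(algorithm_values, reference_values)
    ]
    return sum(deviations) / len(deviations)

# hypothetical late-work values on three instances (illustration only)
reference = [120, 95, 210]
candidate = [132, 99, 231]
print(f"ARDP = {ardp(candidate, reference):.2f}%")   # about 8.07% for these made-up numbers
```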
Table 6. Comparison of RL + IGh and multiple meta-heuristic algorithms.

Problem Instances | RL + IGh BR | RL + IGh AR | NEH + IG BR | NEH + IG AR | HCS BR | HCS AR | HCS ACT(s) | DABC BR | DABC AR | DABC ACT(s)
20 × 5 | 0 | 0.03 | 0.00 | 1.22 | 0.29 | 2.03 | 3.42 | 0.29 | 2.14 | 1.30
20 × 10 | 0 | 1.01 | 4.38 | 5.72 | 5.97 | 8.13 | 4.90 | 5.91 | 8.09 | 2.75
20 × 20 | 0 | 0.31 | 1.49 | 1.88 | 1.67 | 2.22 | 6.53 | 1.80 | 2.29 | 3.80
50 × 5 | 0 | 2.10 | 2.54 | 5.80 | 2.99 | 7.05 | 7.73 | 3.20 | 7.36 | 7.10
50 × 10 | 0 | 3.63 | 2.55 | 5.77 | 2.89 | 6.33 | 9.94 | 3.61 | 6.47 | 8.20
50 × 20 | 0 | 3.26 | 2.01 | 5.87 | 3.83 | 9.74 | 13.10 | 3.91 | 9.72 | 10.10
100 × 5 | 0 | 2.95 | 0.35 | 3.98 | 1.45 | 4.41 | 21.56 | 1.69 | 4.45 | 19.54
100 × 10 | 0 | 2.49 | 0.29 | 2.58 | 0.94 | 3.15 | 26.40 | 1.26 | 3.59 | 22.32
100 × 20 | 0 | 3.05 | 1.79 | 3.41 | 3.88 | 5.53 | 33.52 | 4.38 | 5.98 | 26.45
200 × 10 | 0 | 2.03 | 0.38 | 2.05 | 2.85 | 8.14 | 68.47 | 4.90 | 9.16 | 53.23
200 × 20 | 0 | 2.59 | 0.54 | 2.71 | 8.03 | 13.50 | 75.33 | 9.37 | 14.05 | 59.54
Average | 0 | 2.13 | 1.48 | 3.73 | 3.16 | 6.38 | 24.62 | 3.66 | 6.66 | 19.48
Table 7. Results achieved by the Wilcoxon test.

RL + IGh vs. | p-values of BR | p-values of AR
NEH + IG | 5.06 × 10^-3 | 3.35 × 10^-3
HCS | 3.35 × 10^-3 | 3.35 × 10^-3
DABC | 3.35 × 10^-3 | 3.35 × 10^-3
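The p-values in Table 7 come from pairwise Wilcoxon signed-rank tests. The snippet below shows how such a comparison is typically run with SciPy on paired per-instance results; for illustration it reuses the AR columns of Table 6, so the printed p-value is not expected to reproduce Table 7 exactly (the authors presumably tested their own detailed result data).

```python
# Sketch: pairwise Wilcoxon signed-rank test on paired per-instance results.
# The two arrays below are the AR columns of Table 6 for RL + IGh and NEH + IG,
# used only as an illustration of the procedure.
from scipy.stats import wilcoxon

rl_igh_ar = [0.03, 1.01, 0.31, 2.10, 3.63, 3.26, 2.95, 2.49, 3.05, 2.03, 2.59]
neh_ig_ar = [1.22, 5.72, 1.88, 5.80, 5.77, 5.87, 3.98, 2.58, 3.41, 2.05, 2.71]

result = wilcoxon(rl_igh_ar, neh_ig_ar)   # two-sided test on the paired differences
print(f"W = {result.statistic}, p = {result.pvalue:.4g}")
```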
Table 8. Results achieved by the proposed RL, PtrNet, and NEH.

Problem Instances | Proposed RL | PtrNet | NEH
Ta10 | 689 | 693 | 704
Ta20 | 4354 | 4407 | 4449
Ta30 | 8338 | 8379 | 8424
Ta40 | 1278 | 1339 | 1362
Ta50 | 5335 | 5458 | 5460
Ta60 | 9956 | 10,385 | 10,244
Ta70 | 2074 | 2395 | 2296
Ta80 | 6693 | 7065 | 6722
Ta90 | 17,089 | 18,767 | 17,406
Ta100 | 9931 | 11,278 | 10,124
Ta110 | 19,999 | 21,779 | 20,093
Average late work | 7794.2 | 8358.6 | 7934.9