Article

A Deep Learning Algorithm for the Max-Cut Problem Based on Pointer Network Structure with Supervised Learning and Reinforcement Learning Strategies

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Current address: 99 Shangda Road, Shanghai 200444, China.
Mathematics 2020, 8(2), 298; https://doi.org/10.3390/math8020298
Submission received: 5 January 2020 / Revised: 19 February 2020 / Accepted: 20 February 2020 / Published: 22 February 2020
(This article belongs to the Special Issue Advances and Novel Approaches in Discrete Optimization)

Abstract

The Max-cut problem is a well-known combinatorial optimization problem with many real-world applications. However, it has been proven to be non-deterministic polynomial-hard (NP-hard), which means that exact solution algorithms are not suitable for large-scale instances, as obtaining a solution is too time-consuming. Designing heuristic algorithms is therefore a promising but challenging direction for effectively solving large-scale Max-cut problems. For this reason, we propose in this paper a method which combines a pointer network with two deep learning strategies (supervised learning and reinforcement learning) to address this challenge. A pointer network is a sequence-to-sequence deep neural network which can extract data features in a purely data-driven way to discover the hidden laws behind data. Based on the characteristics of the Max-cut problem, we designed the input and output mechanisms of the pointer network model, and we used supervised learning and reinforcement learning to train the model and evaluate its performance. Our experiments illustrate that the model can be effectively applied to large-scale Max-cut problems, and the results suggest that this approach will encourage broader exploration of deep neural networks for large-scale combinatorial optimization problems.

1. Introduction

Combinatorial optimization is an important branch of operations research. It refers to solving problems over combinations of variables by minimizing (or maximizing) an objective function under given constraints, and is based on mathematical methods for finding optimal arrangements, groupings, orderings, or screenings of discrete events. A research hot-spot in combinatorial optimization, the Max-cut problem is one of the 21 classic non-deterministic polynomial (NP)-complete problems proposed by Richard M. Karp [1]. It consists of finding a maximum segmentation of a given graph, such that the sum of the weights of all edges across the two cut sets is maximized [2]. The Max-cut problem has a wide range of applications in engineering, such as Very Large Scale Integration (VLSI) circuit design, statistical physics, image processing, and communication network design [3]. As a solution of the Max-cut problem can be used to measure the robustness of a network [4] and as a standard for network classification [5], it can also be applied to social networks.
Many classic combinatorial optimization problems derived from engineering, economics, and other fields have been shown to be NP-hard, and the Max-cut problem considered in this paper is among them. Algorithms for combinatorial optimization problems can be roughly divided into two categories: one is represented by exact solution approaches, including enumeration methods [6] and branch and bound methods [7]; the other is represented by heuristic algorithms, including genetic algorithms, ant colony algorithms, simulated annealing, neural networks, Lin–Kernighan Heuristic (LKH) algorithms, and so on [8]. No polynomial-time algorithm is known that finds a global optimal solution for NP-hard problems. Compared with exact approaches, heuristic algorithms can deal with large-scale problems efficiently: they have clear advantages in computing efficiency and can be applied to problems with huge numbers of variables. A large number of heuristic algorithms have been proposed for the Max-cut problem, such as evolutionary algorithms and ant colony algorithms. However, the most obvious disadvantage of these algorithms is that they easily fall into local optima. For this reason, more and more researchers have begun working on novel and effective algorithms for large-scale Max-cut problems.
Deep learning is a research field which has developed very rapidly in recent years, achieving great success in many sub-fields of artificial intelligence. At its root, deep learning is a sub-problem of machine learning. Its main purpose is to automatically learn effective feature representations from large amounts of data, so as to better solve the credit assignment problem (CAP) [9]; that is, determining the contribution or influence of different components of a system, or of their parameters, on the final system output. The emergence of deep neural networks has opened up the possibility of solving large-scale combinatorial optimization problems. In recent years, with the development of the combination of deep neural networks and operations research, scholars have explored how to apply deep neural networks to such problems and have achieved notable results. The related research has mainly focused on algorithm design for combinatorial optimization problems based on pointer networks. Vinyals used the attention mechanism [10] to integrate a pointer structure into the sequence-to-sequence model, thus creating the pointer network. Bello improved the pointer network structure and used a policy gradient algorithm combined with temporal difference learning to train the pointer network with reinforcement learning to solve combinatorial optimization problems [11]. Mirhoseini removed the coding part of a recurrent neural network (RNN) and used the embedded input to replace the hidden state of the RNN; with this modification, the computational complexity was greatly reduced and the efficiency of the model was improved [12]. In Reference [13], a purely data-driven method for obtaining approximate solutions of NP-hard problems was proposed. In Reference [14], a pointer network was used to establish a flight decision prediction model. Khalil solved classical combinatorial optimization problems by Q-learning [15]. The pointer network model has also been used, in Reference [16], to solve the unweighted Max-cut problem. Similarly, in Reference [17], the unconstrained boolean quadratic programming problem (UBQP) was solved with a pointer network.
The remainder of this paper is organized as follows. Section 2 introduces the Max-cut problem and the method for generating its benchmark. Section 3 presents the pointer network model, including the long short-term memory network and the encoder–decoder structure. Section 4 introduces the two ways to train the pointer network model to solve the Max-cut problem, namely supervised learning and reinforcement learning. Section 5 details the experimental procedure and results. Section 6 provides the conclusions.

2. Motivation and Data Set Structure

2.1. Unified Model of the Max-Cut Problem

The definition of the Max-cut problem is given as follows.
An undirected graph $G = (V, E)$ consists of a set of vertices $V = \{1, 2, \ldots, n\}$ and a set of edges $E \subseteq V \times V$, where $w_{i,j}$ is the weight on the edge connecting vertex $i$ and vertex $j$. For any proper subset $S$ of the vertex set $V$, let
$$\delta(S) = \{ e_{i,j} \in E : i \in S,\ j \in V \setminus S \},$$
where $\delta(S)$ is the set of edges with one end in $S$ and the other end in $V \setminus S$. The cut $cut(S)$ determined by $S$ is then
$$cut(S) = \sum_{e_{i,j} \in \delta(S)} w_{i,j}.$$
In simple terms, the Max-cut problem is to find a partition $(S, V \setminus S)$ of the vertex set $V$ that maximizes the total weight of the edges crossing the partition.
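As a concrete illustration of this definition, the following minimal Python sketch (the graph and the function name are our own illustration, not from the paper) evaluates $cut(S)$ for a small weighted graph; the same weights reappear in Example 1 of Section 4.1.1:

```python
def cut_value(weights, S):
    """Sum the weights of the edges with exactly one endpoint in S.

    `weights` maps an edge (i, j) to its weight w_ij; `S` is a set of vertices.
    """
    return sum(w for (i, j), w in weights.items() if (i in S) != (j in S))

# Toy graph on vertices {1, 2, 3, 4}.
weights = {(1, 2): 3, (1, 4): 4, (2, 3): 5, (2, 4): 2, (3, 4): 1}
print(cut_value(weights, {1, 3}))  # 13 = 3 + 4 + 5 + 1; edge (2, 4) is not cut
```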

2.2. Benchmark Generator of the Max-Cut Problem

When applying deep learning to train and solve the Max-cut problem, whether supervised learning or reinforcement learning, a large number of training samples are necessary. The method of data set generation introduced here is to transform the {−1,1} quadratic programming problem into the Max-cut problem.
First of all, the benchmark generator method for the boolean quadratic programming (BQP) problem, proposed by Michael X. Zhou [18], is used to generate random {−1,1} quadratic programming problems, which can be solved in polynomial time. Next, inspired by [19], we transform the results of the previous step into solutions of the Max-cut problem. The specific implementation is described below.
Michael X. Zhou transformed the quadratic programming problem shown by Equation (3) into the dual problem shown by Equation (4) through the Lagrangian dual method.
$$\min \left\{ f(x) = \frac{1}{2} x^T Q x - c^T x \;\middle|\; x \in \{-1, 1\}^n \right\},$$
where $Q = Q^T \in \mathbb{R}^{n \times n}$ is a given indefinite matrix, and $c \in \mathbb{R}^n$ is a given non-zero vector.
The dual problem is described as follows:
$$\text{find } Q, c, x, \lambda \quad \text{s.t.} \quad (Q + \mathrm{diag}(\lambda))\, x = c, \quad Q + \mathrm{diag}(\lambda) \succeq 0, \quad x \in \{-1, 1\}^n.$$
Then, according to the paper [19], the solution of the {−1,1} quadratic programming problem can be transformed into the solution of the Max-cut problem.
The integer programming formulation of the Max-cut problem is given by:
$$\max\ \frac{1}{2} \sum_{i<j} w_{i,j} (1 - x_i \cdot x_j) \quad \text{s.t.}\quad x_i \in \{-1, 1\},\ i = 1, \ldots, n,$$
where the index $i$ in $x_i \in \{-1, 1\}$ represents vertex $i$, and $-1$ and $1$ represent the two sets. If $x_i \cdot x_j$ equals $1$, the vertices of edge $(i, j)$ are in the same set, and $(i, j) \in E$ is not a cut edge; if $x_i \cdot x_j$ equals $-1$, the vertices of edge $(i, j)$ are in different sets, and $(i, j) \in E$ is a cut edge. If $(i, j) \in E$ is a cut edge, $(1 - x_i \cdot x_j)/2$ equals $1$; if it is not a cut edge, $(1 - x_i \cdot x_j)/2$ equals $0$. Thus, the objective function represents the sum of the weights of the cut edges. Defining $S = \{ i : x_i = 1 \}$ and $\bar{S} = \{ i : x_i = -1 \}$, the weight of the cut is $w(S, \bar{S}) = \sum_{i<j} w_{i,j} (1 - x_i \cdot x_j)/2$.
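To make this correspondence concrete, here is a small brute-force sketch (our own illustration; names such as qp_objective are hypothetical) that evaluates the ±1 objective above on the toy graph used earlier and recovers the best cut by enumeration. The exponential 2^n search is exactly what becomes infeasible at scale:

```python
import itertools

def qp_objective(W, x):
    """Evaluate (1/2) * sum_{i<j} w_ij * (1 - x_i * x_j) for x in {-1, 1}^n."""
    n = len(x)
    return 0.5 * sum(W[i][j] * (1 - x[i] * x[j])
                     for i in range(n) for j in range(i + 1, n))

# Weight matrix of the toy graph; enumeration is only viable for small n.
W = [[0, 3, 0, 4],
     [3, 0, 5, 2],
     [0, 5, 0, 1],
     [4, 2, 1, 0]]
best = max(itertools.product([-1, 1], repeat=4), key=lambda x: qp_objective(W, x))
print(best, qp_objective(W, best))  # (-1, 1, -1, 1) with cut weight 13.0
```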
The pseudocode for generating the benchmark of the Max-cut problem is shown in Algorithm 1, where the parameter base is used to control the value range of the elements in matrix Q.

Algorithm 1 A benchmark generator for the Max-cut problem
Input: Dimension: n; base = 10;
Output: Matrix: Q; Vector: x
1: Randomly generate an n × n matrix whose entries follow the standard normal distribution to obtain Q;
2: Q = base × Q;
3: Convert Q to a symmetric matrix with Q = (Q + Q^T)/2;
4: Generate random numbers in the range (0, 1) of n rows and 1 column as a vector x;
5: x = 2x − 1;
6: Take the absolute value of Q and sum it over the rows, assigning the result to λ;
7: Place the values of the vector λ on the main diagonal of a square matrix Q̄ in order, letting all entries of Q̄ except the main diagonal be zero;
8: c = (Q + Q̄) × x;
9: Set additional variables w_{0i} = (1/4)(Σ_{j=1}^{i−1} q_{ji} + Σ_{j=i+1}^{n} q_{ij}) + (1/2)c_i, 1 ≤ i ≤ n, and w_{ij} = (1/4)q_{ij}, 1 ≤ i < j ≤ n;
10: Update Q: replace its rows (q_1^T, q_2^T, …, q_{n+1}^T) with (w_0^T, w_1^T, …, w_n^T), giving an (n+1) × (n+1) weight matrix;
11: Set an additional variable x_0 = 1, and let x = 2x + 1;
12: Update x: x = (x_1, x_2, …, x_{n+1}) ← (x_0, x_1, …, x_n).
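A NumPy sketch of the instance-construction part of Algorithm 1 (steps 1–8) is given below. This is our own reading of the pseudocode, so the function name and minor conventions (e.g., how the diagonal matrix Q̄ enters) are assumptions rather than the authors' exact generator:

```python
import numpy as np

def generate_bqp_instance(n, base=10, seed=0):
    """Steps 1-8 of Algorithm 1: a {-1, 1} QP instance with a planted solution x."""
    rng = np.random.default_rng(seed)
    Q = base * rng.standard_normal((n, n))   # steps 1-2: scaled Gaussian matrix
    Q = (Q + Q.T) / 2                        # step 3: symmetrize
    x = np.sign(2 * rng.random((n, 1)) - 1)  # steps 4-5: random sign vector
    lam = np.abs(Q).sum(axis=1)              # step 6: row sums of |Q|
    Q_bar = np.diag(lam)                     # step 7: diagonal matrix from lambda
    c = (Q + Q_bar) @ x                      # step 8
    return Q, c, x

Q, c, x = generate_bqp_instance(5)
print(Q.shape, c.shape, x.ravel())  # (5, 5) (5, 1) and a vector of +/-1 entries
```

Steps 9–12 then convert the instance into the (n+1)-dimensional Max-cut weight matrix via the w-transformation of step 9, following the reduction of Reference [19].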
This method for obtaining Max-cut benchmark data sets effectively resolves the difficulty of training the network for the Max-cut problem when a large number of training samples is lacking. However, the method has a common defect: in a training set obtained by using the dual problem to deduce solutions of the original problem, the data samples obey certain rules. This may make it difficult for deep learning to capture the general structure of the Max-cut problem during training.
Therefore, in addition to the above method, we consider using the benchmark generator in the Biq Mac Library for the Max-cut problem. The Biq Mac Library offers a collection of Max-cut instances; Biq Mac itself is a branch and bound code based on semi-definite programming (SDP). The dimension of the problems (i.e., the number of variables, or of vertices in the graph) ranges from 60 to 100. These instances are mainly used to test the pointer network model for the Max-cut problem.

3. Models

3.1. Long Short-Term Memory

It is difficult for traditional neural networks to use information about previous events to classify subsequent ones. However, an RNN can operate on information in a cyclic manner so that the information persists, thereby effectively processing time-series data of any length. Given an input sequence $x_{1:T} = (x_1, x_2, \ldots, x_t, \ldots, x_T)$, the RNN updates the activity value $h_t$ of the hidden layer with feedback and calculates the output sequence $y_{1:T} = (y_1, y_2, \ldots, y_t, \ldots, y_T)$ using the following equations:
$$h_t = \mathrm{sigmoid}(M_{hx} x_t + M_{hh} h_{t-1}),$$
$$y_t = M_{yh} h_t.$$
As long as the alignment between input and output is known in advance, an RNN can easily map sequences to sequences. However, an RNN cannot cope when the input and output sequences have different lengths or are related in complex, non-monotonic ways [20]. In addition, when the input sequence is long, gradient explosion and vanishing will occur [21]; this is also known as the long-range dependency problem. Many improvements have been made to RNNs to address these problems; the most effective so far is the gating mechanism.
A long short-term memory (LSTM) network [22] is a variant of the RNN and an outstanding embodiment of the gating mechanism. Figure 1 shows the structure of the loop unit of an LSTM. By applying the gated LSTM loop unit, the entire network can establish long-term timing dependencies and better control the path of information transmission. The equations of the LSTM model can be briefly described as:
$$\begin{pmatrix} \tilde{c}_t \\ o_t \\ i_t \\ f_t \end{pmatrix} = \begin{pmatrix} \tanh \\ \sigma \\ \sigma \\ \sigma \end{pmatrix} \left( M \begin{pmatrix} x_t \\ h_{t-1} \end{pmatrix} + b \right),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,$$
$$h_t = o_t \odot \tanh(c_t),$$
where $x_t \in \mathbb{R}^e$ is the input at the current time; $M \in \mathbb{R}^{4d \times (d+e)}$ and $b \in \mathbb{R}^{4d}$ are network parameters; $\sigma(\cdot)$ is the logistic function, with output interval $(0, 1)$; $h_{t-1}$ is the external state at the previous time; $\odot$ is the element-wise product of vectors; $c_{t-1}$ is the memory unit at the previous moment; and $\tilde{c}_t$ is the candidate state obtained by a non-linear function. At each time $t$, the internal state $c_t$ of the LSTM records historical information up to the current time. The three gates that control the path of information transmission are $f_t$, $i_t$, and $o_t$. Their functions are:
  • The forget gate $f_t$ controls how much information from the previous state $c_{t-1}$ should be forgotten;
  • The input gate $i_t$ controls how much information from the candidate state $\tilde{c}_t$ should be saved at the current moment; and
  • The output gate $o_t$ controls how much information from the internal state $c_t$ at the current moment should be output to the external state $h_t$.
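To make Equations (8)–(10) concrete, the following NumPy sketch (our illustration; the stacking order of the gates inside M follows Equation (8), everything else is an assumption) performs one LSTM step:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, M, b):
    """One LSTM step following Equations (8)-(10).

    M has shape (4d, d+e), stacking the candidate, output, input, and
    forget gate weights; b has shape (4d,).
    """
    d = h_prev.shape[0]
    z = M @ np.concatenate([x_t, h_prev]) + b
    c_tilde = np.tanh(z[0:d])           # candidate state
    o_t = sigmoid(z[d:2 * d])           # output gate
    i_t = sigmoid(z[2 * d:3 * d])       # input gate
    f_t = sigmoid(z[3 * d:4 * d])       # forget gate
    c_t = f_t * c_prev + i_t * c_tilde  # Equation (9)
    h_t = o_t * np.tanh(c_t)            # Equation (10)
    return h_t, c_t

# Tiny smoke test with e = 3 inputs and d = 2 hidden units.
rng = np.random.default_rng(0)
d, e = 2, 3
M, b = rng.standard_normal((4 * d, d + e)), np.zeros(4 * d)
h, c = np.zeros(d), np.zeros(d)
for x_t in rng.standard_normal((5, e)):  # a length-5 input sequence
    h, c = lstm_step(x_t, h, c, M, b)
print(h)
```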
In our algorithm, the purpose of the LSTM is to estimate the conditional probability $p(y_1, \ldots, y_{T'} \mid x_1, \ldots, x_T)$, where $(x_1, \ldots, x_T)$ is the input sequence and $(y_1, \ldots, y_{T'})$ is the corresponding output sequence, whose length $T'$ may differ from $T$. The LSTM first obtains a fixed-dimensional representation $X$ of the input sequence $(x_1, \ldots, x_T)$ (given by the last hidden state of the LSTM), then calculates $y_1, \ldots, y_{T'}$, with the initial hidden state of the decoding stage set to this representation of $x_1, \ldots, x_T$:
$$p(y_1, \ldots, y_{T'} \mid x_1, \ldots, x_T) = \prod_{t=1}^{T'} p(y_t \mid X, y_1, \ldots, y_{t-1}),$$
where each distribution $p(y_t \mid X, y_1, \ldots, y_{t-1})$ is represented by a softmax over all variables in the input Max-cut problem matrix.

3.2. Encoder–Decoder Model

The encoder–decoder model is also called the asynchronous sequence-to-sequence model; that is, the input and output sequences neither need to have a strict correspondence nor the same length. Compared with traditional structures, this greatly expands the application scope of the model. It can directly model sequence problems in a purely data-driven manner and can be trained end-to-end, which makes it very suitable for solving combinatorial optimization problems.
In the encoder–decoder model (shown in Figure 2), the input is a sequence $x_{1:T} = (x_1, \ldots, x_T)$ of length $T$, and the output is a sequence $y_{1:T'} = (y_1, \ldots, y_{T'})$ of length $T'$. The model first encodes and then decodes. Firstly, a sample $x$ is fed into an RNN (the encoder) at successive times to obtain its encoding $h_T$. Secondly, another RNN (the decoder) is used to obtain the output sequence $\hat{y}_{1:T'}$. In order to establish the dependence between outputs, a non-linear autoregressive model is usually used in the decoder:
$$h_t = f_1(h_{t-1}, x_t), \quad t \in [1, T],$$
$$h_{T+t} = f_2(h_{T+t-1}, \hat{y}_{t-1}), \quad t \in [1, T'],$$
$$y_t = g(h_{T+t}), \quad t \in [1, T'],$$
where $f_1(\cdot)$ and $f_2(\cdot)$ are the RNNs used as encoder and decoder, respectively; $g(\cdot)$ is a classifier; and $\hat{y}_t$ are the vector representations used to predict the output.
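The following minimal sketch (our own; f1, f2, and g are random-weight stand-ins for trained networks) wires Equations (12)–(14) together:

```python
import numpy as np

def run_encoder_decoder(x_seq, T_prime, f1, f2, g, h0, y0):
    """Asynchronous seq2seq per Equations (12)-(14): encode, then decode."""
    h = h0
    for x_t in x_seq:            # Equation (12): encoder recursion over the input
        h = f1(h, x_t)
    outputs, y_prev = [], y0
    for _ in range(T_prime):     # Equations (13)-(14): autoregressive decoder
        h = f2(h, y_prev)
        y_prev = g(h)
        outputs.append(y_prev)
    return outputs

# Tiny random-weight demo: 2-dimensional inputs/outputs, 4 hidden units.
rng = np.random.default_rng(1)
A, B, C = rng.standard_normal((4, 4)), rng.standard_normal((4, 2)), rng.standard_normal((2, 4))
f1 = lambda h, x: np.tanh(A @ h + B @ x)  # encoder RNN cell
f2 = lambda h, y: np.tanh(A @ h + B @ y)  # decoder RNN cell (feeds back y)
g = lambda h: C @ h                       # classifier head (here just linear)
print(run_encoder_decoder(rng.standard_normal((3, 2)), 2, f1, f2, g, np.zeros(4), np.zeros(2)))
```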

3.3. Pointer Network

The amount of information that can be stored in a neural network is called the network capacity. Generally speaking, if more information needs to be stored, then more neurons are needed or the network must be more complicated, which will cause the number of necessary parameters of the neural network to increase exponentially. Although general RNNs have strong capabilities, when dealing with complex tasks, such as processing large amounts of input information or complex computing processes, the computing power of computers is still a bottleneck that limits the development of neural networks.
In order to reduce the computational burden, we borrow a mechanism from the human brain for handling information overload: we add an attention mechanism to the RNN. When computing power is limited, attention acts as a resource allocation scheme, assigning computing resources to the more important tasks.
A pointer network is a typical application combining an attention mechanism with a neural network. The attention distribution is used as a soft pointer to indicate the location of relevant information. To save computing resources, it is not necessary to feed all of the information into the neural network; only the information related to the task needs to be selected from the input sequence $X$. A pointer network [10] is also an asynchronous sequence-to-sequence model. The input is a sequence $X = (x_1, \ldots, x_T)$ of length $T$, and the output is a sequence $y_{1:T'} = (y_1, y_2, \ldots, y_{T'})$. Unlike general sequence-to-sequence tasks, the output sequence here consists of indices into the input sequence. For example, when the input is a group of out-of-order numbers, the output is the index sequence that sorts them by size (e.g., if the input is 20, 5, 10, then the output is 1, 3, 2).
The conditional probability $p(y_{1:T'} \mid x_{1:T})$ can be written as:
$$p(y_{1:T'} \mid x_{1:T}) = \prod_{i=1}^{m} p(y_i \mid y_{1:i-1}, x_{1:T}) = \prod_{i=1}^{m} p(y_i \mid x_{y_1}, \ldots, x_{y_{i-1}}, x_{1:T}),$$
where the conditional probability $p(y_i \mid x_{y_1}, \ldots, x_{y_{i-1}}, x_{1:T})$ can be calculated using the attention distribution. Suppose that an RNN is used to encode $x_{y_1}, \ldots, x_{y_{i-1}}$ and $x_{1:T}$ to obtain the vector $h_i$; then
$$p(y_i \mid y_{1:i-1}, x_{1:T}) = \mathrm{softmax}(s_{i,j}),$$
where $s_{i,j}$ is the unnormalized attention distribution over each input vector at the $i$th decoding step,
$$s_{i,j} = v^T \tanh(U_1 x_j + U_2 h_i), \quad j \in [1, T],$$
where $v$, $U_1$, and $U_2$ are learnable parameters.
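A small NumPy sketch of this pointing step (our illustration; the dimensions and names are arbitrary) computes the scores of Equation (17) and normalizes them with a softmax as in Equation (16):

```python
import numpy as np

def pointer_distribution(x_enc, h_i, v, U1, U2):
    """s_{i,j} = v^T tanh(U1 x_j + U2 h_i), softmaxed over j; the result
    'points' at positions of the input sequence."""
    scores = np.array([v @ np.tanh(U1 @ x_j + U2 @ h_i) for x_j in x_enc])
    exp = np.exp(scores - scores.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
T, d = 4, 8                           # input length, hidden size
x_enc = rng.standard_normal((T, d))   # encoded input vectors
h_i = rng.standard_normal(d)          # decoder state at output step i
v = rng.standard_normal(d)
U1, U2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
p = pointer_distribution(x_enc, h_i, v, U1, U2)
print(p, p.argmax())  # distribution over input positions and the pointed index
```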
Figure 3 shows an example of a pointer network.

4. Learning Mechanism

Machine learning methods can be classified according to different criteria. Generally speaking, according to the information provided by the training samples and different feedback mechanisms, we classify machine learning algorithms into three categories: supervised learning, unsupervised learning, and reinforcement learning. Our algorithm uses supervised learning (SL) and reinforcement learning (RL) to train the pointer network model to obtain the solution of the Max-cut problem, which will be described in detail below.

4.1. Supervised Learning

4.1.1. Input and Output Design

A feature of the Max-cut problem is that each variable is either 0 or 1, so the problem is equivalent to selecting, from all variables, the subset of variables taking the value 1 so as to maximize the objective function. This is a typical selection problem in combinatorial optimization. The goal of supervised learning is to learn the relationship between the input $x$ and the output $y$ by modeling $y = f(x; \theta)$ or $p(y \mid x; \theta)$. For the Max-cut problem, the pointer network uses an $n \times n$ symmetric matrix $Q$ to represent the input sequence of the $n$ nodes, where $q_{ij}$ is an element of the symmetric matrix representing the weight of the connection between vertex $i$ and vertex $j$ ($q_{ij} \neq 0$ indicates an edge, while $q_{ij} = 0$ means there is no connection between vertex $i$ and vertex $j$). The output sequence of the pointer network is represented by $X = (x_1, x_2, \ldots, x_n)$, whose entries take only two values, 0 and 1. Vertices with value 0 and vertices with value 1 are divided into two different sets. Summing the weights of all edges across the two cut sets yields the solution to the Max-cut problem.
The following example is used to explain the input and output design of the pointer network to solve the Max-cut problem.
Example 1.
$$f(x) = 3x_1x_2 + 4x_1x_4 + 5x_2x_3 + 2x_2x_4 + x_3x_4, \quad x_i \in \{0, 1\},\ (i = 1, \ldots, 4).$$
The symmetric matrix Q of the above problem can be expressed as:
$$Q = \begin{pmatrix} 0 & 3 & 0 & 4 \\ 3 & 0 & 5 & 2 \\ 0 & 5 & 0 & 1 \\ 4 & 2 & 1 & 0 \end{pmatrix},$$
and the characteristics of the variables $x_1$, $x_2$, $x_3$, and $x_4$ are represented by the vectors $q_1 = (0, 3, 0, 4)^T$, $q_2 = (3, 0, 5, 2)^T$, $q_3 = (0, 5, 0, 1)^T$, and $q_4 = (4, 2, 1, 0)^T$, respectively.
For this example, the optimal solution of the Max-cut problem is $x = (1, 0, 1, 0)^T$. The sequence $(q_1, q_2, q_3, q_4)$ is the input of the pointer network, and the known optimal solution is used to train the network model and guide the model to select $q_1$ and $q_3$. The input vectors selected by the decoder correspond to variables with value 1, while the unselected vectors correspond to variables with value 0.
For the output part of the pointer network model, given the $n \times n$ input matrix, we design an $(n+1) \times (n+1)$ matrix to represent the network output. For Example 1, the output result is a label that can be described by the matrix $O_{label}$:
$$O_{label} = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
The relationship between $O_{label}$ and the variable $x$ is:
$$x_j = \begin{cases} 1, & \text{if } o_{ij} = 1,\ j \neq 0; \\ \text{EOS}, & \text{if } o_{ij} = 1,\ j = 0; \\ 0, & \text{otherwise}. \end{cases}$$
We use $\text{EOS} = (1, 0, \ldots, 0)^T$ to indicate the end of the pointer network solution process. After model training is completed, the softmax probability distribution of the output matrix is obtained; the corresponding result might look like the matrix $O_{predict}$ below. In the solution phase, for each row we select the position with the highest probability in the output distribution and set it to 1, with the remaining positions set to 0. According to $O_{predict}$, the pointer network selects the variables $x_1$ and $x_3$ to take the value 1, and the remaining variables take the value 0, which is consistent with the result selected by $O_{label}$:
$$O_{predict} = \begin{pmatrix} 0.03 & 0.8 & 0.02 & 0.1 & 0.05 \\ 0.1 & 0 & 0.2 & 0.7 & 0 \\ 0.9 & 0.03 & 0.03 & 0.01 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
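The row-wise decoding just described can be sketched in a few lines of Python (our illustration; the matrix is the $O_{predict}$ above):

```python
import numpy as np

# Row-wise argmax recovers the selected indices; column 0 plays the role of
# EOS, and every variable selected before EOS gets the value 1.
O_predict = np.array([
    [0.03, 0.8, 0.02, 0.1, 0.05],
    [0.1, 0.0, 0.2, 0.7, 0.0],
    [0.9, 0.03, 0.03, 0.01, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0, 0.0],
])
x = np.zeros(4, dtype=int)
for row in O_predict:
    j = int(row.argmax())
    if j == 0:        # EOS reached: the remaining variables stay 0
        break
    x[j - 1] = 1
print(x)  # [1 0 1 0] -> x1 = x3 = 1, matching O_label
```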

4.1.2. Algorithm Design

When training deep neural networks on $N$ given training samples $\{(x^{(n)}, y^{(n)})\}_{n=1}^{N}$, the softmax regression in supervised learning uses the cross entropy as the loss function and uses gradient descent to optimize the parameter matrix $W$. The goal of training is to learn the parameters that minimize the cross-entropy loss. In practical applications, the mini-batch stochastic gradient descent (SGD) method has the advantages of fast convergence and small computational overhead, so it has gradually become the main optimization algorithm in large-scale machine learning [23]. Therefore, during the training process, we use mini-batch SGD: at each iteration, we randomly select a small number of training samples to calculate the gradient and update the parameters. Assuming the number of samples per mini-batch is $K$, the training process of softmax regression is as follows: initialize $W_0 \leftarrow 0$, then iteratively update with
$$W_{t+1} \leftarrow W_t + \alpha \left( \frac{1}{K} \sum_{n=1}^{K} x^{(n)} \left( y^{(n)} - \hat{y}_{W_t}^{(n)} \right)^T \right),$$
where $\alpha$ is the learning rate, $\hat{y}_{W_t}^{(n)}$ is the output of the softmax regression model for sample $n$ when the parameter is $W_t$, and the sum runs over the $K$ samples of the current mini-batch.
The training process of mini-batch SGD is shown in Algorithm 2.
Algorithm 2 Mini-batch SGD for the pointer network
Input: training set: D = {(x^(n), y^(n))}_{n=1}^{N}; mini-batch size: K; number of training steps: L; learning rate: α;
Output: optimal W
1: randomly initialize W;
2: repeat
3:     randomly reorder the samples in training set D;
4:     for t = 1, …, L do
5:         select K samples (x^(n), y^(n)) from the training set D;
6:         update the parameters: W_{t+1} ← W_t + α((1/K) Σ_{n=1}^{K} x^(n)(y^(n) − ŷ_{W_t}^(n))^T);
7:     end for
8: until the error rate of model f(x; W) no longer decreases.
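As a self-contained sketch of Algorithm 2, here is a simplified version applied to plain softmax regression (our own simplification: the pointer network is replaced by a linear model so the update of Equation (20) is directly visible; all names are hypothetical):

```python
import numpy as np

def train_softmax_sgd(X, Y, K=32, L=100, alpha=0.1, epochs=10, seed=0):
    """Mini-batch SGD for softmax regression, following Algorithm 2.

    X: (N, d) inputs; Y: (N, C) one-hot labels. Each update applies
    Equation (20): W <- W + alpha * (1/K) * sum_n x^(n) (y^(n) - y_hat^(n))^T.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    C = Y.shape[1]
    W = np.zeros((d, C))                                 # initialize W
    for _ in range(epochs):                              # 'repeat' loop
        order = rng.permutation(N)                       # randomly reorder D
        for t in range(min(L, N // K)):                  # training steps
            idx = order[t * K:(t + 1) * K]               # select K samples
            logits = X[idx] @ W
            logits -= logits.max(axis=1, keepdims=True)  # stable softmax
            Y_hat = np.exp(logits)
            Y_hat /= Y_hat.sum(axis=1, keepdims=True)
            W += alpha / K * X[idx].T @ (Y[idx] - Y_hat)  # Equation (20)
    return W

# Usage on random two-class data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
Y = np.eye(2)[(X[:, 0] > 0).astype(int)]
W = train_softmax_sgd(X, Y)
print((np.argmax(X @ W, axis=1) == Y.argmax(axis=1)).mean())  # training accuracy
```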

4.2. Reinforcement Learning

Reinforcement learning is a very attractive method in machine learning. It can be described as an agent continuously learning from interaction with the environment to achieve a specific goal (such as obtaining the maximum reward value). The difference between reinforcement learning and supervised learning is that reinforcement learning does not need to give the “correct” strategy as supervised information, it only needs to give the return of the strategy and then adjust the strategy to achieve the maximum expected return. Reinforcement learning is closer to the nature of biological learning and can cope with a variety of complex scenarios, thus coming closer to the goal of general artificial intelligence systems.
The basic elements in reinforcement learning include:
  • The agent perceives the state of the external environment and the reward fed back, and makes decisions;
  • The environment is everything outside the agent; it is affected by the actions of the agent, changing its state and feeding the corresponding reward back to the agent;
  • The state s is a description of the environment, which can be discrete or continuous; its state space is S;
  • The action a is a description of the behavior of the agent, which can be discrete or continuous; its action space is A;
  • The reward r(s, a, s′) is a scalar function; that is, after the agent makes an action a based on the current state s, the environment gives a reward to the agent, which is related to the state s′ at the next moment.
For simplicity, we consider the interactions between agent and environment as a discrete time-series in this paper. Figure 4 shows the interaction between an agent and an environment.

4.2.1. Input and Output Design

The pointer network input under reinforcement learning is similar to that under supervised learning. The only difference is that, with reinforcement learning, a special symbol Split needs to be added, as reinforcement learning only focuses on the variables selected before Split. Split is a separator that divides the variables into two types; we use the rule that all variables output before Split take the value 1, and all variables output after Split take the value 0. We use the zero vector to represent Split. Therefore, to extend the $n$-dimensional matrix $Q$ to $n + 1$ dimensions, we append a row and a column of zeros as the last row and column of $Q$. Under this rule, taking Example 1 as an example, the matrix $Q$ is converted into the matrix $P$, and the input sequence of the pointer network is $(p_1, p_2, p_3, p_4, Split)$:
$$P = \begin{pmatrix} 0 & 3 & 0 & 4 & 0 \\ 3 & 0 & 5 & 2 & 0 \\ 0 & 5 & 0 & 1 & 0 \\ 4 & 2 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
Similar to supervised learning, at the output of the pointer network, a symbol EOS is added to divide the output vertices into two sets. For Example 1, the output of the pointer network is $(1, 3, \text{EOS}, 2, 4)$, which means that the four vertices are divided into the two sets $\{1, 3\}$ and $\{2, 4\}$. The numbers before EOS indicate that the variables at those vertex positions take the value 1, and the numbers after EOS indicate the value 0. The Max-cut value can thus be calculated from the two sets, and it is this value that is used as the reward in reinforcement learning.
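A small sketch of this input padding and of turning an output ordering into a reward (our own illustration; the function names are hypothetical):

```python
import numpy as np

def add_split_symbol(Q):
    """Pad Q with a zero row and column: the zero vector encodes Split."""
    n = Q.shape[0]
    P = np.zeros((n + 1, n + 1))
    P[:n, :n] = Q
    return P

def cut_value_from_order(Q, order, eos_pos):
    """Reward: vertices before EOS form S (value 1), the rest form V \\ S."""
    S = set(order[:eos_pos])
    n = Q.shape[0]
    return sum(Q[i, j] for i in range(n) for j in range(i + 1, n)
               if (i in S) != (j in S))

Q = np.array([[0, 3, 0, 4], [3, 0, 5, 2], [0, 5, 0, 1], [4, 2, 1, 0]], dtype=float)
print(add_split_symbol(Q).shape)                 # (5, 5)
print(cut_value_from_order(Q, [0, 2, 1, 3], 2))  # 13.0: S = {1, 3} in 1-based labels
```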

4.2.2. Actor–Critic Algorithm

The actor–critic algorithm is a reinforcement learning method which combines a policy gradient and temporal difference learning. We combine the input–output structure characteristics of the Max-cut problem with the actor–critic algorithm in reinforcement learning to train the pointer network model. The actor–critic algorithm used for solving such combinatorial optimization problems uses the same pointer network encoder for both the actor network and the critic network. First, the actor network encodes the input sequence. Next, the decoder part selects the variable with value 1, according to the probability. The critic network encodes the input sequence, then predicts the optimal value of the Max-cut problem using a value function.
In the actor–critic algorithm, ϕ ( s ) is the input to the actor network, which corresponds to the given symmetric matrix Q in the Max-cut problem; that is, Q is used as the input sequence of the actor network. The actor refers to the policy function π θ ( s , a ) , which can learn a strategy to obtain the highest possible reward. For the Max-cut problem, π θ ( s , a ) represents the strategy scheme in which variables are selected as 1. The critic refers to the value function V ϕ ( s ) , which estimates the value function of the current strategy. With the help of the value function, the actor–critic algorithm can update the parameters in a single step, without having to wait until the end of the round to update. In the actor–critic algorithm, the policy function π θ ( s , a ) and the value function V ϕ ( s ) are both functions that need to be learned simultaneously during the training process.
Assuming the return $G_{\tau_{t:T}}$ from time $t$, we use Equation (21) to approximate it:
$$\hat{G}_{\tau_{t:T}} = r_{t+1} + \gamma V_\phi(s_{t+1}),$$
where $s_{t+1}$ is the state at time $t + 1$ and $r_{t+1}$ is the instant reward.
In each update step, the strategy function $\pi_\theta(s, a)$ and the value function $V_\phi(s)$ are learned. On one hand, the parameter $\phi$ is updated such that the value function $V_\phi(s_t)$ is close to the estimated real return $\hat{G}_{\tau_{t:T}}$:
$$\min_\phi \left( \hat{G}_{\tau_{t:T}} - V_\phi(s_t) \right)^2.$$
On the other hand, the value function $V_\phi(s_t)$ is used as a baseline to update the policy parameter, in order to reduce the variance of the policy gradient:
$$\theta \leftarrow \theta + \alpha \gamma^t \left( \hat{G}_{\tau_{t:T}} - V_\phi(s_t) \right) \nabla_\theta \log \pi_\theta(a_t \mid s_t).$$
In each update step, the actor performs an action $a$ according to the current environment state $s$ and the strategy $\pi_\theta(a \mid s)$; the environment state becomes $s'$, and the actor obtains an instant reward $r$. The critic (the value function $V_\phi(s)$) adjusts its own scoring standard according to the real reward given by the environment and the previous score $(r + \gamma V_\phi(s'))$, such that its score comes closer to the real return of the environment. The actor then adjusts its strategy $\pi_\theta$ according to the critic's score and strives to do better next time. At the beginning of training, the actor acts randomly and the critic scores randomly. Through continuous learning, the critic's ratings become more and more accurate, and the actor's actions become better and better.
Algorithm 3 shows the training process of the actor–critic algorithm.
Algorithm 3 Actor–critic algorithm
Input: state space: S; action space: A; differentiable strategy function: π_θ(a|s); differentiable state value function: V_φ(s); discount rate: γ; learning rates: α > 0, β > 0;
Output: strategy π_θ
1: randomly initialize θ and φ;
2: repeat
3:     initialize the starting state s;
4:     λ = 1;
5:     repeat
6:         in state s, select an action a ∼ π_θ(a|s);
7:         perform the action a to get an instant reward r and a new state s′;
8:         δ ← r + γV_φ(s′) − V_φ(s);
9:         φ ← φ + βδ∇_φV_φ(s);
10:        θ ← θ + αλδ∇_θ log π_θ(a|s);
11:        λ ← γλ;
12:        s ← s′;
13:     until s is a terminal state;
14: until θ converges.
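A compact sketch of the inner-loop update (lines 6–12 of Algorithm 3) follows; the linear/tabular stand-ins for $\pi_\theta$ and $V_\phi$ are our own, purely for illustration:

```python
import numpy as np

def actor_critic_step(s, a, r, s_next, done, theta, phi,
                      grad_log_pi, v, grad_v, alpha, beta, gamma, lam):
    """One update of Algorithm 3: TD error, critic step, actor step."""
    target = r if done else r + gamma * v(phi, s_next)
    delta = target - v(phi, s)                                      # line 8
    phi = phi + beta * delta * grad_v(phi, s)                       # line 9
    theta = theta + alpha * lam * delta * grad_log_pi(theta, s, a)  # line 10
    return theta, phi, gamma * lam                                  # line 11

# Tabular usage: 2 states, 2 actions, linear value, softmax policy.
v = lambda phi, s: phi[s]
grad_v = lambda phi, s: np.eye(2)[s]
def grad_log_pi(theta, s, a):
    p = np.exp(theta[s]) / np.exp(theta[s]).sum()  # softmax over actions
    g = np.zeros_like(theta)
    g[s] = -p
    g[s, a] += 1.0                                 # gradient of log pi(a|s)
    return g

theta, phi, lam = np.zeros((2, 2)), np.zeros(2), 1.0
theta, phi, lam = actor_critic_step(0, 1, 1.0, 1, False, theta, phi,
                                    grad_log_pi, v, grad_v, 0.1, 0.1, 0.9, lam)
print(theta, phi, lam)
```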

5. Experimental Results and Analysis

Based on the TensorFlow framework, we use two learning strategies (supervised learning and reinforcement learning) to train a pointer network and predict solutions to the Max-cut problem. The model is trained on a deep learning server platform with two NVIDIA TITAN Xp GPUs and an Intel Core i9-7960X CPU.
The initial parameters of the pointer network are randomly generated from a uniform distribution on [−0.08, 0.08], and the initial learning rate is 0.001. When supervised learning is applied to train the pointer network, the model uses a single-layer LSTM with 256 hidden units and is trained with mini-batch SGD. When reinforcement learning is applied, the model uses three LSTM layers with 128 hidden units each and is trained with the actor–critic algorithm. In the prediction stage, the temperature parameter of the pointer network is set to 3, and the initial reward baseline is set to 100. The model tested in the prediction phase is the one from the last iteration of the training phase. For Max-cut problems of different dimensions, except for increasing the sequence length, the hyperparameter settings are identical. In the implementation, we use the Adam algorithm to adjust the learning rate; Adam makes effective dynamic adjustments that keep the parameter updates relatively stable [24].
We constructed a data set based on the method described in Section 2.2 (transforming the {−1,1} quadratic programming problem into the Max-cut problem), which we refer to as the Zhou data set. To avoid losing generality, we also used the Binary quadratic and Max cut Libraries (Biq Mac Library), the most commonly used benchmark collection for the Max-cut problem. We performed experiments on the Zhou data set and the Biq Mac Library data set, respectively.

5.1. Experiments on Zhou Data Set

Using the method for randomly generating Max-cut problems in Section 2.2, a large number of Max-cut problems with known exact solutions were obtained and formed into data sets with the specified input and output formats. The training and test sets were generated from the same probability distribution; the density of the input matrix Q in the data samples is 94.6%, and each sample in the training and test sets is unique. We divided the data randomly into training and test sets at a ratio of 10:1: for each dimension, the training set contained 1000 samples and the test set contained 100 samples. The maximum number of training iterations was set to 100,000. The accuracy of the solution produced by the trained model is defined as:
$$\text{Accuracy} = \frac{v_{\text{Ptr-Net}}}{v_{\text{Opt}}} \times 100\%,$$
where $v_{\text{Ptr-Net}}$ is the solution value of the trained pointer network model and $v_{\text{Opt}}$ is the optimal value of the Max-cut problem.
We first used supervised learning to train the pointer network on 10-, 30-, 50-, 60-, 70-, 80-, 90-, 100-, 150-, and 200-dimensional Max-cut problems. Table 1 shows the average accuracy for these dimensions; the detailed experimental results are listed in Table 2.
The pointer network based on reinforcement learning was then trained on the Zhou data set for 10-, 50-, 100-, 150-, 200-, 250-, and 300-dimensional Max-cut problems. Table 3 shows the average accuracy for these dimensions; the detailed experimental results are listed in Table 4.
It can be seen from Table 1 and Table 3 that, regardless of whether supervised learning or reinforcement learning was used, the average accuracy of the pointer network solution decreased as the dimension increased; however, the average accuracy under reinforcement learning decreased only slightly. Secondly, comparing the two tables, the pointer network model obtained through reinforcement learning was more accurate than that obtained by supervised learning. Finally, training the model with reinforcement learning took far less time than with supervised learning. Figure 5 shows the accuracy of the solutions for the Max-cut problem samples trained with supervised learning and reinforcement learning.

5.2. Experiments on Biq Mac Library

In order to further verify the generalization ability of the pointer network model, ten groups each of 60-, 80-, and 100-dimensional Max-cut samples were selected from the Biq Mac Library (http://biqmac.uni-klu.ac.at/biqmaclib.html). As the Biq Mac Library only has ten groups of data for each dimension, this is not enough to train the pointer network (training the model requires at least 100 groups of data), so we used the Biq Mac Library only as the test set; the Zhou data set was still used as the training set.
Table 5 shows the detailed experimental results of the Max-cut problem with 60, 80, and 100 dimensions using the Biq Mac Library by reinforcement learning.
The average prediction results are shown in Table 6.
It can be seen from Table 3 and Table 6 that the average accuracies when predicting on the Biq Mac Library using reinforcement learning were lower than those on the Zhou data set. This is because the Biq Mac Library is composed of data samples with different distributions, which better characterize the essential features of the Max-cut problem. We believe that, in future research, if the model can be trained on a larger training set following the distribution of the Biq Mac Library, its performance can definitely be improved.

6. Conclusions

In this paper, we proposed an effective deep learning method based on a pointer network for the Max-cut problem. We first analyzed the structural characteristics of the Max-cut problem and introduced a method to generate a large data set of Max-cut instances. Then, the algorithmic frameworks for training the pointer network model under two learning strategies (supervised learning and reinforcement learning) were introduced in detail. We applied the supervised learning and reinforcement learning strategies separately to train the pointer network model and experimented with Max-cut problems of different dimensions. The experimental results revealed that, for low-dimensional Max-cut problems (below 50 dimensions), the models trained by supervised learning and reinforcement learning both achieve high accuracy, and their accuracies are basically consistent. For high-dimensional cases (above 50 dimensions), the accuracy of the solution under reinforcement learning was significantly better than under supervised learning. This illustrates that reinforcement learning can better discover the essential characteristics of the Max-cut problem and can mine better solutions from the data. This finding will guide us in further improving the performance and potential of pointer networks as a deep learning method for Max-cut problems and other combinatorial optimization problems in future research.

Author Contributions

S.G. put forward the idea and algorithms. S.G. investigated and supervised the project. Y.Y. and S.G. simulated the results. Y.Y. validated and summarized the results in tables. S.G. and Y.Y. prepared and wrote the article. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in this paper was supported by the National Natural Science Foundation of China under Grant 61876105.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
NP-hard   Non-deterministic Polynomial-hard
NP        Non-deterministic Polynomial
VLSI      Very Large Scale Integration
LKH       Lin–Kernighan Heuristic
CAP       Credit Assignment Problem
RNN       Recurrent Neural Network
UBQP      Unconstrained Boolean Quadratic Programming
BQP       Boolean Quadratic Programming
SDP       Semi-Definite Programming
LSTM      Long Short-Term Memory
SL        Supervised Learning
RL        Reinforcement Learning
SGD       Stochastic Gradient Descent

Variables

The following variables are used in this manuscript:
D            Training set
E            Edge set, E ⊆ V × V
G            Undirected graph of the Max-cut problem, G = (V, E)
G_{τ_{t:T}}  The return from time t in actor–critic
K            The mini-batch size
L            Number of training steps
M            Network parameter in LSTM, M ∈ R^{4d×(d+e)}
O_label      Label matrix of the output
O_predict    The probability distribution of the output matrix
P            The transformed reinforcement learning input matrix
Q            Adjacency matrix, Q = Q^T = (q_ij)_{n×n}, with q_ij (i = j) zero
S            Subset of the vertex set
U            Learnable parameter in the attention mechanism
V            Vertex set, V = {1, 2, …, n}
V_φ          Value function of the actor–critic algorithm
W            Parameter matrix to be updated in mini-batch SGD
X            Input sequence, X = x_{1:T}
Y            Output sequence, Y = y_{1:T′}
a            Action in the agent–environment interaction
b            Network parameter in LSTM, b ∈ R^{4d}
c            Non-zero vector, c ∈ R^n
c_t          Memory unit at the current moment
f_t          Forget gate at the current moment
f(x)         f(x) = (1/2) x^T Q x − c^T x
h            Hidden layer
i_t          Input gate at the current moment
n            Dimension of the Max-cut problem
o_t          Output gate at the current moment
p            Conditional probability
r            Reward of the agent–environment interaction
s            State of the agent–environment interaction
s_{i,j}      Unnormalized attention distribution over each input vector
v            Learnable parameter in the attention mechanism
w_{i,j}      The weight on the edge connecting vertex i and vertex j
α            Learning rate in mini-batch SGD
β            Learning rate in the actor–critic algorithm
γ            Discount rate in the actor–critic algorithm
θ            Parameters to be updated in the strategy function
λ            Lagrange multiplier, λ ∈ R^n
π_θ          Strategy function of the actor–critic algorithm
σ            Logistic function in LSTM
φ            Parameters to be updated in the value function

References

  1. Goemans, M.X. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 1995, 42, 1115–1145. [Google Scholar] [CrossRef]
  2. Mcmahan, H.B.; Holt, G.; Sculley, D.; Young, M.; Kubica, J. Ad click prediction: A view from the trenches. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, Chicago, IL, USA, 14 August 2013. [Google Scholar]
  3. Barahona, F.; Grötschel, M.; Junger, M.; Reinelt, G. An application of combinatorial optimization to statistical physics and circuit layout design. Oper. Res. 1988, 36, 493–513. [Google Scholar] [CrossRef] [Green Version]
  4. Xi, Y.J.; Dang, Y.Z. The method to analyze the robustness of knowledge network based on the weighted supernetwork model and its application. Syst. Eng. Theory Pract. 2007, 27, 134–140. [Google Scholar] [CrossRef]
  5. Dreiseitla, S.; Ohno-Machadob, L. Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inf. 2002, 35, 352–359. [Google Scholar] [CrossRef] [Green Version]
  6. Croce, F.D.; Kaminski, M.J.; Paschos, V.T. An exact algorithm for max-cut in sparse graphs. Oper. Res. Lett. 2007, 35, 403–408. [Google Scholar] [CrossRef] [Green Version]
  7. Krishnan, K.; Mitchell, J.E. A semidefinite programming based polyhedral cut and price approach for the maxcut problem. Comput. Optim. Appl. 2006, 33, 51–71. [Google Scholar] [CrossRef]
  8. Funabiki, N.; Kitamichi, J.; Nishikawa, S. An evolutionary neural network algorithm for max cut problems. In Proceedings of the International Conference on Neural Networks (ICNN’97), Houston, TX, USA, 12 June 1997. [Google Scholar]
  9. Denrell, J.; Fang, C.; Levinthal, D.A. From t-mazes to labyrinths: Learning from model-based feedback. Manag. Sci. 2004, 50, 1366–1378. [Google Scholar] [CrossRef] [Green Version]
  10. Vinyals, O.; Fortunato, M.; Jaitly, N. Pointer networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2015; pp. 2692–2700. [Google Scholar]
  11. Bello, I.; Pham, H.; Le, Q.V.; Norouzi, M.; Bengio, S. Neural combinatorial optimization with reinforcement learning. arXiv 2016, arXiv:1611.09940. [Google Scholar]
  12. Mirhoseini, A.; Pham, H.; Le, Q.V.; Steiner, B.; Larsen, R.; Zhou, Y. Device placement optimization with reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 11 August 2017; pp. 2430–2439. [Google Scholar]
  13. Milan, A.; Rezatofighi, S.H.; Garg, R.; Dick, A.; Reid, I. Data-Driven Approximations to NP-Hard Problems. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4 February 2017; pp. 1453–1459. [Google Scholar]
  14. Mottini, A.; Acuna-Agost, R. Deep Choice Model Using Pointer Networks for Airline Itinerary Prediction. In Proceedings of the 23rd ACM SIGKDD International Conference, Halifax, NS, Canada, 13–17 August 2017; pp. 1575–1583. [Google Scholar]
  15. Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  16. Gu, S.; Yang, Y. A Pointer Network Based Deep Learning Algorithm for the Max-Cut Problem. In Proceedings of ICONIP 2018; LNCS 11301; pp. 238–248. [Google Scholar]
  17. Gu, S.; Hao, T.; Yao, H. A Pointer Network Based Deep Learning Algorithm for Unconstrained Binary Quadratic Programming Problem. Neurocomputing 2020. (accepted). [Google Scholar]
  18. Zhou, M.X. A benchmark generator for boolean quadratic programming. arXiv 2014, arXiv:1406.4812. [Google Scholar]
  19. Barahona, F.; Michael, J.; Reinelt, G. Experiments in quadratic 0-1 programming. Math. Program. 1989, 44, 127–137. [Google Scholar] [CrossRef]
  20. Sutskever, I.; Vinyals, O.; Le, Q.V. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems; MIT Press: Cambridge, MA, USA, 2014; pp. 3104–3112. [Google Scholar]
  21. Jing, L.; Shen, Y.; Dubček, T.; Peurifoy, J.; Skirlo, S.; Lecun, Y.; Tegmark, M. Tunable efficient unitary neural networks (eunn) and their application to rnns. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6 August 2016. [Google Scholar]
  22. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  23. Konecny, J.; Liu, J.; Richtarik, P.; Takac, M. Mini-batch semi-stochastic gradient descent in the proximal setting. IEEE J. Sel. Top. Signal Process. 2016, 10, 242–255. [Google Scholar] [CrossRef]
  24. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
Figure 1. Long short-term memory (LSTM) loop unit structure.
Figure 2. Encoder–decoder model.
Figure 3. The architecture of the pointer network (encoder in green, decoder in purple).
Figure 4. Agent–environment interaction.
Figure 5. Average accuracies of SL and RL.
Table 1. Average accuracies and training times for Max-cut problems with different dimensions by SL.

Dimensions    Average Accuracy    Average Training Time
10            100%                1:13:20
30            98.78%              2:12:35
50            97.56%              5:29:16
60            94.51%              6:37:10
70            90.95%              7:33:42
80            88.64%              8:39:08
90            86.35%              9:58:06
100           80.50%              11:14:57
150           74.94%              14:52:46
200           71.95%              19:28:25
Table 2. Detailed solutions and accuracies for Max-cut problems with different dimensions by supervised learning (SL).

Sample   Optimum   Solution   Accuracy  |  Sample    Optimum     Solution    Accuracy
S10.1    330       330        100%      |  S80.1     85,462      74,634      87.33%
S10.2    281       281        100%      |  S80.2     94,552      83,357      88.16%
S10.3    240       240        100%      |  S80.3     100,512     82,782      82.36%
S10.4    236       236        100%      |  S80.4     92,108      83,735      90.91%
S10.5    171       171        100%      |  S80.5     89,311      79,299      88.79%
S10.6    124       124        100%      |  S80.6     100,862     97,624      97.79%
S10.7    208       208        100%      |  S80.7     88,919      81,996      92.21%
S10.8    230       230        100%      |  S80.8     91,045      79,974      87.84%
S10.9    245       245        100%      |  S80.9     87,873      75,870      86.34%
S10.10   257       257        100%      |  S80.10    111,327     95,374      85.67%
S30.1    4861      4861       100%      |  S90.1     136,959     114,251     83.42%
S30.2    5820      5698       97.90%    |  S90.2     134,022     124,033     92.55%
S30.3    4708      4617       98.07%    |  S90.3     145,727     115,448     79.22%
S30.4    6123      6123       100%      |  S90.4     132,287     117,391     88.74%
S30.5    6033      6008       99.59%    |  S90.5     134,420     126,341     93.99%
S30.6    5380      5342       99.29%    |  S90.6     133,817     116,491     87.05%
S30.7    6927      6799       98.15%    |  S90.7     142,957     123,292     86.24%
S30.8    4914      4741       96.48%    |  S90.8     120,026     104,207     86.82%
S30.9    6401      6340       99.05%    |  S90.9     145,635     120,749     82.91%
S30.10   5185      5147       99.27%    |  S90.10    141,741     117,019     82.56%
S50.1    24,468    23,386     95.58%    |  S100.1    174,947     144,963     82.86%
S50.2    22,462    21,646     96.37%    |  S100.2    199,441     181,966     91.24%
S50.3    23,246    23,246     100%      |  S100.3    166,682     130,995     78.59%
S50.4    19,776    19,273     97.46%    |  S100.4    179,885     146,426     81.40%
S50.5    25,057    23,947     95.57%    |  S100.5    184,363     146,653     79.55%
S50.6    27,510    27,037     98.28%    |  S100.6    191,636     165,283     86.25%
S50.7    26,698    26,368     98.76%    |  S100.7    189,959     136,784     72.01%
S50.8    20,627    20,261     98.23%    |  S100.8    177,545     144,373     81.32%
S50.9    20,493    20,213     98.63%    |  S100.9    181,022     145,642     80.46%
S50.10   22,130    21,404     96.72%    |  S100.10   189,239     134,965     71.32%
S60.1    43,173    40,062     92.79%    |  S150.1    542,081     453,348     83.63%
S60.2    42,057    39,280     93.40%    |  S150.2    571,793     390,333     68.26%
S60.3    43,190    40,360     93.45%    |  S150.3    678,393     551,327     81.27%
S60.4    54,174    53,138     98.09%    |  S150.4    574,523     481,574     83.82%
S60.5    43,638    40,180     92.08%    |  S150.5    545,008     412,718     75.73%
S60.6    38,255    37,333     97.59%    |  S150.6    613,130     467,820     76.30%
S60.7    52,689    49,260     93.49%    |  S150.7    545,500     354,314     64.95%
S60.8    43,902    40,741     92.80%    |  S150.8    632,578     521,155     82.39%
S60.9    39,098    37,980     97.14%    |  S150.9    612,560     349,406     57.04%
S60.10   41,005    38,655     94.27%    |  S150.10   630,733     479,420     76.01%
S70.1    64,914    59,371     91.46%    |  S200.1    1,444,264   1,080,545   74.82%
S70.2    63,306    62,872     99.31%    |  S200.2    1,488,701   1,006,851   67.63%
S70.3    71,127    62,855     88.37%    |  S200.3    1,368,359   1,052,517   76.92%
S70.4    65,673    57,286     87.23%    |  S200.4    1,352,301   1,102,923   81.56%
S70.5    59,045    54,864     92.92%    |  S200.5    1,309,815   1,152,570   87.99%
S70.6    60,016    50,337     83.87%    |  S200.6    1,338,423   928,162     69.35%
S70.7    63,158    57,117     90.44%    |  S200.7    1,311,058   805,257     61.42%
S70.8    63,478    59,211     93.28%    |  S200.8    1,462,304   822,691     56.26%
S70.9    67,019    60,871     90.83%    |  S200.9    1,350,077   1,114,423   82.55%
S70.10   70,616    64,818     91.79%    |  S200.10   1,347,381   822,037     61.01%
Table 3. Average accuracies and training times for Max-cut problems with different dimensions by RL.

Dimensions    Average Accuracy    Average Training Time
10            100%                0:10:07
50            98.28%              0:23:03
100           96.32%              0:42:33
150           95.06%              1:03:57
200           92.38%              1:27:18
250           89.88%              1:53:28
300           87.64%              2:21:30
Table 4. Detailed solutions and accuracies for Max-cut problems with different dimensions by reinforcement learning (RL).

Sample   Optimum   Solution   Accuracy  |  Sample    Optimum     Solution    Accuracy
R10.1    233       233        100%      |  R150.6    584,968     572,689     97.90%
R10.2    248       248        100%      |  R150.7    553,878     453,361     81.86%
R10.3    193       193        100%      |  R150.8    618,615     583,545     94.33%
R10.4    192       192        100%      |  R150.9    529,739     522,427     98.62%
R10.5    302       302        100%      |  R150.10   559,414     513,463     91.79%
R10.6    187       187        100%      |  R200.1    1,274,866   1,267,884   99.45%
R10.7    341       341        100%      |  R200.2    1,392,200   1,165,174   83.69%
R10.8    133       133        100%      |  R200.3    1,358,320   1,345,870   99.07%
R10.9    301       301        100%      |  R200.4    1,320,006   1,118,705   84.75%
R10.10   272       272        100%      |  R200.5    1,368,199   1,020,056   74.55%
R50.1    25,565    25,302     98.97%    |  R200.6    1,397,432   1,292,628   92.50%
R50.2    22,528    22,441     99.61%    |  R200.7    1,421,061   1,420,172   99.94%
R50.3    25,426    24,783     97.47%    |  R200.8    1,376,229   1,357,875   98.67%
R50.4    25,787    25,425     98.60%    |  R200.9    1,344,436   1,266,442   94.20%
R50.5    21,030    19,755     93.94%    |  R200.10   1,388,152   1,332,225   95.97%
R50.6    25,079    24,614     98.15%    |  R250.1    2,590,918   2,242,263   86.54%
R50.7    22,077    21,820     98.84%    |  R250.2    2,700,294   2,503,768   92.72%
R50.8    28,899    28,715     99.36%    |  R250.3    2,542,443   2,230,460   87.73%
R50.9    29,101    28,614     98.33%    |  R250.4    2,542,413   2,357,060   92.71%
R50.10   25,729    25,607     99.53%    |  R250.5    2,702,833   2,547,463   94.25%
R100.1   187,805   182,369    97.02%    |  R250.6    2,764,901   2,750,197   99.47%
R100.2   197,470   193,171    97.82%    |  R250.7    2,777,948   2,027,296   72.98%
R100.3   187,495   185,929    99.16%    |  R250.8    2,671,835   2,530,103   94.70%
R100.4   216,339   211,431    97.73%    |  R250.9    2,593,131   2,031,284   78.33%
R100.5   161,961   161,067    99.45%    |  R250.10   2,596,843   2,579,798   99.34%
R100.6   178,737   176,723    98.87%    |  R300.1    4,430,263   3,858,903   87.10%
R100.7   183,560   183,315    99.87%    |  R300.2    4,363,482   2,344,354   53.73%
R100.8   155,038   154,292    99.52%    |  R300.3    4,459,682   4,248,761   95.27%
R100.9   191,120   142,059    74.33%    |  R300.4    4,562,319   2,369,015   51.93%
R100.10  174,202   173,729    99.73%    |  R300.5    4,404,895   4,113,113   93.38%
R150.1   568,452   554,128    97.48%    |  R300.6    4,497,912   4,483,644   99.68%
R150.2   549,303   542,731    98.80%    |  R300.7    4,364,640   4,298,760   98.49%
R150.3   672,601   628,655    93.47%    |  R300.8    4,589,744   4,372,106   95.26%
R150.4   590,417   553,860    93.81%    |  R300.9    4,655,631   4,652,613   99.94%
R150.5   563,674   561,554    99.62%    |  R300.10   4,956,332   4,944,887   99.77%
Table 5. Solutions and accuracies on the Biq Mac Library data set by RL.

Sample   Optimum   Solution   Accuracy  |  Sample    Optimum   Solution   Accuracy
R60.1    536       441        82.28%    |  R80.6     926       817        88.23%
R60.2    532       478        89.85%    |  R80.7     929       773        83.21%
R60.3    529       463        87.52%    |  R80.8     929       785        84.50%
R60.4    538       478        88.85%    |  R80.9     925       830        89.73%
R60.5    527       486        92.22%    |  R80.10    923       640        69.34%
R60.6    533       479        89.87%    |  R100.1    2019      1530       75.78%
R60.7    531       438        82.49%    |  R100.2    2060      1507       73.16%
R60.8    535       473        88.41%    |  R100.3    2032      1461       71.90%
R60.9    530       468        88.30%    |  R100.4    2067      1573       76.10%
R60.10   533       483        90.62%    |  R100.5    2039      1433       70.28%
R80.1    929       829        89.24%    |  R100.6    2108      1483       70.35%
R80.2    941       753        80.02%    |  R100.7    2032      1464       72.04%
R80.3    934       824        88.22%    |  R100.8    2074      1585       76.42%
R80.4    923       819        88.73%    |  R100.9    2022      1477       73.05%
R80.5    932       805        86.37%    |  R100.10   2005      1446       72.20%
Table 6. Average accuracies of different dimensional Max-cut problems using the Biq Mac Library by RL.

Dimensions    Average Accuracy
60            88.05%
80            84.76%
100           73.09%
