Article

The Basic Algorithm for the Constrained Zero-One Quadratic Programming Problem with k-diagonal Matrix and Its Application in the Power System

School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
*
Author to whom correspondence should be addressed.
Current address: 99 Shangda Road, Shanghai 200444, China.
Mathematics 2020, 8(1), 138; https://doi.org/10.3390/math8010138
Submission received: 30 December 2019 / Revised: 15 January 2020 / Accepted: 16 January 2020 / Published: 19 January 2020
(This article belongs to the Special Issue Optimization for Decision Making II)

Abstract

Zero-one quadratic programming is a classical combinatorial optimization problem with many real-world applications. However, it is well known to be non-deterministic polynomial-hard (NP-hard) in general: on one hand, exact algorithms that guarantee the global optimum are very time-consuming; on the other hand, heuristic algorithms that generate solutions quickly can only provide local optima. For this reason, identifying polynomially solvable subclasses of zero-one quadratic programming problems and their corresponding algorithms is a promising way to compromise between these two sides and to offer theoretical insight into the complicated nature of the problem. By combining the basic algorithm and the dynamic programming method, we propose an effective algorithm in this paper for the general linearly constrained zero-one quadratic programming problem with a k-diagonal matrix. In our algorithm, the value of k is changeable, so the algorithm covers different subclasses of the problem. Theoretical analysis and experimental results reveal that the proposed algorithm is reasonably effective and efficient. In addition, the phasor measurement unit placement problem in the power system is adopted as an example to illustrate the potential real-world applications of this algorithm.

1. Introduction

Optimization problems normally fall into two categories: one for continuous variables and the other for discrete variables. The latter is called combinatorial optimization, an active field in applied mathematics [1]. Common problems include the maximum flow problem, the traveling salesman problem, the matching problem, the knapsack problem, etc. Among these famous combinatorial optimization problems, zero-one quadratic programming (01QP), whose variables can only be 0 or 1 [2], is very important and attracts a lot of attention.
Zero-one quadratic programming, which can be divided into constrained (01CQP) and unconstrained (01UQP) cases, is a combinatorial optimization problem of practical significance. For example, it can be applied in circuit design [3], pattern recognition [4], capital budgeting [5], portfolio optimization [6], etc. Besides these well-known applications, 01QP also has potential applications in nonlinear-control-related fields [7,8,9,10,11]. Among these applications, phasor measurement unit (PMU) placement has been widely studied. Phasor measurement units can be used in dynamic monitoring, system protection, and system analysis and prediction, so the placement of PMUs has become an important issue. As the scale of the electric grid grows, the PMU placement problem becomes more difficult and must be addressed subject to certain requirements. In [12], a modified bisecting search combined with a simulated annealing method was proposed, in which randomly selected arrays were used to test the observability of the system. Considering incomplete observability of the system, a calculation method based on graph theory was proposed in [13]; this method is time-consuming, and the computational load grows too heavy as the dimension increases. An integer linear programming method [14] and an improved algorithm [15] were put forward, considering system redundancy and full and incomplete observability. The authors of [16] proposed a binary programming method considering the joint placement of conventional measurement units and phasor measurement units.
Zero-one quadratic programming also has some theoretical significance. Many classical problems, such as the max-cut problem [17,18] and max-bisection problem [19] can be converted to zero-one quadratic programming. Therefore, designing the algorithm that can solve 01QP effectively and efficiently is very meaningful not only in practical fields but also in theoretical fields.
However, zero-one quadratic programming is a well-known NP-hard problem in general. The common approaches are exact algorithms and heuristic algorithms. Exact algorithms can guarantee the global optimum. In [20], the branch-and-bound method was used to solve zero-one quadratic programming problems without constraints. In [21,22], some exact methods were proposed by means of geometric properties. Penalty parameters were introduced to solve zero-one quadratic programming problems with constraints in [23]. However, exact algorithms are very time-consuming and suitable only for small-scale problems. In contrast, heuristic algorithms, such as simulated annealing [24], genetic algorithms [25], neural networks [26], and ant colony algorithms [27], can generally solve medium- and large-scale problems quickly, but most of them can only find a local optimum.
Therefore, identifying polynomially solvable subclasses of zero-one quadratic programming problems and their corresponding algorithms is a promising way to compromise between these two sides and to offer theoretical insight into the complicated nature of the problem. In our past studies [28,29], the five-diagonal matrix quadratic programming problem with linear constraints was solved effectively. In [30], an algorithm for solving 01QP with a seven-diagonal matrix Q was presented. However, each of these algorithms is tailored to a very specific problem, which narrows its applications. We then proposed an algorithm for the unconstrained problem with a k-diagonal matrix in [31]. In this paper, based on our previous results, we further propose an algorithm for the general linearly constrained zero-one quadratic programming problem with a k-diagonal matrix by combining the basic algorithm and the dynamic programming method. In the proposed algorithm, the value of k is changeable, which means the algorithm can cover different subclasses of the problem. Theoretical analysis and experimental results reveal that the proposed algorithm is reasonably effective and efficient. We also apply the algorithm to a real-world application and verify its feasibility.
The main contributions of this paper are the following: (1) While our previous algorithms targeted a fixed k value in the matrix Q, the algorithm in this paper targets the general problem with changeable k values. (2) We analyze the time complexity and prove the rationality of the algorithm. (3) We apply the algorithm to phasor measurement unit placement, a real-world application.
This paper is organized as follows. In Section 2, we review the algorithm for solving unconstrained zero-one k-diagonal matrix quadratic programming. In Section 3, an algorithm for constrained zero-one k-diagonal matrix quadratic programming is proposed, together with its proof. The application of zero-one quadratic programming to phasor measurement unit placement is put forward in Section 4. Experimental results and discussion are given in Section 5. We draw our conclusions and discuss future work in Section 6.

2. Basic Algorithm to 01UQP

The following Equation (1) shows the form of the k-diagonal matrix zero-one quadratic programming problem. Its special feature is the form of the matrix Q, called a k-diagonal matrix, where $k = 2m + 1$ $(m = 0, 1, 2, \ldots, n-1)$:

$$\min_{x \in \{0,1\}^n} f(x) = \frac{1}{2} x^T Q x + c^T x \qquad (1)$$

where $Q = (q_{ij})_{n \times n}$ with $q_{ij} = q_{ji}$ $(i, j = 1, 2, \ldots, n)$ is a symmetric matrix. Note that all entries of this matrix are zero except $q_{ij}$ and $q_{ji}$ $(i = 1, 2, \ldots, n-1,\ j = i+1, i+2, \ldots, i+m)$; for example, for $m = 2$ (five-diagonal):

$$Q = \begin{pmatrix}
0 & q_{1,2} & q_{1,3} & 0 & \cdots & 0 \\
q_{2,1} & 0 & q_{2,3} & \ddots & & \vdots \\
q_{3,1} & q_{3,2} & 0 & \ddots & \ddots & 0 \\
0 & \ddots & \ddots & \ddots & q_{n-2,n-1} & q_{n-2,n} \\
\vdots & & \ddots & q_{n-1,n-2} & 0 & q_{n-1,n} \\
0 & \cdots & 0 & q_{n,n-2} & q_{n,n-1} & 0
\end{pmatrix}$$
Based on our past works [30,31], we have the following algorithm for solving 01UQP; its feasibility and performance are discussed in [31]. Figure 1 shows the process of Algorithm 1 intuitively.
Algorithm 1 Process of solving 01UQP
Step 1: 
Assign $x_{n-m+1}, \ldots, x_n$ to 0 or 1.
(1) Adjacent terms are combinations of $x_{n-m+1}, \ldots, x_n$ whose values are the same except for the value of $x_n$.
(2) Get the corresponding $f(x)$.
(3) Label these states $state0, state1, \ldots, state(2^m - 1)$.
(4) Compare $f(x)$ in the two adjacent terms (only the coefficient of $x_{n-m}$ and the constant term are different).
Step 2: 
Change the value of $x_{n-m-i}$ to 0 and 1 $(i = 0, 1, \ldots, n-m-1)$.
(1) Compare every two adjacent states and take the result with the smaller constant term as the new state.
(2) Update all the $2^m$ states.
Step 3: 
Get the optimal solution x.
(1) Update the states based on Step 2 until only the constant term remains in $f(x)$.
(2) The optimal value is the minimal one.
(3) Trace back and get the optimal solution x.
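As a concrete illustration, the state update of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the authors' implementation: it sweeps the variables front-to-back instead of back-to-front (equivalent by symmetry), represents Q as a dense symmetric zero-diagonal array, and keeps one best partial value per assignment of the m most recently fixed variables (the $2^m$ states of Step 2). Only the optimal value is returned; the backtracking of Step 3 is omitted.

```python
def solve_01uqp(Q, c, m):
    # Minimize f(x) = sum_{i<j} Q[i][j]*x_i*x_j + sum_i c[i]*x_i over x in {0,1}^n,
    # where Q is symmetric, zero-diagonal and k-diagonal with k = 2m + 1.
    # A state is the tuple of the (at most m) most recently fixed variables,
    # so at most 2^m partial minima are kept at any time.
    n = len(c)
    states = {(): 0}                      # state -> best value of the fixed prefix
    for i in range(n):
        new = {}
        for s, val in states.items():
            for xi in (0, 1):
                cost = val + c[i] * xi
                if xi:                    # interactions with the last m fixed variables
                    for d, xj in enumerate(reversed(s)):
                        cost += Q[i - 1 - d][i] * xj
                ns = (s + (xi,))[-m:] if m > 0 else ()
                if ns not in new or cost < new[ns]:
                    new[ns] = cost        # keep only the smaller value per state
        states = new
    return min(states.values())
```

Because Q is k-diagonal, only pairs with $|i - j| \le m$ interact, which is exactly why $2^m$ partial minima suffice.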

3. Basic Algorithm to 01CQP

3.1. 01CQP Algorithm Description

Consider the constrained k-diagonal matrix zero-one quadratic programming problem:
$$\min_{x \in \{0,1\}^n} \frac{1}{2} x^T Q x + c^T x \quad \mathrm{s.t.} \quad a^T x \le b \qquad (2)$$

where Q has the same meaning as in the 01UQP formulation of Section 2, $a \in \mathbb{Z}_+^n$, $b \in \mathbb{Z}_+$.

In this section, we utilize the dynamic programming method to solve the 01CQP problem. To apply it, we introduce a state variable $s_k$ $(s_k \in \mathbb{Z})$ and a stage variable $k$ $(0 < k \le n)$, which satisfy the iteration

$$s_{k+1} = s_k + a_{k+1} x_{k+1} \quad (k = 1, \ldots, n-1).$$

Since $a \in \mathbb{Z}_+^n$ and $b \in \mathbb{Z}_+$, we only need to consider the integer points of the state space. Since $s_k$ satisfies $0 \le s_k \le b$, we define the set $S_k = \{ s_k \mid 0 \le s_k \le b,\ s_k \in \mathbb{Z} \}$.
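For illustration, the reachable integer states can be enumerated stage by stage directly from this recursion; a minimal sketch (the function name and the list-of-stages return value are illustrative, not from the paper):

```python
def reachable_states(a, b):
    # Stage-by-stage enumeration of the reachable states
    # s_k = a_1*x_1 + ... + a_k*x_k with 0 <= s_k <= b.
    states = {0}
    stages = []
    for ak in a:
        states = {s + ak * x for s in states for x in (0, 1) if s + ak * x <= b}
        stages.append(sorted(states))
    return stages
```

With the example data of Section 3.3 ($a = (1, 2, 3, 2, \ldots)$, $b = 6$), the reachable set grows from $\{0, 1\}$ at stage 1 to the full set $\{0, 1, \ldots, 6\}$ after three stages.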
Algorithms 2 and 3 show the detailed calculation process.
Algorithm 2 Calculation Method of f ( s k )
Case 1: 
When k = 1 , there are two cases.
(1a) $s_1 < a_1$:
$x_1 = 0$, $x_1^* = 0$, $f(s_1) = f(0, x_2, \ldots, x_n)$
(1b) $s_1 \ge a_1$:
$x_1 = 1$, $x_1^* = 1$, $f(s_1) = f(1, x_2, \ldots, x_n)$
We will get a series of functions f ( x ) after executing Case 1.
Case 2: 
When $k \ge 2$, there are also two cases.
(2a) $s_k < a_k$:
$x_k$ must be 0 so that $s_{k-1} = s_k - a_k x_k \ge 0$. In this case $s_k = s_{k-1}$, by which we obtain the function $f(s_k) = f(s_{k-1})|_{x_k = 0}$.
(2b) $s_k \ge a_k$:
In this case, $x_k$ can be 0 or 1, which generates two situations:
1) If $x_k^* = 0$, then $s_k = s_{k-1}$ and we get $f(s_k) = f(s_{k-1})|_{x_k = 0}$.
2) If $x_k^* = 1$, then $s_{k-1} = s_k - a_k$ and we get $f(s_k) = f(s_{k-1})|_{x_k = 1}$.
We can see that there is only one function $f(s_k)$ corresponding to each state $s_k$ when $k = 1$, while there are several $f(s_k)$ for each state $s_k$ when $k > 1$. To save storage and computational time, the best $f(s_k)$ should be selected before the next step: we need to find the optimal $f(s_k) = f(s_{k-1})|_{x_k = 0}$ and $f(s_k) = f(s_{k-1})|_{x_k = 1}$; the optimizing process is given in Algorithm 3.
Algorithm 3 s t a t e 0 and s t a t e 1
  • s t a t e 0 ( x k = 0 )
                Case 1: There is only one f ( s k ) = f ( s k 1 ) | x k = 0 :
                Set x k * = 0 , f ( s k ) = f ( s k 1 ) | x k = 0
Case 2: There is more than one $f(s_k) = f(s_{k-1})|_{x_k = 0}$:
1) Compare the $f(s_{k-1})|_{x_k = 0}$, which are identical except for the constant term, and pick the smallest one.
                2) Set x k * = 0 and f ( s k ) = f ( s k 1 ) | x k = 0 .
  • s t a t e 1 ( x k = 1 )
                Case 1: There is only one f ( s k ) = f ( s k 1 ) | x k = 1 :
                Set x k * = 1 , f ( s k ) = f ( s k 1 ) | x k = 1
Case 2: There is more than one $f(s_k) = f(s_{k-1})|_{x_k = 1}$:
1) Find the best $f(s_k)$ using a similar approach to that in $state0$ Case 2.
                2) Set x k * = 1 and f ( s k ) = f ( s k 1 ) | x k = 1 .
According to the above algorithm, the maximum number of functions per state is shown in Table 1. The main computation time is spent generating the state table, and the number of core-step evaluations is determined by the state counts in Table 1. In total, we need to update the state table n times, and the number of states is b, so the time complexity is $O(2^{m-1} \times n \times b)$.
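Combining Algorithms 2 and 3, the whole constrained dynamic program can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes the same dense symmetric zero-diagonal representation of Q as before, couples the knapsack state $s_k$ with the last m assignments, and returns only the optimal value (backtracking omitted).

```python
def solve_01cqp(Q, c, a, b, m):
    # Minimize sum_{i<j} Q[i][j]*x_i*x_j + sum_i c[i]*x_i  s.t.  a.x <= b,
    # x in {0,1}^n, with Q symmetric, zero-diagonal, k-diagonal (k = 2m + 1).
    # State: (s_k, last m assignments); one best partial value per state.
    n = len(c)
    states = {(0, ()): 0}
    for i in range(n):
        new = {}
        for (w, s), val in states.items():
            for xi in (0, 1):
                nw = w + a[i] * xi
                if nw > b:                 # infeasible extension: state1 is skipped
                    continue
                cost = val + c[i] * xi
                if xi:                     # interactions with the last m variables
                    for d, xj in enumerate(reversed(s)):
                        cost += Q[i - 1 - d][i] * xj
                ns = (nw, (s + (xi,))[-m:] if m > 0 else ())
                if ns not in new or cost < new[ns]:
                    new[ns] = cost         # keep the smaller constant, as in Alg. 3
        states = new
    return min(states.values())
```

The table holds at most $(b + 1) \cdot 2^m$ entries and is updated n times, in line (up to a constant factor) with the complexity bound derived above.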

3.2. Analysis on the Effectiveness and Rationality of 01CQP Algorithm

In this section, we analyze the properties of the polynomial-time algorithm for 01CQP. To analyze the algorithm, we need to demonstrate the rationality of Algorithms 2 and 3. Suppose f(x) has the form:

$$f(x) = q_{12}x_1x_2 + q_{13}x_1x_3 + \cdots + q_{1,1+m}x_1x_{1+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + c_1x_1 + \cdots + c_nx_n.$$
Step 1: Set k = 1
(1) $s_1 < a_1$, $x_1 = 0$:

$$f_1(s_1) = f(x)|_{x_1=0} = q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + c_2x_2 + \cdots + c_nx_n \quad (i = 2, \ldots, n-m-1). \qquad (3)$$

(2) $s_1 \ge a_1$, $x_1 = 1$:

$$f_1(s_1) = f(x)|_{x_1=1} = q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_2x_2 + \cdots + \hat{c}_{1+m}x_{1+m} + c_{m+2}x_{m+2} + \cdots + c_nx_n + c_1 \quad (i = 2, \ldots, n-m-1). \qquad (4)$$

Clearly, when $k = 1$, each state $s_1$ has only one function $f_1(s_1)$. The coefficients of $x_2, \ldots, x_{m+1}$ and the constant term are the only terms that differ between the functions.
Step 2: Set $k = 2, 3, \ldots, m$
(1) If $s_k < a_k$, execute $state0$.
If $s_{k-1} < a_{k-1}$, $f_k(s_k)$ has the same form as function (3):

$$f(s_{k-1}) = q_{k,k+1}x_kx_{k+1} + q_{k,k+2}x_kx_{k+2} + \cdots + q_{k,k+m}x_kx_{k+m} + \cdots + q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + c_{k+1}x_{k+1} + \cdots + c_nx_n + \mathrm{const} \qquad (5)$$

($\mathrm{const}$ represents the constant term of $f(s_{k-1})$.)
If $s_{k-1} \ge a_{k-1}$, $f_k(s_k)$ has the same form as function (4):

$$f(s_{k-1}) = q_{k,k+1}x_kx_{k+1} + q_{k,k+2}x_kx_{k+2} + \cdots + q_{k,k+m}x_kx_{k+m} + \cdots + q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_{k+1}x_{k+1} + \cdots + \hat{c}_{k+m}x_{k+m} + c_{k+m+1}x_{k+m+1} + \cdots + c_nx_n + \mathrm{const} + c_k \qquad (6)$$

Then, $f_k(s_k)$ can be written as:

$$f_k(s_k) = f_{k-1}(s_{k-1})|_{x_k=0} = q_{k+1,k+2}x_{k+1}x_{k+2} + q_{k+1,k+3}x_{k+1}x_{k+3} + \cdots + q_{k+1,k+m+1}x_{k+1}x_{k+m+1} + \cdots + q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_{k+1}x_{k+1} + \cdots + \hat{c}_{k+m}x_{k+m} + c_{k+m+1}x_{k+m+1} + \cdots + c_nx_n + \widehat{\mathrm{const}}. \qquad (7)$$
At this point, the maximum number of $f_k(s_k)$ corresponding to each $s_k$ is $2^{k-2}$.
(2) If $s_k \ge a_k$, execute $state0$ and $state1$.
Here, $state0$ is the same as in Case 1, so we do not repeat it.
Consider $state1$: $x_k = 1$, $f_k(s_k) = f_{k-1}(s_k - a_k x_k)$, $s_{k-1} = s_k - a_k x_k = s_k - a_k$.
If $s_{k-1} < a_{k-1}$, $f(s_{k-1})$ has the form of function (5); if $s_{k-1} \ge a_{k-1}$, $f(s_{k-1})$ has the form of function (6). Then $f_k(s_k)$ can be expressed as:

$$f_k(s_k) = f_{k-1}(s_{k-1})|_{x_k=1} = q_{k+1,k+2}x_{k+1}x_{k+2} + q_{k+1,k+3}x_{k+1}x_{k+3} + \cdots + q_{k+1,k+m+1}x_{k+1}x_{k+m+1} + \cdots + q_{i,i+1}x_ix_{i+1} + q_{i,i+2}x_ix_{i+2} + \cdots + q_{i,i+m}x_ix_{i+m} + \cdots + q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_{k+1}x_{k+1} + \cdots + \hat{c}_{k+m}x_{k+m} + c_{k+m+1}x_{k+m+1} + \cdots + c_nx_n + \widehat{\mathrm{const}}. \qquad (8)$$
All the $f_k(s_k)$ differ only in the constant terms and the coefficients of $x_{k+1}, \ldots, x_{k+m}$.
Step 3: $k = n - m - 1$
Consider the most complex case, $s_k \ge a_k$: the state variable $s_k$ corresponds to $2^{m-1}$ functions $f_k(s_k) = f_{k-1}(s_k)|_{x_k=0}$ and $2^{m-1}$ functions $f_k(s_k) = f_{k-1}(s_k - a_k)|_{x_k=1}$ ($2^{m-1}$ is the number of each). Therefore, we need to execute $state0$ first and then $state1$.
(1) s t a t e 0 :
Since $f_k(s_k)$ has the form of function (8), $state0$ is executed for each $f_k(s_k)$, and the following expression is obtained:

$$f_k(s_k) = f_{k-1}(s_k)|_{x_k=0} = q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_{n-m}x_{n-m} + c_{n-m+1}x_{n-m+1} + \cdots + c_nx_n + \widehat{\mathrm{const}}.$$

Clearly, these $f_k(s_k)$ differ only in the constants and the coefficients of $x_{n-m}$.
(2) s t a t e 1 :
We obtain $f_k(s_k)$ of the following form:

$$f_k(s_k) = f_{k-1}(s_{k-1})|_{x_k=1} = q_{n-m,n-m+1}x_{n-m}x_{n-m+1} + \cdots + q_{n-m,n}x_{n-m}x_n + \cdots + q_{n-1,n}x_{n-1}x_n + \hat{c}_{n-m}x_{n-m} + c_{n-m+1}x_{n-m+1} + \cdots + c_nx_n + \widehat{\mathrm{const}} + \hat{c}_{n-m-1}.$$

In both cases (1) and (2), the functions differ only in the constants and the coefficients of $x_{k+1}$.
Step 4: $k = m + 2, \ldots, n$
Based on Algorithm 2 Case 2, we calculate the functions $f_k(s_k)$; the core is the implementation of $state0$ and $state1$. When executing $state0$ and $state1$, two $f_k(s_k)$ differ only in the constant and the coefficient of $x_k$. Therefore, when executing $state0$, we only need to compare the constant terms, and when executing $state1$, we only need to compare the sums of the constant and the coefficient of $x_k$. In this way, we avoid computing redundant $f_k(s_k)$.
The form of the function $f_k(s_k)$ and the maximum number of $f_k(s_k)$ at each stage k are similar to those at stage $k = n - m$, which also ensures the repeatability of the algorithm. The algorithm is illustrated below with a concrete example and numerical simulations.

3.3. Calculation Example of 01CQP

For example, let the parameters Q, c, a, and b be:

$$Q = \begin{pmatrix}
0 & 23 & 37 & 56 & 0 & 0 & 0 & 0 \\
23 & 0 & 41 & 16 & 34 & 0 & 0 & 0 \\
37 & 41 & 0 & 62 & 27 & 76 & 0 & 0 \\
56 & 16 & 62 & 0 & 81 & 14 & 58 & 0 \\
0 & 34 & 27 & 81 & 0 & 90 & 25 & 42 \\
0 & 0 & 76 & 14 & 90 & 0 & 12 & 31 \\
0 & 0 & 0 & 58 & 25 & 12 & 0 & 94 \\
0 & 0 & 0 & 0 & 42 & 31 & 94 & 0
\end{pmatrix},$$
$$c = (24, 54, 17, 36, 72, 63, 46, 18)^T, \quad a = (1, 2, 3, 2, 4, 2, 3, 2)^T, \quad b = 6.$$
In this example, we will omit some of the states.
(1) Set $k = 1$, $s_1 = a_1 x_1$, $a_1 = 1$.
We can set $s_1 = 0, 1, \ldots, 6$. According to Algorithm 2 Case 1, we obtain Table 2.
When $s_1 = 0$, we calculate $x_1^*$ and $f(s_1)$ through Algorithm 2 Case (1a), since $s_1 < a_1$.
When $s_1 = 1, \ldots, 6$, we calculate $x_1^*$ and $f(s_1)$ through Algorithm 2 Case (1b), since $s_1 \ge a_1$.
We omit $state3, \ldots, state6$, which are analogous to $state1$ and $state2$.
(2) Set $k = 2, 3$; $s_2 = a_1x_1 + a_2x_2 = s_1 + a_2x_2$, $s_3 = s_2 + a_3x_3$.
We can set $s_3 = 0, 1, \ldots, 6$. According to Algorithm 2 Case 2, we obtain Table 3. The cases $s_2 = 0, 1, \ldots, 6$ are omitted.
(3) Set $k = 4, 5, 6, 7$.
According to Algorithm 2 Case 2 and Algorithm 3, we obtain $f_k(s_k)$ for $k = 4, \ldots, 7$. Table 4 and Table 5 show part of the calculation process. From the tables, we can see that only the coefficient of $x_{k+1}$ and the constant terms differ between adjacent items.
(4) Set $k = 8$.
Finally, we have Table 6, which contains only constant terms. From it, we see that $x_8^* = 0$ and $f(x^*) = 160$, and by the backtracking method we find the optimal solution $x^* = (0, 1, 0, 0, 1, 0, 0, 0)$.

4. Application of Zero-One Quadratic Programming in Phasor Measurement Units Placement

Finding the installation locations and the number of PMUs to install is the focus of the phasor measurement unit placement problem. The matrix in the PMU placement problem differs from the previous k-diagonal matrix in the elements of the main diagonal. However, since each $x_i$ is either 0 or 1, we can convert it into a k-diagonal matrix. Then we can take advantage of the algorithms designed above to work out the PMU placement, without considering too many constraints and realistic requirements.

4.1. Modelling

For a power system composed of n nodes, the placement of PMUs is represented by the n-dimensional vector $X = (x_1, x_2, \ldots, x_n)^T$, where for $i = 1, 2, \ldots, n$,

$$x_i = \begin{cases} 1, & \text{a PMU is installed at bus } i \\ 0, & \text{otherwise.} \end{cases}$$

The matrix H represents the network graph structure,

$$h_{ij} = \begin{cases} 1, & i = j \\ 1, & i \text{ is connected to } j \\ 0, & \text{otherwise,} \end{cases}$$
and the objective function is

$$V(x) = \lambda (N - HX)^T R (N - HX) + X^T Q X \qquad (11)$$

where $\lambda$ is the weight, $N \in \mathbb{R}^n$ represents the upper bound of the maximum redundancy of each bus, and $R \in \mathbb{R}^{n \times n}$ and $Q \in \mathbb{R}^{n \times n}$ are diagonal matrices representing the importance of each bus and the cost of placing a PMU at the bus, respectively.
We expand Equation (11):

$$V(x) = \lambda (N - HX)^T R (N - HX) + X^T Q X = \lambda [N^T R N - N^T R H X - X^T H^T R N + X^T H^T R H X] + X^T Q X = \lambda N^T R N - 2\lambda N^T R H X + \lambda X^T H^T R H X + X^T Q X = \frac{1}{2} X^T (2\lambda H^T R H + 2Q) X - (2\lambda N^T R H) X + \lambda N^T R N. \qquad (12)$$
Then, Equation (12) can be expressed as the integer quadratic program

$$\min \frac{1}{2} x^T G x + f^T x \quad \mathrm{s.t.} \quad M(x) = 0, \; x \in \{0,1\}^n \qquad (13)$$

where $G = 2\lambda H^T R H + 2Q$ and $f = -(2\lambda N^T R H)^T$. $M(x)$ is the column vector consisting of $m_i(x) = (1 - x_i)\prod_{j \in A_i}(1 - x_j)$ $(i \in \Omega)$, where $A_i$ and $\Omega$ represent the set of nodes adjacent to bus i and the bus set, respectively. These constraints require at least one PMU to be placed among the bus and its adjacent nodes, ensuring that each adjacent node of the bus can be observed. The above problem can be expressed as an unconstrained problem by the weighted least squares method:
$$\min_{x \in \{0,1\}^n} \frac{1}{2} x^T G x + f^T x + M(x)^T V M(x) = \frac{1}{2} x^T G x + f^T x + \sum_{i=1}^n v_i (1 - x_i)^2 \Big( \prod_{j \in A_i} (1 - x_j) \Big)^2 \qquad (14)$$

where $V = \mathrm{diag}(v_i)$.
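As an illustration, the penalized unconstrained objective above can be evaluated directly; a minimal sketch, in which the names `adj` (playing the role of $A_i$) and `v` (the weights $v_i$) are illustrative, not from the paper:

```python
def penalty_objective(x, G, f, adj, v):
    # 0.5*x'Gx + f'x plus the penalty sum_i v_i * m_i(x)^2, where
    # m_i(x) = (1 - x_i) * prod_{j in adj[i]} (1 - x_j) vanishes exactly when
    # bus i or one of its neighbours carries a PMU.
    n = len(x)
    quad = 0.5 * sum(G[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    lin = sum(fi * xi for fi, xi in zip(f, x))
    pen = 0.0
    for i, Ai in enumerate(adj):
        m = 1 - x[i]
        for j in Ai:
            m *= 1 - x[j]
        pen += v[i] * m * m
    return quad + lin + pen
```

The penalty term is zero precisely on the feasible placements of Equation (13), so minimizing this function over $x \in \{0,1\}^n$ (with sufficiently large $v_i$) recovers the constrained optimum.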

4.2. Example and Experimental Result

Suppose the coefficient matrix H of a power system is

$$H = \begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1
\end{pmatrix}.$$

Set Q as the unit matrix, $\lambda = 0.5$,

$$R = \mathrm{diag}(12, 128, 50, 140, 72, 10), \quad N = (2, 2, 2, 3, 2, 1)^T.$$

Then we have

$$G = \begin{pmatrix}
192 & 140 & 62 & 178 & 0 & 0 \\
140 & 282 & 152 & 268 & 140 & 0 \\
62 & 152 & 204 & 190 & 140 & 0 \\
178 & 268 & 190 & 392 & 212 & 72 \\
0 & 140 & 140 & 212 & 224 & 82 \\
0 & 0 & 0 & 72 & 82 & 84
\end{pmatrix},$$
$$f = (-380, -700, -544, -920, -574, -154)^T.$$
The main diagonal of G in this problem is nonzero, while the definition of the k-diagonal matrix Q requires the main diagonal elements to be zero. Since $x \in \{0,1\}^n$, we have $x_i^2 = x_i$, so the diagonal of G can be folded into the linear term and G can then be treated as a seven-diagonal matrix: the diagonal of G becomes zero and f is updated as in Equation (15). Using the algorithm in Section 2, we find the optimal solution $(0, 1, 1, 1, 0, 1)$ without considering the constraints; that is, four PMUs are installed at buses 2, 3, 4, and 6. By inspection, this configuration satisfies the observability of the system, so the constraint is met.
$$f = (-284, -559, -442, -724, -462, -112)^T \qquad (15)$$
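The matrices of this 6-bus example can be assembled directly from the expansion of the objective, and the reported unconstrained optimum checked by exhaustive search over the $2^6$ placements; a minimal NumPy sketch (not the paper's C++ implementation):

```python
import itertools
import numpy as np

# Data of the 6-bus example: lambda = 0.5, Q = I (PMU costs), R and N as given.
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [1, 0, 1, 1, 0, 0],
              [0, 1, 1, 1, 1, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 0, 1, 1]])
R = np.diag([12, 128, 50, 140, 72, 10])
N = np.array([2, 2, 2, 3, 2, 1])
lam, Qc = 0.5, np.eye(6)

G = 2 * lam * H.T @ R @ H + 2 * Qc      # quadratic coefficients of Eq. (12)
f = -2 * lam * H.T @ R @ N              # linear coefficients, f = -(2*lam*N'RH)'

# x_i^2 = x_i for binary x: fold the diagonal of G into the linear term,
# leaving a zero-diagonal (seven-diagonal) matrix as required in Section 2.
f2 = f + 0.5 * np.diag(G)
G0 = G - np.diag(np.diag(G))

# Brute-force check of the reported unconstrained optimum.
best = min(itertools.product((0, 1), repeat=6),
           key=lambda x: 0.5 * np.array(x) @ G0 @ np.array(x) + f2 @ np.array(x))
print(best)  # -> (0, 1, 1, 1, 0, 1): PMUs at buses 2, 3, 4 and 6
```

For a problem of this size the exhaustive check is instant; for larger grids the k-diagonal algorithm of Section 2 replaces the brute-force minimization.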
The above problem considers PMU placement under the condition that the system is completely observable. When considering the PMU placement problem under the $N-1$ principle, the constraint conditions $2x_i + 2\sum_{j \in P_i^2} x_j + \sum_{j \in P_i^1} x_j \ge 2$ $(i \in \mathcal{N})$ are added, based on the definition of single-node observability. $\mathcal{N}$ is the set of nodes required to remain observable when a fault occurs in the system. $P_i^2$ and $P_i^1$ represent the sets of nodes connected to node i by two lines and by one line, respectively.
If the line between nodes 5 and 6 breaks down, we set $H(5,6)$ and $H(6,5)$ to zero. Then the constraint

$$\begin{pmatrix} 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} x \ge \begin{pmatrix} 1 \\ 1 \end{pmatrix}$$

is added.
The resulting optimal solution is the same as in the former example: even if the connection between nodes 5 and 6 fails, the system with the placed PMUs will not lose observability. We only consider the algorithm for solving the constrained k-diagonal zero-one quadratic program and show that it can be applied to some forms of the PMU placement problem; some practical conditions of PMU placement have not been taken into account here and remain to be studied by specialized engineers.

5. 01CQP Algorithm Simulation and Discussion

5.1. Algorithm Experimental Simulation

Now, experimental results are provided to illustrate the performance of the algorithm. We implemented the algorithm in C++ and ran it on an Intel(R) Core(TM) i7-8550U CPU. For the problems we tested, the dimension of the matrix Q ranges from 10 to 100 and k takes the values $5, 7, \ldots, 25$. All simulation data (Q, c, a, b) are generated randomly: Q and c are set up as in Section 2 with entries ranging from $-100$ to $100$, and a, b range between 0 and 20. Table 7 shows the detailed computation time for problems of different dimensions and diagonal numbers.

5.2. Experimental Results Discussion

Here, based on the experimental results, we investigate the influence of n and k on the algorithm and discuss the implications of the results. First, we fix the value of k and increase n. Figure 2 shows the experimental results when the matrix dimension n changes from 10 to 100 with k fixed at 9, 13, 19, and 23, respectively. As can be seen, the calculation time increases only slightly with the dimension. Second, we fix the value of n and increase k. Figure 3 shows the experimental results when k changes from 5 to 25 with n fixed at 20, 40, 60, and 80, respectively. It can be observed clearly that the time changes significantly as the diagonal number increases. Finally, we increase n and k simultaneously; Figure 4 illustrates this situation. It is obvious that k has a pronounced influence on the calculation speed. All these observations coincide with the time complexity derived in Section 3. This means that if the diagonal number is within an appropriate range, our algorithm performs effectively and efficiently even for very large scale problems, and such problems cover a large portion of the whole problem set. Therefore, the proposed algorithm has great potential in many real-world applications.

6. Conclusions

In this paper, a novel exact algorithm for the general linearly constrained zero-one quadratic programming problem with a k-diagonal matrix is proposed. The algorithm is designed by analyzing the properties of the matrix Q and then combining the well-known basic algorithm and the dynamic programming method. The complexity analysis shows that the problem is polynomially solvable when m is fixed, and the experimental results illustrate the feasibility and efficiency of the algorithm. Designing an efficient algorithm for this special class of problems (01CQP) not only provides useful information for designing efficient algorithms for other special classes but can also provide hints for, and facilitate the derivation of, efficient relaxations of the general problem. Finally, the phasor measurement unit placement problem is used to demonstrate that the algorithm has wide potential applications in real-life decision-making problems.

Author Contributions

S.G. put forward the ideas and algorithms. S.G. and X.C. simulated the results and wrote important parts of the article. X.C. formatted the whole paper and summarized the results in tables. All authors have read and agreed to the published version of the manuscript.

Funding

The work described in the paper was supported by the National Science Foundation of China under Grants 61876105 and 61503233.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
01QP	Zero-One Quadratic Programming
01UQP	Unconstrained Zero-One Quadratic Programming
01CQP	Constrained Zero-One Quadratic Programming
PMU	Phasor Measurement Units
NP-hard	Non-deterministic Polynomial-hard

Variables

The following variables are used in this manuscript:
Q: $Q = (q_{ij})_{n \times n}$; only $q_{ij} = q_{ji}$ $(i = 1, \ldots, n-1,\ j = i+1, \ldots, i+m)$ are nonzero
c: $c = (c_i)_{n \times 1}$
a: $a = (a_i)_{n \times 1}$
b: $b \in \mathbb{Z}_+$
n: dimension of the matrix Q
m: $m \in [0, n-1]$, $m \in \mathbb{Z}$
k: $k = 2m + 1$ $(m = 0, 1, 2, \ldots, n-1)$
$s_k$: state variable, $s_k \in \mathbb{Z}$; k is the stage variable $(0 < k \le n)$
f(x): $f(x) = \frac{1}{2} x^T Q x + c^T x$
H: $H = (h_{ij})_{n \times n}$, the network graph structure
N: $N \in \mathbb{R}^n$, the upper bound of the maximum redundancy of each bus
R: $R \in \mathbb{R}^{n \times n}$, the importance of each bus
Q: $Q \in \mathbb{R}^{n \times n}$, in the application, the cost of placing a PMU at each bus
M(x): column vector consisting of $m_i(x) = (1 - x_i)\prod_{j \in A_i}(1 - x_j)$ $(i \in \Omega)$
$\Omega$: bus set
$\lambda$: weight
$A_i$: set of nodes adjacent to bus i

References

  1. Cook, W.J.; Cunningham, W.H.; Pulleyblank, W.R.; Schrijver, A. Combinatorial Optimization; Wiley-Interscience: Hoboken, NJ, USA, 1997. [Google Scholar]
  2. Hammer, P.L.; Rudeanu, S. Boolean Methods in Operations Research and Related Areas; Spring: Berlin/Heidelberg, Germany; New York, NY, USA, 1968. [Google Scholar]
  3. Chang, K.C.; Du, H.C. Efficient algorithms for layer assignment problem. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 1987, 6, 67–78. [Google Scholar] [CrossRef]
  4. Dehghan, A.; Mubarak, S. Binary quadratic programing for online tracking of hundreds of people in extremely crowded scenes. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 40, 568–581. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Laughhunn, D.J. Quadratic binary programming with application to capital-budgeting problems. Oper. Res. 1970, 18, 454–461. [Google Scholar] [CrossRef]
  6. Kizys, R.; Juan, A.; Sawik, B.; Calvet, L. A biased-randomized iterated local search algorithm for rich portfolio optimization. Appl. Sci. 2019, 9, 3509. [Google Scholar] [CrossRef] [Green Version]
  7. Huang, L.; Zhang, Q.; Sun, L.; Sheng, Z. Robustness analysis of iterative learning control for a class of mobile robot systems with channel noise. IEEE Access 2019, 7, 34711–34718. [Google Scholar] [CrossRef]
  8. Manoli, G.; Rossi, M.; Pasetto, D.; Deiana, R.; Ferraris, S.; Cassiani, G.; Putti, M. An iterative particle filter approach for coupled hydro-geophysical inversion of a controlled infiltration experiment. J. Comput. Phys. 2015, 283, 37–51. [Google Scholar] [CrossRef]
  9. Mesbah, A. Stochastic model predictive control with active uncertainty learning: A Survey on dual control. Ann. Rev. Control 2018, 45, 107–117. [Google Scholar] [CrossRef]
  10. Imani, M.; Braga-Neto, U. Maximum-Likelihood adaptive filter for partially-observed boolean dynamical systems. IEEE Trans. Signal Process. 2017, 65, 359–371. [Google Scholar] [CrossRef]
  11. Imani, M.R.; Dougherty, E.; Braga-Neto, U. Boolean Kalman filter and smoother under model uncertainty. Automatica 2020, 111, 108609. [Google Scholar] [CrossRef]
  12. Baldwin, T.L.; Mili, L.; Boisen, M.B. Power system observability with minimal phasor measurement placement. IEEE Trans. Power Syst. 1993, 8, 707–715. [Google Scholar] [CrossRef]
  13. Nuqui, R.F.; Phadke, A.G. Phasor measurement unit placement techniques for complete and incomplete observability. IEEE Trans. Power Deliv. 2005, 20, 2381–2388. [Google Scholar] [CrossRef]
  14. Xu, B.; Abur, A. Observability analysis and measurement placement for system with PMUs. IEEE PES Power Syst. Conf. Expos. 2004, 2, 943–946. [Google Scholar]
  15. Gou, B. Generalized integer linear programming formulation for optimal PMU placement. IEEE Trans. Power Syst. 2008, 23, 1099–1104. [Google Scholar] [CrossRef]
  16. Kavasseri, R.; Srinivasan, S.K. Joint placement of phasor and power flow measurements for observability of power systems. IEEE Trans. Power Syst. 2011, 26, 1929–1936. [Google Scholar] [CrossRef]
  17. Poljak, S.; Rendl, F. Solving the max-cut problem using eigenvalues. Discret. Appl. Math. 1995, 62, 249–278. [Google Scholar] [CrossRef] [Green Version]
  18. Rendl, F.; Rinaldi, G.; Wiegele, A. Solving Max-Cut to optimality by intersecting semidefinite and polyhedral relaxations. Lect. Notes Comput. 2007, 4513, 295–309. [Google Scholar] [CrossRef]
19. Ye, Y. A .699-approximation algorithm for Max-Bisection. Math. Program. 2001, 90, 101–111. [Google Scholar] [CrossRef]
  20. Pardalos, P.M.; Rodgers, G.P. Computational aspects of a branch and bound algorithm for quadratic zero-one programming. Computing 1990, 45, 131–144. [Google Scholar] [CrossRef]
  21. Li, D.; Sun, X.L.; Liu, C.L. An exact solution method for unconstrained quadratic 0-1 programming: A geometric approach. J. Glob. Optim. 2012, 52, 797–829. [Google Scholar] [CrossRef]
22. Gu, S.; Chen, X.Y.; Wang, L. Global optimization of binary quadratic programming: A neural network based algorithm and its FPGA implementation. Neural Process. Lett. 2019, 1–20. [Google Scholar] [CrossRef]
  23. Zhu, W.X. Penalty parameter for linearly constrained 0-1 quadratic programming. J. Optim. Theory Appl. 2003, 116, 229–239. [Google Scholar] [CrossRef]
  24. Katayama, K.; Narihisa, H. Performance of simulated annealing-based heuristic for the unconstrained binary quadratic programming problem. Eur. J. Oper. Res. 2001, 134, 103–119. [Google Scholar] [CrossRef]
  25. Jiang, D.; Zhu, S. A method to solve nonintersection constraint 0-1 quadratic programming model. J. Southwest Jiaotong Univ. 1997, 32, 667–671. [Google Scholar]
  26. Ranjbar, M.; Effati, S.; Miri, S.M. An artificial neural network for solving quadratic zero-one programming problems. Neurocomputing 2017, 235, 192–198. [Google Scholar] [CrossRef]
  27. Ping, W.; Weiqing, X. Binary ant colony algorithm with controllable search bias for unconstrained binary quadratic problem. In Proceedings of the 2012 International Conference on Electronics, Communications and Control, Zhoushan, China, 16–18 October 2012. [Google Scholar]
28. Gu, S.; Cui, R. Polynomial time solvable algorithm to linearly constrained binary quadratic programming problems with Q being a five-diagonal matrix. In Proceedings of the Fifth International Conference on Intelligent Control and Information Processing, Dalian, China, 18–20 August 2014; pp. 366–372. [Google Scholar]
  29. Gu, S.; Cui, R.; Peng, J. Polynomial time solvable algorithms to a class of unconstrained and linearly constrained binary quadratic programming problems. Neurocomputing 2016, 198, 171–179. [Google Scholar] [CrossRef] [Green Version]
  30. Gu, S.; Peng, J.; Cui, R. A polynomial time solvable algorithm to binary quadratic programming problems with Q being a seven-diagonal matrix and its neural network implementation. In Proceedings of the ISNN 2014—Advances in Neural Networks, Macao, China, 28 November–1 December 2014; pp. 338–346. [Google Scholar]
31. Gu, S.; Chen, X.Y. The basic algorithm for zero-one unconstrained quadratic programming problem with k-diagonal matrix. In Proceedings of the Twelfth International Conference on Advanced Computational Intelligence, Dali, China, 14–16 March 2020. [Google Scholar]
Figure 1. Flow chart of the zero-one unconstrained quadratic programming (01UQP) algorithm.
Figure 2. Calculation time with different dimensions when the diagonal number k = 9 , 13 , 19 , 23 .
Figure 3. Calculation time with different diagonal numbers when the dimension n = 20 , 40 , 60 , 80 .
Figure 4. The variation of calculation time with different dimensions n and diagonal number k.
Table 1. Maximum number of functions per state.

Situation              Maximum Number of Functions
Algorithm 2, Case 1    1
Algorithm 3, Case 1    2^(m−1)
Algorithm 3, Case 2    2^(m−1)
Table 2. Functions f1(s1).

s1   x1   f1(s1) (a1 = 1)
0    0    −41x2x3 + 16x2x4 − 34x2x5 − 62x3x4 − 27x3x5 + 76x3x6 − 81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 54x2 − 17x3 + 36x4 − 72x5 + 63x6 + 46x7 − 18x8
1    1    −41x2x3 + 16x2x4 − 34x2x5 − 62x3x4 − 27x3x5 + 76x3x6 − 81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 31x2 − 54x3 − 20x4 − 72x5 + 63x6 + 46x7 − 18x8 + 24
2    1    −41x2x3 + 16x2x4 − 34x2x5 − 62x3x4 − 27x3x5 + 76x3x6 − 81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 31x2 − 54x3 − 20x4 − 72x5 + 63x6 + 46x7 − 18x8 + 24
Table 3. Functions f3(s3).

s3   x3   f3(s3) (a3 = 3)
0    0    f3(s3) = f2(0)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 + 36x4 − 72x5 + 63x6 + 46x7 − 18x8
1    0    f3(s3) = f2(1)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 20x4 − 72x5 + 63x6 + 46x7 − 18x8 + 24
3    0    f3(s3) = f2(3)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 20x4 − 72x5 + 63x6 + 46x7 − 18x8 + 24
3    0    f3(s3) = f2(3)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 4x4 − 106x5 + 63x6 + 46x7 − 18x8 − 7
3    1    f3(s3) = f2(0)|x3=1 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 26x4 − 99x5 + 139x6 + 46x7 − 18x8 − 17
5    0    f3(s3) = f2(5)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 20x4 − 72x5 + 63x6 + 46x7 − 18x8 + 24
5    0    f3(s3) = f2(5)|x3=0 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 4x4 − 106x5 + 63x6 + 46x7 − 18x8 − 7
5    1    f3(s3) = f2(2)|x3=1 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 84x4 − 99x5 + 139x6 + 46x7 − 18x8 − 30
5    1    f3(s3) = f2(2)|x3=1 = −81x4x5 + 14x4x6 − 58x4x7 + 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 10x4 − 133x5 + 139x6 + 46x7 − 18x8 − 30
Table 4. Functions f4(s4).

s4   x4   f4(s4) (a4 = 2)
1    0    f4(s4) = f3(1)|x4=0 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 72x5 + 63x6 + 46x7 − 18x8 + 24
5    0    f4(s4) = f3(5)|x4=0 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 72x5 + 63x6 + 46x7 − 18x8 + 24
5    0    f4(s4) = f3(5)|x4=0 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 106x5 + 63x6 + 46x7 − 18x8 − 7
5    0    f4(s4) = f3(5)|x4=0 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 99x5 + 139x6 + 46x7 − 18x8 − 30
5    0    f4(s4) = f3(5)|x4=0 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 133x5 + 139x6 + 46x7 − 18x8 − 30
5    1    f4(s4) = f3(3)|x4=1 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 153x5 + 77x6 − 12x7 − 18x8 + 4
5    1    f4(s4) = f3(3)|x4=1 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 187x5 + 77x6 − 12x7 − 18x8 − 11
5    1    f4(s4) = f3(3)|x4=1 = 90x5x6 + 25x5x7 − 42x5x8 − 12x6x7 + 31x6x8 − 94x7x8 − 180x5 + 153x6 − 12x7 − 18x8 − 43
Table 5. Functions f5(s5).

s5   x5   f5(s5) (a5 = 4)
5    0    f5(s5) = f4(5)|x5=0 = −12x6x7 + 31x6x8 − 94x7x8 − 72x6 + 46x7 − 18x8 − 7
5    0    f5(s5) = f4(5)|x5=0 = −12x6x7 + 31x6x8 − 94x7x8 − 72x6 + 46x7 − 18x8 − 30
5    0    f5(s5) = f4(5)|x5=0 = −12x6x7 + 31x6x8 − 94x7x8 + 77x6 − 12x7 − 18x8 − 11
5    0    f5(s5) = f4(5)|x5=0 = −12x6x7 + 31x6x8 − 94x7x8 + 153x6 − 12x7 − 18x8 − 43
5    1    f5(s5) = f4(1)|x5=1 = −12x6x7 + 31x6x8 − 94x7x8 + 153x6 + 71x7 − 60x8 − 48
Table 6. Functions f8(s8).

s8   x8   f8(s8)
0    0    0
1    0    24
2    0    −54
2    1    −18
3    0    −17
3    1    6
4    0    −72
4    1    −72
5    0    −48
5    1    −66
6    0    −160
6    1    −132
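Tables 2–6 trace the basic algorithm's dynamic programming: at each stage t, the method keeps, for every reachable constraint state s_t, a small set of candidate functions of the not-yet-eliminated variables. For a k-diagonal Q with half-bandwidth m = (k − 1)/2, an equivalent way to organize the same recursion is to carry the last m binary decisions together with the running constraint sum as the DP state. The sketch below is a minimal illustration of that idea, not the paper's 01CQP implementation; the function name, the sparse-dictionary instance format, and the equality-constraint form are our own assumptions.

```python
def solve_banded_01qp(Q, c, a, b, m):
    """Minimize sum_i c[i]*x[i] + sum_{i<j} Q[(i,j)]*x[i]*x[j] over x in {0,1}^n,
    subject to sum_i a[i]*x[i] == b, where Q[(i,j)] may be nonzero only for
    0 < j - i <= m (a k-diagonal matrix with k = 2m + 1, m >= 1).

    DP state: (values of the last m assigned variables, partial constraint sum).
    Keeping only the cheapest partial assignment per state is valid because all
    remaining costs and the constraint depend on x only through the state.
    """
    n = len(c)
    # (window, sum) -> (cost so far, partial assignment)
    states = {((), 0): (0, ())}
    for t in range(n):
        nxt = {}
        for (win, s), (cost, xs) in states.items():
            for xt in (0, 1):
                step = c[t] * xt
                if xt:  # quadratic couplings reach back at most m positions
                    for d in range(1, min(m, t) + 1):
                        step += Q.get((t - d, t), 0) * win[-d]
                key = ((win + (xt,))[-m:], s + a[t] * xt)
                cand = (cost + step, xs + (xt,))
                if key not in nxt or cand[0] < nxt[key][0]:
                    nxt[key] = cand
        states = nxt
    feasible = [v for (win, s), v in states.items() if s == b]
    return min(feasible) if feasible else None
```

For a tiny tridiagonal instance (k = 3, m = 1) with Q = {(0, 1): 2, (1, 2): −3, (2, 3): 1}, c = [1, −2, 0, −1], a = [1, 1, 1, 1] and b = 2, the sketch returns the same minimum (−5, at x = (0, 1, 1, 0)) as exhaustive enumeration, while visiting only O(n · 2^m) states per constraint value.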
Table 7. Computation time of 01CQP for different diagonal numbers k with dimensions n = 10 to 100 (no entry where k exceeds the diagonals an n-dimensional matrix can have).

n        10      20      30      40      50      60      70      80      90      100
k = 5    0.0356  0.0678  0.1404  0.2039  0.3289  0.4102  0.4589  0.6942  0.7391  1.0060
k = 7    0.0336  0.0703  0.1419  0.2030  0.3097  0.4024  0.4839  0.6908  0.7475  1.0519
k = 9    0.0320  0.0638  0.1424  0.1851  0.3129  0.3640  0.4896  0.6831  0.7835  0.9771
k = 11   0.0335  0.0619  0.1470  0.2025  0.2749  0.3357  0.4155  0.5482  0.6621  0.8812
k = 13   0.0246  0.0482  0.1093  0.1577  0.2591  0.3500  0.4176  0.6029  0.7109  0.8872
k = 15   0.0185  0.0544  0.1138  0.1734  0.2561  0.3624  0.4469  0.5782  0.7356  0.8873
k = 17   0.0234  0.0549  0.1185  0.1749  0.2838  0.3766  0.4732  0.7220  0.8908  1.1064
k = 19   0.0397  0.0744  0.1692  0.2210  0.3944  0.4888  0.6059  0.9147  0.9886  1.2304
k = 21   –       0.0872  0.1797  0.2836  0.4656  0.5714  0.6995  1.1097  1.2544  1.7283
k = 23   –       0.1283  0.2640  0.4409  0.5499  0.9615  1.0942  1.3158  1.8381  1.8951
k = 25   –       0.1961  0.3673  0.6169  0.8295  1.3556  1.3235  2.0418  2.3717  4.2689
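Timings such as those in Table 7 can be gathered with a simple harness: generate a random instance whose quadratic matrix has the required k-diagonal band, run the solver, and record wall-clock time. The sketch below is illustrative only; the coefficient range [−100, 100], all names, and the exhaustive stand-in solver are our assumptions, not the paper's 01CQP implementation (which is what Table 7 actually measures).

```python
import random
import time
from itertools import product

def random_k_diagonal_instance(n, k, seed=0):
    """Random 0-1 QP instance whose quadratic matrix is k-diagonal:
    Q[(i, j)] is nonzero only for 0 < j - i <= m, with m = (k - 1) // 2."""
    m = (k - 1) // 2
    rng = random.Random(seed)
    Q = {(i, j): rng.randint(-100, 100)
         for i in range(n) for j in range(i + 1, min(i + m + 1, n))}
    c = [rng.randint(-100, 100) for _ in range(n)]
    return Q, c

def brute_force_min(Q, c):
    """Exhaustive minimizer over {0,1}^n; a stand-in for a real solver
    in this sketch, usable only for small n."""
    return min(sum(ci * xi for ci, xi in zip(c, x)) +
               sum(v * x[i] * x[j] for (i, j), v in Q.items())
               for x in product((0, 1), repeat=len(c)))

if __name__ == "__main__":
    Q, c = random_k_diagonal_instance(n=12, k=5, seed=1)
    t0 = time.perf_counter()
    best = brute_force_min(Q, c)
    print(f"minimum {best} found in {time.perf_counter() - t0:.4f} s")
```

Replacing brute_force_min with a polynomial-time solver for banded instances and sweeping n and k would reproduce the scaling behaviour that Figures 2–4 and Table 7 report for 01CQP.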
