Article

A Relaxed and Bound Algorithm Based on Auxiliary Variables for Quadratically Constrained Quadratic Programming Problem

1 School of Mathematics and Information Sciences, North Minzu University, Yinchuan 750021, China
2 Ningxia Province Cooperative Innovation Center of Scientific Computing and Intelligent Information Processing, North Minzu University, Yinchuan 750021, China
3 Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(2), 270; https://doi.org/10.3390/math10020270
Submission received: 22 December 2021 / Revised: 9 January 2022 / Accepted: 14 January 2022 / Published: 16 January 2022
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract: Quadratically constrained quadratic programs (QCQP), which often arise in engineering practice, management science, and other fields, are investigated in this paper. By introducing appropriate auxiliary variables, QCQP can be transformed into an equivalent problem (EP) with non-linear equality constraints. After these equality constraints are relaxed, a series of linear relaxation subproblems with auxiliary variables and bound constraints is generated, which determine valid lower bounds on the global optimal value of QCQP. To tighten the sub-rectangles and improve the ability to discard them, two rectangle-reduction strategies are employed. In addition, two ϵ-subproblem deletion rules are introduced to accelerate the convergence of the algorithm. On this basis, a relaxation-and-bound algorithm based on auxiliary variables is proposed for solving QCQP. Numerical experiments show that this algorithm is effective and feasible.

1. Introduction

In this paper, we investigate the following quadratically constrained quadratic program (QCQP):
$$\begin{aligned}
\min\ & f_0(x) = x^T Q_0 x + c_0^T x\\
\text{s.t.}\ & f_s(x) = x^T Q_s x + c_s^T x \le d_s,\quad s = 1, 2, \ldots, N,\\
& x \in D,
\end{aligned}$$
where Q_s = (q_{sij})_{n×n} ∈ ℝ^{n×n} is real symmetric, c_s = (c_{si})_{n×1} ∈ ℝ^n, d_s ∈ ℝ, and D = {x ∈ ℝ^n | Ax ≤ b} with A ∈ ℝ^{m×n}, b ∈ ℝ^m. Furthermore, we assume that the set D is non-empty and bounded and that the feasible region F = {x ∈ D | f_s(x) = x^T Q_s x + c_s^T x ≤ d_s, s = 1, 2, …, N} has at least one interior point (i.e., the Slater condition holds). Clearly, F ⊆ D.
Problem QCQP has a wide range of applications in fields such as wireless communication, networking, radar, and signal processing [1,2]. Moreover, solution methods for this problem can also be applied to nonlinear singular models [3] and nonlinear mathematical medical smoking models [4,5]. It is also often employed to construct models of management problems, such as the portfolio selection problem [6,7], the minimum distance problem [8], the location problem [9], the 0–1 quadratic programming problem [10], the maximum cut problem [11], the trust-region subproblem [12,13], etc. If all the matrices Q_s, s = 0, 1, 2, …, N, are positive semidefinite, QCQP is a convex quadratic program with convex quadratic constraints (CQPCQC), which can be reformulated as a second-order cone program (SOCP) solvable in polynomial time [14,15]. In other words, the problem CQPCQC is solvable in polynomial time. Of course, other special QCQP problems in references [11,16,17,18] can also be solved in polynomial time. In general, however, QCQP is NP-hard; its difficulty stems mainly from the non-convexity of the quadratic objective function and the possible non-connectivity of the feasible region. Therefore, most instances cannot be solved in polynomial time, and it is difficult to find a global optimal solution.
Over the decades, many scholars have studied QCQP problems and proposed global optimization methods, which generally include branch-and-bound methods [19,20,21], the outer approximation method [22], Lagrangian decomposition [23], Benders decomposition [24,25], and their hybrids. The branch-and-bound (BB) algorithm is a typical enumeration method [19,20,26], which usually obtains a new lower bound for the optimal value of the original problem and tries to update the upper bound by solving a relaxation subproblem in each enumeration. During the enumeration, the tightness of the relaxation subproblem is one of the main factors. Therefore, BB algorithms embedded with different types of relaxation problems have been investigated in much of the literature on QCQP. Among them, the semidefinite programming (SDP) relaxation method [27], which can be solved in polynomial time by interior point methods, has attracted the attention of many researchers. Another widely studied relaxation method is linear programming relaxation, such as the reformulation-linearization technique (RLT) [26], which was adopted in [28,29]. Based on the idea of RLT, a branch-and-cut algorithm for solving QCQP was proposed in [30]. To find a global approximate optimal solution of QCQP, the authors of [31] developed a branch-and-bound algorithm by combining the outer approximation method with linear relaxation subproblems. In [20,32], two branch-and-bound algorithms with linear relaxations, based on simple subdivisions of the feasible region, can also solve QCQP globally. By adopting quadratic surrogate functions, the authors of [33] convexify all quadratic inequality constraint functions and thus present a new algorithm based on the quadratic convex reformulation method. All the methods mentioned above can solve QCQP and its variants well.
In this paper, by introducing the auxiliary variable Y = xx^T, the problem QCQP is equivalently converted into a problem with non-linear equality constraints. After the equality constraints are relaxed, a linear relaxation subproblem with auxiliary variables and bound constraints is generated for QCQP, which determines a lower bound on the global optimal value of QCQP. Then, by combining two rectangle-reduction strategies and two ϵ-subproblem-deletion rules, a relaxation-and-bound algorithm for QCQP is proposed. Numerical experiments and analysis of the results show that the proposed algorithm is feasible and effective, and its computational performance is better than that of the software package BARON [34], so it is also a promising algorithm.
The rest of this paper is organized as follows. In Section 2, the construction of the linear relaxation subproblem with auxiliary variables and bound constraints is given. The rectangle-reduction strategies are presented in Section 3. In Section 4, the proposed algorithm is described and its finite convergence under a given precision is proved. Section 5 reports the numerical results, demonstrating the feasibility and effectiveness of the proposed algorithm. The last section concludes.

2. Bounded Relaxation Technique

By introducing the variable Y = xx^T = (x_i x_j)_{n×n}, QCQP can be reformulated as
$$(\mathrm{EP}):\quad \begin{aligned} \min\ & g_0(Y, x) = Q_0 \cdot Y + c_0^T x\\ \text{s.t.}\ & g_s(Y, x) = Q_s \cdot Y + c_s^T x \le d_s,\quad s = 1, 2, \ldots, N,\\ & Y = x x^T = (x_i x_j)_{n\times n},\quad x \in D, \end{aligned}$$
where “·” denotes the inner product of two matrices. Further, for each j = 1, 2, …, n, let x̲_j^0 = min_{x∈X} x_j and x̄_j^0 = max_{x∈X} x_j; then we can construct an initial rectangle H^0 = {x ∈ ℝ^n | x̲_j^0 ≤ x_j ≤ x̄_j^0, j = 1, 2, …, n} = ∏_{j=1}^n [x̲_j^0, x̄_j^0] that satisfies H^0 ⊇ X ⊇ F. Of course, x̲_j^0 and x̄_j^0 can be obtained by solving linear programs; if D is a more general bounded convex set, x̲_j^0 and x̄_j^0 must be obtained by solving convex programs.
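For readers who wish to reproduce this step, the following minimal Python sketch (our illustration, not the authors' MATLAB code) computes the 2n bounds with scipy.optimize.linprog; it assumes X is taken to be the polyhedron D = {x | Ax ≤ b}, which is an assumption on our part since the paper only requires H^0 ⊇ X ⊇ F.

```python
# Hypothetical sketch: initial rectangle H0 from 2n linear programs over D = {x | Ax <= b}.
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(A, b):
    """Return (x_lo, x_up) with x_lo[j] = min_{Ax<=b} x_j and x_up[j] = max_{Ax<=b} x_j."""
    n = A.shape[1]
    x_lo, x_up = np.empty(n), np.empty(n)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        # bounds=(None, None) frees linprog's default nonnegativity restriction;
        # max x_j is computed as -min(-x_j).
        lo = linprog(e, A_ub=A, b_ub=b, bounds=[(None, None)] * n, method="highs")
        up = linprog(-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n, method="highs")
        x_lo[j], x_up[j] = lo.fun, -up.fun
    return x_lo, x_up
```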
Next, we mainly study the problem EP, whose non-convexity lies entirely in the nonlinear equality constraint Y = xx^T. Suppose H = ∏_{j=1}^n [x̲_j, x̄_j] denotes H^0 or any sub-rectangle of H^0; then each element y_ij of Y = (y_ij)_{n×n} satisfies y̲_ij ≤ y_ij = x_i x_j ≤ ȳ_ij, where y̲_ij = min{x̲_i x̲_j, x̲_i x̄_j, x̲_j x̄_i, x̄_i x̄_j} and ȳ_ij = max{x̲_i x̲_j, x̲_i x̄_j, x̲_j x̄_i, x̄_i x̄_j}. Thus, we obtain the linear relaxation problem of EP over H:
$$(\mathrm{LRP}_H):\quad \begin{aligned} \min\ & g_0(Y, x) = Q_0 \cdot Y + c_0^T x\\ \text{s.t.}\ & g_s(Y, x) = Q_s \cdot Y + c_s^T x \le d_s,\quad s = 1, 2, \ldots, N,\\ & \underline{y}_{ij} \le y_{ij} \le \bar{y}_{ij},\quad i, j = 1, 2, \ldots, n,\\ & x \in D \cap H, \end{aligned}$$
which, of course, is also a relaxation of the following subproblem of QCQP on H:
$$(\mathrm{QCQP}_H):\quad \begin{aligned} \min\ & f_0(x) = x^T Q_0 x + c_0^T x\\ \text{s.t.}\ & f_s(x) = x^T Q_s x + c_s^T x \le d_s,\quad s = 1, 2, \ldots, N,\\ & x \in D \cap H. \end{aligned}$$
Obviously, any feasible solution of QCQP_H is feasible for QCQP.
For convenience, we denote the feasible regions of QCQP_H and LRP_H by F_H and F̄_H, respectively. In addition, throughout this paper, ϵ > 0 denotes the convergence tolerance of the proposed algorithm, and ‖·‖ denotes the Euclidean norm.
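As a concrete illustration of this construction, the sketch below (hypothetical helper names corner_bounds and solve_lrp; the data layout is our assumption, and we assume N ≥ 1 constraints and m ≥ 1 linear inequalities) assembles LRP_H as a single LP in the stacked variable z = (vec(Y), x) and solves it with scipy.optimize.linprog:

```python
# Hypothetical sketch of LRP_H with variables z = (vec(Y), x), Y in R^{n x n}.
import numpy as np
from scipy.optimize import linprog

def corner_bounds(x_lo, x_up):
    """Bounds y_lo[i,j] <= x_i*x_j <= y_up[i,j] from the four corner products."""
    corners = np.stack([np.outer(x_lo, x_lo), np.outer(x_lo, x_up),
                        np.outer(x_up, x_lo), np.outer(x_up, x_up)])
    return corners.min(axis=0), corners.max(axis=0)

def solve_lrp(Q, c, d, A, b, x_lo, x_up):
    """Q: list of N+1 symmetric matrices (Q[0] is the objective); c: list of N+1 vectors."""
    n = len(x_lo)
    y_lo, y_up = corner_bounds(x_lo, x_up)
    obj = np.concatenate([Q[0].ravel(), c[0]])            # Q0 . Y + c0^T x
    A_ub = np.array([np.concatenate([Qs.ravel(), cs])     # Qs . Y + cs^T x <= ds
                     for Qs, cs in zip(Q[1:], c[1:])])
    A_ub = np.vstack([A_ub, np.hstack([np.zeros((A.shape[0], n * n)), A])])  # Ax <= b
    b_ub = np.concatenate([d, b])
    bounds = [(lo, up) for lo, up in zip(y_lo.ravel(), y_up.ravel())] \
           + [(lo, up) for lo, up in zip(x_lo, x_up)]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res  # res.fun is the lower bound; res.x splits into (vec(Y), x)
```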
The following five lemmas illustrate the relationship between feasible solutions and optimal solutions of LRP_H and QCQP_H.
Lemma 1.
If (Ȳ, x̄) is a feasible solution of LRP_H with Ȳ = x̄x̄^T, then x̄ is feasible for QCQP_H.
Proof. 
Since (Ȳ, x̄) is a feasible solution of LRP_H with Ȳ = x̄x̄^T, it follows that x̄^T Q_s x̄ + c_s^T x̄ = Q_s · Ȳ + c_s^T x̄ ≤ d_s for s = 1, 2, …, N. This means x̄ ∈ F_H.    □
Lemma 2.
If (Ȳ, x̄) is an optimal solution of LRP_H with Ȳ = x̄x̄^T, then x̄ is a global optimal solution of QCQP_H.
Proof. 
Since (Ȳ, x̄) is an optimal solution of LRP_H with Ȳ = x̄x̄^T, we have g_0(Y, x) ≥ g_0(Ȳ, x̄) = f_0(x̄) for any (Y, x) ∈ F̄_H. Let Ỹ = x̃x̃^T for any x̃ ∈ F_H; then (Ỹ, x̃) ∈ F̄_H is obvious. Thus, for any x̃ ∈ F_H, it follows that f_0(x̃) = g_0(Ỹ, x̃) ≥ g_0(Ȳ, x̄) = f_0(x̄), which means x̄ is an optimal solution to QCQP_H.    □
Definition 1.
If there is a (Ȳ, x̄) ∈ F̄_H such that x̄ ∈ F_H and f_0(x̄) ≤ min_{x∈F_H} f_0(x) + ϵ, then x̄ is called an ϵ-globally optimal solution for QCQP_H.
Definition 2.
If there is a (Ȳ, x̄) ∈ F̄_H such that |f_s(x̄) − g_s(Ȳ, x̄)| ≤ ϵ for s = 1, …, N and f_0(x̄) ≤ min_{x∈F_H} f_0(x) + ϵ, then x̄ is called a forced ϵ-globally optimal solution for QCQP_H.
Definition 3.
If (Ȳ, x̄) is an optimal solution of LRP_H with |f_s(x̄) − g_s(Ȳ, x̄)| ≤ ϵ for s = 0, 1, …, N, then LRP_H is called a forced ϵ-approximation problem for QCQP_H.
In Definition 1, (Ȳ, x̄) is a feasible solution of LRP_H. In practice, an exactly optimal solution is generally unavailable, so we only need to find an approximate optimal solution under the required tolerance ϵ > 0. If a feasible point x̄ satisfies the inequality f_0(x̄) ≤ min_{x∈F_H} f_0(x) + ϵ, it means that f_0(x̄) is close enough to the optimal value min_{x∈F_H} f_0(x) of the problem QCQP_H to be usable under the given tolerance. In Definition 2, |f_s(x̄) − g_s(Ȳ, x̄)| ≤ ϵ for every s = 1, …, N means that x̄ is very close to the feasible region F_H of QCQP_H. On this basis, if such a solution x̄ also satisfies the inequality f_0(x̄) ≤ min_{x∈F_H} f_0(x) + ϵ, we call it a forced ϵ-globally optimal solution, which is weaker than an ϵ-globally optimal solution. For problems with more complex feasible regions, feasible solutions may not always be available, so accepting an infeasible forced ϵ-globally optimal solution may be a better choice to save computation. It can be seen from Definition 3 that when |f_s(x̄) − g_s(Ȳ, x̄)| ≤ ϵ for s = 0, 1, …, N, not only is x̄ close to the feasible region of QCQP_H, but the optimal value of the relaxation problem LRP_H is also close to the objective value f_0(x̄) of the problem QCQP_H. In addition, it is not difficult to deduce that f_0(x̄) ≤ g_0(Ȳ, x̄) + ϵ = min_{(Y,x)∈F̄_H} g_0(Y, x) + ϵ ≤ min_{(Y,x)∈Ḡ_H} g_0(Y, x) + ϵ = min_{x∈F_H} f_0(x) + ϵ, where Ḡ_H denotes the feasible region of EP_H and satisfies Ḡ_H ⊆ F̄_H.
Lemma 3.
Let (Ȳ, x̄) be an optimal solution to LRP_H. If ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}, then LRP_H is a forced ϵ-approximation problem for QCQP_H.
Proof. 
Since ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}, for each s = 0, 1, …, N we have
$$|f_s(\bar{x}) - g_s(\bar{Y}, \bar{x})| = |Q_s \cdot \bar{x}\bar{x}^T + c_s^T\bar{x} - Q_s \cdot \bar{Y} - c_s^T\bar{x}| = |Q_s \cdot (\bar{x}\bar{x}^T - \bar{Y})| \le \|Q_s\|\,\|\bar{x}\bar{x}^T - \bar{Y}\| \le \frac{\epsilon\,\|Q_s\|}{\max\{\max\{\|Q_s\| : s = 0, 1, \ldots, N\},\ 1\}} \le \frac{\epsilon\,\|Q_s\|}{\max\{\|Q_s\|,\ 1\}} \le \epsilon. \tag{1}$$
Therefore, LRP_H is a forced ϵ-approximation problem for QCQP_H.    □
Lemma 4.
Let (Ȳ, x̄) be an optimal solution to LRP_H. If ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}, then x̄ is a forced ϵ-globally optimal solution for QCQP_H.
Proof. 
Since ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}, it follows from Formula (1) that
$$|f_s(\bar{x}) - g_s(\bar{Y}, \bar{x})| = |Q_s \cdot (\bar{x}\bar{x}^T - \bar{Y})| \le \frac{\epsilon\,\|Q_s\|}{\max\{\|Q_s\|,\ 1\}} \le \epsilon,\quad s = 0, 1, \ldots, N. \tag{2}$$
Let x^H ∈ arg min_{x∈F_H} f_0(x) and Y^H = x^H(x^H)^T; then g_0(Y^H, x^H) = f_0(x^H) and (Y^H, x^H) ∈ F̄_H are obvious. It follows from the optimality of (Ȳ, x̄) for LRP_H that
$$f_0(x^H) = g_0(Y^H, x^H) \ge g_0(\bar{Y}, \bar{x}). \tag{3}$$
Besides, it follows from Formula (2) that
$$-\epsilon \le f_0(\bar{x}) - g_0(\bar{Y}, \bar{x}) \le \epsilon. \tag{4}$$
By combining Formulas (3) and (4), we have
$$f_0(\bar{x}) \le g_0(\bar{Y}, \bar{x}) + \epsilon \le f_0(x^H) + \epsilon = \min_{x\in F_H} f_0(x) + \epsilon. \tag{5}$$
Thus, Formulas (2) and (5) indicate that x̄ is a forced ϵ-globally optimal solution for QCQP_H.    □
Lemma 3 gives a sufficient condition for LRP_H to be a forced ϵ-approximation problem of QCQP_H; under this condition, Lemma 4 states that the optimal solution of problem LRP_H also provides a forced ϵ-globally optimal solution for QCQP_H.
Lemma 5.
Let (Ȳ, x̄) be an optimal solution to LRP_H. If ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{‖Q_0‖, 1} and x̄ ∈ F_H, then x̄ is an ϵ-globally optimal solution for QCQP_H.
Proof. 
Since ‖Ȳ − x̄x̄^T‖ ≤ ϵ / max{‖Q_0‖, 1} and x̄ ∈ F_H, it is easily deduced that
$$f_0(\bar{x}) - g_0(\bar{Y}, \bar{x}) = Q_0 \cdot (\bar{x}\bar{x}^T - \bar{Y}) \le \frac{\epsilon\,\|Q_0\|}{\max\{\|Q_0\|,\ 1\}} \le \epsilon. \tag{6}$$
Let x^H ∈ arg min_{x∈F_H} f_0(x) and Y^H = x^H(x^H)^T; then g_0(Y^H, x^H) = f_0(x^H) and (Y^H, x^H) ∈ F̄_H. It follows from the optimality of (Ȳ, x̄) for LRP_H that
$$f_0(x^H) = g_0(Y^H, x^H) \ge g_0(\bar{Y}, \bar{x}). \tag{7}$$
Therefore, combining Formulas (6) and (7) once again yields (5), which means that x̄ is an ϵ-globally optimal solution for QCQP_H.    □
Lemma 5 gives a sufficient condition under which the optimal solution of LRP_H provides an ϵ-globally optimal solution for QCQP_H.
Remark 1.
In our proposed algorithm (see Algorithm 3), over a certain rectangle H, once an ϵ-globally optimal solution or a forced ϵ-globally optimal solution of the subproblem QCQP_H has been found, deleting H saves computational cost, because both kinds of solutions are already very close to the global optimal solution of QCQP_H.

3. Rectangle-Reduction Strategy

In this section, two rectangle-reduction strategies are given to improve the computational efficiency of the algorithm as much as possible.

3.1. Rectangle Reduction Technique Based on Linear Constraints

For any sub-rectangle H = ∏_{j=1}^n [x̲_j, x̄_j] ⊆ H^0, we assume that all linear constraints of the problem QCQP_H can be written as
$$\sum_{j=1}^n a_{lj} x_j \le b_l,\quad l \in \{1, 2, \ldots, m\}.$$
Below, the pseudo-code of the rectangle-reduction technique based on linear constraints (RRTLC) from [21] is given (see Algorithm 1).
Algorithm 1 Rectangle-Reduction Technique based on Linear Constraints (RRTLC)
Require: H = ∏_{j=1}^n [x̲_j, x̄_j].
1: Set I := {1, 2, …, m}.
2: for l = 1 → m do
3:   Compute rU_l := Σ_{j=1}^n max{a_{lj} x̲_j, a_{lj} x̄_j} and rL_l := Σ_{j=1}^n min{a_{lj} x̲_j, a_{lj} x̄_j}.
4:   if rL_l > b_l then
5:     Problem QCQP_H has no feasible solution; delete the rectangle H and stop.
6:   end if
7:   if rU_l ≤ b_l then
8:     Set I := I \ {l} and delete the l-th linear inequality constraint from QCQP_H.
9:   end if
10:  if rU_l > b_l ≥ rL_l then
11:    for j = 1 → n do
12:      if a_{lj} > 0 then
13:        x̄_j := min{x̄_j, (b_l − rL_l + min{a_{lj} x̲_j, a_{lj} x̄_j}) / a_{lj}}
14:      end if
15:      if a_{lj} < 0 then
16:        x̲_j := max{x̲_j, (b_l − rU_l + max{a_{lj} x̲_j, a_{lj} x̄_j}) / a_{lj}}
17:      end if
18:    end for
19:  end if
20: end for
21: return H = ∏_{j=1}^n [x̲_j, x̄_j] or ∅.
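A minimal Python rendering of Algorithm 1 (our reading of the pseudo-code, not the authors' implementation) might look as follows; it returns the reduced bounds, or None when the rectangle can be deleted.

```python
import numpy as np

def rrtlc(A, b, x_lo, x_up):
    """Rectangle reduction from the linear constraints A x <= b (Algorithm 1).
    Returns reduced (x_lo, x_up), or None if QCQP_H is infeasible on the box."""
    x_lo, x_up = x_lo.copy(), x_up.copy()
    for l in range(A.shape[0]):
        lo_terms = np.minimum(A[l] * x_lo, A[l] * x_up)   # min{a_lj x_lo_j, a_lj x_up_j}
        up_terms = np.maximum(A[l] * x_lo, A[l] * x_up)
        rL, rU = lo_terms.sum(), up_terms.sum()
        if rL > b[l]:
            return None                                   # constraint l cannot be met: delete H
        if rU <= b[l]:
            continue                                      # constraint l is redundant on H
        for j in range(A.shape[1]):
            if A[l, j] > 0:
                x_up[j] = min(x_up[j], (b[l] - rL + lo_terms[j]) / A[l, j])
            elif A[l, j] < 0:
                x_lo[j] = max(x_lo[j], (b[l] - rU + up_terms[j]) / A[l, j])
    return x_lo, x_up
```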

3.2. Rectangle Reduction Technique Based on Quadratic Constraints

To further reduce or delete H = ∏_{j=1}^n [x̲_j, x̄_j], the following rectangle-reduction theorem based on quadratic constraints is given.
Theorem 1.
Let γ_{si} = d_s − η_s + min{c_{si} x̲_i, c_{si} x̄_i} for s = 1, 2, …, N and i = 1, 2, …, n. Then, for any s ∈ {1, 2, …, N}: if γ_{si} < min{c_{si} x̲_i, c_{si} x̄_i}, there is no optimal solution to problem QCQP over H; otherwise, if there is an index i ∈ {1, 2, …, n} such that c_{si} > 0 and γ_{si} < c_{si} x̄_i, there is no optimal solution to problem QCQP over Ĥ; if there is an index i ∈ {1, 2, …, n} such that c_{si} < 0 and γ_{si} < c_{si} x̲_i, there is no optimal solution to problem QCQP over H̃, where
$$\eta_s = \sum_{i=1}^n \sum_{j=1}^n \min\{q_{sij}\underline{y}_{ij},\ q_{sij}\bar{y}_{ij}\} + \sum_{i=1}^n \min\{c_{si}\underline{x}_i,\ c_{si}\bar{x}_i\},$$
$$\underline{y}_{ij} = \min\{\underline{x}_i\underline{x}_j, \underline{x}_i\bar{x}_j, \underline{x}_j\bar{x}_i, \bar{x}_i\bar{x}_j\},\qquad \bar{y}_{ij} = \max\{\underline{x}_i\underline{x}_j, \underline{x}_i\bar{x}_j, \underline{x}_j\bar{x}_i, \bar{x}_i\bar{x}_j\},$$
$$\hat{H} = \prod_{j=1}^n \hat{H}_j\ \text{with}\ \hat{H}_j = \begin{cases} [\underline{x}_j, \bar{x}_j], & j \ne i,\\ \left(\tfrac{\gamma_{si}}{c_{si}}, \bar{x}_i\right] \cap [\underline{x}_i, \bar{x}_i], & j = i, \end{cases}\qquad \tilde{H} = \prod_{j=1}^n \tilde{H}_j\ \text{with}\ \tilde{H}_j = \begin{cases} [\underline{x}_j, \bar{x}_j], & j \ne i,\\ \left[\underline{x}_i, \tfrac{\gamma_{si}}{c_{si}}\right) \cap [\underline{x}_i, \bar{x}_i], & j = i. \end{cases}$$
Proof. 
If γ_{si} < min{c_{si} x̲_i, c_{si} x̄_i}, the inequality η_s > d_s is obvious. Therefore, problem QCQP_H has no feasible solution.
For any x ∈ Ĥ and any s ∈ {1, 2, …, N}, there must be an index i ∈ {1, 2, …, n} such that γ_{si}/c_{si} < x_i ≤ x̄_i. It follows from c_{si} > 0 that γ_{si} < c_{si} x_i. Therefore, by the definition of γ_{si} and η_s, it can be concluded that d_s < η_s − c_{si} x̲_i + c_{si} x_i ≤ Σ_{i=1}^n Σ_{j=1}^n q_{sij} x_i x_j + Σ_{i=1}^n c_{si} x_i, which means that for any x ∈ Ĥ, the s-th quadratic constraint of problem QCQP is violated.
For any x ∈ H̃ and any s ∈ {1, 2, …, N}, there must be an index i ∈ {1, 2, …, n} such that x̲_i ≤ x_i < γ_{si}/c_{si}. It follows from c_{si} < 0 that γ_{si} < c_{si} x_i. Similarly, we have d_s < η_s − c_{si} x̄_i + c_{si} x_i ≤ Σ_{i=1}^n Σ_{j=1}^n q_{sij} x_i x_j + Σ_{i=1}^n c_{si} x_i, which means that for any x ∈ H̃, the s-th quadratic constraint of problem QCQP is violated.    □
Next, by using Theorem 1, the pseudo-code of the rectangle-reduction technique based on quadratic constraints (RRTQC) is given (see Algorithm 2).
Algorithm 2 Rectangle-Reduction Technique based on Quadratic Constraints (RRTQC)
Require: H = ∏_{j=1}^n [x̲_j, x̄_j].
1: for i = 1 → n do
2:   for j = 1 → n do
3:     Compute ω_ij = {x̲_i x̲_j, x̲_i x̄_j, x̲_j x̄_i, x̄_i x̄_j}, y̲_ij = min ω_ij, ȳ_ij = max ω_ij.
4:   end for
5: end for
6: for s = 1 → N do
7:   Compute η_s = Σ_{i=1}^n Σ_{j=1}^n min{q_{sij} y̲_ij, q_{sij} ȳ_ij} + Σ_{i=1}^n min{c_{si} x̲_i, c_{si} x̄_i}.
8:   if η_s > d_s then
9:     Problem QCQP_H has no feasible solution; delete the rectangle H and stop.
10:  else
11:    for i = 1 → n do
12:      if c_{si} > 0 then
13:        x̄_i := min{γ_{si}/c_{si}, x̄_i}
14:      end if
15:      if c_{si} < 0 then
16:        x̲_i := max{γ_{si}/c_{si}, x̲_i}
17:      end if
18:    end for
19:  end if
20: end for
21: return H = ∏_{j=1}^n [x̲_j, x̄_j] or ∅.
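As with Algorithm 1, a minimal Python rendering of Algorithm 2 (our reading of the pseudo-code, not the authors' implementation) is sketched below; Q, c, d hold the data of the constraints s = 1, …, N.

```python
import numpy as np

def rrtqc(Q, c, d, x_lo, x_up):
    """Rectangle reduction from the quadratic constraints x^T Q_s x + c_s^T x <= d_s
    (Algorithm 2). Returns reduced (x_lo, x_up), or None if the box can be deleted."""
    x_lo, x_up = x_lo.copy(), x_up.copy()
    corners = np.stack([np.outer(x_lo, x_lo), np.outer(x_lo, x_up),
                        np.outer(x_up, x_lo), np.outer(x_up, x_up)])
    y_lo, y_up = corners.min(axis=0), corners.max(axis=0)
    for Qs, cs, ds in zip(Q, c, d):
        lin_lo = np.minimum(cs * x_lo, cs * x_up)          # min{c_si x_lo_i, c_si x_up_i}
        eta = np.minimum(Qs * y_lo, Qs * y_up).sum() + lin_lo.sum()
        if eta > ds:
            return None                                    # eta underestimates f_s on H: delete H
        gamma = ds - eta + lin_lo                          # gamma_si for i = 1..n
        pos, neg = cs > 0, cs < 0
        x_up[pos] = np.minimum(gamma[pos] / cs[pos], x_up[pos])
        x_lo[neg] = np.maximum(gamma[neg] / cs[neg], x_lo[neg])
    return x_lo, x_up
```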

4. Algorithm and Its Convergence

By embedding the two rectangle-reduction algorithms given in the previous section into a branch-and-bound scheme, we develop a new global optimization algorithm for solving QCQP. In each iteration of the branch-and-bound algorithm, one or two new linear relaxation subproblems are generated whose optimal values are not lower than the current lower bound; therefore, the lower bound does not decrease in the current iteration. The upper bound is updated by means of the solutions of the linear relaxation subproblems, and it is not difficult to conclude that the upper bound does not increase in the current iteration. Based on this idea, we give the pseudo-code of the relaxation-and-bound algorithm (RBA) in Algorithm 3 below.
Lemma 6.
In the algorithm RBA above, let (Y^k, x^k) be an optimal solution to problem LRP_{H^k} with H^k ⊆ H^0. There are three conclusions as follows:
(a) if ‖Y^k − x^k(x^k)^T‖ = 0, then x^k is a global optimal solution to problem QCQP_{H^k};
(b) if ‖Y^k − x^k(x^k)^T‖ ≤ ϵ / max{‖Q_0‖, 1} and x^k ∈ F_{H^k}, then x^k is an ϵ-globally optimal solution to problem QCQP_{H^k};
(c) if ‖Y^k − x^k(x^k)^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}, then x^k is a forced ϵ-globally optimal solution to problem QCQP_{H^k}.
Proof. 
(a) Since ‖Y^k − x^k(x^k)^T‖ = 0, we have Y^k = x^k(x^k)^T. It follows from Lemma 2 that x^k is a global optimal solution to problem QCQP_{H^k}. Conclusions (b) and (c) are derived from Lemmas 5 and 4, respectively, and are not restated here.    □
A further explanation of the pseudo-code of the algorithm RBA is given in Remarks 2–8 below.
Remark 2.
The above algorithm is based on the branch-and-bound framework, and all branching operations are performed according to the standard bisection method described in line 18.
Remark 3.
The termination criterion of the algorithm is U_k − L_k ≤ ϵ, where U_k is the upper bound and L_k the lower bound. When the condition in line 16 is violated, the algorithm stops iterating and the termination criterion is met.
Remark 4.
If the condition in line 28 is violated, the node of the branch-and-bound tree corresponding to subproblem LRP_{H^{kι}} is empty, so it is meaningless to consider LRP_{H^{kι}} further.
Remark 5.
If the condition in line 30 is violated, the optimal value L_{kι} of subproblem LRP_{H^{kι}} is larger than the current upper bound U_k, and the node corresponding to LRP_{H^{kι}} is not considered further, which implies the execution of a pruning operation; besides, even if x^{kι} is feasible (i.e., x^{kι} ∈ F), it cannot be used to update the upper bound U_k, because in this subproblem f_0(x^{kι}) ≥ L_{kι} > U_k.
Algorithm 3 Relaxation-and-Bound Algorithm (RBA)
Require: A QCQP instance and an error tolerance ϵ > 0.
1: Construct the initial rectangle H^0 = ∏_{j=1}^n [x̲_j^0, x̄_j^0] by solving the 2n linear programs x̲_j^0 = min_{x∈X} x_j and x̄_j^0 = max_{x∈X} x_j for each j = 1, 2, …, n; in this process, all feasible solutions of problem QCQP obtained are stored in the set W; if there is no feasible solution for QCQP, set W = ∅.
2: if W = ∅ then set U_0 = +∞,
3: else set U_0 = min{f_0(x) : x ∈ W}, x̂^0 ∈ {x ∈ ℝ^n | f_0(x) = U_0, x ∈ W}, Ŷ^0 = x̂^0(x̂^0)^T.
4: end if
5: for i = 1 → n do
6:   for j = 1 → n do
7:     Compute ω_ij^0 = {x̲_i^0 x̲_j^0, x̲_i^0 x̄_j^0, x̲_j^0 x̄_i^0, x̄_i^0 x̄_j^0}, y̲_ij^0 = min ω_ij^0, ȳ_ij^0 = max ω_ij^0.
8:   end for
9: end for
10: Solve LRP_{H^0} to obtain its optimal solution (Y^0, x^0) and its optimal value L_0.
11: if x^0 ∈ F then let Ū = f_0(x^0).
12:   if Ū < U_0 then set U_0 = Ū, x̂^0 = x^0, Ŷ^0 = Y^0.
13:   end if
14: end if
15: Set T = ∅, k := 0.
16: while U_k − L_k > ϵ do
17:   Set id = arg max{x̄_j^k − x̲_j^k : j = 1, 2, …, n} and let ẍ_id = ½(x̲_id^k + x̄_id^k).
18:   Construct two subrectangles:
     H^{k1} = ∏_{j=1}^{id−1} [x̲_j^k, x̄_j^k] × [x̲_id^k, ẍ_id] × ∏_{j=id+1}^n [x̲_j^k, x̄_j^k],
     H^{k2} = ∏_{j=1}^{id−1} [x̲_j^k, x̄_j^k] × [ẍ_id, x̄_id^k] × ∏_{j=id+1}^n [x̲_j^k, x̄_j^k].
19:   for ι = 1 → 2 do
20:     Compress or delete the rectangle H^{kι} by using the algorithms RRTLC and RRTQC.
21:     if H^{kι} ≠ ∅ then
22:       for i = 1 → n do
23:         for j = 1 → n do
24:           Compute ω_ij^{kι} = {x̲_i^{kι} x̲_j^{kι}, x̲_i^{kι} x̄_j^{kι}, x̲_j^{kι} x̄_i^{kι}, x̄_i^{kι} x̄_j^{kι}}, y̲_ij^{kι} = min ω_ij^{kι}, ȳ_ij^{kι} = max ω_ij^{kι}.
25:         end for
26:       end for
27:       Solve LRP_{H^{kι}} to determine the feasibility of this problem.
28:       if LRP_{H^{kι}} is feasible then
29:         obtain its optimal solution (Y^{kι}, x^{kι}) and optimal value L_{kι}.
30:         if L_{kι} ≤ U_k then
31:           if x^{kι} ∈ F then let Ū = f_0(x^{kι}).
32:             if Ū < U_k then set U_k = Ū, x̂^k = x^{kι}, Ŷ^k = Y^{kι}.
33:             end if
34:             if ‖Y^{kι} − x^{kι}(x^{kι})^T‖ > ϵ / max{‖Q_0‖, 1} then
35:               Put {H^{kι}, L_{kι}, (Y^{kι}, x^{kι})} into T.
36:             end if
37:           else
38:             if ‖Y^{kι} − x^{kι}(x^{kι})^T‖ > ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1} then
39:               Put {H^{kι}, L_{kι}, (Y^{kι}, x^{kι})} into T.
40:             else
41:               let Ũ = f_0(x^{kι})
42:               if Ũ < U_k then set U_k = Ũ, x̂^k = x^{kι}, Ŷ^k = Y^{kι}.
43:               end if
44:             end if
45:           end if
46:         end if
47:       end if
48:     end if
49:   end for
50:   Set k ← k + 1; L_k = min{L | {·, L, ·} ∈ T}.
51:   Choose a subproblem {H^k, L_k, (Y^k, x^k)} such that H^k = ∏_{j=1}^n [x̲_j^k, x̄_j^k] ∈ {H | {H, L_k, (Y^k, x^k)} ∈ T}.
52:   Set T = T \ {H^k, L_k, (Y^k, x^k)}.
53: end while
54: return (Ŷ^k, x̂^k) and U_k.
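For concreteness, the longest-edge bisection performed in lines 17–18 of Algorithm 3 can be sketched as follows (a hypothetical Python helper of our own, not from the paper):

```python
import numpy as np

def bisect_longest_edge(x_lo, x_up):
    """Split H along its longest edge (lines 17-18 of Algorithm 3).
    Returns the two child rectangles as (lo, up) pairs."""
    i = int(np.argmax(x_up - x_lo))          # index of the longest edge
    mid = 0.5 * (x_lo[i] + x_up[i])
    lo1, up1 = x_lo.copy(), x_up.copy()      # left child: [x_lo_i, mid]
    lo2, up2 = x_lo.copy(), x_up.copy()      # right child: [mid, x_up_i]
    up1[i] = mid
    lo2[i] = mid
    return (lo1, up1), (lo2, up2)
```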
Remark 6.
According to lines 34–35 of the algorithm RBA above, only the nodes whose solutions are ϵ-globally optimal are deleted; likewise, lines 38–39 imply that only the nodes with forced ϵ-globally optimal solutions are deleted. When the storage steps in these two places are not executed, it is exactly what we call the "ϵ-subproblem-deletion rules" at work.
Remark 7.
If the condition in line 32 or line 42 is satisfied, the updating of the upper bound is performed; the updating of the lower bound is performed in line 50.
Remark 8.
Line 51 is the node-selection operation for the next iteration. Here, we adopt the node corresponding to the first subproblem {H^k, L_k, (Y^k, x^k)} with H^k = ∏_{j=1}^n [x̲_j^k, x̄_j^k] ∈ {H | {H, L_k, (Y^k, x^k)} ∈ T}.
To ensure that the two ϵ-subproblem-deletion rules mentioned in Remark 6 can be realized, it is necessary that, as the rectangle H^k ⊆ H^0 is gradually refined, ‖Y^k − x^k(x^k)^T‖ gradually approaches zero, where (Y^k, x^k) is an optimal solution to problem LRP_{H^k}. For this reason, the following lemma is given.
Lemma 7.
For any H^k ⊆ H^0, let δ(H^k) = max{x̄_j^k − x̲_j^k : j = 1, 2, …, n} for k ∈ ℕ. Let (Y^k, x^k) be an optimal solution to problem LRP_{H^k}; then ‖Y^k − x^k(x^k)^T‖ → 0 as δ(H^k) → 0.
Proof. 
Since (Y^k, x^k) is an optimal solution to problem LRP_{H^k} with H^k ⊆ H^0, let ω_ij^k = {x̲_i^k x̲_j^k, x̲_i^k x̄_j^k, x̲_j^k x̄_i^k, x̄_i^k x̄_j^k} for any i, j ∈ {1, 2, …, n}; it follows that
$$\max\{\underline{x}_i^k\underline{x}_j^k - \omega_{ij}^k\} = \max\{0,\ \underline{x}_i^k(\underline{x}_j^k - \bar{x}_j^k),\ \underline{x}_j^k(\underline{x}_i^k - \bar{x}_i^k),\ \underline{x}_i^k(\underline{x}_j^k - \bar{x}_j^k) + \bar{x}_j^k(\underline{x}_i^k - \bar{x}_i^k)\} \le |\underline{x}_i^k|\,|\bar{x}_j^k - \underline{x}_j^k| + |\underline{x}_j^k|\,|\bar{x}_i^k - \underline{x}_i^k|, \tag{8}$$
$$\max\{\underline{x}_i^k\bar{x}_j^k - \omega_{ij}^k\} = \max\{\underline{x}_i^k(\bar{x}_j^k - \underline{x}_j^k),\ 0,\ \underline{x}_i^k(\bar{x}_j^k - \underline{x}_j^k) + \underline{x}_j^k(\underline{x}_i^k - \bar{x}_i^k),\ \bar{x}_j^k(\underline{x}_i^k - \bar{x}_i^k)\} \le |\bar{x}_i^k|\,|\bar{x}_j^k - \underline{x}_j^k| + |\underline{x}_j^k|\,|\bar{x}_i^k - \underline{x}_i^k|, \tag{9}$$
$$\max\{\underline{x}_j^k\bar{x}_i^k - \omega_{ij}^k\} = \max\{\underline{x}_j^k(\bar{x}_i^k - \underline{x}_i^k),\ \underline{x}_j^k(\bar{x}_i^k - \underline{x}_i^k) + \underline{x}_i^k(\underline{x}_j^k - \bar{x}_j^k),\ 0,\ \bar{x}_i^k(\underline{x}_j^k - \bar{x}_j^k)\} \le |\underline{x}_i^k|\,|\bar{x}_j^k - \underline{x}_j^k| + |\bar{x}_j^k|\,|\bar{x}_i^k - \underline{x}_i^k| \tag{10}$$
and
$$\max\{\bar{x}_i^k\bar{x}_j^k - \omega_{ij}^k\} = \max\{\bar{x}_j^k(\bar{x}_i^k - \underline{x}_i^k) + \underline{x}_i^k(\bar{x}_j^k - \underline{x}_j^k),\ \bar{x}_j^k(\bar{x}_i^k - \underline{x}_i^k),\ \bar{x}_i^k(\bar{x}_j^k - \underline{x}_j^k),\ 0\} \le |\bar{x}_i^k|\,|\bar{x}_j^k - \underline{x}_j^k| + |\bar{x}_j^k|\,|\bar{x}_i^k - \underline{x}_i^k|. \tag{11}$$
Now, let y̲_ij^k = min ω_ij^k and ȳ_ij^k = max ω_ij^k for any i, j ∈ {1, 2, …, n}; it follows that
$$\underline{y}_{ij}^k \le y_{ij}^k \le \bar{y}_{ij}^k \quad\text{and}\quad \underline{y}_{ij}^k \le x_i^k x_j^k \le \bar{y}_{ij}^k \quad\text{for } Y^k = (y_{ij}^k)_{n\times n}. \tag{12}$$
Upon Formulas (8)–(12), it can be deduced that
$$|y_{ij}^k - x_i^k x_j^k| \le \bar{y}_{ij}^k - \underline{y}_{ij}^k = \max\{\omega_{ij}^k - \min\omega_{ij}^k\} = \max\{\max\{\underline{x}_i^k\underline{x}_j^k - \omega_{ij}^k\},\ \max\{\underline{x}_i^k\bar{x}_j^k - \omega_{ij}^k\},\ \max\{\underline{x}_j^k\bar{x}_i^k - \omega_{ij}^k\},\ \max\{\bar{x}_i^k\bar{x}_j^k - \omega_{ij}^k\}\} \le \max\{|\underline{x}_i^k|, |\bar{x}_i^k|\}\,\delta(H^k) + \max\{|\underline{x}_j^k|, |\bar{x}_j^k|\}\,\delta(H^k) = \max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\}\,\delta(H^k). \tag{13}$$
By (13), it follows that
$$\|Y^k - x^k(x^k)^T\| = \|(y_{ij}^k - x_i^k x_j^k)_{n\times n}\| \le \|(\max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\})_{n\times n}\|\,\delta(H^k). \tag{14}$$
Thus, ‖Y^k − x^k(x^k)^T‖ → 0 as δ(H^k) → 0. □
Theorem 2.
(a) If the algorithm terminates within finitely many iterations, an approximately optimal solution of QCQP is found. (b) If the algorithm generates an infinite sequence of iterations, then any accumulation point of the sequence {x̂^k}_{k∈ℕ} is a globally optimal solution of QCQP.
Proof. 
(a) If the algorithm is finite, assume that it stops at the k-th iteration. From the termination rule in line 16, we know that
$$U_k - L_k = f_0(\hat{x}^k) - L_k \le \epsilon. \tag{15}$$
Assuming that x* is a global optimal solution, we know that
$$U_k = f_0(\hat{x}^k) \ge f_0(x^*) \ge L_k. \tag{16}$$
Hence, it follows from inequalities (15) and (16) that
$$f_0(x^*) + \epsilon \ge L_k + \epsilon \ge f_0(\hat{x}^k), \tag{17}$$
and part (a) is proven.
(b) If the algorithm is infinite, an infinite sequence {x̂^k}_{k∈ℕ} is generated for the QCQP problem by solving the linear relaxation problems LRP_{H^k}, and the corresponding sequence of optimal solutions of LRP_{H^k} is {(Ŷ^k, x̂^k)}_{k∈ℕ}. Without loss of generality, assume that the rectangle sequence {H^k = [x̲^k, x̄^k]}_{k∈ℕ} satisfies x̂^k ∈ H^k and H^{k+1} ⊆ H^k. In our algorithm, the rectangles are successively bisected into two parts of equal width, so ∩_{k=1}^∞ H^k shrinks to a single point; thus lim_{k→∞} H^k = lim_{k→∞} x̂^k. Let lim_{k→∞} H^k = lim_{k→∞} x̂^k = x*; it follows from Lemma 7 that lim_{k→∞} ‖Ŷ^k − x̂^k(x̂^k)^T‖ = 0, which means
$$Y^* := \lim_{k\to\infty} \hat{Y}^k = \lim_{k\to\infty} \hat{x}^k(\hat{x}^k)^T = x^*(x^*)^T. \tag{18}$$
Based on the process of determining upper and lower bounds of the algorithm, we have
$$L_k = g_0(\hat{Y}^k, \hat{x}^k) \le f_0(x^*) \le f_0(\hat{x}^k) = U_k,\quad k = 1, 2, \ldots. \tag{19}$$
Since the sequence {L_k = g_0(Ŷ^k, x̂^k)} is nondecreasing and bounded and {U_k = f_0(x̂^k)} is nonincreasing and bounded, both are convergent sequences. Taking the limit on both sides of (19), we have
$$\lim_{k\to\infty} L_k = \lim_{k\to\infty} g_0(\hat{Y}^k, \hat{x}^k) \le f_0(x^*) \le \lim_{k\to\infty} f_0(\hat{x}^k) = \lim_{k\to\infty} U_k. \tag{20}$$
By the continuity of the function f_0(x), it follows from (18)–(20) that
$$\lim_{k\to\infty} L_k = g_0(Y^*, x^*) = f_0(x^*) = \lim_{k\to\infty} f_0(\hat{x}^k) = \lim_{k\to\infty} U_k.$$
Hence, any accumulation point x* of the sequence {x̂^k} is a global optimal solution of the QCQP problem. □
In Theorem 2, conclusion (a) merely proves that, upon termination after finitely many iterations, the proposed algorithm returns an approximately global optimal solution of the problem QCQP related to ϵ > 0. It does not specify which of the two classes of solutions given in Definitions 1 and 2 the returned solution belongs to. In fact, for a given precision ϵ > 0, the algorithm RBA eventually terminates within finitely many iterations and returns either an ϵ-globally optimal solution or a forced ϵ-globally optimal solution of the problem QCQP. This is specified in Theorem 3 below.
Theorem 3.
For a given ϵ > 0, if the subproblem {H^k, L_k, (Y^k, x^k)} satisfies
$$\delta(H^k) \le \frac{\epsilon}{\max\{\max\{\|Q_s\| : s = 0, 1, \ldots, N\}\,\|(\max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\})_{n\times n}\|,\ 1\}}, \tag{21}$$
then the algorithm RBA terminates at line 16; at the same time, if x̂^k ∈ F, then x̂^k is an ϵ-globally optimal solution of QCQP; otherwise, x̂^k is a forced ϵ-globally optimal solution of QCQP.
Proof. 
When RBA reaches line 16, the subproblem {H^k, L_k, (Y^k, x^k)} satisfies
$$\begin{aligned} 0 \le f_0(x^k) - g_0(Y^k, x^k) &= Q_0 \cdot (x^k(x^k)^T - Y^k) \le \|Q_0\|\,\|x^k(x^k)^T - Y^k\|\\ &\le \|Q_0\|\,\|(\max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\})_{n\times n}\|\,\delta(H^k)\\ &\le \max\{\|Q_0\|\,\|(\max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\})_{n\times n}\|,\ 1\}\,\delta(H^k)\\ &\le \max\{\max\{\|Q_s\| : s = 0, 1, \ldots, N\}\,\|(\max\{|\underline{x}_i^k|, |\bar{x}_i^k|, |\underline{x}_j^k|, |\bar{x}_j^k|\})_{n\times n}\|,\ 1\}\,\delta(H^k). \end{aligned}\tag{22}$$
Note that the third-to-last inequality of Formula (22) follows from Formula (14) above. By combining Formulas (21) and (22), we have
$$0 \le f_0(x^k) - g_0(Y^k, x^k) \le \epsilon. \tag{23}$$
Moreover, the subproblem { H k , L k , ( Y k , x k ) } also satisfies
$$g_0(Y^k, x^k) = L_k \le U_k = f_0(\hat{x}^k) \le f_0(x^k). \tag{24}$$
Further, it follows from Formulas (23) and (24) that 0 ≤ U_k − L_k ≤ f_0(x^k) − g_0(Y^k, x^k) ≤ ϵ, which means that the algorithm terminates at line 16. Since L_k is the smallest lower bound at the current iteration, min_{x∈F} f_0(x) + ϵ ≥ L_k + ϵ ≥ U_k = f_0(x̂^k), which shows that x̂^k is an ϵ-globally optimal solution of QCQP when x̂^k ∈ F. Besides, if x̂^k ∉ F, then x̂^k must come from line 42 of the algorithm RBA, so ‖Ŷ^k − x̂^k(x̂^k)^T‖ ≤ ϵ / max{max{‖Q_s‖ : s = 0, 1, …, N}, 1}. Then, it follows from (2) that
$$|f_s(\hat{x}^k) - g_s(\hat{Y}^k, \hat{x}^k)| = |Q_s \cdot (\hat{x}^k(\hat{x}^k)^T - \hat{Y}^k)| \le \frac{\epsilon\,\|Q_s\|}{\max\{\|Q_s\|,\ 1\}} \le \epsilon \quad\text{for each } s = 1, \ldots, N,$$
which means that x̂^k is a forced ϵ-globally optimal solution of QCQP when x̂^k ∉ F. □
Theorem 4.
Given an error tolerance ϵ > 0, the algorithm RBA returns a forced ϵ-globally optimal solution for QCQP in at most
$$\prod_{j=1}^n \left\lceil \frac{\max\{\max\{\|Q_s\| : s = 0, 1, \ldots, N\}\,\|(\max\{|\underline{x}_i^0|, |\bar{x}_i^0|, |\underline{x}_j^0|, |\bar{x}_j^0|\})_{n\times n}\|,\ 1\}}{\epsilon}\,(\bar{x}_j^0 - \underline{x}_j^0) \right\rceil$$
iterations, where ⌈a⌉ represents the smallest integer not less than a.
Proof. 
Based on Theorem 3, the proof is similar to that of Theorem 3 in [27] and is thus omitted. □
Remark 9.
If δ(H^k) ≤ ϵ / max{‖Q_0‖‖(max{|x̲_i^k|, |x̄_i^k|, |x̲_j^k|, |x̄_j^k|})_{n×n}‖, 1} and x̂^k ∈ F, the penultimate inequality of Formula (22) is already satisfied, so it is not difficult to verify that x̂^k is already an ϵ-globally optimal solution to QCQP; at the same time, the algorithm RBA returns x̂^k in at most
$$\prod_{j=1}^n \left\lceil \frac{\max\{\|Q_0\|\,\|(\max\{|\underline{x}_i^0|, |\bar{x}_i^0|, |\underline{x}_j^0|, |\bar{x}_j^0|\})_{n\times n}\|,\ 1\}}{\epsilon}\,(\bar{x}_j^0 - \underline{x}_j^0) \right\rceil$$
iterations.

5. Numerical Experiment and Analysis

In this section, we adopt several test examples to verify the feasibility of the proposed algorithm. Note that all examples are non-convex problems. The code was compiled and executed in MATLAB 9.0.0.341360 (R2016a). All experiments were performed on a desktop computer with an Intel(R) Core(TM) i7-6700 @3.40 GHz processor, 16.00 GB of memory, and the Microsoft Windows 7 operating system. In the numerical experiments, all linear programs are solved by using the linprog solver in MATLAB; besides, CVX 2.2 [35] is adopted to model and solve all convex optimization problems in MATLAB.
To demonstrate the practical application of the proposed algorithm, we first consider the following balanced transportation problem presented in [36]:
$$\min\ \frac{\sum_{i=1}^m \sum_{j=1}^n c_{ij} x_{ij}}{\sum_{i=1}^m \sum_{j=1}^n d_{ij} x_{ij}},\quad \text{s.t.}\ \sum_{j=1}^n x_{ij} = a_i,\ \sum_{i=1}^m x_{ij} = b_j,\ x_{ij} \ge 0,$$
where Σ_{i=1}^m a_i = Σ_{j=1}^n b_j; a_i denotes the supply of the i-th source, b_j the demand of the j-th destination, c_ij the unit cost from the i-th source to the j-th destination, d_ij the unit preference from the i-th source to the j-th destination, and the variable x_ij the amount that the i-th source supplies to the j-th destination, for i = 1, 2, …, m and j = 1, 2, …, n. By adopting the data in [36], the mathematical model of the balanced transportation problem can be formulated as
$$\min\ \frac{9x_{11} + 12x_{12} + 7x_{13} + 6x_{14} + 11x_{21} + 9x_{22} + 17x_{23} + 6x_{24} + 5x_{31} + 4x_{32} + 3x_{33} + 9x_{34}}{8x_{11} + 10x_{12} + 12x_{13} + 9x_{14} + 6x_{21} + 4x_{22} + 8x_{23} + 11x_{24} + 9x_{31} + 13x_{32} + 11x_{33} + 7x_{34}},$$
$$\text{s.t.}\ \begin{aligned} & x_{11} + x_{12} + x_{13} + x_{14} = 12,\quad x_{21} + x_{22} + x_{23} + x_{24} = 19,\quad x_{31} + x_{32} + x_{33} + x_{34} = 17,\\ & x_{11} + x_{21} + x_{31} = 3,\quad x_{12} + x_{22} + x_{32} = 22,\quad x_{13} + x_{23} + x_{33} = 18,\quad x_{14} + x_{24} + x_{34} = 5,\\ & x_{ij} \ge 0,\quad i = 1, 2, 3,\ j = 1, 2, 3, 4. \end{aligned}$$
Clearly, it follows from the model above that Σ_{i=1}^3 a_i = 48 = Σ_{j=1}^4 b_j.
Further, let
$$\begin{aligned} x &= (x_{11}, x_{12}, x_{13}, x_{14}, x_{21}, x_{22}, x_{23}, x_{24}, x_{31}, x_{32}, x_{33}, x_{34})^T,\\ C &= (9, 12, 7, 6, 11, 9, 17, 6, 5, 4, 3, 9)^T,\qquad D = (8, 10, 12, 9, 6, 4, 8, 11, 9, 13, 11, 7)^T,\\ b &= (12, 19, 17, 3, 22, 18, 5)^T,\qquad A = (A_1, A_2, A_3, A_4, A_5, A_6, A_7)^T,\\ A_1 &= (1,1,1,1,0,0,0,0,0,0,0,0)^T,\quad A_2 = (0,0,0,0,1,1,1,1,0,0,0,0)^T,\quad A_3 = (0,0,0,0,0,0,0,0,1,1,1,1)^T,\\ A_4 &= (1,0,0,0,1,0,0,0,1,0,0,0)^T,\quad A_5 = (0,1,0,0,0,1,0,0,0,1,0,0)^T,\\ A_6 &= (0,0,1,0,0,0,1,0,0,0,1,0)^T,\quad A_7 = (0,0,0,1,0,0,0,1,0,0,0,1)^T. \end{aligned}$$
Then, the above model can be rewritten in the following form:
$$\min\ \frac{C^T x}{D^T x},\quad \text{s.t.}\ Ax = b,\ x_i \ge 0,\ i = 1, 2, \ldots, 12,$$
which is equivalent to the following quadratic programming problem with a quadratic constraint [37]:
$$\min\ t,\quad \text{s.t.}\ x \in X,\ C^T x - t\,D^T x \le 0,\ \underline{t} \le t \le \bar{t},$$
where X = {x ∈ ℝ^{12} | Ax = b, x_i ≥ 0, i = 1, 2, …, 12}, t̲ = min_{x∈X} C^T x / max_{x∈X} D^T x, and t̄ = max_{x∈X} C^T x / min_{x∈X} D^T x. After solving this balanced transportation problem by the proposed algorithm with ϵ = 5 × 10^−4, we obtain the global optimal solution x* = (0.0, 0.0, 12.0, 0.0, 3.0, 11.0, 5.0, 0.0, 11.0, 6.0, 0.0)^T and the optimal value 0.6553 in 1822.4674 s after 12,549 iterations.
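As an illustration of how t̲ and t̄ can be obtained, the following sketch (our own code, not the authors', using the data above) solves the four required linear programs with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Data from the balanced transportation model above.
C = np.array([9, 12, 7, 6, 11, 9, 17, 6, 5, 4, 3, 9], dtype=float)
D = np.array([8, 10, 12, 9, 6, 4, 8, 11, 9, 13, 11, 7], dtype=float)
b = np.array([12, 19, 17, 3, 22, 18, 5], dtype=float)
A = np.zeros((7, 12))
for i in range(3):            # supply rows: x_i1 + ... + x_i4 = a_i
    A[i, 4 * i:4 * i + 4] = 1.0
for j in range(4):            # demand rows: x_1j + x_2j + x_3j = b_j
    A[3 + j, [j, 4 + j, 8 + j]] = 1.0

def opt(c, sense=1):
    """min (sense=1) or max (sense=-1) of c^T x over X = {Ax = b, x >= 0}."""
    res = linprog(sense * c, A_eq=A, b_eq=b, bounds=[(0, None)] * 12, method="highs")
    return sense * res.fun

t_lo = opt(C, 1) / opt(D, -1)   # min C^T x / max D^T x
t_up = opt(C, -1) / opt(D, 1)   # max C^T x / min D^T x
```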
To further test the proposed algorithm, the following 10 test examples are solved by our algorithm and the algorithms in [38,39], and the numerical results are listed in Table 1. The numerical results recorded in Table 2 are obtained by solving a series of random instances generated from Example 11 with four methods (our algorithm, the algorithms in [38,39], and the commercial software package BARON [34]).
In Tables 1 and 2, the symbols in the headers have the following meanings: Iter: number of iterations or average number of iterations; ϵ: tolerance; CPU: CPU running time or average CPU running time (in seconds); m: number of quadratic constraints; n: number of variables; r: number of negative eigenvalues of the matrix Q_0; "−": the algorithm cannot solve the problem within 3600 s in all cases. Moreover, the tolerance ϵ is set to 5 × 10^−4 in Examples 1–10 and to 5 × 10^−3 in Example 11.
Example 1.
[38,39]
$$\begin{aligned}\min\ & -x_1^2 + x_1 x_2 + x_2^2 + x_1 - 2x_2,\\ \text{s.t.}\ & x_1 + x_2 \le 6,\quad 2x_1^2 + x_2^2 + 2x_1 + x_2 \ge 4,\quad 1 \le x_1, x_2 \le 6. \end{aligned}$$
Example 2.
[38,39]
$$\begin{aligned}\min\ & x_1,\\ \text{s.t.}\ & -\tfrac{1}{16}x_1^2 - \tfrac{1}{16}x_2^2 + \tfrac{1}{4}x_1 + \tfrac{1}{2}x_2 \le 1,\quad \tfrac{1}{14}x_1^2 + \tfrac{1}{14}x_2^2 - \tfrac{3}{7}x_1 - \tfrac{3}{7}x_2 \le -1,\quad 1 \le x_1, x_2 \le 5.5. \end{aligned}$$
Example 3.
[38,39]
$$\begin{aligned}\min\ & x_1 x_2 - 2x_1 + x_2 + 1,\\ \text{s.t.}\ & 8x_1^2 - 6x_1 - 16x_2 \ge -11,\quad -x_2^2 + 3x_1 + 2x_2 \le 7,\quad 1 \le x_1 \le 2.5,\quad 1 \le x_2 \le 2.225. \end{aligned}$$
Example 4.
[38,39]
$$\min\ x_1^2 + x_2^2,\quad \text{s.t.}\ 0.3 x_1 x_2 \ge 1,\quad 2 \le x_1 \le 5,\quad 1 \le x_2 \le 3.$$
Example 5.
[38,39]
$$\min\ x_1,\quad \text{s.t.}\ 4x_2 - 4x_1^2 \le 1,\quad -x_1 - x_2 \le -1,\quad 0.01 \le x_1, x_2 \le 15.$$
Example 6.
[38,39]
$$\min\ 6x_1^2 + 4x_2^2 + 5x_1 x_2,\quad \text{s.t.}\ -6x_1 x_2 \le -48,\quad 0 \le x_1, x_2 \le 10.$$
Example 7.
[38,39]
$$\min\ x_1 + x_1 x_2 - 0.5 x_2,\quad \text{s.t.}\ -6x_1^2 + 8x_2 \le 3,\quad 3x_1 - x_2 \le 3,\quad 1 \le x_1, x_2 \le 1.5.$$
Example 8.
[38,39]
$$\begin{aligned}\min\ & -x_1^2 + x_1 + x_2^2,\\ \text{s.t.}\ & x_1^2 + x_2^2 \le 4,\quad (x_1 + x_2)^2 + x_2^2 - 2x_1 \le 0,\quad 0 \le x_1, x_2 \le 2. \end{aligned}$$
Example 9.
$$\begin{aligned}\min\ & -x_1^2 + x_1 + x_2^2,\\ \text{s.t.}\ & x_1^2 + x_2^2 \le 4,\quad x_1^2 + \tfrac{1}{4}x_2^2 - 4x_1 \le 0,\quad x_1 + x_2 \le 2,\quad 0 \le x_1, x_2 \le 5. \end{aligned}$$
Example 10.
$$\begin{aligned}\min\ & -4x_2 + (x_1 - 1)^2 + x_2^2 - 10x_3^2,\\ \text{s.t.}\ & x_1^2 + x_2^2 + x_3^2 \le 2,\quad (x_1 - 1)^2 + x_2^2 + x_3^2 \le 2,\quad -2 \le 2x_1 \le 2,\quad 0 \le x_2, x_3 \le 2. \end{aligned}$$
From Table 1, it can be observed that all the optimal values obtained by our algorithm on these 10 examples are at least consistent with those of the algorithms in [38,39], while the optimal value obtained for Example 6 is slightly better than that of the other two algorithms. Furthermore, it is easy to see that our algorithm consumes the most computing resources, mainly because its linear relaxation subproblem is the loosest among these algorithms. Nevertheless, these results are sufficient to demonstrate that the proposed algorithm is effective and feasible for solving QCQP.
For convenience, we denote the algorithm in [38] by JCBB and the algorithm in [39] by JLBB.
Example 11.
$$\min\ f_0(x) = x^T Q_0 x,\quad \text{s.t.}\ f_s(x) = x^T Q_s x + c_s^T x \le d_s,\quad s = 1, 2, \ldots, m,$$
where Q_0, Q_s, c_s, and d_s are generated as follows:
  • For each s = 0, 1, …, m, a matrix W^s is generated whose elements are drawn randomly from the interval [−1, 1].
  • Set R^s = 0.5(W^s + (W^s)^T) for each s = 0, 1, …, m.
  • For each s = 0, 1, …, m, the eigenvalue decomposition R^s = (P^s)^T D^s P^s is computed, where D^s is a diagonal matrix.
  • The first r (r ≤ n) diagonal elements of the matrix D^0 are replaced by r numbers drawn randomly from the interval [−10, 0], and the last n − r diagonal elements of D^0 are replaced by numbers drawn randomly from [0, 10].
  • For each s = 1, …, m, all diagonal elements of the matrix D^s are replaced by n numbers drawn randomly from [1, 100].
  • For each s = 0, 1, …, m, let Q_s = (P^s)^T D^s P^s.
  • For each s = 1, …, m, all elements of the n-dimensional vector c_s are drawn randomly from [−100, 100], and the real number d_s is drawn randomly from [1, 50].
The above construction method of Example 11 shows that the feasible region of this problem is described by m convex quadratic constraints. Therefore, the feasible region of Example 11 is convex, so 2n convex optimization problems with linear objective functions need to be solved to construct the initial rectangle H^0. For each parameter triple (n, m, r), 10 random instances are generated by Example 11 and solved by the algorithms RBA, JCBB, JLBB, and BARON, and their average results are recorded in Table 2.
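A sketch of this instance generator (our Python rendering of the recipe above; the paper's experiments were run in MATLAB) is as follows:

```python
import numpy as np

def random_qcqp(n, m, r, rng=np.random.default_rng(0)):
    """Generate one random instance of Example 11 following the recipe above."""
    Q, c, d = [], [], []
    for s in range(m + 1):
        W = rng.uniform(-1.0, 1.0, size=(n, n))
        R = 0.5 * (W + W.T)                       # symmetrize W^s
        _, P = np.linalg.eigh(R)                  # orthonormal eigenvectors of R^s
        if s == 0:                                # indefinite objective: r negative eigenvalues
            diag = np.concatenate([rng.uniform(-10, 0, r), rng.uniform(0, 10, n - r)])
        else:                                     # positive definite constraint matrix
            diag = rng.uniform(1, 100, n)
        Q.append(P @ np.diag(diag) @ P.T)
        if s >= 1:
            c.append(rng.uniform(-100, 100, n))
            d.append(rng.uniform(1, 50))
    return Q, c, d
```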
From Table 2, it can be observed that our algorithm can be applied to QCQP problems with different forms of objective functions. The CPU running time and the number of iterations of the algorithm are positively correlated with the number n of decision variables. For fixed parameters m and n, the larger the number r of negative eigenvalues of the matrix Q_0, the more iterations and CPU time are required. Unfortunately, the computational performance of the algorithm RBA is indeed inferior to that of the algorithms JCBB and JLBB, and for (m, n, r) = (15, 10, 5), (20, 20, 6) our algorithm cannot solve the problem within 3600 s. However, our algorithm shows better numerical results than BARON. In particular, BARON can no longer solve the problem within 3600 s at (m, n, r) = (7, 5, 3), while our algorithm can solve four more groups of problems. Comparing the relaxation techniques of the algorithms JCBB and JLBB, it is not difficult to find that these two algorithms adopt tighter relaxation strategies than ours, which is the main reason they outperform our algorithm. In fact, our linear relaxation subproblem is obtained simply by relaxing the bounds of the introduced variables, which shows that our algorithm is promising; incorporating tighter relaxation techniques (e.g., SDP or RLT) into our relaxation method may lead to better algorithms.
In summary, the above numerical experiments are sufficient to demonstrate the effectiveness and feasibility of the proposed algorithm: although it is not as good as the algorithms in [38,39], it is better than the commercial software package BARON.

6. Conclusions and Discussion

This paper develops a global optimization algorithm that can solve quadratically constrained quadratic programs. A new linear relaxation technique is proposed by simply extending the feasible region of the equivalent problem. To speed up the convergence of the algorithm, a new rectangle-reduction technique based on the quadratic constraints is proposed on the basis of the existing rectangle-reduction technique based on linear constraints. It is proved that the proposed algorithm converges within finitely many iterations and returns an ϵ-globally optimal solution or a forced ϵ-globally optimal solution of the original problem. Numerical results demonstrate the effectiveness and feasibility of the algorithm. Other techniques that can be integrated into this algorithm are also being investigated. In the future, we will try to generalize and apply Definitions 1–3 in Section 2 to other optimization problems with quadratic functions, such as the sum of quadratic ratios.

Author Contributions

C.H. carried out the methodology, investigation, and writing of the draft. Y.G. supervised the research, contributed to the methodology, and edited and reviewed the final draft. F.T. and S.M. performed the experiments and reviewed the final draft. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant (11961001), the Construction Project of first-class subjects in Ningxia higher Education (NXYLXK2017B09), and the Major proprietary funded project of North Minzu University (ZDZX201901).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. He, S.; Luo, Z.Q.; Nie, J.; Zhang, S.Z. Semidefinite relaxation bounds for indefinite homogeneous quadratic optimization. SIAM J. Optim. 2008, 19, 503–523.
  2. Matskani, E.; Sidiropoulos, N.D.; Luo, Z.; Tassiulas, L. Convex approximation techniques for joint multiuser downlink beamforming and admission control. IEEE Trans. Wirel. Commun. 2008, 7, 2682–2693.
  3. Sabir, Z.; Wahab, H.A.; Guirao, J.L.G. A novel design of Gudermannian function as a neural network for the singular nonlinear delayed, prediction and pantograph differential models. Math. Biosci. Eng. 2022, 19, 663–687.
  4. Sabir, Z.; Raja, M.A.Z.; Alnahdi, A.S.; Jeelani, M.B.; Abdelkawy, M.A. Numerical investigations of the nonlinear smoke model using the Gudermannian neural networks. Math. Biosci. Eng. 2021, 19, 351–370.
  5. Saeed, T.; Sabir, Z.; Alhodaly, M.S.; Alsulami, H.H.; Sánchez, Y.G. An advanced heuristic approach for a nonlinear mathematical based medical smoking model. Results Phys. 2022, 32, 105137.
  6. Kolbert, F.; Wormald, L. Robust portfolio optimization using second-order cone programming. Optim. Optim. 2010, 1, 3–22.
  7. Gower, J.C. Euclidean distance geometry. Math. Sci. 1982, 7, 1–14.
  8. Klose, A.; Drexl, A. Facility location models for distribution system design. Eur. J. Oper. Res. 2005, 162, 4–29.
  9. Vandenbussche, D.; Nemhauser, G.L. A polyhedral study of nonconvex quadratic programs with box constraints. Math. Program. 2005, 102, 531–557.
  10. Fortin, C.; Wolkowicz, H. The trust region subproblem and semidefinite programming. Optim. Methods Softw. 2004, 19, 41–67.
  11. Goemans, M.X.; Williamson, D.P. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. J. ACM 1995, 42, 1115–1145.
  12. Jeyakumar, V.; Li, G.Y. Trust-region problems with linear inequality constraints: Exact SDP relaxation, global optimality and robust optimization. Math. Program. 2014, 147, 171–206.
  13. Bomze, I.M.; Locatelli, M.; Tardella, F. New and old bounds for standard quadratic optimization: Dominance, equivalence and incomparability. Math. Program. 2008, 115, 31–64.
  14. Vandenberghe, L.; Boyd, S. Semidefinite programming. SIAM Rev. 1996, 38, 49–95.
  15. Beck, A.; Eldar, Y.C. Strong duality in nonconvex quadratic optimization with two quadratic constraints. SIAM J. Optim. 2006, 17, 844–860.
  16. Lu, C.; Deng, Z.B.; Zhou, J.; Guo, X.L. A sensitive-eigenvector based global algorithm for quadratically constrained quadratic programming. J. Glob. Optim. 2019, 73, 371–388.
  17. Kim, S.; Kojima, M. Exact solutions of some nonconvex quadratic optimization problems via SDP and SOCP relaxations. Comput. Optim. Appl. 2003, 26, 143–154.
  18. Vandenbussche, D.; Nemhauser, G.L. A branch-and-cut algorithm for nonconvex quadratic programs with box constraints. Math. Program. 2005, 102, 559–575.
  19. Burer, S.; Vandenbussche, D. A finite branch-and-bound algorithm for nonconvex quadratic programming via semidefinite relaxations. Math. Program. 2008, 113, 259–282.
  20. Linderoth, J. A simplicial branch-and-bound algorithm for solving quadratically constrained quadratic programs. Math. Program. 2005, 103, 251–282.
  21. Gao, Y.; Wei, F. A new bound-and-reduce approach of nonconvex quadratic programming problems. Appl. Math. Comput. 2015, 250, 298–308.
  22. Reemtsen, R. Some outer approximation methods for semi-infinite optimization problems. J. Comput. Appl. Math. 1994, 53, 87–108.
  23. Elloumi, S.; Faye, A.; Soutif, E. Decomposition and linearization for 0–1 quadratic programming. Ann. Oper. Res. 2000, 99, 79–93.
  24. Benders, J.F. Partitioning procedures for solving mixed-variables programming problems. Numer. Math. 1962, 4, 238–252.
  25. Geoffrion, A.M. Generalized Benders decomposition. J. Optim. Theory Appl. 1972, 10, 237–260.
  26. Sherali, H.D.; Adams, W.P. A reformulation-linearization technique for solving discrete and continuous nonconvex problems. Comput. Math. Appl. 1999, 38, 288.
  27. Lu, C.; Deng, Z.B.; Jin, Q. An eigenvalue decomposition based branch-and-bound algorithm for nonconvex quadratic programming problems with convex quadratic constraints. J. Glob. Optim. 2017, 67, 475–493.
  28. Sherali, H.D.; Adams, W.P. A hierarchy of relaxations between the continuous and convex hull representations for zero-one programming problems. SIAM J. Discret. Math. 1990, 3, 411–430.
  29. Sherali, H.D. Reformulation-linearization methods for global optimization. In Encyclopedia of Optimization; Floudas, C.A., Pardalos, P.M., Eds.; Springer: Boston, MA, USA, 2001; pp. 2182–2186.
  30. Audet, C.; Hansen, P.; Jaumard, B.; Savard, G. A branch and cut algorithm for nonconvex quadratically constrained quadratic programming. Math. Program. 2000, 87, 131–152.
  31. Al-Khayyal, F.A.; Larsen, C.; Van Voorhis, T. A relaxation method for nonconvex quadratically constrained quadratic programs. J. Glob. Optim. 1995, 6, 215–230.
  32. Raber, U. A simplicial branch-and-bound method for solving nonconvex all-quadratic programs. J. Glob. Optim. 1998, 13, 417–432.
  33. Zheng, X.; Pan, Y.; Cui, X. Quadratic convex reformulation for nonconvex binary quadratically constrained quadratic programming via surrogate constraint. J. Glob. Optim. 2018, 70, 719–735.
  34. Sahinidis, N. BARON User Manual v.21.1.13. 2021. Available online: http://minlp.com (accessed on 6 January 2022).
  35. Grant, M.; Boyd, S. CVX: MATLAB Software for Disciplined Convex Programming, Version 2.2. Available online: http://cvxr.com/cvx/download (accessed on 15 December 2021).
  36. Sivri, M.; Emiroglu, I.; Guler, C.; Tasci, F. A solution proposal to the transportation problem with the linear fractional objective function. In Proceedings of the Fourth International Conference on Modelling, Simulation and Applied Optimization, Kuala Lumpur, Malaysia, 19–21 April 2011.
  37. Jiao, H.W.; Liu, S.Y. A new linearization technique for minimax linear fractional programming. Int. J. Comput. Math. 2014, 91, 1730–1743.
  38. Jiao, H.W.; Chen, Y.Q. A global optimization algorithm for generalized quadratic programming. J. Appl. Math. 2013, 2013, 1–9.
  39. Jiao, H.W.; Liu, S.Y.; Lu, N. A parametric linear relaxation algorithm for globally solving nonconvex quadratic programming. Appl. Math. Comput. 2015, 250, 973–985.
Table 1. Numerical results in Examples 1–10.

Ex. | Ref. | Solution | Optimum | Iter | CPU
1 | ours | (5.0000, 1.0000) | −16 | 24 | 0.7985
1 | [38] | (5.0000, 1.0000) | −16 | 4 | 0.0312
1 | [39] | (5.0000, 1.0000) | −16 | 2 | 0.0234
2 | ours | (1.1772, 2.1769) | 1.1771 | 159 | 7.5143
2 | [38] | (1.1771, 2.1771) | 1.1771 | 28 | 0.2141
2 | [39] | (1.1771, 2.1771) | 1.1771 | 17 | 0.1459
3 | ours | (2.0000, 1.0000) | −1 | 87 | 2.7883
3 | [38] | (2.0000, 1.0000) | −1 | 19 | 0.1523
3 | [39] | (2.0000, 1.0000) | −1 | 1 | 0.0295
4 | ours | (2.0000, 1.6667) | 6.7778 | 222 | 7.8239
4 | [38] | (2.0000, 1.6667) | 6.7778 | 8 | 0.1621
4 | [39] | (2.0000, 1.6667) | 6.7778 | 26 | 0.4421
5 | ours | (0.5003, 0.4996) | 0.5 | 154 | 5.1439
5 | [38] | (0.5000, 0.5000) | 0.5 | 34 | 0.6258
5 | [39] | (0.5000, 0.5000) | 0.5 | 22 | 0.4287
6 | ours | (2.5540, 3.1323) | 118.3835 | 992 | 56.5885
6 | [38] | (2.5557, 3.1303) | 118.3837 | 52 | 0.7591
6 | [39] | (2.5554, 3.1307) | 118.3837 | 43 | 0.6657
7 | ours | (1.5, 1.4998) | −1.1629 | 1941 | 78.1576
7 | [38] | (1.5, 1.5) | −1.1629 | 18 | 0.3905
7 | [39] | (1.5, 1.5) | −1.1629 | 13 | 0.3524
8 | ours | (2, 0) | −2 | 1 | 0.1018
8 | [38] | (2, 0) | −2 | 1 | 0.1118
8 | [39] | (2, 0) | −2 | 1 | 0.1073
9 | ours | (2, 0) | −2 | 21 | 0.7572
9 | [38] | (2, 0) | −2 | 17 | 0.3026
9 | [39] | (2, 0) | −2 | 10 | 0.1906
10 | ours | (1.0000, 0.1817, 0.9829) | −11.3636 | 17,319 | 770.00
10 | [38] | (1.0000, 0.1817, 0.9829) | −11.3636 | 1882 | 83.5967
10 | [39] | (1.0000, 0.1817, 0.9829) | −11.3636 | 1063 | 71.3283
Table 2. Numerical results in Example 11.

(m, n, r) | Iter (RBA) | Iter (JCBB) | Iter (JLBB) | Iter (BARON) | CPU (RBA) | CPU (JCBB) | CPU (JLBB) | CPU (BARON)
(5,3,1) | 2659.5 | 839.5 | 445.4 | 22,551.3 | 73.9537 | 8.7589 | 7.0782 | 893.1392
(5,3,2) | 3676.4 | 731.2 | 378.7 | 23,815.4 | 109.0161 | 7.8601 | 6.9026 | 952.2138
(5,3,3) | 7546.9 | 1163.7 | 581.2 | 23,322.3 | 227.1233 | 11.9196 | 9.9312 | 887.3607
(5,5,1) | 2968.2 | 11,938.3 | 5945.1 | 182,711.7 | 1303.9179 | 149.3092 | 111.3539 | 1776.7202
(5,5,3) | 30,895.5 | 11,209.5 | 6148.6 | 151,395.5 | 1335.6709 | 133.7491 | 113.0717 | 1837.5112
(5,5,5) | 47,219.6 | 18,681.0 | 8296.7 | 203,510.9 | 2247.0565 | 237.5699 | 157.5038 | 3228.4427
(7,5,1) | 14,117.3 | 11,017.9 | 4859.4 | 205,995.1 | 687.3498 | 228.5107 | 148.5903 | 3348.2776
(7,5,3) | 12,538.2 | 12,736.9 | 6232.3 | − | 579.1162 | 264.7193 | 198.5944 | −
(10,3,3) | 12,657.3 | 2548.1 | 1296.4 | − | 440.3908 | 27.5611 | 20.9512 | −
(10,10,3) | 35,602.1 | 36,255.8 | 16,117.3 | − | 2302.4471 | 1456.7120 | 972.1402 | −
(10,10,5) | 49,875.4 | 27,257.3 | 14,686.2 | − | 3343.4321 | 1271.3252 | 985.0749 | −
(15,10,5) | − | 69,293.8 | 36,052.9 | − | − | 3215.0414 | 2846.0178 | −
(20,20,6) | − | − | 36,693.8 | − | − | − | 3544.4622 | −
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
