1. Introduction
In this paper, we investigate the following quadratically constrained quadratic program (QCQP):
where is real symmetric, , , with , . Furthermore, we assume that the set D is non-empty and bounded and that the feasible region has at least one interior point (i.e., the Slater condition holds). Clearly, .
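For concreteness, a QCQP of the type described above can be written in the following generic form; this is an illustrative sketch using generic placeholder symbols and is not intended to reproduce the exact notation of the problem defined above.

```latex
% Illustrative generic form of a QCQP (symbols are generic placeholders)
\begin{align*}
\min_{x \in \mathbb{R}^{n}} \quad & x^{\top} Q_{0}\, x + c_{0}^{\top} x \\
\text{s.t.} \quad & x^{\top} Q_{i}\, x + c_{i}^{\top} x \le b_{i}, \qquad i = 1, \dots, m, \\
& x \in D,
\end{align*}
% where each Q_i is real symmetric (no convexity assumed), D is a non-empty
% bounded set, and the feasible region has an interior point (Slater condition).
```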
Problem QCQP has a wide range of applications in fields such as wireless communication, networking, radar, and signal processing [1,2]. In addition, some solution methods for this problem can also be applied to nonlinear singular models [3] and to nonlinear mathematical smoking models in medicine [4,5]. It is also often employed to build models of management problems, such as the portfolio selection problem [6,7], the minimum distance problem [8], the location problem [9], the 0–1 quadratic programming problem [10], the maximum cut problem [11], the trust-region subproblem [12,13], etc. If all the matrices , are positive semidefinite, QCQP is a convex quadratic program with convex quadratic constraints (CQPCQC), which can be recast as a second-order cone program (SOCP) and solved in polynomial time [14,15]. In other words, the problem CQPCQC is solvable in polynomial time. Other special QCQP problems studied in [11,16,17,18] can also be solved in polynomial time. Generally speaking, however, QCQP is NP-hard; its difficulty stems mainly from the non-convexity of the quadratic objective function and the possible disconnectedness of the feasible region. Consequently, most of these problems cannot be solved in polynomial time, and it is difficult to find their global optimal solutions.
Over the decades, many scholars have studied QCQP problems and proposed global optimization methods, which generally include branch-and-bound methods [19,20,21], the outer approximation method [22], Lagrangian decomposition [23], Benders decomposition [24,25], and their hybrid methods. The branch-and-bound (BB) algorithm is a typical enumeration method [19,20,26], which, in each enumeration, obtains a new lower bound on the optimal value of the original problem and tries to update the upper bound by solving a relaxation subproblem. During the enumeration, the tightness of the relaxation subproblem is one of the main factors affecting performance. Therefore, BB algorithms embedded with different types of relaxation problems have been investigated in much of the literature on QCQP. Among them, the semidefinite programming (SDP) relaxation method [27], whose relaxations can be solved in polynomial time by interior-point methods, has attracted the attention of many researchers. Another widely studied relaxation method is linear programming relaxation, such as the reformulation-linearization technique (RLT) [26], which was adopted in [28,29]. Based on the idea of RLT, a branch-and-cut algorithm for solving QCQP was proposed in [30]. To find a global approximate optimal solution for QCQP, the author of [31] developed a branch-and-bound algorithm by combining the outer approximation method with a linear relaxation approximation subproblem. In [20,32], two branch-and-bound algorithms with linear relaxations, based on simply subdividing the feasible region, were shown to solve QCQP globally. By adopting quadratic proxy functions, the authors of [33] convexify all quadratic inequality constraint functions and thus present a new algorithm based on the quadratic convex reformulation method. All the methods mentioned above can solve QCQP and its variants well.
In this paper, by introducing the auxiliary variable , the problem QCQP is converted into an equivalent problem with nonlinear equality constraints. After the equality constraints are relaxed, a linear relaxation subproblem with auxiliary variables and bound constraints is generated for QCQP, from which a lower bound on the global optimal value of QCQP is determined. Then, by combining two rectangle-reduction strategies and two -subproblem-deletion rules, a relaxation-and-bound algorithm for QCQP is proposed. Numerical experiments and analysis of the results show that the proposed algorithm is feasible and effective, and its computational performance is better than that of the software package BARON [34], so it is a promising algorithm.
The remainder of this paper is organized as follows. In Section 2, the construction of the linear relaxation subproblem with auxiliary variables and bound constraints is given. The rectangle-reduction strategies are presented in Section 3. In Section 4, the proposed algorithm is described and its finite convergence under a given precision is proved. Section 5 lists the numerical results and demonstrates the feasibility and effectiveness of the proposed algorithm. The last section concludes the paper.
2. Bounded Relaxation Technique
By introducing the variable , QCQP can be reformulated as
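(As an illustrative sketch, assuming the auxiliary variable is a symmetric matrix Y that stands for x xᵀ — consistent with the matrix inner product used below and with Lemma 7 later on — the lifted problem EP takes roughly the following form; the authors' exact formulation may differ in notation.)

```latex
% Sketch of the lifted problem EP with an auxiliary matrix variable Y (illustrative)
\begin{align*}
\min_{x,\,Y} \quad & Q_{0} \cdot Y + c_{0}^{\top} x \\
\text{s.t.} \quad & Q_{i} \cdot Y + c_{i}^{\top} x \le b_{i}, \qquad i = 1, \dots, m, \\
& Y = x x^{\top}, \quad x \in D,
\end{align*}
```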
where “·” denotes the inner product of two matrices. Further, for each , let ; then we can construct an initial rectangle that satisfies . Of course, and can be obtained by solving linear programs. Also, if D is a general bounded convex set, and must be obtained by solving convex programs.
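As a concrete illustration of this step, the sketch below computes the bounds of the initial rectangle by minimizing and maximizing each coordinate over a polyhedral set D = {x : Ax ≤ b} with scipy.optimize.linprog; the names A and b are placeholder data for whatever describes D, and if D were a general bounded convex set each of these programs would instead be a convex program (e.g., modeled with CVX, as in the experiments reported later).

```python
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(A, b):
    """Bounding box [l, u] of D = {x : A x <= b}, found by solving 2n LPs.

    A and b are placeholder data for a bounded polyhedral D; for a general
    bounded convex D, each program below would become a convex program.
    """
    n = A.shape[1]
    l, u = np.empty(n), np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        # l_i = min x_i over D;  u_i = max x_i over D (= -min -x_i)
        res_min = linprog(e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        res_max = linprog(-e, A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        l[i], u[i] = res_min.fun, -res_max.fun
    return l, u
```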
Next, we mainly study the problem EP, whose non-convexity is reflected in the nonlinear equality constraint . Suppose denotes either or any subrectangle of ; then each element of , , satisfies , where , . Thus, we obtain the linear relaxation problem of EP over H:
which of course also relaxes the following subproblem of QCQP on H:
Obviously, any feasible solution of the subproblem of QCQP on H is feasible for QCQP.
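Assuming the relaxation simply replaces the bilinear relation between the auxiliary matrix and x by elementwise interval bounds over H (which matches the later description of the relaxation as obtained by "simply relaxing the bounds of the introduced variables"), one natural choice of such bounds is the corner products of the box, sketched below; the function name is ours, and the paper's exact bounds may be defined differently.

```python
import numpy as np

def product_bounds(l, u):
    """Elementwise bounds L <= x x^T <= U valid for every x with l <= x <= u.

    Over a box, the extreme values of the bilinear term x_i * x_j are found
    among the four corner products l_i*l_j, l_i*u_j, u_i*l_j, u_i*u_j.  For
    off-diagonal entries these bounds are tight; for diagonal entries they
    remain valid (l_i*u_i <= x_i^2) though not always tight when 0 lies in
    the interval [l_i, u_i].
    """
    corners = np.stack([np.outer(l, l), np.outer(l, u),
                        np.outer(u, l), np.outer(u, u)])
    return corners.min(axis=0), corners.max(axis=0)

# Example: on H = [-1, 2] x [0.5, 3], replacing Y = x x^T by L <= Y <= U
# yields a linear relaxation of EP over H.
L, U = product_bounds(np.array([-1.0, 0.5]), np.array([2.0, 3.0]))
```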
For convenience, we denote the feasible regions of QCQP and LRP as and , respectively. In addition, throughout this paper, denotes the convergence tolerance of the proposed algorithm, and denotes the Euclidean norm of “·”.
The following five lemmas illustrate the relationship between feasible solutions and optimal solutions of LRP and QCQP.
Lemma 1. If is a feasible solution of LRP with , then is feasible for QCQP.
Proof. Since is a feasible solution of LRP with , it follows that for . This means . □
Lemma 2. If is an optimal solution of LRP with , then is a global optimal solution of QCQP.
Proof. Since is an optimal solution of LRP with , then we have for any . Let for any , then is obvious. Thus, for any , it follows that , which means is an optimal solution to QCQP. □
Definition 1. If there is a such that and then is called an ϵ-globally optimal solution for QCQP.
Definition 2. If there is a such that for and satisfies then is called a forced ϵ-globally optimal solution for QCQP.
Definition 3. If is an optimal solution of LRP with for N, then LRP is called a forced ϵ-approximation problem for QCQP.
In Definition 1, is a feasible solution of LRP. In practice, an exactly optimal solution is rarely attainable, so we only need to find an approximate optimal solution under the required tolerance . If a feasible point satisfies the inequality , it means that is close enough to the optimal value of the problem QCQP to be usable under the given tolerance. In Definition 2, the condition for every means that is very close to the feasible domain of QCQP. On this basis, if such a solution also satisfies the inequality , we call it a forced -globally optimal solution, which is a weaker notion than the -globally optimal solution. For some problems with more complicated feasible regions, feasible solutions may not always be available, so accepting an infeasible forced -globally optimal solution may be a better choice that saves computation. It can be seen from Definition 3 that when for , not only is close to the feasible region of QCQP, but the optimal value of the relaxation problem LRP is also close to the objective function value of the problem QCQP. In addition, it is not difficult to deduce that , where the feasible region of EP is that satisfies .
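Since the conditions in Definitions 1 and 2 reduce to a near-feasibility check plus a bound on the objective gap, the small helper below sketches how a candidate point could be classified in code; the names, the constraint format, and the particular violation measure are our own illustrative choices rather than the paper's exact definitions.

```python
import numpy as np

def classify(x, Qs, cs, bs, Q0, c0, f_ref, eps):
    """Illustrative classification of a candidate point x.

    Constraints are assumed in the form x^T Q_i x + c_i^T x <= b_i, and
    f_ref is a valid lower bound on the optimal value of QCQP (e.g., the
    optimal value of a relaxation).
    """
    viol = max(float(x @ Q @ x + c @ x - b) for Q, c, b in zip(Qs, cs, bs))
    gap_ok = float(x @ Q0 @ x + c0 @ x) - f_ref <= eps
    if viol <= 0.0 and gap_ok:
        return "eps-globally optimal"          # cf. Definition 1 (feasible case)
    if viol <= eps and gap_ok:
        return "forced eps-globally optimal"   # cf. Definition 2 (nearly feasible)
    return "undetermined"
```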
Lemma 3. Let be an optimal solution to LRP. If , then LRP is a forced ϵ-approximation problem for QCQP.
Proof. Since , for each we have
Therefore, LRP is a forced -approximation problem for QCQP. □
Lemma 4. Let be an optimal solution to LRP. If , then is a forced ϵ-globally optimal solution for QCQP.
Proof. Since , it follows from Formula (1) that
Let and ; then and are obvious. It follows from the optimality of on LRP that
Besides, we know from Formula (2) that
By combining Formulas (3) and (4), we have
Thus, Formulas (2) and (5) indicate that is a forced -globally optimal solution for QCQP. □
Lemma 3 gives a sufficient condition under which LRP is a forced -approximation problem of QCQP; under this condition, Lemma 4 states that the optimal solution of problem LRP can also provide a forced -globally optimal solution for QCQP.
Lemma 5. Let be an optimal solution to LRP. If and , then is an ϵ-globally optimal solution for QCQP.
Proof. Since and , it is easily deduced that
Let and ; then and . It follows from the optimality of on LRP that
Therefore, by combining Formulas (6) and (7), we once again obtain (5), which also means that is an -globally optimal solution for QCQP. □
Lemma 5 gives a sufficient condition under which the optimal solution of LRP provides an -globally optimal solution for QCQP.
Remark 1. In our proposed algorithm (see Algorithm 3), over a certain rectangle H, once an ϵ-globally optimal solution or a forced ϵ-globally optimal solution of the subproblem of QCQP over H is found, deleting H saves some computational cost, because both kinds of solutions are already very close to the global optimal solution of QCQP.
4. Algorithm and Its Convergence
By embedding the two rectangle-reduction algorithms given in the previous section into a branch-and-bound scheme, we develop a new global optimization algorithm for solving QCQP. In each iteration of the branch-and-bound process, one or two new linear relaxation subproblems are generated, whose optimal values are not lower than the current lower bound; therefore, the lower bound does not decrease in the current iteration. The upper bound is updated by solving linear relaxation subproblems, and it is not difficult to conclude that the upper bound will not increase in the current iteration. Based on this idea, we give the pseudo-code of the relaxation-and-bound algorithm (RBA) in Algorithm 3 below.
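To make the overall flow concrete before stating the pseudo-code, the following simplified skeleton sketches the relaxation-and-bound loop; it deliberately omits the rectangle-reduction steps and the ε-subproblem-deletion rules of the full Algorithm 3, and the helpers solve_lrp, bisect, is_feasible, and objective are placeholders supplied by the caller.

```python
def relaxation_and_bound(H0, eps, solve_lrp, bisect, is_feasible, objective):
    """Simplified relaxation-and-bound loop (illustrative sketch).

    solve_lrp(H)   -> (x, value) for the linear relaxation over rectangle H,
                      or None if that relaxation is infeasible.
    bisect(H)      -> two subrectangles obtained by standard bisection.
    is_feasible(x) -> True if x satisfies the original quadratic constraints.
    objective(x)   -> original objective value at x.
    """
    x0, lb0 = solve_lrp(H0)            # assume the initial relaxation is feasible
    UB, best_x = float("inf"), None
    if is_feasible(x0):
        UB, best_x = objective(x0), x0
    active = [(lb0, H0)]               # open nodes: (lower bound, rectangle)
    LB = lb0
    while active and UB - LB > eps:
        # node selection: take the rectangle with the smallest lower bound
        k = min(range(len(active)), key=lambda i: active[i][0])
        LB, H = active.pop(k)
        for Hk in bisect(H):
            res = solve_lrp(Hk)
            if res is None:
                continue               # empty node: discard
            xk, lbk = res
            if lbk > UB:
                continue               # bound: prune this node
            if is_feasible(xk) and objective(xk) < UB:
                UB, best_x = objective(xk), xk
            active.append((lbk, Hk))
    return best_x, UB, LB
```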
Lemma 6. In the above algorithm RBA, let be an optimal solution to problem LRP with ; then the following three conclusions hold:
- (a)
if , is a global optimal solution to problem QCQP;
- (b)
if and , is an ϵ-global optimal solution to problem QCQP;
- (c)
if , is a forced ϵ-globally optimal solution to problem QCQP.
Proof. (a) Since , then . It follows from Lemma 2 that is a global optimal solution to problem QCQP. Besides, conclusions (b) and (c) follow from Lemmas 4 and 5, respectively, so the details are not repeated here. □
A further explanation of the pseudo-code of the algorithm RBA is given in Remarks 2–8 below.
Remark 2. The above algorithm is based on the branch-and-bound framework, and all branching operations are performed according to the standard bisection method described in line 18.
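As an illustration, the usual reading of "standard bisection" is to cut the selected rectangle at the midpoint of its longest edge; the following is a minimal sketch under that assumption.

```python
import numpy as np

def bisect(l, u):
    """Split the rectangle [l, u] into two halves along its longest edge.

    Assumed reading of the standard bisection rule in line 18 of Algorithm 3:
    choose the coordinate with the largest width and cut at its midpoint.
    """
    j = int(np.argmax(u - l))
    mid = 0.5 * (l[j] + u[j])
    u_left, l_right = u.copy(), l.copy()
    u_left[j] = mid
    l_right[j] = mid
    return (l, u_left), (l_right, u)
```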
Remark 3. The termination criterion of the algorithm is , with upper bound and lower bound . When the condition in line 16 is violated, the algorithm stops iterating, i.e., the termination criterion is met.
Remark 4. If the condition in line 28 of the algorithm is violated, the node in the branch-and-bound tree corresponding to subproblem LRP is empty, so there is no need to consider LRP further.
Remark 5. If the condition in line 30 of the algorithm is violated, the optimal value of subproblem LRP is larger than the current upper bound , so the node corresponding to LRP is not considered further, i.e., a pruning operation is executed; besides, even if is feasible (i.e., ), it cannot be used to update the upper bound , because in this subproblem .
Algorithm 3 Relaxation-and-Bound Algorithm
- Require:
Given a QCQP instance and an error tolerance . - 1:
Construct the initial rectangle by solving linear programs and for each ; in this process, all feasible solutions of problem QCQP obtained are stored in the set W; if there is no feasible solution for QCQP, set . - 2:
if then Set , - 3:
else Set , , . - 4:
end if - 5:
for do - 6:
for do - 7:
Compute - 8:
end for - 9:
end for - 10:
Solve LRP to obtain its optimal solution and its optimal value . - 11:
if then . - 12:
if then Set , , . - 13:
end if - 14:
end if - 15:
Set , . - 16:
while do - 17:
Set and let . - 18:
Construct two subrectangles:
- 19:
for do - 20:
Compress or delete rectangle by using algorithms RRTLC and RRTQC. - 21:
if then - 22:
for do - 23:
for do - 24:
Compute - 25:
end for - 26:
end for - 27:
Solve LRP to identify the feasibility of this problem. - 28:
if LRP is feasible then - 29:
its optimal solution and optimal value must be obtained. - 30:
if then - 31:
if then let . - 32:
if then set , . - 33:
end if - 34:
if then - 35:
Put into T. - 36:
end if - 37:
else - 38:
if then - 39:
Put into T. - 40:
else - 41:
let - 42:
if then set , . - 43:
end if - 44:
end if - 45:
end if - 46:
end if - 47:
end if - 48:
end if - 49:
end for - 50:
Set ; . - 51:
Choose a subproblem such that . - 52:
Set . - 53:
end while - 54:
return.
Remark 6. According to lines 34–35 of the above algorithm RBA, only nodes whose solutions are ϵ-globally optimal are deleted; likewise, lines 38–39 imply that only nodes with forced ϵ-globally optimal solutions are deleted. In other words, whenever the insertions in these two places are not executed, the corresponding node is discarded, which is exactly what we call the “ϵ-subproblem-deletion rule” in action.
Remark 7. If the condition in line 32 or 42 is satisfied, the upper bound is updated, and the lower bound is updated on line 50.
Remark 8. Line 51 is the node selection operation of the next iteration. Here, we adopt the node corresponding to the first subproblem with .
To ensure that the two -subproblem-deletion rules mentioned in Remark 6 can be applied, it is necessary that, as the rectangle is gradually refined, gradually approaches zero, where is an optimal solution to problem LRP. For this purpose, the following lemma is given.
Lemma 7. For any , let for . Let be an optimal solution to problem LRP, then as .
Proof. Since is an optimal solution to problem LRP with , let for any ; it follows that
and
Now, let
it follows that
From Formulas (8)–(12), it can be deduced that
Thus, as . □
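Lemma 7 can also be checked numerically: shrinking the rectangle around a fixed point shrinks the interval enclosing each bilinear term, so the gap between the auxiliary variables and the true products vanishes. The snippet below illustrates this, reusing the product_bounds sketch from Section 2 (again an illustration, not the paper's exact bound).

```python
import numpy as np

# Width of the interval enclosing x_i * x_j shrinks with the box width,
# so ||Y - x x^T|| can be driven to zero by refining the rectangle.
x = np.array([1.3, -0.7, 2.1])
for delta in (1.0, 0.1, 0.01, 0.001):
    L, U = product_bounds(x - delta, x + delta)   # sketch from Section 2
    print(f"box half-width {delta:g}: max interval width {np.max(U - L):.6f}")
```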
Theorem 2. (a) If the algorithm terminates within finitely many iterations, an approximately optimal solution of QCQP is found. (b) If the algorithm generates an infinite iteration sequence, then any accumulation point of the sequence is a globally optimal solution to QCQP.
Proof. (a) If the algorithm is finite, assume it stops at the kth iteration. From the termination rule in line 16, we know that
Assuming that the global optimal solution is , we know that
Hence, it follows from inequalities (15) and (16) that
which proves part (a).
(b) If the algorithm is infinite, an infinite sequence is generated for the QCQP problem by solving the linear relaxation problems LRP , and the sequence of optimal solutions of the corresponding problems LRP is . Without loss of generality, assume that the rectangle sequence satisfies and . In our algorithm, the rectangles are repeatedly divided into two parts of equal width, so . Thus . Let ; it follows from Lemma 7 that , which means
Based on the process of determining the upper and lower bounds in the algorithm, we have
Since the sequence is nondecreasing and bounded, and is nonincreasing and bounded, both are convergent. Taking the limit on both sides of (19), we have
By the continuity of the function , it follows from (18)–(20) that
Therefore, any accumulation point of the sequence is a global optimal solution of the QCQP problem. □
In Theorem 2, conclusion (a) merely proves that the proposed algorithm, upon terminating within finitely many iterations, returns an approximately global optimal solution related to of the problem QCQP; it does not specify which of the two classes of solutions given in Definitions 1 and 2 the returned solution belongs to. In fact, for a given precision , the algorithm RBA eventually terminates within finitely many iterations and returns either an -globally optimal solution or a forced -globally optimal solution to the problem QCQP. This is made precise in the following Theorem 3.
Theorem 3. For a given , if the subproblem satisfies , then the algorithm RBA terminates at line 16; at the same time, if , then is an ϵ-globally optimal solution of QCQP; otherwise, is a forced ϵ-globally optimal solution of QCQP.
Proof. When RBA runs to line 16, the subproblem satisfies
Note that the third-to-last inequality of Formula (22) follows from Formula (14) above. By combining Formulas (21) and (22), we have
Moreover, the subproblem also satisfies
Further, it follows from Formulas (23) and (24) that
which means that the algorithm terminates at line 16. Since is the smallest lower bound at the current iteration, we have
which shows that is an -globally optimal solution of QCQP for . Besides, if , must come from line 42 of the algorithm RBA, so we know that . Then, it follows from (2) that
which means that is a forced -globally optimal solution of QCQP for . □
Theorem 4. Given an error tolerance , the algorithm RBA returns a forced ϵ-globally optimal solution for QCQP in at most iterations, where represents the smallest integer greater than a. Proof. Based on Theorem 3, the proof is similar to that of Theorem 3 in [27] and is thus omitted. □
Remark 9. If and , the penultimate inequality of Formula (22) is already satisfied, so it is not difficult to verify that is already an ϵ-globally optimal solution to QCQP; at the same time, the algorithm RBA returns in at most iterations.
5. Numerical Experiment and Analysis
In this section, we adopt several test examples to verify the feasibility of the proposed algorithm. Note that all examples are non-convex problems. The code is compiled and executed in MATLAB 9.0.0.341360 (R2016a). All experiments were performed on a desktop computer with an Intel(R) Core(TM) i7-6700 processor at 3.40 GHz, 16.00 GB of memory, and the Microsoft Windows 7 operating system. In the numerical experiments, all linear programs are solved by using the solver in MATLAB. Besides, CVX 2.2 [35] is used to model and solve all convex optimization problems in MATLAB.
To demonstrate the practical application of the proposed algorithm, we first consider the following balanced transportation problem presented in [36].
where , , denotes the ith source, denotes the jth destination, denotes the unit cost from the ith source to the jth destination, denotes the unit preference from the ith source to the jth destination, and the variable denotes the amount that the ith source supplies to the jth destination, where . By adopting the data in [36], the mathematical model of this balanced transportation problem can be formulated as
Clearly, it follows from the model above that .
Further, let
Then, the above model can be rewritten in the following form:
which is equivalent to the following quadratic programming problem with a quadratic constraint [37]:
where , , . Then, after solving this balanced transportation problem by the proposed algorithm with , we obtain the global optimal solution and the optimal value 0.6553 in 1822.4674 s after 12,549 iterations.
To further test the proposed algorithm, the following 10 test examples are solved by this algorithm and by the algorithms in [38,39], and the numerical results are listed in Table 1. The numerical results recorded in Table 2 are obtained by solving a series of random instances generated from Example 11 with four methods (our algorithm, the algorithms in [38,39], and the commercial software package BARON [34]).
In Table 1 and Table 2, the symbols in the headers of these two tables have the following meanings. Iter: number of iterations or average number of iterations; : tolerance; CPU: CPU running time or average CPU running time; m: number of quadratic constraints; n: number of variables; r: number of negative eigenvalues of the matrix ; “−”: the corresponding algorithm cannot solve the problem within 3600 s in all cases. Moreover, the tolerance is set to in Examples 1–10 and to in Example 11, respectively.
From Table 1, it can be observed that all the optimal values obtained by our algorithm on these 10 examples are at least consistent with those of the algorithms in [38,39], while the optimal value obtained on Example 6 is better than those of the other two algorithms. Furthermore, our algorithm clearly consumes the most computing resources, mainly because its linear relaxation subproblem is the weakest among these algorithms. Nevertheless, these results are sufficient to demonstrate that the proposed algorithm is effective and feasible for solving QCQP.
For convenience, we denote the algorithm in [38] by JCBB and the algorithm in [39] by JLBB.
Example 11. where , , , and are generated as follows: For each , the matrix is generated, in which each element is randomly generated in the interval [−1,1].
Set for each .
For each , by using eigenvalue decomposition, is generated. Also, it can be noted that is a diagonal matrix.
The first r() diagonal elements of matrix are replaced by the r numbers randomly generated in the interval [−10,0], and the last diagonal elements of are replaced by the numbers randomly generated in [0,10].
For each , replace all diagonal elements of the matrix by randomly generating n numbers in [1,100].
For each , let .
For each , all elements of the n-dimensional vector are generated randomly in [−100, 100], and the real number is generated randomly in [1,50].
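The recipe above can be sketched in code as follows; this is our illustrative reading of the construction (in particular, we assume that the r negative eigenvalues refer to the objective matrix, consistent with the description of the parameter r above), not the authors' actual generator.

```python
import numpy as np

def generate_instance(n, m, r, seed=None):
    """Random instance in the spirit of Example 11 (illustrative sketch).

    Returns objective data (Q0, c0) and m convex quadratic constraints
    x^T Q_i x + c_i^T x <= b_i; all symbol names are ours.
    """
    rng = np.random.default_rng(seed)

    def eigenbasis():
        # symmetric matrix with entries in [-1, 1], then its eigenvectors
        P = rng.uniform(-1.0, 1.0, size=(n, n))
        _, V = np.linalg.eigh(0.5 * (P + P.T))
        return V

    def with_spectrum(diag):
        V = eigenbasis()
        return V @ np.diag(diag) @ V.T

    # objective matrix: r eigenvalues in [-10, 0], the remaining ones in [0, 10]
    Q0 = with_spectrum(np.concatenate([rng.uniform(-10.0, 0.0, r),
                                       rng.uniform(0.0, 10.0, n - r)]))
    # constraint matrices: all eigenvalues in [1, 100], hence convex constraints
    Qs = [with_spectrum(rng.uniform(1.0, 100.0, n)) for _ in range(m)]
    # linear terms in [-100, 100] and right-hand sides in [1, 50]
    c0 = rng.uniform(-100.0, 100.0, n)
    cs = [rng.uniform(-100.0, 100.0, n) for _ in range(m)]
    bs = rng.uniform(1.0, 50.0, m)
    return Q0, c0, Qs, cs, bs
```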
The above construction of Example 11 shows that the feasible region of this problem is defined by m convex quadratic constraints. Therefore, the feasible region of Example 11 is convex, so convex optimization problems with linear objective functions need to be solved to construct the initial rectangle . By using Example 11 and each set of parameters , 10 random examples are generated and solved by the algorithms RBA, JCBB, JLBB, and BARON, and then their average results are recorded in Table 2.
From Table 2, it can be observed that our algorithm can be applied to QCQP problems with different forms of objective functions. The CPU running time and the number of iterations of the algorithm are positively correlated with the number n of decision variables. For fixed parameters m and n, the larger the number r of negative eigenvalues of the matrix , the more iterations and CPU time the algorithm requires. Unfortunately, the computational performance of the algorithm RBA is indeed inferior to that of the algorithms JCBB and JLBB, and when , our algorithm cannot solve the problem within 3600 s. However, our algorithm gives better numerical results than BARON. In particular, BARON can no longer solve the problem within 3600 s at , but our algorithm can solve four more groups of problems. By comparing the relaxation techniques of the algorithms JCBB and JLBB, it is not difficult to see that these two algorithms adopt tighter relaxation strategies than ours, which is the main reason why they outperform our algorithm. In fact, our linear relaxation subproblem is obtained only by simply relaxing the bounds of the introduced variables, which shows that our algorithm is still promising, and some tighter relaxation techniques (e.g., SDP or RLT) could be incorporated into our relaxation method to develop better algorithms.
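As one example of such a tightening, the RLT/McCormick envelope of each bilinear term y = x_i x_j over the current rectangle adds four linear inequalities linking y to x_i and x_j; the sketch below is our illustration of such a possible extension, not a part of the proposed algorithm.

```python
def mccormick_rows(li, ui, lj, uj):
    """McCormick inequalities for y = x_i * x_j with x_i in [li, ui], x_j in [lj, uj].

    Each returned tuple (a_i, a_j, a_y, rhs) encodes a_i*x_i + a_j*x_j + a_y*y <= rhs.
    These cuts are tighter than interval bounds on y alone and could be appended
    to the linear relaxation subproblem.
    """
    return [
        ( lj,  li, -1.0,  li * lj),   # y >= lj*x_i + li*x_j - li*lj
        ( uj,  ui, -1.0,  ui * uj),   # y >= uj*x_i + ui*x_j - ui*uj
        (-uj, -li,  1.0, -li * uj),   # y <= uj*x_i + li*x_j - li*uj
        (-lj, -ui,  1.0, -ui * lj),   # y <= lj*x_i + ui*x_j - ui*lj
    ]
```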
In summary, the above numerical experiments are sufficient to demonstrate the effectiveness and feasibility of the proposed algorithm; although it does not perform as well as the algorithms in [38,39], it outperforms the commercial software package BARON.