1. Introduction
In this study, we consider the following linear multiplicative program (LMP):
$$\text{(LMP)}: \quad \min\ f(x) = \prod_{i=1}^{p} \left(c_i^T x + d_i\right) \quad \text{s.t.} \quad x \in X = \{x \in \mathbb{R}^n \mid Ax \le b\},$$
where the feasible domain $X \subseteq \mathbb{R}^n$ is $n$-dimensional, nonempty, and bounded; $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $c_i \in \mathbb{R}^n$, $d_i \in \mathbb{R}$, and $c_i^T x + d_i > 0$ for all $x \in X$ and $i = 1, \ldots, p$.
The linear multiplicative programming problem arises in many application areas, for example, financial optimization [1,2], microeconomics [3], robust optimization [4], decision tree optimization [5], multiple-objective decision making [6,7], VLSI chip design [8], optimal packing and layout [9], and control problems [10,11,12,13,14]. It is well known that the (LMP) problem lacks the structural properties (such as convexity) that standard methods rely on, which is an important challenge in solving this kind of problem. In addition, we note that the (LMP) problem is closely related to the linear maximum multiplicative programming problem (see [14,15,16]); specifically, the latter can be obtained by changing the $\min$ in the objective function of the (LMP) problem to $\max$. Some scholars have shown that the (LMP) problem is NP-hard [17], whereas the linear maximum multiplicative programming problem can be solved in polynomial time. In [18], Kuno pointed out that, without requiring each product term in the objective function to be positive, one can multiply the negative product terms by the sign "−" and, considering the parity of p, eventually classify the original problem into two categories, namely the (LMP) problem and the linear maximum multiplicative programming problem. Therefore, without loss of generality, as in [18], we investigate the (LMP) problem under the assumption that each linear product term is positive. We note in advance that the method presented here can be generalized and applied to the maximization form of such problems.
Over the past 20 years, some practical algorithms have been developed to solve the (LMP) problem, and, with the increasing reliance on modeling and optimization in real-world applications, great progress has been made in both local and global optimization theory and algorithms. Compared with local optimization methods, global optimization methods for solving the (LMP) problem are still few, but previous scholars have carried out a series of studies on global methods for (LMP). These methods can be classified as branch-and-bound methods [18,19,20,21,22,23,24,25,26], outer-approximation methods [27,28], vertex enumeration methods [29], heuristic methods [30,31], an outcome-space cutting-plane method [32], parameterization-based methods [33,34,35], and level-set algorithms [36,37]. Shao and Ehrgott also proposed a class of global optimization algorithms for solving the (LMP) problem by using multi-objective linear programming and primal-dual conditions [38]. However, despite this progress, solving (LMP) globally remains a thorny problem. For (LMP) and its variants, several global methods have been proposed. For example, Jiao [22] established a reliable and efficient algorithm for a class of generalized linear multiplicative programs by using linear approximations of exponential and logarithmic functions. Shen [24] proposed a new accelerating method for solving generalized linear multiplicative programs by combining an appropriate deletion technique with the branch-and-bound scheme. To solve linear multiplicative programs in exponential form, Liu and Zhao [37] proposed a level-set algorithm based on the work of Youness [36], while Shen et al. [39] proposed a fully polynomial-time approximation algorithm. For linear programs with multiplicative constraints, Benson [40] proposed a branch-and-bound algorithm based on decomposition.
Aiming at the (LMP) problem, this paper proposes a branch-and-bound algorithm based on rectangular branching in the output space. First, an equivalent optimization problem (EP) of the problem (LMP) is established. Secondly, an approximation theorem for binary bilinear functions is given and a relaxation subproblem of the problem (EP) is constructed. Finally, a branch-and-bound algorithm based on the output space is designed for the (EP) problem. Compared with the methods in the literature above, the proposed method has the following characteristics:
(a) Our relaxation method is easy to implement and can be extended to more general optimization problems; in particular, it carries over directly to the linear maximum multiplicative programming problem.
(b) The branching operation of the proposed branch-and-bound algorithm acts on the p-dimensional output space, which greatly reduces the computational cost.
(c) We also propose a new hyper-rectangle compression and reduction technique, which greatly improves the convergence rate of the algorithm; we then analyze the computational complexity of the algorithm.
(d) To better illustrate the effectiveness of the proposed algorithm, we performed many numerical experiments and compared the results with those of the relevant references to illustrate the characteristics of the algorithm.
The remainder of this article is outlined as follows.
Section 2 first gives the equivalent problem (EP) of (LMP) as well as the algorithm framework; it then analyzes the problem (EP) and presents the bounding, branching, and rectangle-reducing operations required by the algorithm, and finally gives the specific steps of the algorithm. In Section 3, the algorithm is analyzed, and it is shown that the algorithm terminates within finitely many iterations. In Section 4, some numerical experiments are presented and the characteristics of our algorithm are analyzed. Finally, the method of this paper is briefly reviewed.
2. Output-Space Branch-and-Bound Reduction Algorithm for (LMP)
2.1. Convert (LMP) into an Equivalent Problem (EP)
In this subsection, we show how to convert the problem (LMP) into a non-convex programming problem (EP), and introduce a $(2p-1)$-dimensional vector $y$ to obtain the initial hyper-rectangle $H^0$.
First, the new variable $y_i$ is introduced for the term $c_i^T x + d_i$ in the objective function of the problem (LMP), and then there is the formula
$$y_i = c_i^T x + d_i, \quad i = 1, \ldots, p. \qquad (1)$$
Let $y_{p+1}, \ldots, y_{2p-1}$ satisfy
$$y_{p+1} = y_1 y_2, \qquad y_{p+t} = y_{p+t-1}\, y_{t+1}, \quad t = 2, \ldots, p-1,$$
by taking advantage of the internal structure of $f(x) = \prod_{i=1}^{p} y_i$; in particular, $y_{2p-1} = \prod_{i=1}^{p} y_i = f(x)$. Thus, the $(2p-1)$-dimensional variable $y$ is established, that is, $y = (y_1, \ldots, y_p, y_{p+1}, \ldots, y_{2p-1})^T$.
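As a small worked illustration of this chain (our own example in the notation above): for $p = 3$, the objective $f(x) = (c_1^T x + d_1)(c_2^T x + d_2)(c_3^T x + d_3)$ yields $y_4 = y_1 y_2$ and $y_5 = y_4 y_3 = y_1 y_2 y_3$, so minimizing $f$ over $X$ amounts to minimizing the single coordinate $y_5$ subject to the two bilinear chain constraints.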
Secondly, two different operations are used to construct an initial hyper-rectangle $H^0$ for the variable $y$ in two stages. In the first stage, we construct the hyper-rectangle $Y^0$, and, in the second stage, the hyper-rectangle $\overline{H}^0$ is also constructed; thus the hyper-rectangle $H^0$ can be represented as $H^0 = Y^0 \times \overline{H}^0$. To obtain $Y^0$, let
$$\underline{y}_i^0 = \min_{x \in X}\ c_i^T x + d_i \quad \text{and} \quad \overline{y}_i^0 = \max_{x \in X}\ c_i^T x + d_i, \quad i = 1, \ldots, p. \qquad (2)$$
The upper and lower bounds of each variable $y_i$ are defined by the two ends of the following inequality (3), that is,
$$\underline{y}_i^0 \le y_i = c_i^T x + d_i \le \overline{y}_i^0, \quad i = 1, \ldots, p. \qquad (3)$$
Then, the initial hyper-rectangle $Y^0$ of the variable $(y_1, \ldots, y_p)^T$ can be recorded as $Y^0 = \prod_{i=1}^{p} [\underline{y}_i^0, \overline{y}_i^0]$ with $\underline{y}_i^0 \le \overline{y}_i^0$. The initial hyper-rectangle $H^0$ for the variable $y$ can be represented as $H^0 = Y^0 \times \overline{H}^0$ by using the representation of the Cartesian product. Of course, for each sub-rectangle $H = Y \times \overline{H} \subseteq H^0$ with $Y = \prod_{i=1}^{p} [\underline{y}_i, \overline{y}_i]$, we also define
$$\underline{y}_{p+t} = \prod_{i=1}^{t+1} \underline{y}_i \quad \text{and} \quad \overline{y}_{p+t} = \prod_{i=1}^{t+1} \overline{y}_i, \quad t = 1, \ldots, p-1, \qquad (4)$$
so that $\overline{H} = \prod_{t=1}^{p-1} [\underline{y}_{p+t}, \overline{y}_{p+t}]$.
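The stage-one bounds in (2) are just $2p$ linear programs, and the stage-two bounds in (4) are cumulative products. The following minimal sketch (our own illustration, not code from the paper; it uses scipy.optimize.linprog and relies on the positivity of each linear term) computes the initial box:

```python
import numpy as np
from scipy.optimize import linprog

def initial_box(A, b, C, d):
    """Stage one, Equation (2): for each i, minimize and maximize
    c_i^T x + d_i over X = {x : A x <= b}; C holds the c_i as rows.
    Stage two, Equation (4): cumulative products of the bounds."""
    p, n = C.shape
    y_lo, y_hi = np.empty(p), np.empty(p)
    for i in range(p):
        lo = linprog(C[i], A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        hi = linprog(-C[i], A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        y_lo[i], y_hi[i] = lo.fun + d[i], -hi.fun + d[i]
    # Equation (4): valid because every y_lo[i] > 0 by assumption.
    h_lo, h_hi = np.cumprod(y_lo)[1:], np.cumprod(y_hi)[1:]
    return y_lo, y_hi, h_lo, h_hi
```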
Finally, through the above construction, we naturally establish the problem (LMP) in the form of the following equivalent optimization problem (EOP), that is,
$$\min\ y_{2p-1} \quad \text{s.t.} \quad y_i = c_i^T x + d_i,\ i = 1, \ldots, p; \quad y_{p+1} = y_1 y_2; \quad y_{p+t} = y_{p+t-1}\, y_{t+1},\ t = 2, \ldots, p-1; \quad x \in X;\ y \in H^0.$$
In particular, it is observed that (EOP) has $n + 2p - 1$ variables. We define the function $h(x, y) = y_{2p-1}$ with $(x, y) \in X \times H^0$, and then (EOP) can be rewritten as the equivalent problem (EP) of minimizing $h$ over the feasible set of (EOP).
Theorem 1. $x^*$ is a globally optimal solution of the problem (LMP) if and only if $(x^*, y^*)$ is a globally optimal solution of the problem (EP), where the equations $y_i^* = c_i^T x^* + d_i$, $i = 1, \ldots, p$, and $y_{p+1}^* = y_1^* y_2^*$, $y_{p+t}^* = y_{p+t-1}^*\, y_{t+1}^*$, $t = 2, \ldots, p-1$, are also established.
Proof of Theorem 1. If $x^*$ is a globally optimal solution of the problem (LMP), set $y_i^* = c_i^T x^* + d_i$, $i = 1, \ldots, p$, $y_{p+1}^* = y_1^* y_2^*$, and $y_{p+t}^* = y_{p+t-1}^* y_{t+1}^*$, $t = 2, \ldots, p-1$. Then, $(x^*, y^*)$ is a feasible solution of (EP), and there is also the corresponding objective function value, that is,
$$h(x^*, y^*) = y_{2p-1}^* = \prod_{i=1}^{p} \left(c_i^T x^* + d_i\right) = f(x^*).$$
Let $(x, y)$ be any feasible solution of the problem (EP), and naturally there is the equation
$$h(x, y) = y_{2p-1} = \prod_{i=1}^{p} \left(c_i^T x + d_i\right) = f(x),$$
which also means that the objective values of (EP) and (LMP) coincide at corresponding feasible points. By using the optimality of $x^*$, $f(x^*) \le f(x)$ must be valid. Therefore, with $h(x^*, y^*) = f(x^*)$ and $h(x, y) = f(x)$, a globally optimal solution of problem (EP) can be obtained, that is, $(x^*, y^*)$.
On the other hand, we assume that $(x^*, y^*)$ is a globally optimal solution of the (EP) problem; then $y_i^* = c_i^T x^* + d_i$, $i = 1, \ldots, p$, and $y_{2p-1}^* = \prod_{i=1}^{p} y_i^*$ must also hold. For an arbitrary feasible solution $x$ of the problem (LMP), if we set
$$y_i = c_i^T x + d_i,\ i = 1, \ldots, p, \qquad y_{p+1} = y_1 y_2, \qquad y_{p+t} = y_{p+t-1}\, y_{t+1},\ t = 2, \ldots, p-1,$$
then $(x, y)$ is a feasible solution of the (EP) problem and the objective function value is $h(x, y) = f(x)$. Through the optimality of $(x^*, y^*)$ and the feasibility of $(x, y)$, there is
$$f(x^*) = h(x^*, y^*) \le h(x, y) = f(x).$$
According to the above inequality, we can conclude that $x^*$ is a globally optimal solution of the problem (LMP). The conclusion is completely proved. □
Combined with (1), we can notice that the objective function of the problem (EP) has the property that $h(x, y) = y_{2p-1}$ depends on $y$ alone. This property ensures that $h(x, y)$ in the following sections can be replaced by $h(y)$, that is, $h(y) = y_{2p-1}$.
Although the constraints of (EP) are still nonlinear, we can solve the problem (EP) with a branch-and-bound algorithm based on rectangular subdivisions of the set $H^0$. Let $\{H^k\}$ denote a rectangular partition of $H^0$, i.e., each $H^k = Y^k \times \overline{H}^k \subseteq H^0$. In the process of subdividing a rectangle $H$, we must point out that the rectangle $Y$ forming the first part of $H$ is subdivided by the standard bisection method, and then $\overline{H}$, the second part, is compressed directly; this compression method is described in detail below. Our rectangular branch-and-bound algorithm systematically reduces the rectangular region containing the global solution by repeating seven basic steps.
Output-Space Branch-and-Bound Reduction Procedure
Step 0. (Initialization) Set the tolerance $\varepsilon > 0$. The initial upper bound $UB_0$, lower bound $LB_0$, best solution $(\hat{x}^0, \hat{y}^0)$, and feasible-solution set $F$ are obtained by solving the initial lower-bound subproblem over $H^0$.
Step 1. (Termination criteria) If $UB_k - LB_k \le \varepsilon$ or $\Xi = \emptyset$, then the algorithm is terminated, and the $\varepsilon$-globally optimal solution and the $\varepsilon$-globally optimal value of the (EP) problem are output immediately, and the $\varepsilon$-globally optimal solution and the $\varepsilon$-globally optimal value of (LMP) are also output immediately. Otherwise, go to Step 2.
Step 2. Select an appropriate rectangle $H^k$ from $\Xi$ and set $\Xi = \Xi \setminus \{H^k\}$; $UB_k$ satisfies $UB_k = h(\hat{y}^k)$, where $(\hat{x}^k, \hat{y}^k)$ is the best feasible solution thus far.
Step 3. (Branching operation) Divide $Y^k$ into two rectangles $Y^{k_1}$ and $Y^{k_2}$.
Step 4. (Reduction operation) Using the rectangle-reduction technique, the two rectangles generated by Step 3 are refined separately, and the index set of the remaining rectangles after the reduction is represented by $\Theta$; obviously, $|\Theta| \le 2$.
Step 5. (Bounding operation) Compute the lower bound $LB(H^\tau)$ on the minimum value of $h$ over $H^\tau$ for each $\tau \in \Theta$. If $LB(H^\tau) \le UB_k - \varepsilon$ with $\tau \in \Theta$, put $H^\tau$ into $\Xi$, i.e., $\Xi = \Xi \cup \{H^\tau\}$.
Step 6. (Updating the upper and lower bound) New feasible solutions are used to update the upper bound. Thus far, the least lower bound among all remaining subproblems is chosen as the new lower bound.
Obviously, Step 5 is the key to improving the efficiency of the algorithm. In the next subsection, we give a linear relaxation subproblem of the problem (EP) to provide an efficient lower bound for its optimal value.
2.2. Novel Linear Relaxation Approach
An important operation of the branch-and-bound procedure for solving (EP) is to establish and solve a series of lower-bounding relaxation problems of (EP) over sub-rectangles $H \subseteq H^0$. We then define the associated subproblem of (EP) on $H$ as (EP) with the constraint $y \in H^0$ replaced by $y \in H$.
The main idea of our relaxation method is to linearize the nonlinear constraints of (EP) and finally obtain a linear lower-bounding relaxation problem. Next, we give Theorem 2, which is associated with the proposed linearization method.
Theorem 2. Let $z \in [\underline{z}, \overline{z}]$ and $w \in [\underline{w}, \overline{w}]$, and write $s = z + w \in [\underline{s}, \overline{s}]$ and $t = z - w \in [\underline{t}, \overline{t}]$, where $\underline{s} = \underline{z} + \underline{w}$, $\overline{s} = \overline{z} + \overline{w}$, $\underline{t} = \underline{z} - \overline{w}$, $\overline{t} = \overline{z} - \underline{w}$. For any $(z, w) \in [\underline{z}, \overline{z}] \times [\underline{w}, \overline{w}]$, define
$$\varphi(z, w) = zw, \quad \varphi^l(z, w) = \frac{1}{4}\left[(\underline{s} + \overline{s})s - \frac{(\underline{s} + \overline{s})^2}{4} - (\underline{t} + \overline{t})t + \underline{t}\,\overline{t}\right], \quad \varphi^u(z, w) = \frac{1}{4}\left[(\underline{s} + \overline{s})s - \underline{s}\,\overline{s} - (\underline{t} + \overline{t})t + \frac{(\underline{t} + \overline{t})^2}{4}\right].$$
Then, the following conclusions hold:
- (a) $\varphi^l(z, w) \le \varphi(z, w)$;
- (b) $\varphi(z, w) \le \varphi^u(z, w)$; and
- (c) $\varphi(z, w) - \varphi^l(z, w) \to 0$ and $\varphi^u(z, w) - \varphi(z, w) \to 0$ as $\overline{z} - \underline{z} \to 0$, $\overline{w} - \underline{w} \to 0$.
Proof of Theorem 2. (a) Through the linear underestimation and overestimation functions defined by the single-variable quadratic function $\xi^2$ over an interval $[\underline{\xi}, \overline{\xi}]$, we have the midpoint-tangent and endpoint-secant bounds
$$(\underline{\xi} + \overline{\xi})\xi - \frac{(\underline{\xi} + \overline{\xi})^2}{4} \le \xi^2 \le (\underline{\xi} + \overline{\xi})\xi - \underline{\xi}\,\overline{\xi}, \quad \xi \in [\underline{\xi}, \overline{\xi}], \qquad (5)$$
which is the basic inequality used below.
Note that $s = z + w$ and $t = z - w$ are univariate, and $s^2$ and $t^2$ are convex functions of the variables $z + w$ and $z - w$ defined on the intervals $[\underline{s}, \overline{s}]$ and $[\underline{t}, \overline{t}]$, respectively. Then, using inequality (5), we have
$$(\underline{s} + \overline{s})s - \frac{(\underline{s} + \overline{s})^2}{4} \le s^2 \le (\underline{s} + \overline{s})s - \underline{s}\,\overline{s} \qquad (6)$$
and
$$(\underline{t} + \overline{t})t - \frac{(\underline{t} + \overline{t})^2}{4} \le t^2 \le (\underline{t} + \overline{t})t - \underline{t}\,\overline{t}. \qquad (7)$$
By (5)–(7) and the identity $zw = \frac{1}{4}(s^2 - t^2)$, we can find that
$$\varphi^l(z, w) \le \frac{1}{4}\left(s^2 - t^2\right) = zw \quad \text{and} \quad zw = \frac{1}{4}\left(s^2 - t^2\right) \le \varphi^u(z, w).$$
Therefore, we have conclusions (a) and (b).
(c) Since the function $\xi^2$ is a convex function of the variable $\xi$ over the interval $[\underline{\xi}, \overline{\xi}]$, the maximum gap between $\xi^2$ and its midpoint tangent is reached at the point $\underline{\xi}$ or $\overline{\xi}$, that is,
$$0 \le \xi^2 - \left[(\underline{\xi} + \overline{\xi})\xi - \frac{(\underline{\xi} + \overline{\xi})^2}{4}\right] \le \frac{(\overline{\xi} - \underline{\xi})^2}{4}, \qquad (8)$$
while the maximum gap between the endpoint secant and $\xi^2$ is reached at the midpoint, that is,
$$0 \le (\underline{\xi} + \overline{\xi})\xi - \underline{\xi}\,\overline{\xi} - \xi^2 \le \frac{(\overline{\xi} - \underline{\xi})^2}{4}. \qquad (9)$$
By using (8) and (9), we have $0 \le \varphi(z, w) - \varphi^l(z, w)$ and $0 \le \varphi^u(z, w) - \varphi(z, w)$ on the whole box. Through (8), (9), and the definitions of $\varphi$, $\varphi^l$, $s$, and $t$, we can draw the following conclusion:
$$\varphi(z, w) - \varphi^l(z, w) \le \frac{1}{4}\left[\frac{(\overline{s} - \underline{s})^2}{4} + \frac{(\overline{t} - \underline{t})^2}{4}\right]. \qquad (10)$$
Similarly, through (8), (9), and the definitions of $\varphi$, $\varphi^u$, $s$, and $t$, we also have
$$\varphi^u(z, w) - \varphi(z, w) \le \frac{1}{4}\left[\frac{(\overline{s} - \underline{s})^2}{4} + \frac{(\overline{t} - \underline{t})^2}{4}\right]. \qquad (11)$$
Then, by using (10) and (11), together with $\overline{s} - \underline{s} = \overline{t} - \underline{t} = (\overline{z} - \underline{z}) + (\overline{w} - \underline{w})$, we have $\varphi(z, w) - \varphi^l(z, w) \to 0$ and $\varphi^u(z, w) - \varphi(z, w) \to 0$ as $\overline{z} - \underline{z} \to 0$ and $\overline{w} - \underline{w} \to 0$. At this point, the conclusion is proved. □
It is noted that, at the rightmost end of inequalities (10) and (11), the size of the positive number $(\overline{z} - \underline{z}) + (\overline{w} - \underline{w})$ has a direct effect on the upper and lower estimates of the function $\varphi$: the smaller this quantity becomes, the smaller the gap between $\varphi$ and its upper and lower estimation functions, so that after each refinement of the intervals the upper and lower estimates become tighter and more stable.
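To make the construction of $\varphi^l$ and $\varphi^u$ concrete, the following sketch (our own illustration of the midpoint-tangent/endpoint-secant scheme reconstructed above, not code from the paper) builds both estimators for a given box and verifies the sandwich property numerically:

```python
import numpy as np

def bilinear_bounds(z_lo, z_hi, w_lo, w_hi):
    """Linear under/over-estimators of phi(z, w) = z*w on a box, built
    from z*w = ((z+w)^2 - (z-w)^2)/4 with the midpoint tangent below
    and the endpoint secant above each square (inequality (5))."""
    s_lo, s_hi = z_lo + w_lo, z_hi + w_hi          # range of s = z + w
    t_lo, t_hi = z_lo - w_hi, z_hi - w_lo          # range of t = z - w

    def tangent(u, lo, hi):                        # midpoint tangent: <= u^2
        return (lo + hi) * u - (lo + hi) ** 2 / 4.0

    def secant(u, lo, hi):                         # endpoint secant: >= u^2 on [lo, hi]
        return (lo + hi) * u - lo * hi

    phi_l = lambda z, w: (tangent(z + w, s_lo, s_hi) - secant(z - w, t_lo, t_hi)) / 4.0
    phi_u = lambda z, w: (secant(z + w, s_lo, s_hi) - tangent(z - w, t_lo, t_hi)) / 4.0
    return phi_l, phi_u

# Sanity check on random points of the box [1, 3] x [2, 5]:
phi_l, phi_u = bilinear_bounds(1.0, 3.0, 2.0, 5.0)
rng = np.random.default_rng(0)
z, w = rng.uniform(1, 3, 1000), rng.uniform(2, 5, 1000)
assert np.all(phi_l(z, w) <= z * w + 1e-12) and np.all(z * w <= phi_u(z, w) + 1e-12)
```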
For any sub-rectangle $H = Y \times \overline{H} \subseteq H^0$ and each $t \in \{1, \ldots, p-1\}$, there is no loss of generality in the following identification: the bilinear constraint $y_{p+t} = y_{p+t-1}\, y_{t+1}$ is viewed as $y_{p+t} = \varphi(z, w)$ with $z = y_{p+t-1} \in [\underline{y}_{p+t-1}, \overline{y}_{p+t-1}]$ and $w = y_{t+1} \in [\underline{y}_{t+1}, \overline{y}_{t+1}]$ (for $t = 1$, take $z = y_1$ and $w = y_2$). Using Theorem 2, for each $t$, let $\varphi_t^l$ and $\varphi_t^u$ denote the corresponding linear underestimation and overestimation functions built from the bounds of $H$; then $\varphi_t^l \le \varphi \le \varphi_t^u$ on $H$, and $\varphi - \varphi_t^l \to 0$, $\varphi_t^u - \varphi \to 0$ as $\overline{y}_i - \underline{y}_i \to 0$, $i = 1, \ldots, 2p-1$.
Theorem 3. For any sub-rectangle $H = Y \times \overline{H} \subseteq H^0$ and any $t \in \{1, \ldots, p-1\}$, let $\delta = \max\{\overline{y}_i - \underline{y}_i : i = 1, \ldots, p\}$; we have $\varphi_t^u - \varphi_t^l \to 0$ on $H$ as $\delta \to 0$.
Proof of Theorem 3. According to Theorem 2 and the definition of $\varphi_t^l$ and $\varphi_t^u$, we easily know that
$$\varphi_t^u - \varphi_t^l \le \frac{1}{4}\left[(\overline{y}_{p+t-1} - \underline{y}_{p+t-1}) + (\overline{y}_{t+1} - \underline{y}_{t+1})\right]^2, \qquad (12)$$
with the convention that the $y_{p+t-1}$-indexed quantities are replaced by those of $y_1$ when $t = 1$. If $t = 1$, both widths on the right side of (12) are at most $\delta$ and the conclusion is obviously true; thus we only discuss the case of $t \ge 2$. For each $t \ge 2$, by Equation (4) we have
$$\overline{y}_{p+t} - \underline{y}_{p+t} = \overline{y}_{p+t-1}\,\overline{y}_{t+1} - \underline{y}_{p+t-1}\,\underline{y}_{t+1} \qquad (13)$$
and
$$\overline{y}_{p+t-1}\,\overline{y}_{t+1} - \underline{y}_{p+t-1}\,\underline{y}_{t+1} = \overline{y}_{p+t-1}\left(\overline{y}_{t+1} - \underline{y}_{t+1}\right) + \underline{y}_{t+1}\left(\overline{y}_{p+t-1} - \underline{y}_{p+t-1}\right). \qquad (14)$$
Then, by using triangle-type inequalities to combine (13) and (14), we obtain the following recurrence formula:
$$\overline{y}_{p+t} - \underline{y}_{p+t} \le M\left[(\overline{y}_{t+1} - \underline{y}_{t+1}) + (\overline{y}_{p+t-1} - \underline{y}_{p+t-1})\right], \qquad (15)$$
where $M = \max\{\overline{y}_i^0, \overline{y}_{p+j}^0 : i = 1, \ldots, p,\ j = 1, \ldots, p-1\}$ is a constant independent of $H$. Furthermore, by unrolling the recurrence (15), it is necessary to have
$$\overline{y}_{p+t} - \underline{y}_{p+t} \le N\delta, \qquad (16)$$
where $N > 0$ is a constant depending only on $M$, $p$, and $t$. By combining Equations (12) and (16), we can deduce
$$\varphi_t^u - \varphi_t^l \le \frac{1}{4}\left[N\delta + \delta\right]^2. \qquad (17)$$
It can be noted that, at the rightmost end of inequality (17), when $\delta \to 0$, there is $(N\delta + \delta)^2 \to 0$. Because $\delta \to 0$ means $\overline{y}_i - \underline{y}_i \to 0$ for each $i = 1, \ldots, p$, the right side of inequality (17) tends to zero, and then $\varphi_t^u - \varphi_t^l \to 0$. The proof is complete. □
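As a quick empirical illustration of this convergence (our own check, reusing the bilinear_bounds sketch above), halving the box width roughly quarters the worst-case relaxation gap, matching the quadratic bound in (17):

```python
import numpy as np

# Halving the box roughly quarters the worst-case gap phi_u - phi_l.
for halvings in range(4):
    width = 2.0 / 2 ** halvings
    phi_l, phi_u = bilinear_bounds(1.0, 1.0 + width, 2.0, 2.0 + width)
    z, w = np.meshgrid(np.linspace(1, 1 + width, 50), np.linspace(2, 2 + width, 50))
    print(f"width {width:5.3f} -> max gap {np.max(phi_u(z, w) - phi_l(z, w)):.5f}")
```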
Below, by using the conclusions given above, the linear relaxation problem (LRP) is obtained by relaxing the nonlinear constraints of the equivalent problem (EP): on $H^0$, each bilinear constraint $y_{p+t} = y_{p+t-1}\, y_{t+1}$ is replaced by the pair of linear inequalities
$$\varphi_t^l \le y_{p+t} \le \varphi_t^u, \quad t = 1, \ldots, p-1,$$
with the estimators built from the bounds of $H^0$. Of course, the relaxation subproblem (LRP$_H$) defined on a sub-rectangle $H$ is obtained in the same way, with the estimators built from the bounds of $H$.
Theorem 3 shows that the feasible domain of the linear relaxation subproblem described above gradually approximates the feasible domain of the equivalent problem (EP) as the algorithm refines the first part $Y$ of the hyper-rectangle $H$.
There is a fact that goes without saying: if (LRP$_H$) is infeasible, then the subproblem of (EP) on $H$ is also infeasible; otherwise, for any optimal solution of (LRP$_H$), its objective value is an obvious lower bound. In particular, if $LB(H)$ and $(\tilde{x}, \tilde{y})$ are defined as the optimal value and an optimal solution of (LRP$_H$), respectively, then
$$LB(H) \le \min\{h(y) : (x, y) \text{ is feasible for (EP)},\ y \in H\}.$$
Finally, the bounding operation of our branch-and-bound procedure can be expressed as follows:
Set $LB(H^\tau)$ to the optimal value of (LRP$_{H^\tau}$). If $LB(H^\tau) \le UB_k - \varepsilon$ with $\tau \in \Theta$, put $H^\tau$ into $\Xi$, i.e., $\Xi = \Xi \cup \{H^\tau\}$.
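The subproblem (LRP$_H$) is an ordinary linear program in the variables $(x, y)$. The sketch below (our own illustration built on the reconstruction above, not code from the paper) assembles and solves it with scipy.optimize.linprog; note that $\varphi_t^l$ and $\varphi_t^u$ share the same linear part and differ only in their constant terms:

```python
import numpy as np
from scipy.optimize import linprog

def solve_lrp(A, b, C, d, y_lo, y_hi):
    """Assemble and solve (LRP_H) on the box [y_lo, y_hi] (each of
    length 2p-1).  Variables v = (x, y); every bilinear constraint
    y_{p+t} = z*w is replaced by phi_l <= y_{p+t} <= phi_u."""
    (m, n), p = A.shape, C.shape[0]
    dim = n + 2 * p - 1
    Y = lambda i: n + i                              # column of y_{i+1} (0-based)
    A_ub, b_ub = [np.hstack([A, np.zeros((m, 2 * p - 1))])], [b]
    A_eq, b_eq = np.zeros((p, dim)), np.asarray(d, float).copy()
    for i in range(p):                               # y_i = c_i^T x + d_i
        A_eq[i, :n], A_eq[i, Y(i)] = -C[i], 1.0
    for t in range(1, p):                            # chain y_{p+t} = z * w
        iz = Y(0) if t == 1 else Y(p + t - 2)        # z = y_1 or y_{p+t-1}
        iw = Y(t)                                    # w = y_{t+1}
        zl, zh, wl, wh = y_lo[iz - n], y_hi[iz - n], y_lo[iw - n], y_hi[iw - n]
        s_lo, s_hi, t_lo, t_hi = zl + wl, zh + wh, zl - wh, zh - wl
        az = ((s_lo + s_hi) - (t_lo + t_hi)) / 4.0   # shared coefficient of z
        aw = ((s_lo + s_hi) + (t_lo + t_hi)) / 4.0   # shared coefficient of w
        cl = (t_lo * t_hi - (s_lo + s_hi) ** 2 / 4.0) / 4.0   # constant of phi_l
        cu = ((t_lo + t_hi) ** 2 / 4.0 - s_lo * s_hi) / 4.0   # constant of phi_u
        row = np.zeros(dim); row[iz], row[iw] = az, aw
        lo_row, hi_row = row.copy(), -row
        lo_row[Y(p + t - 1)] -= 1.0                  # phi_l - y_{p+t} <= -cl
        hi_row[Y(p + t - 1)] += 1.0                  # y_{p+t} - phi_u <=  cu
        A_ub += [lo_row[None], hi_row[None]]
        b_ub += [np.array([-cl]), np.array([cu])]
    c = np.zeros(dim); c[Y(2 * p - 2)] = 1.0         # minimize y_{2p-1}
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=np.concatenate(b_ub),
                  A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n + list(zip(y_lo, y_hi)))
    return res.status == 0, res.fun, res.x           # feasible?, LB(H), solution
```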
Remark 1. In this paper, we obtain the lower-bounding relaxation problem (LRP) by using the estimators of Theorem 2 to relax the feasible domain of the equivalence problem (EP). If we solve the linear maximum multiplicative programming problem, we only need to use the same estimators to perform a similar upper-bounding relaxation.
Remark 2. If a linear function is added to the objective function of the problem (LMP), then there is an equivalent transformation and proof similar to those of Section 2.2, because we only perform the equivalent transformation on the product term of the objective function. Of course, there is also a similar relaxation subproblem, and the rectangular partition method of the next subsection is unchanged; only the reduction method of the hyper-rectangle is a little different, and we describe it at the end of Section 2.4.

2.3. Subdivision and Refinement of Hyper-Rectangle
A branching operation is also indispensable in the branch-and-bound procedure. In this subsection, we give the branch-refinement rule for any sub-rectangle $H = Y \times \overline{H} \subseteq H^0$.
According to Equations (1) and (4) and the structure $H = Y \times \overline{H}$, the generation of the rectangle $\overline{H}$ mainly depends on the successive multiplication of the coordinate components of the lower-left vertex and the upper-right vertex of $Y$. Therefore, we only use the standard bisection to subdivide the former part $Y$ of the sub-rectangle $H$, and then refine the latter part $\overline{H}$ according to Equation (4). The specific operations of Step 3 are as follows:
(i) For the rectangle $Y = \prod_{i=1}^{p} [\underline{y}_i, \overline{y}_i]$, let $r \in \arg\max\{\overline{y}_i - \underline{y}_i : i = 1, \ldots, p\}$ and $y_r^{mid} = (\underline{y}_r + \overline{y}_r)/2$. By using $y_r^{mid}$, the interval $[\underline{y}_r, \overline{y}_r]$ corresponding to the $r$th edge of rectangle $Y$ is divided into the two intervals $[\underline{y}_r, y_r^{mid}]$ and $[y_r^{mid}, \overline{y}_r]$, and then $Y$ is also divided into two sub-rectangles $Y^1$ and $Y^2$. Their forms can be expressed by the Cartesian product as
$$Y^1 = \prod_{i=1}^{r-1} [\underline{y}_i, \overline{y}_i] \times [\underline{y}_r, y_r^{mid}] \times \prod_{i=r+1}^{p} [\underline{y}_i, \overline{y}_i], \qquad Y^2 = \prod_{i=1}^{r-1} [\underline{y}_i, \overline{y}_i] \times [y_r^{mid}, \overline{y}_r] \times \prod_{i=r+1}^{p} [\underline{y}_i, \overline{y}_i].$$
(ii) For the segmentation and refinement of the hyper-rectangle $\overline{H}$, the upper-right vertex of $Y^1$ and the lower-left vertex of $Y^2$ are used, respectively, in Equation (4). In this way, we finally get two hyper-rectangles $H^1 = Y^1 \times \overline{H}^1$ and $H^2 = Y^2 \times \overline{H}^2$. According to Equation (4), the Cartesian-product forms of the hyper-rectangles $\overline{H}^1$ and $\overline{H}^2$ are
$$\overline{H}^j = \prod_{t=1}^{p-1} \left[\prod_{i=1}^{t+1} \underline{y}_i^j,\ \prod_{i=1}^{t+1} \overline{y}_i^j\right], \quad j = 1, 2,$$
where $[\underline{y}_i^j, \overline{y}_i^j]$ denotes the $i$th interval of $Y^j$.
Obviously, $Y^1 \cup Y^2 = Y$.
Although $H$ is a hyper-rectangular space of dimension $2p-1$, it can be seen from the branch-refinement method of the hyper-rectangle mentioned above that we only branch the rectangle $Y$, and the bounds of $\overline{H}$ can be obtained directly from the bounds of $Y$; thus, the branching process of the branch-and-bound algorithm is completed in the $p$-dimensional space.
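A compact sketch of this branching rule (our own illustration; it bisects the widest edge, one common realization of the standard bisection above) is:

```python
import numpy as np

def branch(y_lo, y_hi):
    """Bisect one edge of Y (the first p coordinates) and rebuild the
    product part H-bar of each child via Equation (4).  y_lo and y_hi
    bound Y only (length p); all bounds are assumed positive."""
    r = int(np.argmax(y_hi - y_lo))                  # widest edge of Y
    mid = 0.5 * (y_lo[r] + y_hi[r])
    children = []
    for lo_r, hi_r in ((y_lo[r], mid), (mid, y_hi[r])):
        lo, hi = y_lo.copy(), y_hi.copy()
        lo[r], hi[r] = lo_r, hi_r
        # Equation (4): bounds of y_{p+t} are cumulative products.
        children.append((lo, hi, np.cumprod(lo)[1:], np.cumprod(hi)[1:]))
    return children
```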
2.4. Reduction of the Hyper-Rectangle
In this subsection, we give a reduction technique for hyper-rectangles, which deletes sub-rectangles that contain no globally optimal solution, or deletes the part of a sub-rectangle in which no globally optimal solution lies. In this way, the number of rectangles in the set $\Xi$ is reduced, or the effect of refining the rectangles in $\Xi$ is achieved, and the bounding operation is thereby accelerated.
Without loss of generality, it is assumed that the current hyper-rectangle to be reduced is $H = Y \times \overline{H} \subseteq H^0$, and that the best objective function value obtained by the algorithm thus far is $UB$. Because $h(y) = y_{2p-1} = \prod_{i=1}^{p} y_i$, for each $t \in \{1, \ldots, p\}$, define
$$\Lambda_t = \prod_{j=1, j \ne t}^{p} \underline{y}_j.$$
It is easy to know that, if the problem (EP) has a globally optimal solution in the rectangle $H$, there must be a necessary condition, that is, $h(y) \le UB$. This necessary condition is also used in the following two rectangle-reduction theorems.
In view of the characteristic that the hyper-rectangle $H$ consists of the two parts $Y$ and $\overline{H}$, we reduce it in two steps. For this reason, we give and prove Theorems 4 and 5, which set forth the hyper-rectangle reduction technique of this section.
Theorem 4. For each $t \in \{1, \ldots, p\}$, if $\rho_t < \underline{y}_t$, the original problem (EP) has no globally optimal solution on the rectangle $H$; otherwise, if $\rho_t < \overline{y}_t$, the rectangle $\hat{H} = \{y \in H : \rho_t < y_t \le \overline{y}_t\}$ does not contain the globally optimal solution of the problem (EP), where
$$\rho_t = \frac{UB}{\Lambda_t}.$$
Proof of Theorem 4. If there is a $t$ that satisfies $\rho_t < \underline{y}_t$, there will be, for every $y \in H$,
$$h(y) = \prod_{i=1}^{p} y_i \ge \underline{y}_t \Lambda_t > \rho_t \Lambda_t = UB;$$
then, there is no globally optimal solution of (EP) on $H$. Next, we prove that there is $h(y) > UB$ for each $y \in \hat{H}$. When $y \in \hat{H}$, we consider the $t$th element $y_t$ of $y$: because $y_t > \rho_t$ and $y_j \ge \underline{y}_j$ for $j \ne t$, we have
$$h(y) = \prod_{i=1}^{p} y_i \ge y_t \prod_{j=1, j \ne t}^{p} \underline{y}_j = y_t \Lambda_t.$$
According to the definition of $\rho_t$ and the above inequality, we also have
$$h(y) \ge y_t \Lambda_t > \rho_t \Lambda_t = UB.$$
This means that, for all $y \in \hat{H}$, there is $h(y) > UB$, which violates the necessary condition. Therefore, $\hat{H}$ contains no globally optimal solution of the problem (EP). □
To facilitate the description of Theorem 5, we still denote the hyper-rectangle reduced by Theorem 4 as $H = Y \times \overline{H}$. It can be seen that the second part $\overline{H}$ of $H$ has not changed, and Theorem 5 is given below to reduce $\overline{H}$.
Theorem 5. For each $t \in \{1, \ldots, p-1\}$, if $\gamma_t < \overline{y}_{p+t}$, the problem (EP) has no globally optimal solution on the hyper-rectangle $\tilde{H} = \{y \in H : \gamma_t < y_{p+t} \le \overline{y}_{p+t}\}$, where
$$\gamma_t = \frac{UB}{\prod_{i=t+2}^{p} \underline{y}_i}, \quad \text{with } \gamma_{p-1} = UB.$$
Proof of Theorem 5. First, according to the definition of $y_{p+t}$, we have
$$h(y) = y_{2p-1} = y_{p+t} \prod_{i=t+2}^{p} y_i.$$
Second, we prove that, for any $y \in \tilde{H}$, there is $h(y) > UB$. If $t = p-1$, obviously, $h(y) = y_{2p-1} > \gamma_{p-1} = UB$; then $\tilde{H}$ does not contain the globally optimal solution of the problem (EP). If $t < p-1$ and $y \in \tilde{H}$ is satisfied, we continue to consider the $(p+t)$th element $y_{p+t}$ of $y$. Since $y_{p+t} > \gamma_t$, then
$$h(y) = y_{p+t} \prod_{i=t+2}^{p} y_i \ge y_{p+t} \prod_{i=t+2}^{p} \underline{y}_i > \gamma_t \prod_{i=t+2}^{p} \underline{y}_i = UB.$$
Because $h(y) > UB$ violates the necessary condition, this means that, for all $y \in \tilde{H}$, there is no globally optimal solution of the original problem (EP) on the hyper-rectangle $\tilde{H}$. □
According to Theorems 4 and 5, we can construct the following reduction techniques (see the sketch after this list) to reduce the hyper-rectangle $H$, which make the compressed hyper-rectangle thinner and remove the parts of the hyper-rectangle that do not contain a globally optimal solution, so that the search space required by the algorithm for solving the problem (EP) is reduced, thus speeding up the convergence of the algorithm.
(i) For each $t \in \{1, \ldots, p\}$, if $\rho_t < \underline{y}_t$, let $H = \emptyset$. Otherwise, if $\rho_t < \overline{y}_t$, let $\overline{y}_t = \rho_t$.
(ii) For each $t \in \{1, \ldots, p-1\}$, if $\gamma_t < \overline{y}_{p+t}$, let $\overline{y}_{p+t} = \gamma_t$.
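In code, the two rules read as follows (our own sketch under the positivity assumption; the helper name is ours):

```python
import numpy as np

def reduce_box(y_lo, y_hi, h_lo, h_hi, UB):
    """Reduction rules (i) and (ii) from Theorems 4 and 5.  y_lo/y_hi
    bound y_1..y_p and h_lo/h_hi bound y_{p+1}..y_{2p-1}; all lower
    bounds are assumed positive.  Returns None if the whole box is
    proved to contain no globally optimal solution."""
    p, prod_lo = y_lo.size, np.prod(y_lo)
    for t in range(p):                         # rule (i): tighten y_t
        rho_t = UB * y_lo[t] / prod_lo         # UB / Lambda_t
        if rho_t < y_lo[t]:
            return None                        # whole rectangle pruned
        if rho_t < y_hi[t]:
            y_hi[t] = rho_t
    for t in range(1, p):                      # rule (ii): tighten y_{p+t}
        gamma_t = UB / np.prod(y_lo[t + 1:])   # empty product gives UB itself
        if gamma_t < h_hi[t - 1]:
            h_hi[t - 1] = gamma_t
    return y_lo, y_hi, h_lo, h_hi
```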
In addition, if the objective function of the problem (LMP) contains an additional linear function, then, to make the above reduction method valid, we need to make an adjustment to $UB$, because it is affected by the additional linear term. Assuming that this additional linear term is $e^T x + f$, then, before the algorithm begins, we need to solve the linear programming problem $\min_{x \in X} e^T x + f$ and account for its optimal value in $UB$ within the reduction tests. Of course, $e$ is an $n$-dimensional column vector while $f$ is a constant. Obviously, the adjusted $UB$ also satisfies the conditions of the above two theorems.
2.5. Output-Space Branch-and-Bound Reduction Algorithm
In this subsection, we combine the output-space branch-and-bound reduction procedure with the bounding operation, the branching operation, and the rectangle-reducing operation, and then construct a new deterministic global optimization algorithm for solving the problem (EP), namely the Output-Space Branch-and-Bound Reduction Algorithm (OSBBRA).
To describe the algorithm smoothly, we explain the relevant symbols at iteration step $k$ as follows: $H^k$ is the hyper-rectangle to be subdivided in the current iteration step; $F$ is the set of feasible solutions of the problem (EP) stored at the current iteration step; $\Xi$ is the set of sub-rectangles remaining after the pruning step; $UB_k$ is the upper bound of the globally optimal value of the problem (EP) in the current iteration step; $LB_k$ is the lower bound of the globally optimal value of the problem (EP); and $LB(H)$ and $(x^H, y^H)$ represent the optimal value and solution of the subproblem (LRP$_H$) on the rectangle $H$, respectively. In addition, any feasible point $x$ of (LMP) must have $h(y) = f(x)$, with $y$ given by the chain construction in (1), and $(x, y)$ being a feasible point for (EP). The specific steps of the algorithm (OSBBRA) are as follows:
Step 0. (Initialization) Set the tolerance $\varepsilon > 0$. The initial hyper-rectangle $H^0$ is constructed by using Equations (2) and (3); moreover, each optimal solution $x$ of the linear programs in (2) corresponds to one feasible solution of (EP), and all such feasible solutions of (EP) are stored in the set $F$. Solve the initial subproblem (LRP$_{H^0}$) on the hyper-rectangle $H^0$; the optimal value and solution corresponding to the initial subproblem are $LB(H^0)$ and $(x^0, y^0)$, respectively. Let $LB_0 = LB(H^0)$; thus, $LB_0$ can be used as the initial lower bound of the globally optimal value of the problem (EP). The initial upper bound is $UB_0 = \min\{h(y) : (x, y) \in F\}$, and the initial best solution to the problem (EP) is a corresponding minimizer $(\hat{x}^0, \hat{y}^0)$. If $UB_0 - LB_0 \le \varepsilon$, then stop, and the $\varepsilon$-globally optimal solution of the problem (EP) is $(\hat{x}^0, \hat{y}^0)$. Otherwise, set $UB_1 = UB_0$, $LB_1 = LB_0$, $(\hat{x}^1, \hat{y}^1) = (\hat{x}^0, \hat{y}^0)$, $\Xi = \{H^0\}$, the iteration number $k = 1$, and go to Step 2.
Step 1. (Termination criteria) If $UB_k - LB_k \le \varepsilon$ or $\Xi = \emptyset$, then the algorithm is terminated, and the $\varepsilon$-globally optimal solution $(\hat{x}^k, \hat{y}^k)$ and the $\varepsilon$-globally optimal value $UB_k$ of the (EP) problem are output immediately, and the $\varepsilon$-globally optimal solution $\hat{x}^k$ and the $\varepsilon$-globally optimal value of (LMP) are also output immediately. Otherwise, go to Step 2.
Step 2. According to the lower bounds $LB(H)$, select the sub-rectangle $H^k$ with the smallest lower bound from the set $\Xi$, set $\Xi = \Xi \setminus \{H^k\}$, and then go to Step 3.
Step 3. (Branching operation) By using the subdivision and refinement methods of Section 2.3, $H^k$ is divided into two sub-rectangles $H^{k_1}$ and $H^{k_2}$ that satisfy $Y^{k_1} \cup Y^{k_2} = Y^k$. Then, go to Step 4.
Step 4. (Reduction operation) Through the reduction techniques of Section 2.4, we compress the two sub-rectangles $H^{k_1}$ and $H^{k_2}$ obtained in the previous step, and the index set of the remaining sub-rectangles after compression is expressed as $\Theta$. Obviously, $|\Theta| \le 2$.
Step 5. (Bounding operation) For any $\tau \in \Theta$, let $LB(H^\tau)$ and $(x^\tau, y^\tau)$ be the optimal value and solution of the subproblem (LRP$_{H^\tau}$). If $\Theta = \emptyset$ and $\Xi \ne \emptyset$, return to Step 2. Else, if $\Theta = \emptyset$ and $\Xi = \emptyset$, return to Step 1. Else, set $\Xi = \Xi \cup \{H^\tau : LB(H^\tau) \le UB_k - \varepsilon,\ \tau \in \Theta\}$.
Step 6. (Updating the upper and lower bound) Let $F = F \cup \{(x^\tau, y^\tau) : \tau \in \Theta\}$. If $\min\{h(y) : (x, y) \in F\} < UB_k$, update the current best solution to $(\hat{x}^k, \hat{y}^k) \in \arg\min\{h(y) : (x, y) \in F\}$ and set $UB_k = h(\hat{y}^k)$. Let $LB_k = \min\{LB(H) : H \in \Xi\}$; set $UB_{k+1} = UB_k$, $LB_{k+1} = LB_k$, $k = k + 1$, and return to Step 1.
Remark 3. In Step 5, we save a hyper-rectangle $H^\tau$ of $\Theta$ into Ξ after each compression only if it passes the lower-bound test, which constitutes the pruning operation of the algorithm.
Remark 4. As can be seen from Steps 4–6, the number of elements in Θ does not exceed two in each loop of the algorithm. At the same time, the phase of updating the upper bound computes at most two new function values.
Remark 5. The branch search space of our OSBBRA algorithm is p-dimensional. When p is much smaller than the dimension n of the decision variables, the convergence rate of the algorithm is considerably faster than that of a search in the n-dimensional decision space.
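Putting the pieces together, the following sketch (our own illustration, reusing the hypothetical helpers initial_box, solve_lrp, branch, and reduce_box from the earlier sketches) shows the overall shape of the OSBBRA loop in Steps 0–6:

```python
import heapq
from itertools import count
import numpy as np

def evaluate(x, C, d):
    """Objective of (LMP): the product of the linear terms at x."""
    return float(np.prod(C @ x + d))

def osbbra(A, b, C, d, eps=1e-6, max_iter=10_000):
    """Output-space branch-and-bound reduction loop (Steps 0-6)."""
    y_lo, y_hi, h_lo, h_hi = initial_box(A, b, C, d)            # Step 0
    ok, LB0, v = solve_lrp(A, b, C, d, np.r_[y_lo, h_lo], np.r_[y_hi, h_hi])
    assert ok, "(LMP) is infeasible"
    n, tie = A.shape[1], count()
    UB, best = evaluate(v[:n], C, d), v[:n]                     # incumbent from x-part
    heap = [(LB0, next(tie), (y_lo, y_hi, h_lo, h_hi))]         # the set Xi, keyed by LB(H)
    for _ in range(max_iter):
        if not heap or UB - heap[0][0] <= eps:                  # Step 1: terminate
            return best, UB
        _, _, box = heapq.heappop(heap)                         # Step 2: select smallest LB
        for child in branch(box[0], box[1]):                    # Step 3: branch Y
            red = reduce_box(*child, UB)                        # Step 4: reduce
            if red is None:
                continue                                        # pruned by reduction
            yl, yh, hl, hh = red
            ok, LB_H, v = solve_lrp(A, b, C, d, np.r_[yl, hl], np.r_[yh, hh])
            if not ok:
                continue                                        # (EP) infeasible on child
            f_x = evaluate(v[:n], C, d)                         # Step 6: x-part is (LMP)-feasible
            if f_x < UB:
                UB, best = f_x, v[:n]
            if LB_H <= UB - eps:                                # Step 5: keep in Xi
                heapq.heappush(heap, (LB_H, next(tie), red))
    return best, UB
```

This realizes the selection rule of Step 2 with a heap keyed by $LB(H)$; any other selection rule consistent with Step 2 would work as well.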