
Output-Space Branch-and-Bound Reduction Algorithm for a Class of Linear Multiplicative Programs

1 School of Mathematics and Statistics, Ningxia University, Yinchuan 750021, China
2 Ningxia Province Cooperative Innovation Center of Scientific Computing and Intelligent Information Processing, North Minzu University, Yinchuan 750021, China
3 Ningxia Province Key Laboratory of Intelligent Information and Data Processing, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(3), 315; https://doi.org/10.3390/math8030315
Submission received: 17 November 2019 / Revised: 3 January 2020 / Accepted: 24 February 2020 / Published: 1 March 2020
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: In this paper, a new relaxation bounding method is proposed for a class of linear multiplicative programs. Although $2p-1$ new variables are introduced in the construction of the equivalent problem, the branching process of the algorithm is carried out only in a $p$-dimensional space. In addition, a super-rectangular reduction technique is also given to greatly improve the convergence rate. Furthermore, we construct an output-space branch-and-bound reduction algorithm based on solving a series of linear programming subproblems, and prove the convergence and computational complexity of the algorithm. Finally, to verify the feasibility and effectiveness of the algorithm, we carried out a series of numerical experiments and analyzed the advantages and disadvantages of the algorithm in light of the numerical results.

1. Introduction

In this study, we consider the following linear multiplicative programs (LMP):
$$(\mathrm{LMP}):\ \min\ f(x)=\prod_{j=1}^{p}\left(c_j^{T}x+d_j\right)\quad \text{s.t.}\ Ax\le b,$$
where the feasible domain $X=\{x\in\mathbb{R}^{n}\mid Ax\le b\}$ is $n$-dimensional, nonempty, and bounded; $p\ge 2$, $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $c_j\in\mathbb{R}^{n}$, $d_j\in\mathbb{R}$, and $c_j^{T}x+d_j>0$.
The linear multiplicative program arises in many application areas, for example, financial optimization [1,2], microeconomics [3], robust optimization [4], decision tree optimization [5], multiple-objective decision making [6,7], VLSI chip design [8], optimal packing and layout [9], and control problems [10,11,12,13,14]. It is well known that the (LMP) problem lacks convexity and related structural properties, which is an important challenge in solving this kind of problem. In addition, we also note that the (LMP) problem is closely related to the linear maximum multiplicative programming problem (see [14,15,16]); specifically, the latter is obtained by changing the min in the objective function of the (LMP) problem to max. Some scholars have shown that the (LMP) problem is NP-hard [17], whereas the linear maximum multiplicative programming problem can be solved in polynomial time. In [18], Kuno pointed out that, without requiring each product term in the objective function to be positive, by multiplying the negative product terms by the sign "-" and considering the parity of p, the original problem can eventually be classified into two categories, namely the (LMP) problem and the linear maximum multiplicative programming problem. Therefore, without loss of generality, as in [18], we investigate the (LMP) problem under the assumption that each linear product term is positive. We note in advance that the method presented here can be generalized to the maximization form of such problems.
Over the past 20 years, some practical algorithms have been developed to solve the (LMP) problem, and, with the increasing reliance on modeling and optimization in real-world problems, great progress has been made in both local and global optimization theory and algorithms. Compared with local optimization methods, global optimization methods for solving the (LMP) problem are still relatively few, but previous scholars have carried out a series of studies on global optimization methods for the (LMP) problem. These methods can be classified as branch-and-bound methods [18,19,20,21,22,23,24,25,26], outer-approximation methods [27,28], vertex enumeration methods [29], heuristic methods [30,31], an outcome-space cutting plane method [32], parameterization-based methods [33,34,35], and level set algorithms [36,37]. Shao and Ehrgott also proposed a class of global optimization algorithms for solving the (LMP) problem by using multi-objective linear programming and primal-dual conditions [38]. However, despite this progress, solving (LMP) globally is still a thorny problem. For (LMP) and its variants, several global solution methods have been proposed. For example, Jiao [22] established a reliable and efficient algorithm for a class of generalized linear multiplicative programs by using linear approximations of exponential and logarithmic functions. Shen [24] proposed a new accelerating method for solving generalized linear multiplicative programming by combining an appropriate deletion technique with the branch-and-bound scheme. To solve the linear multiplicative programming problem in exponential form, Liu and Zhao [37] proposed a level set algorithm based on the research of Youness [36], while Shen et al. [39] proposed a fully polynomial time approximation algorithm. For linear programs with multiplicative constraints, Benson [40] proposed a branch-and-bound algorithm based on decomposition.
Aiming at the (LMP) problem, this paper proposes a branch-and-bound algorithm based on the rectangular branch of the output space. First, for this purpose, an equivalent optimization problem (EP) of the problem (LMP) is proposed. Secondly, the approximation theorem of binary bilinear functions is given and a relaxation subproblem of the problem (EP) is constructed. Finally, a branch-and-bound algorithm based on output space is designed for (EP) problem. Compared with the methods in the above literature, the proposed method has the following characteristics:
(a) Our relaxation method is easy to operate and can be extended to some more generalized optimization problems, especially for the linear maximum multiplicative programming problem, which can be directly generalized.
(b) The branch operation of the branch-and-bound algorithm proposed by us acts on the p-dimensional output space, which greatly reduces the running cost of the computer.
(c) We also propose a new hyper-rectangular compression and reduction technique, which greatly improves the convergence rate of the algorithm, and then analyze the computational complexity of the algorithm.
(d) To better illustrate the effectiveness of the proposed algorithm, we performed many numerical experiments, and compared with the relevant references to illustrate the characteristics of the algorithm.
The remainder of this article is organized as follows. Section 2 first gives the equivalent problem (EP) of (LMP) as well as the algorithm framework, then analyzes the problem (EP) and gives the required bounding, branching, and rectangle-reducing operations of the algorithm, respectively, and finally gives the specific steps of the algorithm. In Section 3, the algorithm is analyzed, and it is shown that it terminates within finitely many iterations. In Section 4, some numerical experiments are presented and the characteristics of our algorithm are analyzed. Finally, the method of this paper is briefly reviewed.

2. Output-Space Branch-and-Bound Reduction Algorithm for (LMP)

2.1. Convert (LMP) into an Equivalent Problem (EP)

In this subsection, we show how to convert the problem (LMP) into a non-convex programming problem (EP), and introduce a $(2p-1)$-dimensional vector $y$ to obtain the initial hyper-rectangle $Y^0$.
First, the new variables $y_j=c_j^{T}x+d_j\ (j=1,2,\ldots,p)$ are introduced for the term $\prod_{j=1}^{p}(c_j^{T}x+d_j)$ in the objective function of the problem (LMP), so that
$$\prod_{j=1}^{p}\left(c_j^{T}x+d_j\right)=\prod_{j=1}^{p}y_j. \quad (1)$$
Let $y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1$, which, by exploiting the internal structure of $\prod_{j=1}^{p}y_j$, satisfies
$$y_{2p-1}=y_{2p-2}\,y_{1}=y_{2p-3}\,y_{2}\,y_{1}=\cdots=\prod_{j=1}^{p}y_j.$$
Thus, the $(2p-1)$-dimensional variable $y$ is established, that is, $y=(y_1,y_2,\ldots,y_p,y_{p+1},y_{p+2},\ldots,y_{2p-1})^{T}$.
Secondly, two different operations are used to construct an initial hyper-rectangle $Y^0$ of the variable $y$ in two stages. In the first stage, we construct the hyper-rectangle $\hat Y^0$, and in the second stage the hyper-rectangle $\tilde Y^0$, so that the hyper-rectangle $Y^0$ can be represented as $Y^0=\hat Y^0\times\tilde Y^0$. To obtain $\hat Y^0$, let
$$\underline{y}_j^0=\min_{x\in X}\ c_j^{T}x+d_j,\qquad \bar y_j^0=\max_{x\in X}\ c_j^{T}x+d_j,\qquad j=1,2,\ldots,p, \quad (2)$$
and $\hat Y^0=[\underline{\hat y}^0,\bar{\hat y}^0]$ with $\underline{\hat y}^0=(\underline{y}_1^0,\underline{y}_2^0,\ldots,\underline{y}_p^0)^{T}$, $\bar{\hat y}^0=(\bar y_1^0,\bar y_2^0,\ldots,\bar y_p^0)^{T}$. The upper and lower bounds of each variable $y_{p+s}\ (s=1,2,\ldots,p-1)$ are defined by the two ends of the following inequality (3), that is,
$$0<\underline{y}_{p+s}^0=\underline{y}_{p+s-1}^0\,\underline{y}_{p-s}^0\le y_{p+s}=y_{p+s-1}\,y_{p-s}\le \bar y_{p+s-1}^0\,\bar y_{p-s}^0=\bar y_{p+s}^0. \quad (3)$$
Then, the initial hyper-rectangle $\tilde Y^0$ of the variables $y_{p+s}$ can be recorded as $\tilde Y^0=[\underline{\tilde y}^0,\bar{\tilde y}^0]$ with $\underline{\tilde y}^0=(\underline{y}_{p+1}^0,\underline{y}_{p+2}^0,\ldots,\underline{y}_{2p-1}^0)^{T}$, $\bar{\tilde y}^0=(\bar y_{p+1}^0,\bar y_{p+2}^0,\ldots,\bar y_{2p-1}^0)^{T}$. Using the representation of the Cartesian product, the initial hyper-rectangle $Y^0$ for the variable $y$ can be represented as
$$Y^0=\hat Y^0\times\tilde Y^0=\prod_{j=1}^{p}[\underline{y}_j^0,\bar y_j^0]\times\prod_{s=1}^{p-1}[\underline{y}_{p+s}^0,\bar y_{p+s}^0]=\prod_{j=1}^{2p-1}[\underline{y}_j^0,\bar y_j^0].$$
Of course, for each sub-rectangle $Y^k\subseteq Y^0$, we also define
$$0<\underline{y}_{p+s}^k=\underline{y}_{p+s-1}^k\,\underline{y}_{p-s}^k\le y_{p+s}=y_{p+s-1}\,y_{p-s}\le \bar y_{p+s-1}^k\,\bar y_{p-s}^k=\bar y_{p+s}^k, \quad (4)$$
and
$$Y^k=\hat Y^k\times\tilde Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k]\times\prod_{s=1}^{p-1}[\underline{y}_{p+s}^k,\bar y_{p+s}^k]=\prod_{j=1}^{2p-1}[\underline{y}_j^k,\bar y_j^k].$$
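To make the construction concrete, the following Python sketch builds $Y^0$ from Equations (2) and (3) by solving the $2p$ linear programs and then forming the products. The paper's own experiments use MATLAB's linprog; scipy.optimize.linprog is used here only for illustration, and the data layout (the $c_j$ stacked as rows of an array c) is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def initial_rectangle(c, d, A, b):
    """Build Y^0 = Yhat^0 x Ytilde^0 for min prod_j (c_j^T x + d_j) s.t. Ax <= b.

    c: (p, n) array whose rows are the c_j, d: (p,) array, A: (m, n), b: (m,).
    Assumes X = {x : Ax <= b} is nonempty and bounded, as in Section 1.
    Returns lower/upper bound vectors of length 2p-1 (Equations (2) and (3)).
    """
    p, n = c.shape
    lo = np.empty(2 * p - 1)
    up = np.empty(2 * p - 1)
    for j in range(p):
        # Equation (2): minimize / maximize c_j^T x + d_j over X
        res_min = linprog(c[j], A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        res_max = linprog(-c[j], A_ub=A, b_ub=b, bounds=[(None, None)] * n)
        lo[j] = res_min.fun + d[j]
        up[j] = -res_max.fun + d[j]
    for s in range(1, p):
        # Equation (3): y_{p+s} = y_{p+s-1} * y_{p-s}, all lower bounds positive
        lo[p + s - 1] = lo[p + s - 2] * lo[p - s - 1]
        up[p + s - 1] = up[p + s - 2] * up[p - s - 1]
    return lo, up
```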
Finally, through the above construction, the problem (LMP) can be written naturally in the following equivalent optimization form (EOP), that is,
$$(\mathrm{EOP}):\ \begin{array}{ll}\min & y_{2p-1}\\ \text{s.t.} & c_j^{T}x+d_j=y_j,\ j=1,2,\ldots,p,\\ & y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1,\\ & x\in X,\ y\in Y^0.\end{array}$$
In particular, observe that (EOP) has $n+2p-1$ variables. We define the function $h(x,y)=C^{T}(x^{T},y^{T})^{T}=y_{2p-1}$ with $C=(0,0,\ldots,0,1)^{T}\in\mathbb{R}^{n+2p-1}$, and then (EOP) can be rewritten as the equivalent problem (EP):
$$(\mathrm{EP}):\ \begin{array}{ll}\min & h(x,y)=C^{T}(x^{T},y^{T})^{T}\\ \text{s.t.} & c_j^{T}x+d_j=y_j,\ j=1,2,\ldots,p,\\ & y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1,\\ & x\in X,\ y\in Y^0.\end{array}$$
Theorem 1.
A point $x^{*}$ is a globally optimal solution of the problem (LMP) if and only if $(x^{*},y^{*})$ is a globally optimal solution of the problem (EP), where $y_j^{*}=c_j^{T}x^{*}+d_j,\ j=1,2,\ldots,p$, and $y_{p+s}^{*}=y_{p+s-1}^{*}\,y_{p-s}^{*},\ s=1,2,\ldots,p-1$.
Proof of Theorem 1.
If $x^{*}$ is a globally optimal solution of problem (LMP), we have $\underline{y}_j\le y_j^{*}=c_j^{T}x^{*}+d_j\le\bar y_j$ and $y_{p+s}^{*}=y_{p+s-1}^{*}\,y_{p-s}^{*}$, $j=1,2,\ldots,p$, $s=1,2,\ldots,p-1$. Then, $(x^{*},y^{*})$ is a feasible solution of (EP), with corresponding objective function value
$$h(x^{*},y^{*})=y_{2p-1}^{*}=\prod_{j=1}^{p}y_j^{*}=\prod_{j=1}^{p}\left(c_j^{T}x^{*}+d_j\right)=f(x^{*}).$$
Let $(x,y)$ be any feasible solution of the problem (EP); then naturally
$$y_j=c_j^{T}x+d_j,\ j=1,2,\ldots,p,\qquad y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1,$$
which also means
$$f(x)=\prod_{j=1}^{p}\left(c_j^{T}x+d_j\right)=\prod_{j=1}^{p}y_j=y_{2p-1}=h(x,y).$$
By the optimality of $x^{*}$,
$$y_{2p-1}^{*}=h(x^{*},y^{*})\le h(x,y)=y_{2p-1}$$
must hold. Therefore, with $x^{*}\in X$ and $y^{*}\in Y^0$, a globally optimal solution of problem (EP) is obtained, namely $(x^{*},y^{*})$.
Conversely, assume that $(x^{*},y^{*})$ is a globally optimal solution of the (EP) problem; then
$$y_j^{*}=c_j^{T}x^{*}+d_j,\ j=1,2,\ldots,p,\qquad y_{p+s}^{*}=y_{p+s-1}^{*}\,y_{p-s}^{*},\ s=1,2,\ldots,p-1,$$
and
$$h(x^{*},y^{*})=y_{2p-1}^{*}=\prod_{j=1}^{p}y_j^{*}=\prod_{j=1}^{p}\left(c_j^{T}x^{*}+d_j\right)=f(x^{*})$$
must also hold.
must also hold. For the problem (LMP) arbitrary feasible solution x, if
y j = c j T x + d j , j = 1 , 2 , , p , y p + s = y p + s 1 y p s , s = 1 , 2 , , p 1 ,
then ( x , y ) is a feasible solution of the (EP) problem and the objective function value is h ( x , y ) .
Through the optimality of ( x * , y * ) and the feasibility of x, there is
h ( x ) = j = 1 p c j T x + d j = y 2 p 1 y 2 p 1 * = h ( x * , y * ) = f ( x * ) .
According to the above inequality, we can obtain that x * is a globally optimal solution of the problem (LMP). The conclusion is completely proved. □
Combined with (1), we notice that the objective function of the problem (EP) has the property
$$h(x,y)=C^{T}(x^{T},y^{T})^{T}=y_{2p-1}=y_{2p-2}\,y_1=\cdots=\prod_{j=1}^{p}y_j=\prod_{j=1}^{p}\left(c_j^{T}x+d_j\right)=f(x).$$
This property ensures that $h(x,y)$ in the following sections can be replaced by $f(x)$, that is, $h(x,y)=f(x)$.
Although the constraints $y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1$, of (EP) are still nonlinear, we can solve the problem (EP) with a branch-and-bound algorithm based on rectangular subdivisions of the set $Y^0=\hat Y^0\times\tilde Y^0$. Let $\Xi=\{Y^k:k=1,2,\ldots,K\}$ denote a rectangular partition of $Y^0$, i.e., $Y^k=\hat Y^k\times\tilde Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k]\times\prod_{s=1}^{p-1}[\underline{y}_{p+s}^k,\bar y_{p+s}^k]$, $\bigcup_{k=1}^{K}Y^k=Y^0$, $\operatorname{int}Y^k\cap\operatorname{int}Y^z=\emptyset$ if $k\ne z$. In the process of subdividing a rectangle $Y^k\subseteq Y^0$, we must point out that the first part $\hat Y^k$ of $Y^k$ is subdivided by the standard bisection method, and the second part $\tilde Y^k$ is then compressed directly; this compression method is described in detail below. Our rectangular branch-and-bound algorithm systematically reduces the rectangular region containing $y^{*}$ by repeating seven basic steps.
Output-Space Branch-and-Bound Reduction Procedure
  • Step 0. (Initialization) Set the tolerance $\epsilon>0$. The initial upper bound $UB^0$, lower bound $LB^0$, best solution $x^{*}$, and $(x^{*},y^{*})$ are obtained by solving the initial lower bound subproblem over $Y^0$.
  • Step 1. (Termination criteria) If $UB^k-LB^k\le\epsilon$ or $\Xi=\emptyset$, then the algorithm is terminated, the $\epsilon$-globally optimal solution $(x^{*},y^{*})$ and the $\epsilon$-globally optimal value $h(x^{*},y^{*})$ of the (EP) problem are output immediately, and the $\epsilon$-globally optimal solution $x^{*}$ and the $\epsilon$-globally optimal value $f(x^{*})=h(x^{*},y^{*})$ of (LMP) are also output immediately. Otherwise, go to Step 2.
  • Step 2. Select an appropriate $Y^k$ from $\Xi$ and set $\Xi=\Xi\setminus Y^k$, where $Y^k$ satisfies $LB^k<UB^k-\epsilon$ and $(x^{*},y^{*})$ is the best feasible solution found thus far.
  • Step 3. (Branching operation) Divide $Y^k$ into two rectangles $Y^{k1}$ and $Y^{k2}$.
  • Step 4. (Rectangle-reducing operation) Using the rectangle-reduction technique, the two rectangles generated by Step 3 are refined separately, and the index set of the rectangles remaining after the reduction is denoted by $\Gamma$; obviously, $|\Gamma|\le 2$.
  • Step 5. (Bounding operation) Compute the lower bound $LB(Y^{k_i})\ (i\in\Gamma)$ on the minimum value of $h$ over $Y^{k_i}\cap\{y\in\mathbb{R}^{n+2p-1}:y_j=c_j^{T}x+d_j,\ y_{p+s}=y_{p+s-1}\,y_{p-s},\ j=1,2,\ldots,p,\ s=1,2,\ldots,p-1,\ x\in X\}$. If $LB(Y^{k_i})<UB^k-\epsilon$ with $UB^k=h(x^{*},y^{*})=f(x^{*})$, put $Y^{k_i}$ into $\Xi$, i.e., $\Xi=\Xi\cup Y^{k_i}$.
  • Step 6. (Updating the upper and lower bounds) The new feasible solutions are used to update the upper bound. The least optimal value of all known subproblems is then chosen as the new lower bound.
Obviously, Step 5 is the key to improve the efficiency of the algorithm. In the next subsection, we give a linear relaxation-subproblem of the problem (EP) to provide an efficient lower bound for the optimal value.

2.2. Novel Linear Relaxation Approach

An important operation of the branch-and-bound procedure for solving (EP) is to establish and solve a series of lower bound relaxation problems of (EP) on $Y^k\in\Xi=\{Y^k:k=1,2,\ldots,K\}$. We therefore define the associated subproblems of (EP) as follows:
$$(\mathrm{EP}^k):\ \begin{array}{ll}\min & h(x,y)=C^{T}(x^{T},y^{T})^{T}\\ \text{s.t.} & c_j^{T}x+d_j=y_j,\ j=1,2,\ldots,p,\\ & y_{p+s}=y_{p+s-1}\,y_{p-s},\ s=1,2,\ldots,p-1,\\ & x\in X,\ y\in Y^k.\end{array}$$
The main idea of our relaxation method is to linearize the nonlinear constraints of (EP$^k$) and finally obtain its linear lower bound relaxation problem. Next, we give Theorem 2, which is associated with the proposed linearization method.
Theorem 2.
Let $\Omega=\{(z,w)\in\mathbb{R}^{2}\mid \underline z\le z\le\bar z,\ \underline w\le w\le\bar w\}$. For any $\delta>0$, define:
$$\begin{array}{l}
\varphi(z)=z^{2},\qquad \varphi^{u}(z)=(\underline z+\bar z)z-\underline z\bar z,\qquad \varphi^{l}(z)=(\underline z+\bar z)z-\dfrac{(\underline z+\bar z)^{2}}{4},\\[4pt]
\Delta(z)=\varphi(z)-\varphi^{l}(z),\qquad \nabla(z)=\varphi^{u}(z)-\varphi(z),\qquad \psi(z,w)=zw,\\[4pt]
\psi^{l}(z,w)=\dfrac{1}{2}\Big[(\bar w+\underline w)z+(\bar z+\underline z)w+\dfrac{(\underline z-\delta\bar w)(\bar z-\delta\underline w)}{2\delta}-\dfrac{(\bar z+\underline z+\delta(\bar w+\underline w))^{2}}{8\delta}\Big],\\[4pt]
\psi^{u}(z,w)=\dfrac{1}{2}\Big[(\bar w+\underline w)z+(\bar z+\underline z)w+\dfrac{(\bar z+\underline z-\delta(\bar w+\underline w))^{2}}{8\delta}-\dfrac{(\underline z+\delta\underline w)(\bar z+\delta\bar w)}{2\delta}\Big],\\[4pt]
\Delta(z,w)=\psi(z,w)-\psi^{l}(z,w),\qquad \nabla(z,w)=\psi^{u}(z,w)-\psi(z,w).
\end{array}$$
Then, the following conclusions hold:
(a) $\varphi^{l}(z)\le\varphi(z)\le\varphi^{u}(z)$;
(b) $\psi^{l}(z,w)\le\psi(z,w)\le\psi^{u}(z,w)$; and
(c) $\Delta(z)\to 0$, $\nabla(z)\to 0$, $\Delta(z,w)\to 0$, $\nabla(z,w)\to 0$, as $\bar z-\underline z\to 0$, $\bar w-\underline w\to 0$.
Proof of Theorem 2.
(a) The linear overestimation and underestimation functions of the single-variable quadratic function $\varphi(z)=z^{2}$ over the interval $[\underline z,\bar z]$ satisfy
$$\varphi^{u}(z)=(\underline z+\bar z)z-\underline z\bar z\ \ge\ \varphi(z)\ \ge\ \varphi^{l}(z)=(\underline z+\bar z)z-\frac{(\underline z+\bar z)^{2}}{4}, \quad (5)$$
which is $\varphi^{l}(z)\le\varphi(z)\le\varphi^{u}(z)$.
(b) Define
$$\begin{array}{l}
\varphi(z+\delta w)=(z+\delta w)^{2},\qquad \varphi(z-\delta w)=(z-\delta w)^{2},\\[3pt]
\varphi^{l}(z+\delta w)=[\underline z+\bar z+\delta(\underline w+\bar w)](z+\delta w)-\dfrac{(\underline z+\bar z+\delta(\underline w+\bar w))^{2}}{4},\\[3pt]
\varphi^{u}(z+\delta w)=[\underline z+\bar z+\delta(\underline w+\bar w)](z+\delta w)-(\underline z+\delta\underline w)(\bar z+\delta\bar w),\\[3pt]
\varphi^{l}(z-\delta w)=[\underline z+\bar z-\delta(\underline w+\bar w)](z-\delta w)-\dfrac{(\underline z+\bar z-\delta(\underline w+\bar w))^{2}}{4},\\[3pt]
\varphi^{u}(z-\delta w)=[\underline z+\bar z-\delta(\underline w+\bar w)](z-\delta w)-(\bar z-\delta\underline w)(\underline z-\delta\bar w).
\end{array}$$
Regarding $z+\delta w$ and $z-\delta w$ as single variables, $\varphi(z+\delta w)=(z+\delta w)^{2}$ and $\varphi(z-\delta w)=(z-\delta w)^{2}$ are convex functions of $z+\delta w$ and $z-\delta w$ defined on the intervals $[\underline z+\delta\underline w,\bar z+\delta\bar w]$ and $[\underline z-\delta\bar w,\bar z-\delta\underline w]$, respectively. Then, using inequality (5), we have
$$\varphi^{l}(z+\delta w)\le\varphi(z+\delta w)\le\varphi^{u}(z+\delta w), \quad (6)$$
and
$$\varphi^{l}(z-\delta w)\le\varphi(z-\delta w)\le\varphi^{u}(z-\delta w). \quad (7)$$
By (5)–(7), we find that
$$\begin{aligned}
\psi(z,w)&=\frac{1}{4\delta}\left[(z+\delta w)^{2}-(z-\delta w)^{2}\right]=\frac{1}{4\delta}\left[\varphi(z+\delta w)-\varphi(z-\delta w)\right]\\
&\ge\frac{1}{4\delta}\left[\varphi^{l}(z+\delta w)-\varphi^{u}(z-\delta w)\right]\\
&=\frac{1}{2}\Big[(\bar w+\underline w)z+(\bar z+\underline z)w+\frac{(\underline z-\delta\bar w)(\bar z-\delta\underline w)}{2\delta}-\frac{(\bar z+\underline z+\delta(\bar w+\underline w))^{2}}{8\delta}\Big]\\
&=\psi^{l}(z,w),
\end{aligned}$$
and
$$\begin{aligned}
\psi(z,w)&=\frac{1}{4\delta}\left[(z+\delta w)^{2}-(z-\delta w)^{2}\right]=\frac{1}{4\delta}\left[\varphi(z+\delta w)-\varphi(z-\delta w)\right]\\
&\le\frac{1}{4\delta}\left[\varphi^{u}(z+\delta w)-\varphi^{l}(z-\delta w)\right]\\
&=\frac{1}{2}\Big[(\bar w+\underline w)z+(\bar z+\underline z)w+\frac{(\bar z+\underline z-\delta(\bar w+\underline w))^{2}}{8\delta}-\frac{(\underline z+\delta\underline w)(\bar z+\delta\bar w)}{2\delta}\Big]\\
&=\psi^{u}(z,w).
\end{aligned}$$
Therefore, we have $\psi^{l}(z,w)\le\psi(z,w)\le\psi^{u}(z,w)$.
(c) Since $\Delta(z)=\varphi(z)-\varphi^{l}(z)=z^{2}-(\underline z+\bar z)z+\frac{(\underline z+\bar z)^{2}}{4}$ is a convex function of $z$ over the interval $[\underline z,\bar z]$, its maximum is attained at the point $\underline z$ or $\bar z$, that is,
$$0\le\Delta(z)=\varphi(z)-\varphi^{l}(z)\le\frac{(\bar z-\underline z)^{2}}{4}=\max_{z\in[\underline z,\bar z]}\Delta(z). \quad (8)$$
Similarly,
$$0\le\nabla(z)=\varphi^{u}(z)-\varphi(z)\le\frac{(\bar z-\underline z)^{2}}{4}=\max_{z\in[\underline z,\bar z]}\nabla(z). \quad (9)$$
By using (8) and (9), we have $\max_{z\in[\underline z,\bar z]}\Delta(z)=\max_{z\in[\underline z,\bar z]}\nabla(z)\to 0$ as $\bar z-\underline z\to 0$, and of course $\Delta(z)\to 0$, $\nabla(z)\to 0$ as $\bar z-\underline z\to 0$.
Now, let us define
$$\begin{array}{ll}
\Delta(z+\delta w)=\varphi(z+\delta w)-\varphi^{l}(z+\delta w), & \nabla(z+\delta w)=\varphi^{u}(z+\delta w)-\varphi(z+\delta w),\\
\Delta(z-\delta w)=\varphi(z-\delta w)-\varphi^{l}(z-\delta w), & \nabla(z-\delta w)=\varphi^{u}(z-\delta w)-\varphi(z-\delta w).
\end{array}$$
Through Equation (8) and the definitions of $\varphi(z+\delta w)$, $\varphi^{l}(z+\delta w)$, $\varphi^{u}(z-\delta w)$, and $\varphi(z-\delta w)$, we can draw the following conclusion:
$$\begin{aligned}
\Delta(z,w)&=\psi(z,w)-\psi^{l}(z,w)\\
&=\frac{1}{4\delta}\left[\left(\varphi(z+\delta w)-\varphi^{l}(z+\delta w)\right)+\left(\varphi^{u}(z-\delta w)-\varphi(z-\delta w)\right)\right]\\
&=\frac{1}{4\delta}\left[\Delta(z+\delta w)+\nabla(z-\delta w)\right]\\
&\le\frac{1}{4\delta}\Big[\max_{z+\delta w\in[\underline z+\delta\underline w,\,\bar z+\delta\bar w]}\Delta(z+\delta w)+\max_{z-\delta w\in[\underline z-\delta\bar w,\,\bar z-\delta\underline w]}\nabla(z-\delta w)\Big]\\
&=\frac{(\bar z-\underline z+\delta(\bar w-\underline w))^{2}}{8\delta}=\max_{z\in[\underline z,\bar z],\,w\in[\underline w,\bar w]}\Delta(z,w). \quad (10)
\end{aligned}$$
Similarly, through Equation (9) and the definitions of $\varphi(z+\delta w)$, $\varphi^{u}(z+\delta w)$, $\varphi^{l}(z-\delta w)$, and $\varphi(z-\delta w)$, we also have
$$\nabla(z,w)=\frac{1}{4\delta}\left[\nabla(z+\delta w)+\Delta(z-\delta w)\right]\le\frac{(\bar z-\underline z+\delta(\bar w-\underline w))^{2}}{8\delta}=\max_{z\in[\underline z,\bar z],\,w\in[\underline w,\bar w]}\nabla(z,w). \quad (11)$$
Then, by using (10) and (11), we have $\max_{z,w}\Delta(z,w)=\max_{z,w}\nabla(z,w)\to 0$ as $\bar z-\underline z\to 0$, $\bar w-\underline w\to 0$, and of course $\Delta(z,w)\to 0$, $\nabla(z,w)\to 0$ as $\bar z-\underline z\to 0$, $\bar w-\underline w\to 0$. At this point, the conclusion is proved. □
Note that, at the rightmost end of inequalities (10) and (11), the size of the positive number $\delta$ affects the upper and lower estimates of the function $\psi(z,w)$. Let $a=\bar z-\underline z$, $b=\bar w-\underline w$, $g(\delta)=\frac{(a+\delta b)^{2}}{8\delta}$; obviously $a>0$, $b>0$. From the derivative $g'(\delta)=\frac{\delta^{2}b^{2}-a^{2}}{8\delta^{2}}$ of $g(\delta)$, we know that $g(\delta)$ is convex on the interval $(0,+\infty)$ and attains its minimum value $\frac{(\bar z-\underline z)(\bar w-\underline w)}{2}$ at $\delta=\frac{a}{b}=\frac{\bar z-\underline z}{\bar w-\underline w}$. With this choice, $\max_{z,w}\Delta(z,w)$ and $\max_{z,w}\nabla(z,w)$ both take the minimum value $\frac{(\bar z-\underline z)(\bar w-\underline w)}{2}$, the gap between $\psi(z,w)$ and its upper and lower estimation functions reaches its smallest range, and the upper and lower estimates are more stable.
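As an illustration of Theorem 2 and of the choice of $\delta$ just discussed, the following Python sketch evaluates $\psi^{l}$ and $\psi^{u}$ and checks the sandwich inequality numerically; the function name psi_bounds and the sample interval values are illustrative assumptions only.

```python
import numpy as np

def psi_bounds(z, w, zl, zu, wl, wu, delta=None):
    """Linear under/over-estimators of psi(z, w) = z*w on [zl, zu] x [wl, wu].

    delta defaults to (zu - zl)/(wu - wl), the value minimizing the
    worst-case gap (zu - zl)*(wu - wl)/2 discussed above.
    """
    if delta is None:
        delta = (zu - zl) / (wu - wl)
    lin = 0.5 * ((wu + wl) * z + (zu + zl) * w)
    lo = lin + 0.5 * ((zl - delta * wu) * (zu - delta * wl) / (2 * delta)
                      - (zu + zl + delta * (wu + wl)) ** 2 / (8 * delta))
    up = lin + 0.5 * ((zu + zl - delta * (wu + wl)) ** 2 / (8 * delta)
                      - (zl + delta * wl) * (zu + delta * wu) / (2 * delta))
    return lo, up

# Sanity check of psi^l <= z*w <= psi^u on random points of a sample box.
rng = np.random.default_rng(0)
zl, zu, wl, wu = 1.0, 3.0, 2.0, 5.0
z = rng.uniform(zl, zu, 1000)
w = rng.uniform(wl, wu, 1000)
lo, up = psi_bounds(z, w, zl, zu, wl, wu)
assert np.all(lo <= z * w + 1e-9) and np.all(z * w <= up + 1e-9)
```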
For any sub-rectangle $Y^k\in\Xi$ and each $s=1,2,\ldots,p-1$, without loss of generality, define
$$\begin{array}{l}
\psi_s(y_{p-s},y_{p+s-1})=y_{p-s}\,y_{p+s-1}=\dfrac{(y_{p-s}+\delta_s^k y_{p+s-1})^{2}-(y_{p-s}-\delta_s^k y_{p+s-1})^{2}}{4\delta_s^k},\qquad \delta_s^k=\dfrac{\bar y_{p-s}^k-\underline{y}_{p-s}^k}{\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k},\\[8pt]
\underline\psi_s^k(y_{p-s},y_{p+s-1})=\dfrac{1}{2}\Big[(\bar y_{p+s-1}^k+\underline{y}_{p+s-1}^k)y_{p-s}+(\bar y_{p-s}^k+\underline{y}_{p-s}^k)y_{p+s-1}+\dfrac{(\underline{y}_{p-s}^k-\delta_s^k\bar y_{p+s-1}^k)(\bar y_{p-s}^k-\delta_s^k\underline{y}_{p+s-1}^k)}{2\delta_s^k}-\dfrac{(\bar y_{p-s}^k+\underline{y}_{p-s}^k+\delta_s^k(\bar y_{p+s-1}^k+\underline{y}_{p+s-1}^k))^{2}}{8\delta_s^k}\Big],\\[8pt]
\bar\psi_s^k(y_{p-s},y_{p+s-1})=\dfrac{1}{2}\Big[(\bar y_{p+s-1}^k+\underline{y}_{p+s-1}^k)y_{p-s}+(\bar y_{p-s}^k+\underline{y}_{p-s}^k)y_{p+s-1}+\dfrac{(\bar y_{p-s}^k+\underline{y}_{p-s}^k-\delta_s^k(\bar y_{p+s-1}^k+\underline{y}_{p+s-1}^k))^{2}}{8\delta_s^k}-\dfrac{(\underline{y}_{p-s}^k+\delta_s^k\underline{y}_{p+s-1}^k)(\bar y_{p-s}^k+\delta_s^k\bar y_{p+s-1}^k)}{2\delta_s^k}\Big].
\end{array}$$
Using Theorem 2 with $z=y_{p-s}$ and $w=y_{p+s-1}$, for each $s=1,2,\ldots,p-1$ the functions $\psi_s(y_{p-s},y_{p+s-1})$, $\underline\psi_s^k(y_{p-s},y_{p+s-1})$ and $\bar\psi_s^k(y_{p-s},y_{p+s-1})$ satisfy $0\le\psi_s(y_{p-s},y_{p+s-1})-\underline\psi_s^k(y_{p-s},y_{p+s-1})\to 0$ and $0\le\bar\psi_s^k(y_{p-s},y_{p+s-1})-\psi_s(y_{p-s},y_{p+s-1})\to 0$, as $\bar y_{p-s}^k-\underline{y}_{p-s}^k\to 0$, $\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k\to 0$.
Theorem 3.
For any sub-rectangle $Y^k\in\Xi$ and $y\in Y^k$, let $\varepsilon=\max\{\bar y_j^k-\underline{y}_j^k:j=1,2,\ldots,p\}$; then $y_{p+s}-\underline\psi_s^k(y_{p-s},y_{p+s-1})\to 0,\ s=1,2,\ldots,p-1$, as $\varepsilon\to 0$.
Proof of Theorem 3.
According to Theorem 2 and the definition of $\underline\psi_s^k(y_{p-s},y_{p+s-1})$, we easily know that
$$0\le y_{p+s}-\underline\psi_s^k(y_{p-s},y_{p+s-1})\le\frac{(\bar y_{p-s}^k-\underline{y}_{p-s}^k+\delta_s^k(\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k))^{2}}{8\delta_s^k}=\frac{|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|}{2}. \quad (12)$$
If $s=1$, the conclusion is obviously true, thus we only discuss the case $s\in\{2,3,\ldots,p-1\}$. For each $s=2,3,\ldots,p-1$, we have
$$\begin{aligned}
|y_{p+s}-\underline{y}_{p+s}^k|&=|y_{p-s}y_{p+s-1}-\underline{y}_{p-s}^k\underline{y}_{p+s-1}^k|\\
&\le|y_{p-s}y_{p+s-1}-y_{p-s}\underline{y}_{p+s-1}^k|+|y_{p-s}\underline{y}_{p+s-1}^k-\underline{y}_{p-s}^k\underline{y}_{p+s-1}^k|\\
&\le|y_{p-s}|\cdot|y_{p+s-1}-\underline{y}_{p+s-1}^k|+|y_{p-s}-\underline{y}_{p-s}^k|\cdot|\underline{y}_{p+s-1}^k|\\
&\le|\bar y_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\underline{y}_{p+s-1}^k|\\
&\le|\bar y_{p-s}^0|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^0|\\
&=|\bar y_{p-s}^0|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot\prod_{j=p-s+1}^{p}|\bar y_j^0|, \quad (13)
\end{aligned}$$
and
$$\begin{aligned}
|\bar y_{p+s}^k-y_{p+s}|&=|\bar y_{p-s}^k\bar y_{p+s-1}^k-y_{p-s}y_{p+s-1}|\\
&\le|\bar y_{p-s}^k\bar y_{p+s-1}^k-\bar y_{p-s}^k y_{p+s-1}|+|\bar y_{p-s}^k y_{p+s-1}-y_{p-s}y_{p+s-1}|\\
&\le|\bar y_{p-s}^k|\cdot|\bar y_{p+s-1}^k-y_{p+s-1}|+|\bar y_{p-s}^k-y_{p-s}|\cdot|y_{p+s-1}|\\
&\le|\bar y_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p+s-1}^k|\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\\
&\le|\bar y_{p-s}^0|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p+s-1}^0|\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\\
&=|\bar y_{p-s}^0|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot\prod_{j=p-s+1}^{p}|\bar y_j^0|. \quad (14)
\end{aligned}$$
Then, by using the triangle inequality to combine inequalities (13) and (14), we obtain the following recurrence formula:
$$|\bar y_{p+s}^k-\underline{y}_{p+s}^k|\le|\bar y_{p+s}^k-y_{p+s}|+|y_{p+s}-\underline{y}_{p+s}^k|\le N_1^s\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+N_2^s\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|, \quad (15)$$
where
$$N_1^s=2|\bar y_{p-s}^0|,\qquad N_2^s=2\prod_{j=p-s+1}^{p}|\bar y_j^0|,\qquad s=2,3,\ldots,p-1.$$
Furthermore, according to Equation (15), it is necessary to have
$$\begin{aligned}
|\bar y_{p+s}^k-\underline{y}_{p+s}^k|&\le N_1^s\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|+N_2^s\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\\
&\le N_1^s\cdot\left[N_1^{s-1}\cdot|\bar y_{p+s-2}^k-\underline{y}_{p+s-2}^k|+N_2^{s-1}\cdot|\bar y_{p-s+1}^k-\underline{y}_{p-s+1}^k|\right]+N_2^s\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\\
&\le\cdots\\
&\le\prod_{q=1}^{s}N_1^{q}\cdot|\bar y_{p}^k-\underline{y}_{p}^k|+\sum_{j=p-s+1}^{p-1}\Big(\prod_{u=p+1-j}^{s}N_1^{u}\cdot N_2^{p-j}\Big)\cdot|\bar y_j^k-\underline{y}_j^k|+N_2^s\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\\
&\le N^{s}\cdot\sum_{j=p-s}^{p}|\bar y_j^k-\underline{y}_j^k|, \quad (16)
\end{aligned}$$
where
$$N^{s}=\max\Big\{\prod_{q=1}^{s}N_1^{q},\ N_2^{s},\ \prod_{u=p+1-j}^{s}N_1^{u}\cdot N_2^{p-j}:\ j=p-s+1,p-s+2,\ldots,p-1\Big\}.$$
By combining Equations (12) and (16), we can deduce
$$y_{p+s}-\underline\psi_s^k(y_{p-s},y_{p+s-1})\le\frac{1}{2}|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|\le\frac{1}{2}N^{s-1}\cdot|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot\sum_{j=p-s+1}^{p}|\bar y_j^k-\underline{y}_j^k|. \quad (17)$$
Note that, at the rightmost end of inequality (17), when $s=2,3,\ldots,p-1$ we have $(p-s)\in\{1,2,\ldots,p-2\}$. Because $\varepsilon\to 0$ means $\bar y_j^k-\underline{y}_j^k\to 0$ for each $j=1,2,\ldots,p$, the right side of inequality (17) tends to zero, and then $y_{p+s}-\underline\psi_s^k(y_{p-s},y_{p+s-1})\to 0,\ s=1,2,\ldots,p-1$. The proof is complete. □
Below, by using the conclusions given above, the linear relaxation problem (LRP$^k$) is obtained by relaxing the nonlinear constraints of the equivalent problem (EP$^k$), which is expressed as follows:
$$(\mathrm{LRP}^k):\ \begin{array}{ll}\min & \underline f^{k}(x,y)=C^{T}(x^{T},y^{T})^{T}\\ \text{s.t.} & c_j^{T}x+d_j=y_j,\ j=1,2,\ldots,p,\\ & \underline\psi_s^k(y_{p-s},y_{p+s-1})\le y_{p+s},\ s=1,2,\ldots,p-1,\\ & x\in X,\ y\in Y^k.\end{array}$$
Of course, the relaxation subproblem (LRP$^0$) defined on the rectangle $Y^0$ is as follows:
$$(\mathrm{LRP}^0):\ \begin{array}{ll}\min & \underline f^{0}(x,y)=C^{T}(x^{T},y^{T})^{T}\\ \text{s.t.} & c_j^{T}x+d_j=y_j,\ j=1,2,\ldots,p,\\ & \underline\psi_s^0(y_{p-s},y_{p+s-1})\le y_{p+s},\ s=1,2,\ldots,p-1,\\ & x\in X,\ y\in Y^0.\end{array}$$
Theorem 3 shows that the feasible domain of the linear relaxation subproblem described above gradually approximates the feasible domain of the equivalent problem (EP) as the algorithm refines the first part $\hat Y^0$ of the hyper-rectangle $Y^0$.
An obvious fact is that, if (LRP$^k$) is infeasible, then (EP$^k$) is also infeasible; otherwise, for any optimal solution $(x^k,\tilde y^k)$ of (LRP$^k$), it is obvious that $f(x^k)=h(x^k,y^k)\ge\underline f^{k}(x^k,\tilde y^k)$. In particular, if $\tilde y^k$ and $y^k$ are defined as $\tilde y^k=(\tilde y_1^k,\tilde y_2^k,\ldots,\tilde y_p^k,\tilde y_{p+1}^k,\tilde y_{p+2}^k,\ldots,\tilde y_{2p-1}^k)^{T}$ and $y^k=(y_1^k,y_2^k,\ldots,y_p^k,y_{p+1}^k,y_{p+2}^k,\ldots,y_{2p-1}^k)^{T}=(\tilde y_1^k,\tilde y_2^k,\ldots,\tilde y_p^k,y_{p+1}^k,y_{p+2}^k,\ldots,y_{2p-1}^k)^{T}$, then $y_{p+s}^k=y_{p+s-1}^k\,\tilde y_{p-s}^k,\ s=1,2,\ldots,p-1$.
Finally, the bounding operation of our branch-and-bound procedure can be expressed as:
Step 5. (Bounding operation) Set $LB(Y^k):=\underline f^{k}(x^k,y^k)$, the optimal value of (LRP$^k$). If $LB(Y^k)<UB^k-\epsilon$ with $UB^k=h(x^{*},y^{*})=f(x^{*})$, put $Y^k$ into $\Xi$, i.e., $\Xi=\Xi\cup Y^k$.
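A minimal sketch of how (LRP$^k$) can be assembled and solved as an ordinary LP is given below, in Python/SciPy rather than the MATLAB implementation used in Section 4; the decision vector stacks $x$ and $y$, and the argument layout follows the earlier sketch and is an assumption of this illustration.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lrp(c, d, A, b, lo, up):
    """Solve the linear relaxation (LRP_k) over the box [lo, up] (length 2p-1).

    Decision vector is (x, y) in R^{n+2p-1}; the objective is y_{2p-1}.
    Returns the SciPy result object (res.fun is the lower bound LB(Y^k)).
    """
    p, n = c.shape
    N = n + 2 * p - 1
    obj = np.zeros(N)
    obj[-1] = 1.0                                      # minimize y_{2p-1}

    # Equalities  c_j^T x - y_j = -d_j , j = 1..p
    A_eq = np.zeros((p, N))
    A_eq[:, :n] = c
    A_eq[np.arange(p), n + np.arange(p)] = -1.0
    b_eq = -d

    # Inequalities  A x <= b  and  psi_lower_s(y_{p-s}, y_{p+s-1}) <= y_{p+s}
    A_ub = [np.hstack([A, np.zeros((A.shape[0], 2 * p - 1))])]
    b_ub = [b]
    for s in range(1, p):
        iz, iw, it = p - s - 1, p + s - 2, p + s - 1   # 0-based y_{p-s}, y_{p+s-1}, y_{p+s}
        zl, zu, wl, wu = lo[iz], up[iz], lo[iw], up[iw]
        delta = (zu - zl) / (wu - wl) if wu > wl else 1.0
        const = 0.5 * ((zl - delta * wu) * (zu - delta * wl) / (2 * delta)
                       - (zu + zl + delta * (wu + wl)) ** 2 / (8 * delta))
        row = np.zeros(N)
        row[n + iz] = 0.5 * (wu + wl)
        row[n + iw] = 0.5 * (zu + zl)
        row[n + it] = -1.0
        A_ub.append(row[None, :])
        b_ub.append(np.array([-const]))

    bounds = [(None, None)] * n + list(zip(lo, up))
    return linprog(obj, A_ub=np.vstack(A_ub), b_ub=np.hstack(b_ub),
                   A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```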
Remark 1.
In this paper, we obtain the lower bound relaxation problem (LRP$^k$) by using $\underline\psi_s^k(y_{p-s},y_{p+s-1})\le y_{p+s}$ to relax the feasible domain of the equivalent problem (EP$^k$). If we solve the linear maximum multiplicative programming problem, we only need to use $\bar\psi_s^k(y_{p-s},y_{p+s-1})\ge y_{p+s}$ for a similar upper bound relaxation.
Remark 2.
If a linear function is added to the objective function of the problem (LMP), there is an equivalent transformation and proof similar to those of Section 2.2, because we only make the equivalent transformation of the product term of the objective function. A similar relaxation subproblem then exists, and the rectangular partition method in the next subsection is unchanged, but the reduction method for the hyper-rectangle differs slightly; we describe it at the end of Section 2.4.

2.3. Subdivision and Refinement of Hyper-Rectangle

The branching operation is also indispensable in a branch-and-bound procedure. In this subsection, we give the branch-refinement rule for any $Y^k=[\underline{y}^k,\bar y^k]\in\Xi$, where $Y^k=\hat Y^k\times\tilde Y^k$.
According to Equations (1) and (4) and $y_{p+s}=y_{p+s-1}\,y_{p-s},\ s\in\{1,2,\ldots,p-1\}$, the generation of the rectangle $\tilde Y^k$ mainly depends on the successive multiplication of the coordinate components of the lower-left vertex and the upper-right vertex of $\hat Y^k$. Therefore, we only use the standard bisection to split the former part $\hat Y^k$ of the sub-rectangle $Y^k$, and then refine the latter part $\tilde Y^k$ according to Equation (4). The specific operations of Step 3 are as follows:
(i) For the rectangle $\hat Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k]$, let $\bar y_\mu^k-\underline{y}_\mu^k=\max\{\bar y_j^k-\underline{y}_j^k:j=1,2,\ldots,p\}$ and $y_\mu^k=\frac{\underline{y}_\mu^k+\bar y_\mu^k}{2}$. Using $y_\mu^k$, the interval $[\underline{y}_\mu^k,\bar y_\mu^k]$ corresponding to the $\mu$-th edge of the rectangle $\hat Y^k$ is divided into the two intervals $[\underline{y}_\mu^k,y_\mu^k]$ and $[y_\mu^k,\bar y_\mu^k]$, and $\hat Y^k$ is thereby divided into two sub-rectangles $\hat Y^{k1}$ and $\hat Y^{k2}$, which can be expressed by the Cartesian product as
$$\hat Y^{k1}=\prod_{j=1}^{\mu-1}[\underline{y}_j^k,\bar y_j^k]\times[\underline{y}_\mu^k,y_\mu^k]\times\prod_{j=\mu+1}^{p}[\underline{y}_j^k,\bar y_j^k]=\prod_{j=1}^{p}[\underline{y}_j^{k1},\bar y_j^{k1}],\qquad
\hat Y^{k2}=\prod_{j=1}^{\mu-1}[\underline{y}_j^k,\bar y_j^k]\times[y_\mu^k,\bar y_\mu^k]\times\prod_{j=\mu+1}^{p}[\underline{y}_j^k,\bar y_j^k]=\prod_{j=1}^{p}[\underline{y}_j^{k2},\bar y_j^{k2}].$$
(ii) For the splitting and refinement of the hyper-rectangle $\tilde Y^k$, the upper-right vertex of $\hat Y^{k1}$ and the lower-left vertex of $\hat Y^{k2}$ are used, respectively. In this way, we finally obtain two hyper-rectangles, $\tilde Y^{k1}$ and $\tilde Y^{k2}$. According to $y_{p+s}=y_{p+s-1}\,y_{p-s},\ s\in\{1,2,\ldots,p-1\}$, the Cartesian product forms of $\tilde Y^{k1}$ and $\tilde Y^{k2}$ are
$$\tilde Y^{k1}=\prod_{s=1}^{p-1}\left[\underline{y}_{p+s}^k,\ \min\{\bar y_{p+s}^k,\ \bar y_{p+s-1}^{k1}\,\bar y_{p-s}^{k1}\}\right]=\prod_{s=1}^{p-1}[\underline{y}_{p+s}^{k1},\bar y_{p+s}^{k1}],\qquad
\tilde Y^{k2}=\prod_{s=1}^{p-1}\left[\max\{\underline{y}_{p+s}^k,\ \underline{y}_{p+s-1}^{k2}\,\underline{y}_{p-s}^{k2}\},\ \bar y_{p+s}^k\right]=\prod_{s=1}^{p-1}[\underline{y}_{p+s}^{k2},\bar y_{p+s}^{k2}].$$
Obviously, $Y^{k1}\cap Y^{k2}=\emptyset$.
Although $Y^k$ is a $(2p-1)$-dimensional hyper-rectangular space, it can be seen from the branch-refinement method described above that we only branch the rectangle $\hat Y^k$, and the bounds of $\tilde Y^{k1}$ ($\tilde Y^{k2}$) can be obtained directly from the bounds of $\hat Y^{k1}$ ($\hat Y^{k2}$); thus, the branching process of the branch-and-bound algorithm is carried out in the $p$-dimensional space $\hat Y^k$.
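The branching rule of this subsection can be sketched as follows (Python, illustrative only; the bound vectors lo and up of length $2p-1$ follow the layout assumed in the earlier sketches).

```python
import numpy as np

def branch(lo, up, p):
    """Bisect the longest edge of Yhat^k and rebuild Ytilde per Section 2.3.

    lo, up: length-(2p-1) bound vectors of Y^k. Returns the two children
    (lo1, up1), (lo2, up2).
    """
    mu = int(np.argmax(up[:p] - lo[:p]))           # longest edge of Yhat^k
    mid = 0.5 * (lo[mu] + up[mu])
    lo1, up1 = lo.copy(), up.copy()
    lo2, up2 = lo.copy(), up.copy()
    up1[mu] = mid                                   # Yhat^{k1}
    lo2[mu] = mid                                   # Yhat^{k2}
    for s in range(1, p):                           # refine Ytilde via Eq. (4)
        i, j, t = p - s - 1, p + s - 2, p + s - 1   # y_{p-s}, y_{p+s-1}, y_{p+s}
        up1[t] = min(up1[t], up1[j] * up1[i])
        lo2[t] = max(lo2[t], lo2[j] * lo2[i])
    return (lo1, up1), (lo2, up2)
```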

2.4. Reduction of the Hyper-Rectangle

In this subsection, we give a reduction technique for hyper-rectangles, used to delete sub-rectangles $Y^k$ that do not contain the globally optimal solution, or to delete the part of a sub-rectangle $Y^k$ that does not contain a globally optimal solution. In this way, the number of rectangles in the set $\Xi$ is reduced, or the rectangles in $\Xi$ are effectively refined, and the bounding operation is thereby accelerated.
Without loss of generality, assume that the current hyper-rectangle to be reduced is $Y^k=\hat Y^k\times\tilde Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k]\times\prod_{s=1}^{p-1}[\underline{y}_{p+s}^k,\bar y_{p+s}^k]\in\Xi$, and that the best objective function value obtained by the algorithm thus far is $UB^k$. Because any globally optimal solution must satisfy $y_{2p-1}=\prod_{j=1}^{p}y_j\le UB^k$, for each $t=1,2,\ldots,p$ and $v=1,2,\ldots,p-2$, define
$$\gamma^k=\min\{UB^k,\bar y_{2p-1}^k\},\qquad \alpha_t^k=\frac{\gamma^k}{\prod_{j=1,j\ne t}^{p}\underline{y}_j^k},\qquad \beta_0^k=\gamma^k,\qquad \beta_v^k=\frac{\beta_{v-1}^k}{\underline{y}_v^k}.$$
It is easy to see that, if the problem (EP) has a globally optimal solution in the rectangle $Y^k$, a necessary condition is $\underline{y}_{2p-1}^k\le y_{2p-1}\le\gamma^k$. This necessary condition is also used in the following two rectangle-reduction theorems.
In view of the fact that the hyper-rectangle $Y^k$ consists of the two parts $\hat Y^k$ and $\tilde Y^k$, we reduce it in two steps. For this reason, we give Theorems 4 and 5 below, which set forth the super-rectangular reduction technique of this section.
Theorem 4.
For each $t=1,2,\ldots,p$: if $\alpha_t^k<\underline{y}_t^k$, the original problem (EP) has no globally optimal solution on the rectangle $Y^k$; otherwise, if $\alpha_t^k<\bar y_t^k$, the rectangle $Y^{k1}$ does not contain a globally optimal solution of the problem (EP), where
$$Y^{k1}=\hat Y^{k1}\times\tilde Y^k\subseteq Y^k\quad\text{with}\quad \hat Y_j^{k1}=\begin{cases}\hat Y_j^k, & j\ne t,\\ (\alpha_t^k,\bar y_t^k]\cap\hat Y_j^k, & j=t.\end{cases}$$
Proof of Theorem 4.
If there is a $t\in\{1,2,\ldots,p\}$ satisfying $\alpha_t^k<\underline{y}_t^k$, then
$$\gamma^k=\min\{UB^k,\bar y_{2p-1}^k\}=\alpha_t^k\prod_{j=1,j\ne t}^{p}\underline{y}_j^k<\prod_{j=1}^{p}\underline{y}_j^k\le\prod_{j=1}^{p}y_j=y_{2p-1},$$
so there is no globally optimal solution of (EP) on $Y^k$. Next, we prove that $\gamma^k<y_{2p-1}$ for each $y\in Y^{k1}$. When $y\in Y^{k1}$, consider the $t$-th element $y_t$ of $y$; because $y_t\in(\alpha_t^k,\bar y_t^k]\cap\hat Y_t^k$, we have
$$\alpha_t^k<y_t\le\bar y_t^k.$$
According to the definition of $\alpha_t^k$ and the above inequality, we also have
$$\gamma^k=\min\{UB^k,\bar y_{2p-1}^k\}=\alpha_t^k\prod_{j=1,j\ne t}^{p}\underline{y}_j^k<y_t\prod_{j=1,j\ne t}^{p}\underline{y}_j^k\le\prod_{j=1}^{p}y_j=y_{2p-1}.$$
This means that $\gamma^k<y_{2p-1}$ for all $y\in Y^{k1}$. Therefore, there is no globally optimal solution of the problem (EP) on $Y^{k1}$. □
To facilitate the description of Theorem 5, we still denote the hyper-rectangle reduced by Theorem 4 by $Y^k=\hat Y^k\times\tilde Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k]\times\prod_{s=1}^{p-1}[\underline{y}_{p+s}^k,\bar y_{p+s}^k]\subseteq Y^0$. It can be seen that the second part of $Y^k$ has not changed, and Theorem 5 is given below to reduce $\tilde Y^k$.
Theorem 5.
For each $v=1,2,\ldots,p-1$: if $\beta_{p-v-1}^k<\bar y_{p+v}^k$, the problem (EP) has no globally optimal solution on the hyper-rectangle $Y^{k2}$, where
$$Y^{k2}=\hat Y^k\times\tilde Y^{k2}\subseteq Y^k\quad\text{with}\quad \tilde Y_s^{k2}=\begin{cases}\tilde Y_s^k, & s\ne v,\\ (\beta_{p-v-1}^k,\bar y_{p+v}^k]\cap\tilde Y_s^k, & s=v.\end{cases}$$
Proof of Theorem 5.
First, according to the definition of $\beta_v^k$, we have
$$\beta_v^k=\frac{\beta_{v-1}^k}{\underline{y}_v^k}=\frac{\beta_{v-2}^k}{\underline{y}_v^k\,\underline{y}_{v-1}^k}=\cdots=\frac{\beta_1^k}{\underline{y}_v^k\,\underline{y}_{v-1}^k\cdots\underline{y}_2^k}=\frac{\beta_0^k}{\prod_{l=1}^{v}\underline{y}_l^k}=\frac{\gamma^k}{\prod_{l=1}^{v}\underline{y}_l^k}.$$
Second, we prove that $\gamma^k<y_{2p-1}$ for any $y\in Y^{k2}$. If $v=p-1$, obviously $y_{p+v}=y_{2p-1}>\beta_0^k=\gamma^k$, and then $Y^{k2}$ does not contain a globally optimal solution of the problem (EP). If $v\in\{1,2,\ldots,p-2\}$ and $\beta_{p-v-1}^k<\bar y_{p+v}^k$ is satisfied, we continue to consider the $(p+v)$-th element $y_{p+v}$ of $y$. If $y_{p+v}\in(\beta_{p-v-1}^k,\bar y_{p+v}^k]\cap\tilde Y_v^k$, then $\beta_{p-v-1}^k<y_{p+v}\le\bar y_{p+v}^k$. Because $\beta_{p-v-1}^k=\gamma^k/\prod_{l=1}^{p-v-1}\underline{y}_l^k$, this means that, for all $y\in Y^{k2}$,
$$\gamma^k=\beta_{p-v-1}^k\prod_{l=1}^{p-v-1}\underline{y}_l^k<y_{p+v}\prod_{l=1}^{p-v-1}y_l=y_{p+v-1}\prod_{l=1}^{p-v}y_l=\cdots=y_p\prod_{l=1}^{p-1}y_l=y_{2p-1}.$$
Therefore, there is no globally optimal solution of the original problem (EP) on the hyper-rectangle $Y^{k2}$. □
According to Theorems 4 and 5, we can construct the following reduction techniques to reduce the hyper-rectangle Y k , which makes the compressed hyper-rectangle thinner and removes the part of the hyper-rectangle Y k that does not contain the globally optimal solution, so that the search-space required for the algorithm to solve the problem (EP) is reduced, thus speeding up the convergence of the algorithm.
Step 4. (Rectangle-reducing operation)
(i) For each $t=1,2,\ldots,p$: if $\alpha_t^k<\underline{y}_t^k$, let $Y^k=\emptyset$. Otherwise, if $\alpha_t^k<\bar y_t^k$, let $\bar y_t^k=\alpha_t^k$.
(ii) For each $v=1,2,\ldots,p-1$: if $\beta_{p-v-1}^k<\bar y_{p+v}^k$, let $\bar y_{p+v}^k=\beta_{p-v-1}^k$.
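A compact sketch of this reduction step, under the same illustrative data layout as in the earlier sketches and assuming $\underline{y}_j^k>0$ as guaranteed by (3), is as follows.

```python
import numpy as np

def reduce_rectangle(lo, up, p, UB):
    """Rectangle-reducing operation of Section 2.4 (Theorems 4 and 5).

    Returns the reduced (lo, up), or None when Y^k cannot contain a
    globally optimal solution of (EP). Assumes lo[:p] > 0.
    """
    lo, up = lo.copy(), up.copy()
    gamma = min(UB, up[2 * p - 2])                 # gamma^k = min{UB^k, ybar_{2p-1}^k}
    prod_lo = np.prod(lo[:p])
    for t in range(p):                             # Theorem 4
        alpha_t = gamma * lo[t] / prod_lo          # gamma^k / prod_{j != t} lo_j
        if alpha_t < lo[t]:
            return None
        if alpha_t < up[t]:
            up[t] = alpha_t
    for v in range(1, p):                          # Theorem 5: v = 1, ..., p-1
        beta_pv1 = gamma / np.prod(lo[:p - v - 1]) # beta_{p-v-1}^k
        if beta_pv1 < up[p + v - 1]:
            up[p + v - 1] = beta_pv1
    return lo, up
```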
In addition, if the objective function of the problem (LMP) contains an additional linear term, then, to keep the above reduction method valid, we need to adjust $UB^k$, because it is affected by the additional linear term. Assuming that this additional linear term is $e^{T}x+f$, then, before the algorithm begins, we solve the linear programming problem $\xi=\min_{x\in X}e^{T}x+f$ and set $UB^k=UB^k-\xi$. Of course, $e$ is an $n$-dimensional column vector and $f$ is a constant. Obviously, the adjusted $UB^k$ also satisfies the conditions of the above two theorems.

2.5. Output-Space Branch-and-Bound Reduction Algorithm

In this subsection, we combine the output-space branch-and-bound reduction procedure with the bounding operation, branching operation, and rectangle-reducing operation, and then construct a new deterministic global optimization algorithm for solving the problem (EP), namely the Output-Space Branch-and-Bound Reduction Algorithm ( O S B B R A ).
To describe the algorithm smoothly, we explain the relevant symbols used at iteration $k$ as follows: $Y^k$ is the hyper-rectangle to be subdivided in the current iteration; $\Theta$ is the set of feasible solutions of the problem (EP) stored in the current iteration; $\Xi$ is the set of sub-rectangles remaining after the pruning step; $UB^k$ is the upper bound of the globally optimal value of the problem (EP) in the current iteration; $LB^k$ is the lower bound of the globally optimal value of the problem (EP); and $LB(Y^k)$ and $(x,y)$ represent the optimal value and solution of the subproblem (LRP$^k$) on the rectangle $Y^k$, respectively. In addition, any feasible point $x$ of (LMP) yields a feasible point $(x,\ddot y)$ of (EP), with $\ddot y^k=(\ddot y_1^k,\ddot y_2^k,\ldots,\ddot y_p^k,\ddot y_{p+1}^k,\ddot y_{p+2}^k,\ldots,\ddot y_{2p-1}^k)^{T}$, $\ddot y_j^k=c_j^{T}x+d_j,\ j=1,2,\ldots,p$, and $\ddot y_{p+s}^k=\ddot y_{p+s-1}^k\,\ddot y_{p-s}^k,\ s=1,2,\ldots,p-1$. The specific steps of the algorithm (OSBBRA) are as follows:
Step 0. (Initialization) Set the tolerance $\epsilon>0$. The initial hyper-rectangle $Y^0$ is constructed by using Equations (2) and (3); each feasible solution $x$ of (LMP) obtained from the linear programs in (2) corresponds to one feasible solution $(x,\ddot y)$ of (EP), and all such feasible solutions of (EP) are stored in the set $\Theta$. Solve the initial subproblem (LRP$^0$) on the hyper-rectangle $Y^0$; the corresponding optimal value and solution are $LB(Y^0)$ and $(x^0,y^0)$, respectively. Let $\Theta=\Theta\cup\{(x^0,\ddot y^0)\}$. Thus, $LB^0=LB(Y^0)$ can be used as the initial lower bound of the globally optimal value of the problem (EP). The initial upper bound is $UB^0=\min\{h(x,y):(x,y)\in\Theta\}$, and the initial best solution of the problem (EP) is $(x^{*},y^{*})=\arg UB^0$. If $UB^0-LB^0\le\epsilon$, then stop; the $\epsilon$-globally optimal solution of the problem (EP) is $(x^{*},y^{*})$. Otherwise, set $\Xi=\{Y^0\}$, $H=\emptyset$, $\Theta=\emptyset$, the iteration number $k=1$, and go to Step 2.
Step 1. (Termination criteria) If $UB^k-LB^k\le\epsilon$ or $\Xi=\emptyset$, then the algorithm terminates, the $\epsilon$-globally optimal solution $(x^{*},y^{*})$ and the $\epsilon$-globally optimal value $h(x^{*},y^{*})$ of the (EP) problem are output immediately, and the $\epsilon$-globally optimal solution $x^{*}$ and the $\epsilon$-globally optimal value $f(x^{*})=h(x^{*},y^{*})$ of (LMP) are also output immediately. Otherwise, go to Step 2.
Step 2. According to $LB^k=LB(Y^k)$, select the sub-rectangle $Y^k$ from the set $\Xi$ and set $\Xi=\Xi\setminus Y^k$; then go to Step 3.
Step 3. (Branching operation) Using the subdivision and refinement method of Section 2.3, divide $Y^k$ into two sub-rectangles $Y^{k1}$ and $Y^{k2}$ satisfying $Y^{k1}\cap Y^{k2}=\emptyset$. Then, go to Step 4.
Step 4. (Rectangle-reducing operation) Through the reduction technique of Section 2.4, compress the two sub-rectangles $Y^{k1}$ and $Y^{k2}$ obtained in the previous step; the index set of the sub-rectangles remaining after compression is denoted by $\Gamma$. Obviously, $|\Gamma|\le 2$.
Step 5. (Bounding operation) For each $i\in\Gamma$ with $LB(Y^{k_i})<UB^k-\epsilon$, let $H=H\cup\{Y^{k_i}\}$ and $\Theta=\Theta\cup\{(x^i,\ddot y^i)\}$. If $H=\emptyset$ and $\Xi\ne\emptyset$, return to Step 2. Else, if $H=\emptyset$ and $\Xi=\emptyset$, return to Step 1. Else, set $\Xi=\Xi\cup H$ and go to Step 6.
Step 6. (Updating the upper and lower bounds) Let $U=\min\{UB^k,\ h(x,y):(x,y)\in\Theta\}$. If $U\le UB^k$, update the current best solution to $(x^{*},y^{*})\in\arg\min\{h(x,y):(x,y)\in\Theta\}$ and set $UB^k=U$. Let $LB^k=\min\{LB(Y):Y\in\Xi\}$. Set $k:=k+1$, $H=\emptyset$, $\Theta=\emptyset$, and return to Step 1.
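Putting the pieces together, the following simplified Python driver illustrates the overall flow of OSBBRA. It relies on the hypothetical helpers initial_rectangle, solve_lrp, branch, and reduce_rectangle from the earlier sketches, keeps a single incumbent instead of the set $\Theta$, and is a sketch of the scheme under those assumptions rather than the authors' MATLAB implementation.

```python
import numpy as np

def f_val(c, d, x):
    """Objective of (LMP): prod_j (c_j^T x + d_j)."""
    return float(np.prod(c @ x + d))

def osbbra(c, d, A, b, eps=1e-6, max_iter=10000):
    """Simplified driver for the output-space branch-and-bound reduction scheme."""
    p, n = c.shape
    lo0, up0 = initial_rectangle(c, d, A, b)
    res = solve_lrp(c, d, A, b, lo0, up0)
    x_best = res.x[:n]
    UB, LB = f_val(c, d, x_best), res.fun
    active = [(LB, lo0, up0)]                      # set Xi of live rectangles
    for _ in range(max_iter):
        if not active or UB - LB <= eps:
            break
        active.sort(key=lambda r: r[0])
        _, lo, up = active.pop(0)                  # rectangle with smallest bound
        for child in branch(lo, up, p):
            red = reduce_rectangle(*child, p=p, UB=UB)
            if red is None:
                continue                           # Theorem 4: child discarded
            res = solve_lrp(c, d, A, b, *red)
            if not res.success or res.fun >= UB - eps:
                continue                           # pruning
            fx = f_val(c, d, res.x[:n])            # x part is (LMP)-feasible
            if fx < UB:
                UB, x_best = fx, res.x[:n]
            active.append((res.fun, *red))
        LB = min((r[0] for r in active), default=UB)
    return x_best, UB

# Usage (with data c, d, A, b laid out as in the earlier sketches):
# x_opt, f_opt = osbbra(c, d, A, b)
```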
Remark 3.
In Step 5, after each compression we save into $\Xi$ only the hyper-rectangles $Y^{k_i}$ with $LB(Y^{k_i})<UB^k-\epsilon$, which implies the pruning operation of the algorithm.
Remark 4.
As can be seen from Steps 4–6, the number of elements in Θ does not exceed two in each algorithm loop. At the same time, the phase of updating the upper bound in Step 5 computes at most two function values.
Remark 5.
The branch search space of our OSBBRA algorithm is p-dimensional. When p is much smaller than the dimension n of decision variables, the convergence rate of the algorithm is relatively faster than that of n-dimensional decision space search.

3. Analysis of the Computational Complexity of the Algorithm

In this section, we derive the maximum number of iterations of the proposed algorithm by analyzing its computational complexity. For convenience of exposition, we first measure the longest edge of the first part of a rectangle $Y^k=\hat Y^k\times\tilde Y^k\subseteq Y^0$, that is, the longest edge of
$$\hat Y^k=\prod_{j=1}^{p}[\underline{y}_j^k,\bar y_j^k], \quad (18)$$
using
$$\ell(\hat Y^k)=\max\{\bar y_j^k-\underline{y}_j^k:j=1,2,\ldots,p\}. \quad (19)$$
In addition, we also define
$$h_s=\frac{1}{2}|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|,\qquad s=1,2,\ldots,p-1, \quad (20)$$
$$\varsigma=\frac{\sum_{s=2}^{p-1}s\cdot N^{s-1}+1}{2}, \quad (21)$$
$$\varrho=\max\Big\{1,\ \prod_{j=1}^{s}|\bar y_j^0|:\ s=1,2,\ldots,p-2\Big\}, \quad (22)$$
where $\bar y_j^0$ is given by (2).
Lemma 1.
For a given convergence tolerance $\epsilon>0$, if there is a rectangle $Y^k=\hat Y^k\times\tilde Y^k$ satisfying $\ell(\hat Y^k)\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}$ when the algorithm runs to the $k$-th cycle, then we have
$$UB-LB(Y^k)\le\epsilon,$$
where $LB(Y^k)$ is the optimal value of the problem (LRP$^k$) and $UB$ denotes the current best upper bound of the equivalent problem (EP).
Proof of Lemma 1.
Let $(x^k,y^k)$ be the optimal solution of the linear relaxation problem (LRP$^k$); obviously $y_j^k=c_j^{T}x^k+d_j,\ j=1,2,\ldots,p$. In addition, let
$$\tilde y_p^k=y_p^k,\qquad \tilde y_{p+s}^k=\tilde y_{p+s-1}^k\,y_{p-s}^k,\quad s=1,2,\ldots,p-1,$$
and
$$\tilde y^k=(y_1^k,y_2^k,\ldots,y_p^k,\tilde y_{p+1}^k,\tilde y_{p+2}^k,\ldots,\tilde y_{2p-1}^k)^{T};$$
then $(x^k,\tilde y^k)$ is a feasible solution of the equivalent problem (EP$^k$). By the definitions of $LB(Y^k)$ and $UB$, we have
$$\underline f^{k}(x^k,y^k)=LB(Y^k)\le UB\le h(x^k,\tilde y^k)=f(x^k).$$
Therefore, from Equations (1) and (18)–(22), we have
$$\begin{aligned}
UB-LB(Y^k)&\le f(x^k)-\underline f^{k}(x^k,y^k)=h(x^k,\tilde y^k)-\underline f^{k}(x^k,y^k)=\tilde y_{2p-1}^k-y_{2p-1}^k\\
&\le\tilde y_{2p-1}^k-\underline\psi_{p-1}^k(y_1^k,y_{2p-2}^k)\\
&=\left(\tilde y_{2p-2}^k y_1^k-y_{2p-2}^k y_1^k\right)+\left(y_{2p-2}^k y_1^k-\underline\psi_{p-1}^k(y_1^k,y_{2p-2}^k)\right)\\
&\le y_1^k\left(\tilde y_{2p-2}^k-y_{2p-2}^k\right)+\frac{1}{2}|\bar y_1^k-\underline{y}_1^k|\cdot|\bar y_{2p-2}^k-\underline{y}_{2p-2}^k|\\
&\le\bar y_1^0\left(\tilde y_{2p-2}^k-y_{2p-2}^k\right)+h_{p-1}\\
&\le\bar y_1^0\bar y_2^0\left(\tilde y_{2p-3}^k-y_{2p-3}^k\right)+\bar y_1^0 h_{p-2}+h_{p-1}\\
&\le\cdots\\
&\le\prod_{j=1}^{p-2}\bar y_j^0\,h_1+\prod_{j=1}^{p-3}\bar y_j^0\,h_2+\cdots+\prod_{j=1}^{2}\bar y_j^0\,h_{p-3}+\bar y_1^0 h_{p-2}+h_{p-1}\\
&\le\varrho\sum_{s=1}^{p-1}h_s.
\end{aligned}$$
Then, by using (17), we have
$$\begin{aligned}
UB-LB(Y^k)&\le\varrho\sum_{s=1}^{p-1}h_s=\varrho\sum_{s=1}^{p-1}\Big(\frac{1}{2}|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|\Big)\\
&=\frac{\varrho}{2}\Big[\sum_{s=2}^{p-1}\big(|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot|\bar y_{p+s-1}^k-\underline{y}_{p+s-1}^k|\big)+|\bar y_{p-1}^k-\underline{y}_{p-1}^k|\cdot|\bar y_p^k-\underline{y}_p^k|\Big]\\
&\le\frac{\varrho}{2}\Big[\sum_{s=2}^{p-1}\Big(|\bar y_{p-s}^k-\underline{y}_{p-s}^k|\cdot N^{s-1}\cdot\sum_{j=p-s+1}^{p}|\bar y_j^k-\underline{y}_j^k|\Big)+|\bar y_{p-1}^k-\underline{y}_{p-1}^k|\cdot|\bar y_p^k-\underline{y}_p^k|\Big]\\
&\le\frac{\varrho}{2}\big(\ell(\hat Y^k)\big)^{2}\Big(\sum_{s=2}^{p-1}s\cdot N^{s-1}+1\Big)=\varrho\varsigma\big(\ell(\hat Y^k)\big)^{2}.
\end{aligned}$$
Thus, according to the above inequality and $\ell(\hat Y^k)\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}$, we obtain
$$UB-LB(Y^k)\le\varrho\varsigma\big(\ell(\hat Y^k)\big)^{2}\le\epsilon.$$
This completes the proof of Lemma 1. □
On the premise of Lemma 1, if $\ell(\hat Y^k)\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}$ is satisfied, then the sub-rectangle $Y^k$ can be deleted. Thus, according to Step 5 of the algorithm, when every sub-rectangle $Y^k$ obtained by the refinement of $Y^0$ satisfies $\ell(\hat Y^k)\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}$, the algorithm terminates. Therefore, we can derive the maximum number of iterations of the algorithm through Lemma 1; Theorem 6 gives the specific result.
Theorem 6.
For a given convergence tolerance $\epsilon>0$, the maximum number of iterations required by the algorithm OSBBRA to obtain an $\epsilon$-globally optimal solution of the problem (LMP) is
$$N=2^{\sum_{j=1}^{p}\left\lceil\log_2\frac{\sqrt{\varrho\varsigma}\,(\bar y_j^0-\underline{y}_j^0)}{\sqrt{\epsilon}}\right\rceil}-1,$$
where $\varsigma$ and $\varrho$ are given by (21) and (22), respectively. In addition, $\hat Y^0=\prod_{j=1}^{p}\hat Y_j^0$ with $\hat Y_j^0=[\underline{y}_j^0,\bar y_j^0]$ is given by (2).
Proof of Theorem 6.
Assume that $Y=\hat Y\times\tilde Y$ with $\hat Y=\prod_{j=1}^{p}\hat Y_j=\prod_{j=1}^{p}[\underline{y}_j,\bar y_j]$ is the rectangle selected from $\Xi$ when the algorithm reaches a certain Step 3. Suppose that, after $k_j$ iterations, there is a subinterval $\hat Y_j^{k_j}=[\underline{y}_j^{k_j},\bar y_j^{k_j}]$ of $\hat Y_j^0=[\underline{y}_j^0,\bar y_j^0]$ satisfying
$$\bar y_j^{k_j}-\underline{y}_j^{k_j}\le\sqrt{\frac{\epsilon}{\varrho\varsigma}},\qquad j=1,2,\ldots,p. \quad (23)$$
Considering the branching process of the algorithm, we then have
$$\bar y_j^{k_j}-\underline{y}_j^{k_j}=\frac{1}{2^{k_j}}\left(\bar y_j^0-\underline{y}_j^0\right),\qquad j=1,2,\ldots,p. \quad (24)$$
By combining (23) with (24), we have
$$\frac{1}{2^{k_j}}\left(\bar y_j^0-\underline{y}_j^0\right)\le\sqrt{\frac{\epsilon}{\varrho\varsigma}},\qquad j=1,2,\ldots,p,$$
that is,
$$k_j\ge\log_2\frac{\sqrt{\varrho\varsigma}\,(\bar y_j^0-\underline{y}_j^0)}{\sqrt{\epsilon}},\qquad j=1,2,\ldots,p.$$
Let
$$\bar k_j=\left\lceil\log_2\frac{\sqrt{\varrho\varsigma}\,(\bar y_j^0-\underline{y}_j^0)}{\sqrt{\epsilon}}\right\rceil,\qquad j=1,2,\ldots,p.$$
Then, after $K_1=\sum_{j=1}^{p}\bar k_j$ iterations, the algorithm will have generated at most $K_1+1$ rectangles, denoted $Y^1,Y^2,\ldots,Y^{K_1+1}$, which must all satisfy
$$\ell(\hat Y^t)=2^{K_1-t}\ell(\hat Y^{K_1})=2^{K_1-t}\ell(\hat Y^{K_1+1}),\qquad t=K_1,K_1-1,\ldots,2,1, \quad (25)$$
where $\ell(\hat Y^{K_1})=\ell(\hat Y^{K_1+1})=\max\{\bar y_j^{\bar k_j}-\underline{y}_j^{\bar k_j}:j=1,2,\ldots,p\}$ and
$$\hat Y^0=\bigcup_{t=1}^{K_1+1}\hat Y^t. \quad (26)$$
Now, put these $K_1+1$ rectangles into the set $\Xi_{K_1+1}$, that is,
$$\Xi_{K_1+1}=\{Y^t:t=1,2,\ldots,K_1+1\},$$
and, according to (23), we have
$$\ell(\hat Y^{K_1})=\ell(\hat Y^{K_1+1})\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}. \quad (27)$$
To simplify the description below, let $\bar\ell=\ell(\hat Y^{K_1})=\ell(\hat Y^{K_1+1})$. Combined with (27), it is obvious that
$$\bar\ell\le\sqrt{\frac{\epsilon}{\varrho\varsigma}}. \quad (28)$$
Then, $Y^{K_1}$ and $Y^{K_1+1}$ will be removed from the set $\Xi_{K_1+1}$ by using Lemma 1 and Step 5 of the algorithm, because there is no globally optimal solution of the problem (EP) in $Y^{K_1}$ or $Y^{K_1+1}$. Furthermore, the remaining rectangles are placed in the set $\Xi_{K_1}$, where
$$\Xi_{K_1}=\Xi_{K_1+1}\setminus\{Y^{K_1},Y^{K_1+1}\}=\{Y^t:t=1,2,\ldots,K_1-1\}.$$
Of course, the new set Ξ K 1 will continue to be considered.
Next, let us focus on $Y^{K_1-1}$. According to (25) and the branch rule of Section 2.3, $Y^{K_1-1}$ will immediately be divided into two sub-rectangles $Y^{K_1-1,1}$ and $Y^{K_1-1,2}$ satisfying
$$\hat Y^{K_1-1}=\hat Y^{K_1-1,1}\cup\hat Y^{K_1-1,2} \quad (29)$$
and
$$\ell(\hat Y^{K_1-1})=2\ell(\hat Y^{K_1-1,1})=2\ell(\hat Y^{K_1-1,2})=2\bar\ell. \quad (30)$$
Therefore, $Y^{K_1-1}$ is removed from the set $\Xi_{K_1}$ by using (28)–(30), and the algorithm iterates once more. At the same time, the remaining rectangles are put into the set $\Xi_{K_1-1}$, that is,
$$\Xi_{K_1-1}=\Xi_{K_1}\setminus\{Y^{K_1-1}\}=\Xi_{K_1+1}\setminus\{Y^{K_1-1},Y^{K_1},Y^{K_1+1}\}=\{Y^t:t=1,2,\ldots,K_1-2\}.$$
Of course, $Y^{K_1-2}$ will also be divided immediately into $Y^{K_1-2,1}$ and $Y^{K_1-2,2}$ satisfying
$$\hat Y^{K_1-2}=\hat Y^{K_1-2,1}\cup\hat Y^{K_1-2,2} \quad (31)$$
and
$$\ell(\hat Y^{K_1-2})=2\ell(\hat Y^{K_1-2,1})=2\ell(\hat Y^{K_1-2,2})=2\ell(\hat Y^{K_1-1})=2^{2}\bar\ell. \quad (32)$$
Then, both $Y^{K_1-2,1}$ and $Y^{K_1-2,2}$ must each be divided once more to satisfy (28); that is, the algorithm must iterate $2^{2}-1=3$ times before $Y^{K_1-2}$ is removed from the set $\Xi_{K_1-1}$. The remaining rectangles are then put into the set $\Xi_{K_1-2}$, that is,
$$\Xi_{K_1-2}=\Xi_{K_1-1}\setminus\{Y^{K_1-2}\}=\Xi_{K_1+1}\setminus\{Y^{K_1-2},Y^{K_1-1},Y^{K_1},Y^{K_1+1}\}=\{Y^t:t=1,2,\ldots,K_1-3\}.$$
Similar to (31) and (32), for a rectangle $Y^t\ (t=1,2,\ldots,K_1-1)$, we also have
$$\hat Y^t=\hat Y^{t,1}\cup\hat Y^{t,2}$$
and
$$\ell(\hat Y^t)=2\ell(\hat Y^{t,1})=2\ell(\hat Y^{t,2})=2\ell(\hat Y^{t+1})=2^{2}\ell(\hat Y^{t+2})=\cdots=2^{K_1-1-t}\ell(\hat Y^{K_1-1})=2^{K_1-t}\bar\ell. \quad (33)$$
According to (28) and (33), the algorithm must iterate at most $2^{K_1-t}-1$ times before $Y^t$ is removed from its corresponding set $\Xi_{t+1}$.
Then, put the remaining rectangles into the set $\Xi_t$, that is,
$$\Xi_t=\Xi_{t+1}\setminus\{Y^t\}=\Xi_{K_1+1}\setminus\{Y^t,Y^{t+1},\ldots,Y^{K_1-2},Y^{K_1-1},Y^{K_1},Y^{K_1+1}\},\qquad t=1,2,\ldots,K_1-1. \quad (34)$$
Therefore, as $t$ runs from $K_1-1$ down to $1$, the algorithm iterates at most
$$K=K_1+\sum_{t=1}^{K_1-1}\left(2^{K_1-t}-1\right)=2^{K_1}-1=2^{\sum_{j=1}^{p}\left\lceil\log_2\frac{\sqrt{\varrho\varsigma}\,(\bar y_j^0-\underline{y}_j^0)}{\sqrt{\epsilon}}\right\rceil}-1$$
times. In addition, according to (26) and (34), we have
$$\Xi_1=\Xi_{K_1+1}\setminus\{Y^1,Y^2,\ldots,Y^{K_1-2},Y^{K_1-1},Y^{K_1},Y^{K_1+1}\}=\Xi_{K_1+1}\setminus\{\hat Y^1\times\tilde Y^1,\hat Y^2\times\tilde Y^2,\ldots,\hat Y^{K_1}\times\tilde Y^{K_1},\hat Y^{K_1+1}\times\tilde Y^{K_1+1}\}=\emptyset,$$
and then the algorithm stops running, by Step 5 of the algorithm. □
Remark 6.
Through Theorem 6, when the proposed algorithm OSBBRA finds an $\epsilon$-globally optimal solution of the problem (LMP), we can use
$$2K\cdot T(m+6p-3,\ n+2p-1)$$
as an upper bound on the running time of the algorithm, where $T(m+6p-3,\ n+2p-1)$ denotes the time taken to solve a linear programming problem with $n+2p-1$ variables and $m+6p-3$ constraints.
Remark 7.
Theorem 6 fully ensures that the algorithm OSBBRA terminates within a finite number of iterations, because of the existence of this worst-case number of iterations.
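For illustration, the worst-case bound of Theorem 6 can be evaluated directly once $\varrho$ and $\varsigma$ are known; the small Python helper below simply applies the formula and assumes $\varrho$ and $\varsigma$ are supplied by the caller.

```python
import math

def max_iterations(lo0, up0, p, varrho, varsigma, eps):
    """Evaluate the worst-case iteration count N of Theorem 6.

    lo0, up0: bound vectors of Yhat^0 (first p entries used);
    varrho, varsigma: the quantities (22) and (21), supplied by the caller;
    eps: the convergence tolerance.
    """
    K1 = sum(math.ceil(math.log2(math.sqrt(varrho * varsigma)
                                 * (up0[j] - lo0[j]) / math.sqrt(eps)))
             for j in range(p))
    return 2 ** K1 - 1
```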

4. Numerical Examples

In this section, we present many random experiments of different scales using the proposed algorithm, together with performance comparisons against previous methods for solving the problem (LMP).
The code of our algorithm was written in MATLAB (2016a). All computations were carried out on a personal PC with an Intel(R) Core(TM) i5-4210M 2.60 GHz processor, 4 GB of memory, and the Microsoft Windows 7 operating system. In the numerical experiments, the linprog solver of MATLAB (2016a) was used to solve all linear programming problems, while the quadratic convex programming problems in [27] were solved by the quadprog solver in MATLAB (2016a). We use the following notation:
  • Solution: the optimal solution
  • Optimum: the optimal value
  • Opt.val: the average of the optimal values for the 10 problems arising from the 10 sets of random coefficients calculated using the commercial software package BARON
  • Iter: the number of iterations
  • ϵ : tolerance
  • Ref: reference
  • Time: the CPU running time
  • Avg: average performance of an algorithm for a set of random problems
  • Std: standard deviation of performance of an algorithm for a set of random problems
  • “-”: the problem cannot be solved in 2400 s
  • “∗”: problems of this size not solved in [41]

4.1. Feasibility Tests

In this subsection, we give several exact examples to illustrate that the algorithm OSBBRA is effective and feasible.
  • Example 1 [23,25,26,42]
    $$\begin{array}{ll}\min & (x_1+x_2)(x_1-x_2+7)\\ \text{s.t.} & 2x_1+x_2\le 14,\quad x_1+x_2\le 10,\quad -4x_1+x_2\le 0,\\ & 2x_1+x_2\ge 6,\quad x_1+2x_2\ge 6,\quad x_1-x_2\le 3,\\ & x_1+x_2\ge 0,\quad x_1-x_2+7\ge 0,\quad x_1,x_2\ge 0.\end{array}$$
  • Example 2 [25,26,39,42]
    min f ( x ) = ( 0.813396 x 1 + 0.67440 x 2 + 0.305038 x 3 + 0.129742 x 4 + 0.217796 ) × ( 0.224508 x 1 + 0.063458 x 2 + 0.932230 x 3 + 0.528736 x 4 + 0.091947 ) s . t . 0.488509 x 1 + 0.063565 x 2 + 0.945686 x 3 + 0.210704 x 4 3.562809 , 0.324014 x 1 0.501754 x 2 0.719204 x 3 + 0.099562 x 4 0.052215 , 0.445225 x 1 0.346896 x 2 + 0.637939 x 3 0.257623 x 4 0.427920 , 0.202821 x 1 + 0.647361 x 2 + 0.920135 x 3 0.983091 x 4 0.840950 , 0.886420 x 1 0.802444 x 2 0.305441 x 3 0.180123 x 4 1.353686 , 0.515399 x 1 0.424820 x 2 + 0.897498 x 3 + 0.187268 x 4 2.137251 , 0.591515 x 1 + 0.060581 x 2 0.427365 x 3 + 0.579388 x 4 0.290987 , 0.423524 x 1 + 0.940496 x 2 0.437944 x 3 0.742941 x 4 0.373620 , x 1 0 , x 2 0 , x 3 0 , x 4 0 .
  • Example 3 [23]
    $$\begin{array}{ll}\min & (c_1^{T}x+d_1)(c_2^{T}x+d_2)\\ \text{s.t.} & Ax=b,\ x\ge 0,\end{array}$$
    where
    $$b=(81,72,72,9,9,9,8,8)^{T},\quad d_1=0,\quad d_2=0,\quad c_1=\Big(1,0,\tfrac{1}{9},0,0,0,0,0,0,0,0\Big)^{T},\quad c_2=\Big(0,1,\tfrac{1}{9},0,0,0,0,0,0,0,0\Big)^{T},$$
    $$A=\begin{pmatrix}
    9 & 9 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
    8 & 1 & 8 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
    1 & 8 & 8 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\
    7 & 1 & 1 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0\\
    1 & 7 & 1 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0\\
    1 & 1 & 7 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0\\
    1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
    0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
    \end{pmatrix}.$$
    From the literature [23], we know that Example 3 can be transformed into the following form:
    $$(\mathrm{LMP}):\ \begin{array}{ll}\min & \left(x_1+\tfrac{1}{9}x_3\right)\left(x_2+\tfrac{1}{9}x_3\right)\\ \text{s.t.} & 9x_1+9x_2+2x_3\le 81,\quad 8x_1+x_2+8x_3\le 72,\quad x_1+8x_2+8x_3\le 72,\\ & 7x_1+x_2+x_3\ge 9,\quad x_1+7x_2+x_3\ge 9,\quad x_1+x_2+7x_3\ge 9,\\ & 0\le x_1\le 8,\quad 0\le x_2\le 8,\quad 0\le x_3\le 9.\end{array}$$
    Then, we obtained the globally optimal solution of the problem (LMP), $x=(0.0,8.0,1.0)^{T}$, and from it the globally optimal solution of Example 3, $x=(0.0,8.0,1.0,\ldots)^{T}$, where the remaining components are chosen such that $Ax=b$.
  • Example 4 [26,42]
    min x 1 + ( x 1 x 2 + 5 ) ( x 1 + x 2 1 ) s . t . 2 x 1 3 x 2 9 , 3 x 1 x 2 8 , 1 x 1 + 2 x 2 8 , x 1 + 2 x 2 12 , x 1 x 2 + 5 0 , x 1 + x 2 1 0 , x 1 0 .
  • Example 5 [26,42]
    min ( x 1 + x 2 ) ( x 1 x 2 ) + ( x 1 + x 2 + 1 ) ( x 1 x 2 + 1 ) s . t . x 1 + 2 x 2 10 , x 1 3 x 2 20 , 0 x 1 3 , 0 x 2 3 .
    The objective function of Example 5 can be transformed into 2 ( x 1 x 2 + 3 ) ( x 1 + x 2 + 1 ) 6 x 1 4 x 2 5 .
  • Example 6 [26,42]
    min ( x 1 + x 2 ) ( x 1 x 2 ) + ( x 1 + x 2 + 2 ) ( x 1 x 2 + 2 ) s . t . x 1 + 2 x 2 10 , x 1 3 x 2 20 , 0 x 1 4 , 0 x 2 4 .
    The objective function of the Example 6 can be transformed into 2 ( x 1 x 2 + 4 ) ( x 1 + x 2 ) 4 x 1 8 x 2 + 4 .
  • Example 7
    min ( x 1 + 2 x 2 + 6 ) ( x 1 + x 2 x 3 + 3 ) ( x 1 + x 2 2 x 3 + 7 ) s . t . x 2 2 x 3 1 , 3 x 1 3 x 2 + 4 x 3 12 , 3 x 2 5 x 3 14 , x 1 0 , x 2 0 , x 3 0 .
  • Example 8
    min f ( x ) = ( 4 x 1 2 x 4 + 3 x 5 + 21 ) ( 4 x 1 + 2 x 2 + 3 x 3 4 x 4 + 4 x 5 3 ) × ( 3 x 1 + 4 x 2 + 2 x 3 2 x 4 + 2 x 5 7 ) ( 2 x 1 + x 2 2 x 3 + 2 x 5 + 11 ) s . t . 4 x 1 + 4 x 2 + 5 x 3 + 3 x 4 + x 5 25 , x 1 5 x 2 + 2 x 3 + 3 x 4 + x 5 2 , x 1 + 2 x 2 + x 3 2 x 4 + 2 x 5 6 , 4 x 2 + 3 x 3 8 x 4 + 11 x 5 8 , x 1 + x 2 + x 3 + x 4 + x 5 6 , x 1 1 , x 2 1 , x 3 1 , x 4 1 , x 5 1 .
As shown in Table 1, the algorithm OSBBRA can accurately obtain the globally optimal solution of these eight low-dimensional examples, which shows that the algorithm is effective and feasible. We can see that the algorithm OSBBRA only needs one iteration in solving Example 2 and takes the least time compared with other algorithms. For Example 3, the time and the number of iterations are the most, but only five iterations are required. Examples 1 and 4 need only two iterations to obtain the solution, and the algorithm in this paper only needs very little time to solve them. In the process of solving Example 1, the running speed of this algorithm is much faster than that in Refs. [23,25]. For Examples 5 and 6, because of the special structure of the case, OSBBRA uses more iterations and time to solve such problems than other algorithms, but it can also get the globally optimal solution in less than 0.5 s, which indicates that our relaxation bound method can be further generalized. Examples 7 and 8 are two examples constructed by us, and the results of this algorithm were compared with those of the software package BARON [43]. It can be seen that the calculation results of this algorithm are the same as those of BARON, but the running time of CPU used in our algorithm is less than that of BARON, especially the number of iterations used by BARON in solving Example 8 is much higher than that of algorithm OSBBRA.
The above eight small-scale examples only illustrate the validity and feasibility of the algorithm OSBBRA; they do not reveal its other performance characteristics when solving the problem (LMP). Therefore, in the next subsection, we examine these characteristics through a series of random tests. The experimental results are presented in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, and the information in these eight tables is analyzed to illustrate the performance and the applicable conditions of the algorithm.

4.2. Testing of Random Problems

To test the other features of the algorithm, we used three random generation schemes to produce random instances of (LMP):
(LMP1): min ∏_{j=1}^{p} c_j^T x
        s.t. ∑_{i=1}^{n} A_{si} x_i ≤ b_s,  s = 1, 2, …, m,
             0 ≤ x_i ≤ 1,  i = 1, 2, …, n.
(LMP2): min ∏_{j=1}^{2} (c_j^T x + 1)
        s.t. ∑_{i=1}^{n} A_{si} x_i ≤ b_s,  s = 1, 2, …, m,
             x_i ≥ 0,  i = 1, 2, …, n.
In (LMP1) and (LMP2), each A_{si} is randomly generated in the interval [−1, 1], and the right-hand side b_s is generated by ∑_{i=1}^{n} A_{si} + 2π, where π is randomly generated in the interval [0, 1]. This is consistent with the methods in Refs. [19,41].
(LMP3): min ∏_{j=1}^{p} c_j^T x
        s.t. ∑_{i=1}^{n} A_{si} x_i ≤ b_s,  s = 1, 2, …, m,
             x_i ≥ 0,  i = 1, 2, …, n.
The coefficients A_{si}, b_s, and c_j of (LMP3) were randomly generated in the interval [0, 100], which agrees with the way random numbers are generated in Ref. [19].
In addition, for every problem we solved 10 different random instances of each size and report statistical information on the results. The tolerance was set to 10^−6 for all random problems.
Remark 8.
For each size (p, m, n), a set of random coefficients is generated in the given interval to produce a random instance of the problem, which is then solved by OSBBRA, BARON, and the algorithms of Refs. [27], [38], [19], and [41], respectively, to obtain the corresponding computational results.
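To make the generation schemes concrete, the following Python sketch shows one way to produce a random (LMP1) instance according to the rules above; the function name make_lmp1, the use of NumPy, and the uniform distribution chosen for the vectors c_j are our own illustrative assumptions rather than part of the original experimental setup.

```python
import numpy as np

def make_lmp1(p, m, n, rng=None):
    """Generate one random (LMP1) instance: A, b and the objective vectors c_1..c_p.

    A_si ~ U[-1, 1]; b_s = sum_i A_si + 2*pi_s with pi_s ~ U[0, 1].
    The c_j are drawn from U[0, 1] here; this is an assumption, since the
    text only fixes the ranges used for A and b.
    """
    rng = np.random.default_rng() if rng is None else rng
    A = rng.uniform(-1.0, 1.0, size=(m, n))                   # constraint matrix
    b = A.sum(axis=1) + 2.0 * rng.uniform(0.0, 1.0, size=m)   # right-hand sides
    C = rng.uniform(0.0, 1.0, size=(p, n))                    # rows are c_1^T, ..., c_p^T
    return A, b, C

# Example: one instance of size (p, m, n) = (2, 10, 100)
A, b, C = make_lmp1(2, 10, 100, rng=np.random.default_rng(0))
```

The box constraints 0 ≤ x_i ≤ 1 of (LMP1) are part of the model itself and therefore need not be generated.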

4.2.1. Testing of Random Problem (LMP1)

For the problem (LMP1), we compared the algorithm OSBBRA with the algorithm in Ref. [27] and the Primal algorithm in Ref. [38], and used the optimal value obtained by the commercial software package BARON as a reference to evaluate the quality of the optimal solutions obtained by these three algorithms. For each group (p, m, n), 10 instances were randomly generated and the results were averaged. The quality of an optimal solution is measured by the following formula:
Optimum.ratio = | (f(x*) − Opt.val) / Opt.val |,
where x* is the final solution returned by each of the three algorithms, f(x*) is the corresponding objective value, and Opt.val is the optimal value obtained by BARON. To display the solution quality clearly, we report Optimum.ratio × 10^5; the corresponding results are listed in Table 2.
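As a small illustration, the quantity above can be computed directly from a solver's objective value and the BARON reference value; the helper below is only a sketch of this bookkeeping, with illustrative numbers that are not taken from Table 2.

```python
def optimum_ratio(f_x_star: float, opt_val: float) -> float:
    """Relative deviation |(f(x*) - Opt.val) / Opt.val| of a solver's
    objective value from the BARON reference value."""
    return abs((f_x_star - opt_val) / opt_val)

# Table 2 reports the scaled quantity Optimum.ratio * 1e5, e.g.:
scaled = optimum_ratio(100.001, 100.0) * 1e5   # = 1.0 for a relative error of 1e-5
```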
As can be seen in Table 2, in terms of the quality of the obtained optimal solution, the three algorithms rank, from best to worst, as OSBBRA, Ref. [38], and Ref. [27]; that is, the optimal value obtained by our algorithm is the most accurate. Regarding the CPU running time, two cases can be distinguished:
(i) For (p, m, n) = (2, 20, 200), (2, 30, 300), (2, 40, 400), the time spent increases in the order OSBBRA, Ref. [38], Ref. [27]; that is, OSBBRA is the fastest of the three algorithms.
(ii) For (p, m, n) = (2, 10, 100) and for p = 3, 4, OSBBRA takes the most time of the three algorithms, followed by the algorithm in Ref. [27], while the algorithm in Ref. [38] takes the least.
Thus, in Case (ii) our algorithm takes the most time, whereas in Case (i) it takes less time than the other two algorithms. This is because, in the first case, p ≪ n, which better reflects the advantages of our algorithm; the same can be seen in the larger-scale results in Table 3. This feature of OSBBRA is also reflected in Table 4 and Table 5, and the data in Table 8 and Table 9 point to the same explanation, to which we return below.
For the problem (LMP1), we also performed higher-dimensional numerical experiments and recorded the results in Table 3. For these high-dimensional instances we did not run BARON, because it takes too long to solve the higher-dimensional (LMP1) problems; in Table 2, the BARON results are only used to assess the quality of the optimal values of the three algorithms. In Table 3, the quality of the optimal values of our algorithm is still the best, followed by the algorithms in Refs. [27,38]. In particular, when solving the 4000-dimensional problems, the algorithm in Ref. [27] cannot find the optimal solution within 2400 s, and, for p = 6, 7, the algorithm in Ref. [38] also fails to obtain the optimal solution within 2400 s. In addition, the results in Table 2 and Table 3 show that the computing power of the algorithm in Ref. [38] is better than that of Ref. [27]. The results in Table 3 also show that, when (p, m, n) = (5, 50, 1000), OSBBRA takes less time than the other two algorithms, whereas, for (p, m, n) = (6, 50, 1000), (7, 50, 1000), OSBBRA is faster only than the algorithm in Ref. [27]. Moreover, although the time spent by all three algorithms increases with the size of the problem, OSBBRA always needs less than 2400 s. This shows that, under certain circumstances, our algorithm is better suited than the other algorithms to solving large-scale (LMP) problems.
The behavior observed in Table 2 and Table 3 is mainly due to the characteristics of the individual algorithms. In the algorithm of Ref. [27], each iteration requires solving a quadratic programming problem; within a certain range of small-scale problems this indeed takes less time than OSBBRA, but for larger problems it takes longer. The Primal algorithm in Ref. [38] needs to solve only two linear programming problems per iteration and is indeed faster than the other two algorithms on small-scale problems; however, it is a cutting-plane-type method, so the number of constraints grows with every iteration. As the problem size increases beyond a certain point, this advantage disappears, because the accumulated constraints gradually occupy more computer memory and eventually slow the computation down. At the same time, the algorithms in Refs. [27,38] both need to store a large number of vertices, which also affects the running time. The algorithm OSBBRA stores hyper-rectangles of dimension 2p − 1, and at most two hyper-rectangles are added at each iteration. Through the pruning operation and the rectangle-reduction technique of the branch-and-bound framework, hyper-rectangles that cannot contain the globally optimal solution are deleted, so that storage space is saved and its influence on the performance of the algorithm is reduced as much as possible. Consequently, the computational performance of OSBBRA is mainly affected by the dimension 2p − 1 of the hyper-rectangles.
Next, we performed additional numerical experiments on the problem (LMP1) with OSBBRA and recorded the results in Table 4 and Table 5. The main purpose of these two tables is to explore how the number of nodes that the computer must store during the search for the optimal solution varies with the size of (p, m, n); the average running time and the average number of iterations of this process are also recorded.
The two groups of numerical experiments were computed separately with the algorithm OSBBRA in order to observe how the sizes of p and n affect its performance (Ave.Time and Ave.Iter). In the first group, p is fixed in turn to 2, 3, 4, 5, 6, and 7, n = 2m, and m runs from 10 to 100 in steps of 10; the results are shown in Table 4. They show that, for fixed p, the average CPU running time Ave.Time increases as the dimension n of the decision variable grows, while the average maximum number of nodes Ave.Node stored in the branch-and-bound tree and the average number of iterations Ave.Iter may either increase or decrease. For fixed (m, n), Ave.Time, Ave.Iter, and Ave.Node all grow as p increases; in particular, when p goes from 6 to 7, all three grow sharply, which indicates that the size of p has a strong effect on the performance of the algorithm. The results of Table 2, Table 3 and Table 4 thus show that both p and n influence the computation. We also performed another set of experiments for more extreme cases. As can be seen from the first four rows of Table 5, when (m, n) = (10, 2) is fixed and p takes the values 10, 20, 30, and 40, Ave.Time, Ave.Iter, and Ave.Node increase rapidly; here p is larger than n. From the last six rows of Table 5, we can also see that, when p is much smaller than n, the algorithm obtains the globally optimal solution of the problem (LMP) in a short time. The performance of the algorithm is therefore very sensitive to the size of p, mainly because our branching operation subdivides the p-dimensional set Y^k (defined as in Section 2.3); a minimal sketch of this output-space branching step is given below.
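To make the storage argument concrete, the following Python sketch shows a generic output-space bisection step of the kind used in branch-and-bound over a p-dimensional box. The bisection-along-the-longest-edge rule, the Node layout, and the pruning helper are illustrative assumptions and do not reproduce the exact branching and rectangle-reduction rules of Section 2.3.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    lower: Tuple[float, ...]   # lower corner of the p-dimensional box
    upper: Tuple[float, ...]   # upper corner of the box
    bound: float               # lower bound given by the relaxation on this box

def bisect(node: Node) -> List[Node]:
    """Split the box along its longest edge (an illustrative rule);
    at most two child boxes are created per iteration."""
    widths = [u - l for l, u in zip(node.lower, node.upper)]
    j = widths.index(max(widths))
    mid = 0.5 * (node.lower[j] + node.upper[j])
    left_up = list(node.upper); left_up[j] = mid
    right_lo = list(node.lower); right_lo[j] = mid
    return [Node(node.lower, tuple(left_up), node.bound),
            Node(tuple(right_lo), node.upper, node.bound)]

def prune(nodes: List[Node], best_value: float, eps: float = 1e-6) -> List[Node]:
    """Delete boxes whose relaxation bound already exceeds the incumbent;
    only boxes that may still contain the global optimum are kept."""
    return [nd for nd in nodes if nd.bound < best_value - eps]
```

Because the boxes live in the low-dimensional output space rather than in R^n, the per-iteration growth of the stored node list does not depend on n, which is the storage advantage discussed above.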

4.2.2. Testing of Random Problem (LMP2)

The previous experiments show how the computational behavior of OSBBRA is influenced by (p, m, n). Next, we conducted numerical experiments on the special scheme (LMP2), fixing p = 2 in order to observe the effect of (m, n) on the algorithm. For this scheme, OSBBRA was compared with the results reported in Refs. [19,41], and the related data are recorded in Table 6. Following the method of Ref. [19], the CPU running time and the number of iterations for (LMP2) were normalized with respect to the 10 × 20 base problem using the formula
Time (Iter) on the m × n problem / Time (Iter) on the 10 × 20 problem,
respectively, and the data are recorded in Table 7.
First, the results in Table 6 show that the stability of our algorithm is not as good as that of the other two. Table 7 shows that, in terms of time, our algorithm performs best on average, whereas, in terms of the number of iterations, OSBBRA is on average slightly worse than the other two algorithms. For (m, n) = (110, 199), the constraint matrix of the solved problem is 109 times larger than that of the base problem, yet the time used by OSBBRA is only 14 times that of the base problem, whereas the algorithm in Ref. [19] needs 162 times the base time. For (m, n) = (100, 100), whose constraint matrix is 50 times larger than that of the base problem, we solved the problem in only six times the base time, whereas the algorithm of Ref. [41] takes 22 times and the algorithm of Ref. [19] takes 54 times the base time.
To sum up, in this particular case of fixed p = 2, the computational time of OSBBRA grows more slowly than that of the algorithms in Refs. [19,41]. In the next subsection we again take the case p = 2 as the base problem and test how the computing time requirements of OSBBRA grow compared with Ref. [19].

4.2.3. Testing of Random Problem (LMP3)

The results in Section 4.2.2 show that, for p = 2, our algorithm OSBBRA takes far less time than the algorithm in Ref. [19] to solve relatively large-scale problems, although it is less stable. In this subsection, we use the random scheme (LMP3), whose random coefficients have a wider range of values, to examine the growth of the computing time requirements of our algorithm (measured by r_p, defined below), again taking p = 2 as the base case. To allow a direct comparison with Ref. [19] and to avoid unnecessary computation, we extracted the comparable data from Ref. [19], computed the corresponding instances with our algorithm, and recorded the experimental results in Table 8, while the values of r_p are recorded in Table 9. For p = 2, 3, 4 and 5, r_p was obtained from the formula
r_p = (Avg.Time for p = i) / (Avg.Time for p = 2).
The results in Table 8 show that the stability of OSBBRA is still relatively poor, which also affects the values of r_p. As the experimental data and analysis of Table 4 and Table 5 already indicated, our algorithm is particularly sensitive to the value of p; for p = 2, 3 or (m, n) = (20, 30), the computing time requirements grow slowly, whereas for (m, n) = (100, 100), (120, 120), (200, 200) and p = 4, 5 they grow faster. When (m, n) = (200, 200), the growth for p = 5 is similar to that for p = 4; given the wide range of the random coefficients and the slightly poor stability of our algorithm, such a result is acceptable. Furthermore, the results in Table 9 show that the computational requirements of OSBBRA grow more rapidly than those of Ref. [19] only for (p, m, n) = (4, 100, 100), (4, 120, 120), (4, 200, 200), and (5, 100, 100). When n is relatively large compared with p, the value of r_p is relatively small, because our algorithm has a distinct advantage in this case. The results of Table 4 and Table 5 likewise show that, when p ≪ n, the computing time requirements of the algorithm grow slowly, which explains from another angle why the values of r_p in Table 9 are relatively small and again confirms the advantage of our algorithm in this situation.
From the experimental results in Section 4.2.1, Section 4.2.2 and Section 4.2.3, we can summarize that, compared with the other algorithms, our algorithm is characterized by the high accuracy of the computed optimal values and slightly poorer stability, and that it is more advantageous for solving large-scale, high-dimensional problems in the case p ≪ n. In fact, in practical problems, the value of p in the problem (LMP) is generally no more than 10, and the dimension n of the decision variable is much larger than p. During branching, the subdivided p-dimensional rectangle has 2^p vertices, which is very small compared with the 2^n vertices of an n-dimensional hyper-rectangle; this is the main reason our algorithm performs better than those of Refs. [19,27,38,41] on this kind of large-scale problem. We can also see from Theorem 5 in Ref. [44] that p and n affect the convergence rate of the output-space algorithm.

5. Conclusions

In this paper, an output-space branch-and-bound reduction algorithm (OSBBRA) is proposed for solving the problem (LMP). Based on a new bilinear function relaxation technique, the linear relaxation of the equivalent problem (EP) is constructed, and the other components of OSBBRA (the bounding operation, the branching operation, and the rectangle-reduction operation) are given. In Section 4, the feasibility, effectiveness, and further performance characteristics of the algorithm are illustrated by a large number of numerical experiments, and it is pointed out that the algorithm is especially effective for high-dimensional problems when p ≪ n. The method of this paper can also be directly extended to the linear maximum multiplicative programming problem; in a broader sense, it can be generalized indirectly as well, and we will consider this issue in future research.

Author Contributions

B.Z. and Y.G. conceived of and designed the study. B.Z. and X.L. performed the experiments. B.Z. wrote the paper. Y.G. and X.H. reviewed and edited the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China under Grant (11961001), the Construction Project of first-class subjects in Ningxia higher Education (NXY LXK2017B09), and the major proprietary funded project of North Minzu University (ZDZX201901).

Acknowledgments

The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have greatly improved the earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LMP – linear multiplicative programming
EP – equivalent nonlinear programming
LRP – linear relaxation programming
T – transpose of a vector or matrix

References

  1. Maranas, C.D.; Androulakis, I.P.; Floudas, C.A.; Berger, A.J.; Mulvey, J.M. Solving long-term financial planning problems via global optimization. J. Econ. Dyn. Control 1997, 21, 1405–1425. [Google Scholar] [CrossRef]
  2. Konno, H.; Shirakawa, H.; Yamazaki, H. A mean-absolute deviation-skewness portfolio optimization model. Ann. Oper. Res. 1993, 45, 205–220. [Google Scholar] [CrossRef]
  3. Nicholas, R.; Layard, P.R.G.; Walters, A.A. Microeconomic Theory. Economica 1980, 47, 211. [Google Scholar]
  4. Mulvey, J.M.; Vanderbei, R.J.; Zenios, S.A. Robust Optimization of Large-Scale Systems. Oper. Res. 1995, 43, 264–281. [Google Scholar] [CrossRef] [Green Version]
  5. Bennett, K.P. Global tree optimization: A non-greedy decision tree algorithm. Comput. Sci. Stat. 1994, 26, 156. [Google Scholar]
  6. Benson, H.P. Vector maximization with two objective functions. J. Optim. Theory Appl. 1979, 28, 253–257. [Google Scholar] [CrossRef]
  7. Dennis, D.F. Analyzing Public Inputs to Multiple Objective Decisions on National Forests Using Conjoint Analysis. For. Sci. 1998, 44, 421–429. [Google Scholar]
  8. Dorneich, M.C.; Sahinidis, N.V. Global optimization algorithms for chip layout and compaction. Eng. Optim. 1995, 25, 131–154. [Google Scholar] [CrossRef]
  9. Kuno, T. Globally determining a minimum-area rectangle enclosing the projection of a higher-dimensional set. Oper. Res. Lett. 1993, 13, 295–303. [Google Scholar] [CrossRef]
  10. Mititelu, Ş.; Treanţă, S. Efficiency conditions in vector control problems governed by multiple integrals. J. Appl. Math. Comput. 2018, 57, 647–665. [Google Scholar] [CrossRef]
  11. Treanţă, S. On Locally and Globally Optimal Solutions in Scalar Variational Control Problems. Mathematics 2019, 7, 829. [Google Scholar] [CrossRef] [Green Version]
  12. Treanţă, S. Multiobjective fractional variational problem on higher-order jet bundles. Commun. Math. Stat. 2016, 4, 323–340. [Google Scholar] [CrossRef]
  13. Treanţă, S. On a new class of vector variational control problems. Numer. Funct. Anal. Optim. 2018, 39, 1594–1603. [Google Scholar] [CrossRef]
  14. Saghand, P.G.; Charkhgard, H.; Kwon, C. A branch-and-bound algorithm for a class of mixed integer linear maximum multiplicative programs: A bi-objective optimization approach. Comput. Oper. Res. 2019, 101, 263–274. [Google Scholar] [CrossRef]
  15. Grötschel, M.; Lovász, L.; Schrijver, A. Geometric Algorithms and Combinatorial Optimization; Springer: Berlin, Germany, 1988. [Google Scholar]
  16. Charkhgard, H.; Savelsbergh, M.; Talebian, M. A linear programming based algorithm to solve a class of optimization problems with a multi-linear objective function and affine constraints. Comput. Oper. Res. 2018, 89, 17–30. [Google Scholar] [CrossRef]
  17. Matsui, T. NP-Hardness of linear multiplicative programming and related problems. J. Glob. Optim. 1996, 9, 113–119. [Google Scholar] [CrossRef] [Green Version]
  18. Kuno, T. A finite branch-and-bound algorithm for linear multiplicative programming. Appl. Math. Comput. 2001, 20, 119–135. [Google Scholar]
  19. Ryoo, H.S.; Sahinidis, N.V. Global optimization of multiplicative programs. J. Glob. Optim. 2003, 26, 387–418. [Google Scholar] [CrossRef]
  20. Kuno, T. Solving a class of multiplicative programs with 0-1 knapsack constraints. J. Optim. Theory Appl. 1999, 103, 121–135. [Google Scholar] [CrossRef]
  21. Benson, H.P. An outcome space branch and bound-outer approximation algorithm for convex multiplicative programming. J. Glob. Optim. 1999, 15, 315–342. [Google Scholar] [CrossRef]
  22. Jiao, H. A branch and bound algorithm for globally solving a class of nonconvex programming problems. Nonlinear Anal. Theory Methods Appl. 2009, 70, 1113–1123. [Google Scholar] [CrossRef]
  23. Chen, Y.; Jiao, H. A nonisolated optimal solution of general linear multiplicative programming problems. Comput. Oper. Res. 2009, 36, 2573–2579. [Google Scholar] [CrossRef]
  24. Shen, P.; Bai, X.; Li, W. A new accelerating method for globally solving a class of nonconvex programming problems. Nonlinear Anal. Theory Methods Appl. 2009, 71, 2866–2876. [Google Scholar] [CrossRef]
  25. Wang, C.F.; Liu, S.Y.; Shen, P.P. Global minimization of a generalized linear multiplicative programming. Appl. Math. Model. 2012, 36, 2446–2451. [Google Scholar] [CrossRef]
  26. Wang, C.F.; Bai, Y.Q.; Shen, P.P. A practicable branch-and-bound algorithm for globally solving linear multiplicative programming. Optimization 2017, 66, 397–405. [Google Scholar] [CrossRef]
  27. Gao, Y.; Xu, C.; Yang, Y. An outcome-space finite algorithm for solving linear multiplicative programming. Appl. Math. Comput. 2006, 179, 494–505. [Google Scholar] [CrossRef]
  28. Kuno, T.; Yajima, Y.; Konno, H. An outer approximation method for minimizing the product of several convex functions on a convex set. J. Glob. Optim. 1993, 3, 325–335. [Google Scholar] [CrossRef]
  29. Pardalos, P.M. Polynomial time algorithms for some classes of constrained quadratic problems. Optimization 1990, 21, 843–853. [Google Scholar] [CrossRef]
  30. Liu, X.J.; Umegaki, T.; Yamamoto, Y. Heuristic methods for linear multiplicative programming. J. Glob. Optim. 1999, 15, 433–447. [Google Scholar] [CrossRef]
  31. Benson, H.P.; Boger, G.M. Multiplicative programming problems: Analysis and efficient point search heuristic. J. Optim. Theory Appl. 1997, 94, 487–510. [Google Scholar] [CrossRef]
  32. Benson, H.P.; Boger, G.M. Outcome-space cutting-plane algorithm for linear multiplicative programming. J. Optim. Theory Appl. 2000, 104, 301–332. [Google Scholar] [CrossRef]
  33. Konno, H.; Kuno, T.; Yajima, Y. Global minimization of a generalized convex multiplicative function. J. Glob. Optim. 1994, 4, 47–62. [Google Scholar] [CrossRef]
  34. Konno, H.; Yajima, Y.; Matsui, T. Parametric simplex algorithms for solving a special class of nonconvex minimization problems. J. Glob. Optim. 1991, 1, 65–81. [Google Scholar] [CrossRef]
  35. Van Thoai, N. A global optimization approach for solving the convex multiplicative programming problem. J. Glob. Optim. 1991, 1, 341–357. [Google Scholar] [CrossRef]
  36. Youness, E.A. Level set algorithm for solving convex multiplicative programming problems. Appl. Math. Comput. 2005, 167, 1412–1417. [Google Scholar] [CrossRef]
  37. Liu, S.; Zhao, Y. An efficient algorithm for globally solving generalized linear multiplicative programming. J. Comput. Appl. Math. 2016, 296, 840–847. [Google Scholar] [CrossRef]
  38. Shao, L.; Ehrgott, M. Primal and dual multi-objective linear programming algorithms for linear multiplicative programmes. Optimization 2016, 65, 415–431. [Google Scholar] [CrossRef] [Green Version]
  39. Peiping, S.; Lufan, W. A Fully Polynomial Time Approximation Algorithm for Generalized Linear Multiplicative Programming. Math. Appl. 2018, 31, 208–213. [Google Scholar]
  40. Benson, H.P. Decomposition branch-and-bound based algorithm for linear programs with additional multiplicative constraints. J. Optim. Theory Appl. 2005, 126, 41–61. [Google Scholar] [CrossRef]
  41. Wang, C.F.; Liu, S.Y. A new linearization method for generalized linear multiplicative programming. Comput. Oper. Res. 2011, 38, 1008–1013. [Google Scholar] [CrossRef]
  42. Shen, P.; Huang, B. Global algorithm for solving linear multiplicative programming problems. Optim. Lett. 2019, 2019, 1–18. [Google Scholar] [CrossRef]
  43. Sahinidis, N. BARON User Manual v.19.7.13 [EB/OL]. 2019. Available online: http://minlp.com (accessed on 7 November 2019).
  44. Liu, X.; Gao, Y.L.; Zhang, B.; Tian, F.P. A New Global Optimization Algorithm for a Class of Linear Fractional Programming. Mathematics 2019, 7, 867. [Google Scholar] [CrossRef] [Green Version]
Table 1. Comparison of results in Examples 1–8.
No. | Ref. | Solution | Optimum | Iter | Time | ϵ
1[23](1.9999998, 7.9999988)10.0000090410.02 10 5
[25](2.0, 8.0)10.0485.0780 10 3
[26](2.0, 8.0)10.010.0128 10 6
[42](2.0, 8.0)10.010.046 10 4
O S B B R A (2.0000, 8.0000)10.000020.0182 10 6
2[25](1.3148, 0.1396, 0.0, 0.4233 )0.890210.1880 10 3
[26](1.3148, 0.1396, 0.0, 0.4233)0.890210.0601 10 6
[39](1.3148, 0.1396, 0.0000, 0.4233)0.89019030.047 0.05
[42](1.3148, 0.1396, 0.0000, 0.4233)0.890210.093 10 4
O S B B R A (1.3148, 0.1396, 0.0000, 0.4233)0.890210.0226 10 6
3[23](8.0, 0.0, 1.0, …)0.90123530.00 10 3
[27](0.0, 8.0, 1.0, …)0.90123530.0469
O S B B R A (0.0, 8.0, 1.0, …)0.90123550.0743 10 3
4[26](0.0, 4.0)310.0693 10 6
[42](0, 4)310.062 10 4
O S B B R A (0.0000, 4.0000)3.000020.0218 10 6
5[26](1.0, 3.0)−1310.0868 10 6
[42](1, 3 )−1310.047 10 4
O S B B R A (1.0000, 3.0000)−13.0000160.1845 10 6
6[26](1.0, 4.0)−2210.0849 10 6
[42](1, 4)−2210.046 10 4
O S B B R A (1.0000, 4.0000)−22.0000190.2143 10 6
7 B A R O N (0.0000, 0.0000, 2.8000)1.680010.3824 10 6
O S B B R A (0.0000, 0.0000, 2.7999)1.680090.1377 10 6
8 B A R O N (1.0000, 1.9999, 1.0000, 1.0000, 1.0000)9503.99991551.4662 10 6
O S B B R A (1.0000, 2.0000, 1.0000, 1.0000, 1.0000)9503.999920.0691 10 6
Table 2. The average result of 10 low-dimensional random problems (LMP1).
(p, m, n) | Opt.val | Optimum: OSBBRA / Ref. [27] / Ref. [38] | Optimum.ratio × 10^5: OSBBRA / Ref. [27] / Ref. [38] | Time: OSBBRA / Ref. [27] / Ref. [38]
(2, 10, 100)29.909729.909729.918729.91570.06733.61922.24110.55370.47370.1615
(2, 20, 200)133.3865133.3866133.434133.41830.002334.30352.29440.57461.24240.6026
(2, 30, 300)182.721182.721182.7537182.74280.007224.62071.64121.81973.1941.9535
(2, 40, 400)351.5904351.5926351.754351.69940.603943.31722.88742.23386.49194.5876
(3, 10, 100)96.644596.644596.654696.65140.064314.00319.54930.63250.58510.286
(3, 20, 200)1375.65951375.64341375.95411375.85711.1743.13382.88912.24121.43850.7983
(3, 30, 300)6510.45616510.09546514.1456512.92545.5451.380934.43463.7963.63752.3271
(3, 40, 400)10,302.613410,303.170810,306.057810,304.92655.4127.529418.531413.13118.03745.6722
(4, 10, 100)521.4775521.4759521.7151521.63640.29348.682732.54932.66621.10850.4045
(4, 20, 200)22,512.683622,511.832722,527.319622,522.46573.7865.012343.451319.83162.43830.9227
(4, 30, 300)248,728.2167248,734.8081248,898.3767248,841.81082.6568.412145.669925.08138.88085.8772
(4, 40, 400)170,323.0981170,338.2399170,446.6353170,405.94778.8972.531148.642527.339312.63217.5582
Table 3. The average result of 10 high-dimensional random problems (LMP1).
(p, m, n) | Optimum: OSBBRA / Ref. [27] / Ref. [38] | Time: OSBBRA / Ref. [27] / Ref. [38]
(5, 50, 1000)1.5534 × 10 8 1.5541 × 10 8 1.5539 × 10 8 941.3086873.6958723.1655
(5, 60, 2000)6.5945 × 10 8 6.5985 × 10 8 6.5972 × 10 8 449.88721529.7757917.3199
(5, 70, 3000)8.0127 × 10 9 8.0144 × 10 9 8.0138 × 10 9 782.06791639.86101323.6961
(5, 80, 4000)4.0686 × 10 9 4.0692 × 10 9 1007.69141796.8545
(6, 50, 1000)2.5156 × 10 9 2.5166 × 10 9 2.5163 × 10 9 1202.90661332.19531081.1864
(6, 60, 2000)1.8276 × 10 10 1.8281 × 10 10 1.8279 × 10 10 1101.43071542.39411484.3904
(6, 70, 3000)2.0195 × 10 11 2.0208 × 10 11 2.0204 × 10 11 1360.13561927.71731619.3256
(6, 80, 4000)3.5453 × 10 11 1404.9818
(7, 50, 1000)2.0887 × 10 11 2.0892 × 10 11 2.0890 × 10 11 1500.49771661.26531321.2203
(7, 60, 2000)1.4108 × 10 13 1.4111 × 10 13 1.4110 × 10 13 1690.93172169.46101923.1707
(7, 70, 3000)5.0654 × 10 13 5.0655 × 10 13 1893.36332164.7689
(7, 80, 4000)2.6370 × 10 14 2116.0214
Table 4. The results of random calculation for (LMP1).
( m , n ) p = 2 p = 3 p = 4 p = 5 p = 6 p = 7
(10, 20)Ave.Time0.14520.32050.66541.26091.728719.2006
Ave.Iter4.610.710.845.743.8458.8
Ave.Node1.42.92.211.812.8139.2
(20, 40)Ave.Time0.15140.32861.16681.39296.650623.8151
Ave.Iter3.99.725.145.777.0496.6
Ave.Node1.32.85.113.925.6131.7
(30, 60)Ave.Time0.17200.45811.81457.676611.112564.3865
Ave.Iter5.111.532.6184.6101.91013.5
Ave.Node1.22.38.631.728.7157.7
(40, 80)Ave.Time0.24410.70602.278010.410816.394586.1229
Ave.Iter5.212.946.4117.6133.11177.3
Ave.Node1.13.16.221.040.0166.9
(50, 100)Ave.Time0.41781.34772.527715.590222.0331127.1028
Ave.Iter6.115.342.6167271.21398
Ave.Node1.43.215.427.255.8186.9
(60, 120)Ave.Time0.61792.79116.579718.371125.2491155.5248
Ave.Iter6.820.876.5143.4387.1640.2
Ave.Node1.54.210.823.585.1109.2
(70, 140)Ave.Time1.82343.83808.149338.753763.8334185.5591
Ave.Iter9.024.062.2212.5766.51197.4
Ave.Node1.65.313.430.469.4196.0
(80, 160)Ave.Time1.98866.768814.768881.181698.5674278.3208
Ave.Iter6.527.543.8310.2315.3611891.1
Ave.Node1.54.111.439.472.4122.3
(90, 180)Ave.Time2.41048.811419.228193.8722110.7434293.8722
Ave.Iter6.926.637.2298.4444.2816.7
Ave.Node1.24.216.632.491.2131.6
(100, 200)Ave.Time4.102714.277246.8494114.7671120.0034321.4063
Ave.Iter8.431.079.2230.6657.8469.3
Ave.Node1.75.215.826.1116.997.0
Table 5. The results of random calculation for (LMP1).
(p, m, n) | Ave.Iter | Ave.Time | Ave.Node
(10, 10, 2) | 10.9 | 0.3576 | 1.3
(20, 10, 2) | 2349 | 48.5707 | 1891.8
(30, 10, 2) | 11,369.5 | 240.4569 | 8072.5
(40, 10, 2) | 5060.4 | 120.8821 | 4937.7
(2, 10, 1000) | 15.5 | 2.6293 | 4.0
(2, 10, 2000) | 28.5 | 14.0012 | 75.9
(3, 10, 1000) | 101.8 | 19.3235 | 25.3
(3, 10, 2000) | 185.4 | 90.3898 | 37.0
(4, 10, 1000) | 757.6 | 156.5649 | 134.7
(4, 10, 2000) | 1352.1 | 995.4707 | 257.3
Table 6. Computational results on (LMP2) (p = 2) and comparison with results reported in [19,41].
(m, n) | Ref. [19] Avg(Std) Time | Ref. [19] Avg(Std) Iter | Ref. [41] Avg(Std) Time | Ref. [41] Avg(Std) Iter | OSBBRA Avg(Std) Time | OSBBRA Avg(Std) Iter
(10, 20) | 0.1 (0.1) | 6.2 (4.3) | 0.6062 (0.0695) | 14.2 (1.5492) | 0.2083 (0.3861) | 2.6 (6.2561)
(20, 20) | 0.2 (0.1) | 7.0 (2.8) | 0.8368 (0.0756) | 17.4 (1.7127) | 0.2814 (0.5504) | 4.8 (6.9793)
(22, 20) | 0.2 (0.1) | 8.8 (4.2) | 0.9460 (0.1235) | 18.5 (1.9003) | 0.3231 (0.9257) | 6.0 (12.2564)
(20, 30) | 0.3 (0.1) | 8.0 (3.6) | 1.0781 (0.0674) | 19.9 (0.5676) | 0.3302 (0.4899) | 6.4 (7.4951)
(35, 50) | 1.0 (0.4) | 11.0 (3.5) | 1.8415 (0.1338) | 21.2 (0.4316) | 0.4267 (0.8646) | 8.1 (11.6772)
(45, 60) | 1.2 (0.3) | 13.3 (4.9) | 2.4338 (0.1016) | 23.0 (0.6667) | 0.4867 (0.8930) | 8.7 (14.2688)
(45, 100) | 3.9 (1.2) | 15.2 (6.0) | 5.1287 (0.0935) | 35.7 (1.1595) | 0.6049 (0.9664) | 11.9 (12.3809)
(60, 100) | 5.6 (1.2) | 14.8 (3.8) | 6.8143 (0.1713) | 36.1 (0.7379) | 0.7955 (1.2783) | 9.7 (12.9822)
(70, 100) | 6.5 (3.0) | 17.5 (7.2) | 8.1967 (0.2121) | 36.6 (1.2649) | 0.8152 (1.3057) | 8.3 (11.6638)
(70, 120) | 9.0 (1.8) | 17.2 (4.8) | 9.5642 (0.2975) | 39.1 (1.6633) | 0.9693 (1.3529) | 10.1 (14.6462)
(100, 100) | 7.6 (1.0) | 13.3 (4.3) | 13.0578 (0.3543) | 37.5 (2.1731) | 1.1889 (1.2506) | 11.1 (9.0549)
(102, 150) | 15.9 (2.9) | 24.8 (7.0) | – | – | 1.7051 (0.9492) | 12.6 (8.9361)
(102, 190) | 21.4 (3.5) | 28.4 (7.5) | – | – | 1.8014 (1.7103) | 8.4 (8.0443)
(72, 199) | 18.3 (6.2) | 25.5 (8.3) | – | – | 1.5827 (2.1399) | 9.7 (9.6171)
(110, 199) | 22.7 (3.0) | 21.7 (5.7) | – | – | 2.9039 (4.1332) | 9.7 (16.5476)
Table 7. Computational results of normalized values on (LMP2).
Normalized values for (LMP2), p = 2:
(m, n) | Ref. [19] (Avg) Time | Ref. [19] (Avg) Iter | Ref. [41] (Avg) Time | Ref. [41] (Avg) Iter | OSBBRA (Avg) Time | OSBBRA (Avg) Iter
(10, 20) | 1 | 1 | 1 | 1 | 1 | 1
(20, 20) | 1 | 1 | 1 | 1 | 1 | 2
(20, 30) | 2 | 1 | 2 | 1 | 2 | 2
(45, 60) | 8 | 2 | 4 | 2 | 2 | 3
(70, 100) | 46 | 3 | 14 | 3 | 4 | 3
(100, 100) | 54 | 2 | 22 | 2 | 6 | 4
(102, 150) | 114 | 4 | – | – | 8 | 5
(110, 199) | 162 | 4 | – | – | 14 | 4
Table 8. Computational results on (LMP3) and comparison with results reported in [19].
p | (m, n) | Ref. [19] Avg(Std) Time | Ref. [19] Avg(Std) Iter | OSBBRA Avg(Std) Time | OSBBRA Avg(Std) Iter
2 | (20, 30) | 0.3 (0.1) | 9.0 (3.1) | 0.1 (0.1) | 2.3 (5.5)
2 | (100, 100) | 5.8 (2.0) | 17.5 (8.5) | 0.4 (1.2) | 2.5 (7.5)
2 | (120, 120) | 8.9 (2.7) | 15.8 (6.8) | 0.9 (1.8) | 3.0 (7.4)
2 | (200, 200) | 50.1 (13.0) | 25.8 (6.2) | 8.8 (22.6) | 8.3 (18.6)
3 | (20, 30) | 0.8 (0.3) | 39.4 (20.2) | 0.2 (0.9) | 2.3 (6.6)
3 | (100, 100) | 25.6 (9.6) | 90.6 (24.4) | 1.2 (3.4) | 7.8 (23.2)
3 | (120, 120) | 35.3 (10.0) | 82.1 (40.9) | 2.1 (6.5) | 6.3 (21.3)
3 | (200, 200) | 149.0 (60.2) | 87.3 (46.9) | 18.8 (92.4) | 9.9 (43.1)
4 | (20, 30) | 2.6 (0.8) | 158.2 (64.8) | 0.2 (1.2) | 7 (56.9)
4 | (100, 100) | 61.0 (21.1) | 243.8 (117.8) | 5.7 (50.0) | 17.9 (160.3)
4 | (120, 120) | 94.2 (23.3) | 271.4 (70.2) | 17.2 (94.5) | 35.3 (186.4)
4 | (200, 200) | 396.3 (189.4) | 301.4 (171.7) | 127.0 (812.3) | 47.6 (301.1)
5 | (20, 30) | 6.0 (2.0) | 370.8 (108.2) | 0.4 (3.3) | 13.9 (122.3)
5 | (100, 100) | 197.9 (38.4) | 830.8 (148.3) | 16.5 (152.3) | 56.7 (528.4)
5 | (120, 120) | 245.4 (97.0) | 686.0 (285.2) | 21.5 (196.0) | 46.4 (430.7)
5 | (200, 200) | 1381.1 (860.1) | 1047.5 (693.1) | 122.8 (1137.1) | 52.4 (491.5)
Table 9. Computational results on (LMP3) and comparison with results reported in [19].
Normalized Avg.Time for (LMP3):
(m, n) | Ref. [19] (r_2, r_3, r_4, r_5) | OSBBRA (r_2, r_3, r_4, r_5)
(20, 30) | 1, 3, 10, 23 | 1, 2, 2, 4
(100, 100) | 1, 4, 10, 34 | 1, 3, 14, 41
(120, 120) | 1, 4, 11, 28 | 1, 2, 19, 24
(200, 200) | 1, 3, 8, 28 | 1, 2, 14, 14
