1. Introduction
Fractional programming is an important branch of nonlinear optimization that has attracted the interest of researchers for several decades. The sum-of-linear-ratios problem is a special class of fractional programming problem with wide applications, for example in transportation planning, economics [1], investment and production control [2,3,4], and multi-objective portfolios [5]. The primary challenges in solving linear fractional programming arise from its lack of useful structure (convexity or otherwise) and from the number of ratios and the dimension of the decision space. Theoretically, the problem is NP-hard [6,7]. In addition, such a problem may have several local optima [8], which interferes with finding the global optimum and increases the difficulty of the problem. It is therefore worthwhile to study this kind of problem. In this paper, we investigate the following linear fractional programming problem:
$$\min\; f(x)=\sum_{i=1}^{p}\frac{c_i^{\top}x+d_i}{e_i^{\top}x+f_i}\qquad \text{s.t.}\; x\in X=\{x\in\mathbb{R}^{n}: Ax\le b,\ x\ge 0\},$$
where the feasible domain $X$ is $n$-dimensional, nonempty, and bounded; $A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, $c_i\in\mathbb{R}^{n}$, $e_i\in\mathbb{R}^{n}$, $d_i\in\mathbb{R}$, $f_i\in\mathbb{R}$, and $e_i^{\top}x+f_i\neq 0$ on $X$ for each $i=1,\dots,p$.
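For concreteness, here is a small illustrative instance of this form (constructed for exposition only; it is not one of the test problems):
$$\min\;\frac{x_1+x_2+1}{x_1+2}+\frac{2x_1+x_2+3}{x_2+1}\qquad\text{s.t.}\; x_1+x_2\le 4,\ x_1,x_2\ge 0,$$
with $p=2$ ratios, $n=2$ decision variables, and $m=1$ linear constraint; both denominators are strictly positive on the feasible set.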
In practical applications, p usually does not exceed 10. At present, many algorithms have been proposed to solve the problem with a limited number of ratios. For instance, in 1962, Charnes et al. gave an effective elementary simplex method for the single-ratio case $p=1$ [9]. For $p=2$, Konno proposed a similar parametric elementary simplex method on the basis of [9], which can be used to solve large-scale problems [10]. For $p=3$, Konno et al. constructed an effective heuristic algorithm by developing the parametric simplex algorithm [11]. For general $p$, Shen et al. reduced the original nonconvex program to a series of linear programming problems by using equivalent transformation and linearization techniques, so as to solve the linear fractional problem with coefficients [12]. Nguyen and Tuy considered a unified monotonic approach to generalized linear fractional programming [13]. Benson presented a simplicial branch-and-bound duality-bounds algorithm by applying Lagrangian duality theory [6]. Jiao et al. gave a new interval-reduction branch-and-bound algorithm for globally solving the linear ratios problem in the denominator outcome space [14]. By exploring a well-defined nonuniform mesh, Shen et al. solved an equivalent optimization problem and proposed a fully polynomial time approximation algorithm [15]. In the same year, Hu et al. proposed a new branch-and-bound algorithm for solving low-dimensional linear fractional programming [16]. Shen et al. introduced a practicable regional division and reduction algorithm for minimizing the sum of linear fractional functions over a polyhedron [17]. Using a suitable transformation and linearization technique, Zhang and Wang proposed a new branch-and-bound algorithm with two reducing techniques to solve generalized linear fractional programming [18]. By adopting an exponent transformation technique, Jiao et al. proposed a three-level linear relaxation branch-and-bound algorithm to solve the generalized polynomial ratios problem with coefficients [19]. Based on an image space in which the objective function is easier to handle in a certain direction, Falk et al. transformed the problem into this image space by introducing new variables, and then analyzed and solved the linear fractional program [20]. Gao et al. transformed the original problem into an equivalent bilinear programming problem, used the convex and concave envelopes of bilinear functions to determine a lower bound on the optimal value of the original problem, and then proposed a branch-and-bound algorithm [21]. By dividing the box in which the decision variables lie, Ji et al. proposed a new deterministic global optimization algorithm that relaxes the denominator on each box [22]. Furthermore, according to references [23,24], there are other algorithms that can be used to solve the problem.
In this article, a new branch-and-bound algorithm based on branching in the output-space is proposed for globally solving the problem. To this end, an equivalent optimization problem is first presented. Next, the objective function and constraint functions of the equivalent problem are relaxed using four auxiliary sets and the multiplication rules of real-number arithmetic. Based on this operation, a linear relaxation programming problem that provides a reliable lower bound for the original problem is constructed. Finally, a new branch-and-bound algorithm for the problem is designed. Compared with the methods mentioned above (e.g., [9,10,11,12,13,14,15,17,18,23,24]), the contribution of this research is three-fold. First of all, the lower bound of the subproblem at each node can be obtained easily, solely by solving linear programs. Secondly, the performance of the algorithm depends on the difference between the number n of decision variables and the number p of ratios. Thirdly, the problem in this article is more general than those considered in [14,17,18], since we only require each denominator $e_i^{\top}x+f_i$ to be nonzero on $X$ and do not need to convert $e_i^{\top}x+f_i<0$ to $e_i^{\top}x+f_i>0$ for each i. However, the problem solved by our model must ensure that every decision variable is non-negative, which is a limitation of the problem we study. Finally, computational results on problems with a large number of ratio terms are reported below to illustrate the feasibility and validity of the proposed algorithm.
This paper is organized as follows. In Section 2, the problem is transformed into an equivalent non-convex programming problem. Section 3 shows how to construct a linear relaxation of the equivalent problem. In Section 4, we give the branching rules on a hyper-rectangle. In Section 5, an output-space branch-and-bound algorithm is presented and its convergence is established. Section 6 describes some existing test examples from the literature and gives the computational results and numerical analysis. Finally, the method of this paper is briefly reviewed, and its extension to multi-objective fractional programming is outlined as future work.
5. Output-Space Branch-and-Bound Algorithm and Its Convergence
To allow a full description of the algorithm, when the algorithm reaches iteration k, we use the following notation: $H^k$ is the hyper-rectangle to be refined at the current iteration; $Q$ is the set of feasible solutions found so far; $\Omega$ is the set of hyper-rectangles remaining after pruning; $UB_k$ is the upper bound of the global optimal value of the problem at iteration k; $LB_k$ is the lower bound of the global optimal value of the problem at iteration k; $LB(H^k)$ represents the optimal value of the linear relaxation problem over $H^k$, and $t(H^k)$ is its corresponding optimal solution.
Using the above, a description of the output-space branch-and-bound algorithm for solving the problem is as follows.
Step 1. Set the tolerance $\epsilon>0$. Construct the initial hyper-rectangle $H^0$. Solve the linear relaxation programming problem over $H^0$; the corresponding optimal solution and optimal value are recorded as $t(H^0)$ and $LB(H^0)$, respectively. Then $LB_0=LB(H^0)$ is the initial lower bound of the global optimal value, and the initial upper bound $UB_0$ is the objective value of the equivalent problem at the feasible point associated with $t(H^0)$. If $UB_0-LB_0\le\epsilon$, then stop: an $\epsilon$-global optimal solution of the problem has been found. Otherwise, set $\Omega=\{H^0\}$, initialize $Q$ with the feasible point found, set the initial iteration number $k=0$, and go to Step 2.
Step 2. If $UB_k-LB_k\le\epsilon$, then stop the iteration of the algorithm and output the current best feasible solution as the global optimal solution of the problem, together with the global optimal value $UB_k$; otherwise, go to Step 3.
Step 3. Select from $\Omega$ the hyper-rectangle $H^k$ that attains the current lower bound, i.e., $LB_k=LB(H^k)$.
Step 4. Using the rectangular branching process in Section 4, $H^k$ is divided into two sub-rectangles $H^{k,1}$ and $H^{k,2}$ that satisfy $H^{k,1}\cup H^{k,2}=H^k$. For each $j=1,2$, solve the linear relaxation problem over $H^{k,j}$ to obtain $LB(H^{k,j})$ and $t(H^{k,j})$. If both sub-rectangles can be discarded because their lower bounds exceed $UB_k-\epsilon$, go to Step 3. Otherwise, set $\Omega=(\Omega\setminus\{H^k\})\cup\{H^{k,j}: LB(H^{k,j})\le UB_k-\epsilon\}$, and continue.
Step 5. Evaluate the objective of the equivalent problem at the feasible points associated with $t(H^{k,1})$ and $t(H^{k,2})$, and add them to $Q$. If the smallest of these values improves on $UB_k$, update the upper bound and record the corresponding point as the current best solution. Set $UB_{k+1}$ to the updated upper bound, $LB_{k+1}=\min\{LB(H): H\in\Omega\}$, $k=k+1$, and go to Step 2.
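To make the flow of Steps 1-5 concrete, the following is a minimal, self-contained Python sketch of an output-space branch-and-bound of this general type, applied to the small illustrative instance from the Introduction. It is not the paper's MATLAB implementation: for brevity it assumes all numerators and denominators are strictly positive on $X$, branches on the box of denominator values $t_i=e_i^{\top}x+f_i$, and uses the simple linear under-estimator $\sum_i(c_i^{\top}x+d_i)/u_i$ as the lower-bounding LP rather than the relaxation of Section 3; the function names (`lower_bound`, `solve`) are ours.

```python
import numpy as np
from scipy.optimize import linprog

# Instance: min sum_i (c_i^T x + d_i)/(e_i^T x + f_i) over X = {Ax <= b, x >= 0}.
# Assumption for this sketch: numerators and denominators are positive on X.
c = np.array([[1.0, 1.0], [2.0, 1.0]]); d = np.array([1.0, 3.0])
e = np.array([[1.0, 0.0], [0.0, 1.0]]); f = np.array([2.0, 1.0])
A = np.array([[1.0, 1.0]]); b = np.array([4.0])
p = len(d)

def objective(x):
    return sum((c[i] @ x + d[i]) / (e[i] @ x + f[i]) for i in range(p))

def initial_box():
    # Bound each denominator t_i = e_i^T x + f_i over X by solving 2p LPs.
    l, u = np.empty(p), np.empty(p)
    for i in range(p):
        l[i] = linprog(e[i], A_ub=A, b_ub=b).fun + f[i]
        u[i] = -linprog(-e[i], A_ub=A, b_ub=b).fun + f[i]
    return l, u

def lower_bound(l, u):
    # On the box l <= t <= u, sum_i (c_i^T x + d_i)/u_i underestimates the
    # objective (numerators are nonnegative), giving an LP lower bound.
    obj = sum(c[i] / u[i] for i in range(p))
    off = sum(d[i] / u[i] for i in range(p))
    res = linprog(obj, A_ub=np.vstack([A, e, -e]),
                  b_ub=np.concatenate([b, u - f, f - l]))
    if not res.success:            # box contains no feasible point: prune it
        return np.inf, None
    return res.fun + off, res.x

def solve(eps=1e-4, max_iter=10000):
    l, u = initial_box()
    lb, x = lower_bound(l, u)                      # Step 1: initial bounds
    best_x, best_val = x, objective(x)
    boxes = [(lb, l, u)]                           # plays the role of Omega
    for _ in range(max_iter):
        if not boxes:
            break
        boxes.sort(key=lambda box: box[0])
        lb, l, u = boxes.pop(0)                    # Step 3: smallest lower bound
        if best_val - lb <= eps:                   # Step 2: epsilon-optimal
            break
        j = int(np.argmax(u - l))                  # Step 4: bisect longest edge
        mid = 0.5 * (l[j] + u[j])
        for lo, hi in ((l[j], mid), (mid, u[j])):
            ll, uu = l.copy(), u.copy()
            ll[j], uu[j] = lo, hi
            sub_lb, x = lower_bound(ll, uu)
            if x is None:
                continue
            val = objective(x)                     # Step 5: update upper bound
            if val < best_val:
                best_x, best_val = x, val
            if sub_lb <= best_val - eps:           # keep only unpruned boxes
                boxes.append((sub_lb, ll, uu))
    return best_x, best_val

x_opt, f_opt = solve()
print("approximate global minimizer:", x_opt, "value:", f_opt)
```

Run on the instance above, the loop terminates once the gap between the incumbent value and the smallest box lower bound falls below `eps`; the sort-and-pop selection mirrors Step 3's choice of the rectangle attaining the current lower bound.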
Remark 3. The branching target of our branch-and-bound algorithm is the p-dimensional output-space, so our algorithm can be called an output-space branch-and-bound algorithm, denoted OSBBA.
Remark 4. It can be seen from Step 4 and Step 5 that the number of elements in Q does not exceed two in each iterative step, and at the same time, only two function values are calculated in Step 5 to update the upper bound.
Remark 5. In Step 4, after each branch we save into Ω only those hyper-rectangles that survive the bounding test, which implements the pruning operation of the branching algorithm.
Remark 6. The convergence rate of the algorithm OSBBA is related to the optimality tolerance ϵ and to the initial hyper-rectangle $H^0$. It can be seen from Theorem 5 below that the convergence rate is proportional to the size of the accuracy ϵ and inversely proportional to the diameter of the initial hyper-rectangle $H^0$. In general, the accuracy is given in advance, so the convergence rate mainly depends on the diameter of $H^0$.
Theorem 2. Let $H=[l,u]\subseteq H^0$. For each $t\in H$, as the diameter of $H$ tends to zero, the gap between the objective of the equivalent problem and its linear relaxation over $H$ tends to zero.
Proof of Theorem 2. For every $t\in H$, merging (1) and (2) yields the two bounding inequalities above. Since the feasible domain is bounded, each of the quantities appearing in these bounds is bounded for every $i$, and the corresponding differences shrink as the box $H$ shrinks. Combining these inequalities with Equation (4) gives the claimed limit. Therefore, the theorem holds. □
According to Theorem 2, we also know that the hyper-rectangle in t-space is gradually refined and the relaxed feasible region progressively approaches the original feasible region as the algorithm proceeds.
Theorem 3. (a) If the algorithm terminates within finitely many iterations, a globally optimal solution of the problem is found.
(b) If the algorithm generates an infinite sequence in the iterative process, then any accumulation point of that sequence is a global optimal solution of the problem.
Proof of Theorem 3. (a) If the algorithm is finite, assume it stops at the kth iteration, $k\ge 0$. From the termination rule of Step 2, we know that $UB_k-LB_k\le\epsilon$, which implies that
$$UB_k-\epsilon\le LB_k.\qquad(5)$$
Assuming that the global optimal value is $v$, we know that
$$LB_k\le v\le UB_k;\qquad(6)$$
hence, combining inequalities (5) and (6), we have $UB_k-\epsilon\le v\le UB_k$, so the solution attaining $UB_k$ is $\epsilon$-globally optimal, and part (a) has been proven.
(b) If the iteration of the algorithm is infinite, then in this process an infinite sequence of feasible solutions of the problem is generated, together with the accompanying sequence of optimal solutions of the corresponding linear relaxation problems. According to Steps 3-5 of the algorithm, we have
$$LB_k\le v\le UB_k.\qquad(8)$$
Because the sequence $\{LB_k\}$ is nondecreasing and bounded and the sequence $\{UB_k\}$ is nonincreasing and bounded, both are convergent. Taking the limit on both sides of (8), we have
$$\lim_{k\to\infty}LB_k\le v\le\lim_{k\to\infty}UB_k.\qquad(9)$$
Then, since the subdivision is exhaustive, the two limits coincide, and Formula (9) becomes
$$\lim_{k\to\infty}LB_k=v=\lim_{k\to\infty}UB_k.\qquad(10)$$
Without loss of generality, assume that the rectangle sequence $\{H^k\}$ is nested and that each rectangle is obtained by dividing its predecessor into two parts of equal width; then the diameters of the $H^k$ tend to zero, and in the process a sequence of points t is generated whose accumulation points lie in the limiting box, together with a corresponding sequence of feasible decision points. Because of the continuity of the objective function and Formula (10), any accumulation point of this solution sequence is a global optimal solution of the problem. □
From Theorem 3, we know that the algorithm in this paper is convergent. We now use Theorems 4 and 5 to show that the convergence rate of our algorithm is related to the size of p. For the detailed proof of Theorem 4, see [25]; the other concepts in the theorem are also derived from [25], and we refer interested readers there for details.
As the sub-hyper-rectangles obtained by our branching method are not necessarily congruent, we take $\delta=\max_{1\le j\le s}\delta(H_j)$, where the definition of s is given below. The definition of $\delta(H)$ is the same as that of Notation 1 in [25]; it represents the diameter of the hyper-rectangle H. Therefore, $\delta$ represents the maximum diameter of the s hyper-rectangles. In order to connect well with the content of this paper, we adjust the relevant symbols and reinterpret them accordingly.
Theorem 4. Consider the big cube small cube algorithm with a bounding operation that has a rate of convergence of q. Furthermore, assume a feasible hyper-rectangle H and the constants ϵ and C as before. Moreover, assume a branching process that splits the selected hyper-rectangle along each side, i.e., into $z^{r}$ smaller hyper-rectangles. Then the worst-case number of iterations of the big cube small cube method can be bounded from above by Formula (11).
Proof of Theorem 4. The proof is similar to that of Theorem 2 in [25] and is thus omitted. □
In Theorem 4, r represents the spatial dimension of the hyper-rectangle to be divided. At the same time, Tables 1 and 2 in [25] show that $q=1$ is the worst case, that is, the case in which the algorithm needs to subdivide hyper-rectangles the most times during the iteration. For the convenience of the discussion, we assume $q=1$, and give Theorem 5 to show that the convergence rate of our algorithm is related to the size of p.
Theorem 5. For the algorithm OSBBA, assume a feasible hyper-rectangle $H^0$, a fixed positive constant C, and the accuracy ϵ. In addition, assume that the branching process eventually divides the hyper-rectangle into $2^{p}$ small hyper-rectangles. Then, in the worst case, the number of iterations of the algorithm when dividing the hyper-rectangle can be expressed by Formula (12). We call this quantity the convergence rate of the algorithm OSBBA and denote it by $N_p$.
Proof of Theorem 5. Substituting $r=p$ and the corresponding values of C, q, z and H into Theorem 4, the proof is similar to that of Theorem 4, and the reader may refer to [25]. □
In addition, the algorithms in [18,26,27,28,29] subdivide the n-dimensional hyper-rectangle in which the decision variable lies. Similar to Theorem 4, when they divide the hyper-rectangle, the number of iterations in the worst case can also be expressed by a formula of the same shape, recorded here as Formula (13), in which n and the associated constants play the roles of r, C, q, z and H in (11). We record the convergence rate of the algorithms in [18,26,27,28,29] as $N_n$.
By means of Formulas (12) and (13), when $p<n$, the following conclusions are drawn:
(i) If the constants in (12) are no larger than the corresponding constants in (13), then $N_p<N_n$.
(ii) If a constant in (12) exceeds its counterpart in (13), there must still be a positive number such that, whenever the problem size exceeds it, $N_p<N_n$ also holds.
Both conclusions (i) and (ii) show that when $p<n$, the relation $N_p<N_n$ is established.
Remark 7. In Formula (12) of Theorem 5 we took the worst-case rate $q=1$, while the rate in Formula (13) is not restricted; this means that $N_p$ is compared with $N_n$ under the slowest convergence of our algorithm. Even so, when $p<n$ there will always be cases (i) and (ii), which is a clearer indication that $N_p<N_n$.
It can be seen that both $N_p$ and $N_n$ grow exponentially, but the size of p is generally not more than 10 and $p\ll n$, so our algorithm has an advantage in solving large-scale problems in the case of $p\ll n$. The experimental analysis of several large-scale random examples below will confirm this point again.
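To give a feel for the orders of magnitude involved (an illustrative count, not a quantity taken from Formulas (12) or (13)): bisecting every side of a box once produces $2^{r}$ sub-boxes, so one full subdivision step in a p-dimensional output-space with $p=10$ produces at most $2^{10}=1024$ sub-boxes, whereas the same step in an n-dimensional decision space with $n=100$ already produces up to $2^{100}\approx 1.27\times 10^{30}$.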
6. Numerical Examples
Now, we give several examples and a set of random instances to demonstrate the validity of the branch-and-bound algorithm in this paper.
We coded the algorithms in Matlab 2017a and ran the tests on a computer with an Intel(R) Core(TM) i7-4790s 3.20 GHz processor and 4 GB of RAM, under the Microsoft Windows 7 operating system. In solving the linear relaxation subproblems, we use the simplex method of the linprog command in Matlab 2017a.
In Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, the symbols in the table headers are, respectively: $x^*$, the optimal solution of the problem; $f(x^*)$, the optimal value of the objective function; Iter, the number of iterations for Problems 1–11; Ave.Iter, the average number of iterations for Problems 12–13; ϵ, the tolerance; Time, the CPU running time for Problems 1–11; Ave.Time, the average CPU running time for Problems 12–13; p, the number of linear ratios in the objective function; m, the number of linear constraints; n, the dimension of the decision variable; SR, the success rate of the algorithm on Problem 12. When an Ave.Time or Ave.Iter entry shows "–", the algorithm failed on that problem.
As can be seen from Table 1, our algorithm accurately obtains the global optimal solutions of these 11 low-dimensional examples, which shows its effectiveness and feasibility. However, compared with other algorithms in the literature, its performance on these examples is relatively poor. This is because our method of constructing the lower bound is simple and easy to implement, the branching operation is performed in the p-dimensional output-space, and the algorithm uses no hyper-rectangle reduction technique, which weakens the approximation when solving low-dimensional problems. We also note that in these 11 examples the number p of ratios is not much smaller than the dimension n of the decision variable, whereas our algorithm is designed for p much smaller than n, which is why it is less effective on these examples. With the continuous progress of computers, the gap between our algorithm and the other methods on these 11 low-dimensional examples can be bridged, and practical needs mainly concern high-dimensional problems with $p\ll n$. Therefore, we only use Examples 1–11 to illustrate the effectiveness and feasibility of our algorithm, and the numerical results also show that the algorithm converges. When our algorithm is applied to higher-dimensional problems, its performance gradually improves, as can be seen from the numerical results of Examples 12 and 13 in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9.
where p is a positive integer; the coefficient vectors of the numerators and denominators and the entries of the constraint matrix are randomly selected from the interval [0,1], and the right-hand side is set to the same value for all constraints. All constant terms of the denominators and numerators are the same number, randomly generated in [1,100]. This agrees with the way the random instances are generated in [18]. First of all, when the dimension is not more than 500, we generate 10 random examples for each group (p, m, n), solve each example with both the algorithm OSBBA and the algorithm in [18], and then record the average number of iterations and the average CPU running time of these 10 examples in Table 2. Secondly, when the dimension n is not less than 1000, note that the algorithm in [18] needs to solve 2n linear programming problems when determining the initial hyper-rectangle, and the search space of each linear program is at least a thousand dimensions, which is very time-consuming; indeed, when (p, m, n) = (10, 200, 500), the CPU time is already close to 1200 s. Therefore, when the dimension n is not less than 1000, we generate only five random examples and deem an algorithm to have failed when its computing time exceeds 1200 s. In addition to the average number of iterations and the average CPU running time, the success rate over the five high-dimensional examples is also recorded, as presented in Table 3.
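As an illustration of this protocol, the following is a minimal NumPy sketch (our own; the array shapes follow the problem statement, and the fixed right-hand-side value is an assumption, as the paper does not pin it down here) that generates one random instance of this form:

```python
import numpy as np

def random_instance(p, m, n, seed=None):
    """Generate one random sum-of-linear-ratios instance (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    c = rng.uniform(0.0, 1.0, size=(p, n))   # numerator coefficient vectors
    e = rng.uniform(0.0, 1.0, size=(p, n))   # denominator coefficient vectors
    A = rng.uniform(0.0, 1.0, size=(m, n))   # constraint matrix for A x <= b
    b = np.full(m, 10.0)                      # assumed common right-hand side
    const = rng.uniform(1.0, 100.0)           # one shared constant in [1, 100]
    d = np.full(p, const)                     # numerator constant terms
    f = np.full(p, const)                     # denominator constant terms
    return c, d, e, f, A, b

# e.g., a small instance with p = 3 ratios, m = 5 constraints, n = 10 variables
c, d, e, f, A, b = random_instance(3, 5, 10, seed=0)
```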
First of all, by comparing the lower-bounding subproblem in [18] with the one in our algorithm, we can see that the lower-bounding subproblem of the algorithm OSBBA only makes use of the upper and lower bounds of the denominators of the p ratios, while the construction in [18] also uses the upper and lower bounds of the decision variables, which requires solving 2n linear programming problems in the initial iterative step. Compared with the method in [18], the algorithm OSBBA only needs to solve 2p linear programming problems in the initialization stage and does not need to compute the upper and lower bounds of the decision variables. It can be seen that when p is much less than n, the method of [18] will spend a lot of time solving these 2n linear programming problems. Its number of branches is often particularly large once the number of iterations exceeds 1, so a large number of child nodes is produced on the branch-and-bound tree, which not only occupies a large amount of computer memory but also takes a lot of time. The behavior of our algorithm OSBBA is the opposite. In real life, the size of p is usually not greater than 10; therefore, the number of subproblems that need to be solved during the branching iterations is usually small, and, compared with the method in [18], the amount of computer memory occupied is not very large.
Secondly, from the results in Table 2, we can see that the computational performance of [18] on small-scale problems is better than that of our algorithm OSBBA. However, when the dimension of the problem rises above 100, its computational performance gradually weakens and becomes inferior to that of OSBBA. The computational performance of the algorithm OSBBA is closely related to the size of p: the smaller p is, the shorter the computing time. For the algorithm in [18], computational performance depends critically on the size of n: the larger n is, the more time the computer consumes. It is undeniable that the method in [18] has some advantages on small-scale problems. However, for large-scale problems, Table 3 shows that the algorithm OSBBA is always superior to the algorithm in [18]. Especially when the dimension is more than 500, the success rate of our algorithm in finding the global optimal solution within 1200 s is higher than that of [18]. This is the advantage of the algorithm OSBBA.
In addition, for Example 12, we also use the same test method as [18] to compare the algorithm OSBBA with the internal solver of the MATLAB toolbox [26], where we record only the average CPU running time and the success rate of the two algorithms and display them in Table 4 and Table 5.
As can be seen from Table 4 and Table 5, the solver of [26] is more effective than OSBBA when solving small-scale problems, but it is sensitive to the size n of the decision variable; in particular, when n exceeds 100, its CPU time increases sharply. The algorithm OSBBA is less affected by n, but for small-scale problems its computational performance is very sensitive to the number p of linear ratios. For large-scale problems, Table 5 shows results similar to those in Table 3.
In order to further illustrate the advantages of the algorithm OSBBA in this paper, a large number of numerical experiments were carried out on Example 13, comparing OSBBA with the commercial software package [27]. From our understanding of the package, its branching operation is also carried out in the n-dimensional decision space. Similar to [18], we can therefore predict that it becomes quite time-consuming as the dimension increases. To simplify the comparison, the shared constant of the numerators and denominators is set to 100 so that the package runs successfully. Next, we give an upper bound for each decision variable, and randomly select the remaining coefficient data from the corresponding intervals to form the random Example 13.
For the sake of fairness, we set the tolerance of the algorithm OSBBA to the same value as that of the package (this is because the accuracy of the package's internal setting is fixed and we are unable to adjust it). For each group (p, m, n), we randomly generate 10 examples, solve each example with both the algorithm OSBBA and the commercial software package, and then record the average number of iterations and the average CPU running time of the 10 examples in Table 6, Table 7, Table 8 and Table 9.
As we can see from Table 6, when n is less than 100, the CPU running time (Time) and the iteration number (Iter) of our algorithm are not as good as those of the package. At n = 100 with the smaller values of p, the average CPU running time (Ave.Time) and the average iteration number (Ave.Iter) of the package are still less than those of our algorithm OSBBA, while for the larger values of p and n in the table the algorithm OSBBA is better than the package. In the intermediate groups, which method has the smaller average running time depends on the particular pair (p, n): for the smaller n the package wins, while for the larger n the ordering is reversed.
According to Table 7, Table 8 and Table 9, we can also conclude that when p is much smaller than n, the algorithm OSBBA takes significantly less time than the package. In Table 8 and Table 9, for the smaller dimensions, the calculation time of the algorithm OSBBA can be significantly more than that of the package, while for the larger dimensions the package takes more time than the algorithm OSBBA. At the same time, some "–" entries can be seen in Table 7 and Table 9: the package fails in these 10 computations, which indicates that its success rate in solving high-dimensional problems is close to zero, whereas our algorithm can still obtain the global optimal solution of the problem within finitely many steps, and the overall time is no more than 420 s.
In general, when $p\ll n$, our algorithm shows good computing performance. In practical applications, the size of p generally does not exceed 10. The calculation results in Table 6 show that our algorithm is not as effective as the package in solving small problems, but it can be seen from Table 6, Table 7, Table 8 and Table 9 that this algorithm has obvious advantages in solving large-scale, high-dimensional problems. At the same time, unlike the package, this algorithm can still solve high-dimensional problems reliably.
The nonlinear programming method in the commercial software package comes from [29]. It is a branch-and-bound reduction method based on the n-dimensional space in which the decision variable x lies, as can be seen from the two examples in Section 6 of [29]. In Section 5 of [29], many feasible-domain reduction techniques, including polyhedral cutting techniques, are also proposed. Although the package combines many feasible-domain reduction techniques, the experimental results in this paper show that the package is more effective than our OSBBA method only in solving small-scale linear fractional programming problems. The package branches on a maximum of $2^{n}$ nodes, which is exponential in n, while our algorithm potentially branches on a maximum of $2^{p}$ nodes, a much smaller number. Even if these feasible-domain reductions remain valid in combination with branching, running the reduction procedures also increases time consumption and storage to a large extent. For the special nonlinear program considered here, linear fractional programming, the proposed global optimization algorithm branches hyper-rectangles in the p-dimensional space; because p is much less than n, and p is not more than 10, the algorithm proposed by us is suitable for solving large-scale problems. As the variables on which the two methods branch lie in spaces of different dimensions (the package branches in the n-dimensional decision space, while the branching process of the algorithm OSBBA is carried out in the p-dimensional output-space), it can also be seen from Table 6, Table 7, Table 8 and Table 9 that when the number p of ratio terms is much smaller than the dimension n of the decision variable, the algorithm OSBBA computes better than the package. This is because, for higher-dimensional problems, the package needs to solve more, and higher-dimensional, subproblems during initialization, while the algorithm OSBBA only needs to solve 2p n-dimensional subproblems, which greatly reduces the amount of computation. This is why our algorithm OSBBA branches in the p-dimensional space.
In summary, in terms of the demands of real problems, the number p of ratios does not exceed 10 in the linear fractional programming problems that need to be solved, and at the same time p is much smaller than the dimension n of the decision variable. In the branching process, the numbers of vertices of the rectangles to be divided are $2^{p}$ for OSBBA and $2^{n}$ for the n-dimensional methods, respectively. In the case of $p\ll n$, the branching of our algorithm OSBBA can always be completed quickly, whereas the methods in the software package and in [18] incur a great deal of branching complexity; therefore, the computation required by the branch search in OSBBA is more economical than that in the package. In the case of $p\ll n$, our method is also more effective for large-scale problems than the methods in [18] and in the software package. At the same time, it is noted that when the results of OSBBA and the package are compared, the latter is sensitive to the size of n, which once again illustrates the characteristics of the algorithm in this paper.