1. Introduction
The mathematical formulation of the NBLP problem considered here is

$$\min_{v}\; F(v,w) \quad \text{s.t.}\quad G(v,w)\le 0,\qquad \text{where } w \text{ solves}\quad \min_{w}\; f(v,w) \quad \text{s.t.}\quad g(v,w)\le 0, \tag{1}$$

where $v\in\mathbb{R}^{n_1}$ and $w\in\mathbb{R}^{n_2}$. In our approach, the functions $F:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $f:\mathbb{R}^{n_1+n_2}\to\mathbb{R}$, $G:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_1}$, and $g:\mathbb{R}^{n_1+n_2}\to\mathbb{R}^{m_2}$ must be at least twice continuously differentiable.
The NBLP problem (1) is used extensively in transaction networks, resource allocation, finance budgeting, price control, etc.; see [1,2,3,4]. The NBLP problem (1) contains two levels of optimization, an upper level and a lower level. The decision maker with the upper-level objective function $F$ takes the lead and chooses the decision vector $v$. The decision maker with the lower-level objective function $f$ then chooses the decision vector $w$ to optimize her objective, parameterized by $v$.
To obtain the solution of problem (1), a number of different approaches have been offered; see [5,6,7,8,9]. In our method, we utilize one of these approaches to transform the NBLP problem (1) into a single-level problem by replacing the lower-level optimization problem with its KKT conditions; see [10,11].
Utilizing the KKT optimality conditions for the lower-level problem, the NBLP problem (1) is reduced to the following single-objective optimization problem:

$$\begin{aligned}\min_{v,w,\mu}\;& F(v,w)\\ \text{s.t.}\;& G(v,w)\le 0,\\ & \nabla_w f(v,w)+\nabla_w g(v,w)^{T}\mu=0,\\ & \mu^{T} g(v,w)=0,\qquad g(v,w)\le 0,\qquad \mu\ge 0, \end{aligned}\tag{2}$$

where $\mu\in\mathbb{R}^{m_2}$ is the multiplier vector associated with the inequality constraint $g(v,w)\le 0$.
Problem (2) is non-differentiable and non-convex. Furthermore, the regularity assumptions that are prerequisites for successfully handling smooth optimization problems are never satisfied. Following the smoothing method proposed by [2], we can introduce the FBACTR algorithm to solve problem (2). Before introducing the FBACTR algorithm, we need the following definition.
Definition 1. The Fischer–Burmeister function is the function $\varphi:\mathbb{R}^{2}\to\mathbb{R}$ defined by $\varphi(e,d)=e+d-\sqrt{e^{2}+d^{2}}$. The perturbed Fischer–Burmeister function is the function $\varphi:\mathbb{R}^{3}\to\mathbb{R}$ defined by $\varphi(\epsilon,e,d)=e+d-\sqrt{e^{2}+d^{2}+2\epsilon^{2}}$.
The Fischer–Burmeister function has the property that $\varphi(e,d)=0$ if and only if $e\ge 0$, $d\ge 0$, and $ed=0$. It is non-differentiable at $(e,d)=(0,0)$. Its perturbed variant satisfies $\varphi(\epsilon,e,d)=0$ if and only if $e>0$, $d>0$, and $ed=\epsilon^{2}$ for $\epsilon\ne 0$. This function is smooth with respect to $e$ and $d$ for $\epsilon\ne 0$; for more details, see [12,13,14,15].
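To make the definition concrete, the following is a minimal NumPy sketch of the two functions in Definition 1, assuming the $\sqrt{e^{2}+d^{2}+2\epsilon^{2}}$ form of the perturbation given above; the printed values illustrate the complementarity property numerically.

```python
import numpy as np

def fischer_burmeister(e, d):
    """phi(e, d) = e + d - sqrt(e^2 + d^2): zero iff e >= 0, d >= 0, e*d = 0."""
    return e + d - np.sqrt(e**2 + d**2)

def perturbed_fb(eps, e, d):
    """Smoothed variant, differentiable in (e, d) whenever eps != 0."""
    return e + d - np.sqrt(e**2 + d**2 + 2.0 * eps**2)

# (e, d) = (0, 1.5) satisfies e >= 0, d >= 0, e*d = 0, so phi vanishes exactly.
print(fischer_burmeister(0.0, 1.5))      # 0.0
# The perturbed value is slightly negative there but smooth near the origin.
print(perturbed_fb(1e-3, 0.0, 1.5))      # approx -6.7e-07
```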
The perturbed Fischer–Burmeister function

$$\varphi(\epsilon,e,d)=e+d-\sqrt{e^{2}+d^{2}+2\epsilon^{2}} \tag{3}$$

is used to satisfy the asymptotic stability conditions and to allow the FBACTR algorithm to solve problem (2). Using the perturbed Fischer–Burmeister function (3), problem (2) can be approximated by:

$$\begin{aligned}\min_{v,w,\mu}\;& F(v,w)\\ \text{s.t.}\;& G(v,w)\le 0,\\ & \nabla_w f(v,w)+\nabla_w g(v,w)^{T}\mu=0,\\ & \varphi\!\left(\epsilon,\mu_{i},-g_{i}(v,w)\right)=0,\qquad i=1,\ldots,m_2. \end{aligned}\tag{4}$$
The following notations are introduced to simplify our discussion: $x=(v,w,\mu)^{T}$, $f(x)=F(v,w)$, and

$$h(x)=\begin{pmatrix}\nabla_w f(v,w)+\nabla_w g(v,w)^{T}\mu\\ \varphi\!\left(\epsilon,\mu_{1},-g_{1}(v,w)\right)\\ \vdots\\ \varphi\!\left(\epsilon,\mu_{m_2},-g_{m_2}(v,w)\right)\end{pmatrix},\qquad g(x)=G(v,w).$$

Hence, problem (4) can be reduced as follows:

$$\min_{x}\; f(x)\quad \text{s.t.}\quad h(x)=0,\qquad g(x)\le 0, \tag{5}$$

where $f:\mathbb{R}^{n}\to\mathbb{R}$, $h:\mathbb{R}^{n}\to\mathbb{R}^{p}$, and $g:\mathbb{R}^{n}\to\mathbb{R}^{m_1}$, with $n=n_1+n_2+m_2$.
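As an illustration of this notation, the sketch below assembles $h(x)$ for a hypothetical one-dimensional lower level; the functions `grad_w_f`, `g_low`, and `grad_w_g` are invented for this example and are not from the paper. It only shows how the stationarity rows and the smoothed complementarity rows are stacked.

```python
import numpy as np

EPS = 1e-4  # smoothing parameter of the perturbed Fischer-Burmeister function

def phi(eps, e, d):
    # Perturbed Fischer-Burmeister function from Definition 1.
    return e + d - np.sqrt(e**2 + d**2 + 2.0 * eps**2)

# Hypothetical lower level: min_w (w - v)^2  s.t.  g(v, w) = -w <= 0.
def grad_w_f(v, w):
    return np.array([2.0 * (w - v)])

def g_low(v, w):
    return np.array([-w])

def grad_w_g(v, w):
    return np.array([[-1.0]])   # Jacobian of g_low with respect to w

def h(x):
    """h(x): lower-level stationarity plus one smoothed complementarity
    equation per lower-level inequality constraint."""
    v, w, mu = x[0], x[1], x[2:]
    stat = grad_w_f(v, w) + grad_w_g(v, w).T @ mu
    comp = np.array([phi(EPS, mu[i], -g_low(v, w)[i]) for i in range(mu.size)])
    return np.concatenate([stat, comp])

print(h(np.array([1.0, 1.0, 0.0])))   # both residuals are (near) zero
```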
The set of indices of binding or violated inequality constraints at $x$ is defined by $\mathcal{I}(x)=\{\, i : g_{i}(x)\ge 0 \,\}$. A regular point is a point at which the vectors of the set $\{\nabla h_{j}(x)\}_{j=1}^{p}\cup\{\nabla g_{i}(x)\}_{i\in\mathcal{I}(x)}$ are linearly independent.
A regular point $x_{*}$ is a KKT point of problem (5) if there exist Lagrange multiplier vectors $\lambda_{*}\in\mathbb{R}^{p}$ and $\nu_{*}\in\mathbb{R}^{m_1}$ such that the following KKT conditions hold:

$$\nabla f(x_{*})+\nabla h(x_{*})\lambda_{*}+\nabla g(x_{*})\nu_{*}=0, \tag{6}$$
$$h(x_{*})=0, \tag{7}$$
$$g(x_{*})\le 0, \tag{8}$$
$$\nu_{*}^{T}g(x_{*})=0, \tag{9}$$
$$\nu_{*}\ge 0. \tag{10}$$
To solve the nonlinear single-objective constrained optimization problem (5), various approaches have been proposed; for more details, see [16,17,18,19,20,21,22].
An active-set strategy is utilized to reduce problem (5) to an equality constrained optimization problem. The idea behind the active-set method is to identify, at every iteration, the active inequality constraints and treat them as equalities; this allows the use of the well-developed methods for equality constrained problems, see [21,23,24]. Most of the methods used to solve equality constrained problems may not converge if the starting point is far away from a stationary point; such a method is therefore called a local method.
To ensure convergence to the solution from any starting point, a trust-region strategy, which has strong global convergence properties, can be employed. It is a very important strategy for solving smooth optimization problems: it is robust in the presence of rounding errors, and it does not require the objective function of the model to be convex. For more details, see [11,21,22,23,24,25,26,27,28,29,30,31,32].
To treat the difficulty of an infeasible trust-region subproblem in the FBACTR algorithm, a reduced Hessian technique, which was suggested by [33,34] and used by [22,24,35], is utilized.
Under five assumptions, a global convergence theory for the FBACTR algorithm is proved. Moreover, numerical experiments show that the FBACTR algorithm performs effectively and efficiently in practice.
We shall use the following notation and terminology. We use $\|\cdot\|$ to denote the Euclidean norm $\|\cdot\|_{2}$. The subscript $k$ refers to iteration indices; for example, $f_{k}=f(x_{k})$, $h_{k}=h(x_{k})$, $g_{k}=g(x_{k})$, $\nabla f_{k}=\nabla f(x_{k})$, and so on denote a function value at a particular point.
The rest of the paper is organized as follows. Section 2 is devoted to the description of an active-set trust-region algorithm to solve problem (5), and the FBACTR algorithm to solve the NBLP problem (1) is introduced. In Section 3, the global convergence theory of the FBACTR algorithm is presented. Section 4 contains an implementation of the FBACTR algorithm and the results on test problems. Finally, some further remarks are given in Section 5.
2. Active-Set with Trust-Region Technique
A detailed description of the active-set with trust-region strategy to solve problem (5), together with the FBACTR algorithm to solve problem (1), is introduced in this section.
Based on the active-set method suggested by [36] and used in [21,22,23,24], we define a 0–1 diagonal matrix $U(x)\in\mathbb{R}^{m_1\times m_1}$ whose diagonal entries are:

$$u_{i}(x)=\begin{cases}1 & \text{if } g_{i}(x)\ge 0,\\ 0 & \text{if } g_{i}(x)<0,\end{cases}\qquad i=1,\ldots,m_1. \tag{11}$$
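In code, the matrix $U(x)$ of (11) is a simple thresholding of the inequality-constraint values; a minimal sketch:

```python
import numpy as np

def active_set_matrix(g_vals):
    """0-1 diagonal matrix from (11): entry i equals 1 when the i-th
    inequality constraint is binding or violated (g_i(x) >= 0)."""
    return np.diag((g_vals >= 0.0).astype(float))

g_vals = np.array([-0.3, 0.0, 1.2])   # inactive, binding, violated
print(active_set_matrix(g_vals))      # diag(0, 1, 1)
```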
Using the previous definition of the matrix $U(x)$, a smooth and simple function is utilized to replace problem (5) with the following simple problem

$$\min_{x}\; f(x)+\frac{r}{2}\,g(x)^{T}U(x)g(x)\quad \text{s.t.}\quad h(x)=0, \tag{12}$$

where $r>0$ is a penalty parameter; see [21,22,23]. The Lagrangian function associated with problem (12) is given by:

$$\ell(x,\lambda;r)=f(x)+\lambda^{T}h(x)+\frac{r}{2}\,g(x)^{T}U(x)g(x), \tag{14}$$

where $\lambda\in\mathbb{R}^{p}$ represents a Lagrange multiplier vector associated with the constraint $h(x)=0$. A KKT point $(x_{*},\lambda_{*})$ for problem (12) is a point at which the following conditions are satisfied:

$$\nabla_{x}\ell(x_{*},\lambda_{*};r)=\nabla f(x_{*})+\nabla h(x_{*})\lambda_{*}+r\,\nabla g(x_{*})U(x_{*})g(x_{*})=0, \tag{15}$$
$$h(x_{*})=0, \tag{16}$$

where $\nabla h(x_{*})$ denotes the matrix whose columns are the gradients of the components of $h$.
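The following sketch evaluates the penalized objective of (12) and the Lagrangian (14) for arbitrary callables `f`, `g`, and `h`; the concrete functions in the usage lines are invented for illustration.

```python
import numpy as np

def penalized_objective(f, g, x, r):
    """Objective of problem (12): f(x) + (r/2) * g(x)^T U(x) g(x)."""
    gv = g(x)
    U = np.diag((gv >= 0.0).astype(float))
    return f(x) + 0.5 * r * gv @ U @ gv

def lagrangian(f, g, h, x, lam, r):
    """Lagrangian (14): penalized objective plus lam^T h(x)."""
    return penalized_objective(f, g, x, r) + lam @ h(x)

f = lambda x: x[0]**2 + x[1]**2
g = lambda x: np.array([x[0] - 1.0])         # one inequality: x0 - 1 <= 0
h = lambda x: np.array([x[0] + x[1] - 1.0])  # one equality:   x0 + x1 = 1
print(lagrangian(f, g, h, np.array([2.0, 0.0]), np.array([0.5]), r=10.0))  # 9.5
```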
If the KKT point $(x_{*},\lambda_{*})$ satisfies conditions (6)–(10), we notice that it also satisfies conditions (15) and (16), but the converse is not necessarily true. Therefore, we design the FBACTR algorithm in such a way that, if $(x_{*},\lambda_{*})$ satisfies conditions (15) and (16), then it also satisfies the KKT conditions (6)–(10).
Various approaches that have been proposed to solve equality constrained problems are local methods. By a local method, we mean a method such that, if the starting point is sufficiently close to a solution, then under some reasonable assumptions the method is guaranteed by theory to converge to that solution. There is no guarantee that a local method converges when started from a remote point. Globalizing a local method means modifying it in such a way that it is guaranteed to converge from any starting point without sacrificing its fast local rate of convergence. To ensure convergence from remote starting points, the trust-region technique is utilized.
2.1. A Trust-Region Technique
To solve problem (12) and to ensure convergence from any remote starting point, the trust-region strategy is used. A naive trust-region quadratic subproblem associated with problem (12) is:

$$\min_{s}\; \ell_{k}+\nabla_{x}\ell_{k}^{T}s+\frac{1}{2}s^{T}H_{k}s\quad \text{s.t.}\quad h_{k}+\nabla h_{k}^{T}s=0,\qquad \|s\|\le\delta_{k}, \tag{17}$$

where $\delta_{k}$ represents the trust-region radius and $H_{k}$ is the Hessian matrix of the Lagrangian function (14) or an approximation to it.
Subproblem (17) may be infeasible because there may be no intersection between the hyperplane of the linearized constraints $h_{k}+\nabla h_{k}^{T}s=0$ and the ball $\|s\|\le\delta_{k}$. Even if they intersect, there is no guarantee that this remains true if $\delta_{k}$ is reduced; see [37]. To overcome this problem, a reduced Hessian technique, which was suggested by [33,34] and used by [22,23,35], is used. In this technique, the trial step $s_{k}$ is decomposed into two orthogonal components: the tangential component $s_{k}^{t}$, to improve optimality, and the normal component $s_{k}^{n}$, to improve feasibility. To evaluate each of $s_{k}^{n}$ and $s_{k}^{t}$, two unconstrained trust-region subproblems are solved.
To evaluate the normal component $s_{k}^{n}$, the following trust-region subproblem must be solved:

$$\min_{s^{n}}\; \frac{1}{2}\left\|h_{k}+\nabla h_{k}^{T}s^{n}\right\|^{2}\quad \text{s.t.}\quad \left\|s^{n}\right\|\le\zeta\delta_{k}, \tag{18}$$

for some $\zeta\in(0,1)$.
Any method can be used to solve subproblem (18), as long as a fraction of the normal predicted decrease obtained by the normal Cauchy step $s_{k}^{ncp}$ is less than or equal to the normal predicted decrease obtained by $s_{k}^{n}$. That is, the following condition must hold:

$$\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}^{n}\right\|^{2}\ge \theta_{1}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}^{ncp}\right\|^{2}\right) \tag{19}$$

for some $\theta_{1}\in(0,1]$. The normal Cauchy step $s_{k}^{ncp}$ is given by:

$$s_{k}^{ncp}=-t_{k}^{ncp}\,\nabla h_{k}h_{k}, \tag{20}$$

where the parameter $t_{k}^{ncp}$ is given by:

$$t_{k}^{ncp}=\begin{cases}\dfrac{\left\|\nabla h_{k}h_{k}\right\|^{2}}{\left\|\nabla h_{k}^{T}\nabla h_{k}h_{k}\right\|^{2}} & \text{if } \dfrac{\left\|\nabla h_{k}h_{k}\right\|^{3}}{\left\|\nabla h_{k}^{T}\nabla h_{k}h_{k}\right\|^{2}}\le\zeta\delta_{k},\\[2ex] \dfrac{\zeta\delta_{k}}{\left\|\nabla h_{k}h_{k}\right\|} & \text{otherwise.}\end{cases} \tag{21}$$
A dogleg method is used to solve subproblem (18); it is very cheap even if the Hessian is indefinite. The dogleg algorithm approximates the solution curve of subproblem (18) by a piecewise linear function connecting the Newton point to the Cauchy point. For more details, see [35].
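A compact dogleg routine for the normal subproblem (18) might look as follows; this is a generic sketch (using the Gauss–Newton point in place of the exact Newton point), not the authors' implementation.

```python
import numpy as np

def dogleg_normal_step(A, h, radius):
    """Dogleg approximation to:  min 0.5*||h + A.T @ s||^2,  ||s|| <= radius.
    A is the n x p matrix whose columns are the gradients of h_j."""
    g0 = A @ h                                   # gradient of the model at s = 0
    if np.linalg.norm(g0) < 1e-14:
        return np.zeros(A.shape[0])
    # Cauchy (steepest-descent) minimizer along -g0
    t = (g0 @ g0) / np.linalg.norm(A.T @ g0)**2
    s_cp = -t * g0
    if np.linalg.norm(s_cp) >= radius:
        return -(radius / np.linalg.norm(g0)) * g0
    # Gauss-Newton (minimum-norm least-squares) point
    s_gn = -np.linalg.lstsq(A.T, h, rcond=None)[0]
    if np.linalg.norm(s_gn) <= radius:
        return s_gn
    # Walk along the dogleg segment from s_cp toward s_gn to the boundary
    d = s_gn - s_cp
    a, b, c = d @ d, 2 * s_cp @ d, s_cp @ s_cp - radius**2
    tau = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return s_cp + tau * d

A = np.array([[1.0], [2.0]])        # single constraint h(x) = 0 in R^2
print(dogleg_normal_step(A, np.array([3.0]), radius=0.5))
```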
Once $s_{k}^{n}$ is estimated, we compute $s_{k}^{t}$. Let $Z_{k}$ be the matrix whose columns form a basis for the null space of $\nabla h_{k}^{T}$.
To evaluate the tangential component $s_{k}^{t}=Z_{k}\bar{s}^{t}$, the following subproblem is solved by using the dogleg method:

$$\min_{\bar{s}^{t}}\; \bar{q}_{k}\left(\bar{s}^{t}\right)=\left(Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right)^{T}\bar{s}^{t}+\frac{1}{2}\,\bar{s}^{tT}Z_{k}^{T}H_{k}Z_{k}\bar{s}^{t}\quad \text{s.t.}\quad \left\|Z_{k}\bar{s}^{t}\right\|\le\Delta_{k}, \tag{22}$$

where $q_{k}(s)=\ell_{k}+\nabla_{x}\ell_{k}^{T}s+\frac{1}{2}s^{T}H_{k}s$, $\nabla q_{k}(s_{k}^{n})=\nabla_{x}\ell_{k}+H_{k}s_{k}^{n}$, and $\Delta_{k}=\sqrt{\delta_{k}^{2}-\|s_{k}^{n}\|^{2}}$.

Since the dogleg method is used to solve the above subproblem, a fraction of the tangential predicted decrease obtained by the tangential Cauchy step $\bar{s}_{k}^{tcp}$ is less than or equal to the tangential predicted decrease obtained by the tangential step $\bar{s}_{k}^{t}$. That is, the following condition holds:

$$\bar{q}_{k}(0)-\bar{q}_{k}\left(\bar{s}_{k}^{t}\right)\ge \theta_{2}\left(\bar{q}_{k}(0)-\bar{q}_{k}\left(\bar{s}_{k}^{tcp}\right)\right) \tag{23}$$

for some $\theta_{2}\in(0,1]$. The tangential Cauchy step $\bar{s}_{k}^{tcp}$ is defined as follows:

$$\bar{s}_{k}^{tcp}=-t_{k}^{tcp}\,Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right), \tag{24}$$

where the parameter $t_{k}^{tcp}$ is given by

$$t_{k}^{tcp}=\begin{cases}\dfrac{\left\|\bar{g}_{k}\right\|^{2}}{\bar{g}_{k}^{T}\bar{B}_{k}\bar{g}_{k}} & \text{if } \dfrac{\left\|\bar{g}_{k}\right\|^{3}}{\bar{g}_{k}^{T}\bar{B}_{k}\bar{g}_{k}}\le\Delta_{k}\ \text{and}\ \bar{g}_{k}^{T}\bar{B}_{k}\bar{g}_{k}>0,\\[2ex] \dfrac{\Delta_{k}}{\left\|\bar{g}_{k}\right\|} & \text{otherwise,}\end{cases}$$

such that $\bar{g}_{k}=Z_{k}^{T}\nabla q_{k}(s_{k}^{n})$ and $\bar{B}_{k}=Z_{k}^{T}H_{k}Z_{k}$.
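The null-space basis $Z_{k}$ and the tangential Cauchy direction can be sketched as follows; the SVD-based basis and the unit step under nonpositive curvature are illustrative choices of this sketch, not specifics from the paper.

```python
import numpy as np

def null_space_basis(A_T):
    """Orthonormal basis Z for the null space of A_T via SVD
    (columns of Z satisfy A_T @ Z = 0)."""
    _, s, vT = np.linalg.svd(A_T)
    tol = 1e-12 * (s[0] if s.size else 1.0)
    rank = int(np.sum(s > tol))
    return vT[rank:].T

def tangential_cauchy_direction(Z, grad_q, H):
    """Steepest-descent step of the reduced model
    min (Z^T grad_q)^T u + 0.5 u^T (Z^T H Z) u, mapped back to full space."""
    gbar = Z.T @ grad_q
    Hbar = Z.T @ H @ Z
    curv = gbar @ Hbar @ gbar
    t = (gbar @ gbar) / curv if curv > 0 else 1.0  # unit step if curvature <= 0
    return -t * (Z @ gbar)

A_T = np.array([[1.0, 2.0]])          # one linearized constraint in R^2
Z = null_space_basis(A_T)
print(Z.T @ A_T.T)                     # ~ 0: Z spans the null space
print(tangential_cauchy_direction(Z, np.array([1.0, 1.0]), np.eye(2)))
# [-0.4, 0.2]: projected steepest-descent direction within the null space
```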
To decide whether the step $s_{k}$ will be accepted, a merit function is needed that ties the objective function to the constraints in such a way that progress in the merit function means progress in solving the problem. The following augmented Lagrangian function is used in the FBACTR algorithm as a merit function:

$$\Phi(x,\lambda;r;\sigma)=\ell(x,\lambda;r)+\sigma\|h(x)\|^{2},$$

where $\sigma>0$ is a penalty parameter.
To test whether the point will be taken as the next iterate, an actual reduction and a predicted reduction are defined. The actual reduction $Ared_{k}$ in the merit function in moving from $(x_{k},\lambda_{k})$ to $(x_{k}+s_{k},\lambda_{k+1})$ is defined as follows:

$$Ared_{k}=\Phi(x_{k},\lambda_{k};r;\sigma_{k})-\Phi(x_{k}+s_{k},\lambda_{k+1};r;\sigma_{k}).$$

$Ared_{k}$ can also be written as follows:

$$Ared_{k}=\ell(x_{k},\lambda_{k};r)-\ell(x_{k}+s_{k},\lambda_{k};r)-\Delta\lambda_{k}^{T}h(x_{k}+s_{k})+\sigma_{k}\left(\|h_{k}\|^{2}-\left\|h(x_{k}+s_{k})\right\|^{2}\right),$$

where $\Delta\lambda_{k}=\lambda_{k+1}-\lambda_{k}$. The predicted reduction in the merit function is defined to be:

$$Pred_{k}=q_{k}(0)-q_{k}(s_{k})-\Delta\lambda_{k}^{T}\left(h_{k}+\nabla h_{k}^{T}s_{k}\right)+\sigma_{k}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right).$$

$Pred_{k}$ can be written as:

$$Pred_{k}=\bar{q}_{k}(0)-\bar{q}_{k}\left(\bar{s}_{k}^{t}\right)+q_{k}(0)-q_{k}\left(s_{k}^{n}\right)-\Delta\lambda_{k}^{T}\left(h_{k}+\nabla h_{k}^{T}s_{k}\right)+\sigma_{k}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right).$$
To update the penalty parameter $\sigma_{k}$ to ensure that $Pred_{k}\ge\frac{\sigma_{k}}{2}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right)$, the following scheme is used (see Algorithm 1):

Algorithm 1 (To update the penalty parameter $\sigma_{k}$)
If $Pred_{k}\ge\frac{\sigma_{k}}{2}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right)$, then set $\sigma_{k+1}=\sigma_{k}$.
Else, set $\sigma_{k+1}$ to the smallest value that restores this inequality, plus $\beta_{0}$, where $\beta_{0}>0$ is a small fixed constant.
End if.

For more details, see [22].
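In code, an El-Alem-type update of this kind can be sketched as follows, writing $Pred_{k}=q_{\mathrm{part}}+\sigma_{k}v$ with $v=\|h_{k}\|^{2}-\|h_{k}+\nabla h_{k}^{T}s_{k}\|^{2}$; the constant `beta` and the exact organization are assumptions of this sketch, not the paper's constants.

```python
def update_penalty(q_part, vpred, sigma, beta=1e-2):
    """Keep sigma while Pred >= (sigma/2) * vpred; otherwise raise it
    just enough to restore the inequality, plus a small constant beta.
    Here Pred = q_part + sigma * vpred and vpred >= 0 by (19)."""
    if vpred <= 0.0:
        return sigma                     # nothing to enforce on this step
    if q_part + sigma * vpred >= 0.5 * sigma * vpred:
        return sigma
    return 2.0 * (-q_part) / vpred + beta

print(update_penalty(q_part=-2.0, vpred=0.5, sigma=1.0))  # 8.01
```

With the returned value, the new predicted reduction satisfies the required inequality with a slack of $(\beta/2)v$, so the penalty grows only when, and only as much as, the step demands.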
The framework to test the step $s_{k}$ and update the trust-region radius $\delta_{k}$ is clarified in Algorithm 2.

Algorithm 2 (To test the step $s_{k}$ and update $\delta_{k}$)
Choose $0<\alpha_{1}<\alpha_{2}<1$, $0<\tau_{1}<1<\tau_{2}$, and $\delta_{\min}\le\delta_{0}\le\delta_{\max}$.
While $\frac{Ared_{k}}{Pred_{k}}<\alpha_{1}$ or $Pred_{k}\le 0$:
Set $\delta_{k}=\tau_{1}\|s_{k}\|$.
Evaluate a new trial step $s_{k}$.
End while.
If $\alpha_{1}\le\frac{Ared_{k}}{Pred_{k}}<\alpha_{2}$:
Set $x_{k+1}=x_{k}+s_{k}$ and $\delta_{k+1}=\max\left(\delta_{k},\delta_{\min}\right)$.
End if.
If $\frac{Ared_{k}}{Pred_{k}}\ge\alpha_{2}$:
Set $x_{k+1}=x_{k}+s_{k}$ and $\delta_{k+1}=\min\left(\tau_{2}\delta_{k},\delta_{\max}\right)$.
End if.
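A sketch of the acceptance/radius logic of Algorithm 2 follows; the threshold values are illustrative assumptions.

```python
def test_step(ared, pred, step_norm, radius,
              alpha1=1e-4, alpha2=0.75, tau1=0.5, tau2=2.0,
              delta_min=1e-3, delta_max=1e3):
    """Returns (accepted, new_radius). A rejected step shrinks the radius
    around the step; strong agreement between Ared and Pred enlarges it."""
    if pred <= 0.0 or ared / pred < alpha1:
        return False, tau1 * step_norm          # reject and re-solve
    if ared / pred < alpha2:
        return True, max(radius, delta_min)     # accept, safeguarded radius
    return True, min(tau2 * radius, delta_max)  # accept and expand

print(test_step(ared=0.9, pred=1.0, step_norm=0.4, radius=1.0))  # (True, 2.0)
```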
To update the positive parameter $r_{k}$, we use the following scheme (see Algorithm 3):

Algorithm 3 (To update the positive parameter $r_{k}$)
If the test of [25] on the current iterate is satisfied, set $r_{k+1}=r_{k}$.
Else, set $r_{k+1}$ to an increased value as in [25].
End if.

For more details, see [25].
Finally, the algorithm is stopped if one of the termination criteria (a sufficiently small KKT residual or a negligible trial step) is satisfied for some tolerances $\varepsilon_{1},\varepsilon_{2}>0$.
The framework of the trust-region algorithm to solve subproblem (17) is summarized as follows (see Algorithm 4).

Algorithm 4 (Trust-region algorithm)
Step 0. (Initialization) Start with $x_{0}$. Evaluate $\lambda_{0}$ and $\delta_{0}$. Set $r_{0}$, $\sigma_{0}$, and $\beta_{0}$. Choose $\varepsilon_{1}$, $\varepsilon_{2}$, $\alpha_{1}$, $\alpha_{2}$, $\tau_{1}$, and $\tau_{2}$ such that $0<\alpha_{1}<\alpha_{2}<1$ and $0<\tau_{1}<1<\tau_{2}$. Choose $\delta_{\min}$, $\delta_{\max}$, and $\zeta$ such that $\delta_{\min}\le\delta_{0}\le\delta_{\max}$ and $\zeta\in(0,1)$. Set $k=0$.
Step 1. If the termination criterion is satisfied with tolerance $\varepsilon_{1}$, then stop.
Step 2. (How to compute the trial step $s_{k}$)
(a) Evaluate the normal component $s_{k}^{n}$ by solving subproblem (18).
(b) Evaluate the tangential component $\bar{s}_{k}^{t}$ by solving subproblem (22).
(c) Set $s_{k}=s_{k}^{n}+Z_{k}\bar{s}_{k}^{t}$.
Step 3. If $\|s_{k}\|\le\varepsilon_{2}$, then stop.
Step 4. Set $x_{k+1}=x_{k}+s_{k}$.
Step 5. Compute $U(x_{k+1})$ given by (11).
Step 6. Evaluate $\lambda_{k+1}$ by solving the associated least-squares subproblem.
Step 7. Update the penalty parameter $\sigma_{k}$ using Algorithm 1.
Step 8. Test the step and update the radius $\delta_{k}$ using Algorithm 2.
Step 9. Update the positive parameter $r_{k}$ using Algorithm 3.
Step 10. Set $k=k+1$ and go to Step 1.
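The overall control flow of Algorithm 4 can be outlined as follows; the component solvers are passed in as callables, and the stopping tests mirror Steps 1 and 3. This is a structural sketch under the stated assumptions, not the authors' implementation.

```python
import numpy as np

def trust_region_solver(x, lam, grad_l, h_fun, normal_step, tangential_step,
                        update_lambda, accept_and_update, radius=1.0,
                        eps1=1e-6, eps2=1e-8, max_iter=200):
    """Skeleton of Algorithm 4: s_k = s_n + s_t, then penalty/ratio tests."""
    for _ in range(max_iter):
        # Step 1: stop on a small KKT residual of problem (12)
        if np.linalg.norm(grad_l(x, lam)) + np.linalg.norm(h_fun(x)) <= eps1:
            break
        # Step 2: composite trial step from subproblems (18) and (22)
        s_n = normal_step(x, radius)
        s_t = tangential_step(x, s_n, radius)
        s = s_n + s_t
        # Step 3: stop when the step itself is negligible
        if np.linalg.norm(s) <= eps2:
            break
        # Steps 6-9: multiplier estimate, penalty test, radius update
        lam_new = update_lambda(x + s)
        accepted, radius = accept_and_update(x, lam, s, lam_new, radius)
        if accepted:
            x, lam = x + s, lam_new
    return x, lam
```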
The main steps for solving the NBLP problem (1) are clarified in the following algorithm.
2.2. Fischer–Burmeister Active-Set Trust-Region Algorithm
The framework to solve the NBLP problem (1) is summarized in Algorithm 5.

Algorithm 5 (FBACTR algorithm)
Step 1. Use the KKT optimality conditions for the lower level of problem (1) to convert it into the single-objective constrained optimization problem (2).
Step 2. Use the Fischer–Burmeister function (3) with $\epsilon\ne 0$ to obtain the smooth problem (4).
Step 3. Rewrite problem (4) in the form of the nonlinear optimization problem (5).
Step 4. Use the active-set strategy to reduce problem (5) to problem (12).
Step 5. Use the trust-region Algorithm 4 to solve problem (12) and obtain an approximate solution of problem (5), which is an approximate solution of problem (1).
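As an end-to-end illustration of Steps 1–3 of this pipeline, the sketch below reformulates a tiny hypothetical bilevel problem via the lower-level KKT conditions and the perturbed Fischer–Burmeister function, and hands the resulting single-level problem to a generic SQP solver as a stand-in for Algorithm 4; the problem data are invented for this example.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical NBLP: upper  min_v (v - 1)^2 + w^2,
# lower  min_w (w - v)^2  s.t.  -w <= 0.
EPS = 1e-6

def phi(e, d):
    # Perturbed Fischer-Burmeister function (3).
    return e + d - np.sqrt(e**2 + d**2 + 2.0 * EPS**2)

def upper(x):                 # x = (v, w, mu)
    v, w, mu = x
    return (v - 1.0)**2 + w**2

def equalities(x):            # h(x): stationarity + smoothed complementarity
    v, w, mu = x
    return [2.0 * (w - v) - mu,   # d/dw [(w - v)^2 + mu * (-w)] = 0
            phi(mu, w)]           # phi(mu, -g) with g(v, w) = -w

res = minimize(upper, x0=[0.5, 0.5, 0.1],
               constraints=[{"type": "eq", "fun": equalities}])
print(res.x)   # approx (0.5, 0.5, 0.0): w follows v, upper level balances
```

Here the generic solver merely validates the single-level reformulation; in the paper, Algorithm 4 plays this role.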
The next section is dedicated to the global convergence analysis of the active-set trust-region algorithm.
3. Global Convergence Analysis
Let $\{x_{k}\}$ be the sequence of points generated by the FBACTR Algorithm 5, and let $\Omega$ be a convex set that contains all iterates $x_{k}$ and $x_{k}+s_{k}$. The standard assumptions on the set $\Omega$ that are needed to demonstrate the global convergence theory for the FBACTR Algorithm 5 are stated in the following section.
3.1. Standard Assumptions

The following standard assumptions are required to demonstrate the global convergence theory for the FBACTR Algorithm 5.
$(A_{1})$ The functions $F$, $f$, $G$, and $g$ are twice continuously differentiable for all $x\in\Omega$.
$(A_{2})$ The sequence of Lagrange multiplier vectors $\{\lambda_{k}\}$ is bounded.
$(A_{3})$ The functions $f$, $\nabla f$, $\nabla^{2}f$, $h$, $\nabla h$, $\nabla^{2}h_{j}$ for $j=1,\ldots,p$, $g$, $\nabla g$, and $\nabla^{2}g_{i}$ for $i=1,\ldots,m_1$ are all uniformly bounded on $\Omega$.
$(A_{4})$ The matrix $\nabla h(x)$ has full column rank.
$(A_{5})$ The sequence of Hessian matrices $\{H_{k}\}$ is bounded.
Some fundamental lemmas that are needed in the proofs of the main theorems are introduced in the following section.
3.2. Main Lemmas
Some basic lemmas that are required to demonstrate the main theorems are presented in this section.
Lemma 1. Under standard assumptions $(A_{1})$–$(A_{5})$ and at any iteration $k$, there exists a positive constant $K_{1}$ such that:

$$\left\|s_{k}^{n}\right\|\le K_{1}\|h_{k}\|.$$

Proof. Since the normal component $s_{k}^{n}$ is normal to the tangent space, we have:

$$\left\|s_{k}^{n}\right\|\le\left\|\nabla h_{k}\left(\nabla h_{k}^{T}\nabla h_{k}\right)^{-1}\right\|\,\|h_{k}\|,$$

where the first factor on the right-hand side is uniformly bounded on $\Omega$. Using standard assumptions $(A_{1})$–$(A_{5})$, we have the desired result. □
Lemma 2. Under the standard assumptions, the functions $u_{i}(x)g_{i}^{2}(x)$, $i=1,\ldots,m_1$, are Lipschitz continuous in $\Omega$.

Proof. See Lemma (4.1) in [36]. □

From Lemma 2, we conclude that $g(x)^{T}U(x)g(x)$ is differentiable and that its gradient $2\nabla g(x)U(x)g(x)$ is Lipschitz continuous in $\Omega$.
Lemma 3. At any iteration $k$, let $D_{k}$ be a diagonal matrix whose diagonal entries $(d_{k})_{i}$, $i=1,\ldots,m_1$, take values in $[0,1]$ and mark the inequality constraints whose activity changes between $x_{k}$ and $x_{k}+s_{k}$. Then the identity stated in Lemma (6.2) of [21] holds.

Proof. See Lemma (6.2) in [21]. □
Lemma 4. Under the standard assumptions, there exists a positive constant $K_{2}$ such that

$$\left\|\nabla g_{k+1}U_{k+1}g_{k+1}-\nabla g_{k}U_{k}g_{k}\right\|\le K_{2}\|s_{k}\|.$$

Proof. See Lemma (6.3) in [21]. □
Lemma 5. Under standard assumptions $(A_{1})$–$(A_{5})$, there exists a positive constant $K_{3}$ such that:

$$\left|Ared_{k}-Pred_{k}\right|\le K_{3}\,\sigma_{k}\left\|s_{k}\right\|^{2}. \tag{39}$$

Proof. From (37) and (27), we can express the difference $Ared_{k}-Pred_{k}$. From (40), (28), and the Cauchy–Schwarz inequality, each term of this difference can be bounded by a multiple of $\|s_{k}\|^{2}$, $\|s_{k}\|^{2}\|h_{k}\|$, or $\sigma_{k}\|s_{k}\|^{2}$, with constants independent of the iteration $k$. Using standard assumptions $(A_{1})$–$(A_{5})$, inequalities (38) and (41), and the uniform boundedness of $\{h_{k}\}$, $\{\nabla h_{k}\}$, and $\{H_{k}\}$, we obtain the desired result. □
Lemma 6. Under standard assumptions $(A_{1})$–$(A_{5})$, there exists a positive constant $K_{4}$ such that:

$$\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}^{n}\right\|^{2}\ge K_{4}\,\|h_{k}\|\min\left(\delta_{k},\|h_{k}\|\right). \tag{42}$$

Proof. We consider two cases. Firstly, if the first branch of (20) holds, then the normal Cauchy step attains the full steepest-descent decrease, which is bounded below by a multiple of $\|h_{k}\|^{2}$; this gives (43). Secondly, if the second branch of (20) holds, then the decrease is bounded below by a multiple of $\delta_{k}\|h_{k}\|$; this gives (44). Using standard assumption $(A_{3})$, $\|\nabla h_{k}\|$ is uniformly bounded. From inequalities (19), (43), and (44), and using standard assumption $(A_{3})$, we obtain the desired result. □

From Algorithm 1 and Lemma 6, we have, for all $k$:

$$Pred_{k}\ge\frac{\sigma_{k}}{2}\,K_{4}\,\|h_{k}\|\min\left(\delta_{k},\|h_{k}\|\right). \tag{45}$$
Lemma 7. Under standard assumptions $(A_{1})$–$(A_{5})$, there exists a constant $K_{5}>0$ such that:

$$\bar{q}_{k}(0)-\bar{q}_{k}\left(\bar{s}_{k}^{t}\right)\ge K_{5}\left\|Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right\|\min\left(\Delta_{k},\frac{\left\|Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right\|}{\left\|\bar{B}_{k}\right\|}\right), \tag{46}$$

where $\bar{B}_{k}=Z_{k}^{T}H_{k}Z_{k}$.

Proof. We consider two cases. Firstly, if the first branch of (24) holds, then the tangential Cauchy step attains the full reduced steepest-descent decrease. Secondly, if the second branch of (24) holds, then the decrease is bounded below by a multiple of $\Delta_{k}\left\|Z_{k}^{T}\nabla q_{k}(s_{k}^{n})\right\|$. From inequalities (23), (47), and (48), and using standard assumptions $(A_{1})$–$(A_{5})$, we obtain the desired result. □
The next lemma shows that the FBACTR algorithm cannot loop infinitely without finding an acceptable step.

Lemma 8. Under standard assumptions $(A_{1})$–$(A_{5})$, if there exists $\varepsilon>0$ such that $\|h_{k}\|\ge\varepsilon$, then an acceptable step is found; that is, $Ared_{k}^{j}/Pred_{k}^{j}\ge\alpha_{1}$ for some finite $j$.

Proof. Using (39) and (45), and from $\|h_{k}\|\ge\varepsilon$, we have:

$$\left|\frac{Ared_{k}^{j}}{Pred_{k}^{j}}-1\right|=\frac{\left|Ared_{k}^{j}-Pred_{k}^{j}\right|}{Pred_{k}^{j}}\le\frac{2K_{3}\left\|s_{k}^{j}\right\|^{2}}{K_{4}\,\varepsilon\min\left(\delta_{k}^{j},\varepsilon\right)}.$$

The right-hand side becomes small as the trial step $s_{k}^{j}$ gets rejected and $\delta_{k}^{j}$ is reduced, and eventually we will have:

$$\left|\frac{Ared_{k}^{j}}{Pred_{k}^{j}}-1\right|\le 1-\alpha_{1}.$$

Then the acceptance rule will be met for some finite $j$. This completes the proof. □
Lemma 9. Under standard assumptions $(A_{1})$–$(A_{5})$, if the $j$th trial step of iteration $k$ satisfies

$$\left\|s_{k}^{j}\right\|\le\min\left(1,\frac{(1-\alpha_{1})K_{4}}{2K_{3}}\right)\|h_{k}\|, \tag{49}$$

then it must be accepted.

Proof. The lemma is proved by contradiction: assume that inequality (49) holds but the trial step $s_{k}^{j}$ is rejected. Using inequalities (39), (45), and (49), we then have:

$$1-\alpha_{1}<\left|\frac{Ared_{k}^{j}}{Pred_{k}^{j}}-1\right|\le\frac{2K_{3}\left\|s_{k}^{j}\right\|}{K_{4}\|h_{k}\|}\le 1-\alpha_{1}.$$

This is a contradiction, and therefore the lemma is proved. □
Lemma 10. Under standard assumptions $(A_{1})$–$(A_{5})$, there exists a constant $K_{6}>0$, independent of $k$ or $j$, such that

$$\delta_{k}^{j}\ge K_{6}\|h_{k}\| \tag{50}$$

for all trial steps $j$ of any iteration $k$.

Proof. At any trial iterate of any iteration $k$, we consider two cases. Firstly, if the first trial step is accepted, then $\delta_{k}^{1}\ge\delta_{\min}$, and (51) follows by taking $K_{6}$ small enough. Secondly, if at least one trial step is rejected, then from Lemma 9 we have

$$\left\|s_{k}^{l}\right\|>\min\left(1,\frac{(1-\alpha_{1})K_{4}}{2K_{3}}\right)\|h_{k}\|$$

for all trial steps $l$ that are rejected. Since $s_{k}^{j-1}$ is a rejected trial step, from the previous inequality and Algorithm 2 we have:

$$\delta_{k}^{j}=\tau_{1}\left\|s_{k}^{j-1}\right\|\ge\tau_{1}\min\left(1,\frac{(1-\alpha_{1})K_{4}}{2K_{3}}\right)\|h_{k}\|.$$

From inequality (51) and the above inequality, we obtain the desired result. □

The next lemma shows that, as long as $\|h_{k}\|$ is bounded away from zero, the radius of the trust region is bounded away from zero.
Lemma 11. Under standard assumptions $(A_{1})$–$(A_{5})$, if there exists $\varepsilon>0$ such that $\|h_{k}\|\ge\varepsilon$, then there exists $\delta_{*}>0$ such that:

$$\delta_{k}\ge\delta_{*}.$$

Proof. Let $\delta_{*}=K_{6}\,\varepsilon$; using (50), the proof follows directly. □
In the next section, the convergence of the iteration sequence is studied when the positive parameter is unbounded.

3.3. Convergence When the Positive Parameter Is Unbounded

This section is devoted to the convergence of the iteration sequence when the positive parameter goes to infinity. Notice that we do not require the matrix $[\nabla h(x),\nabla g(x)]$ to have full column rank in standard assumption $(A_{4})$, so we may have other kinds of stationary points, which are defined in the following definitions.
Definition 2. A feasible Fritz John (FFJ) point is a point $x_{*}$ that satisfies the following FFJ conditions:

$$\gamma_{*}\nabla f(x_{*})+\nabla h(x_{*})\lambda_{*}+\nabla g(x_{*})\nu_{*}=0,$$
$$h(x_{*})=0,\qquad g(x_{*})\le 0,\qquad \nu_{*}^{T}g(x_{*})=0,\qquad \nu_{*}\ge 0,$$

where $\gamma_{*}$, $\lambda_{*}$, and $\nu_{*}$ are not all zeros. For more details, see [18]. If $\gamma_{*}\ne 0$, then the point $x_{*}$ is called a KKT point and the FFJ conditions are called KKT conditions.
Definition 3. An infeasible Fritz John (IFJ) point is an infeasible point $x_{*}$ that is stationary for the measure of infeasibility; it satisfies the IFJ conditions stated in [18], with multipliers $\gamma_{*}$, $\lambda_{*}$, and $\nu_{*}$ that are not all zeros. For more details, see [18]. If $\gamma_{*}\ne 0$, then the point $x_{*}$ is called an infeasible KKT point and the IFJ conditions are called infeasible KKT conditions.
Lemma 12. Under standard assumptions $(A_{1})$–$(A_{5})$, a subsequence of the iteration sequence satisfies the IFJ conditions if the following conditions are satisfied:
- (i) …;
- (ii) …;
- (iii) ….

Proof. For simplification and without loss of generality, let $\{k\}$ represent the whole sequence. Assume that $s_{k}^{n}$ is the solution of subproblem (18); then it satisfies Equation (53). It also satisfies the right-hand side of condition (iii); that is, Equation (54). We consider two cases. In the first case, Equation (53) gives the conclusion directly; in the second case, multiplying Equation (53) from the left and subtracting the result from Equation (54) gives the same conclusion. That is, in both cases, (55) holds.

Since the binding and violated constraints do not all vanish in the limit, multipliers that are not all zeros can be constructed, and there exists a sequence of such multipliers along which the IFJ conditions hold in the limit with $\gamma_{*}=0$; see Definition 3. □
Lemma 13. Under standard assumptions $(A_{1})$–$(A_{5})$, a subsequence of the iteration sequence satisfies the FFJ conditions if the following conditions are satisfied:
- (i) …;
- (ii) for all …; and
- (iii) ….

Proof. For simplification and without loss of generality, let $\{k\}$ represent the whole sequence. Notice that Equation (56) is equivalent to condition (iii), where the vector in (56) is a unit vector in the direction of the quantity in question. Let $\bar{s}$ be a solution of problem (57). Now two cases are considered. Firstly, if the multiplier term vanishes, then, using (58), the conclusion follows directly. Secondly, otherwise, from (56) and the fact that $\bar{s}$ is a solution of problem (57), we obtain the corresponding limit; multiplying Equation (58) from the left and subtracting it from this limit yields the required equation. Hence, in both cases, the required limit holds. The remainder of the proof follows using cases similar to those in Lemma 12. □
Lemma 14. Under standard assumptions $(A_{1})$–$(A_{5})$, if $k$ represents the index of an iteration at which $\sigma_{k}$ is increased, then we have:

$$\sigma_{k}\,\|h_{k}\|\min\left(\delta_{k},\|h_{k}\|\right)\le K_{7}, \tag{59}$$

where $K_{7}$ is a positive constant.

Proof. Since $\sigma_{k}$ is increased, from Algorithm 1 we have the defining equality for the new value of $\sigma_{k}$. From (42), (50), and this equality, we can bound $\sigma_{k}\,\|h_{k}\|\min(\delta_{k},\|h_{k}\|)$ by quantities that are uniformly bounded. From the Cauchy–Schwarz inequality, standard assumptions $(A_{1})$–$(A_{5})$, and the boundedness of $\{\lambda_{k}\}$, the proof is completed. □
Lemma 15. Under standard assumptions $(A_{1})$–$(A_{5})$, if $\sigma_{k}\to\infty$ and there is an infinite subsequence of the iteration sequence at which $\sigma_{k}$ is increased, then:

$$\lim_{k\to\infty}\|h_{k}\|\min\left(\delta_{k},\|h_{k}\|\right)=0.$$

Proof. From (59) and the unboundedness of $\sigma_{k}$, the proof is completed. □
Theorem 1. Under standard assumptions $(A_{1})$–$(A_{5})$, if $\sigma_{k}\to\infty$ as $k\to\infty$, then

$$\lim_{k\to\infty}\|h_{k}\|=0.$$

Lemma 16. Under standard assumptions $(A_{1})$–$(A_{5})$, if there exists a subsequence of indices $\{k_{i}\}$ indexing iterates that satisfy $\|h_{k_{i}}\|\ge\varepsilon$ for all $i$, and $r_{k_{i}}\to\infty$ as $i\to\infty$, then a subsequence of the iteration sequence indexed by $\{k_{i}\}$ satisfies the IFJ conditions in the limit.
Proof. For simplification and without loss of generality, let $\{k\}$ denote the whole sequence $\{k_{i}\}$. This lemma is proved by contradiction; therefore, we suppose that no subsequence of the sequence satisfies the IFJ conditions in the limit. Using Lemma 12, we have, for all $k$, a lower bound with some $\varepsilon_{0}>0$. From (55), we have a corresponding bound for some constant, and hence inequality (60) holds. Since the first factor converges to zero and the other factor is bounded, we can write the limit (61). Therefore, for $k$ sufficiently large, inequality (62) holds. From Algorithm 3, $r_{k}$ is unbounded only if there exists an infinite subsequence of indices at which (62) holds. Since $r_{k}\to\infty$, there is an infinite number of acceptable iterates at which (62) holds, and from the way of updating $r_{k}$, we have $r_{k}\to\infty$ as $k\to\infty$. This gives a contradiction unless the associated sequence is bounded and hence convergent. We consider two cases. Firstly, if the limit is nonzero, then we have (63); using (63) and standard assumption $(A_{3})$, the left-hand side of inequality (62) goes to infinity while the right-hand side tends to zero, which is a contradiction in this case. Secondly, if the limit is zero, then the corresponding quantity tends to zero as $k\to\infty$, and, similarly to the first case, we again reach a contradiction. The lemma is proved. □
Lemma 17. Under standard assumptions $(A_{1})$–$(A_{5})$, if $r_{k}\to\infty$ as $k\to\infty$ and there exists a subsequence of iterates indexed $\{k_{i}\}$ along which the feasibility residual vanishes, then a subsequence of the sequence of iterates indexed by $\{k_{i}\}$ satisfies the FFJ conditions in the limit.

Proof. Without loss of generality, let $\{k\}$ denote the whole iteration sequence $\{k_{i}\}$, to simplify. This lemma is proved by contradiction, and so we suppose that no subsequence satisfies the FFJ conditions in the limit. From condition (iii) of Lemma 13, for all $k$ sufficiently large there exists a constant $\varepsilon_{0}>0$ such that inequality (64) holds. The following two cases are considered.

Firstly, if the reduced gradient stays bounded away from zero, then there is an immediate contradiction with inequality (64). Secondly, otherwise, from subproblem (22) we have Equation (65), where the vector involved represents the Lagrange multiplier associated with the trust-region constraint. From (65) and (46), we have inequality (66). Since $r_{k}\to\infty$ as $k\to\infty$, there exists an infinite number of acceptable steps such that inequality (62) holds. However, inequality (62) can be written in the form (67). From (66) and (67), the right-hand side tends to zero as $k\to\infty$, while the multiplier sequence is bounded along the subsequence. Therefore, either the limiting direction lies in the null space of the constraint Jacobian, or the multiplier of the trust-region constraint tends to zero.

The first possibility occurs only when the reduced gradient tends to zero and the limiting direction lies in the null space of the matrix, which contradicts assumption (64); this means that the FFJ conditions are satisfied in the limit. The second possibility is that, as $k\to\infty$, the trust-region multiplier vanishes; from (65), the full stationarity condition then holds in the limit, which again contradicts assumption (64). That is, the FFJ conditions are satisfied in the limit. □
In the next section, the convergence of the iteration sequence is studied when $\{\sigma_{k}\}$ is bounded.

3.4. Global Convergence When $\{\sigma_{k}\}$ Is Bounded

Our analysis in this section is continued under the supposition that $\{\sigma_{k}\}$ is bounded. Therefore, let $\bar{k}$ be an integer such that, for all $k\ge\bar{k}$,

$$\sigma_{k}=\bar{\sigma}. \tag{68}$$

From assumptions $(A_{2})$ and $(A_{3})$, and using (68), for all $k$ there is a constant $K_{8}>0$, independent of $k$, such that inequality (69) holds.
Lemma 18. Under standard assumptions $(A_{1})$–$(A_{5})$, there exists a constant $K_{9}>0$ such that inequality (70) holds.

Proof. From inequality (35) and the fact that $\sigma_{k}=\bar{\sigma}$ for all $k\ge\bar{k}$, we obtain a bound on the quantity in question. From standard assumptions $(A_{1})$–$(A_{3})$, the fact that $\{\lambda_{k}\}$ is bounded, and using (69), there exists $K_{9}>0$ such that inequality (70) holds. □
Lemma 19. Under standard assumptions $(A_{1})$–$(A_{5})$, for all $k$ we have:

$$Pred_{k}\ge K_{5}\left\|Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right\|\min\left(\Delta_{k},\frac{\left\|Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right\|}{\left\|\bar{B}_{k}\right\|}\right)-K_{9}\,\|h_{k}\|+\frac{\bar{\sigma}}{2}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right).$$

Proof. Using inequalities (46), (68), and (70), we obtain the desired result. □
Lemma 20. Under standard assumptions $(A_{1})$–$(A_{5})$, if $\left\|Z_{k}^{T}\nabla q_{k}(s_{k}^{n})\right\|\ge\varepsilon>0$ and $\|h_{k}\|\le\tau\delta_{k}$, where $\tau$ is a positive constant given by (72), then there exists a constant $K_{10}>0$ such that:

$$Pred_{k}\ge K_{10}\,\delta_{k}+\frac{\bar{\sigma}}{2}\left(\|h_{k}\|^{2}-\left\|h_{k}+\nabla h_{k}^{T}s_{k}\right\|^{2}\right).$$

Proof. Since $\|s_{k}^{n}\|\le\delta_{k}$ and $\|h_{k}\|\le\tau\delta_{k}$, we can bound $\Delta_{k}$ from below by a multiple of $\delta_{k}$. We consider two cases. Firstly, if the minimum in Lemma 19 is attained at the second argument, then from inequalities (69) and (35) and the value of $\tau$ in (72), we obtain (74); this means that (75) holds. From inequalities (71), (74), and (75), and assumption $(A_{5})$, we obtain the stated bound; since $\|h_{k}\|\le\tau\delta_{k}$, the result follows. Secondly, if the minimum is attained at $\Delta_{k}$, then, using inequality (71), we obtain the same form of bound with a different constant. Taking $K_{10}$ to be the smaller of the two constants, the result follows. □

From the previous lemma, we notice that, whenever $\left\|Z_{k}^{T}\nabla q_{k}(s_{k}^{n})\right\|\ge\varepsilon$ and $\|h_{k}\|\le\tau\delta_{k}$, where $\tau$ is given by (72), at any iteration $k$ the value of the penalty parameter $\sigma_{k}$ does not need to be increased. That is, the penalty parameter $\sigma_{k}$ is increased only when $\|h_{k}\|>\tau\delta_{k}$.
Lemma 21. Under standard assumptions $(A_{1})$–$(A_{5})$, if $\sigma_{k}$ is increased at the $k$th iteration, then there is a positive constant $K_{11}$ such that:

$$\sigma_{k}\,\|h_{k}\|^{2}\le K_{11}\,\delta_{k}. \tag{76}$$

Proof. From Algorithm 1, we have the defining equality for the increased value of $\sigma_{k}$, where $\sigma_{k}$ is increased at the iteration and $\|h_{k}\|>\tau\delta_{k}$. From this equation, (42), (46), (68), and (70), we obtain the stated bound. Using assumption $(A_{3})$, we get the desired result. □
Lemma 22. Under standard assumptions $(A_{1})$–$(A_{5})$ and at the $j$th trial iterate of any iteration $k$, if $\sigma_{k}$ is increased, then there is a constant $K_{12}>0$ such that

$$\sigma_{k}\,\|h_{k}\|\le K_{12}.$$

Proof. From (50) and (76), we get the desired result. □
Lemma 23. Under standard assumptions $(A_{1})$–$(A_{5})$, if $\sigma_{k}\to\infty$, then

$$\lim_{i\to\infty}\left\|h_{k_{i}}\right\|=0,$$

where $\{k_{i}\}$ is a subsequence indexing the iterates at which $\sigma_{k}$ is increased.

Proof. From Lemma 22, we obtain the desired result. □
3.5. Main Results for Global Convergence
In this section, the main global convergence results for the FBACTR algorithm are introduced.
Theorem 2. Under standard assumptions $(A_{1})$–$(A_{5})$, the sequence of iterates generated by the FBACTR algorithm satisfies:

$$\lim_{k\to\infty}\|h_{k}\|=0.$$

Proof. This theorem is proved by contradiction, and so we suppose that $\lim_{k\to\infty}\|h_{k}\|\ne 0$. This means that there exists an infinite subsequence of indices indexing iterates that satisfy $\|h_{k}\|\ge\varepsilon$ for some $\varepsilon>0$. By Lemma 8, there exists an infinite sequence of acceptable steps; without loss of generality, and to simplify, we suppose that all members of the subsequence are acceptable iterates. Two cases are now considered.

Firstly, if $\{\sigma_{k}\}$ is unbounded, then there exists an infinite number of iterates at which the penalty parameter $\sigma_{k}$ is increased. So, for $k$ sufficiently large and from Lemma 23, let $\{k_{i}\}$ and $\{k_{j}\}$ be two sequences that have no common elements, and let $k_{i}$ and $k_{j}$ be two consecutive iterates at which $\sigma_{k}$ is increased, with $k_{i}\le k<k_{j}$. The penalty parameter $\sigma_{k}$ is the same for all iterates that lie between $k_{i}$ and $k_{j}$. Since all the iterates of the subsequence are acceptable, for all such $k$ we have $Ared_{k}\ge\alpha_{1}Pred_{k}$. Using inequality (45) and summing over all acceptable iterates that lie between $k_{i}$ and $k_{j}$, we obtain a telescoping lower bound on the total decrease of the merit function, with a constant that is as in (52) but with the tolerance replaced by $\varepsilon$. Hence, since $\sigma_{k}\to\infty$, for $k_{j}$ sufficiently large this lower bound contradicts Lemma 23 unless $\|h_{k}\|\to 0$.

Secondly, if $\{\sigma_{k}\}$ is bounded, then there exist an integer $\bar{k}$ and a value $\bar{\sigma}$ such that $\sigma_{k}=\bar{\sigma}$ for all $k\ge\bar{k}$. Hence, for any acceptable iterate $k\ge\bar{k}$, and using (45), the merit function decreases by at least a fixed positive amount. Then, summing over the acceptable iterates and using the above inequality, inequality (80), and Lemma 11, the total decrease of the merit function grows without bound. However, this contradicts the fact that the merit function is bounded below when $\{\sigma_{k}\}$ is bounded. Therefore, we have a contradiction in both cases; hence, the supposition is not correct, and this proves the theorem. □
Theorem 3. Under standard assumptions $(A_{1})$–$(A_{5})$, the sequence of iterates generated by the FBACTR algorithm satisfies:

$$\liminf_{k\to\infty}\left(\left\|Z_{k}^{T}\nabla q_{k}\left(s_{k}^{n}\right)\right\|+\|h_{k}\|\right)=0. \tag{81}$$

Proof. The proof of (82) is by contradiction; so, for all $k$, assume that $\left\|Z_{k}^{T}\nabla q_{k}(s_{k}^{n})\right\|+\|h_{k}\|\ge\varepsilon$. Let $\{k_{i}\}$ be an infinite subsequence at which $\|h_{k}\|\le\tau\delta_{k}$, where $\tau$ is defined in (72). However, since $\|h_{k}\|\to 0$ by Theorem 2, for $k$ sufficiently large we have $\left\|Z_{k}^{T}\nabla q_{k}(s_{k}^{n})\right\|\ge\varepsilon/2$.

Let $j$ be any trial iterate belonging to an iteration of $\{k_{i}\}$, and consider two cases.

Firstly, if $\{\sigma_{k}\}$ is unbounded, then for the rejected trial step $j-1$ of iteration $k$ we have a lower bound on $\sigma_{k}$ times the infeasibility measure. Since the trial step is rejected, using inequalities (45) and (41), the corresponding bound forces, for all $k$ with $\sigma_{k}$ sufficiently large, a lower bound on the trust-region radius. Therefore, for all such $k$, from Algorithm 2, the radius cannot shrink to zero, which gives a contradiction in this case.

Secondly, if the sequence $\{\sigma_{k}\}$ is bounded, then there exist an integer $\bar{k}$ and a value $\bar{\sigma}$ such that $\sigma_{k}=\bar{\sigma}$ for all $k\ge\bar{k}$. Consider a trial step $j$ of an iteration $k\ge\bar{k}$; we consider three cases:
- (i) If the first trial step is accepted, then $\delta_{k+1}\ge\delta_{\min}$ (see Algorithm 2); this means that $\delta_{k}$ is bounded away from zero in this case.
- (ii) If the first trial step is rejected and $\|h_{k}\|\le\tau\delta_{k}^{l}$ holds for all rejected trial steps $l$, then, for all rejected trial steps of iteration $k$, Lemma 20 and Algorithm 2 give a lower bound on $\delta_{k}^{j}$; that is, $\delta_{k}$ is also bounded away from zero in this case.
- (iii) If the first trial step is rejected and $\|h_{k}\|\le\tau\delta_{k}^{l}$ does not hold for all $l$, then there exists an integer $\bar{l}$ such that the inequality holds for all $l\ge\bar{l}$ and fails for $l<\bar{l}$. As in case (ii), we can bound the radii for $l\ge\bar{l}$, and, from Algorithm 2, the earlier radii are bounded in terms of the later ones. From Lemma 20, if $\|h_{k}\|>\tau\delta_{k}^{l}$ and the trial step is rejected, then a lower bound of the same form holds. This, together with (83) and (84), implies that $\delta_{k}$ is bounded away from zero in this case as well.

That is, in all three cases $\delta_{k}$ is bounded away from zero, which leads to a contradiction. Hence, all the iterates satisfy $\|h_{k}\|\le\tau\delta_{k}$ for $k$ sufficiently large, and, from Lemma 20, the value of the penalty parameter does not need to be increased; hence $\{\sigma_{k}\}$ is bounded. Using Lemma 20, for $k$ sufficiently large the trust-region radius is then bounded below, so the radius is not able to go to zero, and this leads to a contradiction, because at any iteration whose previous step was accepted we have $\delta_{k}\ge\delta_{\min}$; that is, $\delta_{k}$ is bounded in this case. If there exists at least one rejected trial step, then, from Lemmas 5 and 20, the rejected trial step satisfies a bound which, via Algorithm 2, keeps $\delta_{k}$ bounded away from zero, and this contradicts (85). That is, the supposition is wrong, and hence (81) holds; the proof is completed. □

From the above two theorems, we conclude that, given any tolerance $\varepsilon>0$, the algorithm terminates, because the stopping criterion is satisfied for some finite $k$.
4. Numerical Results and Comparisons
In this section, we present an extensive variety of numerical NBLP problems to illustrate the validity of the proposed FBACTR Algorithm 5 for solving the NBLP problem. The proposed FBACTR algorithm was tested on 16 benchmark examples given in [4,7,38,39,40].
Ten independent runs with distinct initial starting points are performed for every test example to observe the consistency of the results. Statistical results for all examples are summarized in Table 1, which shows that the results found by the FBACTR Algorithm 5 are approximately equal or equal to those of the compared algorithm of method [11] and those in the literature.
For comparison, the corresponding results for the mean number of iterations (iter), the mean number of function evaluations (nfunc), and the mean CPU time in seconds (CPUs) obtained by the methods in [11,41,42], respectively, are included and summarized in Table 2. These results show that the results of the FBACTR Algorithm 5 are approximately equal or equal to those of the compared algorithms in the literature.
It is evident from the results that our approach is able to handle NBLP problems whether or not the upper and lower levels are convex, and that the computed results converge to an optimal solution that is similar or close to the optimum reported in the literature. Finally, it is obvious from the comparison between the solutions obtained using the FBACTR Algorithm 5 and those in the literature that the FBACTR Algorithm 5 is capable of finding the optimal solution of some problems with a small number of iterations, a small number of function evaluations, and less time.
We obtained the numerical results of the FBACTR Algorithm 5 using MATLAB (R2013a) (8.2.0.701) 64-bit (win64) and a starting point $x_{0}$. The following parameter setting is used: ….