Article

An Efficient Penalty Method without a Line Search for Nonlinear Optimization

Department of Mathematics, Ferhat Abbas University of Setif-1, Setif 19137, Algeria
Axioms 2024, 13(3), 176; https://doi.org/10.3390/axioms13030176
Submission received: 18 January 2024 / Revised: 1 March 2024 / Accepted: 3 March 2024 / Published: 7 March 2024
(This article belongs to the Special Issue Numerical Analysis and Optimization)

Abstract

In this work, we integrate new approximate functions into the logarithmic penalty method to solve nonlinear optimization problems. First, we determine the direction by Newton's method. Then, we establish an efficient algorithm to compute the displacement step along this direction. Finally, we illustrate the superior performance of our new approximate functions with respect to the line search approach through numerical experiments on numerous collections of test problems.

1. Introduction

Nonlinear optimization is a fundamental subject in the modern optimization literature. It focuses on optimizing an objective function in the presence of inequality and/or equality constraints. An optimization problem is linear if all the functions involved are linear; otherwise, it is called a nonlinear optimization problem.
This research field is motivated by the fact that it arises in various problems encountered in practice, such as business administration, economics, agriculture, mathematics, engineering, and physical sciences.
To our knowledge, Frank and Wolfe were the pioneers of nonlinear optimization. They established a powerful algorithm in [1] to solve such problems. Later, another method was proposed in [2], based on applying the Simplex method to the nonlinear problem after converting it into a linear one.
This pioneer work inspired many authors to propose and develop several methods and techniques to solve this class of problems. We refer to [3,4] for interior point methods to find the solution of nonlinear optimization problems with a high dimension.
To make this theory applicable in practice, other methods were designed building on the linear optimization literature, among them robust algorithms with polynomial complexity. In this vein, Khachiyan succeeded in 1979 in introducing a new ellipsoid method from approaches originally applied to nonlinear optimization.
Interior point methods outperform Simplex ones, and they have recently been the subject of several monographs, including Bonnans and Gilbert [5], Evtushenko and Zhadan [6], Nesterov and Nemirovskii [7], Wright [8], and Ye [9].
Interior point methods can be classified into three different groups as follows: projective methods and their alternatives as in Powell [10] and Rosen [11,12], central trajectory methods (see Ouriemchi [13] and Forsgren et al. [14]), and barrier/penalty methods, where majorant functions were originally proposed by Crouzeix and Merikhi [15] to solve a semidefinite optimization problem. Inspired by this work, Menniche and Benterki [16] and Bachir Cherif and Merikhi [17] applied this idea to linear and nonlinear optimizations, respectively.
A majorant function for the penalty method in convex quadratic optimization was proposed by Chaghoub and Benterki [18]. On the other hand, A. Leulmi et al. [19,20] used new minorant functions for semidefinite optimization, and this idea was extended to linear programming by A. Leulmi and S. Leulmi in [21].
As far as we know, our new approximate functions have not been studied in the nonlinear optimization literature. These approximate functions are more convenient and efficient than the line search method for rapidly computing the displacement step.
Therefore, in this work, we aim to solve a nonlinear optimization problem by building on these prior efforts. We propose a straightforward and effective barrier penalty method using new minorant functions.
More precisely, we first introduce the position of the problem and its perturbed problem with the results of convergence in Section 2 and Section 3 of our paper. Then, in Section 4, we establish the solution of the perturbed problem by finding new minorant functions. Section 5 is devoted to presenting a concise description of the algorithm and to illustrating the outperformance of our new approach by carrying out a simulation study. Finally, we summarize our work in the conclusion.
Throughout this paper, the following notations are adopted. Let $\langle \cdot, \cdot \rangle$ and $\| \cdot \|$ denote the scalar product and the Euclidean norm, respectively, given by
\[ \langle x, y \rangle = x^T y = \sum_{i=1}^{n} x_i y_i, \qquad x, y \in \mathbb{R}^n, \]
and
\[ \|x\| = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^{n} x_i^2}. \]

2. The Problem

We aim to present an algorithm for solving the following optimization problem:
\[ (P) \qquad \begin{cases} \min f(x) \\ Ax = b \\ x \geq 0, \end{cases} \]
where $b \in \mathbb{R}^m$ and $A \in \mathbb{R}^{m \times n}$ is a full-rank matrix with $m < n$.
For this purpose, we need the following hypothesis:
Hypothesis 1.
f is nonlinear, twice continuously differentiable, and convex on $L$, where $L = \{ x \in \mathbb{R}^n : Ax = b,\ x \geq 0 \}$ is the set of feasible solutions of (P).
Hypothesis 2.
(P) satisfies the interior point condition (IPC); i.e., there exists $x^0 > 0$ such that $Ax^0 = b$.
Hypothesis 3.
The set of optimal solutions of (P) is nonempty and bounded.
Notice that these conditions are standard in this context. We refer to [17,20].
If $x^*$ is an optimal solution of (P), then there exist two Lagrange multipliers $p^* \in \mathbb{R}^m$ and $q^* \in \mathbb{R}^n$ such that
\[ \nabla f(x^*) + A^T p^* = q^* \geq 0, \qquad Ax^* = b, \qquad \langle q^*, x^* \rangle = 0. \]
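These optimality conditions are easy to verify numerically for a candidate solution. The sketch below is our own illustration (the helper `kkt_residuals` and the toy instance are assumptions, not part of the paper):

```python
import numpy as np

def kkt_residuals(grad_f, A, b, x, p):
    """Residuals of the optimality conditions of (P):
    q = grad_f(x) + A^T p must be >= 0, A x = b, and <q, x> = 0."""
    q = grad_f(x) + A.T @ p
    return (min(q.min(), 0.0),           # dual feasibility violation
            np.linalg.norm(A @ x - b),   # primal feasibility violation
            abs(q @ x))                  # complementarity violation

# Hypothetical toy instance: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
grad_f = lambda x: np.array([1.0, 1.0])
x_star = np.array([0.5, 0.5])   # an optimal point
p_star = np.array([-1.0])       # its multiplier, so q* = 0
res = kkt_residuals(grad_f, A, b, x_star, p_star)
```

All three residuals vanish at an optimal pair $(x^*, p^*)$.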

3. Formulation of the Perturbed Problem of (P)

Let us first consider the function ψ defined on R × R n by the following:
ψ ( η , x ) = f x + i = 1 n ξ ( η , x i ) if x 0 , A x = b + if not ,
where ξ : R 2 , + is a convex, lower semicontinuous and proper function given by the following:
ξ ( η , α ) = η ln η η ln α if α > 0 and η > 0 , 0 if α 0 and η = 0 , + otherwise .
Thus, ψ is a proper, convex, and lower semicontinuous function.
Furthermore, the function g defined by
\[ g(\eta) = \inf_{x \in \mathbb{R}^n} \left\{ \psi_\eta(x) = f(x) + \sum_{i=1}^{n} \xi(\eta, x_i) \right\} \]
is convex. Notice that for $\eta = 0$ the perturbed problem $(P_\eta)$ coincides with the initial problem (P); then $f^* = g(0)$.
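For $\eta > 0$, the penalized objective $\psi_\eta$ can be evaluated directly from this definition. The sketch below is a minimal illustration under assumed toy data (the function names and the instance are ours):

```python
import numpy as np

def psi(eta, x, f, A, b, tol=1e-12):
    """Penalized objective psi_eta(x) = f(x) + sum_i (eta*ln(eta) - eta*ln(x_i))
    on the relative interior of the feasible set, +inf elsewhere (eta > 0)."""
    x = np.asarray(x, dtype=float)
    if np.any(x <= 0) or np.linalg.norm(A @ x - b) > tol:
        return float('inf')
    return f(x) + x.size * eta * np.log(eta) - eta * np.sum(np.log(x))

# Hypothetical data: f(x) = ||x||^2 on {x1 + x2 = 2, x > 0}.
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
f = lambda x: x @ x
```

For example, at $x = (1, 1)$ and $\eta = 1$ the barrier terms vanish and $\psi_\eta(x) = f(x)$.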

3.1. Existence and Uniqueness of Optimal Solution

To show that the perturbed problem $(P_\eta)$ has a unique optimal solution, it is sufficient to demonstrate that the recession cone of $\psi_\eta$ reduces to $\{0\}$.
Proof. 
For a fixed $\eta$, the function $\psi_\eta$ is proper, convex, and lower semicontinuous. The asymptotic function of $\psi_\eta$ is defined by
\[ (\psi_\eta)_\infty(d) = \lim_{\alpha \to +\infty} \frac{\psi_\eta(x_0 + \alpha d) - \psi_\eta(x_0)}{\alpha}; \]
thus, the asymptotic functions of $f$ and $\psi_\eta$ satisfy the relation
\[ (\psi_\eta)_\infty(d) = \begin{cases} f_\infty(d) & \text{if } d \geq 0,\ Ad = 0, \\ +\infty & \text{otherwise.} \end{cases} \]
Moreover, Hypothesis 3 is equivalent to
\[ \{ d \in \mathbb{R}^n : f_\infty(d) \leq 0,\ d \geq 0,\ Ad = 0 \} = \{0\}. \]
Then
\[ \{ d \in \mathbb{R}^n : (\psi_\eta)_\infty(d) \leq 0 \} = \{0\}, \]
and from [17], for each non-negative real number $\eta$, the strictly convex problem $(P_\eta)$ admits a unique optimal solution, denoted $x^*_\eta$. The solution of problem (P) is the limit of the sequence of solutions of the perturbed problems $(P_\eta)$ as $\eta$ tends to 0. □

3.2. Convergence of the Solution

Now, we are in a position to state the convergence result of $(P_\eta)$ to (P), which is proved in Lemma 1 of [18].
Let $\eta > 0$; for all $x \in L$, we define $\psi(x, \eta) = f_\eta(x)$.
Lemma 1
([18]). Let $\eta > 0$. If the perturbed problem $(P_\eta)$ admits an optimal solution $x_\eta$ such that $\lim_{\eta \to 0} x_\eta = x^*$, then the problem (P) admits $x^*$ as an optimal solution.
We use the classical prototype of penalty methods. We begin the process with $(x^0, \eta_0) \in \widetilde{L} \times (0, +\infty)$, where
\[ \widetilde{L} = \{ x \in \mathbb{R}^n : x > 0,\ Ax = b \}, \]
and each iteration is divided into the following steps:
1. Select $\eta_{k+1} \in (0, \eta_k)$.
2. Compute an approximate solution $x_{k+1}$ of $(P_{\eta_k})$ such that $\psi(\eta_k, x_{k+1}) < \psi(\eta_k, x_k)$.
Remark 1.
If the values of the objective functions of the problem (P) and the perturbed problem ( P η ) are equal and finite, then (P) will have an optimal solution if and only if ( P η ) has an optimal solution.
The iterative process stops when we obtain an acceptable approximation of g ( 0 ) .
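The two-step scheme above can be sketched as an outer loop. In the sketch below (our own illustration), the inner Newton stage of the paper is abstracted behind a callable, and the closed-form subproblem used as an example is a hypothetical stand-in:

```python
def penalty_prototype(solve_subproblem, x0, eta0=1.0, sigma=0.5,
                      eta_min=1e-6):
    """Outer loop of the penalty scheme: repeatedly pick
    eta_{k+1} in (0, eta_k) and compute an approximate minimizer of
    psi(eta_k, .). `solve_subproblem(eta, x)` stands in for the
    Newton-based inner stage described later in the paper."""
    x, eta = x0, eta0
    while eta >= eta_min:
        x = solve_subproblem(eta, x)
        eta *= sigma          # eta_{k+1} = sigma * eta_k in (0, eta_k)
    return x

# Illustrative subproblem with a closed form: min_{x>0} x - eta*ln(x)
# has the exact minimizer x = eta, so the iterates tend to 0+.
xs = penalty_prototype(lambda eta, x: eta, x0=1.0)
```

As $\eta$ is driven to 0, the subproblem solutions approach the solution of the original problem.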

4. Computational Resolution of the Perturbed Problem

Our approach to the numerical solution of the perturbed problem $(P_\eta)$ consists of two stages. In the first, we calculate the descent direction using the Newton approach; in the second, we propose an efficient new minorant-function approach that computes the displacement step more easily and quickly than the line search method.

4.1. The Descent Direction

As $(P_\eta)$ is strictly convex, the necessary and sufficient optimality conditions state that $x_\eta$ is an optimal solution of $(P_\eta)$ if and only if it satisfies the nonlinear system
\[ \nabla \psi_\eta(x_\eta) = 0. \]
Using the Newton approach, a penalty method is provided to solve the above system, where the vector $x_{k+1}$ at each iteration is given by $x_{k+1} = x_k + \alpha_k d_k$.
To obtain the Newton descent direction $d$, it is necessary to solve the following convex quadratic optimization problem:
\[ \min_d \left\{ \langle \nabla \psi_\eta(x), d \rangle + \frac{1}{2} \langle \nabla^2 \psi_\eta(x) d, d \rangle : Ad = 0 \right\} = \min_d \{ H(\eta, x, d) : Ad = 0 \}, \]
where $x \in \widetilde{L}$ and
\[ \psi_\eta(x) = f(x) + n \eta \ln \eta - \eta \sum_{i=1}^{n} \ln x_i, \qquad \nabla \psi_\eta(x) = \nabla f(x) - \eta X^{-1} e, \qquad \nabla^2 \psi_\eta(x) = \nabla^2 f(x) + \eta X^{-2}, \]
\[ H(\eta, x, d) = \langle \nabla \psi_\eta(x), d \rangle + \frac{1}{2} \langle \nabla^2 \psi_\eta(x) d, d \rangle, \]
with the diagonal matrix $X = \operatorname{diag}(x_i)$, $i = 1, \ldots, n$, and $e = (1, \ldots, 1)^T$.
The Lagrangian is given by
\[ L(x, s) = \langle \nabla f(x) - \eta X^{-1} e, d \rangle + \frac{1}{2} \langle (\nabla^2 f(x) + \eta X^{-2}) d, d \rangle + \langle Ad, s \rangle, \]
where $s \in \mathbb{R}^m$ is the Lagrange multiplier. It suffices to solve the linear system of $n + m$ equations
\[ \begin{cases} \nabla f(x) - \eta X^{-1} e + (\nabla^2 f(x) + \eta X^{-2}) d + A^T s = 0, \\ Ad = 0, \end{cases} \]
that is,
\[ \begin{cases} (\nabla^2 f(x) + \eta X^{-2}) d + A^T s = \eta X^{-1} e - \nabla f(x), \\ Ad = 0. \end{cases} \tag{3} \]
It is simple to prove that system (3) is non-singular. Multiplying the first equation by $d^T$, we obtain
\[ d^T (\nabla^2 f(x) + \eta X^{-2}) d + d^T A^T s = d^T (\eta X^{-1} e - \nabla f(x)), \qquad Ad = 0. \]
As $d^T A^T s = (Ad)^T s = 0$, we obtain
\[ \langle \nabla^2 f(x) d, d \rangle + \langle \nabla f(x), d \rangle = \eta \left( \langle X^{-1} d, e \rangle - \| X^{-1} d \|^2 \right). \]
The system can also be written as
\[ \begin{cases} X \nabla^2 f(x) X (X^{-1} d) + \eta I (X^{-1} d) + X A^T s = \eta e - X \nabla f(x), \\ A X (X^{-1} d) = 0. \end{cases} \]
Thus, the Newton descent direction is obtained.
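Concretely, the linear system (3) can be assembled and solved directly. The sketch below is our own plain-NumPy illustration on a hypothetical toy instance, not the paper's MATLAB implementation:

```python
import numpy as np

def newton_direction(hess_f, grad_f, A, x, eta):
    """Solve the (n+m) x (n+m) linear system for the Newton direction d
    and multiplier s:
        (hess_f(x) + eta*X^-2) d + A^T s = eta*X^-1 e - grad_f(x),
        A d = 0.
    """
    n, m = x.size, A.shape[0]
    Xinv = np.diag(1.0 / x)
    H = hess_f(x) + eta * Xinv @ Xinv
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([eta / x - grad_f(x), np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Hypothetical instance: f(x) = 0.5*||x||^2 with one constraint x1 + x2 = 2.
A = np.array([[1.0, 1.0]])
d, s = newton_direction(lambda x: np.eye(2), lambda x: x,
                        A, np.array([0.5, 1.5]), eta=0.1)
```

The computed direction satisfies $Ad = 0$, so steps along $d$ preserve the equality constraints.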
Throughout this paper, we take x instead of x η .

4.2. Computation of the Displacement Step

This section deals with the numerical solution of the displacement step. We give a brief highlight of the line search methods used in nonlinear optimization problems. Then, we collect some important results of approximate function approaches applied to both semidefinite and linear programming problems. Finally, we propose our new approximate function method for the nonlinear optimization problem (P).

4.2.1. Line Search Methods

Line search methods consist of determining a displacement step $\alpha_k$ that ensures a sufficient decrease in the objective at each iteration $x_{k+1} = x_k + \alpha_k d_k$, with $\alpha_k > 0$, along the descent direction $d_k$; in other words, they solve the one-dimensional problem
\[ \varphi(\alpha_k) = \min_{\alpha > 0} \psi_\eta(x_k + \alpha d_k). \]
The disadvantage of this approach is that the computed $\alpha$ is not necessarily optimal, so the feasibility of $x_{k+1}$ is not guaranteed.
The line search techniques of Wolfe, Goldstein–Armijo, and Fibonacci are the most widely used. However, their computational cost is generally high, which motivated our search for an alternative.
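For contrast, the kind of procedure the paper seeks to avoid, a backtracking Armijo search, can be sketched generically as follows (our own illustration; note that every trial step costs one evaluation of $\psi_\eta$, which is the source of the computational volume):

```python
def armijo_backtracking(phi, phi_prime0, alpha0=1.0, c=1e-4, rho=0.5,
                        max_iter=50):
    """Classical Armijo backtracking on the one-dimensional function
    phi(alpha) = psi_eta(x + alpha*d): shrink alpha until the
    sufficient-decrease condition phi(a) <= phi(0) + c*a*phi'(0) holds."""
    alpha, phi0 = alpha0, phi(0.0)
    for _ in range(max_iter):
        if phi(alpha) <= phi0 + c * alpha * phi_prime0:
            return alpha
        alpha *= rho   # each rejection costs a full objective evaluation
    return alpha

# Example on the convex model phi(a) = (a - 0.3)**2 with phi'(0) = -0.6.
step = armijo_backtracking(lambda a: (a - 0.3) ** 2, -0.6)
```

The approximate-function approach of the next subsections replaces this trial-and-error loop by an explicit formula for the step.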

4.2.2. Approximate Functions Techniques

These methods are based on sophisticated techniques introduced by J.P. Crouzeix et al. [15] and A. Leulmi et al. [20] to obtain the solution of a semidefinite optimization problem.
The aim of these techniques is to construct an easily minimized approximation of the one-real-variable function $\varphi$ defined by
\[ \varphi(\alpha) = \frac{1}{\eta} \left[ \psi_\eta(x + \alpha d) - \psi_\eta(x) \right] = \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \sum_{i=1}^{n} \ln(1 + \alpha t_i), \qquad t = X^{-1} d. \]
The function $\varphi$ is convex, and we obtain
\[ \varphi'(\alpha) = \frac{1}{\eta} \langle \nabla f(x + \alpha d), d \rangle - \sum_{i=1}^{n} \frac{t_i}{1 + \alpha t_i}, \qquad \varphi''(\alpha) = \frac{1}{\eta} \langle \nabla^2 f(x + \alpha d) d, d \rangle + \sum_{i=1}^{n} \frac{t_i^2}{(1 + \alpha t_i)^2}. \tag{4} \]
From (4), we find that $\varphi'(0) + \varphi''(0) = 0$, which is expected since $d$ is the Newton descent direction.
We aim to avoid the disadvantages of line search methods and to accelerate the convergence of the algorithm. For this reason, we have to identify an $\bar{\alpha}$ that yields a significant decrease in the function $\varphi$. Solving $\varphi'(\alpha) = 0$ exactly amounts to a polynomial equation of degree $n + 1$ when $f$ is a linear function.
Now, we include a few helpful inequalities below, which are used throughout the paper.
H. Wolkowicz et al. [22] (see also Crouzeix and Seeger [23]) presented the following inequalities:
\[ \bar{z} - \sigma_z \sqrt{n-1} \leq \min_i z_i \leq \bar{z} - \frac{\sigma_z}{\sqrt{n-1}}, \qquad \bar{z} + \frac{\sigma_z}{\sqrt{n-1}} \leq \max_i z_i \leq \bar{z} + \sigma_z \sqrt{n-1}, \]
where $\bar{z}$ and $\sigma_z$ represent the mean and the standard deviation, respectively, of a series of real numbers $\{z_1, z_2, \ldots, z_n\}$:
\[ \bar{z} = \frac{1}{n} \sum_{i=1}^{n} z_i \qquad \text{and} \qquad \sigma_z^2 = \frac{1}{n} \sum_{i=1}^{n} z_i^2 - \bar{z}^2 = \frac{1}{n} \sum_{i=1}^{n} (z_i - \bar{z})^2. \]
Theorem 1
([15]). Let $z_i > 0$ for $i = 1, 2, \ldots, n$. We have
\[ \sum_{i=1}^{n} \ln z_i \leq \ln\left( \bar{z} + \sigma_z \sqrt{n-1} \right) + (n-1) \ln\left( \bar{z} - \frac{\sigma_z}{\sqrt{n-1}} \right). \]
In our setting, $z_i = 1 + \alpha t_i$, so $\bar{z} = 1 + \alpha \bar{t}$ and $\sigma_z = \alpha \sigma_t$.
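Read as an upper bound on the log-sum, Theorem 1's inequality is easy to check numerically; the sketch below is our own verification on a small sample:

```python
import math

def log_sum_upper_bound(z):
    """Right-hand side of Theorem 1: an upper bound on sum(ln z_i)
    in terms of the mean zbar and standard deviation s of z."""
    n = len(z)
    zbar = sum(z) / n
    s = math.sqrt(sum(v * v for v in z) / n - zbar * zbar)
    r = math.sqrt(n - 1)
    # requires zbar - s/r > 0 for the logarithm to be defined
    return math.log(zbar + s * r) + (n - 1) * math.log(zbar - s / r)

z = [1.0, 2.0, 3.0]
lhs = sum(math.log(v) for v in z)   # = ln(6)
rhs = log_sum_upper_bound(z)
```

The bound is tight when the sample takes only two distinct values with multiplicities $n-1$ and $1$.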
We will proceed to present the paper’s principal result.

4.2.3. New Approximate Functions Approach

Let
\[ \varphi(\alpha) = \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \sum_{i=1}^{n} \ln(1 + \alpha t_i) \]
be defined on $(0, \tilde{\alpha})$, where $\tilde{\alpha} = \min_{i \in I} \left( -\frac{1}{t_i} \right)$ and $I = \{ i : t_i < 0 \}$.
To find the displacement step, it is necessary to solve $\varphi'(\alpha) = 0$. Considering the difficulty of solving such a non-algebraic equation, approximate functions are the recommended alternative.
Two novel approximation functions of φ are introduced in the following lemma.
Lemma 2.
For all $\alpha \in (0, \alpha_1^*)$ with $\alpha_1^* = \min(\hat{\alpha}, \hat{\alpha}_1)$, we have
\[ \varphi(\alpha) \geq \hat{\varphi}_1(\alpha), \]
and for all $\alpha \in (0, \alpha_2^*)$ with $\alpha_2^* = \min(\hat{\alpha}, \hat{\alpha}_2)$, we obtain
\[ \varphi(\alpha) \geq \hat{\varphi}_2(\alpha), \]
where
\[ \hat{\varphi}_1(\alpha) = \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \ln(1 + \delta \alpha) - (n-1) \ln(1 + \beta \alpha) \]
and
\[ \hat{\varphi}_2(\alpha) = \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \tau \ln(1 + \beta_1 \alpha), \]
with
\[ \beta = \bar{t} - \frac{\sigma_t}{\sqrt{n-1}}, \qquad \delta = \bar{t} + \sigma_t \sqrt{n-1}, \qquad \beta_1 = \frac{\|t\|^2}{n \bar{t}}. \tag{7} \]
Furthermore, we have
\[ \hat{\varphi}_2(\alpha) \leq \hat{\varphi}_1(\alpha) \leq \varphi(\alpha). \]
Proof. 
We start by proving that $\varphi(\alpha) \geq \hat{\varphi}_1(\alpha)$. Theorem 1 gives
\[ \sum_{i=1}^{n} \ln z_i \leq \ln\left( \bar{z} + \sigma_z \sqrt{n-1} \right) + (n-1) \ln\left( \bar{z} - \frac{\sigma_z}{\sqrt{n-1}} \right); \]
then, with $z_i = 1 + \alpha t_i$,
\[ \sum_{i=1}^{n} \ln(1 + \alpha t_i) \leq \ln(1 + \alpha \delta) + (n-1) \ln(1 + \alpha \beta), \]
and
\[ -\sum_{i=1}^{n} \ln(1 + \alpha t_i) \geq -\ln(1 + \alpha \delta) - (n-1) \ln(1 + \alpha \beta). \]
Hence,
\[ \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \sum_{i=1}^{n} \ln(1 + \alpha t_i) \geq \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \ln(1 + \alpha \delta) - (n-1) \ln(1 + \alpha \beta). \]
Therefore, $\varphi(\alpha) \geq \hat{\varphi}_1(\alpha)$, with
\[ \hat{\varphi}_1'(\alpha) = \frac{1}{\eta} \langle \nabla f(x + \alpha d), d \rangle - \frac{\delta}{1 + \alpha \delta} - (n-1) \frac{\beta}{1 + \alpha \beta}. \]
Let us now consider
\[ g(\alpha) = \varphi(\alpha) - \hat{\varphi}_2(\alpha). \]
We have
\[ g''(\alpha) = \sum_{i=1}^{n} \frac{t_i^2}{(1 + \alpha t_i)^2} - \tau \frac{\beta_1^2}{(1 + \beta_1 \alpha)^2}. \]
Since $|t_i| \leq \|t\|$ and $n \bar{t} \leq \sqrt{n}\,\|t\|$, it is easy to see that $g''(\alpha) \geq 0$ for all $\alpha \geq 0$.
Therefore,
\[ \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \sum_{i=1}^{n} \ln(1 + \alpha t_i) \geq \frac{1}{\eta} \left[ f(x + \alpha d) - f(x) \right] - \tau \ln(1 + \beta_1 \alpha); \]
then $\varphi(\alpha) \geq \hat{\varphi}_2(\alpha)$.
Hence, the domain of $\hat{\varphi}_i$ ($i = 1, 2$) is included in the domain of $\varphi$, which is $(0, \tilde{\alpha})$, where
\[ \tilde{\alpha} = \max\{ \alpha : 1 + \alpha \delta > 0,\ 1 + \alpha \beta > 0 \}. \]
Let us remark that
\[ \hat{\varphi}_i(0) = \varphi(0) = 0, \qquad \hat{\varphi}_i'(0) = \varphi'(0) < 0, \qquad \hat{\varphi}_i''(0) = \varphi''(0) > 0, \qquad i = 1, 2. \]
Thus, $\varphi$ is well approximated by $\hat{\varphi}_i$ in a neighborhood of 0. Since $\hat{\varphi}_i$ is strictly convex, it attains its minimum at a unique point $\bar{\alpha}$, the unique root of the equation $\hat{\varphi}_i'(\alpha) = 0$, which belongs to the domain of $\hat{\varphi}_i$ ($i = 1, 2$). Therefore, $\varphi$ is bounded from below by $\hat{\varphi}_1$:
\[ \hat{\varphi}_1(\bar{\alpha}) \leq \varphi(\bar{\alpha}) < 0, \]
and also by $\hat{\varphi}_2$:
\[ \hat{\varphi}_2(\bar{\alpha}) \leq \varphi(\bar{\alpha}) < 0. \]
Then $\bar{\alpha}$ yields an appreciable decrease in the function $\varphi$. □

4.3. Minimization of an Auxiliary Function

We now consider the minimization of the auxiliary function
\[ \varphi_1(\alpha) = n \gamma \alpha - \ln(1 + \delta \alpha) - (n-1) \ln(1 + \beta \alpha), \]
together with the approximate function
\[ \varphi_2(\alpha) = n \gamma \alpha - \tau \ln(1 + \beta_1 \alpha), \]
where $\beta_1$ is defined in (7). Then we have
\[ \varphi_1'(\alpha) = n \gamma - \frac{\delta}{1 + \delta \alpha} - (n-1) \frac{\beta}{1 + \beta \alpha}, \qquad \varphi_1''(\alpha) = \frac{\delta^2}{(1 + \delta \alpha)^2} + (n-1) \frac{\beta^2}{(1 + \beta \alpha)^2}, \]
and
\[ \varphi_2'(\alpha) = n \gamma - \frac{\tau \beta_1}{1 + \beta_1 \alpha}, \qquad \varphi_2''(\alpha) = \frac{\tau \beta_1^2}{(1 + \beta_1 \alpha)^2}. \]
We remark that, for $i = 1, 2$,
\[ \varphi_i(0) = 0, \qquad \varphi_i'(0) = n(\gamma - \bar{t}), \qquad \varphi_i''(0) = \|t\|^2. \]
We impose the conditions $\varphi_i'(0) < 0$ and $\varphi_i''(0) > 0$. The function $\varphi_i$ is then strictly convex and attains its minimum at a unique point $\bar{\alpha}$ such that $\varphi_i'(\bar{\alpha}) = 0$, which is a root of one of the equations
\[ \gamma \delta \beta\, \alpha^2 + (\gamma \delta + \gamma \beta - \delta \beta)\, \alpha + \gamma - \bar{t} = 0 \tag{8} \]
and
\[ n \gamma \beta_1 (1 + \beta_1 \alpha) - \|t\|^2 = 0. \tag{9} \]
For Equation (8), the roots are explicitly calculated, and we distinguish the following cases:
  • If $\delta = 0$, we obtain $\bar{\alpha}_1 = \dfrac{\bar{t} - \gamma}{\gamma \beta}$.
  • If $\beta = 0$, we obtain $\bar{\alpha}_1 = \dfrac{\bar{t} - \gamma}{\gamma \delta}$.
  • If $\gamma = 0$, we have $\bar{\alpha}_1 = -\dfrac{\bar{t}}{\delta \beta}$.
  • If $\gamma \delta \beta \neq 0$, $\bar{\alpha}_1$ is the root of the second-degree equation that belongs to the domain of definition of $\varphi$. With the discriminant
\[ \Delta = \left( \frac{1}{\gamma} - \frac{1}{\beta} - \frac{1}{\delta} \right)^2 - \frac{4(\gamma - \bar{t})}{\gamma \beta \delta}, \]
the two roots are
\[ \bar{\alpha}_{1.1} = \frac{1}{2} \left( \frac{1}{\gamma} - \frac{1}{\beta} - \frac{1}{\delta} + \sqrt{\Delta} \right) \qquad \text{and} \qquad \bar{\alpha}_{1.2} = \frac{1}{2} \left( \frac{1}{\gamma} - \frac{1}{\beta} - \frac{1}{\delta} - \sqrt{\Delta} \right). \]
The root of Equation (9) is also explicit:
\[ \bar{\alpha}_2 = \frac{\|t\|^2 - n \gamma \beta_1}{n \gamma \beta_1^2}. \]
Consequently, we compute the two values $\bar{\alpha}_i$, $i = 1, 2$, explicitly and retain those lying in $(0, \tilde{\alpha} - \varepsilon)$, where $\varepsilon > 0$ is a fixed precision.
Remark 2.
The computation of $\bar{\alpha}_i$, $i = 1, 2$, is performed through a dichotomous procedure when the explicit root does not fall in $(0, \tilde{\alpha} - \varepsilon)$, as follows:
Put $a = 0$ and $b = \tilde{\alpha} - \varepsilon$.
While $b - a > \varepsilon$ do:
  if $\varphi'\left(\frac{a+b}{2}\right) < 0$, set $a = \frac{a+b}{2}$;
  otherwise, set $b = \frac{a+b}{2}$.
Take $\bar{\alpha}_i = b$.
This computation guarantees a good approximation of the minimum of $\varphi(\alpha)$ while remaining in the domain of $\varphi$.
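A standard reading of this dichotomous procedure, bisecting on the sign of $\varphi'$ so that the interval always brackets the minimizer of the convex function $\varphi$, can be sketched as follows (our own rendering):

```python
def dichotomy_root(phi_prime, a=0.0, b=1.0, eps=1e-8):
    """Dichotomous search for the minimizer of a convex phi on (a, b):
    if phi'(mid) < 0 the minimizer lies to the right of mid, so the
    left endpoint moves; otherwise the right endpoint moves."""
    while b - a > eps:
        mid = 0.5 * (a + b)
        if phi_prime(mid) < 0:
            a = mid
        else:
            b = mid
    return b

# Example: phi(a) = (a - 0.3)**2 has phi'(a) = 2*(a - 0.3),
# so the procedure converges to the minimizer 0.3.
root = dichotomy_root(lambda a: 2.0 * (a - 0.3))
```

The returned endpoint stays inside the initial interval, hence inside the domain of $\varphi$ when $b$ is initialized to $\tilde{\alpha} - \varepsilon$.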

4.4. The Objective Function f Is

4.4.1. Linear

For all $x$, there exists $c \in \mathbb{R}^n$ such that $f(x) = \langle c, x \rangle$.
The minimum of $\hat{\varphi}_i$ is reached at the unique root $\bar{\alpha}$ of the equation $\hat{\varphi}_i'(\alpha) = 0$. Then
\[ \hat{\varphi}_i(\bar{\alpha}) \leq \varphi(\bar{\alpha}) < \hat{\varphi}_i(0) = \varphi(0) = 0. \]
Taking $\gamma = \dfrac{\langle c, d \rangle}{n \eta}$ in the auxiliary function $\varphi_i$, the two functions $\varphi_i$ and $\hat{\varphi}_i$ ($i = 1, 2$) coincide, and $\bar{\alpha}$ yields a significant decrease in the function $\psi_\eta$ along the descent direction $d$. It is interesting to note that the condition $\hat{\varphi}_i'(0) + \hat{\varphi}_i''(0) = 0$ ($i = 1, 2$) implies
\[ -\hat{\varphi}_i'(0) = n(\bar{t} - \gamma) = \|t\|^2 = n(\bar{t}^2 + \sigma_t^2) = \hat{\varphi}_i''(0) > 0. \]

4.4.2. Convex

When $f$ is convex nonlinear, $f(x + \alpha d)$ is no longer linear in $\alpha$, and the equation $\hat{\varphi}_i'(\alpha) = 0$ ($i = 1, 2$) no longer reduces to a second-degree equation.
We therefore consider another function $\tilde{\varphi}$ that lies below $\varphi$. Given $\hat{\alpha} \in (0, \tilde{\alpha})$, we have, for all $\alpha \in (0, \hat{\alpha}]$,
\[ \frac{f(x + \alpha d) - f(x)}{\eta} \leq \frac{f(x + \hat{\alpha} d) - f(x)}{\eta \hat{\alpha}}\, \alpha; \]
then
\[ \varphi(\alpha) \geq \tilde{\varphi}_1(\alpha) = \frac{f(x + \hat{\alpha} d) - f(x)}{\eta \hat{\alpha}}\, \alpha - \ln(1 + \alpha \delta) - (n-1) \ln(1 + \alpha \beta) \]
and
\[ \varphi(\alpha) \geq \tilde{\varphi}_2(\alpha) = \frac{f(x + \hat{\alpha} d) - f(x)}{\eta \hat{\alpha}}\, \alpha - \tau \ln(1 + \beta_1 \alpha), \qquad \tau \in (0, 1]. \]
We choose $\gamma = \dfrac{f(x + \hat{\alpha} d) - f(x)}{n \eta \hat{\alpha}}$ in the auxiliary function $\varphi_i$ and compute the root $\bar{\alpha}$ of the equation $\varphi_i'(\alpha) = 0$ for $i = 1, 2$.
Therefore, we have two cases:
  • Case $\bar{\alpha} \leq \hat{\alpha}$: we have
\[ \varphi(\bar{\alpha}) \geq \hat{\varphi}_i(\bar{\alpha}) \geq \tilde{\varphi}_i(\bar{\alpha}), \qquad i = 1, 2, \]
and thus, along the direction $d$, we obtain a significant decrease in the function $\psi_\eta$. Since the approximation of $\varphi$ by $\tilde{\varphi}_i$ is more accurate for small values of $\hat{\alpha}$ ($i = 1, 2$), it is recommended to use a new value of $\hat{\alpha}$, situated between $\tilde{\alpha}$ and the former $\hat{\alpha}$, for the next iteration. Moreover, the cost of this supplementary computation is small: one evaluation of $f$ and the resolution of a second-degree equation.
  • Case $\bar{\alpha} > \hat{\alpha}$: the computation of $\hat{\alpha}$ is performed through a dichotomous procedure (see Remark 2).

5. Description of the Algorithm and Numerical Simulations

5.1. Description of the Algorithm

This section is devoted to introducing our algorithm for obtaining an optimal solution x ¯ of (P).
Begin
Initialization:
$\varepsilon > 0$ is a given precision; $\hat{\eta} > 0$ and $\sigma \in (0, 1)$ are given;
$x^0$ is a strictly feasible solution in $\widetilde{L}$; $d^0 \in \mathbb{R}^n$.
Iteration:
1. Start with $\eta > \hat{\eta}$.
2. Compute $d$ and $t = X^{-1} d$.
3. If $\|t\| > \varepsilon$, compute $\bar{t}$, $\gamma$, $\delta$, $\beta$, and $\beta_1$; determine $\bar{\alpha}$ following (8), (9), or (10), depending on the linear or nonlinear case; take the new iterate $x = x + \bar{\alpha} d = X(e + \bar{\alpha} t)$ and go back to step 2.
4. If $\|t\| \leq \varepsilon$, a good approximation of $g(\eta)$ has been obtained:
   (a) If $\eta \geq \hat{\eta}$, set $\eta = \sigma \eta$ and return to step 2.
   (b) If $\eta < \hat{\eta}$, STOP: a good approximate solution of (P) has been obtained.
End algorithm.
The aim of this method is to reduce the number of iterations and the time consumption. In the next section, we provide some examples.

5.2. Numerical Simulations

To assess the superior performance and accuracy of our algorithm, based on our minorant functions, numerical tests are conducted to make comparisons between our new approach and the classical line search method.
For this purpose, in this section, we present comparative numerical tests on different examples taken from the literature [5,24].
We report the results obtained by implementing the algorithm in MATLAB on an Intel Core i7-7700HQ (2.80 GHz) machine with 16 GB of RAM.

5.2.1. Examples with a Fixed Size

Nonlinear Convex Objective

Example 1.
Let us take the following problem:
\[ \begin{cases} \min\ 2x_1^2 + 2x_2^2 - 2x_1 x_2 - 4x_1 - 6x_2 \\ x_1 + x_2 + x_3 = 2 \\ x_1 + 5x_2 + x_4 = 5 \\ x_1, x_2, x_3, x_4 \geq 0. \end{cases} \]
The optimal value is $-7.1613$, and the optimal solution is $x^* = (1.1290,\ 0.7742,\ 0.0968,\ 0)^T$.
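The reported value can be cross-checked independently (with the minus signs of the objective restored) by brute force over a grid of the feasible region in $(x_1, x_2)$; this is our own verification sketch, not the paper's method:

```python
import numpy as np

# Example 1 objective: f(x1, x2) = 2*x1^2 + 2*x2^2 - 2*x1*x2 - 4*x1 - 6*x2.
f = lambda x1, x2: 2*x1**2 + 2*x2**2 - 2*x1*x2 - 4*x1 - 6*x2

# Brute-force minimum over a fine grid of the feasible region
# {x1 + x2 <= 2, x1 + 5*x2 <= 5, x1, x2 >= 0} (x3, x4 act as slacks).
g = np.linspace(0.0, 2.0, 801)
X1, X2 = np.meshgrid(g, g)
feasible = (X1 + X2 <= 2.0 + 1e-9) & (X1 + 5.0*X2 <= 5.0 + 1e-9)
best = float(np.min(np.where(feasible, f(X1, X2), np.inf)))
```

The grid minimum agrees with the reported optimal value $-7.1613$ to the resolution of the grid.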
Example 2.
Let us take the following problem:
\[ \begin{cases} \min\ x_1^3 + x_2^3 \\ x_1 - x_2 + x_3 + x_4 = 3 \\ 2x_1 + x_2 - x_3 + x_4 = 2.0086 \\ x_1 + x_3 + 2x_4 = 4.9957 \\ x_1, x_2, x_3, x_4 \geq 0. \end{cases} \]
The optimal value is $0.0390$, and the optimal solution is $x^* = (0.3391,\ 0,\ 0.6652,\ 1.9957)^T$.
Example 3.
Let us consider the following problem:
\[ \begin{cases} \min\ x_1^3 + x_2^3 + x_1 x_2 \\ 2x_1 - x_2 + x_3 = 8 \\ x_1 + 2x_2 + x_4 = 6 \\ x_1, x_2, x_3, x_4 \geq 0. \end{cases} \]
The optimal value is $1.6157$, and the optimal solution is $x^* = (1.1734,\ 0,\ 5.6532,\ 4.8265)^T$.
This table presents the results of the previous examples:

Example   st1 iter   st1 time (s)   st2 iter   st2 time (s)   LS iter   LS time (s)
1         12         0.0006         19         0.0015         6         0.0091
2         5          0.0004         9          0.0009         44        0.099
3         3          0.0001         5          0.0006         65        0.89

5.2.2. Example with a Variable Size

The Objective Function f Is

1-Linear: Let us consider the linear programming problem
\[ \zeta = \min\{ c^T x : x \geq 0,\ Ax = b \}, \]
where $A$ is an $m \times 2m$ matrix given by
\[ A[i,j] = \begin{cases} 1 & \text{if } i = j \text{ or } j = i + m, \\ 0 & \text{otherwise,} \end{cases} \qquad c[i] = 1,\ c[i+m] = 0,\ b[i] = 2, \quad i = 1, \ldots, m, \]
with $c \in \mathbb{R}^{2m}$ and $b \in \mathbb{R}^m$.
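This family of instances is easy to script; the sketch below (our own illustration) builds $A$, $b$, $c$ for a given $m$ and checks the known optimum: since $c \geq 0$ and $x \geq 0$, the objective is non-negative, and putting all the weight on the costless second half of the variables attains $\zeta = 0$.

```python
import numpy as np

def build_lp(m):
    """Construct the m x 2m instance: A[i,i] = A[i,i+m] = 1,
    c = (1,...,1,0,...,0), b = (2,...,2)."""
    A = np.hstack([np.eye(m), np.eye(m)])
    c = np.concatenate([np.ones(m), np.zeros(m)])
    b = 2.0 * np.ones(m)
    return A, b, c

A, b, c = build_lp(5)
# Optimal point: x = (0,...,0, 2,...,2), feasible with objective value 0.
x_opt = np.concatenate([np.zeros(5), 2.0 * np.ones(5)])
```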
The results are presented in the table below.
Size        st1 iter   st1 time (s)   st2 iter   st2 time (s)   LS iter   LS time (s)
5 × 10      1          0.0021         2          0.0039         9         0.0512
20 × 40     1          0.0031         3          0.0045         13        0.0821
50 × 100    2          0.0049         3          0.0032         17        0.3219
100 × 200   2          0.0053         4          0.0088         19        0.5383
200 × 400   2          0.0088         4          0.0098         22        0.9220
250 × 500   3          0.0096         5          0.0125         26        9.2647
2-Nonlinear:
Example 4
(Quadratic case [13]). Let the quadratic problem be as follows:
\[ \zeta = \min\{ f(x) : x \geq 0,\ Ax = b \}, \]
with $f(x) = \frac{1}{2} \langle x, Qx \rangle$, where, for $n = 2m$, the data are defined by
\[ Q[i,j] = \begin{cases} 2j - 1 & \text{if } i > j, \\ 2i - 1 & \text{if } i < j, \\ i(i+1) - 1 & \text{if } i = j, \end{cases} \qquad i, j = 1, \ldots, n, \]
\[ A[i,j] = \begin{cases} 1 & \text{if } i = j \text{ or } j = i + m, \\ 0 & \text{otherwise,} \end{cases} \qquad i = 1, \ldots, m,\ j = 1, \ldots, n, \]
and $c[i] = 1$, $c[i+m] = 0$, $b[i] = 2$ for $i = 1, \ldots, m$.
This example is tested for many values of $n$.
The obtained results are given by the following table:
ex (m × n)    st1 iter   st1 time (s)   st2 iter   st2 time (s)   LS iter   LS time (s)
300 × 600     5          0.9968         4          0.9699         26        19.5241
400 × 800     7          18.1448        5          9.6012         35        86.1259
600 × 1200    12         36.3259        5          19.0099        23        98.2354
1000 × 2000   21         56.9912       17          41.1012        33        109.2553
1500 × 3000   28         140.1325      23          95.6903        40        1599.1596
Example 5
(The problem of Erikson [25]). Let the following be the problem:
\[ \zeta = \min\left\{ f(x) = \sum_{i=1}^{n} x_i \ln \frac{x_i}{a_i} : x_i + x_{i+m} = b_i,\ x \geq 0 \right\}, \]
where $n = 2m$, $a_i > 0$, and $b \in \mathbb{R}^m$ are fixed.
This example is tested for different values of $n$, $a_i$, and $b_i$.
The following table summarizes the results obtained in the case $a_i = 2$ ($i = 1, \ldots, n$), $b_i = 4$ ($i = 1, \ldots, m$):
ex (m × n)    st1 iter   st1 time (s)   st2 iter   st2 time (s)   LS iter   LS time (s)
10 × 20       1          0.0001         2          0.0012         4         0.0236
40 × 100      2          0.0021         3          0.0033         5         0.7996
100 × 200     2          0.0043         3          0.0201         5         1.5289
500 × 1000    2          3.0901         4          5.9619         12        22.1254
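For the tested parameters $a_i = 2$, $b_i = 4$, each pair $(x_i, x_{i+m})$ decouples, and by convexity and symmetry the minimum of each pair term is at $x_i = x_{i+m} = 2$, giving the optimal value $\zeta = 0$. A direct evaluation confirms this (our own $m = 1$ sanity check):

```python
import math

def erikson_objective(x, a=2.0):
    """f(x) = sum_i x_i * ln(x_i / a_i) with a_i = 2 for all i."""
    return sum(v * math.log(v / a) for v in x)

# m = 1 (n = 2): constraint x1 + x2 = 4; candidate minimizer x = (2, 2).
f_center = erikson_objective([2.0, 2.0])
# Any feasible perturbation along the constraint increases the objective.
f_perturbed = erikson_objective([1.5, 2.5])
```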
In the above tables, we take $\varepsilon = 1.0 \times 10^{-4}$.
We also denote the following:
- (iter) is the number of iterations.
- (time) is the computational time in seconds (s).
- (st1) and (st2) represent the two strategies of approximate functions introduced in this paper.
- (LS) represents the classical line search method.
Commentary: The numerical tests carried out show clearly that our approach leads to a very significant reduction in the computational cost and an improvement in the results. Compared with the line search approach, the approximate functions significantly reduce both the number of iterations and the computing time.

6. Conclusions

The contribution of this paper focuses on the study of nonlinear optimization problems using the logarithmic penalty method based on new approximate functions. We first formulated the problems (P) and $(P_\eta)$ together with the convergence results. Then, we found their solutions by using new approximate functions.
Finally, to lend further support to our theoretical results, a simulation study was conducted to illustrate the good accuracy of the studied approach. More precisely, our new approximate-function approach outperforms the line search approach, as it significantly reduces the computational cost and time.

Funding

This work was supported by the General Directorate of Scientific Research and Technological Development (DGRSDT-MESRS), Algeria, under PRFU project number C00L03UN190120220009.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author is very pleased to thank the editor and the reviewers for their helpful suggestions and comments.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Frank, M.; Wolfe, B. An algorithm for quadratic programming. Nav. Res. Logist. Q. 1956, 3, 95–110. [Google Scholar] [CrossRef]
  2. Wolfe, P. A Duality Theorem for Nonlinear Programming. Q. Appl. Math. 1961, 19, 239–244. [Google Scholar] [CrossRef]
  3. Bracken, J.; McCormick, G.P. Selected Applications of Nonlinear Programming; John Wiley & Sons, Inc.: New York, NY, USA, 1968. [Google Scholar]
  4. Fiacco, A.V.; McCormick, G.P. Nonlinear Programming: Sequential Unconstrained Minimization Techniques; John Wiley & Sons, Inc.: New York, NY, USA, 1968. [Google Scholar]
  5. Bonnans, J.-F.; Gilbert, J.-C.; Lemaréchal, C.; Sagastizàbal, C. Numerical Optimization: Theoretical and Practical Aspects; Mathematics and Applications; Springer: Berlin/Heidelberg, Germany, 2003; Volume 27. [Google Scholar]
  6. Evtushenko, Y.G.; Zhadan, V.G. Stable barrier-projection and barrier-Newton methods in nonlinear programming. In Optimization Methods and Software; Taylor & Francis: Abingdon, UK, 1994; Volume 3, pp. 237–256. [Google Scholar]
  7. Nesterov, Y.E.; Nemirovskii, A. Interior-Point Polynomial Algorithms in Convex Programming; SIAM: Philadelphia, PA, USA, 1994. [Google Scholar]
  8. Wright, S.J. Primal–Dual Interior Point Methods; SIAM: Philadelphia, PA, USA, 1997. [Google Scholar]
  9. Ye, Y. Interior Point Algorithms: Theory and Analysis. In Discrete Mathematics Optimization; Wiley-Interscience Series; John Wiley & Sons: New York, NY, USA, 1997. [Google Scholar]
  10. Powell, M.J.D. Karmarkar’s Algorithm: A View from Nonlinear Programming; Department of Applied Mathematics and Theoretical Physics, University of Cambridge: Cambridge, UK, 1989; Volume 53. [Google Scholar]
  11. Rosen, J.B. The Gradient Projection Method for Nonlinear Programming. Soc. Ind. Appl. Math. J. Appl. Math. 1960, 8, 181–217. [Google Scholar] [CrossRef]
  12. Rosen, J.B. The Gradient Projection Method for Nonlinear Programming. Soc. Ind. Appl. Math. J. Appl. Math. 1961, 9, 514–553. [Google Scholar] [CrossRef]
  13. Ouriemchi, M. Résolution de Problèmes non Linéaires par les Méthodes de Points Intérieurs. Théorie et Algorithmes. Doctoral Thesis, Université du Havre, Havre, France, 2006. [Google Scholar]
  14. Forsgren, A.; Gill, P.E.; Wright, M.H. Interior Methods for Nonlinear Optimization; SIAM: Philadelphia, PA, USA, 2002; Volume 44, pp. 525–597. [Google Scholar]
  15. Crouzeix, J.P.; Merikhi, B. A logarithm barrier method for semidefinite programming. RAIRO-Oper. Res. 2008, 42, 123–139. [Google Scholar] [CrossRef]
  16. Menniche, L.; Benterki, D. A Logarithmic Barrier Approach for Linear Programming. J. Computat. Appl. Math. 2017, 312, 267–275. [Google Scholar] [CrossRef]
  17. Cherif, L.B.; Merikhi, B. A Penalty Method for Nonlinear Programming. RAIRO-Oper. Res. 2019, 53, 29–38. [Google Scholar] [CrossRef]
  18. Chaghoub, S.; Benterki, D. A Logarithmic Barrier Method Based on a New Majorant Function for Convex Quadratic Programming. IAENG Int. J. Appl. Math. 2021, 51, 563–568. [Google Scholar]
  19. Leulmi, A. Etude d’une Méthode Barrière Logarithmique via Minorants Functions pour la Programmation Semi-Définie. Doctoral Thesis, Université de Biskra, Biskra, Algeria, 2018. [Google Scholar]
  20. Leulmi, A.; Merikhi, B.; Benterki, D. Study of a Logarithmic Barrier Approach for Linear Semidefinite Programming. J. Sib. Fed. Univ. Math. Phys. 2018, 11, 300–312. [Google Scholar]
  21. Leulmi, A.; Leulmi, S. Logarithmic Barrier Method via Minorant Function for Linear Programming. J. Sib. Fed. Univ. Math. Phys. 2019, 12, 191–201. [Google Scholar] [CrossRef]
  22. Wolkowicz, H.; Styan, G.P.H. Bounds for Eigenvalues Using Traces. Lin. Alg. Appl. 1980, 29, 471–506. [Google Scholar] [CrossRef]
  23. Crouzeix, J.-P.; Seeger, A. New bounds for the extreme values of a finite sample of real numbers. J. Math. Anal. Appl. 1996, 197, 411–426. [Google Scholar] [CrossRef]
  24. Bazaraa, M.S.; Sherali, H.D.; Shetty, C.M. Nonlinear Programming: Theory and Algorithms; Wiley-Interscience, John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2006. [Google Scholar]
  25. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423, 623–656. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Leulmi, A. An Efficient Penalty Method without a Line Search for Nonlinear Optimization. Axioms 2024, 13, 176. https://doi.org/10.3390/axioms13030176
