Article

Strong Convergence of a Two-Step Modified Newton Method for Weighted Complementarity Problems

1 School of Sciences, Xi’an Technological University, Xi’an 710021, China
2 School of Science, Xi’an University of Posts and Telecommunications, Xi’an 710121, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Axioms 2023, 12(8), 742; https://doi.org/10.3390/axioms12080742
Submission received: 15 June 2023 / Revised: 18 July 2023 / Accepted: 26 July 2023 / Published: 28 July 2023
(This article belongs to the Special Issue Computational Mathematics in Engineering and Applied Science)

Abstract

This paper focuses on the weighted complementarity problem (WCP), which is widely used in economics, the sciences and engineering. Owing largely to their local superlinear convergence, smoothing Newton methods are widely applied to various optimization problems. We propose a two-step smoothing Newton method with strong convergence. Using a smoothing complementarity function, the WCP is reformulated as a smooth system of equations and solved by the proposed two-step smoothing Newton method. In each iteration, the new method solves the Newton equation twice, but with the same Jacobian, which avoids a costly second matrix factorization. To ensure global convergence, a derivative-free line search rule is incorporated. At the same time, we introduce a new perturbation term in the smoothing Newton equation, which guarantees the local strong convergence. Under appropriate conditions, the algorithm achieves at least quadratic, and up to cubic, local convergence. Numerical experiments indicate the stability and effectiveness of the new method. Moreover, compared to the general smoothing Newton method, the two-step smoothing Newton method significantly improves computational efficiency without increasing the computational cost.

1. Introduction

The weighted complementarity problem (WCP for short) is
x ≥ 0, s ≥ 0, G(x, s, y) = 0, x ∘ s = w, (1)
in which x, s ∈ R^n, y ∈ R^m, w ∈ R^n_+ is a known weight vector, G(x, s, y): R^{2n+m} → R^{n+m} is a nonlinear mapping, and x ∘ s denotes the componentwise (Hadamard) product of x and s.
The concept of the WCP was introduced by Potra [1]; it extends the complementarity problem (CP) [2,3] and is widely used in engineering, economics and science. As shown in [1], Fisher market equilibrium problems from economics can be transformed into WCPs, and quadratic programming and weighted centering problems can be equivalently converted to monotone WCPs. Moreover, the WCP has potential applications in atmospheric chemistry [4,5] and multibody dynamics [6,7].
When G(x, s, y): R^{2n+m} → R^{n+m} is a linear mapping, the WCP (1) degenerates into the linear weighted complementarity problem (WLCP)
x ≥ 0, s ≥ 0, Mx + Ns + Py = t, x ∘ s = w, (2)
where M, N ∈ R^{(m+n)×n}, P ∈ R^{(m+n)×m} and t ∈ R^{m+n}. Many scholars have studied the WLCP and put forward effective algorithms. Potra [1] proposed two interior-point algorithms and discussed their computational complexity and convergence based on the methods of McShane [8] and Mizuno et al. [9]. Gowda [10] discussed a class of WLCPs over Euclidean Jordan algebras. Chi et al. [11,12] proposed infeasible interior-point methods for WLCPs with good computational complexity. Asadi et al. [13] presented a modified interior-point method and obtained an iteration bound for the monotone WLCP.
On the other hand, recent years have witnessed a growing development of smoothing Newton methods for WCPs whose basic idea is to convert the problem to a smoothing set of nonlinear equations by employing a smoothing function, which is then solved by Newton methods [14,15,16,17,18,19]. Zhang [20] proposed a smoothing Newton method for the WLCP. For WCPs over Euclidean Jordan algebras, Tang et al. [21] presented a smoothing method and analyzed its convergence property under some weaker assumptions.
The two-step Newton method [22,23,24], which typically achieves third-order convergence when solving a nonlinear system of equations H(x) = 0, has a higher order of convergence than the classical Newton method. In each iteration, the two-step Newton algorithm computes not only a Newton step
d_1^k = −H′(x^k)^{−1} H(x^k),
but also an approximate Newton step
d_2^k = −H′(x^k)^{−1} H(y^k),
where y^k = x^k + d_1^k and H′(x) denotes the Jacobian matrix of H(x). Compared with classical third-order methods such as Halley's method [25] or the super-Newton method [26], the two-step Newton algorithm does not need to compute second-order (Hessian) derivatives, so its computational cost is lower. Without additional derivative evaluations or inverse operators, it raises the order of convergence from second to third at the cost of only one extra function evaluation per iteration.
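The two steps above can be sketched in a few lines; the code below is a minimal illustration (on a hypothetical 2×2 test system, not the paper's Algorithm 1) of how both linear solves in one iteration reuse the same Jacobian H′(x^k).

```python
import numpy as np

def H(x):
    # hypothetical test system with a root at (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def J(x):
    # Jacobian of H
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def two_step_newton(x, tol=1e-12, max_iter=20):
    for _ in range(max_iter):
        Jk = J(x)                          # one Jacobian per iteration
        d1 = np.linalg.solve(Jk, -H(x))    # Newton step d_1^k
        y = x + d1
        d2 = np.linalg.solve(Jk, -H(y))    # approximate step d_2^k, same Jacobian
        x = y + d2
        if np.linalg.norm(H(x)) < tol:
            break
    return x

x_star = two_step_newton(np.array([1.2, 1.8]))
```

In a production code, one would factorize Jk once (e.g., by an LU decomposition) and reuse the factors for both right-hand sides, which is what makes the second solve nearly free.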
In light of these considerations, we present a two-step Newton algorithm with a high-order convergence rate for the WCP (1), based on a smoothing complementarity function and an equivalent smooth system of equations. The new algorithm has the following advantageous properties:
  • The proposed method computes the Newton direction twice in each iteration. The first calculation yields a Newton direction, and the second yields an approximate Newton direction. Moreover, both calculations employ the same Jacobian matrix (see Section 3), which saves computing costs.
  • When computing the Newton direction, the new algorithm utilizes a new term ζ_k = min{γ, ε_k^ϱ} with ϱ ∈ [1, 2] (see Section 3), unlike existing Newton algorithms for the WCP [20,21]; this term determines the local strong convergence. In particular, when ϱ = 2, the algorithm has local cubic convergence.
  • To obtain global convergence properties, we employ a derivative-free line search rule.
This paper is structured as follows. Section 2 presents a smoothing function and discusses its basic properties. Section 3 presents a derivative-free two-step smoothing Newton algorithm for the WCP, which is shown to be feasible. Section 4 deals with convergence properties. Section 5 shows some experiment results. Section 6 gives some concluding remarks.

2. Preliminaries

We define a smoothing function as
θ_ε(u, v, r) = √(u^2 + v^2 + 2r + 2ε) − (u + v),
where ε ∈ (0, 1) and r ≥ 0 is a given constant. It readily follows that θ_0(u, v, r) = 0 if and only if u ≥ 0, v ≥ 0, uv = r.
By simple reasoning and calculations, we can conclude the following.
Lemma 1. 
For any 0 < ε < 1, θ_ε(u, v, r) is continuously differentiable, with
∂θ_ε/∂ε = 1/√(u^2 + v^2 + 2r + 2ε),  ∂θ_ε/∂u = u/√(u^2 + v^2 + 2r + 2ε) − 1,  ∂θ_ε/∂v = v/√(u^2 + v^2 + 2r + 2ε) − 1.
In addition, ∂θ_ε/∂u < 0 and ∂θ_ε/∂v < 0.
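As a numerical sanity check of Lemma 1 (illustrative code of our own, not from the paper), the snippet below evaluates θ_ε and compares the closed-form partial derivatives against central finite differences.

```python
import math

def theta(u, v, r, eps):
    # theta_eps(u, v, r) = sqrt(u^2 + v^2 + 2r + 2*eps) - (u + v)
    return math.sqrt(u * u + v * v + 2.0 * r + 2.0 * eps) - (u + v)

def grad_theta(u, v, r, eps):
    # partial derivatives w.r.t. eps, u, v as stated in Lemma 1
    root = math.sqrt(u * u + v * v + 2.0 * r + 2.0 * eps)
    return (1.0 / root, u / root - 1.0, v / root - 1.0)

# At eps = 0, theta vanishes exactly on complementary triples u, v >= 0, u*v = r.
u, v = 1.5, 2.0
assert abs(theta(u, v, u * v, 0.0)) < 1e-12

# Finite-difference check of the closed-form partials at a generic point.
eps, r, h = 0.3, 0.7, 1e-6
g = grad_theta(u, v, r, eps)
fd_u = (theta(u + h, v, r, eps) - theta(u - h, v, r, eps)) / (2 * h)
fd_v = (theta(u, v + h, r, eps) - theta(u, v - h, r, eps)) / (2 * h)
assert abs(g[1] - fd_u) < 1e-6 and abs(g[2] - fd_v) < 1e-6
assert g[1] < 0 and g[2] < 0   # the strict negativity noted above
```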
Let z = (ε, x, s, y) ∈ R × R^{2n+m} and w ∈ R^n_+; we define M(z) by
M(z) = ( ε ; θ_ε(x, s, w) ; G(x, s, y) ), (4)
where
θ_ε(x, s, w) = ( θ_ε(x_1, s_1, w_1), …, θ_ε(x_n, s_n, w_n) )^T. (5)
It follows that the WCP (1) can be transformed into the equivalent equation
M(z) = 0.
The following lemma states the continuous differentiability of M ( z ) .
Lemma 2. 
Define M(z) and θ_ε(x, s, w) by (4) and (5), respectively. For any ε > 0, M(z) is continuously differentiable with
M′(z) = [ 1, 0, 0, 0 ; D_1, D_2, D_3, 0 ; 0, G_x, G_s, G_y ], (7)
where, for i = 1, 2, …, n,
D_1 = vec( 1/√(x_i^2 + s_i^2 + 2w_i + 2ε) ),
D_2 = diag( x_i/√(x_i^2 + s_i^2 + 2w_i + 2ε) − 1 ),
D_3 = diag( s_i/√(x_i^2 + s_i^2 + 2w_i + 2ε) − 1 ).
In order to discuss the nonsingularity of the Jacobian matrix M′(z), we make the following assumption.
Assumption A1. 
Assume that G_y has full column rank and that every (Δx, Δs, Δy) ∈ R^{2n+m} with
G_x Δx + G_s Δs + G_y Δy = 0
satisfies ⟨Δx, Δs⟩ ≥ 0.
For the WLCP (2), i.e., when G(x, s, y): R^{2n+m} → R^{n+m} is a linear mapping, Assumption 1 reduces to requiring that P has full column rank and that
M Δx + N Δs + P Δy = 0
implies ⟨Δx, Δs⟩ ≥ 0, which says precisely that G(x, s, y) is monotone; this setting has been used to establish the feasibility of smoothing algorithms for the WLCP, see [1,20,27] and the references therein.
Theorem 1. 
If Assumption 1 holds, then M′(z) is nonsingular for any ε > 0.
Proof of Theorem 1. 
It suffices to verify that every Δz = (Δε, Δx, Δs, Δy) ∈ R^{2n+m+1} with
M′(z) Δz = 0 (11)
satisfies Δz = 0. Substituting (7) into (11) yields
Δε = 0, G_x Δx + G_s Δs + G_y Δy = 0, D_1 Δε + D_2 Δx + D_3 Δs = 0. (12)
By Lemmas 1 and 2, the diagonal matrices D_2 and D_3 are both negative definite. From (12), we get
Δx = −D_2^{−1} D_3 Δs, (13)
and then
⟨Δx, Δs⟩ = −Δs^T D_3 D_2^{−1} Δs ≤ 0. (14)
Using Assumption 1 yields ⟨Δx, Δs⟩ ≥ 0, which, together with (14), implies
⟨Δx, Δs⟩ = −Δs^T D_3 D_2^{−1} Δs = 0,
and then Δs = 0. We conclude from (13) that Δx = 0; hence, Δy = 0 by the second equation in (12), since G_y has full column rank. This completes the proof.    □
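To make Theorem 1 concrete, the following sketch (our own illustration, not from the paper; the monotone map G(x, s) = s − Bx with B symmetric positive semidefinite and m = 0 is an assumption chosen so that Assumption 1 holds) assembles the block Jacobian of (7) and verifies numerically that it is nonsingular.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.standard_normal((n, n))
B = Q @ Q.T                          # symmetric positive semidefinite
x = rng.standard_normal(n)
s = rng.standard_normal(n)
w = rng.random(n)                    # weight vector, w >= 0
eps = 0.1

root = np.sqrt(x**2 + s**2 + 2*w + 2*eps)
D1 = (1.0 / root).reshape(-1, 1)     # column of partials w.r.t. eps
D2 = np.diag(x / root - 1.0)         # negative definite (Lemma 1)
D3 = np.diag(s / root - 1.0)         # negative definite (Lemma 1)

# Assemble the block Jacobian of (7) with G(x, s) = s - B x (so m = 0):
# rows correspond to eps, theta_eps and G; columns to (eps, x, s).
top = np.hstack([np.ones((1, 1)), np.zeros((1, 2*n))])
mid = np.hstack([D1, D2, D3])
bot = np.hstack([np.zeros((n, 1)), -B, np.eye(n)])
Jac = np.vstack([top, mid, bot])

assert np.linalg.matrix_rank(Jac) == 2*n + 1   # nonsingular, as Theorem 1 predicts
```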

3. A Two-Step Newton Method

Now, we state the two-step smoothing Newton method. In order to understand Algorithm 1 more intuitively, we also give the flow chart of the new algorithm, as shown in Figure 1.
Algorithm 1 A Two-Step Newton Method.
  • Initial Step. Choose ε_0 > 0 and γ, η ∈ (0, 1). Choose c ∈ (0, 1), l ∈ (0, 1) and ϱ ∈ [1, 2]. Let {ξ_k} ⊂ R_+ satisfy Σ_{k=0}^∞ ξ_k ≤ ξ < ∞. Choose any (x_0, s_0, y_0) ∈ R^{2n+m} as a starting point and let μ_0 = (ε_0, 0, 0, 0)^T ∈ R × R^{2n+m}. Set z^0 = (ε_0, x_0, s_0, y_0) and k = 0.
  • Step 1. If ‖M(z^k)‖ = 0, stop. Otherwise, compute Δz_1^k from
    M′(z^k) Δz_1^k = −M(z^k) + ζ_k μ_k, (15)
    where ζ_k = min{γ, ε_k^ϱ} and μ_k = (ε_k, 0, 0, 0)^T. Let z̄^k = z^k + Δz_1^k.
  • Step 2. Compute Δz_2^k from
    M′(z^k) Δz_2^k = −M(z̄^k) + ζ_k μ_k. (16)
  • Step 3. If
    ‖M(z^k + Δz_1^k + Δz_2^k)‖ ≤ c ‖M(z^k)‖, (17)
    set β_k = 1 and go to Step 5.
  • Step 4. Let β_k be the maximum of 1, l, l^2, … satisfying
    ‖M(z^k + β_k Δz_1^k + β_k^2 Δz_2^k)‖^2 ≤ (1 + ξ_k) ‖M(z^k)‖^2 − η β_k^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ). (18)
  • Step 5. Set z^{k+1} = z^k + β_k Δz_1^k + β_k^2 Δz_2^k, k = k + 1, and return to Step 1.
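The backtracking rule of Steps 3 and 4 can be sketched as follows (illustrative code on a toy residual map, not the paper's implementation; for simplicity the perturbation term ζ_k μ_k is set to zero and the parameter values are arbitrary).

```python
import numpy as np

def line_search(Mfun, z, d1, d2, xi, eta, l):
    """Return the largest beta in {1, l, l^2, ...} satisfying inequality (18)."""
    m0 = np.linalg.norm(Mfun(z)) ** 2
    C = np.linalg.norm(d1) ** 2 + np.linalg.norm(d2) ** 2 + m0
    beta = 1.0
    while True:
        trial = np.linalg.norm(Mfun(z + beta * d1 + beta ** 2 * d2)) ** 2
        if trial <= (1.0 + xi) * m0 - eta * beta ** 2 * C:
            return beta
        beta *= l   # backtrack; terminates since the bound holds for small beta

# Toy smooth residual map and the two directions of Steps 1-2 (zeta_k = 0 here).
Mfun = lambda z: np.array([z[0] ** 2 - 1.0, z[1] - 2.0])
z = np.array([3.0, 0.0])
Jk = np.array([[2.0 * z[0], 0.0], [0.0, 1.0]])
d1 = np.linalg.solve(Jk, -Mfun(z))
d2 = np.linalg.solve(Jk, -Mfun(z + d1))
beta = line_search(Mfun, z, d1, d2, xi=0.25, eta=1e-3, l=0.5)
```

Note that the rule needs no derivative of ‖M(·)‖²: only residual evaluations are required, which is what makes the line search derivative-free.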
Remark 1. 
1.
In each iteration, Algorithm 1 computes the Newton directions from the equations
M′(z^k) Δz_1^k = −M(z^k) + ζ_k μ_k
and
M′(z^k) Δz_2^k = −M(z̄^k) + ζ_k μ_k,
using the new term ζ_k = min{γ, ε_k^ϱ}, which is essential for the analysis of the local strong convergence of Algorithm 1. Moreover, although Algorithm 1 computes the Newton direction twice, both systems share the coefficient matrix M′(z^k), so its computational cost per iteration is comparable to that of the classical Newton method.
2.
Algorithm 1 employs a derivative-free line search rule, a variant of that in [28]. As shown in Theorem 2, this new derivative-free line search is feasible.
Theorem 2. 
If Assumption 1 holds, then Algorithm 1 is well-defined. Moreover, we have
1.
ε_k ≥ 0 for any k ≥ 0.
2.
{ε_k} is monotonically non-increasing.
Proof of Theorem 2. 
By Theorem 1, M′(z) is invertible, so Steps 1 and 2 are feasible. Next, we show the feasibility of Step 4. Suppose it fails; then every trial step β_k satisfies
‖M(z^k + β_k Δz_1^k + β_k^2 Δz_2^k)‖^2 > (1 + ξ_k) ‖M(z^k)‖^2 − η β_k^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ) ≥ ‖M(z^k)‖^2 − η β_k^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ). (19)
Hence,
‖M(z^k + β_k Δz_1^k + β_k^2 Δz_2^k)‖^2 − ‖M(z^k)‖^2 > −η β_k^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ). (20)
Dividing (20) by β_k and letting β_k → 0 along the trial sequence, we conclude that
lim_{β_k → 0} [ ‖M(z^k + β_k Δz_1^k + β_k^2 Δz_2^k)‖^2 − ‖M(z^k)‖^2 ] / β_k ≥ 0.
Therefore, we get
2 M(z^k)^T ( M′(z^k) Δz_1^k ) ≥ 0. (21)
On the other hand, if z^k is not a solution of (1), then it follows from Step 1 that
M(z^k)^T ( M′(z^k) Δz_1^k ) = M(z^k)^T ( −M(z^k) + ζ_k μ_k ) ≤ (γ − 1) ‖M(z^k)‖^2 < 0,
where the inequality uses ζ_k = min{γ, ε_k^ϱ} ≤ γ and ‖μ_k‖ = ε_k ≤ ‖M(z^k)‖. This contradicts (21). Hence, Step 4 is feasible, and Algorithm 1 is well-defined.
Next, we show ε_k ≥ 0 by induction. Suppose ε_k ≥ 0 for some k ≥ 0; comparing the first components in (15) and (16), we obtain
Δε_k^1 = −ε_k + ζ_k ε_k,
and
Δε_k^2 = −ε_k − Δε_k^1 + ζ_k ε_k.
Then, by Step 5,
ε_{k+1} = ε_k + β_k Δε_k^1 + β_k^2 Δε_k^2 = ε_k + β_k(−ε_k + ζ_k ε_k) + β_k^2(−ε_k − Δε_k^1 + ζ_k ε_k) = (1 − β_k) ε_k + β_k ζ_k ε_k = [1 − β_k(1 − ζ_k)] ε_k,
which implies ε_{k+1} ≥ 0, since β_k ≤ 1 and ζ_k ≤ 1. Moreover,
ε_{k+1} = [1 − β_k(1 − ζ_k)] ε_k ≤ ε_k,
i.e., {ε_k} is monotonically non-increasing. □
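The update ε_{k+1} = [1 − β_k(1 − ζ_k)] ε_k derived above is easy to check numerically; the toy loop below (arbitrary β_k values, not an actual run of Algorithm 1) confirms that ε_k stays nonnegative and non-increasing.

```python
# eps update of Theorem 2 with zeta_k = min(gamma, eps_k**rho)
gamma, rho = 0.01, 2.0
eps = 0.1
history = [eps]
for beta in [1.0, 0.5, 1.0, 0.25, 1.0]:     # any step sizes beta_k in (0, 1]
    zeta = min(gamma, eps ** rho)
    eps = (1.0 - beta * (1.0 - zeta)) * eps
    history.append(eps)

assert all(e >= 0.0 for e in history)                     # eps_k >= 0
assert all(b <= a for a, b in zip(history, history[1:]))  # non-increasing
```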
Lemma 3. 
If Assumption 1 holds, then {‖M(z^k)‖} is convergent, and the sequence {z^k} remains in the level set
L(z^0) = { z ∈ R_+ × R^{2n+m} : ‖M(z)‖ ≤ e^{ξ/2} ‖M(z^0)‖ }.
Proof of Lemma 3. 
According to (18), we have
‖M(z^{k+1})‖^2 ≤ (1 + ξ_k) ‖M(z^k)‖^2. (27)
Since Σ_{k=0}^∞ ξ_k ≤ ξ < ∞, it follows from Lemma 3.3 in [29] that {‖M(z^k)‖^2} is convergent, and hence so is {‖M(z^k)‖}.
Moreover, by the arithmetic–geometric mean inequality,
‖M(z^{k+1})‖ ≤ √(1 + ξ_k) ‖M(z^k)‖ ≤ √(1 + ξ_k) √(1 + ξ_{k−1}) ⋯ √(1 + ξ_0) ‖M(z^0)‖ = ∏_{j=0}^{k} √(1 + ξ_j) ‖M(z^0)‖ ≤ [ (1/(k+1)) Σ_{j=0}^{k} (1 + ξ_j) ]^{(k+1)/2} ‖M(z^0)‖ ≤ ( 1 + ξ/(k+1) )^{(k+1)/2} ‖M(z^0)‖ ≤ e^{ξ/2} ‖M(z^0)‖. □

4. Convergence Properties

4.1. Global Convergence

We first show a statement of great significance before analyzing the convergence properties of Algorithm 1.
Theorem 3. 
If Assumption 1 holds, then lim_{k→∞} β_k ‖M(z^k)‖ = 0.
Proof of Theorem 3. 
Define S(k) and R(k) by
S(k) = { j ≤ k : (17) is satisfied at iteration j }
and
R(k) = {0, 1, …, k} \ S(k).
Let |S(k)| denote the number of elements of S(k).
If (17) holds for infinitely many k, then |S(k)| → ∞ as k → ∞. By (17), (18) and (27), we get
‖M(z^{k+1})‖^2 ≤ ∏_{i ∈ R(k)} (1 + ξ_i) ∏_{i ∈ S(k)} c^2 ‖M(z^0)‖^2 = ∏_{i ∈ R(k)} (1 + ξ_i) · c^{2|S(k)|} ‖M(z^0)‖^2 ≤ e^ξ c^{2|S(k)|} ‖M(z^0)‖^2.
As k → ∞, ‖M(z^{k+1})‖^2 → 0, i.e., ‖M(z^k)‖ → 0, and then lim_{k→∞} β_k ‖M(z^k)‖ = 0.
Now assume that (17) holds for only finitely many k. Summing (18) over k and using the convergence of {‖M(z^k)‖^2} and Σ ξ_k < ∞, we obtain
Σ_{k=0}^∞ η β_k^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ) < ∞,
which implies
lim_{k→∞} β_k ‖M(z^k)‖ = 0.
The proof is completed. □
Theorem 4. 
If Assumption 1 holds, then { z k } converges to a solution of the WCP (1).
Proof of Theorem 4. 
According to Lemma 3, {‖M(z^k)‖} is convergent. Suppose, without loss of generality, that {z^k = (ε_k, x^k, s^k, y^k)} converges to z^* = (ε^*, x^*, s^*, y^*), so that lim_{k→∞} ‖M(z^k)‖ = ‖M(z^*)‖ ≥ 0. We show M(z^*) = 0 by contradiction. Assume ‖M(z^*)‖ > 0; then lim_{k→∞} β_k = 0 by Theorem 3.
Let β̂ = β_k / l. Since β_k is the largest accepted step in Step 4, the rejected trial β̂ satisfies
‖M(z^k + β̂ Δz_1^k + β̂^2 Δz_2^k)‖^2 > (1 + ξ_k) ‖M(z^k)‖^2 − η β̂^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ), (28)
for all sufficiently large k.
On the other hand, since
M(z^k + β̂(Δz_1^k + β̂ Δz_2^k)) = M(z^k) + β̂ M′(z^k)(Δz_1^k + β̂ Δz_2^k) + o(β̂),
it follows that
‖M(z^k + β̂(Δz_1^k + β̂ Δz_2^k))‖^2 = ‖M(z^k) + β̂ M′(z^k)(Δz_1^k + β̂ Δz_2^k)‖^2 + o(β̂) = ‖M(z^k)‖^2 + 2β̂ M(z^k)^T M′(z^k)(Δz_1^k + β̂ Δz_2^k) + o(β̂). (29)
Combining (15) and (16) with (29), we obtain
‖M(z^k + β̂(Δz_1^k + β̂ Δz_2^k))‖^2 = ‖M(z^k)‖^2 + 2β̂ M(z^k)^T ( −M(z^k) + ζ_k μ_k ) + o(β̂) = (1 − 2β̂) ‖M(z^k)‖^2 + 2β̂ ζ_k M(z^k)^T μ_k + o(β̂) ≤ [1 − 2β̂(1 − γ)] ‖M(z^k)‖^2 + o(β̂). (30)
Then, from (28) and (30), we get
[1 − 2β̂(1 − γ)] ‖M(z^k)‖^2 + o(β̂) > (1 + ξ_k) ‖M(z^k)‖^2 − η β̂^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ) ≥ ‖M(z^k)‖^2 − η β̂^2 ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ).
Rearranging and dividing by β̂ gives
2(1 − γ) ‖M(z^k)‖^2 + o(1) < η β̂ ( ‖Δz_1^k‖^2 + ‖Δz_2^k‖^2 + ‖M(z^k)‖^2 ). (31)
Passing to the limit in (31) as k → ∞ (so that β̂ → 0), we obtain
2(1 − γ) ‖M(z^*)‖^2 ≤ 0.
Since ‖M(z^*)‖ > 0, this forces
γ ≥ 1,
a contradiction to γ ∈ (0, 1). Thus, M(z^*) = 0, which means that {z^k} converges to a solution of the WCP (1). □

4.2. Local Convergence

We now discuss the local high-order convergence properties of Algorithm 1.
Theorem 5. 
Suppose that Assumption 1 holds, that all elements of the generalized Jacobian ∂M(z^*) are nonsingular, and that G′(x, s, y) and M′(z) are Lipschitz continuous on some neighborhood of z^*. Then
1.
β_k = 1 for all sufficiently large k.
2.
{z^k} converges to z^* locally with order ϱ + 1, i.e., at least quadratically; in particular, {z^k} converges to z^* locally cubically if ϱ = 2.
Proof of Theorem 5. 
By Theorem 4, M(z^*) = 0. Since all elements of ∂M(z^*) are nonsingular, we have for all sufficiently large k that
‖M′(z^k)^{−1}‖ = O(1). (32)
Since G′(x, s, y) is Lipschitz continuous on some neighborhood of z^*, M(z) is strongly semismooth and M′(z) is Lipschitz continuous near z^*; namely,
‖M(z^k) − M(z^*) − M′(z^k)(z^k − z^*)‖ = O(‖z^k − z^*‖^2), (33)
and
‖M(z^k)‖ = ‖M(z^k) − M(z^*)‖ = O(‖z^k − z^*‖), (34)
for all sufficiently large k.
By the definition of ζ_k and μ_k, it follows that
‖ζ_k μ_k‖ ≤ ε_k^{ϱ+1} ≤ ‖M(z^k)‖^{ϱ+1}. (35)
Then, combining (15) and (32)–(35) implies
‖z^k + Δz_1^k − z^*‖ = ‖z^k − M′(z^k)^{−1}( M(z^k) − ζ_k μ_k ) − z^*‖ = O( ‖M(z^k) − M(z^*) − M′(z^k)(z^k − z^*)‖ + ‖ζ_k μ_k‖ ) = O(‖z^k − z^*‖^2) + O(‖M(z^k)‖^{ϱ+1}) = O(‖z^k − z^*‖^2), (36)
which means that z^k + Δz_1^k is sufficiently close to z^* for sufficiently large k. Then, according to (34) and (36), we have
‖M(z^k + Δz_1^k)‖ = ‖M(z^k + Δz_1^k) − M(z^*)‖ = O(‖z^k + Δz_1^k − z^*‖) = O(‖z^k − z^*‖^2) = O(‖M(z^k)‖^2). (37)
Hence, since ϱ ≥ 1, it follows from (16), (32), (35) and (37) that
‖Δz_2^k‖ = ‖M′(z^k)^{−1}( −M(z̄^k) + ζ_k μ_k )‖ = O( ‖M(z^k + Δz_1^k)‖ + ‖ζ_k μ_k‖ ) = O(‖M(z^k)‖^2) + O(‖M(z^k)‖^{ϱ+1}) = O(‖M(z^k)‖^2), (38)
which, combined with (34) and (36), yields
‖z^k + Δz_1^k + Δz_2^k − z^*‖ ≤ ‖z^k + Δz_1^k − z^*‖ + ‖Δz_2^k‖ = O(‖z^k − z^*‖^2),
for all sufficiently large k. Hence z^k + Δz_1^k + Δz_2^k is also sufficiently close to z^* for sufficiently large k, which, together with the Lipschitz continuity of M′(z) near z^*, implies
‖M(z^k + Δz_1^k + Δz_2^k) − M(z^k) − M′(z^k)(Δz_1^k + Δz_2^k)‖ = O(‖Δz_1^k + Δz_2^k‖^2). (40)
Then, it holds that
‖M(z^k + Δz_1^k + Δz_2^k)‖ ≤ ‖M(z^k) + M′(z^k)(Δz_1^k + Δz_2^k)‖ + ‖M(z^k + Δz_1^k + Δz_2^k) − M(z^k) − M′(z^k)(Δz_1^k + Δz_2^k)‖ ≤ ‖M(z^k) + M′(z^k)Δz_1^k‖ + ‖M(z̄^k)‖ + ‖M(z̄^k) + M′(z^k)Δz_2^k‖ + O(‖Δz_1^k + Δz_2^k‖^2). (41)
We now estimate the terms on the right-hand side.
By (15), (32) and (34), we obtain
‖Δz_1^k‖ = ‖M′(z^k)^{−1}( −M(z^k) + ζ_k μ_k )‖ = O(‖M(z^k)‖) = O(‖z^k − z^*‖). (42)
On the other hand, according to (15), (16) and (35), we obtain
‖M(z^k) + M′(z^k)Δz_1^k‖ = ζ_k ‖μ_k‖ = O(‖M(z^k)‖^{ϱ+1}), (43)
and
‖M(z̄^k) + M′(z^k)Δz_2^k‖ = ζ_k ‖μ_k‖ = O(‖M(z^k)‖^{ϱ+1}). (44)
So, combining (34), (37), (38) and (41)–(44), we have
‖M(z^k + Δz_1^k + Δz_2^k)‖ = O(‖M(z^k)‖^{ϱ+1}) + O(‖z^k − z^*‖^2) = o(‖M(z^k)‖) = ρ_k ‖M(z^k)‖,
where ρ_k → 0. This means that (17) is satisfied for all sufficiently large k, so β_k = 1 when z^k is sufficiently close to z^*, i.e.,
z^{k+1} = z^k + Δz_1^k + Δz_2^k. (46)
By (16), (32), (35), (46) and the Lipschitz continuity of M′(z) on some neighborhood of z^*, we get
‖z^{k+1} − z^*‖ = ‖z̄^k − z^* − M′(z^k)^{−1}( M(z̄^k) − ζ_k μ_k )‖ = O( ‖M(z̄^k) − M(z^*) − M′(z^k)(z̄^k − z^*)‖ + ‖ζ_k μ_k‖ ) = O( ‖M(z̄^k) − M(z^*) − M′(z̄^k)(z̄^k − z^*)‖ + ‖( M′(z̄^k) − M′(z^k) )(z̄^k − z^*)‖ + ‖ζ_k μ_k‖ ) = O(‖z̄^k − z^*‖^2) + O(‖z^k − z^*‖^3) + O(‖M(z^k)‖^{ϱ+1}) = O(‖z^k − z^*‖^{ϱ+1}).
Moreover, we have from (34) that
‖M(z^{k+1})‖ = O(‖z^{k+1} − z^*‖) = O(‖z^k − z^*‖^{ϱ+1}) = O(‖M(z^k)‖^{ϱ+1}),
which means that {z^k} converges to z^* locally with order ϱ + 1 ≥ 2. In particular, if ϱ = 2, then
‖z^{k+1} − z^*‖ = O(‖z^k − z^*‖^3)
and
‖M(z^{k+1})‖ = O(‖M(z^k)‖^3),
which means that {z^k} converges to z^* locally cubically. □

5. Numerical Experiments

In this section, we implement Algorithm 1 and use it to solve some numerical examples to verify its feasibility and effectiveness. All programs are run in Matlab R2018b on a PC with a 2.30 GHz CPU and 16.00 GB RAM. To illustrate the performance of the new algorithm, we also implement the algorithm in [20], denoted SNM_Z, and compare it with Algorithm 1. The stopping criterion is ‖M(z^k)‖ ≤ 10^{-6}, and the parameters are set as
l = 0.5, c = 0.8, γ = 0.01, ε_0 = 0.1, η = 0.001 and ξ_k = 1/2^{k+2}.
For SNM_Z, the stopping criterion is the same as that in Algorithm 1, and the parameters are the same as [20].
Example 1. 
Consider an example of the WLCP (2) with
M = [A; B], N = [0; −I], P = [0; A^T], t = [Af; g],
where B ∈ R^{n×n}; A ∈ R^{m×n} has entries generated randomly from [0, 1]; and f, g ∈ R^n are chosen uniformly from [0, 1] and [−1, 0], respectively. The weight vector w ∈ R^n is generated as w = u ∘ v with v = Bu − g, where u ∈ R^n is generated uniformly from [0, 1].
We test two kinds of problems using Algorithm 1 with different matrices B, denoted B_1 and B_2. B_1 is produced by setting B_1 = QQ^T/‖QQ^T‖, where Q is generated uniformly from [0, 1]. The diagonal matrix B_2 is generated randomly on [0, 1]. The initial points x_0, s_0 and y_0 are chosen as (1, 0, …, 0)^T with the relevant dimensions in every experiment.
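The random data above can be generated as in the sketch below (our reading of the construction; the norm used to scale B_1 and the exact distributions are assumptions where the text is ambiguous).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
Q = rng.random((n, n))                   # entries uniform on [0, 1)
B1 = Q @ Q.T / np.linalg.norm(Q @ Q.T)   # normalized symmetric PSD matrix
B2 = np.diag(rng.random(n))              # random diagonal matrix on [0, 1)
g = -rng.random(n)                       # g uniform on [-1, 0]
u = rng.random(n)                        # u uniform on [0, 1]
v = B1 @ u - g                           # v = B u - g, componentwise >= 0 here
w = u * v                                # weight vector w = u o v

assert (w >= 0).all()                    # a valid nonnegative weight vector
x0 = np.zeros(n)
x0[0] = 1.0                              # starting point (1, 0, ..., 0)^T
```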
First, to illustrate the influence of ϱ on the local convergence, we run Algorithm 1 with different values of ϱ for each case with B_1. We perform three experiments for each problem and present the numerical results in Table 1. In what follows, (A)IT denotes the (average) number of iterations, (A)Time the (average) running time in seconds, and (A)ERO the (average) value of ‖M(z^k)‖ at the last iteration. As Table 1 shows, Algorithm 1 attains different local convergence rates for different values of ϱ, and in all cases at least a local quadratic rate of convergence.
Then, we test an example with m = 400 and n = 800 for B_1 to demonstrate the local convergence behavior of Algorithm 1 and SNM_Z. In what follows, we set ϱ = 2 in Algorithm 1. The results are shown in Table 2: Algorithm 1 exhibits a local cubic convergence rate, which is faster than SNM_Z with its local quadratic convergence rate.
Finally, we randomly performed 10 trials for each case. The results in Table 3 demonstrate that Algorithm 1 requires fewer iterations than SNM_Z. In addition, although Algorithm 1 computes the Newton direction twice in each iteration, it does not consume much more time than SNM_Z.
Example 2. 
Consider an example of the WCP (1), where
G(x, s, y) = [ Bx + C^T y − s + d ; C(x − t) ],
with B = diag(b), where b ∈ R^n is generated uniformly from [0, 1]; C ∈ R^{m×n} has entries drawn randomly from the standard normal distribution; and d, t ∈ R^n and w ∈ R^n are all generated randomly from [0, 1].
We also performed 10 trials for each case. The initial points x_0, s_0 and y_0 are all chosen as (1, 0, …, 0)^T with the relevant dimensions. The test results are shown in Table 4, which also indicates that Algorithm 1 is more stable and efficient than SNM_Z.

6. Conclusions

The two-step Newton method, known for its efficiency in solving nonlinear equations, is adapted in this paper to solve the WCP. A novel two-step Newton method designed specifically for the WCP is proposed. A key feature of this method is that the two Newton systems in each iteration share the same Jacobian matrix, which improves the convergence rate without additional computational expense. To guarantee global convergence, a new derivative-free line search rule is introduced. Under appropriate conditions and parameter choices, the algorithm achieves cubic local convergence. Numerical results show that the two-step Newton method significantly improves computational efficiency without increasing the computational cost compared to the general smoothing Newton method.

Author Contributions

Conceptualization, X.L.; methodology, X.L.; software, J.Z.; validation, X.L. and J.Z.; formal analysis, J.Z.; writing—original draft, X.L.; writing—review and editing, J.Z.; supervision, X.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data sets used in this paper are available from the corresponding authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Potra, F. Weighted complementarity problems—a new paradigm for computing equilibria. SIAM J. Optim. 2012, 22, 1634–1654.
  2. Facchinei, F.; Pang, J. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer: New York, NY, USA, 2003.
  3. Che, H.; Wang, Y.; Li, M. A smoothing inexact Newton method for P0 nonlinear complementarity problem. Front. Math. China 2012, 7, 1043–1058.
  4. Amundson, N.; Caboussat, A.; He, J.; Seinfeld, J. Primal-dual interior-point method for an optimization problem related to the modeling of atmospheric organic aerosols. J. Optim. Theory Appl. 2006, 130, 375–407.
  5. Caboussat, A.; Leonard, A. Numerical method for a dynamic optimization problem arising in the modeling of a population of aerosol particles. C. R. Math. 2008, 346, 677–680.
  6. Flores, P.; Leine, R.; Glocker, C. Modeling and analysis of planar rigid multibody systems with translational clearance joints based on the non-smooth dynamics approach. Multibody Syst. Dyn. 2010, 23, 165–190.
  7. Pfeiffer, F.; Foerg, M.; Ulbrich, H. Numerical aspects of non-smooth multibody dynamics. Comput. Method Appl. Mech. 2006, 195, 6891–6908.
  8. McShane, K. Superlinearly convergent O(√n L)-iteration interior-point algorithms for linear programming and the monotone linear complementarity problem. SIAM J. Optim. 1994, 4, 247–261.
  9. Mizuno, S.; Todd, M.; Ye, Y. On adaptive-step primal-dual interior-point algorithms for linear programming. Math. Oper. Res. 1993, 18, 964–981.
  10. Gowda, M.S. Weighted LCPs and interior point systems for copositive linear transformations on Euclidean Jordan algebras. J. Glob. Optim. 2019, 74, 285–295.
  11. Chi, X.; Wang, G. A full-Newton step infeasible interior-point method for the special weighted linear complementarity problem. J. Optim. Theory Appl. 2021, 190, 108–129.
  12. Chi, X.; Wan, Z.; Hao, Z. A full-modified-Newton step infeasible interior-point method for the special weighted linear complementarity problem. J. Ind. Manag. Optim. 2021, 18, 2579–2598.
  13. Asadi, S.; Darvay, Z.; Lesaja, G.; Mahdavi-Amiri, N.; Potra, F. A full-Newton step interior-point method for monotone weighted linear complementarity problems. J. Optim. Theory Appl. 2020, 186, 864–878.
  14. Liu, L.; Liu, S.; Liu, H. A predictor-corrector smoothing Newton method for symmetric cone complementarity problem. Appl. Math. Comput. 2010, 217, 2989–2999.
  15. Narushima, Y.; Sagara, N.; Ogasawara, H. A smoothing Newton method with Fischer-Burmeister function for second-order cone complementarity problems. J. Optim. Theory Appl. 2011, 149, 79–101.
  16. Liu, X.; Liu, S. A new nonmonotone smoothing Newton method for the symmetric cone complementarity problem with the Cartesian P0-property. Math. Method Oper. Res. 2020, 92, 229–247.
  17. Chen, P.; Lin, G.; Zhu, X.; Bai, F. Smoothing Newton method for nonsmooth second-order cone complementarity problems with application to electric power markets. J. Glob. Optim. 2021, 80, 635–659.
  18. Zhou, S.; Pan, L.; Xiu, N.; Qi, H. Quadratic convergence of smoothing Newton’s method for 0/1 Loss optimization. SIAM J. Optim. 2021, 31, 3184–3211.
  19. Khouja, R.; Mourrain, B.; Yakoubsohn, J. Newton-type methods for simultaneous matrix diagonalization. Calcolo 2022, 59, 38.
  20. Zhang, J. A smoothing Newton algorithm for weighted linear complementarity problem. Optim. Lett. 2016, 10, 499–509.
  21. Tang, J.; Zhang, H. A nonmonotone smoothing Newton algorithm for weighted complementarity problem. J. Optim. Theory Appl. 2021, 189, 679–715.
  22. Potra, F.A.; Ptak, V. Nondiscrete induction and iterative processes. SIAM Rev. 1987, 29, 505–506.
  23. Kou, J.; Li, Y.; Wang, X. A modification of Newton method with third-order convergence. Appl. Math. Comput. 2006, 181, 1106–1111.
  24. Magrenan Ruiz, A.A.; Argyros, I.K. Two-step Newton methods. J. Complex. 2014, 30, 533–553.
  25. Gander, W. On Halley’s iteration method. Am. Math. Mon. 1985, 92, 131–134.
  26. Ezquerro, J.A.; Hernandez, M.A. On a convex acceleration of Newton’s method. J. Optim. Theory Appl. 1999, 100, 311–326.
  27. Tang, J. A variant nonmonotone smoothing algorithm with improved numerical results for large-scale LWCPs. Comput. Appl. Math. 2018, 37, 3927–3936.
  28. Li, D.; Fukushima, M. A globally and superlinearly convergent Gauss-Newton-based BFGS method for symmetric nonlinear equations. SIAM J. Numer. Anal. 1999, 37, 152–172.
  29. Dennis, J.; More, J. A characterization of superlinear convergence and its applications to quasi-Newton methods. Math. Comput. 1974, 28, 549–560.
Figure 1. Flow chart of Algorithm 1.
Table 1. Numerical results for a WLCP with B_1.

m | n | ϱ = 1: IT, Time, ERO | ϱ = 1.5: IT, Time, ERO | ϱ = 2: IT, Time, ERO
500 | 1000 | 5, 2.0413, 1.7219e-10 | 4, 1.9238, 5.4462e-7 | 4, 1.8250, 5.8769e-12
500 | 1000 | 6, 2.3358, 3.9422e-7 | 5, 2.4771, 3.1306e-13 | 4, 1.7943, 3.3544e-12
500 | 1000 | 6, 2.3764, 1.3445e-11 | 5, 2.4997, 3.1950e-13 | 4, 1.7302, 3.5464e-12
1000 | 2000 | 6, 14.0543, 1.7549e-12 | 5, 11.9847, 5.9945e-7 | 4, 12.4097, 1.8342e-12
1000 | 2000 | 6, 14.5208, 2.6955e-12 | 5, 11.8457, 3.0670e-9 | 4, 12.3936, 1.1576e-8
1000 | 2000 | 6, 14.2352, 1.3315e-9 | 5, 12.5786, 2.7924e-10 | 4, 12.4111, 1.8763e-12
1500 | 3000 | 5, 39.7915, 5.0310e-7 | 5, 38.5589, 3.0246e-12 | 4, 36.4203, 6.2360e-12
1500 | 3000 | 7, 48.1246, 2.9957e-7 | 6, 41.4926, 3.2203e-12 | 4, 39.9579, 6.7978e-11
1500 | 3000 | 7, 46.4935, 1.3715e-9 | 6, 41.8099, 3.4429e-10 | 4, 46.6347, 2.0403e-11
2000 | 4000 | 6, 110.1033, 5.6894e-10 | 5, 81.3782, 5.9703e-7 | 4, 100.4127, 2.7987e-11
2000 | 4000 | 6, 107.8945, 2.2694e-8 | 6, 104.6634, 3.3573e-7 | 4, 110.2353, 1.2117e-11
2000 | 4000 | 7, 128.1354, 2.6992e-11 | 7, 136.8423, 5.7307e-12 | 4, 103.2185, 2.2580e-10
2500 | 5000 | 6, 178.1547, 3.4099e-10 | 5, 176.0884, 6.7770e-7 | 4, 100.4127, 2.7987e-11
2500 | 5000 | 6, 261.9206, 2.5631e-10 | 6, 178.2723, 1.1975e-10 | 4, 110.2353, 1.2117e-11
2500 | 5000 | 6, 286.3750, 6.7117e-12 | 6, 209.1771, 6.7177e-12 | 4, 130.8529, 8.7520e-12
3000 | 6000 | 6, 371.0119, 4.0263e-10 | 6, 356.2151, 1.8624e-8 | 4, 316.1594, 4.5234e-11
3000 | 6000 | 6, 332.6154, 2.4767e-8 | 6, 306.6382, 9.6163e-12 | 4, 320.2794, 6.5542e-11
3000 | 6000 | 6, 352.4338, 1.7184e-10 | 6, 369.1693, 8.3990e-8 | 4, 320.2625, 1.8997e-11
Table 2. Variation of the value of ‖M(z^k)‖ with the number of iterations for B_1.

k | Algorithm 1 | SNM_Z
1 | 3.5025e0 | 9.0554e0
2 | 1.4006e-1 | 1.5244e0
3 | 1.7885e-4 | 1.3534e-1
4 | 3.0450e-12 | 4.6924e-3
5 | — | 1.2604e-5
6 | — | 1.5665e-10
Table 3. Numerical comparison results for a WLCP.

B | m | n | Algorithm 1: AIT, ATime, AERO | SNM_Z: AIT, ATime, AERO
B_1 | 500 | 1000 | 4.0, 1.6388, 2.2736e-7 | 5.9, 2.5001, 3.8463e-8
B_1 | 1000 | 2000 | 4.0, 15.8679, 3.5679e-11 | 5.6, 17.8929, 7.1401e-8
B_1 | 1500 | 3000 | 4.1, 42.0557, 3.5400e-7 | 5.9, 31.5197, 3.7355e-8
B_1 | 2000 | 4000 | 4.1, 90.4867, 5.3932e-7 | 5.5, 80.4471, 5.7230e-8
B_1 | 2500 | 5000 | 4.1, 179.8909, 4.8241e-7 | 6.0, 142.9854, 7.1523e-8
B_1 | 3000 | 6000 | 4.1, 300.2717, 5.7601e-7 | 6.8, 215.0043, 2.5449e-8
B_2 | 500 | 1000 | 4.2, 1.5968, 7.5550e-8 | 6.8, 1.5040, 1.0939e-7
B_2 | 1000 | 2000 | 4.2, 9.5536, 2.7452e-7 | 7.0, 9.4423, 9.6079e-8
B_2 | 1500 | 3000 | 4.4, 32.8206, 2.2022e-7 | 5.8, 35.8170, 3.9237e-8
B_2 | 2000 | 4000 | 4.5, 80.3553, 1.9778e-9 | 7.1, 73.1503, 1.2470e-7
B_2 | 2500 | 5000 | 4.6, 159.6309, 7.6437e-9 | 7.5, 127.3718, 1.2020e-7
B_2 | 3000 | 6000 | 4.8, 270.6326, 5.9255e-8 | 7.3, 204.2898, 1.1762e-7
Table 4. Numerical comparison results for a WCP.

m | n | Algorithm 1: AIT, ATime, AERO | SNM_Z: AIT, ATime, AERO
500 | 1000 | 4.4, 1.8729, 1.7379e-7 | 6.7, 2.4799, 2.4252e-8
1000 | 2000 | 4.6, 13.3430, 7.0375e-9 | 6.9, 17.1639, 1.1370e-7
1500 | 3000 | 5.0, 44.8153, 1.2673e-7 | 7.4, 61.5095, 4.7525e-8
2000 | 4000 | 5.2, 120.9742, 4.0044e-8 | 6.8, 114.9068, 1.7140e-8
2500 | 5000 | 5.3, 220.2452, 3.6690e-8 | 7.3, 217.9106, 2.0620e-8
3000 | 6000 | 5.2, 371.6272, 2.2190e-9 | 6.6, 404.5870, 1.1662e-7

Share and Cite

Liu, X.; Zhang, J. Strong Convergence of a Two-Step Modified Newton Method for Weighted Complementarity Problems. Axioms 2023, 12, 742. https://doi.org/10.3390/axioms12080742