Article

An Efficient Subspace Minimization Conjugate Gradient Method for Solving Nonlinear Monotone Equations with Convex Constraints

Taiyong Song and Zexian Liu *
School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(3), 170; https://doi.org/10.3390/axioms13030170
Submission received: 6 January 2024 / Revised: 22 February 2024 / Accepted: 27 February 2024 / Published: 6 March 2024
(This article belongs to the Special Issue Numerical Analysis and Optimization)

Abstract

The subspace minimization conjugate gradient (SMCG) methods proposed by Yuan and Stoer are efficient iterative methods for unconstrained optimization, in which the search directions are generated by minimizing quadratic approximation models of the objective function at the current iterate. Although the SMCG methods have shown excellent numerical performance, they have so far been used only for unconstrained optimization. In this paper, we extend the SMCG methods and present an efficient SMCG method for solving nonlinear monotone equations with convex constraints by combining them with the projection technique, where the search direction satisfies the sufficient descent condition. Under mild conditions, we establish the global convergence and the R-linear convergence rate of the proposed method. Numerical experiments indicate that the proposed method is very promising.

1. Introduction

We consider the following nonlinear equations with convex constraints:
$$F(x) = 0, \quad x \in \Omega, \qquad (1)$$
where $\Omega \subseteq \mathbb{R}^n$ is a non-empty closed convex set and $F:\mathbb{R}^n \to \mathbb{R}^n$ is a continuous mapping that satisfies the monotonicity condition
$$\big\langle F(x) - F(y),\, x - y\big\rangle \ge 0, \qquad (2)$$
for all $x, y \in \mathbb{R}^n$. It is easy to verify that the solution set of problem (1) is convex under condition (2).
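As a simple illustration (ours, not taken from the paper), an affine mapping $F(x) = Ax + b$ with $A$ positive semidefinite satisfies the monotonicity condition (2); the short Python snippet below, with purely illustrative names, checks this numerically at two random points.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T                      # positive semidefinite, so F(x) = Ax + b is monotone
b = rng.standard_normal(n)
F = lambda x: A @ x + b

x, y = rng.standard_normal(n), rng.standard_normal(n)
print((F(x) - F(y)) @ (x - y) >= 0)   # condition (2); prints True
```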
Nonlinear equations have numerous practical applications, e.g., machinery manufacturing problems [1], neural networks [2], economic equilibrium problems [3], image recovery problems [4], and so on. Because of these many practical applications, problem (1) has attracted a substantial number of researchers to develop effective iterative methods for finding its solutions, such as Newton's method, quasi-Newton methods, trust region methods, Levenberg–Marquardt methods, and their variants ([5,6,7,8,9]). Although these methods are very popular and converge quickly from a sufficiently good initial point, they are not suitable for solving large-scale nonlinear equations due to the computation and storage of the Jacobian matrix or its approximation.
Due to their simple form and low memory requirements, conjugate gradient (CG) methods have been used to solve problem (1) by combining them with the projection technique proposed by Solodov and Svaiter [10] (see [11,12]). Xiao and Zhu [13] extended the famous CG_DESCENT method [14], due to its effectiveness, to solve nonlinear monotone equations with convex constraints. Liu and Li [15] presented an efficient projection method for solving convex constrained monotone nonlinear equations, which can be viewed as another extension of the CG_DESCENT method [14] and was applied to sparse signal reconstruction in compressive sensing. Based on the Dai–Yuan (DY) method [16], Liu and Feng [17] presented an efficient derivative-free iterative method and established its Q-linear convergence rate under the local error bound condition. By minimizing the Frobenius-norm distance between the relevant matrix and the self-scaling memoryless BFGS matrix, Gao et al. [18] proposed an adaptive projection method for solving nonlinear equations and applied it to recover sparse signals from incomplete and contaminated sampling measurements. Based on [19], Li and Zheng [20] proposed two effective derivative-free methods for solving large-scale nonsmooth monotone nonlinear equations. Waziri et al. [21] proposed two DY-type iterative methods for solving (1). By using the projection method [10], Ibrahim et al. [22] introduced two classes of three-term methods for solving (1) and established their global convergence under a weaker monotonicity condition.
The subspace minimization conjugate gradient (SMCG) methods proposed by Yuan and Stoer [23] are generalizations of the traditional CG methods and constitute a class of iterative methods for unconstrained optimization. The SMCG methods have shown excellent numerical performance and have received much attention recently. However, they are currently used only for unconstrained optimization. It is therefore of great interest to study SMCG methods for solving nonlinear equations with convex constraints. In this paper, we propose an efficient SMCG method for solving nonlinear monotone equations with convex constraints by combining it with the projection technique, where the search direction satisfies the sufficient descent condition. Under suitable conditions, the global convergence and the convergence rate of the proposed method are established. Numerical experiments indicate that the proposed method is superior to some efficient conjugate gradient methods.
The remainder of this paper is organized as follows. In Section 2, an efficient SMCG method for solving nonlinear monotone equations with convex constraints is presented. We prove the global convergence and the convergence rate of the proposed method in Section 3. In Section 4, numerical experiments are reported to verify the effectiveness of the proposed method. Conclusions are presented in Section 5.

2. The SMCG Method for Solving Nonlinear Monotone Equations with Convex Constraints

In this section, we first review the SMCG methods for unconstrained optimization, and then propose an efficient SMCG method for solving (1) by combining them with the projection technique and establish some of its important properties.

2.1. The SMCG Method for Unconstrained Optimization

We review the SMCG methods here.
The SMCG methods were proposed by Yuan and Stoer [23] to solve the unconstrained optimization problem
$$\min_{x\in\mathbb{R}^n} f(x),$$
where $f:\mathbb{R}^n\to\mathbb{R}$ is continuously differentiable. The SMCG methods take the form $x_{k+1} = x_k + \alpha_k \hat d_k$, where $\alpha_k$ is the stepsize and $\hat d_k$ is the search direction, which is generated by minimizing a quadratic approximation model of the objective function $f$ at the current iterate $x_k$ over the subspace $\Omega_k = \mathrm{Span}\{\hat d_{k-1}, g_k\}$, namely
$$\min_{\hat d\in\Omega_k} m_k(\hat d) = g_k^T\hat d + \frac{1}{2}\hat d^T B_k\hat d, \qquad (3)$$
where $B_k$ is an approximation of the Hessian matrix and is required to satisfy the quasi-Newton equation $B_k\hat s_{k-1} = \hat y_{k-1}$, with $\hat s_{k-1} = x_k - x_{k-1} = \alpha_{k-1}\hat d_{k-1}$ and $g_k = \nabla f(x_k)$.
In the following, we consider the case where $\hat d_{k-1}$ and $g_k$ are not collinear. Since any vector $\hat d_k$ in $\Omega_k = \mathrm{Span}\{\hat d_{k-1}, g_k\}$ can be expressed as
$$\hat d_k = \mu_k g_k + \nu_k \hat s_{k-1}, \qquad (4)$$
where $\mu_k, \nu_k \in \mathbb{R}$, by substituting (4) into (3) we obtain
$$\min_{(\mu_k,\nu_k)\in\mathbb{R}^2} \Phi(\mu_k,\nu_k) = \begin{pmatrix}\|g_k\|^2\\ g_k^T\hat s_{k-1}\end{pmatrix}^T\begin{pmatrix}\mu_k\\ \nu_k\end{pmatrix} + \frac{1}{2}\begin{pmatrix}\mu_k\\ \nu_k\end{pmatrix}^T\begin{pmatrix}g_k^T B_k g_k & g_k^T B_k \hat s_{k-1}\\ \hat s_{k-1}^T B_k g_k & \hat s_{k-1}^T B_k \hat s_{k-1}\end{pmatrix}\begin{pmatrix}\mu_k\\ \nu_k\end{pmatrix}. \qquad (5)$$
When $B_k$ is positive definite, by imposing $\nabla\Phi(\mu_k,\nu_k) = (0,0)^T$, we obtain the optimal solution of subproblem (5) (for more details, please see [23]):
$$\begin{pmatrix}\mu_k^*\\ \nu_k^*\end{pmatrix} = \frac{1}{\Delta_k}\begin{pmatrix}g_k^T\hat y_{k-1}\, g_k^T\hat s_{k-1} - \hat s_{k-1}^T\hat y_{k-1}\,\|g_k\|^2\\ g_k^T\hat y_{k-1}\,\|g_k\|^2 - \rho_k\, g_k^T\hat s_{k-1}\end{pmatrix}, \qquad (6)$$
where $\Delta_k = \rho_k\,\hat s_{k-1}^T\hat y_{k-1} - \big(g_k^T\hat y_{k-1}\big)^2$, $\rho_k = g_k^T B_k g_k$, and $\hat y_{k-1} = g_k - g_{k-1}$.
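As a quick sanity check (ours, not from the paper), the following Python snippet compares the closed-form solution (6) with a direct solve of the two-dimensional subproblem (5) for a randomly generated positive definite $B_k$; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A random symmetric positive definite B_k and arbitrary g_k, s_{k-1}.
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)
g = rng.standard_normal(n)
s = rng.standard_normal(n)
y = B @ s                      # quasi-Newton equation B_k s_{k-1} = y_{k-1}

rho = g @ B @ g                # rho_k = g_k^T B_k g_k
delta = rho * (s @ y) - (g @ y) ** 2

# Closed-form solution (6).
mu_star = ((g @ y) * (g @ s) - (s @ y) * (g @ g)) / delta
nu_star = ((g @ y) * (g @ g) - rho * (g @ s)) / delta

# Direct solution of the 2-D subproblem (5): minimize b^T v + 0.5 v^T M v.
M = np.array([[g @ B @ g, g @ B @ s],
              [s @ B @ g, s @ B @ s]])
b = np.array([g @ g, g @ s])
v = np.linalg.solve(M, -b)

print(np.allclose([mu_star, nu_star], v))   # expected: True
```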
An important property of the SMCG methods was established by Dai and Kou [24] in 2016. They proved the two-dimensional finite termination property of the SMCG methods and presented some Barzilai–Borwein conjugate gradient (BBCG) methods with different $\rho_k$ values, the most efficient of which is
$$\rho_k^{BBCG3} = \frac{3\,\|g_k\|^2\,\|\hat y_{k-1}\|^2}{2\,\hat s_{k-1}^T\hat y_{k-1}}. \qquad (7)$$
Motivated by the SMCG methods [23] and ρ k B B C G 3 , Liu and Liu [25] extended the BBCG3 method to general unconstrained optimization and presented an efficient subspace minimization conjugate gradient method (SMCG_BB). Since then, a lot of SMCG methods [26,27,28] have been proposed for unconstrained optimization. The SMCG methods are very efficient and have received much attention.

2.2. The SMCG Method for Solving (1) and Some of Its Important Properties

In this subsection, we extend the SMCG methods for unconstrained optimization to solve (1) by combining them with the projection technique and establish some important properties of the search direction. The motivation for this extension is that the SMCG methods have the following characteristics: (i) When the exact line search is performed, the search directions of the SMCG methods are parallel to those of the traditional CG methods, so the SMCG methods reduce to the traditional CG methods in that case. This implies that the SMCG methods inherit the finite termination property of the traditional CG methods for convex quadratic minimization. (ii) The search directions of the SMCG methods are generated by solving (3) over the whole two-dimensional subspace $\Omega_k = \mathrm{Span}\{\hat d_{k-1}, g_k\}$, while those of the traditional CG methods have the form $\hat d_k = -g_k + \beta_k\hat d_{k-1}$, where $\beta_k$ is the conjugate parameter. In other words, the search directions of the traditional CG methods are restricted to a special subset of $\Omega_k$ so that they possess the conjugacy property. As a result, the SMCG methods have more freedom and thus more potential, both theoretically and numerically. In theory, the SMCG methods can possess the finite termination property for two-dimensional strictly convex quadratic minimization problems even without the exact line search [24], which is impossible for the traditional CG methods when the line search is inexact. In terms of numerical performance, the results in [25,26,27,28] indicate that the SMCG methods are very efficient.
For simplicity, we abbreviate $F(x_k)$ as $F_k$ in the following. We are particularly interested in the SMCG methods proposed by Yuan and Stoer [23], where the search directions are given by
$$\hat d_k = \mu_k^* g_k + \nu_k^* \hat s_{k-1}, \qquad (8)$$
where $\mu_k^*$ and $\nu_k^*$ are determined by (6). For the choice of $\rho_k$ in (6), we take the form (7) due to its effectiveness [24]. Therefore, based on (8) and (7), the search direction of the SMCG method for solving problem (1) can be arranged as
$$\bar d_k = \frac{1}{\Delta_k}\Big[\big(F_k^T y_{k-1}\,F_k^T s_{k-1} - s_{k-1}^T y_{k-1}\,\|F_k\|^2\big)F_k + \big(F_k^T y_{k-1}\,\|F_k\|^2 - \rho_k\,F_k^T s_{k-1}\big)s_{k-1}\Big], \qquad (9)$$
where $y_{k-1} = \bar y_{k-1} + r s_{k-1}$ with $r > 0$, $\bar y_{k-1} = F(x_k) - F(x_{k-1})$, $s_{k-1} = x_k - x_{k-1}$, $\Delta_k = \rho_k\, s_{k-1}^T y_{k-1} - \big(F_k^T y_{k-1}\big)^2$, and $\rho_k = \frac{3\,\|F_k\|^2\,\|y_{k-1}\|^2}{2\,s_{k-1}^T y_{k-1}}$.
In order to analyze some properties of the search direction, the direction is reset to $-F_k$ when $s_{k-1}^T y_{k-1} < \xi_1\|y_{k-1}\|^2$, where $\xi_1 > 0$. Therefore, the search direction is truncated as
$$d_k = \begin{cases}\bar d_k, & \text{if } s_{k-1}^T y_{k-1} \ge \xi_1\|y_{k-1}\|^2,\\[2pt] -F_k, & \text{otherwise},\end{cases} \qquad (10)$$
where $\bar d_k$ is given by (9).
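To make (9) and (10) concrete, here is a minimal NumPy sketch of the truncated search direction, assuming $y_{k-1}$ is supplied as $\bar y_{k-1} + r s_{k-1}$ and $\rho_k$ is taken as in (7); the function name and defaults are illustrative and not the authors' code.

```python
import numpy as np

def smcg_direction(F_k, s_prev, y_prev, xi1=1e-7):
    """Sketch of the truncated SMCG direction (9)-(10).

    F_k    : residual F(x_k)
    s_prev : s_{k-1} = x_k - x_{k-1}
    y_prev : y_{k-1} = F(x_k) - F(x_{k-1}) + r * s_{k-1}
    """
    sy = s_prev @ y_prev
    # Safeguard (10): fall back to -F_k when the curvature is too small.
    if sy < xi1 * (y_prev @ y_prev):
        return -F_k
    Fy = F_k @ y_prev
    Fs = F_k @ s_prev
    nF2 = F_k @ F_k
    rho = 1.5 * nF2 * (y_prev @ y_prev) / sy          # BBCG3-type rho_k, as in (7)
    delta = rho * sy - Fy ** 2                        # Delta_k
    return ((Fy * Fs - sy * nF2) * F_k + (Fy * nF2 - rho * Fs) * s_prev) / delta
```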
The projection technique, which will be used in the proposed method, is described as follows.
By setting $z_k = x_k + \alpha_k d_k$ as a trial point, we define the hyperplane
$$H_k = \big\{x\in\mathbb{R}^n \mid \langle F(z_k),\, x - z_k\rangle = 0\big\},$$
which strictly separates $x_k$ from the zero points of $F(x)$ in (1). The projection operator is a mapping from $\mathbb{R}^n$ onto the non-empty closed convex set $\Omega$:
$$P_\Omega[x] = \arg\min\big\{\|x - y\| \mid y\in\Omega\big\},$$
which enjoys the non-expansive property
$$\|P_\Omega[x] - y\| \le \|x - y\|, \quad \forall y\in\Omega.$$
Solodov and Svaiter [10] showed that the next iterate $x_{k+1}$ can be taken as the projection of $x_k$ onto $H_k$, namely
$$x_{k+1} = x_k - \frac{\big\langle F(z_k),\, x_k - z_k\big\rangle}{\|F(z_k)\|^2}\,F(z_k).$$
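A minimal sketch (ours, not from the paper) of this hyperplane projection step; Algorithm 1 below actually uses the relaxed update (14)–(15) combined with $P_\Omega$, but the basic step looks like this, with illustrative names.

```python
import numpy as np

def project_onto_hyperplane(x_k, z_k, F_z):
    """Projection of x_k onto H_k = {x : <F(z_k), x - z_k> = 0}."""
    return x_k - (F_z @ (x_k - z_k)) / (F_z @ F_z) * F_z

# Quick check that the projected point lies on H_k.
rng = np.random.default_rng(2)
x_k, z_k, F_z = rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(4)
x_next = project_onto_hyperplane(x_k, z_k, F_z)
print(np.isclose(F_z @ (x_next - z_k), 0.0))   # True
```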
By combining (10) with the projection technique, we present an SMCG method for solving (1), which is described in detail in Algorithm 1.
The following lemma indicates that the search direction d k satisfies the sufficient descent property.
Lemma 1. 
The search direction $d_k$ generated by Algorithm 1 always satisfies the sufficient descent condition
$$d_k^T F_k \le -C\,\|F_k\|^2, \qquad (11)$$
for all $k \ge 0$.
Proof. 
According to (10), we know that (11) holds with $C = 1$ if $s_{k-1}^T y_{k-1} < \xi_1\|y_{k-1}\|^2$. We next consider the opposite situation. It follows that
$$\begin{aligned}
d_k^T F_k &= \frac{1}{\Delta_k}\Big[\big(F_k^T y_{k-1}\,F_k^T s_{k-1} - s_{k-1}^T y_{k-1}\,\|F_k\|^2\big)F_k + \big(F_k^T y_{k-1}\,\|F_k\|^2 - \rho_k\,F_k^T s_{k-1}\big)s_{k-1}\Big]^T F_k\\
&= -\frac{\|F_k\|^4}{\Delta_k}\left[s_{k-1}^T y_{k-1} - 2F_k^T y_{k-1}\,\frac{F_k^T s_{k-1}}{\|F_k\|^2} + \rho_k\left(\frac{F_k^T s_{k-1}}{\|F_k\|^2}\right)^2\right]\\
&\overset{\Delta}{=} -\frac{\|F_k\|^4}{\Delta_k}\,\eta_k \le -\frac{\|F_k\|^4}{\Delta_k}\cdot\frac{\Delta_k}{\rho_k} = -\frac{\|F_k\|^4}{\rho_k}, \qquad (12)
\end{aligned}$$
where the inequality comes from the fact that treating $\eta_k$ as a function of the single variable $\frac{F_k^T s_{k-1}}{\|F_k\|^2}$ and minimizing it yields $\eta_k \ge \frac{\Delta_k}{\rho_k}$. Consequently, by the choice of $\rho_k$ and (10), it holds that
$$d_k^T F_k \le -\frac{\|F_k\|^4}{\rho_k} = -\frac{2\,s_{k-1}^T y_{k-1}}{3\,\|y_{k-1}\|^2}\,\|F_k\|^2 \le -\frac{2\xi_1}{3}\,\|F_k\|^2.$$
In summary, (11) holds with $C = \min\left\{1, \frac{2\xi_1}{3}\right\}$. The proof is completed.    □
Algorithm 1 Subspace Minimization Conjugate Gradient Method for Solving (1)
  • Step 0. Initialization. Select $x_0 \in \mathbb{R}^n$, $\varepsilon > 0$, $0 < \sigma < 1$, $\xi \in (0,1)$, $\rho \in (0,1)$, $\kappa \in (0,2)$. Set $k = 0$.
  • Step 1. If $\|F_k\| \le \varepsilon$, stop. Otherwise, compute the search direction $d_k$ by (10).
  • Step 2. Let $z_k = x_k + \alpha_k d_k$, where $\alpha_k = \max\{\xi\rho^i \mid i = 0, 1, 2, \ldots\}$ is determined by
    $$-\big\langle F(x_k + \alpha_k d_k),\, d_k\big\rangle \ge \sigma\,\alpha_k\,\|F(x_k + \alpha_k d_k)\|\,\|d_k\|^2. \qquad (13)$$
  • Step 3. If $z_k \in \Omega$ and $\|F(z_k)\| \le \varepsilon$, set $x_{k+1} = z_k$ and stop. Otherwise, determine $x_{k+1}$ by
    $$x_{k+1} = P_\Omega\big[x_k - \kappa\lambda_k F(z_k)\big], \qquad (14)$$
    where
    $$\lambda_k = \frac{\big\langle F(z_k),\, x_k - z_k\big\rangle}{\|F(z_k)\|^2}. \qquad (15)$$
  • Step 4. Set $k = k + 1$ and go to Step 1.
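The following Python sketch shows one possible end-to-end implementation of Algorithm 1 for the special case $\Omega = \mathbb{R}^n_+$ (so that $P_\Omega$ is componentwise clipping); the direction computation from the sketch after (10) is inlined. The parameter defaults follow the values reported in Section 4, but the code is illustrative and is not the authors' MATLAB implementation.

```python
import numpy as np

def algorithm1(F, x0, eps=1e-5, sigma=1e-4, xi=0.55, rho=0.53,
               kappa=1.9, r=0.1, xi1=1e-7, max_iter=10000):
    """Sketch of Algorithm 1 for Omega = R^n_+, where P_Omega clips at zero."""
    project = lambda u: np.maximum(u, 0.0)       # P_Omega for the nonnegative orthant
    x = project(np.asarray(x0, dtype=float))
    Fx = F(x)
    s = y = None
    for _ in range(max_iter):
        if np.linalg.norm(Fx) <= eps:            # Step 1: stopping test
            break
        if s is None or s @ y < xi1 * (y @ y):   # truncation in (10); d_0 = -F_0
            d = -Fx
        else:                                    # SMCG direction (9)
            sy, Fy, Fs, nF2 = s @ y, Fx @ y, Fx @ s, Fx @ Fx
            rho_k = 1.5 * nF2 * (y @ y) / sy
            delta = rho_k * sy - Fy ** 2
            d = ((Fy * Fs - sy * nF2) * Fx + (Fy * nF2 - rho_k * Fs) * s) / delta
        alpha = xi                               # Step 2: backtracking line search (13)
        while True:
            z = x + alpha * d
            Fz = F(z)
            if -(Fz @ d) >= sigma * alpha * np.linalg.norm(Fz) * (d @ d):
                break
            alpha *= rho
        if np.all(z >= 0.0) and np.linalg.norm(Fz) <= eps:   # Step 3
            x_new = z
        else:
            lam = (Fz @ (x - z)) / (Fz @ Fz)                 # (15)
            x_new = project(x - kappa * lam * Fz)            # (14)
        F_new = F(x_new)
        s, y = x_new - x, F_new - Fx + r * (x_new - x)
        x, Fx = x_new, F_new
    return x

# Example: a Problem 4-type residual, F_i(x) = exp(x_i) - 1, with Omega = R^n_+.
x_sol = algorithm1(lambda v: np.exp(v) - 1.0, 0.5 * np.ones(1000))
print(np.linalg.norm(np.exp(x_sol) - 1.0))
```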
Lemma 2. 
Let the sequences $\{d_k\}$ and $\{x_k\}$ be generated by Algorithm 1. Then, there always exists a stepsize $\alpha_k$ satisfying the line search (13).
Proof. 
We prove it by contradiction. Suppose that inequality (13) does not hold for any nonnegative integer $i$ at the $k$-th iteration. Then
$$-\big\langle F(x_k + \xi\rho^i d_k),\, d_k\big\rangle < \sigma\,\xi\rho^i\,\big\|F(x_k + \xi\rho^i d_k)\big\|\,\|d_k\|^2.$$
Letting $i \to \infty$, it follows from the continuity of $F$ and $\rho \in (0,1)$ that
$$F(x_k)^T d_k \ge 0,$$
which contradicts (11). The proof is completed. □

3. Convergence Analysis

In this section, we will establish the global convergence and the convergence rate of Algorithm 1.

3.1. Global Convergence

We first make the following assumptions.
Assumption 1. 
There is a solution $x^* \in \Omega^*$ such that $F(x^*) = 0$, where $\Omega^*$ denotes the solution set of problem (1).
Assumption 2. 
The mapping F is continuous and monotone.
By utilizing (2), we can obtain
$$s_{k-1}^T y_{k-1} = s_{k-1}^T\big(\bar y_{k-1} + r s_{k-1}\big) = s_{k-1}^T\big(F(x_k) - F(x_{k-1})\big) + r\,s_{k-1}^T s_{k-1} \ge r\,\|s_{k-1}\|^2 > 0. \qquad (16)$$
The next lemma indicates that the sequence $\{\|x_k - x^*\|\}$ generated by Algorithm 1 is Fejér monotone with respect to $\Omega^*$.
Lemma 3. 
Suppose that Assumptions 1 and 2 hold, and $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then, it holds that
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \kappa(2-\kappa)\sigma^2\,\|x_k - z_k\|^4, \quad \forall x^*\in\Omega^*. \qquad (17)$$
Moreover, the sequence $\{x_k\}$ is bounded and
$$\sum_{k=0}^{\infty}\|x_k - z_k\|^4 < +\infty. \qquad (18)$$
Proof. 
From (2), the following holds:
$$\big\langle F(z_k),\, x_k - x^*\big\rangle \ge \big\langle F(z_k),\, x_k - z_k\big\rangle, \quad \forall x^*\in\Omega^*,$$
which, together with (13), implies that
$$\big\langle F(z_k),\, x_k - z_k\big\rangle \ge \sigma\alpha_k^2\,\|F(z_k)\|\,\|d_k\|^2 \ge 0.$$
As a result, we have
$$\begin{aligned}
\|x_{k+1} - x^*\|^2 &= \big\|P_\Omega[x_k - \kappa\lambda_k F(z_k)] - x^*\big\|^2 \le \big\|x_k - \kappa\lambda_k F(z_k) - x^*\big\|^2\\
&= \|x_k - x^*\|^2 - 2\kappa\lambda_k\big\langle F(z_k),\, x_k - x^*\big\rangle + \|\kappa\lambda_k F(z_k)\|^2\\
&\le \|x_k - x^*\|^2 - 2\kappa\lambda_k\big\langle F(z_k),\, x_k - z_k\big\rangle + \|\kappa\lambda_k F(z_k)\|^2\\
&= \|x_k - x^*\|^2 - \kappa(2-\kappa)\,\frac{\big\langle F(z_k),\, x_k - z_k\big\rangle^2}{\|F(z_k)\|^2}\\
&\le \|x_k - x^*\|^2 - \kappa(2-\kappa)\sigma^2\,\|x_k - z_k\|^4.
\end{aligned}$$
It follows that the sequence $\{\|x_k - x^*\|\}$ is non-increasing, and thus the sequence $\{x_k\}$ is bounded. We also have
$$\kappa(2-\kappa)\sigma^2\sum_{k=0}^{\infty}\|x_k - z_k\|^4 \le \|x_0 - x^*\|^2 < +\infty.$$
By the definition of $z_k$, we can thus determine that
$$\lim_{k\to\infty}\|x_k - z_k\| = \lim_{k\to\infty}\alpha_k\|d_k\| = 0. \qquad (19)$$
The proof is completed.    □
The following lemma is proved based only on the continuity of $F$.
Lemma 4. 
Suppose that $\{d_k\}$ is generated by Algorithm 1. Then, for all $k \ge 0$, we have
$$C\,\|F_k\| \le \|d_k\| \le C_3\,\|F_k\|, \qquad (20)$$
where $0 < C < C_3$.
Proof. 
From (11) and the Cauchy–Schwarz inequality, it follows that
$$\|d_k\| \ge C\,\|F_k\|. \qquad (21)$$
In the following, we consider two cases. (i) When $s_{k-1}^T y_{k-1} \ge \xi_1\|y_{k-1}\|^2$ holds, we have
$$\frac{1}{\Delta_k} = \frac{1}{\rho_k\, s_{k-1}^T y_{k-1} - \big(F_k^T y_{k-1}\big)^2} = \frac{1}{\frac{3}{2}\|y_{k-1}\|^2\|F_k\|^2 - \big(F_k^T y_{k-1}\big)^2} \le \frac{2}{\|y_{k-1}\|^2\|F_k\|^2}. \qquad (22)$$
Therefore, by (10), (16) and (22), as well as the Cauchy–Schwarz inequality, we obtain
$$\begin{aligned}
\|d_k\| &= \frac{1}{\Delta_k}\Big\|\big(F_k^T y_{k-1}\,F_k^T s_{k-1} - s_{k-1}^T y_{k-1}\,\|F_k\|^2\big)F_k + \Big(F_k^T y_{k-1}\,\|F_k\|^2 - \frac{3\|y_{k-1}\|^2\|F_k\|^2}{2\,s_{k-1}^T y_{k-1}}\,F_k^T s_{k-1}\Big)s_{k-1}\Big\|\\
&\le \frac{1}{\Delta_k}\Big[\big|F_k^T y_{k-1}\big|\,\big|F_k^T s_{k-1}\big|\,\|F_k\| + s_{k-1}^T y_{k-1}\,\|F_k\|^3 + \big|F_k^T y_{k-1}\big|\,\|F_k\|^2\|s_{k-1}\| + \frac{3\|y_{k-1}\|^2\|F_k\|^2}{2\,s_{k-1}^T y_{k-1}}\,\big|F_k^T s_{k-1}\big|\,\|s_{k-1}\|\Big]\\
&\le \frac{2}{\|y_{k-1}\|^2\|F_k\|^2}\Big[3\,\|s_{k-1}\|\,\|y_{k-1}\|\,\|F_k\|^3 + \frac{3}{2\,s_{k-1}^T y_{k-1}}\,\|y_{k-1}\|^2\|s_{k-1}\|^2\|F_k\|^3\Big]\\
&= \Big(\frac{6\,\|s_{k-1}\|}{\|y_{k-1}\|} + \frac{3\,\|s_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\Big)\|F_k\| = \Big(\frac{6\,\|s_{k-1}\|^2}{\|s_{k-1}\|\,\|y_{k-1}\|} + \frac{3\,\|s_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\Big)\|F_k\|\\
&\le \Big(\frac{6\,\|s_{k-1}\|^2}{s_{k-1}^T y_{k-1}} + \frac{3\,\|s_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\Big)\|F_k\| = \frac{9\,\|s_{k-1}\|^2}{s_{k-1}^T y_{k-1}}\,\|F_k\| \le \frac{9}{r}\,\|F_k\|. \qquad (23)
\end{aligned}$$
(ii) When $s_{k-1}^T y_{k-1} < \xi_1\|y_{k-1}\|^2$ or $k = 0$, we have $d_k = -F_k$ and hence $\|d_k\| = \|F_k\|$. In summary, (20) holds for all $k \ge 0$ with $C_3 = \max\{9/r, 1\}$ and $C$ as in (11). The proof is completed. □
In the following theorem, we establish the global convergence of Algorithm 1.
Theorem 1. 
Suppose that Assumptions 1 and 2 hold, and the sequences $\{x_k\}$ and $\{z_k\}$ are generated by Algorithm 1. Then, the following holds:
$$\liminf_{k\to\infty}\|F_k\| = 0. \qquad (24)$$
Proof. 
We prove it by contradiction. Suppose that (24) does not hold, i.e., there exists a constant $c_0 > 0$ such that $\|F_k\| \ge c_0$ for all $k \ge 0$. Together with (21), this implies that
$$\|d_k\| \ge C c_0, \quad \forall k \ge 0. \qquad (25)$$
By utilizing (19) and (25), we can determine that $\lim_{k\to\infty}\alpha_k = 0$. By $\alpha_k = \xi\rho^{i_k}$ and the line search (13), for all sufficiently large $k$ we have
$$-\big\langle F(x_k + \xi\rho^{i_k-1} d_k),\, d_k\big\rangle < \sigma\,\xi\rho^{i_k-1}\,\big\|F(x_k + \xi\rho^{i_k-1} d_k)\big\|\,\|d_k\|^2. \qquad (26)$$
It follows from (20), the continuity of $F$, and the boundedness of $\{x_k\}$ that the sequence $\{d_k\}$ is bounded. Hence, there exist convergent subsequences of both $\{x_k\}$ and $\{d_k\}$; without loss of generality, we assume that $\{x_k\}$ and $\{d_k\}$ themselves converge to limit points $\bar x$ and $\bar d$, respectively. Taking limits in (26) then yields
$$\big\langle F(\bar x),\, \bar d\big\rangle \ge 0. \qquad (27)$$
On the other hand, taking limits on both sides of (11), we obtain
$$\big\langle F(\bar x),\, \bar d\big\rangle \le -C\,\|F(\bar x)\|^2. \qquad (28)$$
It follows from (27) and (28) that $F(\bar x) = 0$, which contradicts $\|F_k\| \ge c_0$. Therefore, (24) holds. The proof is completed. □

3.2. R-Linear Convergence Rate

We now analyze the Q-linear and R-linear convergence of Algorithm 1. We say that a method converges Q-linearly if its iterative sequence $\{x_k\}$ satisfies $\limsup_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|} \le \phi$ for some $\phi\in(0,1)$; we say that a method converges R-linearly if there exist constants $m > 0$ and $q\in(0,1)$ such that $\|x_k - x^*\| \le m q^k$ holds for all $k$ (see [29]).
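The toy example below (ours, purely illustrative) contrasts the two notions: the first error sequence has a constant ratio of successive errors (Q-linear), while the second is only bounded above by a geometric sequence (R-linear) and its successive ratios occasionally exceed one.

```python
import numpy as np

k = np.arange(40)
e_q = 0.5 ** k                                     # e_{k+1}/e_k = 0.5 for all k: Q-linear
e_r = 0.5 ** k * np.where(k % 2 == 0, 1.0, 0.01)   # bounded by 0.5^k, hence R-linear,
                                                   # but successive ratios jump above 1
print(np.max(e_q[1:] / e_q[:-1]))   # 0.5
print(np.max(e_r[1:] / e_r[:-1]))   # 50.0 -> not Q-linear
```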
Assumption 3. 
For any $x^*\in\Omega^*$, there exist constants $\omega\in(0,1)$ and $\delta>0$ such that
$$\omega\,\mathrm{dist}(x,\Omega^*) \le \|F(x)\|^2, \quad \forall x\in N(x^*,\delta), \qquad (29)$$
where $\mathrm{dist}(x,\Omega^*)$ denotes the distance from $x$ to the solution set $\Omega^*$, and $N(x^*,\delta)=\{x\in\Omega \mid \|x-x^*\|\le\delta\}$.
Theorem 2. 
Suppose that Assumptions 2 and 3 hold, and let the sequence $\{x_k\}$ be generated by Algorithm 1. Then, the sequence $\{\mathrm{dist}(x_k,\Omega^*)\}$ converges Q-linearly to 0 and the sequence $\{x_k\}$ converges R-linearly to $\bar x\in\Omega^*$.
Proof. 
By setting $u_k := \arg\min\{\|x_k - u\| \mid u\in\Omega^*\}$, we know that $u_k$ is the solution nearest to $x_k$, i.e.,
$$\|x_k - u_k\| = \mathrm{dist}(x_k,\Omega^*).$$
From (17), (21) and (29), for $u_k\in\Omega^*$ we have
$$\begin{aligned}
\mathrm{dist}(x_{k+1},\Omega^*)^2 &\le \|x_{k+1} - u_k\|^2 \le \mathrm{dist}(x_k,\Omega^*)^2 - \kappa(2-\kappa)\sigma^2\big(\alpha_k\|d_k\|\big)^4\\
&\le \mathrm{dist}(x_k,\Omega^*)^2 - \kappa(2-\kappa)\sigma^2\alpha_k^4 C^4\,\|F_k\|^4\\
&\le \mathrm{dist}(x_k,\Omega^*)^2 - \kappa(2-\kappa)\sigma^2\omega^2\alpha_k^4 C^4\,\mathrm{dist}(x_k,\Omega^*)^2\\
&= \big(1 - \kappa(2-\kappa)\sigma^2\omega^2\alpha_k^4 C^4\big)\,\mathrm{dist}(x_k,\Omega^*)^2,
\end{aligned}$$
which, together with $\kappa(2-\kappa)\in(0,1]$, $\sigma\in(0,1)$, $\omega\in(0,1)$, $\alpha_k\in(0,1)$, and $C\in(0,1]$, implies that the sequence $\{\mathrm{dist}(x_k,\Omega^*)\}$ converges Q-linearly to 0. Consequently, the sequence $\{x_k\}$ converges R-linearly to some $\bar x\in\Omega^*$. The proof is completed. □

4. Numerical Experiments

In this section, numerical experiments are conducted to compare the performance of Algorithm 1 with that of the HTTCGP method [30], the PDY method [17], the MPRPA method [18], and the PCG method [15], which are very effective projection-type algorithms for solving (1). All codes of the test methods were implemented in MATLAB R2019a and run on an HP desktop computer with an Intel(R) Core(TM) i5-10500 CPU (3.10 GHz), 8.00 GB of RAM, and the Windows 10 operating system.
In Algorithm 1, we choose the following parameter values:
$$\rho = 0.53, \quad \sigma = 0.0001, \quad \xi = 0.55, \quad \xi_1 = 10^{-7}, \quad \kappa = 1.9, \quad r = 0.1.$$
The parameters of the other four test algorithms use the default values from [15,17,18,30], respectively. In the numerical experiments, all test methods are terminated if the number of iterations exceeds 10,000 or if the current iterate satisfies $\|F(x_k)\| \le 10^{-5}$.
Denote
$$F(x) = \big(F_1(x), F_2(x), \ldots, F_n(x)\big)^T.$$
The test problems are given as follows.
Problem 1. 
This problem is the logarithmic function with $\Omega = \{x\in\mathbb{R}^n \mid x_i > -1\}$ [17], i.e.,
$$F_i(x) = \ln(x_i + 1) - \frac{x_i}{n}, \quad i = 1, 2, 3, \ldots, n.$$
Problem 2. 
This problem is the discrete boundary value problem with $\Omega = \mathbb{R}^n_+$ [17], i.e.,
$$F_1(x) = 2x_1 + 0.5h^2(x_1 + h)^3 - x_2,\quad F_i(x) = 2x_i + 0.5h^2(x_i + ih)^3 - x_{i-1} + x_{i+1},\quad F_n(x) = 2x_n + 0.5h^2(x_n + nh)^3 - x_{n-1},$$
where $h = \frac{1}{n+1}$ and $i = 2, 3, \ldots, n-1$.
Problem 3. 
This problem is the trigexp function with $\Omega = \mathbb{R}^n_+$ [17], i.e.,
$$F_1(x) = 3x_1^3 + 2x_2 - 5 + \sin(x_1 - x_2)\sin(x_1 + x_2),$$
$$F_i(x) = -x_{i-1}e^{x_{i-1}-x_i} + x_i(4 + 3x_i^2) + 2x_{i+1} + \sin(x_{i-1} - x_i)\sin(x_{i-1} + x_i) - 8,$$
$$F_n(x) = -x_{n-1}e^{x_{n-1}-x_n} + 4x_n - 3,$$
where $i = 2, 3, \ldots, n-1$.
Problem 4. 
This problem is a tridiagonal exponential problem with Ω = R + n [17], i.e.,
$$F_i(x) = e^{x_i} - 1,$$
where $i = 1, 2, \ldots, n$.
Problem 5. 
This problem is problem 4.6 in [17] with Ω = R + n , i.e.,
$$F_i(x) = x_i - 2\sin\big(|x_i - 1|\big),$$
where $i = 1, 2, \ldots, n$.
Problem 6. 
This problem is problem 4.7 in [17], i.e.,
$$F_1(x) = 2.5x_1 + x_2 - 1,\quad F_i(x) = x_{i-1} + 2.5x_i + x_{i+1} - 1,\quad F_n(x) = x_{n-1} + 2.5x_n - 1,$$
where $\Omega = \{x\in\mathbb{R}^n \mid x \ge -3\}$ and $i = 2, 3, \ldots, n-1$.
Problem 7. 
This problem is problem 4.8 in [17], i.e.,
$$F_i(x) = 2x_i - \sin x_i,$$
where $\Omega = \{x\in\mathbb{R}^n \mid x \ge -2\}$ and $i = 1, 2, \ldots, n$.
Problem 8. 
This problem is problem 3 in [31], i.e.,
$$F_1(x) = x_1 - e^{\cos\left(\frac{x_1 + x_2}{n+1}\right)},\quad F_i(x) = x_i - e^{\cos\left(\frac{x_{i-1} + x_i + x_{i+1}}{n+1}\right)},\quad F_n(x) = x_n - e^{\cos\left(\frac{x_{n-1} + x_n}{n+1}\right)},$$
where $\Omega = \mathbb{R}^n_+$ and $i = 2, \ldots, n-1$.
Problem 9. 
This problem is problem 4.3 in [32], i.e.,
$$F_i(x) = \frac{i}{n}\,e^{x_i} - 1,$$
where $\Omega = \mathbb{R}^n_+$ and $i = 1, 2, \ldots, n$.
Problem 10. 
This problem is problem 4.8 in [32], i.e.,
$$F_i(x) = e^{x_i^2} + 3\sin x_i\cos x_i - 1,$$
where $\Omega = \mathbb{R}^n_+$ and $i = 1, 2, \ldots, n$.
Problem 11. 
This problem is problem 4.5 in [32], i.e.,
$$F_1(x) = x_1 - e^{\cos\left(\frac{x_1 + x_2}{2}\right)},\quad F_i(x) = x_i - e^{\cos\left(\frac{x_{i-1} + x_i + x_{i+1}}{i}\right)},\quad F_n(x) = x_n - e^{\cos\left(\frac{x_{n-1} + x_n}{n}\right)},$$
where $\Omega = \mathbb{R}^n_+$ and $i = 2, \ldots, n-1$.
Problem 12. 
This problem is problem 5 in [31], i.e.,
$$F_1(x) = e^{x_1} - 1,\quad F_i(x) = e^{x_i} + x_{i-1} - 1,$$
where $\Omega = \mathbb{R}^n_+$ and $i = 2, \ldots, n$.
Problem 13. 
This problem is problem 6 in [31], i.e.,
$$F_1(x) = 2x_1 - x_2 + e^{x_1} - 1,\quad F_i(x) = -x_{i-1} + 2x_i - x_{i+1} + e^{x_i} - 1,\quad F_n(x) = -x_{n-1} + 2x_n + e^{x_n} - 1,$$
where $\Omega = \mathbb{R}^n_+$ and $i = 2, \ldots, n-1$.
Problem 14. 
This problem is problem 4.3 in [20], i.e.,
F 1 x = x 1 2 x 1 2 + 2 x 2 2 1 , F i x = x i 2 x i 1 2 + 2 x i 2 + 2 x i + 1 2 1 , F n x = x n 2 x n 1 2 + 2 x n 2 1 ,
where Ω = R + n , i = 2 , , n 1 .
Problem 15. 
This problem is a complementarity problem in [20], i.e.,
$$F_i(x) = (x_i - 1)^2 - 1.01,$$
where $\Omega = \mathbb{R}^n_+$ and $i = 1, 2, \ldots, n$.
The above 15 problems with different dimensions ($n = 1000$, $5000$, $10{,}000$, and $50{,}000$) are used to test the five methods, together with different initial points $x_0 = a_1 m$, where $a_1 \in \{0.1, 0.2, 0.5, 1.2, 1.5, 2.0\}$ and $m = \mathrm{ones}(n, 1)$. Some of the numerical results are listed in Table 1, where “Al” represents Algorithm 1, “Pi” ($i = 1, 2, \ldots, 15$) stands for the $i$-th test problem listed above, and “Ni” and “NF” denote the number of iterations and the number of function evaluations, respectively. The remaining numerical results are available at https://www.cnblogs.com/888-0516-2333/p/18026523 (accessed on 6 January 2024).
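For concreteness, here is a small Python sketch (not the authors' MATLAB code) of Problem 1's residual, as reconstructed above, together with the grid of dimensions and initial points described in this section; the function name is illustrative.

```python
import numpy as np

def problem1_residual(x):
    """Problem 1: F_i(x) = ln(x_i + 1) - x_i / n on Omega = {x : x_i > -1}."""
    n = x.size
    return np.log(x + 1.0) - x / n

# Dimensions and starting points used in the experiments of this section.
dims = [1000, 5000, 10000, 50000]
a1_values = [0.1, 0.2, 0.5, 1.2, 1.5, 2.0]

for n in dims:
    for a1 in a1_values:
        x0 = a1 * np.ones(n)               # x_0 = a_1 * ones(n, 1)
        print(n, a1, np.linalg.norm(problem1_residual(x0)))
```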
The performance profiles proposed by Dolan and Moré [33] are used to compare the numerical performance of the test methods in terms of Ni, NF, and T (CPU time). We explain the performance profile by taking the number of iterations as an example. Denote the test set and the set of algorithms by $P$ and $A$, respectively, and suppose that we have $n_a$ algorithms and $n_p$ problems. For each problem $p \in P$ and algorithm $a \in A$, $t_{p,a}$ denotes the number of iterations required by algorithm $a$ to solve problem $p$. We use the performance ratio
$$r_{p,a} = \frac{t_{p,a}}{\min\{t_{p,a} \mid a \in A\}}$$
to compare the performance of algorithm $a$ on problem $p$ with the best performance by any algorithm on this problem. To obtain an overall assessment of each algorithm, we define
$$\rho_a(\tau) = \frac{1}{n_p}\,\mathrm{size}\{p\in P \mid r_{p,a} \le \tau\},$$
which is the probability for algorithm $a \in A$ that its performance ratio $r_{p,a}$ is within a factor $\tau \in \mathbb{R}$ of the best possible ratio, and it reflects the numerical performance of algorithm $a$ relative to the other test algorithms in $A$. Obviously, algorithms with a large probability $\rho_a(\tau)$ are preferred. Therefore, in a figure plotting $\rho_a(\tau)$ for the test methods, the higher the curve, the better the corresponding algorithm performs.
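A minimal sketch of how such Dolan–Moré profiles can be computed and plotted; the toy cost matrix below is made up purely for illustration and is not taken from Table 1.

```python
import numpy as np
import matplotlib.pyplot as plt

def performance_profile(T, labels, tau_max=10.0):
    """T[i, j]: cost (e.g. iterations) of solver j on problem i; np.inf marks a failure."""
    n_p, n_a = T.shape
    ratios = T / T.min(axis=1, keepdims=True)      # r_{p,a}
    taus = np.linspace(1.0, tau_max, 500)
    for j in range(n_a):
        rho = [(ratios[:, j] <= tau).sum() / n_p for tau in taus]   # rho_a(tau)
        plt.step(taus, rho, where="post", label=labels[j])
    plt.xlabel(r"$\tau$")
    plt.ylabel(r"$\rho_a(\tau)$")
    plt.legend()
    plt.show()

# Toy usage with made-up costs for three solvers on four problems.
costs = np.array([[2., 5., 3.],
                  [4., 4., 6.],
                  [1., 9., 2.],
                  [np.inf, 7., 6.]])
performance_profile(costs, ["Solver A", "Solver B", "Solver C"])
```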
As shown in Figure 1, we observe that, in terms of the number of iterations, Algorithm 1 performs best, followed by the HTTCGP, MPRPA, and PCG methods, while the PDY method performs worst. Figure 2 indicates that Algorithm 1 improves significantly over the other four test methods in terms of the number of function evaluations, since it solves about 78% of the test problems with the fewest function evaluations, while the corresponding percentages for the other four methods are all below 10%. The reason for this significant improvement in NF is that the search direction of Algorithm 1 is generated by minimizing a quadratic approximation model over the whole two-dimensional subspace $\Omega_k = \mathrm{Span}\{\hat d_{k-1}, g_k\}$, so the search direction carries additional parameters associated with $F_k$ and therefore requires fewer function evaluations in Step 2. This is also the advantage of the SMCG methods over other CG methods. We can see from Figure 3 that Algorithm 1 is much faster than the other four test methods.
In summary, the numerical experiments indicate that Algorithm 1 is superior to the other four test methods.

5. Conclusions

In this paper, an efficient SMCG method is presented for solving nonlinear monotone equations with convex constraints. The sufficient descent property of the search direction is analyzed, and the global convergence and convergence rate of the proposed algorithm are established under suitable assumptions. The numerical results confirm the effectiveness of the proposed method.
The SMCG method has shown good numerical performance for solving nonlinear monotone equations with convex constraints. There remains considerable room for research on SMCG methods for such problems, including exploiting suitable quadratic or non-quadratic approximation models to derive new search directions. This will also be the focus of our future research.

Author Contributions

Conceptualization, T.S. and Z.L.; methodology, T.S. and Z.L.; software, T.S. and Z.L.; validation, T.S. and Z.L.; formal analysis, T.S. and Z.L.; investigation, T.S. and Z.L.; resources, T.S. and Z.L.; data curation, T.S. and Z.L.; writing—original draft preparation, T.S.; writing—review and editing, Z.L.; visualization, T.S. and Z.L.; supervision, T.S. and Z.L.; project administration, T.S. and Z.L.; funding acquisition, Z.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (No. 12261019) and the Guizhou Science Foundation (No. QHKJC-ZK[2022]YB084).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article and corresponding link.

Acknowledgments

We would like to thank the Associate Editor and the anonymous referees for their valuable comments and suggestions.

Conflicts of Interest

The authors declare no competing interests.

References

  1. Guo, D.S.; Nie, Z.Y.; Yan, L.C. The application of noise-tolerant ZD design formula to robots’ kinematic control via time-varying nonlinear equations solving. IEEE Trans. Syst. Man Cybern. Syst. 2017, 48, 2188–2197. [Google Scholar] [CrossRef]
  2. Shi, Y.; Zhang, Y. New discrete-time models of zeroing neural network solving systems of time-variant linear and nonlinear inequalities. IEEE Trans. Syst. Man Cybern. Syst. 2017, 50, 565–576. [Google Scholar] [CrossRef]
  3. Dirkse, S.P.; Ferris, M.C. MCPLIB: A collection of nonlinear mixed complementarity problems. Optim. Methods Softw. 1995, 5, 319–345. [Google Scholar] [CrossRef]
  4. Xiao, Y.H.; Wang, Q.Y.; Hu, Q.J. Non-smooth equations based methods for l1-norm problems with applications to compressed sensing. Nonlinear Anal. 2011, 74, 3570–3577. [Google Scholar] [CrossRef]
  5. Yuan, Y.X. Subspace methods for large scale nonlinear equations and nonlinear least squares. Optim. Eng. 2009, 10, 207–218. [Google Scholar] [CrossRef]
  6. Ahmad, F.; Tohidi, E.; Carrasco, J.A. A parameterized multi-step Newton method for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 631–653. [Google Scholar] [CrossRef]
  7. Lukšan, L.; Vlček, J. New quasi-Newton method for solving systems of nonlinear equations. Appl. Math. 2017, 62, 121–134. [Google Scholar] [CrossRef]
  8. Yu, Z. On the global convergence of a Levenberg-Marquardt method for constrained nonlinear equations. JAMC 2004, 16, 183–194. [Google Scholar] [CrossRef]
  9. Zhang, J.L.; Wang, Y. A new trust region method for nonlinear equations. Math. Methods Oper. Res. 2003, 58, 283–298. [Google Scholar] [CrossRef]
  10. Solodov, M.V.; Svaiter, B.F. A globally convergent inexact Newton method for systems of monotone equations. In Reformulation: Nonsmooth, Piecewise Smooth, Semismooth and Smoothing Methods; Fukushima, M., Qi, L., Eds.; Kluwer Academic: Boston, MA, USA, 1998; pp. 355–369. [Google Scholar]
  11. Zheng, Y.; Zheng, B. Two new Dai–Liao-type conjugate gradient methods for unconstrained optimization problems. J. Optim. Theory Appl. 2017, 175, 502–509. [Google Scholar] [CrossRef]
  12. Li, M.; Liu, H.W.; Liu, Z.X. A new family of conjugate gradient methods for unconstrained optimization. J. Appl. Math. Comput. 2018, 58, 219–234. [Google Scholar] [CrossRef]
  13. Xiao, Y.H.; Zhu, H. A conjugate gradient method to solve convex constrained monotone equations with applications in compressive sensing. J. Math. Anal. Appl. 2013, 405, 310–319. [Google Scholar] [CrossRef]
  14. Hager, W.W.; Zhang, H. A new conjugate gradient method with guaranteed descent and an efficient line search. SIAM J. Optim. 2005, 16, 170–192. [Google Scholar] [CrossRef]
  15. Liu, J.K.; Li, S.J. A projection method for convex constrained monotone nonlinear equations with applications. Comput. Math. Appl. 2015, 70, 2442–2453. [Google Scholar] [CrossRef]
  16. Dai, Y.H.; Yuan, Y.X. A nonlinear conjugate gradient with a strong global convergence property. SIAM J. Optim. 1999, 10, 177–182. [Google Scholar] [CrossRef]
  17. Liu, J.K.; Feng, Y.M. A derivative-free iterative method for nonlinear monotone equations with convex constraints. Numer. Algorithms 2019, 82, 245–262. [Google Scholar] [CrossRef]
  18. Gao, P.T.; He, C.J.; Liu, Y. An adaptive family of projection methods for constrained monotone nonlinear equations with applications. Appl. Math. Comput. 2019, 359, 1–16. [Google Scholar] [CrossRef]
  19. Bojari, S.; Eslahchi, M.R. Two families of scaled three-term conjugate gradient methods with sufficient descent property for nonconvex optimization. Numer. Algorithms 2020, 83, 901–933. [Google Scholar] [CrossRef]
  20. Li, Q.; Zheng, B. Scaled three-term derivative-free methods for solving large-scale nonlinear monotone equations. Numer. Algorithms 2021, 87, 1343–1367. [Google Scholar] [CrossRef]
  21. Waziri, M.Y.; Ahmed, K. Two Descent Dai-Yuan Conjugate Gradient Methods for Systems of Monotone Nonlinear Equations. J. Sci. Comput. 2022, 90, 36. [Google Scholar] [CrossRef]
  22. Ibrahim, A.H.; Alshahrani, M.; Al-Homidan, S. Two classes of spectral three-term derivative-free method for solving nonlinear equations with application. Numer. Algorithms 2023. [Google Scholar] [CrossRef]
  23. Yuan, Y.X.; Stoer, J. A subspace study on conjugate gradient algorithms. Z. Angew. Math. Mech. 1995, 75, 69–77. [Google Scholar] [CrossRef]
  24. Dai, Y.H.; Kou, C.X. A Barzilai-Borwein conjugate gradient method. Sci. China Math. 2016, 59, 1511–1524. [Google Scholar] [CrossRef]
  25. Liu, H.W.; Liu, Z.X. An efficient Barzilai–Borwein conjugate gradient method for unconstrained optimization. J. Optim. Theory Appl. 2019, 180, 879–906. [Google Scholar] [CrossRef]
  26. Li, Y.F.; Liu, Z.X.; Liu, H.W. A subspace minimization conjugate gradient method based on conic model for unconstrained optimization. Comput. Appl. Math. 2019, 38, 16. [Google Scholar] [CrossRef]
  27. Zhao, T.; Liu, H.W.; Liu, Z.X. New subspace minimization conjugate gradient methods based on regularization model for unconstrained optimization. Numer. Algorithms 2021, 87, 1501–1534. [Google Scholar] [CrossRef]
  28. Wang, T.; Liu, Z.; Liu, H. A new subspace minimization conjugate gradient method based on tensor model for unconstrained optimization. Int. J. Comput. Math. 2019, 96, 1924–1942. [Google Scholar] [CrossRef]
  29. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA; London, UK, 1970. [Google Scholar]
  30. Yin, J.H.; Jian, J.B.; Jiang, X.Z.; Liu, M.X.; Wang, L.Z. A hybrid three-term conjugate gradient projection method for constrained nonlinear monotone equations with applications. Numer. Algorithms 2021, 88, 389–418. [Google Scholar] [CrossRef]
  31. Ou, Y.G.; Li, J.Y. A new derivative-free SCG-type projection method for nonlinear monotone equations with convex constraints. J. Appl. Math. Comput. 2018, 56, 195–216. [Google Scholar] [CrossRef]
  32. Ma, G.D.; Jin, J.C.; Jian, J.B.; Yin, J.H.; Han, D.L. A modified inertial three-term conjugate gradient projection method for constrained nonlinear equations with applications in compressed sensing. Numer. Algorithms 2023, 92, 1621–1653. [Google Scholar] [CrossRef]
  33. Dolan, E.D.; Moré, J.J. Benchmarking optimization software with performance profiles. Math. Program. 2002, 91, 201–213. [Google Scholar] [CrossRef]
Figure 1. Performance profiles of the five algorithms with respect to the number of iterations (Ni).
Figure 2. Performance profiles of the five algorithms with respect to number of function evaluations (NF).
Figure 3. Performance profiles of the five algorithms with respect to CPU time (T).
Table 1. The numerical results (n = 10,000).
P x 0 AlPDYHTTCGPMPRPAPCG x 0 AlHTTCGPPDYMPRPAPCG
Ni/NFNi/NFNi/NFNi/NFNi/NFNi/NFNi/NFNi/NFNi/NFNi/NF
P10.1 m4\914\313\728\579\231.2 m8\1714\315\1134\6911\27
0.2 m5\1114\313\729\598\201.5 m8\1720\425\1135\719\22
0.5 m6\1318\394\931\6310\252.0 m8\1715\336\1436\7311\27
P20.1 m4\107\236\1912\2510\361.2 m5\118\257\2213\279\32
0.2 m4\107\236\1912\2510\361.5 m6\138\257\2213\279\32
0.5 m3\86\205\169\198\292.0 m6\138\258\2514\2910\35
P30.1 m19\3916\4226\7242\8516\391.2 m17\3527\8149\12637\7518\52
0.2 m19\3919\4931\8342\8516\391.5 m19\4020\5841\10933\6717\46
0.5 m19\3922\5728\8041\8318\482.0 m18\3832\14846\12134\7019\59
P40.1 m22\4624\9750\17277\17019\771.2 m29\6032\12953\18592\20522\89
0.2 m24\5026\10552\17983\18420\811.5 m30\6233\13353\18592\20522\89
0.5 m26\5430\12159\20286\19121\852.0 m31\6434\13754\18995\21223\93
P50.1 m1\31\420\6127\559\241.2 m1\41\523\7130\6111\31
0.2 m1\31\421\6429\599\241.5 m1\41\523\7129\5911\31
0.5 m1\31\422\6730\6110\272.0 m1\41\624\7631\6410\28
P60.1 m5\116\207\2212\259\321.2 m6\137\227\2213\279\32
0.2 m5\116\206\1912\258\291.5 m5\116\207\2213\2710\35
0.5 m3\86\205\169\198\292.0 m6\147\228\2514\2910\34
P70.1 m8\1811\3616\5717\3517\651.2 m6\1310\3118\7012\2511\42
0.2 m8\2010\3315\5416\3320\781.5 m6\1310\3120\7911\2315\59
0.5 m7\1511\3318\6814\2918\692.0 m3\87\2317\698\1715\61
P80.1 m5\117\1920\6128\5710\261.2 m7\1513\3124\7332\6510\26
0.2 m5\117\1921\6429\5910\261.5 m7\1514\3324\7432\6510\26
0.5 m6\136\1623\7031\6310\262.0 m8\1714\3325\7732\6511\29
P90.1 m6\139\2426\8034\6912\311.2 m6\139\2424\7333\6711\29
0.2 m6\139\2426\8034\6912\311.5 m6\139\2424\7332\6511\29
0.5 m6\139\2425\7734\6912\312.0 m6\139\2423\7031\6310\26
P100.1 m51\10540\192197\81613\4068\3111.2 m27\5726\143147\61714\4352\241
0.2 m51\10531\159187\77413\4071\3241.5 m31\6536\180121\51214\4343\201
0.5 m29\6126\131181\75214\4362\2852.0 m34\7135\174132\55716\4943\201
P110.1 m1\51\716\8123\9310\511.2 m1\41\418\9224\971\5
0.2 m1\51\717\8624\9710\511.5 m1\71\318\931\41\3
0.5 m1\51\317\8625\1011\32.0 m1\61\819\1001\42\11
P120.1 m20\4118\7595\30625\5144\1521.2 m21\4315\6073\23328\5743\149
0.2 m20\4115\5995\30624\4944\1521.5 m21\4315\6165\21026\5324\85
0.5 m20\4117\6798\31525\5144\1522.0 m18\3720\7692\29323\4738\131
P130.1 m30\6224\119182\74183\24699\4421.2 m37\7627\136216\880104\333127\641
0.2 m34\7025\124188\76188\26198\4401.5 m37\7728\157226\96385\28479\389
0.5 m34\7126\133191\78696\288101\4582.0 m3\8025\127210\872120\4543\48
P140.1 m54\11131\163101\430129\41614\701.2 m47\9928\15261\269167\53318\94
0.2 m40\8242\260125\530184\58217\851.5 m55\12027\15891\393157\50518\96
0.5 m49\10131\159123\522177\56317\862.0 m52\11534\177138\588147\47322\119
P150.1 m34\7222\17739\24516\8020\1401.2 m21\4720\16548\30615\7616\113
0.2 m31\6623\18535\21816\8018\1261.5 m31\6722\18448\31118\9143\291
0.5 m28\6023\16242\26217\8618\1272.0 m24\5419\15558\36116\8217\121

