Article

A New Construction and Convergence Analysis of Non-Monotonic Iterative Methods for Solving ρ-Demicontractive Fixed Point Problems and Variational Inequalities Involving Pseudomonotone Mapping

by Chainarong Khunpanuk 1, Bancha Panyanak 2,3,* and Nuttapol Pakkaranang 1,*

1 Mathematics and Computing Science Program, Faculty of Science and Technology, Phetchabun Rajabhat University, Phetchabun 67000, Thailand
2 Research Group in Mathematics and Applied Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
3 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(4), 623; https://doi.org/10.3390/math10040623
Submission received: 1 January 2022 / Revised: 14 February 2022 / Accepted: 15 February 2022 / Published: 17 February 2022
(This article belongs to the Special Issue New Advances in Functional Analysis)

Abstract: Two new inertial-type extragradient methods are proposed to find a numerical common solution to the variational inequality problem involving a pseudomonotone and Lipschitz continuous operator, as well as the fixed point problem for a ρ-demicontractive mapping in real Hilbert spaces. These inertial-type iterative methods use self-adaptive step size rules that do not require prior knowledge of the Lipschitz constant. We also show that the proposed methods converge strongly to a solution of the variational inequality and fixed point problems under standard assumptions. Finally, we present several numerical examples to demonstrate the effectiveness of the proposed methods.

1. Introduction

Assume that $Y$ is a nonempty, closed, and convex subset of a real Hilbert space $X$ with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. The main contribution of this study is the convergence analysis of iterative schemes for solving variational inequality and fixed point problems in real Hilbert spaces. The motivation for investigating such a common solution problem is its potential applicability to mathematical models whose constraints can be stated as fixed point problems. This is especially relevant in applications such as signal processing, composite minimization, optimal control, and image restoration; see, for example, [1,2,3,4,5]. Let us take a look at both of the problems highlighted by this research.
Let $\Im : Y \to X$ be an operator. First, we look at the classic variational inequality problem [6,7], which is expressed as follows:
$$\text{Find } r^* \in Y \text{ such that } \langle \Im(r^*), y - r^* \rangle \ge 0, \quad \forall y \in Y. \qquad (1)$$
The solution set of problem (1) is denoted by $VI(Y, \Im)$. The variational inequality problem has been widely applied to real-world models arising in partial differential equations, optimization, optimal control, mechanics, mathematical programming, and finance (see [8,9,10,11,12,13,14]). Problem (1) is thus a significant one in the applied sciences, and many authors have investigated not only the existence and stability of solutions but also iterative methods for computing them.
On the other hand, projection methods are important for determining the numerical solution to variational inequalities. Several authors have proposed various projection methods to solve problem (1) (see [15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32] for details). Most of them compute a projection onto the feasible set $Y$. Korpelevich [15] and Antipin [33] established the extragradient method described below. Their method takes the following form:
$$u_1 \in Y, \quad y_k = P_Y[u_k - \gimel \Im(u_k)], \quad u_{k+1} = P_Y[u_k - \gimel \Im(y_k)], \qquad (2)$$
where $0 < \gimel < \frac{1}{L}$. With this method, each iteration requires two projections onto the feasible set $Y$. Of course, if the feasible set $Y$ has a complicated structure, this can degrade the computational efficiency of the approach. Here we recall two approaches for overcoming this obstacle. The first is the following subgradient extragradient method proposed by Censor et al. [17]. This method is in the following form:
$$u_1 \in Y, \quad y_k = P_Y[u_k - \gimel \Im(u_k)], \quad u_{k+1} = P_{X_k}[u_k - \gimel \Im(y_k)], \qquad (3)$$
where $0 < \gimel < \frac{1}{L}$ and
$$X_k = \{ z \in X : \langle u_k - \gimel \Im(u_k) - y_k, z - y_k \rangle \le 0 \}.$$
Furthermore, Tseng’s extragradient method [19] requires only one projection per iteration. This method is written as follows:
$$u_1 \in Y, \quad y_k = P_Y[u_k - \gimel \Im(u_k)], \quad u_{k+1} = y_k - \gimel [\Im(y_k) - \Im(u_k)], \qquad (4)$$
where $0 < \gimel < \frac{1}{L}$. In terms of computation, method (4) is very efficient because it requires only one projection (i.e., one solution of a minimization subproblem) per iteration. As a result, method (4) is less computationally expensive and performs better in most situations.
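To make these updates concrete, the following is a minimal Python sketch of the classical extragradient method (2) and Tseng's method (4); the box feasible set, the toy affine operator $F(u) = Mu$, and the fixed step $0.9/L$ are our illustrative assumptions, not taken from this paper.

```python
import numpy as np

def proj_box(u, lo=-10.0, hi=10.0):
    """Euclidean projection onto the box Y = [lo, hi]^m."""
    return np.clip(u, lo, hi)

def extragradient(F, proj, u, step, iters=300):
    """Korpelevich's method (2): two projections per iteration."""
    for _ in range(iters):
        y = proj(u - step * F(u))
        u = proj(u - step * F(y))
    return u

def tseng(F, proj, u, step, iters=300):
    """Tseng's method (4): only one projection per iteration."""
    for _ in range(iters):
        y = proj(u - step * F(u))
        u = y - step * (F(y) - F(u))
    return u

# Toy monotone, Lipschitz operator F(u) = M u (its symmetric part is 2I >= 0).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
F = lambda u: M @ u
L = np.linalg.norm(M, 2)                         # Lipschitz constant of F
u0 = np.array([5.0, -3.0])
print(extragradient(F, proj_box, u0, 0.9 / L))   # both tend to the solution (0, 0)
print(tseng(F, proj_box, u0, 0.9 / L))
```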
Let $T : X \to X$ be a mapping. The fixed point problem (FPP) for the mapping $T$ is to find $r^* \in X$ such that
$$T(r^*) = r^*. \qquad (5)$$
The solution set of a fixed point problem is known as the fixed point set of the mapping $T$ and is denoted by $Fix(T)$. Most methods for solving problem (5) are derived from the standard Mann iteration: starting from $u_1 \in X$, construct the sequence $\{u_{k+1}\}$ for all $k \ge 1$ by
$$u_{k+1} = \alpha_k u_k + (1 - \alpha_k) T u_k, \qquad (6)$$
where the sequence $\{\alpha_k\}$ must meet certain requirements in order to achieve weak convergence. Another well-known iterative approach, which is more effective in infinite-dimensional Hilbert spaces for achieving strong convergence, is the Halpern iteration. The iterative sequence is written as follows:
$$u_{k+1} = \alpha_k u_1 + (1 - \alpha_k) T u_k, \qquad (7)$$
where $u_1 \in X$ and the sequence $\alpha_k \in (0, 1)$ is non-summable and slowly diminishing, i.e.,
$$\alpha_k \to 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \alpha_k = +\infty.$$
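As a small illustration (our own, with the toy nonexpansive map $T(u) = u/2$ and the admissible choice $\alpha_k = 1/(k+2)$, neither taken from the paper), the Mann and Halpern schemes read as follows in Python:

```python
import numpy as np

def mann(T, u, alphas):
    """Mann iteration (6): u_{k+1} = a_k u_k + (1 - a_k) T(u_k)."""
    for a in alphas:
        u = a * u + (1 - a) * T(u)
    return u

def halpern(T, u1, alphas):
    """Halpern iteration (7): anchored at u_1, with a_k -> 0 and sum a_k = inf."""
    u = u1
    for a in alphas:
        u = a * u1 + (1 - a) * T(u)
    return u

T = lambda u: 0.5 * u                           # nonexpansive, Fix(T) = {0}
u1 = np.array([4.0, -2.0])
alphas = [1.0 / (k + 2) for k in range(500)]    # non-summable, slowly diminishing
print(mann(T, u1, alphas), halpern(T, u1, alphas))  # both approach 0
```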
Furthermore, it is worth mentioning that, in addition to the Halpern iteration, there is a generalization of it, namely the viscosity method [20], in which the fixed anchor point is replaced by the value of a contraction mapping at the current iterate. Finally, another technique that provides strong convergence is the hybrid steepest descent method proposed in [34].
Tan et al. [35,36] recently introduced a new numerical method, namely the extragradient viscosity method, for solving variational inequalities whose constraint set is the fixed point set of a ρ-demicontractive mapping. These methods were obtained by combining the extragradient methods [15,17] with the Mann-type method [37] and the viscosity-type method [20]. The authors proved that all methods converge strongly when the operator is pseudomonotone and Lipschitz continuous. These methods have the advantage of being numerically computable using optimization tools, as discussed in [35,36].
The primary disadvantage of these methods is that they rely on viscosity and Mann-type steps to obtain strong convergence. Achieving strong convergence is critical for iterative sequences, especially in infinite-dimensional spaces, yet there are only a few strongly convergent techniques that use inertial schemes. Moreover, Mann- and viscosity-type steps may be difficult to tune from an algorithmic perspective, which affects the convergence rate and applicability of the method, and they increase the number of numerical and computational steps, making the scheme more complex.
Hence, a natural question arises:
Is it possible to introduce strongly convergent inertial extragradient methods for solving variational inequalities and fixed point problems with a self-adaptive step size rule without requiring Mann and Viscosity-type methods?
Motivated by the above, as well as the works cited in [35,36], we provide a positive answer to the above question by introducing two strongly convergent extragradient-type methods for solving pseudomonotone variational inequalities and ρ-demicontractive fixed point problems in real Hilbert spaces. Furthermore, we avoid the use of any hybrid schemes, such as Mann-type or viscosity steps, in order to obtain the strong convergence of these methods. The proposed novel methods leverage inertial schemes and enjoy strong convergence.
The paper is organized as follows: Section 2 contains basic results and identities. Section 3 introduces two novel methods and establishes their convergence. Finally, Section 4 provides numerical results illustrating the practical efficacy of the proposed methods.

2. Preliminaries

Let $Y$ be a nonempty, closed, and convex subset of the real Hilbert space $X$. We write $u_k \rightharpoonup u$ and $u_k \to u$ for the weak and strong convergence of $\{u_k\}$ to $u$, respectively. For each $u, y \in X$, the following identities hold:
(1) $\|u + y\|^2 = \|u\|^2 + 2 \langle u, y \rangle + \|y\|^2$;
(2) $\|u + y\|^2 \le \|u\|^2 + 2 \langle y, u + y \rangle$;
(3) $\|b u + (1 - b) y\|^2 = b \|u\|^2 + (1 - b) \|y\|^2 - b (1 - b) \|u - y\|^2$.
The metric projection $P_Y(u)$ of $u \in X$ is defined by
$$P_Y(u) = \arg\min \{ \|u - y\| : y \in Y \}.$$
It is well known that $P_Y$ is nonexpansive and satisfies the following conditions:
(1) $\langle u - P_Y(u), y - P_Y(u) \rangle \le 0$, $\forall y \in Y$;
(2) $\|P_Y(u) - P_Y(y)\|^2 \le \langle P_Y(u) - P_Y(y), u - y \rangle$, $\forall u, y \in X$.
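The two projection properties above are easy to test numerically. Below is a small Python check (our own illustration, not from the paper) on a box, for which the metric projection is simply coordinate-wise clipping:

```python
import numpy as np

def proj_box(u, lo=-1.0, hi=1.0):
    """Metric projection P_Y(u) onto the box Y = [lo, hi]^m."""
    return np.clip(u, lo, hi)

rng = np.random.default_rng(0)
u, v = 3 * rng.normal(size=5), 3 * rng.normal(size=5)
pu, pv = proj_box(u), proj_box(v)

# Property (1): <u - P_Y(u), y - P_Y(u)> <= 0 for every y in Y.
for _ in range(100):
    y = rng.uniform(-1.0, 1.0, size=5)
    assert np.dot(u - pu, y - pu) <= 1e-12

# Property (2): firm nonexpansiveness ||P(u) - P(v)||^2 <= <P(u) - P(v), u - v>.
assert np.dot(pu - pv, pu - pv) <= np.dot(pu - pv, u - v) + 1e-12
print("both projection properties verified on this sample")
```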
Definition 1 ([38]). Suppose that $T : X \to X$ is a nonlinear mapping with $Fix(T) \neq \emptyset$. Then $I - T$ is said to be demiclosed at zero if, for every sequence $\{u_k\}$ in $X$, the following implication holds:
$$u_k \rightharpoonup u \ \text{and} \ (I - T) u_k \to 0 \implies u \in Fix(T).$$
Lemma 1 ([39]). Let $\{p_k\} \subset [0, +\infty)$, $\{q_k\} \subset (0, 1)$, and $\{r_k\} \subset \mathbb{R}$ be three sequences satisfying
$$p_{k+1} \le (1 - q_k) p_k + q_k r_k, \quad \forall k \in \mathbb{N}, \quad \text{with} \quad \sum_{k=1}^{+\infty} q_k = +\infty.$$
If $\limsup_{j \to +\infty} r_{k_j} \le 0$ for every subsequence $\{p_{k_j}\}$ of $\{p_k\}$ satisfying
$$\liminf_{j \to +\infty} (p_{k_j + 1} - p_{k_j}) \ge 0,$$
then $\lim_{k \to +\infty} p_k = 0$.
Definition 2 ([40,41]). For any $u_1, u_2 \in X$ and $p \in Fix(T)$, the mappings $\Im, T : X \to X$ are said to be
(1) L-Lipschitz continuous ($\Im$) if there exists a constant $L > 0$ such that
$$\|\Im(u_1) - \Im(u_2)\| \le L \|u_1 - u_2\|;$$
(2) pseudomonotone ($\Im$) if
$$\langle \Im(u_1), u_2 - u_1 \rangle \ge 0 \implies \langle \Im(u_2), u_1 - u_2 \rangle \le 0;$$
(3) sequentially weakly continuous ($\Im$) if the sequence $\{\Im(u_k)\}$ converges weakly to $\Im(u)$ for any sequence $\{u_k\}$ that converges weakly to $u$;
(4) ρ-demicontractive ($T$) if there exists a constant $0 \le \rho < 1$ such that
$$\|T(u_1) - p\|^2 \le \|u_1 - p\|^2 + \rho \|(I - T)(u_1)\|^2,$$
or, equivalently,
$$\langle T(u_1) - u_1, u_1 - p \rangle \le \frac{\rho - 1}{2} \|u_1 - T(u_1)\|^2.$$
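For instance, $T(u) = -\frac{3}{2} u$ has $Fix(T) = \{0\}$ and satisfies the first inequality in (4) with $\rho = \frac{c - 1}{c + 1} = 0.2$ for $c = \frac{3}{2}$, so it is 0.2-demicontractive although it is not nonexpansive. A quick numerical check (our illustration, not from the paper):

```python
import numpy as np

rho = 0.2
T = lambda u: -1.5 * u            # rho-demicontractive with Fix(T) = {0}
p = np.zeros(3)                   # the fixed point

rng = np.random.default_rng(1)
for _ in range(1000):
    u = 10 * rng.normal(size=3)
    lhs = np.sum((T(u) - p) ** 2)
    rhs = np.sum((u - p) ** 2) + rho * np.sum((u - T(u)) ** 2)
    assert lhs <= rhs + 1e-6      # here equality holds, so rho = 0.2 is tight
print("T(u) = -1.5 u verified 0.2-demicontractive on random samples")
```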
Next, in order to prove the strong convergence theorems, we assume that the following conditions are satisfied:
(ℑ1) The solution set $Fix(T) \cap VI(Y, \Im) \neq \emptyset$;
(ℑ2) The mapping $\Im$ is pseudomonotone, Lipschitz continuous, and sequentially weakly continuous;
(ℑ3) The mapping $T : X \to X$ is ρ-demicontractive and $(I - T)$ is demiclosed at zero.

3. Main Results

In this section, we examine in detail the convergence of two new inertial extragradient methods for solving variational inequality and fixed point problems. Both methods use non-monotone step size criteria.
Lemma 2. The sequence $\{\gimel_k\}$ generated by the step rule (15) of Algorithm 1 is convergent to some $\gimel$ satisfying
$$\min\left\{\frac{\mu}{L}, \gimel_1\right\} \le \gimel \le \gimel_1 + P, \quad \text{where } P = \sum_{k=1}^{+\infty} \varkappa_k.$$
Proof. Since the mapping $\Im$ is Lipschitz continuous with constant $L > 0$, whenever $\langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle > 0$ we have
$$\frac{\mu (\|t_k - y_k\|^2 + \|z_k - y_k\|^2)}{2 \langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle} \ge \frac{2 \mu \|t_k - y_k\| \|z_k - y_k\|}{2 \|\Im(t_k) - \Im(y_k)\| \|z_k - y_k\|} \ge \frac{2 \mu \|t_k - y_k\| \|z_k - y_k\|}{2 L \|t_k - y_k\| \|z_k - y_k\|} \ge \frac{\mu}{L}.$$
Using mathematical induction on the definition of $\gimel_{k+1}$, we have
$$\min\left\{\frac{\mu}{L}, \gimel_1\right\} \le \gimel_k \le \gimel_1 + P.$$
Let $[\gimel_{k+1} - \gimel_k]^+ = \max\{0, \gimel_{k+1} - \gimel_k\}$ and $[\gimel_{k+1} - \gimel_k]^- = \max\{0, -(\gimel_{k+1} - \gimel_k)\}$. From the definition of $\{\gimel_k\}$, we have
$$\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^+ = \sum_{k=1}^{+\infty} \max\{0, \gimel_{k+1} - \gimel_k\} \le P < +\infty.$$
That is, the series $\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^+$ is convergent. Next, we prove the convergence of $\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^-$. Suppose, on the contrary, that $\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^- = +\infty$. Since $\gimel_{k+1} - \gimel_k = [\gimel_{k+1} - \gimel_k]^+ - [\gimel_{k+1} - \gimel_k]^-$, we obtain
$$\gimel_{k+1} - \gimel_1 = \sum_{i=0}^{k} (\gimel_{i+1} - \gimel_i) = \sum_{i=0}^{k} [\gimel_{i+1} - \gimel_i]^+ - \sum_{i=0}^{k} [\gimel_{i+1} - \gimel_i]^-. \qquad (10)$$
Letting $k \to +\infty$ in (10), we would obtain $\gimel_k \to -\infty$, a contradiction. Due to the convergence of the series $\sum_{i=0}^{k} [\gimel_{i+1} - \gimel_i]^+$ and $\sum_{i=0}^{k} [\gimel_{i+1} - \gimel_i]^-$, taking $k \to +\infty$ in (10), we obtain $\lim_{k \to +\infty} \gimel_k = \gimel$. This completes the proof of the lemma. □
Lemma 3. The step size sequence $\{\gimel_k\}$ generated by the step rule of Algorithm 2 is convergent to some $\gimel$ satisfying
$$\min\left\{\frac{\mu}{L}, \gimel_1\right\} \le \gimel \le \gimel_1 + P, \quad \text{where } P = \sum_{k=1}^{+\infty} \varkappa_k.$$
Proof. Since $\Im$ is Lipschitz continuous with constant $L > 0$, whenever $\Im(t_k) \neq \Im(y_k)$ we have
$$\frac{\mu \|t_k - y_k\|}{\|\Im(t_k) - \Im(y_k)\|} \ge \frac{\mu \|t_k - y_k\|}{L \|t_k - y_k\|} \ge \frac{\mu}{L}.$$
Using mathematical induction on the definition of $\gimel_{k+1}$, we have
$$\min\left\{\frac{\mu}{L}, \gimel_1\right\} \le \gimel_k \le \gimel_1 + P.$$
The remainder of the argument, based on the series $\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^+$ and $\sum_{k=1}^{+\infty} [\gimel_{k+1} - \gimel_k]^-$, is identical to the proof of Lemma 2, and we conclude that $\lim_{k \to +\infty} \gimel_k = \gimel$. This completes the proof of the lemma. □
Lemma 4. Let $\Im : X \to X$ be a mapping satisfying conditions (ℑ1)–(ℑ2), and let $\{u_k\}$ be a sequence generated by Algorithm 1. Then, for each $r^* \in VI(Y, \Im)$, we have
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|t_k - y_k\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|z_k - y_k\|^2.$$
Algorithm 1 Inertial Subgradient Extragradient Method with Non-Monotone Step Size Rule.
Step 0: Take $u_0, u_1 \in Y$, $\theta \in (0, 1)$, $\mu \in (0, 1)$, $\gimel_1 > 0$. Moreover, select a non-negative real sequence $\{\varkappa_k\}$ such that $\sum_{k=1}^{+\infty} \varkappa_k < +\infty$ and a sequence $\{\beta_k\} \subset (0, 1)$ satisfying
$$\lim_{k \to +\infty} \beta_k = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \beta_k = +\infty.$$
Step 1: Compute
$$t_k = u_k + \theta_k (u_k - u_{k-1}) - \beta_k [u_k + \theta_k (u_k - u_{k-1})],$$
where $\theta_k$ is taken as follows:
$$0 \le \theta_k \le \hat{\theta}_k \quad \text{with} \quad \hat{\theta}_k = \begin{cases} \min\left\{\frac{\theta}{2}, \frac{\epsilon_k}{\|u_k - u_{k-1}\|}\right\} & \text{if } u_k \neq u_{k-1}, \\ \frac{\theta}{2} & \text{otherwise}, \end{cases} \qquad (14)$$
and $\{\epsilon_k\}$ is a positive sequence with $\epsilon_k = o(\beta_k)$, i.e., $\lim_{k \to +\infty} \frac{\epsilon_k}{\beta_k} = 0$.
Step 2: Compute
$$y_k = P_Y(t_k - \gimel_k \Im(t_k)).$$
If $t_k = y_k$, then STOP. Else, move to Step 3.
Step 3: First, construct the half-space $X_k = \{ z \in X : \langle t_k - \gimel_k \Im(t_k) - y_k, z - y_k \rangle \le 0 \}$ and compute
$$z_k = P_{X_k}(t_k - \gimel_k \Im(y_k)).$$
Step 4: Compute $u_{k+1} = (1 - \alpha_k) z_k + \alpha_k T(z_k)$.
Step 5: Compute
$$\gimel_{k+1} = \begin{cases} \min\left\{\gimel_k + \varkappa_k, \frac{\mu \|t_k - y_k\|^2 + \mu \|z_k - y_k\|^2}{2 \langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle}\right\} & \text{if } \langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle > 0, \\ \gimel_k + \varkappa_k & \text{otherwise}. \end{cases} \qquad (15)$$
Set $k := k + 1$ and go back to Step 1.
Proof. First, we compute
$$\begin{aligned} \|z_k - r^*\|^2 &= \|P_{X_k}[t_k - \gimel_k \Im(y_k)] - r^*\|^2 \\ &= \|P_{X_k}[t_k - \gimel_k \Im(y_k)] + [t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)] - r^*\|^2 \\ &= \|[t_k - \gimel_k \Im(y_k)] - r^*\|^2 + \|P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)]\|^2 \\ &\quad + 2 \langle P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)], [t_k - \gimel_k \Im(y_k)] - r^* \rangle. \end{aligned} \qquad (16)$$
Since $r^* \in VI(Y, \Im) \subset Y \subset X_k$, property (1) of the metric projection gives
$$\|P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)]\|^2 + \langle P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)], [t_k - \gimel_k \Im(y_k)] - r^* \rangle$$
$$= \langle [t_k - \gimel_k \Im(y_k)] - P_{X_k}[t_k - \gimel_k \Im(y_k)], r^* - P_{X_k}[t_k - \gimel_k \Im(y_k)] \rangle \le 0.$$
It follows that
$$\langle P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)], [t_k - \gimel_k \Im(y_k)] - r^* \rangle \le -\|P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)]\|^2. \qquad (18)$$
Combining (16) and (18), we obtain
$$\|z_k - r^*\|^2 \le \|t_k - \gimel_k \Im(y_k) - r^*\|^2 - \|P_{X_k}[t_k - \gimel_k \Im(y_k)] - [t_k - \gimel_k \Im(y_k)]\|^2 \le \|t_k - r^*\|^2 - \|t_k - z_k\|^2 + 2 \gimel_k \langle \Im(y_k), r^* - z_k \rangle. \qquad (19)$$
Since $r^* \in VI(Y, \Im)$ and $\Im$ is pseudomonotone on $Y$, we have
$$\langle \Im(r^*), y - r^* \rangle \ge 0 \implies \langle \Im(y), y - r^* \rangle \ge 0, \quad \forall y \in Y.$$
By letting $y = y_k \in Y$, we have $\langle \Im(y_k), y_k - r^* \rangle \ge 0$. Thus,
$$\langle \Im(y_k), r^* - z_k \rangle = \langle \Im(y_k), r^* - y_k \rangle + \langle \Im(y_k), y_k - z_k \rangle \le \langle \Im(y_k), y_k - z_k \rangle. \qquad (20)$$
Combining (19) and (20), we obtain
$$\begin{aligned} \|z_k - r^*\|^2 &\le \|t_k - r^*\|^2 - \|t_k - z_k\|^2 + 2 \gimel_k \langle \Im(y_k), y_k - z_k \rangle \\ &= \|t_k - r^*\|^2 - \|(t_k - y_k) + (y_k - z_k)\|^2 + 2 \gimel_k \langle \Im(y_k), y_k - z_k \rangle \\ &= \|t_k - r^*\|^2 - \|t_k - y_k\|^2 - \|y_k - z_k\|^2 + 2 \langle t_k - \gimel_k \Im(y_k) - y_k, z_k - y_k \rangle. \end{aligned} \qquad (21)$$
Since $z_k = P_{X_k}[t_k - \gimel_k \Im(y_k)] \in X_k$, the definitions of $X_k$ and $\gimel_{k+1}$ give
$$\begin{aligned} 2 \langle t_k - \gimel_k \Im(y_k) - y_k, z_k - y_k \rangle &= 2 \langle t_k - \gimel_k \Im(t_k) - y_k, z_k - y_k \rangle + 2 \gimel_k \langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle \\ &\le \frac{2 \gimel_k}{\gimel_{k+1}} \gimel_{k+1} \langle \Im(t_k) - \Im(y_k), z_k - y_k \rangle \\ &\le \frac{\mu \gimel_k}{\gimel_{k+1}} \|t_k - y_k\|^2 + \frac{\mu \gimel_k}{\gimel_{k+1}} \|z_k - y_k\|^2. \end{aligned} \qquad (22)$$
From (21) and (22), we obtain
$$\begin{aligned} \|z_k - r^*\|^2 &\le \|t_k - r^*\|^2 - \|t_k - y_k\|^2 - \|y_k - z_k\|^2 + \frac{\mu \gimel_k}{\gimel_{k+1}} \left( \|t_k - y_k\|^2 + \|z_k - y_k\|^2 \right) \\ &= \|t_k - r^*\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|t_k - y_k\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|z_k - y_k\|^2. \end{aligned} \qquad (23)$$
□
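The following Python sketch summarizes one way to implement Algorithm 1 for a box-constrained problem. The concrete choices $T(z) = z/2$, $\varkappa_k = 1/k^2$, and the parameter sequences mirror the ones used in Section 4, but they are illustrative assumptions rather than part of the algorithm itself.

```python
import numpy as np

def alg1(F, proj_Y, u0, u1, lam=0.55, theta=0.45, mu=0.44, iters=1000, tol=1e-10):
    """Sketch of Algorithm 1 with T(z) = z/2 and kappa_k = 1/k^2 (illustrative)."""
    T = lambda z: 0.5 * z
    u_prev, u = np.asarray(u0, float), np.asarray(u1, float)
    for k in range(1, iters + 1):
        beta = 1.0 / (2 * k + 4)                     # beta_k
        eps = 100.0 / (1 + k) ** 2                   # eps_k = o(beta_k)
        alpha = k / (2.0 * k + 1)                    # alpha_k in (0, 1 - rho)
        d = np.linalg.norm(u - u_prev)
        th = theta / 2 if d == 0 else min(theta / 2, eps / d)   # theta_k
        w = u + th * (u - u_prev)
        t = w - beta * w                             # Step 1
        y = proj_Y(t - lam * F(t))                   # Step 2
        if np.linalg.norm(t - y) <= tol:
            break
        # Step 3: P_{X_k} onto the half-space {z : <a, z - y> <= 0} in closed form.
        v, a = t - lam * F(y), t - lam * F(t) - y
        aa = np.dot(a, a)
        z = v - max(np.dot(a, v - y), 0.0) / aa * a if aa > 0 else v
        u_prev, u = u, (1 - alpha) * z + alpha * T(z)  # Step 4
        # Step 5: non-monotone step size update (kappa_k = 1/k^2 is summable).
        den = 2.0 * np.dot(F(t) - F(y), z - y)
        cap = mu * (np.dot(t - y, t - y) + np.dot(z - y, z - y)) / den if den > 0 else np.inf
        lam = min(lam + 1.0 / k ** 2, cap)
    return u
```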
Lemma 5. Let $\Im : X \to X$ satisfy conditions (ℑ1)–(ℑ2), and let $\{u_k\}$ be a sequence generated by Algorithm 2. Then, for each $r^* \in VI(Y, \Im)$, we have
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2 - \left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) \|t_k - y_k\|^2.$$
Algorithm 2 Inertial Tseng’s Extragradient Method with Non-Monotone Step Size Rule.
Step 0: Take $u_0, u_1 \in Y$, $\theta \in (0, 1)$, $\mu \in (0, 1)$, $\gimel_1 > 0$. Moreover, select a non-negative real sequence $\{\varkappa_k\}$ such that $\sum_{k=1}^{+\infty} \varkappa_k < +\infty$ and a sequence $\{\beta_k\} \subset (0, 1)$ satisfying
$$\lim_{k \to +\infty} \beta_k = 0 \quad \text{and} \quad \sum_{k=1}^{+\infty} \beta_k = +\infty.$$
Step 1: Compute
$$t_k = u_k + \theta_k (u_k - u_{k-1}) - \beta_k [u_k + \theta_k (u_k - u_{k-1})],$$
where $\theta_k$ is taken as follows:
$$0 \le \theta_k \le \hat{\theta}_k \quad \text{with} \quad \hat{\theta}_k = \begin{cases} \min\left\{\frac{\theta}{2}, \frac{\epsilon_k}{\|u_k - u_{k-1}\|}\right\} & \text{if } u_k \neq u_{k-1}, \\ \frac{\theta}{2} & \text{otherwise}, \end{cases}$$
and $\{\epsilon_k\}$ is a positive sequence with $\epsilon_k = o(\beta_k)$, i.e., $\lim_{k \to +\infty} \frac{\epsilon_k}{\beta_k} = 0$.
Step 2: Compute
$$y_k = P_Y(t_k - \gimel_k \Im(t_k)).$$
If $t_k = y_k$, then STOP. Otherwise, go to Step 3.
Step 3: Compute
$$z_k = y_k + \gimel_k [\Im(t_k) - \Im(y_k)].$$
Step 4: Compute
$$u_{k+1} = (1 - \alpha_k) z_k + \alpha_k T(z_k).$$
Step 5: Compute
$$\gimel_{k+1} = \begin{cases} \min\left\{\gimel_k + \varkappa_k, \frac{\mu \|t_k - y_k\|}{\|\Im(t_k) - \Im(y_k)\|}\right\} & \text{if } \Im(t_k) \neq \Im(y_k), \\ \gimel_k + \varkappa_k & \text{otherwise}. \end{cases}$$
Set $k := k + 1$ and move back to Step 1.
Proof. From $r^* \in VI(Y, \Im)$ and the definition of $z_k$, we may write
$$\begin{aligned} \|z_k - r^*\|^2 &= \|y_k + \gimel_k [\Im(t_k) - \Im(y_k)] - r^*\|^2 \\ &= \|y_k - r^*\|^2 + \gimel_k^2 \|\Im(t_k) - \Im(y_k)\|^2 + 2 \gimel_k \langle y_k - r^*, \Im(t_k) - \Im(y_k) \rangle \\ &= \|y_k - t_k\|^2 + \|t_k - r^*\|^2 + 2 \langle y_k - t_k, t_k - r^* \rangle + \gimel_k^2 \|\Im(t_k) - \Im(y_k)\|^2 + 2 \gimel_k \langle y_k - r^*, \Im(t_k) - \Im(y_k) \rangle \\ &= \|t_k - r^*\|^2 - \|y_k - t_k\|^2 + 2 \langle y_k - t_k, y_k - r^* \rangle + \gimel_k^2 \|\Im(t_k) - \Im(y_k)\|^2 + 2 \gimel_k \langle y_k - r^*, \Im(t_k) - \Im(y_k) \rangle. \end{aligned} \qquad (26)$$
Due to $y_k = P_Y[t_k - \gimel_k \Im(t_k)]$, we have
$$\langle t_k - \gimel_k \Im(t_k) - y_k, y - y_k \rangle \le 0, \quad \forall y \in Y.$$
Taking $y = r^* \in VI(Y, \Im)$, we may write
$$\langle t_k - y_k, r^* - y_k \rangle \le \gimel_k \langle \Im(t_k), r^* - y_k \rangle. \qquad (28)$$
From (26) and (28), we obtain
$$\begin{aligned} \|z_k - r^*\|^2 &\le \|t_k - r^*\|^2 - \|t_k - y_k\|^2 + 2 \gimel_k \langle \Im(t_k), r^* - y_k \rangle + \gimel_k^2 \|\Im(t_k) - \Im(y_k)\|^2 - 2 \gimel_k \langle \Im(t_k) - \Im(y_k), r^* - y_k \rangle \\ &= \|t_k - r^*\|^2 - \|t_k - y_k\|^2 + \gimel_k^2 \|\Im(t_k) - \Im(y_k)\|^2 - 2 \gimel_k \langle \Im(y_k), y_k - r^* \rangle. \end{aligned} \qquad (29)$$
Since $\Im$ is pseudomonotone on $Y$ and $r^* \in VI(Y, \Im)$, we obtain
$$\langle \Im(r^*), y - r^* \rangle \ge 0 \implies \langle \Im(y), y - r^* \rangle \ge 0, \quad \forall y \in Y.$$
Substituting $y = y_k \in Y$, we have
$$\langle \Im(y_k), y_k - r^* \rangle \ge 0. \qquad (30)$$
From (29), (30), and the definition of $\gimel_{k+1}$, which yields $\|\Im(t_k) - \Im(y_k)\| \le \frac{\mu}{\gimel_{k+1}} \|t_k - y_k\|$, we obtain
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2 - \|t_k - y_k\|^2 + \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2} \|t_k - y_k\|^2 = \|t_k - r^*\|^2 - \left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) \|t_k - y_k\|^2.$$
□
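Analogously, a compact Python sketch of Algorithm 2 (again with the illustrative choices $T(z) = z/2$ and $\varkappa_k = 1/k^2$, which are ours) differs from the Algorithm 1 sketch only in Steps 3 and 5, where the second projection disappears:

```python
import numpy as np

def alg2(F, proj_Y, u0, u1, lam=0.55, theta=0.45, mu=0.44, iters=1000, tol=1e-10):
    """Sketch of Algorithm 2 with T(z) = z/2 and kappa_k = 1/k^2 (illustrative)."""
    T = lambda z: 0.5 * z
    u_prev, u = np.asarray(u0, float), np.asarray(u1, float)
    for k in range(1, iters + 1):
        beta = 1.0 / (2 * k + 4)
        eps = 100.0 / (1 + k) ** 2
        alpha = k / (2.0 * k + 1)
        d = np.linalg.norm(u - u_prev)
        th = theta / 2 if d == 0 else min(theta / 2, eps / d)
        w = u + th * (u - u_prev)
        t = w - beta * w                               # Step 1
        y = proj_Y(t - lam * F(t))                     # Step 2
        if np.linalg.norm(t - y) <= tol:
            break
        z = y + lam * (F(t) - F(y))                    # Step 3: no second projection
        u_prev, u = u, (1 - alpha) * z + alpha * T(z)  # Step 4
        diff = np.linalg.norm(F(t) - F(y))             # Step 5: non-monotone step size
        cap = mu * np.linalg.norm(t - y) / diff if diff > 0 else np.inf
        lam = min(lam + 1.0 / k ** 2, cap)
    return u
```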
Theorem 1. Let $\Im : X \to X$ be an operator satisfying conditions (ℑ1)–(ℑ3). Then the sequence $\{u_k\}$ generated by Algorithm 1 converges strongly to $r^* \in VI(Y, \Im) \cap Fix(T)$, where $r^* = P_{VI(Y, \Im) \cap Fix(T)}(0)$.
Proof. Claim 1: The sequence $\{u_k\}$ is bounded. Indeed, since $u_{k+1} = (1 - \alpha_k) z_k + \alpha_k T(z_k)$ and $T$ is ρ-demicontractive, we obtain
$$\begin{aligned} \|u_{k+1} - r^*\|^2 &= \|(1 - \alpha_k) z_k + \alpha_k T(z_k) - r^*\|^2 \\ &= \|z_k - r^*\|^2 + 2 \alpha_k \langle z_k - r^*, T(z_k) - z_k \rangle + \alpha_k^2 \|T(z_k) - z_k\|^2 \\ &\le \|z_k - r^*\|^2 + \alpha_k (\rho - 1) \|T(z_k) - z_k\|^2 + \alpha_k^2 \|T(z_k) - z_k\|^2 \\ &= \|z_k - r^*\|^2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2. \end{aligned} \qquad (32)$$
Due to the definition of the sequence $\{t_k\}$, we can write
$$\begin{aligned} \|t_k - r^*\| &= \|u_k + \theta_k (u_k - u_{k-1}) - \beta_k u_k - \theta_k \beta_k (u_k - u_{k-1}) - r^*\| \\ &= \|(1 - \beta_k)(u_k - r^*) + (1 - \beta_k) \theta_k (u_k - u_{k-1}) - \beta_k r^*\| \\ &\le (1 - \beta_k) \|u_k - r^*\| + (1 - \beta_k) \theta_k \|u_k - u_{k-1}\| + \beta_k \|r^*\| \\ &\le (1 - \beta_k) \|u_k - r^*\| + \beta_k K_1, \end{aligned} \qquad (34)$$
for some $K_1 > 0$ such that
$$(1 - \beta_k) \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + \|r^*\| \le K_1.$$
Such a $K_1$ exists because, by the choice of $\theta_k$ in (14),
$$\lim_{k \to +\infty} \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \le \lim_{k \to +\infty} \frac{\epsilon_k}{\beta_k} = 0.$$
By Lemma 2, the step size sequence $\{\gimel_k\}$ converges, so there exists a fixed number $\vartheta \in (0, 1 - \mu)$ such that
$$\lim_{k \to +\infty} \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) = 1 - \mu > \vartheta > 0.$$
As a result, there exists a finite natural number $N_1 \in \mathbb{N}$ such that
$$1 - \frac{\mu \gimel_k}{\gimel_{k+1}} > \vartheta > 0, \quad \forall k \ge N_1.$$
By Lemma 4, we may therefore write
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2, \quad \forall k \ge N_1. \qquad (36)$$
Since $\{\alpha_k\} \subset (a, 1 - \rho)$, expressions (32), (34), and (36) give, for all $k \ge N_1$,
$$\|u_{k+1} - r^*\| \le \|z_k - r^*\| \le \|t_k - r^*\| \le (1 - \beta_k) \|u_k - r^*\| + \beta_k K_1 \le \max\{\|u_k - r^*\|, K_1\} \le \cdots \le \max\{\|u_{N_1} - r^*\|, K_1\}.$$
Therefore, we can conclude that the sequence $\{u_k\}$ is bounded.
Claim 2: For some $K_2 > 0$,
$$\left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|t_k - y_k\|^2 + \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|z_k - y_k\|^2 + \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2 \le \|u_k - r^*\|^2 - \|u_{k+1} - r^*\|^2 + \beta_k K_2.$$
Indeed, it follows from the definition of $\{u_{k+1}\}$, exactly as in (32), that
$$\|u_{k+1} - r^*\|^2 \le \|z_k - r^*\|^2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2. \qquad (40)$$
Using expression (23), we have
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|t_k - y_k\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|z_k - y_k\|^2. \qquad (41)$$
Moreover, it follows from expression (34) that
$$\begin{aligned} \|t_k - r^*\|^2 &\le (1 - \beta_k)^2 \|u_k - r^*\|^2 + \beta_k^2 K_1^2 + 2 K_1 \beta_k (1 - \beta_k) \|u_k - r^*\| \\ &\le \|u_k - r^*\|^2 + \beta_k \left[ \beta_k K_1^2 + 2 K_1 (1 - \beta_k) \|u_k - r^*\| \right] \\ &\le \|u_k - r^*\|^2 + \beta_k K_2, \end{aligned} \qquad (42)$$
for some $K_2 > 0$. Combining expressions (40)–(42), we obtain
$$\|u_{k+1} - r^*\|^2 \le \|u_k - r^*\|^2 + \beta_k K_2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|t_k - y_k\|^2 - \left(1 - \frac{\mu \gimel_k}{\gimel_{k+1}}\right) \|z_k - y_k\|^2.$$
Claim 3: From the definition of $\{t_k\}$, we can write
$$\begin{aligned} \|t_k - r^*\|^2 &= \|u_k + \theta_k (u_k - u_{k-1}) - \beta_k u_k - \theta_k \beta_k (u_k - u_{k-1}) - r^*\|^2 \\ &= \|(1 - \beta_k)(u_k - r^*) + (1 - \beta_k) \theta_k (u_k - u_{k-1}) - \beta_k r^*\|^2 \\ &\le \|(1 - \beta_k)(u_k - r^*) + (1 - \beta_k) \theta_k (u_k - u_{k-1})\|^2 + 2 \beta_k \langle -r^*, t_k - r^* \rangle \\ &\le (1 - \beta_k)^2 \|u_k - r^*\|^2 + (1 - \beta_k)^2 \theta_k^2 \|u_k - u_{k-1}\|^2 + 2 \theta_k (1 - \beta_k)^2 \|u_k - r^*\| \|u_k - u_{k-1}\| \\ &\quad + 2 \beta_k \langle r^*, r^* - t_k \rangle \\ &\le (1 - \beta_k) \|u_k - r^*\|^2 + \theta_k^2 \|u_k - u_{k-1}\|^2 + 2 \theta_k (1 - \beta_k) \|u_k - r^*\| \|u_k - u_{k-1}\| \\ &\quad + 2 \beta_k \|r^*\| \|t_k - u_{k+1}\| + 2 \beta_k \langle r^*, r^* - u_{k+1} \rangle \\ &= (1 - \beta_k) \|u_k - r^*\|^2 + \beta_k \Big[ \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \\ &\quad + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle \Big]. \end{aligned} \qquad (44)$$
Combining expressions (36) and (44), we obtain
$$\begin{aligned} \|u_{k+1} - r^*\|^2 &\le (1 - \beta_k) \|u_k - r^*\|^2 + \beta_k \Big[ \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \\ &\quad + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle \Big]. \end{aligned}$$
Claim 4: The sequence $\{\|u_k - r^*\|^2\}$ converges to zero. Set
$$p_k := \|u_k - r^*\|^2$$
and
$$r_k := \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle.$$
Then Claim 3 can be rewritten as follows:
$$p_{k+1} \le (1 - \beta_k) p_k + \beta_k r_k.$$
Indeed, by Lemma 1, it suffices to show that $\limsup_{j \to +\infty} r_{k_j} \le 0$ for every subsequence $\{p_{k_j}\}$ of $\{p_k\}$ satisfying $\liminf_{j \to +\infty} (p_{k_j + 1} - p_{k_j}) \ge 0$. Equivalently, we need to show that
$$\limsup_{j \to +\infty} \langle r^*, r^* - u_{k_j + 1} \rangle \le 0 \quad \text{and} \quad \limsup_{j \to +\infty} \|t_{k_j} - u_{k_j + 1}\| \le 0$$
for every subsequence $\{\|u_{k_j} - r^*\|\}$ of $\{\|u_k - r^*\|\}$ satisfying
$$\liminf_{j \to +\infty} \left( \|u_{k_j + 1} - r^*\| - \|u_{k_j} - r^*\| \right) \ge 0.$$
Suppose that $\{\|u_{k_j} - r^*\|\}$ is such a subsequence. Then
$$\liminf_{j \to +\infty} \left( \|u_{k_j + 1} - r^*\|^2 - \|u_{k_j} - r^*\|^2 \right) = \liminf_{j \to +\infty} \left[ \left( \|u_{k_j + 1} - r^*\| - \|u_{k_j} - r^*\| \right) \left( \|u_{k_j + 1} - r^*\| + \|u_{k_j} - r^*\| \right) \right] \ge 0.$$
It follows from Claim 2 that
$$\begin{aligned} \limsup_{j \to +\infty} &\left[ \left(1 - \frac{\mu \gimel_{k_j}}{\gimel_{k_j + 1}}\right) \|t_{k_j} - y_{k_j}\|^2 + \left(1 - \frac{\mu \gimel_{k_j}}{\gimel_{k_j + 1}}\right) \|z_{k_j} - y_{k_j}\|^2 + \alpha_{k_j} (1 - \rho - \alpha_{k_j}) \|T(z_{k_j}) - z_{k_j}\|^2 \right] \\ &\le \limsup_{j \to +\infty} \left[ \|u_{k_j} - r^*\|^2 - \|u_{k_j + 1} - r^*\|^2 \right] + \limsup_{j \to +\infty} \beta_{k_j} K_2 = -\liminf_{j \to +\infty} \left[ \|u_{k_j + 1} - r^*\|^2 - \|u_{k_j} - r^*\|^2 \right] \le 0. \end{aligned}$$
The above relation implies that
$$\lim_{j \to +\infty} \|t_{k_j} - y_{k_j}\| = 0, \quad \lim_{j \to +\infty} \|z_{k_j} - y_{k_j}\| = 0, \quad \lim_{j \to +\infty} \|T(z_{k_j}) - z_{k_j}\| = 0.$$
Therefore, we obtain
$$\lim_{j \to +\infty} \|z_{k_j} - t_{k_j}\| = 0. \qquad (48)$$
Now, we compute
$$\begin{aligned} \|t_{k_j} - u_{k_j}\| &= \|u_{k_j} + \theta_{k_j} (u_{k_j} - u_{k_j - 1}) - \beta_{k_j} [u_{k_j} + \theta_{k_j} (u_{k_j} - u_{k_j - 1})] - u_{k_j}\| \\ &\le \theta_{k_j} \|u_{k_j} - u_{k_j - 1}\| + \beta_{k_j} \|u_{k_j}\| + \theta_{k_j} \beta_{k_j} \|u_{k_j} - u_{k_j - 1}\| \\ &= \beta_{k_j} \frac{\theta_{k_j}}{\beta_{k_j}} \|u_{k_j} - u_{k_j - 1}\| + \beta_{k_j} \|u_{k_j}\| + \beta_{k_j}^2 \frac{\theta_{k_j}}{\beta_{k_j}} \|u_{k_j} - u_{k_j - 1}\| \longrightarrow 0. \end{aligned} \qquad (50)$$
This, together with $\lim_{j \to +\infty} \|z_{k_j} - t_{k_j}\| = 0$, yields
$$\lim_{j \to +\infty} \|z_{k_j} - u_{k_j}\| = 0. \qquad (51)$$
From $u_{k_j + 1} = (1 - \alpha_{k_j}) z_{k_j} + \alpha_{k_j} T(z_{k_j})$, one sees that
$$\|u_{k_j + 1} - z_{k_j}\| = \alpha_{k_j} \|T(z_{k_j}) - z_{k_j}\| \le (1 - \rho) \|T(z_{k_j}) - z_{k_j}\|.$$
Thus, we obtain
$$\lim_{j \to +\infty} \|u_{k_j + 1} - z_{k_j}\| = 0.$$
The above expressions imply that
$$\lim_{j \to +\infty} \|u_{k_j} - u_{k_j + 1}\| \le \lim_{j \to +\infty} \|u_{k_j} - z_{k_j}\| + \lim_{j \to +\infty} \|z_{k_j} - u_{k_j + 1}\| = 0,$$
and
$$\lim_{j \to +\infty} \|t_{k_j} - u_{k_j + 1}\| \le \lim_{j \to +\infty} \|t_{k_j} - z_{k_j}\| + \lim_{j \to +\infty} \|z_{k_j} - u_{k_j + 1}\| = 0.$$
Since the sequence $\{u_{k_j}\}$ is bounded, without loss of generality we may assume that $\{u_{k_j}\}$ converges weakly to some $\hat{u} \in X$. By (50), $\{t_{k_j}\}$ also converges weakly to $\hat{u}$, and since $\lim_{j \to +\infty} \|t_{k_j} - y_{k_j}\| = 0$, the sequence $\{y_{k_j}\}$ converges weakly to $\hat{u}$ as well. Recall also that $\lim_{k \to +\infty} \gimel_k = \gimel > 0$. Next, we prove that $\hat{u} \in VI(Y, \Im)$. Indeed,
$$y_{k_j} = P_Y[t_{k_j} - \gimel_{k_j} \Im(t_{k_j})]$$
is equivalent to
$$\langle t_{k_j} - \gimel_{k_j} \Im(t_{k_j}) - y_{k_j}, y - y_{k_j} \rangle \le 0, \quad \forall y \in Y.$$
As a result of the above inequality, we have
$$\langle t_{k_j} - y_{k_j}, y - y_{k_j} \rangle \le \gimel_{k_j} \langle \Im(t_{k_j}), y - y_{k_j} \rangle, \quad \forall y \in Y.$$
Consequently, we obtain
$$\frac{1}{\gimel_{k_j}} \langle t_{k_j} - y_{k_j}, y - y_{k_j} \rangle + \langle \Im(t_{k_j}), y_{k_j} - t_{k_j} \rangle \le \langle \Im(t_{k_j}), y - t_{k_j} \rangle, \quad \forall y \in Y. \qquad (58)$$
Since $\gimel_{k_j} \ge \min\{\frac{\mu}{L}, \gimel_1\} > 0$ and $\{t_{k_j}\}$, $\{y_{k_j}\}$ are bounded, using $\lim_{j \to +\infty} \|t_{k_j} - y_{k_j}\| = 0$ and letting $j \to +\infty$ in (58), we obtain
$$\liminf_{j \to +\infty} \langle \Im(t_{k_j}), y - t_{k_j} \rangle \ge 0, \quad \forall y \in Y. \qquad (60)$$
Additionally, it follows that
$$\langle \Im(y_{k_j}), y - y_{k_j} \rangle = \langle \Im(y_{k_j}) - \Im(t_{k_j}), y - t_{k_j} \rangle + \langle \Im(t_{k_j}), y - t_{k_j} \rangle + \langle \Im(y_{k_j}), t_{k_j} - y_{k_j} \rangle.$$
Since $\lim_{j \to +\infty} \|t_{k_j} - y_{k_j}\| = 0$ and $\Im$ is Lipschitz continuous, we obtain
$$\lim_{j \to +\infty} \|\Im(t_{k_j}) - \Im(y_{k_j})\| = 0, \qquad (61)$$
which, together with (60) and (61), gives
$$\liminf_{j \to +\infty} \langle \Im(y_{k_j}), y - y_{k_j} \rangle \ge 0, \quad \forall y \in Y. \qquad (62)$$
To proceed, take a positive sequence $\{\epsilon_j\}$ that is decreasing and convergent to zero. For every $j$, denote by $m_j$ the least positive integer such that
$$\langle \Im(t_{k_i}), y - t_{k_i} \rangle + \epsilon_j \ge 0, \quad \forall i \ge m_j, \qquad (63)$$
where the existence of $m_j$ follows from expression (62). Since $\{\epsilon_j\}$ is decreasing, it is easy to see that the sequence $\{m_j\}$ is increasing. Suppose that $\Im(t_{k_{m_j}}) \neq 0$ for all $j$ (otherwise, $t_{k_{m_j}}$ is already a solution), and set
$$\Lambda_{k_{m_j}} = \frac{\Im(t_{k_{m_j}})}{\|\Im(t_{k_{m_j}})\|^2}, \quad \forall j.$$
Using this value of $\Lambda_{k_{m_j}}$, we obtain
$$\langle \Im(t_{k_{m_j}}), \Lambda_{k_{m_j}} \rangle = 1, \quad \forall j. \qquad (65)$$
Combining expressions (63) and (65), we obtain
$$\langle \Im(t_{k_{m_j}}), y + \epsilon_j \Lambda_{k_{m_j}} - t_{k_{m_j}} \rangle \ge 0.$$
By the pseudomonotonicity of the mapping $\Im$, we can write
$$\langle \Im(y + \epsilon_j \Lambda_{k_{m_j}}), y + \epsilon_j \Lambda_{k_{m_j}} - t_{k_{m_j}} \rangle \ge 0.$$
For all $j$, this implies that
$$\langle \Im(y), y - t_{k_{m_j}} \rangle \ge \langle \Im(y) - \Im(y + \epsilon_j \Lambda_{k_{m_j}}), y + \epsilon_j \Lambda_{k_{m_j}} - t_{k_{m_j}} \rangle - \epsilon_j \langle \Im(y), \Lambda_{k_{m_j}} \rangle. \qquad (68)$$
Since $\{t_{k_j}\}$ converges weakly to $\hat{u} \in Y$ and $\Im$ is sequentially weakly continuous, $\{\Im(t_{k_j})\}$ converges weakly to $\Im(\hat{u})$. Assume $\Im(\hat{u}) \neq 0$ (otherwise $\hat{u}$ is trivially a solution); then, by the weak lower semicontinuity of the norm,
$$0 < \|\Im(\hat{u})\| \le \liminf_{j \to +\infty} \|\Im(t_{k_j})\|.$$
Since $\{t_{k_{m_j}}\} \subset \{t_{k_j}\}$ and $\lim_{j \to +\infty} \epsilon_j = 0$, we have
$$0 \le \limsup_{j \to +\infty} \|\epsilon_j \Lambda_{k_{m_j}}\| = \limsup_{j \to +\infty} \frac{\epsilon_j}{\|\Im(t_{k_{m_j}})\|} \le \frac{\limsup_{j \to +\infty} \epsilon_j}{\liminf_{j \to +\infty} \|\Im(t_{k_j})\|} = 0,$$
hence $\lim_{j \to +\infty} \|\epsilon_j \Lambda_{k_{m_j}}\| = 0$. By letting $j \to +\infty$ in expression (68), we obtain
$$\langle \Im(y), y - \hat{u} \rangle \ge 0, \quad \forall y \in Y. \qquad (71)$$
Let $u \in Y$ be an arbitrary element and $0 < \varpi \le 1$. Consider
$$\hat{u}_\varpi = \varpi u + (1 - \varpi) \hat{u}.$$
Then $\hat{u}_\varpi \in Y$, and from expression (71) with $y = \hat{u}_\varpi$ we have
$$\varpi \langle \Im(\hat{u}_\varpi), u - \hat{u} \rangle \ge 0.$$
Hence, we have
$$\langle \Im(\hat{u}_\varpi), u - \hat{u} \rangle \ge 0. \qquad (74)$$
Let $\varpi \to 0$. Then $\hat{u}_\varpi \to \hat{u}$ along a line segment, and by the continuity of the operator, $\Im(\hat{u}_\varpi)$ converges to $\Im(\hat{u})$ as $\varpi \to 0$. It follows from (74) that
$$\langle \Im(\hat{u}), u - \hat{u} \rangle \ge 0.$$
Therefore, $\hat{u}$ is a solution of problem (1). Since $r^* = P_{VI(Y, \Im) \cap Fix(T)}(0)$, we have
$$\langle 0 - r^*, y - r^* \rangle \le 0, \quad \forall y \in VI(Y, \Im) \cap Fix(T).$$
From (50), $\{t_{k_j}\}$ converges weakly to $\hat{u} \in X$, and it follows from (51) that $\{z_{k_j}\}$ converges weakly to $\hat{u}$ as well. By the demiclosedness of $(I - T)$ and $\lim_{j \to +\infty} \|T(z_{k_j}) - z_{k_j}\| = 0$, we obtain $\hat{u} \in Fix(T)$. Thus, $\hat{u} \in VI(Y, \Im) \cap Fix(T)$, and
$$\lim_{j \to +\infty} \langle r^*, r^* - u_{k_j} \rangle = \langle r^*, r^* - \hat{u} \rangle \le 0.$$
Using the fact that $\lim_{j \to +\infty} \|u_{k_j + 1} - u_{k_j}\| = 0$, we have
$$\limsup_{j \to +\infty} \langle r^*, r^* - u_{k_j + 1} \rangle \le \limsup_{j \to +\infty} \langle r^*, r^* - u_{k_j} \rangle + \limsup_{j \to +\infty} \langle r^*, u_{k_j} - u_{k_j + 1} \rangle \le 0.$$
Combining Claim 3 with Lemma 1, we conclude that $u_k \to r^*$ as $k \to +\infty$. The proof of Theorem 1 is complete. □
Theorem 2. Let $\Im : X \to X$ be an operator satisfying conditions (ℑ1)–(ℑ3). Then the sequence $\{u_k\}$ generated by Algorithm 2 converges strongly to $r^* \in VI(Y, \Im) \cap Fix(T)$, where $r^* = P_{VI(Y, \Im) \cap Fix(T)}(0)$.
Proof. Claim 1: The sequence $\{u_k\}$ is bounded. Indeed, we have $u_{k+1} = (1 - \alpha_k) z_k + \alpha_k T(z_k)$, and, exactly as in (32),
$$\|u_{k+1} - r^*\|^2 = \|(1 - \alpha_k) z_k + \alpha_k T(z_k) - r^*\|^2 \le \|z_k - r^*\|^2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2. \qquad (79)$$
Thus, as in the proof of Theorem 1,
$$\|t_k - r^*\| = \|(1 - \beta_k)(u_k - r^*) + (1 - \beta_k) \theta_k (u_k - u_{k-1}) - \beta_k r^*\| \le (1 - \beta_k) \|u_k - r^*\| + \beta_k M_1, \qquad (81)$$
where $M_1 > 0$ satisfies
$$(1 - \beta_k) \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + \|r^*\| \le M_1.$$
Such an $M_1$ exists because, by the choice of $\theta_k$ in Algorithm 2,
$$\lim_{k \to +\infty} \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \le \lim_{k \to +\infty} \frac{\epsilon_k}{\beta_k} = 0.$$
Using Lemma 3, the step size sequence $\{\gimel_k\}$ converges, so there exists a fixed number $\epsilon \in (0, 1 - \mu^2)$ such that
$$\lim_{k \to +\infty} \left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) = 1 - \mu^2 > \epsilon > 0.$$
Thus, there exists a finite number $k_0 \in \mathbb{N}$ such that
$$1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2} > \epsilon > 0, \quad \forall k \ge k_0.$$
By the use of Lemma 5, we may therefore write
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2, \quad \forall k \ge k_0. \qquad (83)$$
Since $\{\alpha_k\} \subset (a, 1 - \rho)$, expressions (79), (81), and (83) give, for all $k \ge k_0$,
$$\|u_{k+1} - r^*\| \le \|z_k - r^*\| \le \|t_k - r^*\| \le (1 - \beta_k) \|u_k - r^*\| + \beta_k M_1 \le \max\{\|u_k - r^*\|, M_1\} \le \cdots \le \max\{\|u_{k_0} - r^*\|, M_1\}.$$
Finally, we can conclude that $\{u_k\}$ is a bounded sequence.
Claim 2: For some $M_2 > 0$,
$$\left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) \|t_k - y_k\|^2 + \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2 \le \|u_k - r^*\|^2 - \|u_{k+1} - r^*\|^2 + \beta_k M_2.$$
Indeed, it follows from the definition of $\{u_{k+1}\}$ that
$$\|u_{k+1} - r^*\|^2 \le \|z_k - r^*\|^2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2. \qquad (87)$$
Using Lemma 5, we have
$$\|z_k - r^*\|^2 \le \|t_k - r^*\|^2 - \left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) \|t_k - y_k\|^2. \qquad (88)$$
Moreover, it follows from expression (81) that
$$\|t_k - r^*\|^2 \le (1 - \beta_k)^2 \|u_k - r^*\|^2 + \beta_k^2 M_1^2 + 2 M_1 \beta_k (1 - \beta_k) \|u_k - r^*\| \le \|u_k - r^*\|^2 + \beta_k \left[ \beta_k M_1^2 + 2 M_1 (1 - \beta_k) \|u_k - r^*\| \right] \le \|u_k - r^*\|^2 + \beta_k M_2, \qquad (89)$$
for some $M_2 > 0$. Combining expressions (87)–(89), we obtain
$$\|u_{k+1} - r^*\|^2 \le \|u_k - r^*\|^2 + \beta_k M_2 - \alpha_k (1 - \rho - \alpha_k) \|T(z_k) - z_k\|^2 - \left(1 - \frac{\mu^2 \gimel_k^2}{\gimel_{k+1}^2}\right) \|t_k - y_k\|^2.$$
Claim 3: Exactly as in the proof of Theorem 1, the definition of $\{t_k\}$ yields
$$\begin{aligned} \|t_k - r^*\|^2 &\le (1 - \beta_k) \|u_k - r^*\|^2 + \beta_k \Big[ \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \\ &\quad + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle \Big]. \end{aligned} \qquad (91)$$
Combining expressions (83) and (91), we obtain
$$\begin{aligned} \|u_{k+1} - r^*\|^2 &\le (1 - \beta_k) \|u_k - r^*\|^2 + \beta_k \Big[ \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| \\ &\quad + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle \Big]. \end{aligned}$$
Claim 4: The sequence $\{\|u_k - r^*\|^2\}$ converges to zero. Set
$$p_k := \|u_k - r^*\|^2$$
and
$$r_k := \theta_k \|u_k - u_{k-1}\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 (1 - \beta_k) \|u_k - r^*\| \frac{\theta_k}{\beta_k} \|u_k - u_{k-1}\| + 2 \|r^*\| \|t_k - u_{k+1}\| + 2 \langle r^*, r^* - u_{k+1} \rangle.$$
Then Claim 3 can be rewritten as follows:
$$p_{k+1} \le (1 - \beta_k) p_k + \beta_k r_k.$$
Indeed, by Lemma 1, it suffices to show that $\limsup_{j \to +\infty} r_{k_j} \le 0$ for every subsequence $\{p_{k_j}\}$ of $\{p_k\}$ satisfying $\liminf_{j \to +\infty} (p_{k_j + 1} - p_{k_j}) \ge 0$; equivalently, that
$$\limsup_{j \to +\infty} \langle r^*, r^* - u_{k_j + 1} \rangle \le 0 \quad \text{and} \quad \limsup_{j \to +\infty} \|t_{k_j} - u_{k_j + 1}\| \le 0$$
for every subsequence $\{\|u_{k_j} - r^*\|\}$ of $\{\|u_k - r^*\|\}$ satisfying
$$\liminf_{j \to +\infty} \left( \|u_{k_j + 1} - r^*\| - \|u_{k_j} - r^*\| \right) \ge 0.$$
Suppose that $\{\|u_{k_j} - r^*\|\}$ is such a subsequence. Then
$$\liminf_{j \to +\infty} \left( \|u_{k_j + 1} - r^*\|^2 - \|u_{k_j} - r^*\|^2 \right) = \liminf_{j \to +\infty} \left[ \left( \|u_{k_j + 1} - r^*\| - \|u_{k_j} - r^*\| \right) \left( \|u_{k_j + 1} - r^*\| + \|u_{k_j} - r^*\| \right) \right] \ge 0.$$
It follows from Claim 2 that
$$\begin{aligned} \limsup_{j \to +\infty} &\left[ \left(1 - \frac{\mu^2 \gimel_{k_j}^2}{\gimel_{k_j + 1}^2}\right) \|t_{k_j} - y_{k_j}\|^2 + \alpha_{k_j} (1 - \rho - \alpha_{k_j}) \|T(z_{k_j}) - z_{k_j}\|^2 \right] \\ &\le \limsup_{j \to +\infty} \left[ \|u_{k_j} - r^*\|^2 - \|u_{k_j + 1} - r^*\|^2 \right] + \limsup_{j \to +\infty} \beta_{k_j} M_2 = -\liminf_{j \to +\infty} \left[ \|u_{k_j + 1} - r^*\|^2 - \|u_{k_j} - r^*\|^2 \right] \le 0. \end{aligned}$$
The above relation implies that
$$\lim_{j \to +\infty} \|t_{k_j} - y_{k_j}\| = 0 \quad \text{and} \quad \lim_{j \to +\infty} \|T(z_{k_j}) - z_{k_j}\| = 0.$$
It follows that
$$\|z_{k_j} - y_{k_j}\| = \|y_{k_j} + \gimel_{k_j} [\Im(t_{k_j}) - \Im(y_{k_j})] - y_{k_j}\| \le \gimel_{k_j} L \|t_{k_j} - y_{k_j}\|.$$
The above expression implies that
$$\lim_{j \to +\infty} \|z_{k_j} - y_{k_j}\| = 0.$$
The remainder of the proof is identical to Claim 4 of Theorem 1 and is omitted here. □

4. Numerical Illustrations

In this section, we compare the proposed methods with some previous works from the literature and analyze how differences in the control parameters affect the numerical efficiency of the proposed algorithms.
Example 1. Consider the HpHard problem taken from [42]; many researchers have used this example in numerical experiments (see [43,44,45] for details). The mapping $\Im : \mathbb{R}^m \to \mathbb{R}^m$ is defined by
$$\Im(u) = M u + q$$
with $q = 0$, where
$$M = N N^T + B + D.$$
During this experiment, we used $N = \mathrm{rand}(m)$ as a random matrix, $B = 0.5 K - 0.5 K^T$ as a skew-symmetric matrix with $K = \mathrm{rand}(m)$, and $D = \mathrm{diag}(\mathrm{rand}(m, 1))$ as a diagonal matrix. The feasible set $Y$ is taken as
$$Y = \{ u \in \mathbb{R}^m : -10 \le u_i \le 10 \}.$$
It is obvious that $\Im$ is monotone and Lipschitz continuous with $L = \|M\|$. Let $T : X \to X$ be given by $T u = \frac{1}{2} u$. The starting points for this experiment are $u_0 = u_1 = (2, 2, \ldots, 2)$, the dimension of the space is varied, and the stopping criterion is $D_k = \|t_k - y_k\| \le 10^{-10}$. Numerical observations for Example 1 are shown in Figure 1, Figure 2, Figure 3 and Figure 4 and Table 1 and Table 2. The control parameters are as follows: (1) Algorithm 1 (shortly, alg-1): $\gimel_1 = 0.55$, $\theta = 0.45$, $\mu = 0.44$, $\epsilon_k = \frac{100}{(1+k)^2}$, $\beta_k = \frac{1}{2k+4}$, $\alpha_k = \frac{k}{2k+1}$. (2) Algorithm 2 (shortly, alg-2): the same parameters as alg-1. (3) Algorithm 1 in [36] (shortly, mtalg-1): $\gamma_1 = 0.55$, $\delta = 0.45$, $\phi = 0.44$, $\theta_k = \frac{1}{2k+4}$, the relaxation parameter $\frac{1}{2}(1 - \theta_k)$, $\epsilon_k = \frac{100}{(1+k)^2}$. (4) Algorithm 2 in [36] (shortly, mtalg-2): the same parameters as mtalg-1. (5) Algorithm 1 in [35] (shortly, vtalg-1): $\tau_1 = 0.55$, $\theta = 0.45$, $\mu = 0.44$, $\epsilon_k = \frac{100}{(1+k)^2}$, $\beta_k = \frac{1}{2k+4}$, $\alpha_k = \frac{k}{2k+1}$, $f(u) = \frac{u}{2}$. (6) Algorithm 2 in [35] (shortly, vtalg-2): the same parameters as vtalg-1.
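A possible Python setup for this experiment (a sketch using NumPy's random generator in place of MATLAB's rand, and reusing the alg1 sketch from Section 3) looks as follows:

```python
import numpy as np

m = 50
rng = np.random.default_rng(2022)
N, K = rng.random((m, m)), rng.random((m, m))
B = 0.5 * K - 0.5 * K.T                 # skew-symmetric part
D = np.diag(rng.random(m))              # positive diagonal part
M = N @ N.T + B + D                     # positive semidefinite + skew  =>  monotone
F = lambda u: M @ u                     # q = 0
proj_Y = lambda u: np.clip(u, -10.0, 10.0)
u0 = u1 = 2.0 * np.ones(m)
sol = alg1(F, proj_Y, u0, u1, lam=0.55, theta=0.45, mu=0.44)
print(np.linalg.norm(sol))              # should be close to 0
```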
Example 2. Consider the nonlinear operator $\Im : \mathbb{R}^2 \to \mathbb{R}^2$ defined by
$$\Im(u, y) = (u + y + \sin u, \; -u + y + \sin y),$$
and the feasible set $Y = [-1, 1] \times [-1, 1]$. It is easy to check that $\Im$ is monotone and Lipschitz continuous with the constant $L = 3$. Let $E$ be the $2 \times 2$ matrix
$$E = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}.$$
We consider the mapping $T : \mathbb{R}^2 \to \mathbb{R}^2$ given by $T z = E^{-1} E z$, where $z = (u, y)^T$. It is obvious that $T$ is 0-demicontractive, and thus $\rho = 0$. The solution of the problem is $r^* = (0, 0)^T$. The starting points for this experiment are varied, and the stopping criterion is $D_k = \|t_k - y_k\| \le 10^{-10}$. Numerical observations for Example 2 are shown in Figure 5, Figure 6, Figure 7 and Figure 8 and Table 3 and Table 4. The control parameters are as follows: (1) Algorithm 1 (shortly, alg-1): $\gimel_1 = 0.45$, $\theta = 0.35$, $\mu = 0.33$, $\epsilon_k = \frac{10}{(1+k)^2}$, $\beta_k = \frac{1}{3k+6}$, $\alpha_k = \frac{k}{3k+1}$. (2) Algorithm 2 (shortly, alg-2): $\gimel_1 = 0.45$, $\theta = 0.35$, $\mu = 0.33$, $\epsilon_k = \frac{10}{(1+k)^2}$, $\beta_k = \frac{1}{3k+4}$, $\alpha_k = \frac{k}{3k+1}$. (3) Algorithm 1 in [36] (shortly, mtalg-1): $\gamma_1 = 0.45$, $\delta = 0.35$, $\phi = 0.33$, $\theta_k = \frac{1}{3k+6}$, the relaxation parameter $\frac{1}{2.5}(1 - \theta_k)$, $\epsilon_k = \frac{10}{(1+k)^2}$. (4) Algorithm 2 in [36] (shortly, mtalg-2): the same parameters as mtalg-1. (5) Algorithm 1 in [35] (shortly, vtalg-1): $\tau_1 = 0.45$, $\theta = 0.35$, $\mu = 0.33$, $\epsilon_k = \frac{10}{(1+k)^2}$, $\beta_k = \frac{1}{3k+6}$, $\alpha_k = \frac{k}{3k+1}$, $f(u) = \frac{u}{2}$. (6) Algorithm 2 in [35] (shortly, vtalg-2): the same parameters as vtalg-1.
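Under the reading of $\Im$ given above (the minus sign in the first term of the second component is our reconstruction of a sign lost in extraction), the experiment can be sketched in Python by reusing the alg2 sketch from Section 3:

```python
import numpy as np

# Assumed reading of the Example 2 operator; the sign of -v[0] is reconstructed.
F = lambda v: np.array([v[0] + v[1] + np.sin(v[0]),
                        -v[0] + v[1] + np.sin(v[1])])
proj_Y = lambda v: np.clip(v, -1.0, 1.0)      # Y = [-1, 1] x [-1, 1]
u0 = u1 = np.array([1.0, 1.0])
sol = alg2(F, proj_Y, u0, u1, lam=0.45, theta=0.35, mu=0.33)
print(sol)                                     # should be close to r* = (0, 0)
```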
Example 3. Let $X = L^2([0, 1])$ be a Hilbert space with inner product
$$\langle u, y \rangle = \int_0^1 u(t) y(t) \, dt, \quad \forall u, y \in X,$$
and induced norm
$$\|u\| = \sqrt{\int_0^1 |u(t)|^2 \, dt}.$$
Let $Y := \{ u \in L^2([0, 1]) : \|u\| \le 1 \}$ be the unit ball, and let $\Im : Y \to X$ be defined by
$$\Im(u)(t) = \int_0^1 \big( u(t) - H(t, s) f(u(s)) \big) \, ds + g(t),$$
where
$$H(t, s) = \frac{2 t s e^{t+s}}{e \sqrt{e^2 - 1}}, \quad f(u) = \cos u, \quad g(t) = \frac{2 t e^t}{e \sqrt{e^2 - 1}}.$$
It is known that $\Im$ is monotone and Lipschitz continuous with Lipschitz constant $L = 2$ [44]. The projection onto $Y$ is explicit, namely,
$$P_Y(u) = \begin{cases} \frac{u}{\|u\|} & \text{if } \|u\| > 1, \\ u & \text{if } \|u\| \le 1. \end{cases}$$
The operator $T : L^2([0, 1]) \to L^2([0, 1])$ has the form
$$T(u)(t) = \int_0^1 t \, u(s) \, ds, \quad t \in [0, 1].$$
A straightforward computation shows that $T$ is 0-demicontractive. The solution of the problem is $r^*(t) = 0$. The starting points for this experiment are varied, and the stopping criterion is $D_k = \|t_k - y_k\| \le 10^{-6}$. Numerical observations for Example 3 are shown in Figure 9, Figure 10, Figure 11 and Figure 12 and Table 5 and Table 6. The control parameters are as follows: (1) Algorithm 1 (shortly, alg-1): $\gimel_1 = 0.33$, $\theta = 0.66$, $\mu = 0.55$, $\epsilon_k = \frac{1}{(1+k)^2}$, $\beta_k = \frac{1}{4k+8}$, $\alpha_k = \frac{k}{5k+1}$. (2) Algorithm 2 (shortly, alg-2): the same parameters as alg-1. (3) Algorithm 1 in [36] (shortly, mtalg-1): $\gamma_1 = 0.33$, $\delta = 0.66$, $\phi = 0.55$, $\theta_k = \frac{1}{4k+8}$, the relaxation parameter $\frac{1}{2}(1 - \theta_k)$, $\epsilon_k = \frac{1}{(1+k)^2}$. (4) Algorithm 2 in [36] (shortly, mtalg-2): the same parameters as mtalg-1. (5) Algorithm 1 in [35] (shortly, vtalg-1): $\tau_1 = 0.33$, $\theta = 0.66$, $\mu = 0.55$, $\epsilon_k = \frac{1}{(1+k)^2}$, $\beta_k = \frac{1}{4k+8}$, $\alpha_k = \frac{k}{4k+1}$, $f(u) = \frac{u}{3}$. (6) Algorithm 2 in [35] (shortly, vtalg-2): the same parameters as vtalg-1.
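A discretized sketch of this infinite-dimensional experiment (our own: a uniform grid on [0, 1] with the simple quadrature weight ds = 1/n, reusing the alg1 sketch from Section 3 with its illustrative T(z) = z/2 in place of the integral operator T above) could look like this:

```python
import numpy as np

n = 200
t = np.linspace(0.0, 1.0, n)
ds = 1.0 / n
c = np.e * np.sqrt(np.e ** 2 - 1.0)
H = 2.0 * np.outer(t, t) * np.exp(t[:, None] + t[None, :]) / c
g = 2.0 * t * np.exp(t) / c
F = lambda u: u - (H @ np.cos(u)) * ds + g     # discretized integral operator

def proj_ball(u):
    """Projection onto the unit ball of L2([0,1]) in the discretized norm."""
    nrm = np.sqrt(np.sum(u ** 2) * ds)
    return u / nrm if nrm > 1.0 else u

u0 = u1 = np.ones(n)
sol = alg1(F, proj_ball, u0, u1, lam=0.33, theta=0.66, mu=0.55, tol=1e-6)
print(np.max(np.abs(sol)))                     # should be close to r*(t) = 0
```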

5. Discussion about Numerical Illustrations

Regarding the above-mentioned numerical experiments, we have the following findings:
(1) Examples 1–3 report results for several algorithms in both finite- and infinite-dimensional spaces. The proposed algorithms outperformed the others in terms of the number of iterations and elapsed time in almost all situations; all experiments indicate that they perform better than the previously existing algorithms.
(2) The appearance of an unsuitable variable step size causes a hump in the convergence plots for Example 2. It does not substantially affect the overall performance of the algorithms.
(3) In most cases, the algorithms' performance is determined by the nature of the problem and the tolerance value employed.
(4) For large-dimensional problems, all approaches typically took longer and showed significant variation in execution time, whereas the number of iterations varied comparatively little.
(5) It is also observed that a specific formula for step size evaluation enhances the algorithms' efficiency and rate of convergence. In other words, an appropriate variable step size improves performance compared with a fixed step size.
(6) In Examples 2 and 3, the choice of the initial point and the complexity of the operators also influence the performance of the algorithms, both in the number of iterations and in the execution time in seconds.

Author Contributions

Conceptualization, C.K. and N.P.; Funding acquisition, B.P.; Investigation, N.P.; Methodology, N.P.; Project administration, C.K.; Supervision, B.P.; Validation, C.K. and N.P.; Writing—original draft, C.K. and N.P.; Writing—review & editing, C.K. and N.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Chiang Mai University and the NSRF via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation (grant number B05F640183).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the referees and the editor for reading this paper carefully, providing valuable suggestions and comments, and pointing out minor errors in the original version of this paper. The first and third authors would like to thank Phetchabun Rajabhat University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Maingé, P.E. A Hybrid Extragradient-Viscosity Method for Monotone Operators and Fixed Point Problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
2. Maingé, P.E.; Moudafi, A. Coupling viscosity methods with the extragradient algorithm for solving equilibrium problems. J. Nonlinear Convex Anal. 2008, 9, 283–294.
3. Iiduka, H.; Yamada, I. A subgradient-type method for the equilibrium problem over the fixed point set and its applications. Optimization 2009, 58, 251–261.
4. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850.
5. An, N.T.; Nam, N.M.; Qin, X. Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 2019, 76, 189–209.
6. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. Comptes Rendus Hebd. Seances Acad. Sci. 1964, 258, 4413.
7. Konnov, I.V. On systems of variational inequalities. Russ. Math. (Izv. Vyssh. Uchebn. Zaved. Mat.) 1997, 41, 77–86.
8. Kassay, G.; Kolumbán, J.; Páles, Z. On Nash stationary points. Publ. Math. 1999, 54, 267–279.
9. Kassay, G.; Kolumbán, J.; Páles, Z. Factorization of Minty and Stampacchia variational inequality systems. Eur. J. Oper. Res. 2002, 143, 377–389.
10. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
11. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210.
12. Elliott, C.M. Variational and Quasivariational Inequalities: Applications to Free-Boundary Problems (by Claudio Baiocchi and António Capelo). SIAM Rev. 1987, 29, 314–315.
13. Nagurney, A. Network Economics: A Variational Inequality Approach; Kluwer Academic Publishers Group: Dordrecht, The Netherlands, 1999.
14. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
15. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756.
16. Noor, M.A. Some iterative methods for nonconvex variational inequalities. Comput. Math. Model. 2010, 21, 97–108.
17. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335.
18. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132.
19. Tseng, P. A Modified Forward-Backward Splitting Method for Maximal Monotone Mappings. SIAM J. Control Optim. 2000, 38, 431–446.
20. Moudafi, A. Viscosity Approximation Methods for Fixed-Points Problems. J. Math. Anal. Appl. 2000, 241, 46–55.
21. Zhang, L.; Fang, C.; Chen, S. An inertial subgradient-type method for solving single-valued variational inequalities and fixed point problems. Numer. Algorithms 2018, 79, 941–956.
22. Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321.
23. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610.
24. Thong, D.V.; Hieu, D.V. Weak and strong convergence theorems for variational inequality problems. Numer. Algorithms 2017, 78, 1045–1060.
25. Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 2020, 13, 3292.
26. Hammad, H.A.; Rehman, H.; la Sen, M.D. Advanced algorithms and common solutions to variational inequalities. Symmetry 2020, 12, 1198.
27. Yordsorn, P.; Kumam, P.; Rehman, H.; Ibrahim, A.H. A weak convergence self-adaptive method for solving pseudomonotone equilibrium problems in a real Hilbert space. Mathematics 2020, 8, 1165.
28. Rehman, H.; Gibali, A.; Kumam, P.; Sitthithakerngkiet, K. Two new extragradient methods for solving equilibrium problems. Rev. Real Acad. Cienc. Exactas Fis. Nat. Ser. Mat. 2021, 115.
29. Rehman, H.; Kumam, P.; Gibali, A.; Kumam, W. Convergence analysis of a general inertial projection-type method for solving pseudomonotone equilibrium problems with applications. J. Inequalities Appl. 2021, 2021.
30. Rehman, H.; Alreshidi, N.A.; Muangchoo, K. A New Modified Subgradient Extragradient Algorithm Extended for Equilibrium Problems with Application in Fixed Point Problems. J. Nonlinear Convex Anal. 2021, 22, 421–439.
31. Muangchoo, K.; Rehman, H.; Kumam, P. Weak convergence and strong convergence of nonmonotonic explicit iterative methods for solving equilibrium problems. J. Nonlinear Convex Anal. 2021, 22, 663–681.
32. Rehman, H.; Kumam, P.; Özdemir, M.; Karahan, I. Two generalized non-monotone explicit strongly convergent extragradient methods for solving pseudomonotone equilibrium problems and applications. Math. Comput. Simul. 2021.
33. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Mat. Metod. 1976, 12, 1164–1173.
34. Yamada, I.; Ogura, N. Hybrid Steepest Descent Method for Variational Inequality Problem over the Fixed Point Set of Certain Quasi-nonexpansive Mappings. Numer. Funct. Anal. Optim. 2005, 25, 619–655.
35. Tan, B.; Zhou, Z.; Li, S. Viscosity-type inertial extragradient algorithms for solving variational inequality problems and fixed point problems. J. Appl. Math. Comput. 2021.
36. Tan, B.; Fan, J.; Qin, X. Inertial extragradient algorithms with non-monotonic step sizes for solving variational inequalities and fixed point problems. Adv. Oper. Theory 2021, 6.
37. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506.
38. Zhou, H.; Qin, X. Fixed Points of Nonlinear Operators; De Gruyter: Berlin, Germany, 2020.
39. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. Theory Methods Appl. 2012, 75, 742–750.
40. Hicks, T.L.; Kubicek, J.D. On the Mann iteration process in a Hilbert space. J. Math. Anal. Appl. 1977, 59, 498–504.
41. Karamardian, S. Complementarity problems over cones with monotone and pseudomonotone maps. J. Optim. Theory Appl. 1976, 18, 445–454.
42. Harker, P.T.; Pang, J.S. A damped-Newton method for the linear complementarity problem. Comput. Solut. Nonlinear Syst. Equ. 1990, 26, 265.
43. Solodov, M.V.; Svaiter, B.F. A New Projection Method for Variational Inequality Problems. SIAM J. Control Optim. 1999, 37, 765–776.
44. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2016, 66, 75–96.
45. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2017, 70, 687–704.
Figure 1. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $m = 5$.
Figure 2. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $m = 10$.
Figure 3. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $m = 50$.
Figure 4. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $m = 200$.
Figure 5. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = (1, 1)^T$.
Figure 6. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = (2, 2)^T$.
Figure 7. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = (1, 1)^T$.
Figure 8. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = (2, 3)^T$.
Figure 9. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = 1$.
Figure 10. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = t$.
Figure 11. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = \sin(t)$.
Figure 12. Computational illustration of Algorithms 1 and 2 with Algorithms 1 and 2 in [36] and Algorithms 1 and 2 in [35] when $u_0 = u_1 = \cos(t)$.
Table 1. Example 1 obtained numerical values (total number of iterations).

m      alg-1   alg-2   mtalg-1   mtalg-2   vtalg-1   vtalg-2
5      36      19      94        78        60        49
10     46      24      102       80        62        51
20     42      25      93        85        59        53
50     38      25      86        87        57        55
100    37      32      84        88        56        56
200    38      36      84        94        56        62
Table 2. Example 1 obtained numerical values (required CPU time in seconds).

m      alg-1       alg-2       mtalg-1     mtalg-2     vtalg-1       vtalg-2
5      0.246841    0.1317703   0.5865135   0.4009539   0.360533465   0.3001653
10     0.284076    0.1523123   0.5159276   0.4722816   0.375091336   0.3097725
20     0.2602246   0.1633652   0.4246998   0.4630932   0.393142367   0.3358743
50     0.293302    0.1854808   0.4320612   0.5335381   0.331728156   0.3663686
100    0.2566573   0.2301228   0.4752024   0.5067862   0.358997537   0.3936471
200    0.3544296   0.3695034   0.7371152   0.8441844   0.516623963   0.6142675
Table 3. Example 2 obtained numerical values (total number of iterations).

u0 = u1      alg-1   alg-2   mtalg-1   mtalg-2   vtalg-1   vtalg-2
(1, 1)^T     49      35      68        75        82        85
(2, 2)^T     48      36      61        65        77        78
(1, 1)^T     44      33      72        83        86        92
(2, 3)^T     51      37      65        70        81        79
Table 4. Example 2 obtained numerical values (required CPU time in seconds).

u0 = u1      alg-1       alg-2       mtalg-1     mtalg-2     vtalg-1     vtalg-2
(1, 1)^T     0.2284193   0.1631707   0.2969821   0.3224385   0.3469049   0.3625844
(2, 2)^T     0.2297859   0.1757931   0.3720656   0.3078242   0.3847476   0.4105755
(1, 1)^T     0.1986126   0.1512495   0.3220028   0.3729462   0.3787876   0.4068135
(2, 3)^T     0.2380988   0.1703252   0.2690971   0.3069672   0.3448697   0.3428332
Table 5. Example 3 obtained numerical values (total number of iterations).

u0 = u1    alg-1   alg-2   mtalg-1   mtalg-2   vtalg-1   vtalg-2
1          44      33      76        70        66        57
t          42      31      89        84        58        48
sin(t)     45      34      75        64        58        51
cos(t)     47      35      74        94        58        51
Table 6. Example 3 obtained numerical values (required CPU time in seconds).

u0 = u1    alg-1       alg-2       mtalg-1     mtalg-2     vtalg-1     vtalg-2
1          0.1310831   0.1149104   0.2380825   0.2171721   0.1915358   0.178602
t          0.0583617   0.0538350   0.1154974   0.1059993   0.0784157   0.0548289
sin(t)     0.1372786   0.1029274   0.2692971   0.185825    0.1745996   0.1476468
cos(t)     0.1364229   0.1253482   0.2207376   0.2697567   0.172504    0.1452834
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
