Article

Mildly Inertial Subgradient Extragradient Method for Variational Inequalities Involving an Asymptotically Nonexpansive and Finitely Many Nonexpansive Mappings

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 General Education Center, National Yunlin University of Science and Technology, Douliou 64002, Taiwan
3 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
4 Research Center for Interneural Computing, China Medical University Hospital, Taichung 40447, Taiwan
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 881; https://doi.org/10.3390/math7100881
Submission received: 13 July 2019 / Revised: 19 September 2019 / Accepted: 19 September 2019 / Published: 22 September 2019
(This article belongs to the Special Issue Variational Inequality)

Abstract: In a real Hilbert space, let the notation VIP indicate a variational inequality problem for a Lipschitzian, pseudomonotone operator, and let CFPP denote a common fixed-point problem of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings. This paper introduces mildly inertial algorithms with a linesearch process for finding a common solution of the VIP and the CFPP by using a subgradient approach. These algorithms fully absorb hybrid steepest-descent ideas, viscosity iteration ideas, and composite Mann-type iterative ideas. Under suitable conditions on the real parameters, it is shown that the sequences generated by our algorithms converge in norm to a common solution, which is the unique solution of a hierarchical variational inequality (HVI).

1. Introduction

Let C be a nonempty, closed, and convex set in a real Hilbert space (H, ‖·‖) with inner product ⟨·,·⟩. Let Fix(S) indicate the fixed-point set of a non-self operator S : C → H, i.e., Fix(S) = {u ∈ C : u = Su}. One says that a self operator T : C → C is asymptotically nonexpansive if and only if ‖T^n u − T^n v‖ ≤ (1 + θ_n)‖u − v‖ ∀ n ≥ 1, u, v ∈ C, where {θ_n} is a real sequence with lim_{n→∞} θ_n = 0. In the case θ_n = 0 ∀ n ≥ 1, one says that T is nonexpansive. Both the class of nonexpansive operators and the class of asymptotically nonexpansive operators have recently been studied via various iterative techniques; see, e.g., the works by the authors of [1,2,3,4,5,6,7,8,9,10,11,12,13]. Let A : H → H be a self operator. Consider the classical variational inequality problem (VIP) consisting of finding u* ∈ C such that
⟨Au*, v − u*⟩ ≥ 0 ∀ v ∈ C. (1)
The set of solutions of problem (1) is indicated by VI(C, A). Recently, many authors have studied the VIP via mean-valued and projection-based methods; see, e.g., the works by the authors of [14,15,16,17,18,19,20,21]. In 1976, Korpelevich [22] first designed and investigated an extragradient method for solving problem (1): for arbitrarily given u₀ ∈ C, the sequence {u_n} is constructed by
v_n = P_C(u_n − τAu_n), u_{n+1} = P_C(u_n − τAv_n) ∀ n ≥ 0, (2)
with τ ∈ (0, 1/L). If problem (1) has a solution, then the sequence {u_n} constructed by (2) converges weakly to a solution of problem (1). Since then, Korpelevich’s extragradient method and its variants have received great attention from many scholars, who have improved it through various techniques and approaches; see, e.g., the works by the authors of [23,24,25,26,27,28,29,30,31,32,33,34].
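The two-projection scheme (2) is easy to exercise numerically. Below is a minimal sketch, not from the paper: a toy affine monotone operator A(u) = Mu + q over the box C = [0, 1]², where P_C is a coordinatewise clip; the matrix M, vector q, step size, and iteration count are all our own illustrative choices.

```python
import numpy as np

# Monotone affine operator A(u) = M u + q on R^2; M is positive definite,
# so A is monotone and Lipschitz with constant L = ||M||_2.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda u: M @ u + q
L = np.linalg.norm(M, 2)

# C is the box [0, 1]^2, so the metric projection is a coordinatewise clip.
proj_C = lambda u: np.clip(u, 0.0, 1.0)

tau = 0.9 / L                      # step size tau in (0, 1/L)
u = np.array([1.0, 0.0])
for _ in range(200):
    v = proj_C(u - tau * A(u))     # extragradient (prediction) step
    u = proj_C(u - tau * A(v))     # correction step

# u solves the VIP iff it is a fixed point of u -> P_C(u - tau A(u)).
residual = np.linalg.norm(u - proj_C(u - tau * A(u)))
print(residual)
```

Since M is positive definite, the unique solution of this toy instance is u* = (1/3, 1/3), and the fixed-point residual ‖u − P_C(u − τAu)‖ certifies convergence.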
Let {T_i}_{i=1}^N be N nonexpansive mappings on H such that Ω = ∩_{i=1}^N Fix(T_i) ≠ ∅. Let F be a κ-Lipschitzian, η-strongly monotone self-mapping on H, and let f be a contractive map with constant δ ∈ (0, 1). In 2015, Bnouhachem et al. [2] introduced an iterative algorithm for solving a hierarchical fixed point problem (HFPP) for the finite pool {T_i}_{i=1}^N: for arbitrarily given x₀ ∈ H, the sequence {x_n} is constructed by
y_n = (1 − β_n)T_{N,n}T_{N−1,n}⋯T_{1,n}x_n + β_n x_n, x_{n+1} = γ_n x_n + ((1 − γ_n)I − α_n μF)y_n + α_n ρf(y_n) ∀ n ≥ 0, (3)
where T_{i,n} = (1 − δ_{i,n})I + δ_{i,n}T_i and δ_{i,n} ∈ (0, 1) for each integer i ∈ {1, 2, …, N}. Let the parameters satisfy 0 < μκ² < 2η and 0 ≤ ρτ < ν, with ν = μ(η − μκ²/2). Also, suppose that the sequences {α_n}, {β_n}, {γ_n} ⊂ (0, 1) satisfy the following requirements.
(i)
∑_{n=0}^∞ α_n = ∞ and lim_{n→∞} α_n = 0;
(ii)
{β_n} ⊂ [σ, 1) and lim_{n→∞} β_n = β < 1;
(iii)
lim sup_{n→∞} γ_n < 1 and lim inf_{n→∞} γ_n > 0;
(iv)
lim_{n→∞} |δ_{i,n−1} − δ_{i,n}| = 0 for i = 1, 2, …, N.
They proved the strong convergence of {x_n} to a point x* ∈ Ω, which is the unique solution of the HFPP: ⟨(μF − ρf)x*, y − x*⟩ ≥ 0 ∀ y ∈ Ω.
On the other hand, let the mappings A₁, A₂ : C → H both be inverse-strongly monotone and let T : C → C be an asymptotically nonexpansive mapping with sequence {θ_n}. In 2018, by means of the modified extragradient method, Cai et al. [35] designed a viscosity implicit method for computing a point of the common solution set Ω of the VIPs for A₁ and A₂ and the FPP of T: for arbitrarily given x₁ ∈ C, the sequence {x_n} is constructed by
v_n = t_n x_n + (1 − t_n)u_n, z_n = P_C(v_n − μA₂v_n), u_n = P_C(z_n − λA₁z_n), x_{n+1} = P_C[(I − α_n ρF)T^n u_n + α_n f(x_n)], (4)
where f : C → C is a δ-contraction with 0 ≤ δ < 1, and {α_n}, {t_n} are sequences in (0, 1] satisfying
(i)
∑_{n=1}^∞ α_n = ∞, lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ |α_{n+1} − α_n| < ∞;
(ii)
lim_{n→∞} θ_n/α_n = 0;
(iii)
0 < ϵ ≤ t_n ≤ 1 and ∑_{n=1}^∞ |t_{n+1} − t_n| < ∞;
(iv)
∑_{n=1}^∞ ‖T^{n+1}u_n − T^n u_n‖ < ∞.
They proved that {x_n} converges strongly to a point x* ∈ Ω, which is the unique solution of the VIP: ⟨(f − ρF)x*, y − x*⟩ ≤ 0 ∀ y ∈ Ω.
In the setting of extragradient approaches, one must calculate two metric projections per iteration. Without doubt, if C is a general closed convex subset, the computation of the projection onto C might be prohibitively time-consuming. In 2011, motivated by Korpelevich’s extragradient method, Censor et al. [5] first proposed the subgradient extragradient method, in which a projection onto a half-space is used in place of the second projection onto C:
v_n = P_C(u_n − ℓAu_n), C_n = {u ∈ H : ⟨u_n − ℓAu_n − v_n, u − v_n⟩ ≤ 0}, u_{n+1} = P_{C_n}(u_n − ℓAv_n) ∀ n ≥ 0, (5)
with ℓ ∈ (0, 1/L). In 2014, Kraikaew and Saejung [36] introduced the Halpern subgradient extragradient method for solving VIP (1) and proved that the sequence generated by the proposed method converges strongly to a solution of VIP (1).
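The computational appeal of (5) is that the second projection has a closed form: projecting onto a half-space needs no inner solver. A sketch on the same kind of toy problem as above (matrix, vector, step, and iteration count are our own choices, not the authors'):

```python
import numpy as np

# Toy monotone affine VIP: A(u) = M u + q over the box C = [0, 1]^2.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda u: M @ u + q
ell = 0.9 / np.linalg.norm(M, 2)       # fixed step in (0, 1/L)
proj_C = lambda u: np.clip(u, 0.0, 1.0)

def proj_halfspace(x, a, y):
    """Project x onto {u : <a, u - y> <= 0}; closed form, no inner solver."""
    t = a @ (x - y)
    return x if t <= 0 else x - (t / (a @ a)) * a

u = np.array([1.0, 0.0])
for _ in range(500):
    v = proj_C(u - ell * A(u))         # the only projection onto C
    a = u - ell * A(u) - v             # normal vector of the half-space C_n
    w = u - ell * A(v)
    # if a = 0, the half-space degenerates to the whole space
    u = proj_halfspace(w, a, v) if a @ a > 1e-16 else w

residual = np.linalg.norm(u - proj_C(u - ell * A(u)))
print(residual)
```

The half-space C_n contains C by construction, so replacing the second projection onto C by this cheap formula keeps the scheme well defined.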
In 2018, by virtue of the inertial technique, Thong and Hieu [37] first introduced the inertial subgradient extragradient method and proved weak convergence of the proposed method to a solution of VIP (1). Very recently, Thong and Hieu [37] introduced two inertial subgradient extragradient algorithms with a linesearch process for solving the VIP (1) with a Lipschitzian, monotone operator A and the FPP of a quasi-nonexpansive operator S satisfying a demiclosedness property in H.
Under mild assumptions, they proved that the sequences defined by the above algorithms converge to a point of Fix(S) ∩ VI(C, A). Motivated by the research works [2,37,38] and using the subgradient extragradient technique, this paper designs two mildly inertial algorithms with a linesearch process for solving the VIP (1) with a Lipschitzian, pseudomonotone operator and the CFPP of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings in H. Our algorithms fully absorb inertial subgradient extragradient approaches with a linesearch process, hybrid steepest-descent algorithms, viscosity iteration techniques, and composite Mann-type iterative methods. Under suitable conditions, it is shown that the sequences constructed by our algorithms converge in norm to a common solution of the VIP and CFPP, which is the unique solution of a hierarchical variational inequality (HVI). Finally, we apply our main theorems to deal with the VIP and CFPP in an illustrative example.
The outline of the article is as follows. In Section 2, some concepts and preliminary conclusions are recalled for later use. In Section 3, the convergence criteria of the suggested algorithms are established. In Section 4, our main theorems are applied to the VIP and CFPP in an illustrative example. As our algorithms solve the VIP (1) with a Lipschitzian, pseudomonotone operator and the CFPP of an asymptotically nonexpansive mapping and finitely many nonexpansive mappings, they are more advantageous and more subtle than Algorithms 1 and 2 in [37]. Our theorems strengthen and generalize the corresponding results announced in Bnouhachem et al. [2], Cai et al. [35], Kraikaew and Saejung [36], and Thong and Hieu [37,38].
Algorithm 1: of Thong and Hieu [37]
Initial Step: Given x₀, x₁ ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iteration Steps: Compute x_{n+1} as follows.
 Step 1. Put u_n = x_n − α_n(x_{n−1} − x_n) and calculate y_n = P_C(u_n − ℓ_n Au_n), where ℓ_n is chosen to be the largest ℓ ∈ {γ, γl, γl², …} satisfying ℓ‖Au_n − Ay_n‖ ≤ μ‖u_n − y_n‖.
 Step 2. Calculate z_n = P_{C_n}(u_n − ℓ_n Ay_n) with C_n := {u ∈ H : ⟨u_n − ℓ_n Au_n − y_n, u − y_n⟩ ≤ 0}.
 Step 3. Calculate x_{n+1} = (1 − β_n)u_n + β_n Sz_n. If u_n = z_n = x_{n+1}, then u_n ∈ Fix(S) ∩ VI(C, A). Put n := n + 1 and return to Step 1.
Algorithm 2: of Thong and Hieu [37]
Initial Step: Given x₀, x₁ ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iteration Steps: Compute x_{n+1} as follows.
 Step 1. Put u_n = x_n − α_n(x_{n−1} − x_n) and calculate y_n = P_C(u_n − ℓ_n Au_n), where ℓ_n is chosen to be the largest ℓ ∈ {γ, γl, γl², …} satisfying ℓ‖Au_n − Ay_n‖ ≤ μ‖u_n − y_n‖.
 Step 2. Calculate z_n = P_{C_n}(u_n − ℓ_n Ay_n) with C_n := {u ∈ H : ⟨u_n − ℓ_n Au_n − y_n, u − y_n⟩ ≤ 0}.
 Step 3. Calculate x_{n+1} = (1 − β_n)x_n + β_n Sz_n. If u_n = z_n = x_n = x_{n+1}, then x_n ∈ Fix(S) ∩ VI(C, A). Put n := n + 1 and return to Step 1.

2. Preliminaries

Let {u_n} be a sequence in H. We use the notations u_n → u and u_n ⇀ u to indicate the strong convergence and the weak convergence of {u_n} to u, respectively. An operator T : C → H is said to be
(i)
L-Lipschitz continuous (or L-Lipschitzian) iff ∃ L > 0 s.t.
‖Tu − Tv‖ ≤ L‖u − v‖ ∀ u, v ∈ C;
(ii)
monotone iff
⟨Tu − Tv, u − v⟩ ≥ 0 ∀ u, v ∈ C;
(iii)
pseudomonotone iff
⟨Tu, v − u⟩ ≥ 0 ⇒ ⟨Tv, v − u⟩ ≥ 0 ∀ u, v ∈ C;
(iv)
β-strongly monotone iff ∃ β > 0 s.t.
⟨Tu − Tv, u − v⟩ ≥ β‖u − v‖² ∀ u, v ∈ C;
(v)
sequentially weakly continuous iff ∀ {u_n} ⊂ C, the relation u_n ⇀ u ⇒ Tu_n ⇀ Tu holds.
It is clear that every monotone mapping is pseudomonotone, but the converse is not valid; e.g., take Tx := a/(a + x) for x ∈ (0, +∞), with a constant a ∈ (0, +∞).
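A quick numerical spot-check of this example (with the particular value a = 1 chosen by us) confirms both the failure of monotonicity and the pseudomonotonicity implication on random samples:

```python
import random

random.seed(0)
a = 1.0
T = lambda x: a / (a + x)   # positive and strictly decreasing on (0, +inf)

# Not monotone: (T(u) - T(v)) (u - v) < 0 whenever u != v.
assert (T(1.0) - T(2.0)) * (1.0 - 2.0) < 0

# Pseudomonotone: T(u)(v - u) >= 0  =>  T(v)(v - u) >= 0.
# Since T > 0, the premise forces v >= u, and the conclusion follows.
for _ in range(1000):
    u = random.uniform(0.01, 10.0)
    v = random.uniform(0.01, 10.0)
    if T(u) * (v - u) >= 0:
        assert T(v) * (v - u) >= 0
```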
For every u ∈ H, there is a unique nearest point of C, indicated by P_C u, s.t. ‖u − P_C u‖ ≤ ‖u − v‖ ∀ v ∈ C. The operator P_C is called the metric projection from H onto C.
Proposition 1.
The following hold in real Hilbert spaces:
(i) 
⟨u − v, P_C u − P_C v⟩ ≥ ‖P_C u − P_C v‖² ∀ u, v ∈ H;
(ii) 
⟨u − P_C u, v − P_C u⟩ ≤ 0 ∀ u ∈ H, v ∈ C;
(iii) 
‖u − v‖² ≥ ‖u − P_C u‖² + ‖v − P_C u‖² ∀ u ∈ H, v ∈ C;
(iv) 
‖u − v‖² + 2⟨u − v, v⟩ = ‖u‖² − ‖v‖² ∀ u, v ∈ H;
(v) 
‖λu + (1 − λ)v‖² + λ(1 − λ)‖u − v‖² = λ‖u‖² + (1 − λ)‖v‖² ∀ u, v ∈ H, λ ∈ [0, 1].
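Properties (i)-(v) are easy to sanity-check numerically. The sketch below uses the metric projection onto the closed unit ball of R³, P_C u = u/max{1, ‖u‖}, a convenient closed-form instance chosen by us:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(u):
    """Metric projection onto the closed unit ball of R^3."""
    n = np.linalg.norm(u)
    return u if n <= 1 else u / n

u, w = rng.normal(size=3) * 3, rng.normal(size=3) * 3
Pu, Pw = proj_ball(u), proj_ball(w)
v_in = proj_ball(rng.normal(size=3) * 3)   # a point of C, for (ii)-(iii)

# (i) firm nonexpansiveness of the projection
assert (u - w) @ (Pu - Pw) >= np.linalg.norm(Pu - Pw) ** 2 - 1e-12
# (ii) variational characterization of P_C
assert (u - Pu) @ (v_in - Pu) <= 1e-12
# (iii)
assert np.linalg.norm(u - v_in) ** 2 >= (np.linalg.norm(u - Pu) ** 2
                                         + np.linalg.norm(v_in - Pu) ** 2 - 1e-12)
# (iv) polarization-type identity
assert np.isclose(np.linalg.norm(u - w) ** 2 + 2 * (u - w) @ w,
                  np.linalg.norm(u) ** 2 - np.linalg.norm(w) ** 2)
# (v) convex-combination identity
lam = 0.3
assert np.isclose(np.linalg.norm(lam * u + (1 - lam) * w) ** 2
                  + lam * (1 - lam) * np.linalg.norm(u - w) ** 2,
                  lam * np.linalg.norm(u) ** 2 + (1 - lam) * np.linalg.norm(w) ** 2)
```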
An operator S : H → H is called averaged if ∃ α ∈ (0, 1) s.t. S = (1 − α)I + αT, where I is the identity operator of H and T : H → H is a nonexpansive operator. In this case, S is also called α-averaged. It is clear that an averaged operator S is nonexpansive and Fix(S) = Fix(T).
Lemma 1.
[2] If the mappings {T_i}_{i=1}^N defined on H are averaged and have a common fixed point, then ∩_{i=1}^N Fix(T_i) = Fix(T₁T₂⋯T_N).
The next result immediately follows from the subdifferential inequality of the function ‖·‖²/2.
Lemma 2.
The following inequality holds:
‖u + v‖² ≤ ‖u‖² + 2⟨v, u + v⟩ ∀ u, v ∈ H.
Lemma 3.
[39] Assume that the mapping A is pseudomonotone and continuous on C. Then, for a given point u ∈ C, the following relation holds: ⟨Au, v − u⟩ ≥ 0 ∀ v ∈ C ⇔ ⟨Av, v − u⟩ ≥ 0 ∀ v ∈ C.
Lemma 4.
[40] Let {t_n} be a sequence in [0, +∞) satisfying t_{n+1} ≤ s_n b_n + (1 − s_n)t_n ∀ n ≥ 1, where {s_n} and {b_n} lie in ℝ := (−∞, ∞) and satisfy (a) {s_n} ⊂ [0, 1] and ∑_{n=1}^∞ s_n = ∞, and (b) lim sup_{n→∞} b_n ≤ 0 or ∑_{n=1}^∞ |s_n b_n| < ∞. Then t_n → 0 as n → ∞.
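The mechanism of Lemma 4 can be illustrated with a toy recursion (the particular choices s_n = 1/(n + 1) and b_n = (n + 1)^(-1/2) are our own): condition (a) makes the damping accumulate, while b_n → 0 forces t_n to track b_n downward.

```python
# Run t_{n+1} = s_n b_n + (1 - s_n) t_n with sum s_n = infinity and b_n -> 0;
# Lemma 4 predicts t_n -> 0 regardless of the (large) starting value.
t = 5.0
for n in range(1, 200_000):
    s = 1.0 / (n + 1)            # {s_n} in [0, 1], divergent series
    b = (n + 1) ** -0.5          # b_n -> 0, so limsup b_n <= 0
    t = s * b + (1 - s) * t
print(t)
```

The decay is slow, roughly of the order of b_n, which matches the feel of the rate one gets from α_n-driven estimates later in the paper.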
Definition 1.
An operator S : C → H is called ζ-strictly pseudocontractive iff ∃ ζ ∈ [0, 1) s.t. ‖Su − Sv‖² ≤ ‖u − v‖² + ζ‖(I − S)u − (I − S)v‖² ∀ u, v ∈ C.
Lemma 5.
[41] Assume that S : C → H is ζ-strictly pseudocontractive. Define T : C → H by Tu = μu + (1 − μ)Su ∀ u ∈ C. If μ ∈ [ζ, 1), then T is nonexpansive and Fix(T) = Fix(S).
Lemma 6.
[42] Let λ ∈ (0, 1], let S : C → H be nonexpansive, and let S^λ : C → H be defined as S^λ u := Su − λμF(Su) ∀ u ∈ C, where F is a κ-Lipschitzian and η-strongly monotone self-mapping on H. Then S^λ is a contractive map provided 0 < μ < 2η/κ², i.e., ‖S^λ u − S^λ v‖ ≤ (1 − λτ)‖u − v‖ ∀ u, v ∈ C, where τ = 1 − √(1 − μ(2η − μκ²)) ∈ (0, 1].
Lemma 7.
[43] Assume that the Banach space X admits a weakly continuous duality mapping, the subset C ⊂ X is nonempty, convex, and closed, and the asymptotically nonexpansive mapping S : C → C has a fixed point. Then I − S is demiclosed at zero, i.e., if the sequence {u_n} ⊂ C satisfies u_n ⇀ u ∈ C and ‖u_n − Su_n‖ → 0, then u ∈ Fix(S).

3. Main Results

In this section, we always suppose the following conditions.
  • T is an asymptotically nonexpansive operator on H with sequence {θ_n}, and {T_i}_{i=1}^N are N nonexpansive operators on H.
  • A is L-Lipschitzian, pseudomonotone on H, and sequentially weakly continuous on C, s.t. Ω = ∩_{i=0}^N Fix(T_i) ∩ VI(C, A) ≠ ∅ with T₀ := T.
  • f is a contractive map on H with coefficient δ ∈ [0, 1), and F is κ-Lipschitzian, η-strongly monotone on H.
  • νδ < τ := 1 − √(1 − ρ(2η − ρκ²)) for ν ≥ 0 and ρ ∈ (0, 2η/κ²).
  • T_{i,n} := (1 − δ_{i,n})I + δ_{i,n}T_i and δ_{i,n} ∈ (0, 1) for i = 1, 2, …, N.
  • {σ_n} ⊂ [0, 1] and {α_n}, {β_n}, {γ_n} ⊂ (0, 1) such that
    (i)
    sup_{n≥1} σ_n/α_n < ∞ and lim_{n→∞} θ_n/α_n = 0;
    (ii)
    ∑_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = 0;
    (iii)
    {β_n} ⊂ [σ, 1) and lim_{n→∞} β_n = β < 1;
    (iv)
    lim sup_{n→∞} γ_n < 1, lim inf_{n→∞} γ_n > 0 and α_n + γ_n ≤ 1 ∀ n ≥ 1. For example, take
    α_n = 1/(n + 1), σ_n = 1/(n + 1)² = θ_n, β_n = n/(2(n + 1)), γ_n = n/(4(n + 1)).
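The example sequences can be checked against conditions (i)-(iv) mechanically; the following plain script (the finite horizon N is our own choice) verifies the stated requirements:

```python
# Example parameter sequences from conditions (i)-(iv):
alpha = lambda n: 1.0 / (n + 1)
sigma = lambda n: 1.0 / (n + 1) ** 2      # also serves as theta_n
beta  = lambda n: n / (2.0 * (n + 1))
gamma = lambda n: n / (4.0 * (n + 1))

N = 10_000
# (i) sup sigma_n/alpha_n < infinity and theta_n/alpha_n -> 0
ratios = [sigma(n) / alpha(n) for n in range(1, N)]
assert max(ratios) <= 1.0 and ratios[-1] < 1e-3
# (ii) alpha_n -> 0 (the series sum alpha_n diverges like the harmonic series)
assert alpha(N) < 1e-3
# (iii) beta_n -> 1/2 < 1
assert abs(beta(N) - 0.5) < 1e-3
# (iv) gamma_n in (0, 1) with limit 1/4, and alpha_n + gamma_n <= 1
assert all(0 < gamma(n) < 1 and alpha(n) + gamma(n) <= 1 for n in range(1, N))
```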
Remark 1.
For Step 2 in Algorithm 3, the composite mapping T_{N,n}T_{N−1,n}⋯T_{1,n}, with T_{i,n} := (1 − δ_{i,n})I + δ_{i,n}T_i and δ_{i,n} ∈ (0, 1) for i = 1, 2, …, N, has the following property:
∩_{i=1}^N Fix(T_i) = ∩_{i=1}^N Fix(T_{i,n}) = Fix(T_{N,n}T_{N−1,n}⋯T_{1,n}) ∀ n ≥ 1,
due to Lemmas 1 and 5.
Algorithm 3: MISEA I
Initial Step: Given x₀, x₁ ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iteration Steps: Compute x_{n+1} as follows.
 Step 1. Put u_n = x_n − σ_n(x_{n−1} − x_n) and calculate y_n = P_C(u_n − ℓ_n Au_n), where ℓ_n is chosen to be the largest ℓ ∈ {γ, γl, γl², …} satisfying
ℓ‖Au_n − Ay_n‖ ≤ μ‖u_n − y_n‖. (6)
 Step 2. Calculate z_n = β_n x_n + (1 − β_n)T_{N,n}T_{N−1,n}⋯T_{1,n}P_{C_n}(u_n − ℓ_n Ay_n) with C_n := {u ∈ H : ⟨u_n − ℓ_n Au_n − y_n, u − y_n⟩ ≤ 0}.
 Step 3. Calculate
x_{n+1} = α_n νf(x_n) + γ_n x_n + ((1 − γ_n)I − α_n ρF)T^n z_n. (7)
 Update n := n + 1 and return to Step 1.
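For orientation only, here is a heavily simplified numerical sketch of Algorithm 3 (MISEA I). It is not the general method: we take T = I (so θ_n = 0 and T^n z_n = z_n), N = 1 with T₁ = P_C, F = I, ν = ρ = 1, and f(x) = 0.1x, over the toy pseudomonotone (indeed strongly monotone) operator A(u) = Mu + q on C = [0, 1]²; every one of these choices is our own assumption.

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda u: M @ u + q
proj_C = lambda u: np.clip(u, 0.0, 1.0)
f = lambda x: 0.1 * x                 # delta-contraction with delta = 0.1
gamma_, l, mu = 1.0, 0.5, 0.4         # linesearch constants
delta1 = 0.5                          # T_{1,n} = (1 - delta1) I + delta1 T_1

def linesearch(u):
    """Largest ell in {gamma, gamma*l, ...} with ell*||Au - Ay|| <= mu*||u - y||."""
    ell = gamma_
    while True:
        y = proj_C(u - ell * A(u))
        if ell * np.linalg.norm(A(u) - A(y)) <= mu * np.linalg.norm(u - y):
            return ell, y
        ell *= l

x_prev, x = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for n in range(1, 3000):
    alpha, sigma = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    beta, gamma_n = n / (2.0 * (n + 1)), n / (4.0 * (n + 1))
    u = x - sigma * (x_prev - x)                       # mildly inertial step
    ell, y = linesearch(u)
    a = u - ell * A(u) - y                             # normal of half-space C_n
    w = u - ell * A(y)
    t = a @ (w - y)
    v = w if (a @ a) < 1e-16 or t <= 0 else w - (t / (a @ a)) * a
    T1v = (1 - delta1) * v + delta1 * proj_C(v)        # T_{1,n} v
    z = beta * x + (1 - beta) * T1v
    # x_{n+1} = alpha nu f(x) + gamma_n x + ((1-gamma_n)I - alpha rho F) z
    x_prev, x = x, alpha * f(x) + gamma_n * x + (1 - gamma_n - alpha) * z
print(x)
```

With these data, Ω = VI(C, A) is the singleton {(1/3, 1/3)}, which is therefore also the unique solution of the HVI, so the iterates should drift to it as α_n → 0.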
Lemma 8.
The Armijo-like search rule (6) is well defined, and min{γ, μl/L} ≤ ℓ_n ≤ γ.
Proof. 
As A is L-Lipschitzian, we have ‖Au_n − A P_C(u_n − γl^m Au_n)‖ ≤ L‖u_n − P_C(u_n − γl^m Au_n)‖, so (6) is valid whenever γl^m ≤ μ/L. This means that ℓ_n is well defined. It is clear that ℓ_n ≤ γ. In the case of ℓ_n = γ, the inequality holds. In the case of ℓ_n < γ, the previous trial step ℓ_n/l violated (6), i.e., (ℓ_n/l)‖Au_n − A P_C(u_n − (ℓ_n/l)Au_n)‖ > μ‖u_n − P_C(u_n − (ℓ_n/l)Au_n)‖. Thus, from the L-Lipschitzian property of A, we get ℓ_n > μl/L. Consequently, the inequality holds. □
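Lemma 8 can also be observed numerically: the sketch below (toy operator and the constants γ, l, μ are our own choices) implements the backtracking rule (6) and checks min{γ, μl/L} ≤ ℓ_n ≤ γ at random points.

```python
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
A = lambda u: M @ u + np.array([-1.0, -1.0])
L = np.linalg.norm(M, 2)                 # Lipschitz constant of A
proj_C = lambda u: np.clip(u, 0.0, 1.0)
gamma_, l, mu = 1.0, 0.5, 0.4

def armijo(u):
    """Largest ell in {gamma, gamma*l, ...} satisfying rule (6)."""
    ell = gamma_
    while True:
        y = proj_C(u - ell * A(u))
        # small tolerance guards against ties in floating point
        if ell * np.linalg.norm(A(u) - A(y)) <= mu * np.linalg.norm(u - y) + 1e-15:
            return ell
        ell *= l

lower = min(gamma_, mu * l / L)          # lower bound from Lemma 8
for _ in range(100):
    ell = armijo(rng.uniform(-2, 2, size=2))
    assert lower - 1e-15 <= ell <= gamma_
```

The loop always terminates because any trial step ℓ ≤ μ/L satisfies ℓ‖Au − Ay‖ ≤ ℓL‖u − y‖ ≤ μ‖u − y‖.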
Lemma 9.
Let {u_n}, {y_n}, {z_n} be the sequences generated by Algorithm 3. Then
‖z_n − ω‖² ≤ β_n‖x_n − ω‖² + (1 − β_n)‖u_n − ω‖² − (1 − β_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ∀ ω ∈ Ω, n ≥ 1, (8)
where v_n := P_{C_n}(u_n − ℓ_n Ay_n).
Proof. 
First, take an arbitrary p ∈ Ω ⊂ C ⊂ C_n. We note that
2‖v_n − p‖² = 2‖P_{C_n}(u_n − ℓ_n Ay_n) − P_{C_n}p‖² ≤ 2⟨v_n − p, u_n − ℓ_n Ay_n − p⟩ = ‖v_n − p‖² + ‖u_n − p‖² − ‖v_n − u_n‖² − 2⟨v_n − p, ℓ_n Ay_n⟩.
So, it follows that ‖v_n − p‖² ≤ ‖u_n − p‖² − ‖v_n − u_n‖² − 2⟨v_n − p, ℓ_n Ay_n⟩, which together with (6) and the pseudomonotonicity of A yields ⟨Ay_n, y_n − p⟩ ≥ 0 and
‖v_n − p‖² ≤ ‖u_n − p‖² − ‖v_n − u_n‖² + 2ℓ_n(⟨Ay_n, p − y_n⟩ + ⟨Ay_n, y_n − v_n⟩) ≤ ‖u_n − p‖² − ‖v_n − u_n‖² + 2ℓ_n⟨Ay_n, y_n − v_n⟩ = ‖u_n − p‖² − ‖v_n − y_n‖² − ‖y_n − u_n‖² + 2⟨u_n − ℓ_n Ay_n − y_n, v_n − y_n⟩. (9)
As v_n = P_{C_n}(u_n − ℓ_n Ay_n) with C_n := {u ∈ H : ⟨u_n − ℓ_n Au_n − y_n, u − y_n⟩ ≤ 0}, we have ⟨u_n − ℓ_n Au_n − y_n, v_n − y_n⟩ ≤ 0, which together with (6) implies that
2⟨u_n − ℓ_n Ay_n − y_n, v_n − y_n⟩ = 2⟨u_n − ℓ_n Au_n − y_n, v_n − y_n⟩ + 2ℓ_n⟨Au_n − Ay_n, v_n − y_n⟩ ≤ 2μ‖u_n − y_n‖‖v_n − y_n‖ ≤ μ(‖u_n − y_n‖² + ‖v_n − y_n‖²).
Therefore, substituting the last inequality into (9), we obtain
‖v_n − p‖² ≤ ‖u_n − p‖² − (1 − μ)‖u_n − y_n‖² − (1 − μ)‖v_n − y_n‖² ∀ p ∈ Ω, (10)
which together with Algorithm 3 and Fix(T_{N,n}T_{N−1,n}⋯T_{1,n}) = ∩_{i=1}^N Fix(T_{i,n}) = ∩_{i=1}^N Fix(T_i) (due to Lemmas 1 and 5) implies that for all ω ∈ Ω,
‖z_n − ω‖² ≤ β_n‖x_n − ω‖² + (1 − β_n)‖T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − ω‖² ≤ β_n‖x_n − ω‖² + (1 − β_n)‖v_n − ω‖² ≤ β_n‖x_n − ω‖² + (1 − β_n)[‖u_n − ω‖² − (1 − μ)‖u_n − y_n‖² − (1 − μ)‖v_n − y_n‖²] = β_n‖x_n − ω‖² + (1 − β_n)‖u_n − ω‖² − (1 − β_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²].
This completes the proof. □
Lemma 10.
Let {u_n}, {x_n}, {y_n}, and {z_n} be bounded vector sequences generated by Algorithm 3. If ‖T^n x_n − T^{n+1}x_n‖ → 0, x_n − x_{n+1} → 0, u_n − y_n → 0, u_n − z_n → 0, and there exists a subsequence {w_{n_k}} ⊂ {u_n} with w_{n_k} ⇀ z ∈ H, then z ∈ Ω.
Proof. 
From Algorithm 3, we get u_n − x_n = σ_n(x_n − x_{n−1}) ∀ n ≥ 1, and therefore ‖u_n − x_n‖ = σ_n‖x_n − x_{n−1}‖ ≤ ‖x_n − x_{n−1}‖. Utilizing the assumption x_n − x_{n+1} → 0, we have u_n − x_n → 0. So, it follows from the assumption u_n − y_n → 0 that ‖x_n − y_n‖ ≤ ‖x_n − u_n‖ + ‖u_n − y_n‖ → 0 (n → ∞). Similarly, according to the assumption u_n − z_n → 0, we get ‖x_n − z_n‖ ≤ ‖x_n − u_n‖ + ‖u_n − z_n‖ → 0 (n → ∞). Furthermore, in terms of Lemma 9 we deduce that for each ω ∈ Ω,
(1 − β_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ≤ β_n‖x_n − ω‖² + (1 − β_n)‖u_n − ω‖² − ‖z_n − ω‖² ≤ β_n‖x_n − ω‖² + (1 − β_n)(‖x_n − ω‖ + ‖x_n − x_{n−1}‖)² − ‖z_n − ω‖² = β_n‖x_n − ω‖² + (1 − β_n)[‖x_n − ω‖² + ‖x_n − x_{n−1}‖(2‖x_n − ω‖ + ‖x_n − x_{n−1}‖)] − ‖z_n − ω‖² ≤ ‖x_n − ω‖² − ‖z_n − ω‖² + ‖x_n − x_{n−1}‖(2‖x_n − ω‖ + ‖x_n − x_{n−1}‖) ≤ (‖x_n − ω‖ + ‖z_n − ω‖)‖x_n − z_n‖ + ‖x_n − x_{n−1}‖(2‖x_n − ω‖ + ‖x_n − x_{n−1}‖).
As lim_{n→∞}(1 − β_n) = 1 − β > 0 and μ ∈ (0, 1), from x_n − x_{n+1} → 0, x_n − z_n → 0, and the boundedness of {x_n}, {z_n} we get
lim_{n→∞} ‖u_n − y_n‖ = 0 and lim_{n→∞} ‖v_n − y_n‖ = 0.
Thus we obtain ‖x_n − v_n‖ ≤ ‖x_n − u_n‖ + ‖u_n − y_n‖ + ‖y_n − v_n‖ → 0 (n → ∞).
Now, according to (7) in Algorithm 3, we have
x_{n+1} − x_n = (1 − γ_n)(T^n z_n − x_n) − α_n ρF T^n z_n + α_n νf(x_n) = (1 − γ_n)(T^n z_n − T^n x_n) + (1 − γ_n)(T^n x_n − x_n) − α_n ρF T^n z_n + α_n νf(x_n).
So it follows that
(1 − γ_n)‖T^n x_n − x_n‖ = ‖x_{n+1} − x_n − α_n νf(x_n) − (1 − γ_n)(T^n z_n − T^n x_n) + α_n ρF T^n z_n‖ ≤ ‖x_{n+1} − x_n‖ + α_n(ν‖f(x_n)‖ + ρ‖F T^n z_n‖) + (1 − γ_n)‖T^n z_n − T^n x_n‖ ≤ ‖x_{n+1} − x_n‖ + α_n(ν‖f(x_n)‖ + ρ‖F T^n z_n‖) + (1 + θ_n)‖z_n − x_n‖.
Since lim inf_{n→∞}(1 − γ_n) > 0, α_n → 0, θ_n → 0, x_n − x_{n+1} → 0, and x_n − z_n → 0, from the boundedness of {x_n}, {z_n} and the Lipschitz continuity of f, F, T, we infer that
lim_{n→∞} ‖x_n − T^n x_n‖ = 0. (11)
Also, let the mapping W : H → H be defined as Wx := βx + (1 − β)T_{N,n}T_{N−1,n}⋯T_{1,n}x, where β ∈ [σ, 1). By Lemma 5 we know that W is a nonexpansive self-mapping on H with Fix(W) = ∩_{i=1}^N Fix(T_i). We observe that
‖Wx_n − x_n‖ ≤ ‖Wx_n − z_n‖ + ‖z_n − x_n‖ = ‖(β − β_n)(x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n) + (1 − β_n)(T_{N,n}T_{N−1,n}⋯T_{1,n}x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}v_n)‖ + ‖z_n − x_n‖ ≤ |β − β_n|‖x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n‖ + (1 − β_n)‖T_{N,n}T_{N−1,n}⋯T_{1,n}x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}v_n‖ + ‖z_n − x_n‖ ≤ |β − β_n|‖x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n‖ + ‖x_n − v_n‖ + ‖z_n − x_n‖.
Since {x_n} is bounded and the composite T_{N,n}T_{N−1,n}⋯T_{1,n} is nonexpansive, from lim_{n→∞} β_n = β, x_n − v_n → 0, and x_n − z_n → 0 we deduce that
lim_{n→∞} ‖x_n − Wx_n‖ = 0. (12)
Noticing y_n = P_C(u_n − ℓ_n Au_n), we get ⟨u_n − ℓ_n Au_n − y_n, x − y_n⟩ ≤ 0 ∀ x ∈ C, and hence
(1/ℓ_n)⟨u_n − y_n, x − y_n⟩ + ⟨Au_n, y_n − u_n⟩ ≤ ⟨Au_n, x − u_n⟩ ∀ x ∈ C. (13)
Then, by the boundedness of {u_{n_k}} and the Lipschitzian property of A, we know that {Au_{n_k}} is bounded. Also, from u_n − y_n → 0, we have that {y_{n_k}} is bounded as well. Observe that ℓ_n ≥ min{γ, μl/L}. So, from (13), it follows that lim inf_{k→∞} ⟨Au_{n_k}, x − u_{n_k}⟩ ≥ 0 ∀ x ∈ C. Moreover, note that ⟨Ay_n, x − y_n⟩ = ⟨Ay_n − Au_n, x − u_n⟩ + ⟨Au_n, x − u_n⟩ + ⟨Ay_n, u_n − y_n⟩. Since u_n − y_n → 0, from the L-Lipschitzian property of A we get ‖Au_n − Ay_n‖ → 0, which together with (13) gives lim inf_{k→∞} ⟨Ay_{n_k}, x − y_{n_k}⟩ ≥ 0 ∀ x ∈ C.
We below claim that x_n − Tx_n → 0. Indeed, observe that
‖Tx_n − x_n‖ ≤ ‖Tx_n − T^{n+1}x_n‖ + ‖T^{n+1}x_n − T^n x_n‖ + ‖T^n x_n − x_n‖ ≤ (1 + θ₁)‖x_n − T^n x_n‖ + ‖T^{n+1}x_n − T^n x_n‖ + ‖T^n x_n − x_n‖ = (2 + θ₁)‖x_n − T^n x_n‖ + ‖T^{n+1}x_n − T^n x_n‖.
Therefore, from (11) and the assumption ‖T^n x_n − T^{n+1}x_n‖ → 0, we get
lim_{n→∞} ‖x_n − Tx_n‖ = 0. (14)
We now select a sequence {ϵ_k} ⊂ (0, 1) s.t. ϵ_k ↓ 0 as k → ∞. For every k ≥ 1, we denote by m_k the smallest natural number s.t.
⟨Ay_{n_j}, x − y_{n_j}⟩ + ϵ_k ≥ 0 ∀ j ≥ m_k. (15)
As {ϵ_k} is decreasing, {m_k} is obviously increasing. Considering that {y_{m_k}} ⊂ C ensures Ay_{m_k} ≠ 0 ∀ k ≥ 1, we put ν_{m_k} = Ay_{m_k}/‖Ay_{m_k}‖², so that ⟨Ay_{m_k}, ν_{m_k}⟩ = 1 ∀ k ≥ 1. Therefore, from (15), we have ⟨Ay_{m_k}, x + ϵ_k ν_{m_k} − y_{m_k}⟩ ≥ 0 ∀ k ≥ 1. Also, from the pseudomonotonicity of A we get ⟨A(x + ϵ_k ν_{m_k}), x + ϵ_k ν_{m_k} − y_{m_k}⟩ ≥ 0 ∀ k ≥ 1. This means that
⟨Ax, x − y_{m_k}⟩ ≥ ⟨Ax − A(x + ϵ_k ν_{m_k}), x + ϵ_k ν_{m_k} − y_{m_k}⟩ − ϵ_k⟨Ax, ν_{m_k}⟩ ∀ k ≥ 1. (16)
We show that lim_{k→∞} ϵ_k ν_{m_k} = 0. In fact, from u_{n_k} ⇀ z and u_n − y_n → 0, we get y_{n_k} ⇀ z. Hence, {y_n} ⊂ C ensures z ∈ C. Also, since A is sequentially weakly continuous, we infer that Ay_{n_k} ⇀ Az. So, we get Az ≠ 0 (otherwise, z is a solution). Utilizing the sequentially weak lower semicontinuity of the norm ‖·‖, we have 0 < ‖Az‖ ≤ lim inf_{k→∞} ‖Ay_{n_k}‖. Since {y_{m_k}} ⊂ {y_{n_k}} and ϵ_k → 0 as k → ∞, we deduce that 0 ≤ lim sup_{k→∞} ‖ϵ_k ν_{m_k}‖ = lim sup_{k→∞} (ϵ_k/‖Ay_{m_k}‖) ≤ lim sup_{k→∞} ϵ_k / lim inf_{k→∞} ‖Ay_{n_k}‖ = 0. Thus we have ϵ_k ν_{m_k} → 0.
Finally, we claim z ∈ Ω. In fact, from u_n − x_n → 0 and u_{n_k} ⇀ z, we have x_{n_k} ⇀ z. By (14) we get x_{n_k} − Tx_{n_k} → 0. Because Lemma 7 ensures the demiclosedness of I − T at zero, we have z ∈ Fix(T). Moreover, using (12) we get x_{n_k} − Wx_{n_k} → 0, and Lemma 7 ensures that I − W is demiclosed at zero; so (I − W)z = 0, i.e., z ∈ Fix(W) = ∩_{i=1}^N Fix(T_i). In addition, letting k → ∞, we conclude that the right-hand side of (16) tends to zero, by the Lipschitzian property of A, the boundedness of {y_{m_k}}, {ν_{m_k}}, and the limit lim_{k→∞} ϵ_k ν_{m_k} = 0. Consequently, we get ⟨Ax, x − z⟩ = lim inf_{k→∞} ⟨Ax, x − y_{m_k}⟩ ≥ 0 ∀ x ∈ C. By Lemma 3 we have z ∈ VI(C, A). So, z ∈ ∩_{i=0}^N Fix(T_i) ∩ VI(C, A) = Ω. □
Remark 2.
It is clear that the boundedness assumption on the generated sequences in Lemma 10 can be dispensed with when T is the identity.
Theorem 1.
Assume that the sequence {x_n} constructed by Algorithm 3 satisfies ‖T^n x_n − T^{n+1}x_n‖ → 0. Then
x_n → x* ∈ Ω ⇔ x_n − x_{n+1} → 0 and x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n → 0,
where x* ∈ Ω is the unique solution of the HVI: ⟨(νf − ρF)x*, ω − x*⟩ ≤ 0 ∀ ω ∈ Ω.
Proof. 
We first note that lim sup_{n→∞} γ_n < 1 and lim inf_{n→∞} γ_n > 0. Then we may suppose that {γ_n} ⊂ [a, b] ⊂ (0, 1). We show that P_Ω(νf + I − ρF) is a contractive map. In fact, using Lemma 6 we get
‖P_Ω(νf + I − ρF)u − P_Ω(νf + I − ρF)v‖ ≤ ν‖f(u) − f(v)‖ + ‖(I − ρF)u − (I − ρF)v‖ ≤ νδ‖u − v‖ + (1 − τ)‖u − v‖ = [1 − (τ − νδ)]‖u − v‖ ∀ u, v ∈ H.
This means that P_Ω(νf + I − ρF) has a unique fixed point x* ∈ H, i.e., x* = P_Ω(νf + I − ρF)x*. Accordingly, there is a unique solution x* ∈ Ω = ∩_{i=0}^N Fix(T_i) ∩ VI(C, A) of the HVI
⟨(νf − ρF)x*, ω − x*⟩ ≤ 0 ∀ ω ∈ Ω. (17)
It is now easy to see that the necessity of the theorem is valid. Indeed, if x_n → x* ∈ Ω = ∩_{i=0}^N Fix(T_i) ∩ VI(C, A), then T₁x* = x*, …, T_N x* = x*, which together with ∩_{i=1}^N Fix(T_i) = ∩_{i=1}^N Fix(T_{i,n}) = Fix(T_{N,n}T_{N−1,n}⋯T_{1,n}) (due to Lemmas 1 and 5) implies that ‖x_n − x_{n+1}‖ ≤ ‖x_n − x*‖ + ‖x_{n+1} − x*‖ → 0 (n → ∞), and
‖x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n‖ ≤ ‖x_n − x*‖ + ‖T_{N,n}T_{N−1,n}⋯T_{1,n}x_n − x*‖ ≤ ‖x_n − x*‖ + ‖x_n − x*‖ = 2‖x_n − x*‖ → 0 (n → ∞).
We next prove the sufficiency of the theorem. To this end, we suppose lim_{n→∞}(‖x_n − x_{n+1}‖ + ‖x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n‖) = 0 and carry out the proof through the following steps.
Step 1. We claim that {x_n} is bounded. In fact, noticing lim_{n→∞} θ_n/α_n = 0, we know that θ_n ≤ α_n(τ − νδ)/2 ∀ n ≥ n₀ for some n₀ ≥ 1. Therefore, we have that for all n ≥ n₀,
α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n) ≤ 1 − α_n(τ − νδ) + θ_n ≤ 1 − α_n(τ − νδ)/2. (18)
Let p be an arbitrary point of Ω = ∩_{i=0}^N Fix(T_i) ∩ VI(C, A). Then Tp = p and T_i p = p for i = 1, …, N, and (10) is true, that is,
‖v_n − p‖² + (1 − μ)‖u_n − y_n‖² + (1 − μ)‖v_n − y_n‖² ≤ ‖u_n − p‖². (19)
Thus, we obtain
‖v_n − p‖ ≤ ‖u_n − p‖ ∀ n ≥ 1. (20)
From the definition of u_n, we have
‖u_n − p‖ ≤ ‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖ = ‖x_n − p‖ + α_n·(σ_n/α_n)‖x_n − x_{n−1}‖.
From sup_{n≥1} σ_n/α_n < ∞ and sup_{n≥1} ‖x_n − x_{n−1}‖ < ∞, we infer that sup_{n≥1} (σ_n/α_n)‖x_n − x_{n−1}‖ < ∞, which immediately yields that ∃ M₁ > 0 s.t.
(σ_n/α_n)‖x_n − x_{n−1}‖ ≤ M₁ ∀ n ≥ 1. (21)
Using (19)–(21), we obtain
‖v_n − p‖ ≤ ‖u_n − p‖ ≤ ‖x_n − p‖ + α_n M₁ ∀ n ≥ 1. (22)
Accordingly, by Algorithm 3, Lemma 6, and (22) we conclude that for all n ≥ n₀,
‖z_n − p‖ = ‖(1 − β_n)(T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − p) + β_n(x_n − p)‖ ≤ (1 − β_n)‖T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − p‖ + β_n‖x_n − p‖ ≤ (1 − β_n)(‖x_n − p‖ + α_n M₁) + β_n‖x_n − p‖ ≤ ‖x_n − p‖ + α_n M₁, (23)
and therefore
‖x_{n+1} − p‖ = ‖γ_n(x_n − p) + α_n(νf(x_n) − ρFp) + ((1 − γ_n)I − α_n ρF)T^n z_n − ((1 − γ_n)I − α_n ρF)p‖ ≤ α_n νδ‖x_n − p‖ + α_n‖(νf − ρF)p‖ + γ_n‖x_n − p‖ + ‖((1 − γ_n)I − α_n ρF)T^n z_n − ((1 − γ_n)I − α_n ρF)p‖ = α_n νδ‖x_n − p‖ + α_n‖(νf − ρF)p‖ + γ_n‖x_n − p‖ + (1 − γ_n)‖(I − (α_n/(1 − γ_n))ρF)T^n z_n − (I − (α_n/(1 − γ_n))ρF)p‖ ≤ α_n νδ‖x_n − p‖ + α_n‖(νf − ρF)p‖ + γ_n‖x_n − p‖ + (1 − γ_n)(1 − (α_n/(1 − γ_n))τ)(1 + θ_n)‖z_n − p‖ ≤ α_n νδ‖x_n − p‖ + α_n‖(νf − ρF)p‖ + γ_n‖x_n − p‖ + (1 − γ_n − α_n τ)(1 + θ_n)(‖x_n − p‖ + α_n M₁) = [α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n)]‖x_n − p‖ + (1 − γ_n − α_n τ)(1 + θ_n)α_n M₁ + α_n‖(νf − ρF)p‖ ≤ [1 − α_n(τ − νδ)/2]‖x_n − p‖ + (α_n(τ − νδ)/2)·(2(M₁ + ‖(νf − ρF)p‖)/(τ − νδ)) ≤ max{2(M₁ + ‖(νf − ρF)p‖)/(τ − νδ), ‖x_n − p‖}.
By induction, we conclude that ‖x_n − p‖ ≤ max{2(M₁ + ‖(νf − ρF)p‖)/(τ − νδ), ‖x_{n₀} − p‖} ∀ n ≥ n₀. Therefore, the vector sequence {x_n} is bounded.
Step 2. We claim that ∃ M₄ > 0 s.t. ∀ n ≥ n₀,
(1 − γ_n − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n M₄.
In fact, using Lemma 6, Lemma 9, and the convexity of ‖·‖², from α_n + γ_n ≤ 1 we obtain that for all n ≥ n₀,
‖x_{n+1} − p‖² = ‖α_n ν(f(x_n) − f(p)) + γ_n(x_n − p) + ((1 − γ_n)I − α_n ρF)T^n z_n − ((1 − γ_n)I − α_n ρF)p + α_n(νf − ρF)p‖² ≤ ‖α_n ν(f(x_n) − f(p)) + γ_n(x_n − p) + ((1 − γ_n)I − α_n ρF)T^n z_n − ((1 − γ_n)I − α_n ρF)p‖² + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ = ‖α_n ν(f(x_n) − f(p)) + γ_n(x_n − p) + (1 − γ_n)[(I − (α_n/(1 − γ_n))ρF)T^n z_n − (I − (α_n/(1 − γ_n))ρF)p]‖² + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ ≤ [α_n νδ‖x_n − p‖ + γ_n‖x_n − p‖ + (1 − γ_n)(1 − (α_n/(1 − γ_n))τ)(1 + θ_n)‖z_n − p‖]² + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ ≤ α_n νδ‖x_n − p‖² + γ_n‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)‖z_n − p‖² + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ ≤ α_n νδ‖x_n − p‖² + γ_n‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)[β_n‖x_n − p‖² + (1 − β_n)‖u_n − p‖² − (1 − β_n)(1 − μ)(‖u_n − y_n‖² + ‖v_n − y_n‖²)] + α_n M₂, (24)
where sup_{n≥1} 2‖(νf − ρF)p‖‖x_{n+1} − p‖ ≤ M₂ for some M₂ > 0. Also, from (22), we get
‖u_n − p‖² ≤ ‖x_n − p‖² + α_n(2M₁‖x_n − p‖ + α_n M₁²) ≤ ‖x_n − p‖² + α_n M₃, (25)
where sup_{n≥1}{2M₁‖x_n − p‖ + α_n M₁²} ≤ M₃ for some M₃ > 0. Note that α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n) ≤ 1 − α_n(τ − νδ)/2 for all n ≥ n₀. Substituting (25) into (24), we deduce that for all n ≥ n₀,
‖x_{n+1} − p‖² ≤ α_n νδ‖x_n − p‖² + γ_n‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)[β_n‖x_n − p‖² + (1 − β_n)(‖x_n − p‖² + α_n M₃) − (1 − β_n)(1 − μ)(‖u_n − y_n‖² + ‖v_n − y_n‖²)] + α_n M₂ ≤ [α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n)]‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)[α_n M₃ − (1 − β_n)(1 − μ)(‖u_n − y_n‖² + ‖v_n − y_n‖²)] + α_n M₂ ≤ [1 − α_n(τ − νδ)/2]‖x_n − p‖² − (1 − γ_n − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] + α_n M₂ + α_n M₃ ≤ ‖x_n − p‖² − (1 − γ_n − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] + α_n M₄,
where M₄ := M₂ + M₃. This immediately implies that for all n ≥ n₀,
(1 − γ_n − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n M₄. (26)
Step 3. We claim that ∃ M > 0 s.t. ∀ n ≥ n₀,
‖x_{n+1} − p‖² ≤ [1 − α_n(τ − νδ)/2]‖x_n − p‖² + (α_n(τ − νδ)/2)[(4/(τ − νδ))⟨(νf − ρF)p, x_{n+1} − p⟩ + (σ_n/α_n)·(2M/(τ − νδ))‖x_n − x_{n−1}‖].
In fact, we get
‖u_n − p‖² ≤ (‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖)² ≤ ‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖M, (27)
with sup_{n≥1}{2‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖} ≤ M for some M > 0. Note that α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n) ≤ 1 − α_n(τ − νδ)/2 for all n ≥ n₀. Thus, combining (24) and (27), we have that for all n ≥ n₀,
‖x_{n+1} − p‖² ≤ α_n νδ‖x_n − p‖² + γ_n‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)[‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖M] + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ = [α_n νδ + γ_n + (1 − γ_n − α_n τ)(1 + θ_n)]‖x_n − p‖² + (1 − γ_n − α_n τ)(1 + θ_n)σ_n‖x_n − x_{n−1}‖M + 2α_n⟨(νf − ρF)p, x_{n+1} − p⟩ ≤ [1 − α_n(τ − νδ)/2]‖x_n − p‖² + (α_n(τ − νδ)/2)[(4/(τ − νδ))⟨(νf − ρF)p, x_{n+1} − p⟩ + (σ_n/α_n)‖x_n − x_{n−1}‖·(2M/(τ − νδ))]. (28)
Step 4. We claim that x_n → x* ∈ Ω, the unique solution of the HVI (17). In fact, setting p = x*, we obtain from (28) that
‖x_{n+1} − x*‖² ≤ [1 − α_n(τ − νδ)/2]‖x_n − x*‖² + (α_n(τ − νδ)/2)[(4/(τ − νδ))⟨(νf − ρF)x*, x_{n+1} − x*⟩ + (σ_n/α_n)·(2M/(τ − νδ))‖x_n − x_{n−1}‖]. (29)
According to Lemma 4, it is sufficient to prove that lim sup_{n→∞} ⟨(νf − ρF)x*, x_{n+1} − x*⟩ ≤ 0. As x_n − x_{n+1} → 0, α_n → 0, β_n → β < 1, and θ_n → 0, from (26) and {γ_n} ⊂ [a, b] ⊂ (0, 1) we have
lim sup_{n→∞} (1 − b − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ≤ lim sup_{n→∞} (1 − γ_n − α_n τ)(1 − β_n)(1 + θ_n)(1 − μ)[‖u_n − y_n‖² + ‖v_n − y_n‖²] ≤ lim sup_{n→∞} (‖x_n − p‖ + ‖x_{n+1} − p‖)‖x_n − x_{n+1}‖ = 0.
This immediately implies that
lim_{n→∞} ‖u_n − y_n‖ = 0 and lim_{n→∞} ‖v_n − y_n‖ = 0. (30)
In addition, it is clear that ‖u_n − x_n‖ = σ_n‖x_n − x_{n−1}‖ ≤ ‖x_n − x_{n−1}‖ → 0 (n → ∞), and hence ‖x_n − y_n‖ ≤ ‖x_n − u_n‖ + ‖u_n − y_n‖ → 0 (n → ∞). So it follows from (30) that ‖x_n − v_n‖ ≤ ‖x_n − y_n‖ + ‖y_n − v_n‖ → 0 (n → ∞). Thus, from Algorithm 3 and the assumption x_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n → 0, we obtain
‖z_n − x_n‖ = (1 − β_n)‖T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − x_n‖ ≤ ‖T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − x_n‖ ≤ ‖T_{N,n}T_{N−1,n}⋯T_{1,n}v_n − T_{N,n}T_{N−1,n}⋯T_{1,n}x_n‖ + ‖T_{N,n}T_{N−1,n}⋯T_{1,n}x_n − x_n‖ ≤ ‖v_n − x_n‖ + ‖T_{N,n}T_{N−1,n}⋯T_{1,n}x_n − x_n‖ → 0 (n → ∞).
As x_n − y_n → 0, x_n − z_n → 0, and u_n − x_n → 0, we deduce that, as n → ∞,
‖u_n − y_n‖ ≤ ‖u_n − x_n‖ + ‖x_n − y_n‖ → 0 and ‖u_n − z_n‖ ≤ ‖u_n − x_n‖ + ‖x_n − z_n‖ → 0.
On the other hand, from the boundedness of {x_n}, it follows that there exists a subsequence {x_{n_k}} ⊂ {x_n} s.t.
lim sup_{n→∞} ⟨(νf − ρF)x*, x_n − x*⟩ = lim_{k→∞} ⟨(νf − ρF)x*, x_{n_k} − x*⟩. (33)
Utilizing the reflexivity of H and the boundedness of {x_n}, we may suppose that x_{n_k} ⇀ x̃. Therefore, from (33) we get
lim sup_{n→∞} ⟨(νf − ρF)x*, x_n − x*⟩ = ⟨(νf − ρF)x*, x̃ − x*⟩. (34)
It is easy to see from u_n − x_n → 0 and x_{n_k} ⇀ x̃ that w_{n_k} := u_{n_k} ⇀ x̃. Since ‖T^n x_n − T^{n+1}x_n‖ → 0, x_n − x_{n+1} → 0, u_n − y_n → 0, u_n − z_n → 0, and w_{n_k} ⇀ x̃, from Lemma 10 we get x̃ ∈ Ω. Therefore, from (17) and (34), we infer that
lim sup_{n→∞} ⟨(νf − ρF)x*, x_n − x*⟩ = ⟨(νf − ρF)x*, x̃ − x*⟩ ≤ 0,
which together with x_n − x_{n+1} → 0 implies that
lim sup_{n→∞} ⟨(νf − ρF)x*, x_{n+1} − x*⟩ = lim sup_{n→∞} [⟨(νf − ρF)x*, x_{n+1} − x_n⟩ + ⟨(νf − ρF)x*, x_n − x*⟩] = ⟨(νf − ρF)x*, x̃ − x*⟩ ≤ 0.
Observe that $\big\{\frac{\alpha_n(\tau-\nu\delta)}{2}\big\}\subset[0,1]$, $\sum_{n=1}^\infty\frac{\alpha_n(\tau-\nu\delta)}{2}=\infty$, and
$$\limsup_{n\to\infty}\Big[\frac{4}{\tau-\nu\delta}\langle(\nu f-\rho F)x^*,x_{n+1}-x^*\rangle+\frac{\sigma_n}{\alpha_n}\cdot\frac{2M}{\tau-\nu\delta}\|x_n-x_{n-1}\|\Big]\le0.$$
Consequently, by Lemma 4 we obtain from (29) that $\|x_n-x^*\|\to0$ as $n\to\infty$.
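For the reader's convenience, the convergence principle invoked here via Lemma 4 is, presumably, the standard Xu-type lemma on real sequences, which in the present notation reads:
$$a_{n+1}\le(1-\lambda_n)a_n+\lambda_nb_n,\quad\lambda_n\in[0,1],\quad\sum_{n=1}^\infty\lambda_n=\infty,\quad\limsup_{n\to\infty}b_n\le0\ \Longrightarrow\ \lim_{n\to\infty}a_n=0,$$
applied with $a_n=\|x_n-x^*\|^2$, $\lambda_n=\frac{\alpha_n(\tau-\nu\delta)}{2}$, and $b_n$ equal to the bracketed term in (29).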
Next, we introduce another mildly inertial subgradient extragradient algorithm with line-search process.
It is worth noting that Lemmas 8 and 9 remain valid for Algorithm 4.
Algorithm 4: MISEA II
Initial Step: Given $x_0,x_1\in H$ arbitrarily. Let $\gamma>0$, $l\in(0,1)$, $\mu\in(0,1)$.
Iteration Steps: Compute $x_{n+1}$ as follows:
 Step 1. Put $u_n=x_n-\sigma_n(x_{n-1}-x_n)$ and calculate $y_n=P_C(u_n-\ell_nAu_n)$, where $\ell_n$ is chosen to be the largest $\ell\in\{\gamma,\gamma l,\gamma l^2,\ldots\}$ satisfying
$$\ell_n\|Au_n-Ay_n\|\le\mu\|u_n-y_n\|.$$
 Step 2. Calculate $z_n=\beta_nx_n+(1-\beta_n)T_{N,n}T_{N-1,n}\cdots T_{1,n}P_{C_n}(u_n-\ell_nAy_n)$ with $C_n:=\{u\in H:\langle u_n-\ell_nAu_n-y_n,u-y_n\rangle\le0\}$.
 Step 3. Calculate
$$x_{n+1}=\gamma_nu_n+((1-\gamma_n)I-\alpha_n\rho F)T^nz_n+\alpha_n\nu f(x_n).$$
 Update $n:=n+1$ and return to Step 1.
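In implementations, the Armijo-type linesearch of Step 1 can be sketched as below. This is a minimal scalar sketch, assuming callables `A` and `proj_C` for the operator and the projection $P_C$; the helper name `linesearch_step`, the default parameter values, and the sample data are ours.

```python
import math  # used by the sample operator below

def linesearch_step(u, A, proj_C, gamma=0.5, l=0.5, mu=0.5):
    """Find the largest ell in {gamma, gamma*l, gamma*l**2, ...} with
    ell*|A(u) - A(y)| <= mu*|u - y|, where y = P_C(u - ell*A(u)).
    For an L-Lipschitz A the loop terminates once ell <= mu/L."""
    ell = gamma
    Au = A(u)
    while True:
        y = proj_C(u - ell * Au)
        if ell * abs(Au - A(y)) <= mu * abs(u - y):
            return ell, y
        ell *= l

# Sample data: a Lipschitz operator and an interval constraint.
A = lambda x: 1 / (1 + abs(math.sin(x))) - 1 / (1 + abs(x))
proj_C = lambda t: min(max(t, -1.0), 3.0)
ell, y = linesearch_step(2.0, A, proj_C)
```

Since the trial step is multiplied by $l\in(0,1)$ at each rejection, the search needs only finitely many operator evaluations per iteration.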
Theorem 2.
Assume that the sequence $\{x_n\}$ constructed by Algorithm 4 satisfies $T^nx_n-T^{n+1}x_n\to0$. Then,
$$x_n\to x^*\in\Omega\iff\|x_n-x_{n+1}\|\to0,\ \|x_n-T_{N,n}T_{N-1,n}\cdots T_{1,n}x_n\|\to0,$$
where $x^*\in\Omega$ is the unique solution to the HVI: $\langle(\nu f-\rho F)x^*,\omega-x^*\rangle\le0$, $\forall\omega\in\Omega$.
Proof. 
Arguing as in the proof of Theorem 1, we obtain that there is a unique solution $x^*\in\Omega=\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$ of the HVI (17), and that the necessity part of the theorem holds.
We now prove the sufficiency. To this end, we suppose $\lim_{n\to\infty}(\|x_n-x_{n+1}\|+\|x_n-T_{N,n}T_{N-1,n}\cdots T_{1,n}x_n\|)=0$ and proceed by the following steps.
Step 1. We claim that $\{x_n\}$ is bounded. In fact, by reasoning similar to that in Step 1 of the proof of Theorem 1, we know that inequalities (18)–(23) hold. Taking into account $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}=0$, we know that $\theta_n\le\frac{\alpha_n(\tau-\nu\delta)}{2}$ for all $n\ge n_0$ for some $n_0\ge1$. Hence we deduce that for all $n\ge n_0$,
$$\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\le1-\alpha_n(\tau-\nu\delta)+\theta_n\le1-\frac{\alpha_n(\tau-\nu\delta)}{2}.$$
Also, from Algorithm 4, Lemma 6, and (22) and (23), we obtain
$$\begin{aligned}\|x_{n+1}-p\|&=\|\gamma_n(u_n-p)+\alpha_n(\nu f(x_n)-\rho Fp)+((1-\gamma_n)I-\alpha_n\rho F)T^nz_n-((1-\gamma_n)I-\alpha_n\rho F)p\|\\
&\le\alpha_n\nu\delta\|x_n-p\|+\alpha_n\|(\nu f-\rho F)p\|+\gamma_n\|u_n-p\|\\
&\quad+(1-\gamma_n)\Big\|\Big(I-\frac{\alpha_n}{1-\gamma_n}\rho F\Big)T^nz_n-\Big(I-\frac{\alpha_n}{1-\gamma_n}\rho F\Big)p\Big\|\\
&\le\alpha_n\nu\delta\|x_n-p\|+\alpha_n\|(\nu f-\rho F)p\|+\gamma_n\|u_n-p\|+(1-\gamma_n)\Big(1-\frac{\alpha_n\tau}{1-\gamma_n}\Big)(1+\theta_n)\|z_n-p\|\\
&\le\alpha_n\nu\delta\|x_n-p\|+\alpha_n\|(\nu f-\rho F)p\|+\gamma_n(\|x_n-p\|+\alpha_nM_1)+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)(\|x_n-p\|+\alpha_nM_1)\\
&=[\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)]\|x_n-p\|+[\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)]\alpha_nM_1+\alpha_n\|(\nu f-\rho F)p\|\\
&\le\Big[1-\frac{\alpha_n(\tau-\nu\delta)}{2}\Big]\|x_n-p\|+\frac{\alpha_n(\tau-\nu\delta)}{2}\cdot\frac{2(M_1+\|(\nu f-\rho F)p\|)}{\tau-\nu\delta}\\
&\le\max\Big\{\frac{2(M_1+\|(\nu f-\rho F)p\|)}{\tau-\nu\delta},\|x_n-p\|\Big\}.\end{aligned}$$
By induction, we conclude that $\|x_n-p\|\le\max\big\{\frac{2(M_1+\|(\nu f-\rho F)p\|)}{\tau-\nu\delta},\|x_{n_0}-p\|\big\}$ for all $n\ge n_0$. Therefore, the sequence $\{x_n\}$ is bounded.
Step 2. We claim that there exists $M_4>0$ s.t. for all $n\ge n_0$,
$$(1-\gamma_n-\alpha_n\tau)(1-\beta_n)(1+\theta_n)(1-\mu)\big[\|u_n-y_n\|^2+\|v_n-y_n\|^2\big]\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+\alpha_nM_4.$$
In fact, using Lemma 6, Lemma 9, and the convexity of $\|\cdot\|^2$, from $\alpha_n+\gamma_n\le1$ we obtain that for all $n\ge n_0$,
$$\begin{aligned}\|x_{n+1}-p\|^2&=\|\gamma_n(u_n-p)+\alpha_n\nu(f(x_n)-f(p))+((1-\gamma_n)I-\alpha_n\rho F)T^nz_n-((1-\gamma_n)I-\alpha_n\rho F)p+\alpha_n(\nu f-\rho F)p\|^2\\
&\le\|\alpha_n\nu(f(x_n)-f(p))+\gamma_n(u_n-p)+((1-\gamma_n)I-\alpha_n\rho F)T^nz_n-((1-\gamma_n)I-\alpha_n\rho F)p\|^2\\
&\quad+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&\le[\alpha_n\nu\delta\|x_n-p\|+\gamma_n\|u_n-p\|+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\|z_n-p\|]^2+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&\le\alpha_n\nu\delta\|x_n-p\|^2+\gamma_n\|u_n-p\|^2+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\|z_n-p\|^2+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&\le\alpha_n\nu\delta\|x_n-p\|^2+\gamma_n\|u_n-p\|^2+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\big[\beta_n\|x_n-p\|^2+(1-\beta_n)\|u_n-p\|^2\\
&\quad-(1-\beta_n)(1-\mu)(\|u_n-y_n\|^2+\|v_n-y_n\|^2)\big]+\alpha_nM_2,\end{aligned}$$
where $\sup_{n\ge1}2\|(\nu f-\rho F)p\|\|x_{n+1}-p\|\le M_2$ for some $M_2>0$. Also, from (22) we have
$$\|u_n-p\|^2\le\|x_n-p\|^2+\alpha_nM_3,$$
where $\sup_{n\ge1}\{2M_1\|x_n-p\|+\alpha_nM_1^2\}\le M_3$ for some $M_3>0$. Note that $\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\le1-\frac{\alpha_n(\tau-\nu\delta)}{2}$ for all $n\ge n_0$. Substituting (40) into (39), we deduce that for all $n\ge n_0$,
$$\begin{aligned}\|x_{n+1}-p\|^2&\le\gamma_n(\|x_n-p\|^2+\alpha_nM_3)+\alpha_n\nu\delta\|x_n-p\|^2+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\big[\|x_n-p\|^2+\alpha_nM_3\\
&\quad-(1-\beta_n)(1-\mu)(\|u_n-y_n\|^2+\|v_n-y_n\|^2)\big]+\alpha_nM_2\\
&=[\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)]\|x_n-p\|^2+\gamma_n\alpha_nM_3\\
&\quad+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)[\alpha_nM_3-(1-\beta_n)(1-\mu)(\|u_n-y_n\|^2+\|v_n-y_n\|^2)]+\alpha_nM_2\\
&\le\Big(1-\frac{\alpha_n(\tau-\nu\delta)}{2}\Big)\|x_n-p\|^2-(1-\gamma_n-\alpha_n\tau)(1-\beta_n)(1+\theta_n)(1-\mu)\big[\|u_n-y_n\|^2+\|v_n-y_n\|^2\big]\\
&\quad+\alpha_nM_2+\alpha_nM_3\\
&\le\|x_n-p\|^2-(1-\gamma_n-\alpha_n\tau)(1-\beta_n)(1+\theta_n)(1-\mu)\big[\|u_n-y_n\|^2+\|v_n-y_n\|^2\big]+\alpha_nM_4,\end{aligned}$$
where $M_4:=M_2+M_3$. This immediately implies that for all $n\ge n_0$,
$$(1-\gamma_n-\alpha_n\tau)(1-\beta_n)(1+\theta_n)(1-\mu)\big[\|u_n-y_n\|^2+\|v_n-y_n\|^2\big]\le\|x_n-p\|^2-\|x_{n+1}-p\|^2+\alpha_nM_4.$$
Step 3. We claim that there exists $M>0$ s.t. for all $n\ge n_0$,
$$\|x_{n+1}-p\|^2\le\Big[1-\frac{\alpha_n(\tau-\nu\delta)}{2}\Big]\|x_n-p\|^2+\frac{\alpha_n(\tau-\nu\delta)}{2}\Big[\frac{4}{\tau-\nu\delta}\langle(\nu f-\rho F)p,x_{n+1}-p\rangle+\frac{\sigma_n}{\alpha_n}\cdot\frac{2M}{\tau-\nu\delta}\|x_n-x_{n-1}\|\Big].$$
In fact, we get
$$\|u_n-p\|^2\le\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M,$$
where $\sup_{n\ge1}\{2\|x_n-p\|+\sigma_n\|x_n-x_{n-1}\|\}\le M$ for some $M>0$. Observe that $\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\le1-\frac{\alpha_n(\tau-\nu\delta)}{2}$ for all $n\ge n_0$. Thus, combining (39) and (42), we have that for all $n\ge n_0$,
$$\begin{aligned}\|x_{n+1}-p\|^2&\le\gamma_n(\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M)+\alpha_n\nu\delta\|x_n-p\|^2\\
&\quad+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\big[\beta_n\|x_n-p\|^2+(1-\beta_n)(\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M)\big]\\
&\quad+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&\le\gamma_n(\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M)+\alpha_n\nu\delta\|x_n-p\|^2\\
&\quad+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\big[\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M\big]+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&=[\alpha_n\nu\delta+\gamma_n+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)]\|x_n-p\|^2+\gamma_n\sigma_n\|x_n-x_{n-1}\|M\\
&\quad+(1-\gamma_n-\alpha_n\tau)(1+\theta_n)\sigma_n\|x_n-x_{n-1}\|M+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&\le\Big[1-\frac{\alpha_n(\tau-\nu\delta)}{2}\Big]\|x_n-p\|^2+\sigma_n\|x_n-x_{n-1}\|M+2\alpha_n\langle(\nu f-\rho F)p,x_{n+1}-p\rangle\\
&=\Big[1-\frac{\alpha_n(\tau-\nu\delta)}{2}\Big]\|x_n-p\|^2+\frac{\alpha_n(\tau-\nu\delta)}{2}\Big[\frac{4\langle(\nu f-\rho F)p,x_{n+1}-p\rangle}{\tau-\nu\delta}+\frac{\sigma_n}{\alpha_n}\cdot\frac{2M\|x_n-x_{n-1}\|}{\tau-\nu\delta}\Big].\end{aligned}$$
Step 4. We claim that $x_n\to x^*\in\Omega$, where $x^*$ is the unique solution to the HVI (17). In fact, arguing as in Step 4 of the proof of Theorem 1, one derives the desired conclusion. □
Example 1.
We now give an example of $T$ satisfying the condition assumed in Theorems 1 and 2. To this end, put $H=\mathbb{R}$, with inner product $\langle a,b\rangle=ab$ and induced norm $\|\cdot\|=|\cdot|$. Let $T:H\to H$ be defined as $Tx:=\sin(\frac{7}{8}x)$ for all $x\in H$. Then $T$ is a contraction with constant $\frac{7}{8}$, and hence a nonexpansive mapping. Thus, $T$ is an asymptotically nonexpansive mapping. As
$$\|T^nx-T^ny\|\le\frac{7}{8}\|T^{n-1}x-T^{n-1}y\|\le\cdots\le\Big(\frac{7}{8}\Big)^n\|x-y\|\quad\forall x,y\in H,$$
we know that for any sequence $\{x_n\}\subset H$,
$$\|T^{n+1}x_n-T^nx_n\|\le\Big(\frac{7}{8}\Big)^{n-1}\|T^2x_n-Tx_n\|=\Big(\frac{7}{8}\Big)^{n-1}\Big\|\sin\Big(\frac{7}{8}Tx_n\Big)-\sin\Big(\frac{7}{8}x_n\Big)\Big\|\le2\Big(\frac{7}{8}\Big)^{n-1}\to0$$
as $n\to\infty$. That is, $T^nx_n-T^{n+1}x_n\to0$ $(n\to\infty)$.
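The decay of the gap $\|T^{n+1}x_n-T^nx_n\|$ can also be checked numerically. The sketch below is ours (the helper `iterate` and the sample point are not part of the example):

```python
import math

def T(x):
    # Example 1: T x = sin(7x/8), a 7/8-contraction on the real line.
    return math.sin(7.0 * x / 8.0)

def iterate(f, x, n):
    # Compute the n-fold composition f^n applied to x.
    for _ in range(n):
        x = f(x)
    return x

# For an arbitrary point standing in for the n-th iterate x_n, the gap
# |T^(n+1) x_n - T^n x_n| obeys the bound 2*(7/8)**(n-1) derived above.
x_n = 2.0
for n in (5, 10, 20):
    gap = abs(iterate(T, x_n, n + 1) - iterate(T, x_n, n))
    assert gap <= 2.0 * (7.0 / 8.0) ** (n - 1)
```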
Remark 3.
Compared with the corresponding results of Bnouhachem et al. [2], Cai et al. [35], Kraikaew and Saejung [36], and Thong and Hieu [37,38], our results improve and extend them as follows.
(i) 
The problem of finding a point of $\operatorname{VI}(C,A)$ in [36] extends to our problem of finding a point of $\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$, where $T_0:=T$ is asymptotically nonexpansive and $\{T_i\}_{i=1}^N$ is a family of nonexpansive mappings. Accordingly, the Halpern subgradient extragradient method of [36] for solving the VIP extends to our mildly inertial subgradient extragradient algorithms with linesearch process for solving the VIP and the CFPP.
(ii) 
The problem of finding a point of $\operatorname{VI}(C,A)$ in [37] extends to our problem of finding a point of $\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$, where $T_0:=T$ is asymptotically nonexpansive and $\{T_i\}_{i=1}^N$ is a family of nonexpansive mappings. Accordingly, the inertial subgradient extragradient method with weak convergence of [37] for solving the VIP extends to our mildly inertial subgradient extragradient algorithms with linesearch process (which converge in norm) for solving the VIP and the CFPP.
(iii) 
The problem of finding a point of $\operatorname{VI}(C,A)\cap\operatorname{Fix}(T)$ (where $A$ is monotone and $T$ is quasi-nonexpansive) in [38] extends to our problem of finding a point of $\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$, where $T_0:=T$ is asymptotically nonexpansive and $\{T_i\}_{i=1}^N$ is a family of nonexpansive mappings. Accordingly, the inertial subgradient extragradient method with linesearch (which is weakly convergent) of [38] for solving the VIP and the FPP extends to our mildly inertial subgradient extragradient algorithms with linesearch process (which converge in norm) for solving the VIP and the CFPP. It is worth mentioning that the inertial subgradient extragradient method with linesearch process in [38] combines the inertial subgradient extragradient approach of [37] with the Mann iteration method.
(iv) 
The problem of finding a point in the common fixed-point set $\bigcap_{i=1}^N\operatorname{Fix}(T_i)$ of $N$ nonexpansive mappings $\{T_i\}_{i=1}^N$ in [2] extends to our problem of finding a point of $\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$, where $T_0:=T$ is asymptotically nonexpansive and $\{T_i\}_{i=1}^N$ is a family of nonexpansive mappings. Accordingly, the iterative algorithm of [2] for hierarchical FPPs of finitely many nonexpansive mappings (i.e., iterative scheme (3) in this paper) extends to our mildly inertial subgradient extragradient algorithms with linesearch process for solving the VIP and the CFPP. Meanwhile, the restrictions $\limsup_{n\to\infty}\gamma_n<1$, $\liminf_{n\to\infty}\gamma_n>0$ and $\lim_{n\to\infty}|\delta_{n-1,i}-\delta_{n,i}|=0$ for $i=1,\ldots,N$ imposed on (3) are dropped, and the condition $0<\liminf_{n\to\infty}\gamma_n<\limsup_{n\to\infty}\gamma_n<1$ is weakened to $0<\liminf_{n\to\infty}\gamma_n\le\limsup_{n\to\infty}\gamma_n<1$.
(v) 
The problem of finding a point in the common solution set $\Omega$ of the VIPs for two inverse-strongly monotone mappings and the FPP of an asymptotically nonexpansive mapping in [35] extends to our problem of finding a point of $\bigcap_{i=0}^N\operatorname{Fix}(T_i)\cap\operatorname{VI}(C,A)$, where $T_0:=T$ is asymptotically nonexpansive and $\{T_i\}_{i=1}^N$ is a family of nonexpansive mappings. Accordingly, the viscosity implicit rule involving a modified extragradient method of [35] (i.e., iterative scheme (4) in this paper) extends to our mildly inertial subgradient extragradient algorithms with linesearch process for solving the VIP and the CFPP. Moreover, the conditions $\sum_{n=1}^\infty|\alpha_{n+1}-\alpha_n|<\infty$ and $\sum_{n=1}^\infty\|T^{n+1}y_n-T^ny_n\|<\infty$ imposed on (4) are removed, with $\sum_{n=1}^\infty\|T^{n+1}y_n-T^ny_n\|<\infty$ weakened to the assumption $T^{n+1}x_n-T^nx_n\to0$ $(n\to\infty)$.

4. Applications

In this section, our main theorems are applied to the VIP and CFPP in an illustrative example. The initial point $x_0=x_1$ is randomly chosen in $\mathbb{R}$. Take $\nu f(x)=F(x)=\frac{1}{2}x$, $\gamma=l=\mu=\frac{1}{2}$, $\sigma_n=\alpha_n=\frac{1}{n+1}$, $\beta_n=\frac{1}{3}$, $\gamma_n=\frac{1}{2}$, $\nu=\frac{3}{4}$, $f=\frac{2}{3}I$ and $\rho=2$. Then, we know that $\alpha_n+\gamma_n\le1$ for all $n\ge1$, $\nu\delta=\kappa=\eta=\frac{1}{2}$, and
$$\tau=1-\sqrt{1-\rho(2\eta-\rho\kappa^2)}=1-\sqrt{1-2\Big(2\cdot\frac{1}{2}-2\Big(\frac{1}{2}\Big)^2\Big)}=1\in(0,1].$$
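This parameter arithmetic is easy to verify in code (a minimal check using the values above):

```python
import math

# Check: with rho = 2, eta = kappa = 1/2,
# tau = 1 - sqrt(1 - rho*(2*eta - rho*kappa^2)) should equal 1.
rho, eta, kappa = 2.0, 0.5, 0.5
tau = 1.0 - math.sqrt(1.0 - rho * (2.0 * eta - rho * kappa ** 2))
print(tau)  # -> 1.0
```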
We first provide an example of a Lipschitzian, pseudomonotone operator $A$, an asymptotically nonexpansive mapping $T$, and a nonexpansive mapping $T_1$ with $\Omega=\operatorname{Fix}(T)\cap\operatorname{Fix}(T_1)\cap\operatorname{VI}(C,A)$. Let $C=[-1,3]$ and $H=\mathbb{R}$ with inner product $\langle a,b\rangle=ab$ and induced norm $\|\cdot\|=|\cdot|$. Let $A,T,T_1,T_{1,n}:H\to H$ be defined as $Ax:=\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}$, $Tx:=\frac{4}{5}\sin x$, $T_1x:=\sin x$ and $T_{1,n}x:=\frac{3}{8}x+\frac{5}{8}\sin x$ for all $x\in H$, $n\ge1$. Then it is clear that $T_1$ is a nonexpansive mapping on $H$. Moreover, from Lemma 5 we know that $\operatorname{Fix}(T_{1,n})=\operatorname{Fix}(T_1)=\{0\}$ for all $n\ge1$. Now, we first show that $A$ is a Lipschitzian, pseudomonotone operator with $L=2$. In fact, for all $x,y\in H$ we get
$$\begin{aligned}\|Ax-Ay\|&=\Big|\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}-\frac{1}{1+|\sin y|}+\frac{1}{1+|y|}\Big|\\
&\le\Big|\frac{1}{1+|\sin x|}-\frac{1}{1+|\sin y|}\Big|+\Big|\frac{1}{1+|x|}-\frac{1}{1+|y|}\Big|\\
&=\frac{\big||\sin y|-|\sin x|\big|}{(1+|\sin x|)(1+|\sin y|)}+\frac{\big||y|-|x|\big|}{(1+|x|)(1+|y|)}\\
&\le\|\sin x-\sin y\|+\|x-y\|\le2\|x-y\|.\end{aligned}$$
This means that $A$ is Lipschitzian with $L=2$. We next claim that $A$ is pseudomonotone. For any given $x,y\in H$, it is clear that the implication holds:
$$\langle Ax,y-x\rangle=\Big(\frac{1}{1+|\sin x|}-\frac{1}{1+|x|}\Big)(y-x)\ge0\ \Longrightarrow\ \langle Ay,y-x\rangle=\Big(\frac{1}{1+|\sin y|}-\frac{1}{1+|y|}\Big)(y-x)\ge0.$$
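Both properties can be spot-checked numerically on sampled points. The sketch below is ours and is an empirical check, not a proof:

```python
import math
import random

def A(x):
    # The example operator A x = 1/(1 + |sin x|) - 1/(1 + |x|); note A x >= 0
    # since |sin x| <= |x|, which is what makes pseudomonotonicity work.
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

random.seed(0)
pts = [random.uniform(-5.0, 5.0) for _ in range(100)]
for x in pts:
    for y in pts:
        # Lipschitz continuity with constant L = 2.
        assert abs(A(x) - A(y)) <= 2.0 * abs(x - y) + 1e-12
        # Pseudomonotonicity: <Ax, y-x> >= 0 implies <Ay, y-x> >= 0.
        if A(x) * (y - x) >= 0.0:
            assert A(y) * (y - x) >= -1e-12
```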
Furthermore, it is easy to see that $T$ is asymptotically nonexpansive with $\theta_n=(\frac{4}{5})^n$, $n\ge1$, such that $\|T^{n+1}x_n-T^nx_n\|\to0$ as $n\to\infty$. Indeed, we observe that
$$\|T^nx-T^ny\|\le\frac{4}{5}\|T^{n-1}x-T^{n-1}y\|\le\cdots\le\Big(\frac{4}{5}\Big)^n\|x-y\|\le(1+\theta_n)\|x-y\|,$$
and
$$\|T^{n+1}x_n-T^nx_n\|\le\Big(\frac{4}{5}\Big)^{n-1}\|T^2x_n-Tx_n\|=\Big(\frac{4}{5}\Big)^{n-1}\Big\|\frac{4}{5}\sin(Tx_n)-\frac{4}{5}\sin x_n\Big\|\le2\Big(\frac{4}{5}\Big)^n\to0\quad(n\to\infty).$$
It is clear that $\operatorname{Fix}(T)=\{0\}$ and
$$\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}=\lim_{n\to\infty}\frac{(4/5)^n}{1/(n+1)}=0.$$
Therefore, $\Omega=\operatorname{Fix}(T)\cap\operatorname{Fix}(T_1)\cap\operatorname{VI}(C,A)=\{0\}$. In this case, Algorithm 3 can be rewritten as follows:
$$\begin{cases}u_n=x_n+\frac{1}{n+1}(x_n-x_{n-1}),\\ y_n=P_C(u_n-\ell_nAu_n),\\ z_n=\frac{1}{3}x_n+\frac{2}{3}T_{1,n}P_{C_n}(u_n-\ell_nAy_n),\\ x_{n+1}=\frac{1}{n+1}\cdot\frac{1}{2}x_n+\frac{1}{2}x_n+\big(\frac{n}{n+1}-\frac{1}{2}\big)T^nz_n\quad\forall n\ge1,\end{cases}$$
where for every $n\ge1$, $C_n$ and $\ell_n$ are chosen as in Algorithm 3. Then, by Theorem 1, we know that $\{x_n\}$ converges to $0\in\Omega=\operatorname{Fix}(T)\cap\operatorname{Fix}(T_1)\cap\operatorname{VI}(C,A)$ if and only if $|x_n-x_{n+1}|+|x_n-T_{1,n}x_n|\to0$ as $n\to\infty$.
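The scheme above is straightforward to run on a computer. The sketch below implements it with $C=[-1,3]$; all helper names are ours, and the stopping rule is a fixed iteration budget rather than a principled criterion:

```python
import math

def A(x):                      # Lipschitz pseudomonotone operator
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

def T(x):                      # asymptotically nonexpansive mapping
    return 0.8 * math.sin(x)

def T1n(x):                    # T_{1,n} x = (3/8) x + (5/8) sin x
    return 0.375 * x + 0.625 * math.sin(x)

def T_pow(x, n):               # n-fold composition T^n x
    for _ in range(n):
        x = T(x)
    return x

def proj_C(t):                 # P_C for C = [-1, 3]
    return min(max(t, -1.0), 3.0)

def linesearch(u, gamma=0.5, l=0.5, mu=0.5):
    # Largest ell in {gamma, gamma*l, ...} with ell*|Au - Ay| <= mu*|u - y|;
    # terminates since A is 2-Lipschitz (any ell <= mu/2 works).
    ell, Au = gamma, A(u)
    while True:
        y = proj_C(u - ell * Au)
        if ell * abs(Au - A(y)) <= mu * abs(u - y):
            return ell, y
        ell *= l

def proj_Cn(t, g, y):
    # Projection onto the half-space C_n = {v : g*(v - y) <= 0} in R.
    if g > 0.0:
        return min(t, y)
    if g < 0.0:
        return max(t, y)
    return t

x_prev, x = 1.5, 1.5           # x_0 = x_1
for n in range(1, 201):
    u = x + (x - x_prev) / (n + 1)
    ell, y = linesearch(u)
    v = proj_Cn(u - ell * A(y), u - ell * A(u) - y, y)
    z = x / 3.0 + (2.0 / 3.0) * T1n(v)
    x_prev, x = x, x / (2 * (n + 1)) + x / 2 + (n / (n + 1) - 0.5) * T_pow(z, n)
print(abs(x))                  # a value near 0, the unique point of Omega
```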
On the other hand, Algorithm 4 can be rewritten as follows:
$$\begin{cases}u_n=x_n+\frac{1}{n+1}(x_n-x_{n-1}),\\ y_n=P_C(u_n-\ell_nAu_n),\\ z_n=\frac{1}{3}x_n+\frac{2}{3}T_{1,n}P_{C_n}(u_n-\ell_nAy_n),\\ x_{n+1}=\frac{1}{n+1}\cdot\frac{1}{2}x_n+\frac{1}{2}u_n+\big(\frac{n}{n+1}-\frac{1}{2}\big)T^nz_n\quad\forall n\ge1,\end{cases}$$
where for every $n\ge1$, $C_n$ and $\ell_n$ are chosen as in Algorithm 4. Then, by Theorem 2, we know that $\{x_n\}$ converges to $0\in\Omega=\operatorname{Fix}(T)\cap\operatorname{Fix}(T_1)\cap\operatorname{VI}(C,A)$ if and only if $|x_n-x_{n+1}|+|x_n-T_{1,n}x_n|\to0$ as $n\to\infty$.

Author Contributions

These authors contributed equally to this work.

Funding

The first author was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002), and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Ceng, L.C.; Petruşel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–501.
2. Bnouhachem, A.; Ansari, Q.H.; Yao, J.C. An iterative algorithm for hierarchical fixed point problems for a finite family of nonexpansive mappings. Fixed Point Theory Appl. 2015, 2015, 111.
3. Ceng, L.C.; Petruşel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–133.
4. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019.
5. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
6. Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195.
7. Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013, 199.
8. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118.
9. Nguyen, L.V.; Qin, X. Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. 2019.
10. Ceng, L.C.; Postolache, M.; Wen, C.F.; Yao, Y. Variational inequalities approaches to minimization problems with constraints of generalized mixed equilibria and variational inclusions. Mathematics 2019, 7, 270.
11. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
12. Chen, J.; Kobis, E.; Yao, J.C. Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 2019, 181, 411–436.
13. Liu, L.; Qin, X.; Agarwal, R.P. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019.
14. Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31.
15. Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25.
16. Zhao, X. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641.
17. Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618.
18. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
19. Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196.
20. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642.
21. Ceng, L.C.; Liu, Z.; Yao, J.C.; Yao, Y. Optimal control of feedback control systems governed by systems of evolution hemivariational inequalities. Filomat 2018, 32, 5205–5220.
22. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekonomika i Matematicheskie Metody 1976, 12, 747–756.
23. Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419.
24. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
25. Vuong, P.T.; Shehu, Y. Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms 2019, 81, 269–291.
26. Ceng, L.C.; Ansari, Q.H.; Wong, M.M.; Yao, J.C. Mann type hybrid extragradient method for variational inequalities, variational inclusions and fixed point problems. Fixed Point Theory 2012, 13, 403–422.
27. Ceng, L.C.; Wen, C.F. Systems of variational inequalities with hierarchical variational inequality constraints for asymptotically nonexpansive and pseudocontractive mappings. RACSAM 2019, 113, 2431–2447.
28. Shehu, Y.; Dong, Q.; Jiang, D. Single projection method for pseudo-monotone variational inequality in Hilbert spaces. Optimization 2019, 68, 385–409.
29. Shehu, Y.; Iyiola, O.S. Strong convergence result for monotone variational inequalities. Numer. Algorithms 2017, 76, 259–282.
30. Ceng, L.C.; Hadjisavvas, N.; Wong, N.C. Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 2010, 46, 635–646.
31. Ceng, L.C.; Teboulle, M.; Yao, J.C. Weak convergence of an iterative method for pseudomonotone variational inequalities and fixed-point problems. J. Optim. Theory Appl. 2010, 146, 19–31.
32. Ceng, L.C.; Ansari, Q.H.; Wong, N.C.; Yao, J.C. An extragradient-like approximation method for variational inequalities and fixed point problems. Fixed Point Theory Appl. 2011, 2011, 22.
33. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Relaxed extragradient iterative methods for variational inequalities. Appl. Math. Comput. 2011, 218, 1112–1123.
34. Ceng, L.C.; Ansari, Q.H.; Schaible, S. Hybrid extragradient-like methods for generalized mixed equilibrium problems, systems of generalized equilibrium problems and optimization problems. J. Glob. Optim. 2012, 53, 69–96.
35. Cai, G.; Shehu, Y.; Iyiola, O.S. Strong convergence results for variational inequalities and fixed point problems using modified viscosity implicit rules. Numer. Algorithms 2018, 77, 535–558.
36. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
37. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
38. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
39. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
40. Xue, Z.; Zhou, H.; Cho, Y.J. Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 2000, 1, 313–320.
41. Zhou, H. Convergence theorems of fixed points for κ-strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2008, 69, 456–462.
42. Yamada, I. The hybrid steepest-descent method for variational inequalities problems over the intersection of the fixed point sets of nonexpansive mappings. In Inherently Parallel Algorithms in Feasibility and Optimization and Their Applications; Butnariu, D., Censor, Y., Reich, S., Eds.; North-Holland: Amsterdam, The Netherlands, 2001; pp. 473–504.
43. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; Volume 28.
