Article

Inertial-Like Subgradient Extragradient Methods for Variational Inequalities and Fixed Points of Asymptotically Nonexpansive and Strictly Pseudocontractive Mappings

1
Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2
Department of Mathematics, Babes-Bolyai University, Cluj-Napoca 400084, Romania
3
Center for Fundamental Science and Research Center for Nonlinear Analysis and Optimization, Kaohsiung Medical University, Kaohsiung 80708, Taiwan
4
Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung 80708, Taiwan
5
Research Center for Interneural Computing, China Medical University Hospital, Taichung 40402, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 860; https://doi.org/10.3390/math7090860
Submission received: 18 August 2019 / Revised: 9 September 2019 / Accepted: 9 September 2019 / Published: 17 September 2019
(This article belongs to the Special Issue Applied Functional Analysis and Its Applications)

Abstract

Let VIP denote a variational inequality problem with a Lipschitzian, pseudomonotone operator, and let CFPP denote the common fixed-point problem of an asymptotically nonexpansive mapping and a strictly pseudocontractive mapping in a real Hilbert space. Our objective in this article is to establish strong convergence results for solving the VIP and CFPP by utilizing an inertial-like subgradient extragradient method with a line-search process. Under suitable assumptions, it is shown that the sequences generated by such a method converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).

1. Introduction

Throughout this paper we assume that C is a nonempty, closed and convex subset of a real Hilbert space H, whose inner product and induced norm are denoted by ⟨·,·⟩ and ‖·‖, respectively. Moreover, let P_C denote the metric projection of H onto C.
Suppose A : H → H is a mapping. In this paper, we shall consider the following variational inequality (VI) of finding x* ∈ C such that
⟨x − x*, A x*⟩ ≥ 0, ∀x ∈ C. (1)
The set of solutions to Equation (1) is denoted by VI(C, A). In 1976, Korpelevich [1] introduced the extragradient method, which remains one of the most popular approximation methods for solving Equation (1). That is, for any initial u_0 ∈ C, the sequence {u_n} is generated by
v_n = P_C(u_n − τ A u_n), u_{n+1} = P_C(u_n − τ A v_n), ∀n ≥ 0, (2)
where τ is a constant in (0, 1/L) and L > 0 is the Lipschitz constant of the mapping A. In the case where VI(C, A) ≠ ∅, the sequence {u_n} constructed by Equation (2) converges weakly to a point in VI(C, A). Recently, approximation methods for solving problem Equation (1) have attracted much attention from researchers; see, e.g., [2,3,4,5,6,7,8,9,10,11] and the references therein.
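To make the scheme concrete, here is a minimal numerical sketch of the two-projection iteration in Equation (2). The operator A, the box constraint C, and the step size τ below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def extragradient(A, proj_C, u0, tau, n_iter=2000):
    """Korpelevich's extragradient method for the VI:
    find x* in C with <x - x*, A x*> >= 0 for all x in C.
    A is assumed monotone and L-Lipschitz; tau should lie in (0, 1/L)."""
    u = u0
    for _ in range(n_iter):
        v = proj_C(u - tau * A(u))   # first projection (predictor)
        u = proj_C(u - tau * A(v))   # second projection (corrector)
    return u

# Illustrative toy problem (an assumption for this sketch): A(x) = M x + q
# with M positive definite, C = [0, 1]^2, so L = ||M|| = 3 and tau = 0.2 < 1/L.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
A = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, 0.0, 1.0)
x_star = extragradient(A, proj_C, np.zeros(2), tau=0.2)  # converges to (1/3, 1/3)
```

For this toy problem the unconstrained solution of A(x) = 0 lies inside C, so the iterates converge to it; in general only weak convergence to some point of VI(C, A) is guaranteed.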
Let T : C → C be a mapping. We denote by Fix(T) the set of fixed points of T, i.e., Fix(T) = {x ∈ C : x = Tx}. T is said to be asymptotically nonexpansive if there exists {θ_n} ⊂ [0, +∞) with lim_{n→∞} θ_n = 0 such that ‖T^n u − T^n v‖ ≤ ‖u − v‖ + θ_n‖u − v‖, ∀n ≥ 1, ∀u, v ∈ C. If θ_n ≡ 0, then T is nonexpansive. Also, T is said to be strictly pseudocontractive if there exists ζ ∈ [0, 1) such that ‖Tu − Tv‖² ≤ ‖u − v‖² + ζ‖(I − T)u − (I − T)v‖², ∀u, v ∈ C. If ζ = 0, then T reduces to a nonexpansive mapping. It is known that the class of strict pseudocontractions strictly includes the class of nonexpansive mappings. Both strict pseudocontractions and nonexpansive mappings have been studied extensively via iterative approximation methods; see, e.g., [12,13,14,15,16,17,18] and the references therein.
Let the mappings A, B : C → H be inverse-strongly monotone and let T : C → C be an asymptotically nonexpansive mapping with a sequence {θ_n}. Let f : C → C be a δ-contraction with δ ∈ [0, 1). Using a modified extragradient method, Cai et al. [19] designed a viscosity implicit rule for finding a point in the common solution set Ω of the VIs for A and B and the FPP of T, i.e., for arbitrarily given x_1 ∈ C, {x_n} is the sequence constructed by
u_n = s_n x_n + (1 − s_n)y_n, y_n = P_C(I − λA)P_C(u_n − μ B u_n), x_{n+1} = P_C[(T^n y_n − α_n ρ F T^n y_n) + α_n f(x_n)],
where {α_n}, {s_n} ⊂ (0, 1]. Under appropriate conditions imposed on {α_n} and {s_n}, they proved that {x_n} converges strongly to an element x* ∈ Ω provided Σ_{n=1}^∞ ‖T^{n+1} y_n − T^n y_n‖ < ∞.
In the context of extragradient techniques, one has to compute two metric projections at each iteration. Without doubt, if C is a general convex and closed set, the computation of the projection onto C might be quite time-consuming. In 2011, inspired by Korpelevich’s extragradient method, Censor et al. [20] first designed the subgradient extragradient method, in which a projection onto a half-space replaces the second projection onto C. In 2014, Kraikaew and Saejung [21] proposed the Halpern subgradient extragradient method for solving Equation (1), and proved strong convergence of the proposed method to a solution of Equation (1).
In 2018, via the inertial technique, Thong and Hieu [22] studied the inertial subgradient extragradient method, and proved weak convergence of their method to a solution of Equation (1). Very recently, they [23] constructed two inertial subgradient extragradient algorithms with a line-search process for finding a common solution of problem Equation (1) with operator A and the FPP of operator T with the demiclosedness property in a real Hilbert space, where A is Lipschitzian and monotone, and T is quasi-nonexpansive. The constructed inertial subgradient extragradient algorithms (Algorithms 1 and 2) are as follows:
Algorithm 1: Inertial subgradient extragradient algorithm (I) (see [23], Algorithm 1).
Initialization: Given u_0, u_1 ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iterative Steps: Compute u_{n+1} as follows:
 Step 1. Put v_n = α_n(u_n − u_{n−1}) + u_n and calculate y_n = P_C(v_n − τ_n A v_n), where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} satisfying τ‖A v_n − A y_n‖ ≤ μ‖v_n − y_n‖.
 Step 2. Calculate z_n = P_{T_n}(v_n − τ_n A y_n) with T_n := {x ∈ H : ⟨x − y_n, v_n − τ_n A v_n − y_n⟩ ≤ 0}.
 Step 3. Calculate u_{n+1} = β_n T z_n + (1 − β_n)v_n. If v_n = z_n = u_{n+1}, then v_n ∈ Fix(T) ∩ VI(C, A).
  Set n := n + 1 and go to Step 1.
Algorithm 2: Inertial subgradient extragradient algorithm (II) (see [23], Algorithm 2).
Initialization: Given u_0, u_1 ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iterative Steps: Calculate u_{n+1} as follows:
  Step 1. Put v_n = α_n(u_n − u_{n−1}) + u_n and calculate y_n = P_C(v_n − τ_n A v_n), where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} satisfying τ‖A v_n − A y_n‖ ≤ μ‖v_n − y_n‖.
  Step 2. Calculate z_n = P_{T_n}(v_n − τ_n A y_n) with T_n := {x ∈ H : ⟨x − y_n, v_n − τ_n A v_n − y_n⟩ ≤ 0}.
  Step 3. Calculate u_{n+1} = β_n T z_n + (1 − β_n)u_n. If v_n = z_n = u_n = u_{n+1}, then
u_n ∈ Fix(T) ∩ VI(C, A). Set n := n + 1 and go to Step 1.
Under mild assumptions, they proved that the sequences generated by the proposed algorithms converge weakly to a point in Fix(T) ∩ VI(C, A). Recently, gradient-like methods have been studied extensively by many authors; see, e.g., [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38].
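A key computational point in the two algorithms above is that the second projection is onto the half-space T_n, which admits a closed form. The following sketch isolates one inertial subgradient extragradient step; for brevity it uses a fixed step size instead of the line search, and any operators supplied to it are illustrative assumptions:

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Closed-form projection of x onto the half-space {z : <a, z> <= b}."""
    aa = a @ a
    if aa == 0.0:                     # degenerate case: half-space is all of H
        return x
    return x - (max(a @ x - b, 0.0) / aa) * a

def inertial_sege_step(u, u_prev, A, proj_C, alpha_n, tau_n):
    """One step of the inertial subgradient extragradient scheme
    (fixed step size tau_n for brevity; [23] selects it by line search)."""
    v = u + alpha_n * (u - u_prev)        # inertial extrapolation
    y = proj_C(v - tau_n * A(v))          # the only projection onto C
    # T_n = {x : <x - y, v - tau_n*A(v) - y> <= 0} is a half-space containing C,
    # so the second "projection" is cheap and explicit.
    a = v - tau_n * A(v) - y
    z = proj_halfspace(v - tau_n * A(y), a, a @ y)
    return z
```

In Algorithm 1 the returned z_n would then be combined with T via the Mann step u_{n+1} = β_n T z_n + (1 − β_n)v_n.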
Inspired by the research work in [23], we introduce two inertial-like subgradient extragradient algorithms with a line-search process for solving Equation (1) with a Lipschitzian, pseudomonotone operator and the common fixed-point problem (CFPP) of an asymptotically nonexpansive operator and a strictly pseudocontractive operator in H. The proposed algorithms combine the inertial subgradient extragradient method with line-search process, the viscosity approximation method, the Mann iteration method, and techniques for asymptotically nonexpansive mappings. Under suitable assumptions, it is shown that the sequences generated by the suggested algorithms converge strongly to a common solution of the VIP and CFPP, which also solves a hierarchical variational inequality (HVI).

2. Preliminaries

Let x ∈ H and {x_n} ⊂ H. We use the notation x_n → x (resp., x_n ⇀ x) to indicate the strong (resp., weak) convergence of {x_n} to x. Recall that a mapping T : C → H is said to be:
(i)
L-Lipschitzian (or L-Lipschitz continuous) if ‖Tx − Ty‖ ≤ L‖x − y‖, ∀x, y ∈ C, for some L > 0;
(ii)
monotone if ⟨Tu − Tv, u − v⟩ ≥ 0, ∀u, v ∈ C;
(iii)
pseudomonotone if ⟨Tu, v − u⟩ ≥ 0 ⇒ ⟨Tv, v − u⟩ ≥ 0, ∀u, v ∈ C;
(iv)
β-strongly monotone if ⟨Tu − Tv, u − v⟩ ≥ β‖u − v‖², ∀u, v ∈ C, for some β > 0;
(v)
sequentially weakly continuous if, for every {u_n} ⊂ C, the relation holds: u_n ⇀ u ⇒ T u_n ⇀ T u.
For metric projections, it is well known that the following assertions hold:
(i)
⟨P_C u − P_C v, u − v⟩ ≥ ‖P_C u − P_C v‖², ∀u, v ∈ H;
(ii)
⟨u − P_C u, v − P_C u⟩ ≤ 0, ∀u ∈ H, ∀v ∈ C;
(iii)
‖u − v‖² ≥ ‖u − P_C u‖² + ‖v − P_C u‖², ∀u ∈ H, ∀v ∈ C;
(iv)
‖u − v‖² = ‖u‖² − ‖v‖² − 2⟨u − v, v⟩, ∀u, v ∈ H;
(v)
‖τx + (1 − τ)y‖² = τ‖x‖² + (1 − τ)‖y‖² − τ(1 − τ)‖x − y‖², ∀x, y ∈ H, ∀τ ∈ [0, 1].
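Assertions (iv) and (v) are purely algebraic identities in the inner product, so they can be sanity-checked numerically. A small NumPy check with arbitrary vectors (the dimension, seed and value of τ are immaterial):

```python
import numpy as np

rng = np.random.default_rng(7)
u, v = rng.standard_normal(4), rng.standard_normal(4)
x, y, tau = rng.standard_normal(4), rng.standard_normal(4), 0.35
sq = lambda w: np.linalg.norm(w) ** 2   # squared Hilbert norm

# identity (iv): ||u - v||^2 = ||u||^2 - ||v||^2 - 2<u - v, v>
lhs_iv = sq(u - v)
rhs_iv = sq(u) - sq(v) - 2 * (u - v) @ v
assert np.isclose(lhs_iv, rhs_iv)

# identity (v): ||tau*x + (1-tau)*y||^2
#   = tau*||x||^2 + (1-tau)*||y||^2 - tau*(1-tau)*||x - y||^2
lhs_v = sq(tau * x + (1 - tau) * y)
rhs_v = tau * sq(x) + (1 - tau) * sq(y) - tau * (1 - tau) * sq(x - y)
assert np.isclose(lhs_v, rhs_v)
```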
Lemma 1.
[39] Assume that A : C → H is a continuous pseudomonotone mapping. Then u* ∈ C is a solution to the VI ⟨A u*, v − u*⟩ ≥ 0, ∀v ∈ C, if and only if ⟨A v, v − u*⟩ ≥ 0, ∀v ∈ C.
Lemma 2.
[40] Let the real sequence {t_n} ⊂ [0, ∞) satisfy the conditions: t_{n+1} ≤ (1 − s_n)t_n + s_n b_n, ∀n ≥ 1, where {s_n} and {b_n} are real sequences such that (i) {s_n} ⊂ [0, 1] and Σ_{n=1}^∞ s_n = ∞, and (ii) lim sup_{n→∞} b_n ≤ 0 or Σ_{n=1}^∞ |s_n b_n| < ∞. Then lim_{n→∞} t_n = 0.
Lemma 3.
[33] Let T : C → C be a ζ-strict pseudocontraction. If the sequence {u_n} ⊂ C satisfies u_n ⇀ u ∈ C and (I − T)u_n → 0, then u ∈ Fix(T), where I is the identity operator of H.
Lemma 4.
[33] Let T : C → C be a ζ-strictly pseudocontractive mapping. Let the real numbers γ, δ ≥ 0 satisfy (γ + δ)ζ ≤ γ. Then ‖γ(x − y) + δ(Tx − Ty)‖ ≤ (γ + δ)‖x − y‖, ∀x, y ∈ C.
Lemma 5.
[41] Let the Banach space X admit a weakly continuous duality mapping, let the subset C ⊂ X be nonempty, convex and closed, and let the asymptotically nonexpansive mapping T : C → C have a fixed point, i.e., Fix(T) ≠ ∅. Then I − T is demiclosed at zero, i.e., if the sequence {u_n} ⊂ C satisfies u_n ⇀ u ∈ C and (I − T)u_n → 0, then (I − T)u = 0, where I is the identity mapping of X.

3. Main Results

Unless otherwise stated, we suppose the following.
  • T : H → H is an asymptotically nonexpansive operator with a sequence {θ_n}, and S : H → H is a ζ-strictly pseudocontractive mapping.
  • A : H → H is sequentially weakly continuous on C and L-Lipschitzian pseudomonotone on H, and A(C) is bounded.
  • f : H → C is a δ-contraction with δ ∈ [0, 1/2).
  • Ω = Fix(T) ∩ Fix(S) ∩ VI(C, A) ≠ ∅.
  • {σ_n} ⊂ [0, 1] and {α_n}, {β_n}, {γ_n}, {δ_n} ⊂ (0, 1) are such that
    (i)
    sup_{n≥1} σ_n/α_n < ∞ and β_n + γ_n + δ_n = 1, ∀n ≥ 1;
    (ii)
    Σ_{n=1}^∞ α_n = ∞ and lim_{n→∞} α_n = lim_{n→∞} θ_n/α_n = 0;
    (iii)
    (γ_n + δ_n)ζ ≤ γ_n < (1 − 2δ)δ_n, ∀n ≥ 1, and lim inf_{n→∞} ((1 − 2δ)δ_n − γ_n) > 0;
    (iv)
    lim sup_{n→∞} β_n < 1, lim inf_{n→∞} β_n > 0 and lim inf_{n→∞} δ_n > 0.
We first introduce an inertial-like subgradient extragradient algorithm (Algorithm 3) with line-search process as follows:
Algorithm 3: Inertial-like subgradient extragradient algorithm (I).
Initialization: Given x_0, x_1 ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iterative Steps: Compute x_{n+1} as follows:
  Step 1. Put w_n = σ_n(x_n − x_{n−1}) + T^n x_n and calculate y_n = P_C(I − τ_n A)w_n, where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} such that
τ‖A w_n − A y_n‖ ≤ μ‖w_n − y_n‖. (3)

  Step 2. Calculate z_n = (1 − α_n)P_{C_n}(w_n − τ_n A y_n) + α_n f(x_n) with C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}.
  Step 3. Calculate
x_{n+1} = γ_n P_{C_n}(w_n − τ_n A y_n) + δ_n S z_n + β_n T^n x_n.

  Again set n := n + 1 and return to Step 1.
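For readers who want to experiment, one iteration of Algorithm 3 can be sketched as follows. For brevity we pass a fixed step size τ_n instead of running the Armijo-like search of Step 1, and all operators (A, P_C, T, S, f) are placeholders supplied by the caller; none of the concrete values in any usage come from the paper:

```python
import numpy as np

def algorithm3_step(x, x_prev, n, A, proj_C, T_pow, S, f,
                    sigma_n, alpha_n, beta_n, gamma_n, delta_n, tau_n):
    """One iteration of Algorithm 3 with a fixed step size tau_n
    (the paper selects tau_n by the Armijo-like rule of Step 1).
    T_pow(n, x) stands for T^n x; beta_n + gamma_n + delta_n = 1."""
    # Step 1: inertial-like term and projection onto C
    w = sigma_n * (x - x_prev) + T_pow(n, x)
    y = proj_C(w - tau_n * A(w))
    # Step 2: projection onto the half-space
    # C_n = {x : <w - tau_n*A(w) - y, x - y> <= 0}, done in closed form
    a = w - tau_n * A(w) - y
    point = w - tau_n * A(y)
    u = point if a @ a == 0.0 else point - (max(a @ point - a @ y, 0.0) / (a @ a)) * a
    z = (1.0 - alpha_n) * u + alpha_n * f(x)   # viscosity step
    # Step 3: Mann-type combination with T^n and S
    return gamma_n * u + delta_n * S(z) + beta_n * T_pow(n, x)
```

The returned vector is x_{n+1}; iterating the step (with α_n → 0 and the other parameter conditions of this section) is what the convergence theory below analyzes.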
Lemma 6.
In Step 1 of Algorithm 3, the Armijo-like search rule (3), i.e.,
τ‖A w_n − A y_n‖ ≤ μ‖w_n − y_n‖,
is well defined, and the inequality holds: min{γ, μl/L} ≤ τ_n ≤ γ.
Proof. 
Since A is L-Lipschitzian, Equation (3) holds for every τ = γl^m with γl^m ≤ μ/L, and so τ_n is well defined. It is clear that τ_n ≤ γ. Next we discuss two cases. In the case where τ_n = γ, the inequality is valid. In the case where τ_n < γ, the step size τ_n/l must violate Equation (3), i.e., ‖A w_n − A P_C(w_n − (τ_n/l)A w_n)‖ > (μl/τ_n)‖w_n − P_C(w_n − (τ_n/l)A w_n)‖. Also, since A is L-Lipschitzian, the left-hand side is at most L‖w_n − P_C(w_n − (τ_n/l)A w_n)‖, and we get τ_n > μl/L. Therefore the inequality is true. □
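The Armijo-like rule and the bound min{γ, μl/L} ≤ τ_n ≤ γ can be illustrated with a short backtracking routine. The operator and parameter values in the example are assumptions, chosen so that L = ‖M‖ = 3:

```python
import numpy as np

def armijo_stepsize(A, proj_C, w, gamma=1.0, l=0.5, mu=0.4, max_backtracks=60):
    """Armijo-like rule of Step 1: the largest tau in {gamma, gamma*l, gamma*l^2, ...}
    with  tau * ||A(w) - A(y)|| <= mu * ||w - y||,  where y = P_C(w - tau * A(w)).
    For L-Lipschitzian A every tau <= mu/L works, so only finitely many
    backtracks are needed and min{gamma, mu*l/L} <= tau <= gamma."""
    tau = gamma
    Aw = A(w)
    for _ in range(max_backtracks):
        y = proj_C(w - tau * Aw)
        if tau * np.linalg.norm(Aw - A(y)) <= mu * np.linalg.norm(w - y):
            return tau, y
        tau *= l                      # backtrack: tau -> tau * l
    return tau, proj_C(w - tau * Aw)

# Illustrative check (operator and parameters are assumptions): L = ||M|| = 3
M = np.array([[2.0, 1.0], [1.0, 2.0]])
A = lambda x: M @ x + np.array([-1.0, -1.0])
proj_C = lambda x: np.clip(x, 0.0, 1.0)
tau_n, y_n = armijo_stepsize(A, proj_C, np.array([0.9, 0.1]))
```

With these values the returned step size satisfies μl/L ≈ 0.067 ≤ τ_n ≤ γ = 1, as Lemma 6 predicts.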
Lemma 7.
Assume that {w_n}, {y_n}, {z_n} are the sequences constructed by Algorithm 3. Then
‖z_n − p‖² ≤ [1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)Λ_n − (1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + 2α_n⟨(f − I)p, z_n − p⟩, ∀p ∈ Ω, (4)
where u_n := P_{C_n}(w_n − τ_n A y_n) and Λ_n := σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖] + θ_n(2 + θ_n)‖x_n − p‖² for all n ≥ 1.
Proof. 
We observe that
2‖u_n − p‖² = 2‖P_{C_n}(w_n − τ_n A y_n) − P_{C_n} p‖² ≤ 2⟨u_n − p, w_n − τ_n A y_n − p⟩ = ‖u_n − p‖² + ‖w_n − p‖² − ‖u_n − w_n‖² − 2⟨u_n − p, τ_n A y_n⟩.
So, it follows that ‖u_n − p‖² ≤ ‖w_n − p‖² − ‖u_n − w_n‖² − 2⟨u_n − p, τ_n A y_n⟩. Since A is pseudomonotone and p ∈ VI(C, A), we deduce that ⟨A y_n, y_n − p⟩ ≥ 0 and
‖u_n − p‖² ≤ ‖w_n − p‖² + 2τ_n(⟨A y_n, p − y_n⟩ + ⟨A y_n, y_n − u_n⟩) − ‖u_n − w_n‖² ≤ ‖w_n − p‖² + 2τ_n⟨A y_n, y_n − u_n⟩ − ‖u_n − w_n‖² = ‖w_n − p‖² − ‖y_n − w_n‖² + 2⟨w_n − τ_n A y_n − y_n, u_n − y_n⟩ − ‖u_n − y_n‖². (5)
Since u_n = P_{C_n}(w_n − τ_n A y_n) with C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}, we have ⟨w_n − τ_n A w_n − y_n, u_n − y_n⟩ ≤ 0, which together with Equation (3), implies that
2⟨w_n − τ_n A y_n − y_n, u_n − y_n⟩ = 2⟨w_n − τ_n A w_n − y_n, u_n − y_n⟩ + 2τ_n⟨A w_n − A y_n, u_n − y_n⟩ ≤ 2μ‖w_n − y_n‖‖u_n − y_n‖ ≤ μ(‖w_n − y_n‖² + ‖u_n − y_n‖²).
Also, from w_n = σ_n(x_n − x_{n−1}) + T^n x_n we get
‖w_n − p‖² = ‖σ_n(x_n − x_{n−1}) + T^n x_n − p‖² ≤ [(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖]² = (1 + θ_n)²‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖] = ‖x_n − p‖² + θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖] = ‖x_n − p‖² + Λ_n,
where Λ_n := θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖]. Therefore, substituting the last two inequalities into Equation (5), we infer that
‖u_n − p‖² ≤ ‖w_n − p‖² − (1 − μ)‖w_n − y_n‖² − (1 − μ)‖u_n − y_n‖² ≤ ‖x_n − p‖² + Λ_n − (1 − μ)‖w_n − y_n‖² − (1 − μ)‖u_n − y_n‖², ∀p ∈ Ω. (6)
In addition, from Algorithm 3 we have
z_n − p = (1 − α_n)(u_n − p) + α_n(f(x_n) − f(p)) + α_n(f − I)p.
Since the function h(t) = t², t ∈ ℝ, is convex, from Equation (6) we have
‖z_n − p‖² ≤ [α_n δ‖x_n − p‖ + (1 − α_n)‖u_n − p‖]² + 2α_n⟨(f − I)p, z_n − p⟩ ≤ α_n δ‖x_n − p‖² + (1 − α_n)[‖x_n − p‖² + Λ_n − (1 − μ)‖w_n − y_n‖² − (1 − μ)‖u_n − y_n‖²] + 2α_n⟨(f − I)p, z_n − p⟩ = [1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)Λ_n − (1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + 2α_n⟨(f − I)p, z_n − p⟩.
This completes the proof. □
Lemma 8.
Assume that {x_n}, {y_n}, {z_n} are bounded vector sequences constructed by Algorithm 3. If T^n x_n − T^{n+1} x_n → 0, x_n − x_{n+1} → 0, w_n − x_n → 0, w_n − z_n → 0 and there exists a subsequence {w_{n_k}} ⊂ {w_n} such that w_{n_k} ⇀ z ∈ H, then z ∈ Ω.
Proof. 
In terms of Algorithm 3, we deduce w_n − x_n = T^n x_n − x_n + σ_n(x_n − x_{n−1}), ∀n ≥ 1, and hence ‖T^n x_n − x_n‖ ≤ ‖w_n − x_n‖ + σ_n‖x_n − x_{n−1}‖ ≤ ‖w_n − x_n‖ + ‖x_n − x_{n−1}‖. Using the conditions x_n − x_{n+1} → 0 and w_n − x_n → 0, we get
lim_{n→∞} ‖T^n x_n − x_n‖ = 0. (7)
Combining the assumptions w_n − x_n → 0 and w_n − z_n → 0 yields
‖z_n − x_n‖ ≤ ‖w_n − z_n‖ + ‖w_n − x_n‖ → 0 (n → ∞).
Then, from Equation (4) it follows that
(1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] ≤ [1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)Λ_n − ‖z_n − p‖² + 2α_n⟨(f − I)p, z_n − p⟩ ≤ ‖x_n − p‖² − ‖z_n − p‖² + Λ_n + 2α_n‖(f − I)p‖‖z_n − p‖ ≤ ‖x_n − z_n‖(‖x_n − p‖ + ‖z_n − p‖) + Λ_n + 2α_n‖(f − I)p‖‖z_n − p‖,
where Λ_n := θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖]. Since α_n → 0, Λ_n → 0 and ‖x_n − z_n‖ → 0, from the boundedness of {x_n}, {z_n} we get
lim_{n→∞} ‖w_n − y_n‖ = 0 and lim_{n→∞} ‖u_n − y_n‖ = 0.
Thus, as n → ∞,
‖w_n − u_n‖ ≤ ‖w_n − y_n‖ + ‖y_n − u_n‖ → 0 and ‖x_n − u_n‖ ≤ ‖x_n − w_n‖ + ‖w_n − u_n‖ → 0.
Furthermore, using Algorithm 3 we have x_{n+1} − z_n = γ_n(u_n − z_n) + δ_n(S z_n − z_n) + β_n(T^n x_n − z_n), which hence implies
δ_n‖S z_n − z_n‖ = ‖x_{n+1} − z_n − β_n(T^n x_n − z_n) − γ_n(u_n − z_n)‖ = ‖x_{n+1} − x_n + δ_n(x_n − z_n) − γ_n(u_n − x_n) − β_n(T^n x_n − x_n)‖ ≤ ‖x_{n+1} − x_n‖ + ‖x_n − z_n‖ + ‖u_n − x_n‖ + ‖T^n x_n − x_n‖.
Note that x_n − x_{n+1} → 0, z_n − x_n → 0, x_n − u_n → 0, x_n − T^n x_n → 0 and lim inf_{n→∞} δ_n > 0. So we obtain
lim_{n→∞} ‖z_n − S z_n‖ = 0. (8)
Noticing y_n = P_C(I − τ_n A)w_n, we have ⟨x − y_n, w_n − τ_n A w_n − y_n⟩ ≤ 0, ∀x ∈ C, and hence
⟨w_n − y_n, x − y_n⟩ + τ_n⟨A w_n, y_n − w_n⟩ ≤ τ_n⟨A w_n, x − w_n⟩, ∀x ∈ C. (9)
Since A is Lipschitzian, we infer from the boundedness of {w_{n_k}} that {A w_{n_k}} is bounded. From ‖w_n − y_n‖ → 0, we get the boundedness of {y_{n_k}}. Taking into account τ_n ≥ min{γ, μl/L}, from Equation (9) we have lim inf_{k→∞} ⟨A w_{n_k}, x − w_{n_k}⟩ ≥ 0, ∀x ∈ C. Moreover, note that ⟨A y_n, x − y_n⟩ = ⟨A y_n − A w_n, x − w_n⟩ + ⟨A w_n, x − w_n⟩ + ⟨A y_n, w_n − y_n⟩. Since A is L-Lipschitzian, from ‖w_n − y_n‖ → 0 we get ‖A w_n − A y_n‖ → 0. According to Equation (9), we have lim inf_{k→∞} ⟨A y_{n_k}, x − y_{n_k}⟩ ≥ 0, ∀x ∈ C.
We claim ‖x_n − T x_n‖ → 0 below. Indeed, note that
‖T x_n − x_n‖ ≤ ‖T x_n − T^{n+1} x_n‖ + ‖T^{n+1} x_n − T^n x_n‖ + ‖T^n x_n − x_n‖ ≤ (2 + θ_1)‖x_n − T^n x_n‖ + ‖T^{n+1} x_n − T^n x_n‖.
Hence from Equation (7) and the assumption T^n x_n − T^{n+1} x_n → 0 we get
lim_{n→∞} ‖x_n − T x_n‖ = 0. (10)
We now choose a sequence {ε_k} ⊂ (0, 1) such that ε_k ↓ 0 as k → ∞. For each k ≥ 1, we denote by m_k the smallest natural number satisfying
⟨A y_{n_j}, x − y_{n_j}⟩ + ε_k ≥ 0, ∀j ≥ m_k.
From the decreasing property of {ε_k}, it is easy to see that {m_k} is increasing. Considering that {y_{m_k}} ⊂ C implies A y_{m_k} ≠ 0, ∀k ≥ 1, we put
μ_{m_k} = A y_{m_k}/‖A y_{m_k}‖².
So we have ⟨A y_{m_k}, μ_{m_k}⟩ = 1, ∀k ≥ 1. Thus, from Equation (9), we have ⟨A y_{m_k}, x + ε_k μ_{m_k} − y_{m_k}⟩ ≥ 0, ∀k ≥ 1. Also, since A is pseudomonotone, we get
⟨A(x + ε_k μ_{m_k}), x + ε_k μ_{m_k} − y_{m_k}⟩ ≥ 0, ∀k ≥ 1.
Consequently,
⟨x − y_{m_k}, A x⟩ ≥ ⟨x + ε_k μ_{m_k} − y_{m_k}, A x − A(x + ε_k μ_{m_k})⟩ − ε_k⟨μ_{m_k}, A x⟩, ∀k ≥ 1. (11)
We show lim_{k→∞} ε_k μ_{m_k} = 0. In fact, since w_{n_k} ⇀ z and ‖w_n − y_n‖ → 0, we get y_{n_k} ⇀ z. So, {y_n} ⊂ C guarantees z ∈ C. Also, since A is sequentially weakly continuous on C, we deduce that A y_{n_k} ⇀ A z. We may assume A z ≠ 0 (otherwise, z ∈ VI(C, A) trivially). By the weak lower semicontinuity of the norm, it follows that 0 < ‖A z‖ ≤ lim inf_{k→∞} ‖A y_{n_k}‖. Since {y_{m_k}} ⊂ {y_{n_k}} and ε_k → 0 as k → ∞, we obtain that
0 ≤ lim sup_{k→∞} ‖ε_k μ_{m_k}‖ = lim sup_{k→∞} (ε_k/‖A y_{m_k}‖) ≤ lim sup_{k→∞} ε_k / lim inf_{k→∞} ‖A y_{n_k}‖ = 0.
Thus ε_k μ_{m_k} → 0.
The last step is to show z ∈ Ω. Indeed, we have x_{n_k} ⇀ z. From Equation (10) we also have ‖x_{n_k} − T x_{n_k}‖ → 0. Note that Lemma 5 yields the demiclosedness of I − T at zero. Thus z ∈ Fix(T). Moreover, since w_n − z_n → 0 and w_{n_k} ⇀ z, we have z_{n_k} ⇀ z. From Equation (8) we get ‖z_{n_k} − S z_{n_k}‖ → 0. By Lemma 3 we know that I − S is demiclosed at zero, and hence we have (I − S)z = 0, i.e., z ∈ Fix(S). In addition, letting k → ∞, we infer that the right-hand side of Equation (11) converges to zero, thanks to the Lipschitz continuity of A, the boundedness of {y_{m_k}}, {μ_{m_k}}, and the limit lim_{k→∞} ε_k μ_{m_k} = 0. Therefore, ⟨A x, x − z⟩ = lim inf_{k→∞} ⟨A x, x − y_{m_k}⟩ ≥ 0, ∀x ∈ C. From Lemma 1 we get z ∈ VI(C, A), and hence z ∈ Ω. This completes the proof. □
Theorem 1.
Let {x_n} be the sequence constructed by Algorithm 3. Suppose that T^n x_n − T^{n+1} x_n → 0. Then
x_n → x* ∈ Ω ⟺ x_n − x_{n+1} → 0, x_n − T^n x_n → 0, sup_{n≥1} ‖(T^n − f)x_n‖ < ∞,
where x* ∈ Ω is the unique solution of the HVI: ⟨(f − I)x*, p − x*⟩ ≤ 0, ∀p ∈ Ω.
Proof. 
Without loss of generality, we may assume that {β_n} ⊂ [a, b] ⊂ (0, 1). Since f is a δ-contraction, P_Ω f is a contractive map, and Banach’s Contraction Principle ensures that it has a unique fixed point, i.e., P_Ω f(x*) = x*. So, there exists a unique solution x* ∈ Ω to the HVI
⟨(I − f)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω. (12)
It is clear that the necessity of the theorem is valid. In fact, if x_n → x* ∈ Ω, then, as n → ∞, we obtain that x_n − x_{n+1} → 0, ‖x_n − T^n x_n‖ ≤ ‖x_n − x*‖ + ‖x* − T^n x_n‖ ≤ (2 + θ_n)‖x_n − x*‖ → 0, and
sup_{n≥1} ‖T^n x_n − f(x_n)‖ ≤ sup_{n≥1}(‖T^n x_n − x*‖ + ‖x* − f(x*)‖ + ‖f(x*) − f(x_n)‖) ≤ sup_{n≥1}[(1 + θ_n)‖x_n − x*‖ + ‖x* − f(x*)‖ + δ‖x* − x_n‖] ≤ sup_{n≥1}[(2 + θ_n)‖x_n − x*‖ + ‖x* − f(x*)‖] < ∞.
We now assume that lim_{n→∞}(‖x_n − x_{n+1}‖ + ‖x_n − T^n x_n‖) = 0 and sup_{n≥1} ‖(T^n − f)x_n‖ < ∞, and prove the sufficiency in the following steps.
Step 1. We claim the boundedness of {x_n}. In fact, take a fixed p ∈ Ω arbitrarily. From Equation (6) we get
‖w_n − p‖² − (1 − μ)‖w_n − y_n‖² − (1 − μ)‖u_n − y_n‖² ≥ ‖u_n − p‖², (13)
which hence yields
‖u_n − p‖ ≤ ‖w_n − p‖, ∀n ≥ 1. (14)
By the definition of w_n, we have
‖w_n − p‖ ≤ (1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖ = (1 + θ_n)‖x_n − p‖ + α_n · (σ_n/α_n)‖x_n − x_{n−1}‖. (15)
From sup_{n≥1} σ_n/α_n < ∞ and sup_{n≥1} ‖x_n − x_{n−1}‖ < ∞, we deduce that sup_{n≥1} (σ_n/α_n)‖x_n − x_{n−1}‖ < ∞, which immediately implies that there exists M_1 > 0 such that
(σ_n/α_n)‖x_n − x_{n−1}‖ ≤ M_1, ∀n ≥ 1. (16)
From Equations (14)–(16), we obtain
‖u_n − p‖ ≤ ‖w_n − p‖ ≤ (1 + θ_n)‖x_n − p‖ + α_n M_1, ∀n ≥ 1. (17)
Note that A(C) is bounded, y_n = P_C(I − τ_n A)w_n, f(H) ⊂ C ⊂ C_n and u_n = P_{C_n}(w_n − τ_n A y_n). Hence, we know that {A y_n} is a bounded sequence. So, from sup_{n≥1} ‖(T^n − f)x_n‖ < ∞, it follows that
‖u_n − f(x_n)‖ = ‖P_{C_n}(w_n − τ_n A y_n) − P_{C_n} f(x_n)‖ ≤ ‖w_n − τ_n A y_n − f(x_n)‖ ≤ ‖w_n − T^n x_n‖ + ‖T^n x_n − f(x_n)‖ + τ_n‖A y_n‖ ≤ ‖x_n − x_{n−1}‖ + ‖(T^n − f)x_n‖ + γ‖A y_n‖ ≤ M_0,
where sup_{n≥1}(‖x_n − x_{n−1}‖ + ‖(T^n − f)x_n‖ + γ‖A y_n‖) ≤ M_0 for some M_0 > 0. Taking into account lim_{n→∞} θ_n(2 + θ_n)/(α_n(1 − β_n)) = 0, we know that there exists n_0 ≥ 1 such that
θ_n(2 + θ_n) ≤ α_n(1 − β_n)(1 − δ)/2 (≤ α_n(1 − δ)/2), ∀n ≥ n_0.
So, from Algorithm 3 and Equation (17) it follows that for all n ≥ n_0,
‖z_n − p‖ ≤ α_n δ‖x_n − p‖ + (1 − α_n)‖u_n − p‖ + α_n‖(f − I)p‖ ≤ [1 − α_n(1 − δ) + θ_n]‖x_n − p‖ + α_n(M_1 + ‖(f − I)p‖) ≤ [1 − α_n(1 − δ)/2]‖x_n − p‖ + α_n(M_1 + ‖(f − I)p‖),
which together with Lemma 4 and (γ_n + δ_n)ζ ≤ γ_n, implies that for all n ≥ n_0,
‖x_{n+1} − p‖ = ‖β_n(T^n x_n − p) + γ_n(z_n − p) + δ_n(S z_n − p) + γ_n(u_n − z_n)‖ ≤ β_n(1 + θ_n)‖x_n − p‖ + (1 − β_n)‖z_n − p‖ + γ_n α_n‖u_n − f(x_n)‖ ≤ β_n(1 + θ_n)‖x_n − p‖ + (1 − β_n)[(1 − α_n(1 − δ)/2)‖x_n − p‖ + α_n(M_0 + M_1 + ‖(f − I)p‖)] ≤ [1 − α_n(1 − β_n)(1 − δ)/2 + β_n α_n(1 − β_n)(1 − δ)/2]‖x_n − p‖ + α_n(1 − β_n)(M_0 + M_1 + ‖(f − I)p‖) = [1 − α_n(1 − β_n)²(1 − δ)/2]‖x_n − p‖ + α_n(1 − β_n)²(1 − δ)/2 · 2(M_0 + M_1 + ‖(f − I)p‖)/((1 − δ)(1 − β_n)).
By induction, we obtain ‖x_n − p‖ ≤ max{‖x_{n_0} − p‖, 2(M_0 + M_1 + ‖(f − I)p‖)/((1 − δ)(1 − b))}, ∀n ≥ n_0. Therefore, {x_n} is bounded, and so are the sequences {u_n}, {w_n}, {y_n}, {z_n}, {f(x_n)}, {S z_n}, {T^n x_n}.
Step 2. We claim that there exists M_4 > 0 such that
(1 − α_n)(1 − β_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n M_4, ∀n ≥ n_0. (20)
In fact, using Lemmas 4 and 7 and the convexity of ‖·‖², we get
‖x_{n+1} − p‖² = ‖β_n(T^n x_n − p) + γ_n(z_n − p) + δ_n(S z_n − p) + γ_n(u_n − z_n)‖² ≤ β_n‖T^n x_n − p‖² + (1 − β_n)‖(1/(1 − β_n))[γ_n(z_n − p) + δ_n(S z_n − p)]‖² + 2(1 − β_n)α_n‖u_n − f(x_n)‖‖x_{n+1} − p‖ ≤ β_n‖T^n x_n − p‖² + (1 − β_n){[1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)Λ_n − (1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + 2α_n⟨(f − I)p, z_n − p⟩} + 2(1 − β_n)α_n‖u_n − f(x_n)‖‖x_{n+1} − p‖ ≤ β_n‖T^n x_n − p‖² + (1 − β_n){[1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)Λ_n − (1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + α_n M_2}, (18)
where
Λ_n := θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖],
and
sup_{n≥1} 2(‖(f − I)p‖‖z_n − p‖ + ‖u_n − f(x_n)‖‖x_{n+1} − p‖) ≤ M_2
for some M_2 > 0. Also, from Equation (16) we have
Λ_n = θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖] ≤ θ_n(2 + θ_n)‖x_n − p‖² + α_n M_1[2(1 + θ_n)‖x_n − p‖ + α_n M_1] = α_n{(θ_n/α_n)(2 + θ_n)‖x_n − p‖² + M_1[2(1 + θ_n)‖x_n − p‖ + α_n M_1]} ≤ α_n M_3, (19)
where
sup_{n≥1}{(θ_n/α_n)(2 + θ_n)‖x_n − p‖² + M_1[2(1 + θ_n)‖x_n − p‖ + α_n M_1]} ≤ M_3
for some M_3 > 0. Note that
θ_n(2 + θ_n) ≤ α_n(1 − β_n)(1 − δ)/2, ∀n ≥ n_0.
Substituting Equation (19) into Equation (18), we obtain that for all n ≥ n_0,
‖x_{n+1} − p‖² ≤ β_n(1 + θ_n)²‖x_n − p‖² + (1 − β_n){[1 − α_n(1 − δ)]‖x_n − p‖² + (1 − α_n)α_n M_3 − (1 − α_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + α_n M_2} ≤ [1 − α_n(1 − β_n)(1 − δ)/2]‖x_n − p‖² + α_n M_3 − (1 − α_n)(1 − β_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + α_n M_2 ≤ ‖x_n − p‖² − (1 − α_n)(1 − β_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] + α_n M_4,
where M_4 := M_2 + M_3. This immediately implies that for all n ≥ n_0,
(1 − α_n)(1 − β_n)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n M_4.
Step 3. We claim that there exists M > 0 such that
‖x_{n+1} − p‖² ≤ [1 − ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)]‖x_n − p‖² + (((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)) · {2γ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − p‖‖z_n − x_{n+1}‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − p‖‖z_n − x_n‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·⟨f(p) − p, x_n − p⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n))·((θ_n/α_n)·2M²/(1 − b) + (σ_n/α_n)‖x_n − x_{n−1}‖·3M)}. (22)
In fact, we get
‖w_n − p‖² ≤ [(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖]² = ‖x_n − p‖² + θ_n(2 + θ_n)‖x_n − p‖² + σ_n‖x_n − x_{n−1}‖[2(1 + θ_n)‖x_n − p‖ + σ_n‖x_n − x_{n−1}‖] ≤ ‖x_n − p‖² + θ_n·2M² + σ_n‖x_n − x_{n−1}‖·3M, (21)
where M ≥ sup_{n≥1}{(1 + θ_n)‖x_n − p‖, σ_n‖x_n − x_{n−1}‖} for some M > 0. From Algorithm 3 and the convexity of ‖·‖², we have
‖x_{n+1} − p‖² = ‖β_n(T^n x_n − p) + γ_n(z_n − p) + δ_n(S z_n − p) + γ_n(u_n − z_n)‖² ≤ ‖β_n(T^n x_n − p) + γ_n(z_n − p) + δ_n(S z_n − p)‖² + 2γ_n α_n⟨u_n − f(x_n), x_{n+1} − p⟩ ≤ β_n‖T^n x_n − p‖² + (1 − β_n)‖(1/(1 − β_n))[γ_n(z_n − p) + δ_n(S z_n − p)]‖² + 2γ_n α_n⟨u_n − p, x_{n+1} − p⟩ + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩,
which together with Lemma 4, leads to
‖x_{n+1} − p‖² ≤ β_n(1 + θ_n)²‖x_n − p‖² + (1 − β_n)‖z_n − p‖² + 2γ_n α_n‖u_n − p‖‖x_{n+1} − p‖ + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩ ≤ β_n(1 + θ_n)²‖x_n − p‖² + (1 − β_n)[(1 − α_n)‖u_n − p‖² + 2α_n⟨f(x_n) − p, z_n − p⟩] + γ_n α_n(‖u_n − p‖² + ‖x_{n+1} − p‖²) + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩.
From Equations (17) and (21) we know that
‖u_n − p‖² ≤ ‖x_n − p‖² + θ_n·2M² + σ_n‖x_n − x_{n−1}‖·3M.
Hence, we have
‖x_{n+1} − p‖² ≤ [1 − α_n(1 − β_n)]‖x_n − p‖² + β_n θ_n·2M² + (1 − β_n)(1 − α_n)(θ_n·2M² + σ_n‖x_n − x_{n−1}‖·3M) + 2α_n δ_n⟨f(x_n) − p, z_n − p⟩ + γ_n α_n(‖x_n − p‖² + ‖x_{n+1} − p‖²) + (1 − β_n)α_n(θ_n·2M² + σ_n‖x_n − x_{n−1}‖·3M) + 2γ_n α_n⟨f(x_n) − p, z_n − x_{n+1}⟩ ≤ [1 − α_n(1 − β_n)]‖x_n − p‖² + 2γ_n α_n‖f(x_n) − p‖‖z_n − x_{n+1}‖ + 2α_n δ_n δ‖x_n − p‖² + 2α_n δ_n⟨f(p) − p, x_n − p⟩ + 2α_n δ_n‖f(x_n) − p‖‖z_n − x_n‖ + γ_n α_n(‖x_n − p‖² + ‖x_{n+1} − p‖²) + (1 − β_n)(θ_n·2M²/(1 − β_n) + σ_n‖x_n − x_{n−1}‖·3M),
which immediately yields Equation (22).
Step 4. We claim that {x_n} converges strongly to the unique solution x* ∈ Ω of the HVI Equation (12). In fact, setting p = x*, from Equation (22) we know that
‖x_{n+1} − x*‖² ≤ [1 − ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)]‖x_n − x*‖² + (((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)) · {2γ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − x*‖‖z_n − x_{n+1}‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − x*‖‖z_n − x_n‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·⟨f(x*) − x*, x_n − x*⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n))·((θ_n/α_n)·2M²/(1 − b) + (σ_n/α_n)‖x_n − x_{n−1}‖·3M)}.
According to Lemma 2, it is sufficient to prove that lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ ≤ 0. Since x_n − x_{n+1} → 0, α_n → 0 and {β_n} ⊂ [a, b] ⊂ (0, 1), from Equation (20) we get
lim sup_{n→∞} (1 − α_n)(1 − b)(1 − μ)[‖w_n − y_n‖² + ‖u_n − y_n‖²] ≤ lim sup_{n→∞} [‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n M_4] ≤ lim sup_{n→∞} (‖x_n − p‖ + ‖x_{n+1} − p‖)‖x_n − x_{n+1}‖ = 0,
which hence leads to
lim_{n→∞} ‖w_n − y_n‖ = lim_{n→∞} ‖u_n − y_n‖ = 0. (23)
Obviously, the assumptions x_n − x_{n+1} → 0 and x_n − T^n x_n → 0 guarantee that ‖w_n − x_n‖ ≤ ‖T^n x_n − x_n‖ + σ_n‖x_n − x_{n−1}‖ → 0 (n → ∞). Thus,
‖x_n − y_n‖ ≤ ‖x_n − w_n‖ + ‖w_n − y_n‖ → 0 (n → ∞).
Since z_n = (1 − α_n)u_n + α_n f(x_n) with u_n := P_{C_n}(w_n − τ_n A y_n), from Equation (23) and the boundedness of {x_n}, {u_n}, we get
‖z_n − y_n‖ ≤ α_n(‖f(x_n)‖ + ‖u_n‖) + ‖u_n − y_n‖ → 0 (n → ∞), (24)
and hence
‖z_n − x_n‖ ≤ ‖z_n − y_n‖ + ‖y_n − x_n‖ → 0 (n → ∞).
Obviously, combining Equations (23) and (24) guarantees that
‖w_n − z_n‖ ≤ ‖w_n − y_n‖ + ‖y_n − z_n‖ → 0 (n → ∞).
Since {x_n} is bounded, there exists a subsequence {x_{n_k}} ⊂ {x_n} such that
lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = lim_{k→∞} ⟨(f − I)x*, x_{n_k} − x*⟩. (25)
Next, we may suppose that x_{n_k} ⇀ x̃. Hence from Equation (25) we get
lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = lim_{k→∞} ⟨(f − I)x*, x_{n_k} − x*⟩ = ⟨(f − I)x*, x̃ − x*⟩. (26)
From w_n − x_n → 0 and x_{n_k} ⇀ x̃ it follows that w_{n_k} ⇀ x̃.
Since T^n x_n − T^{n+1} x_n → 0, x_n − x_{n+1} → 0, w_n − x_n → 0, w_n − z_n → 0 and w_{n_k} ⇀ x̃, from Lemma 8 we conclude that x̃ ∈ Ω. Therefore, from Equations (12) and (26) we infer that
lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = ⟨(f − I)x*, x̃ − x*⟩ ≤ 0.
Note that
Σ_{n=1}^∞ ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n) = ∞.
It is clear that
lim sup_{n→∞} {2γ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − x*‖‖z_n − x_{n+1}‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·‖f(x_n) − x*‖‖z_n − x_n‖ + 2δ_n/((1 − 2δ)δ_n − γ_n)·⟨f(x*) − x*, x_n − x*⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n))·((θ_n/α_n)·2M²/(1 − b) + (σ_n/α_n)‖x_n − x_{n−1}‖·3M)} ≤ 0.
Consequently, all conditions of Lemma 2 are satisfied, and hence we immediately deduce that x_n → x*. This completes the proof. □
Next, we introduce another inertial-like subgradient extragradient algorithm (Algorithm 4) with line-search process as follows.
It is remarkable that Lemmas 6–8 remain valid for Algorithm 4.
Algorithm 4: Inertial-like subgradient extragradient algorithm (II).
Initialization: Given x_0, x_1 ∈ H arbitrarily. Let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iterative Steps: Compute x_{n+1} as follows:
  Step 1. Put w_n = σ_n(x_n − x_{n−1}) + T^n x_n and calculate y_n = P_C(w_n − τ_n A w_n), where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} such that
τ‖A w_n − A y_n‖ ≤ μ‖w_n − y_n‖.

  Step 2. Calculate z_n = (1 − α_n)P_{C_n}(w_n − τ_n A y_n) + α_n f(x_n) with C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}.
  Step 3. Calculate
x_{n+1} = γ_n P_{C_n}(w_n − τ_n A y_n) + δ_n S z_n + β_n T^n w_n.

  Again set n := n + 1 and return to Step 1.
Theorem 2.
Let {x_n} be the sequence constructed by Algorithm 4. Suppose that T^n x_n − T^{n+1} x_n → 0. Then
x_n → x* ∈ Ω ⟺ x_n − x_{n+1} → 0, x_n − T^n x_n → 0, sup_{n≥1} ‖(T^n − f)x_n‖ < ∞,
where x* ∈ Ω is the unique solution of the HVI: ⟨(I − f)x*, p − x*⟩ ≥ 0, ∀p ∈ Ω.
Proof. 
Using the same reasoning as in the proof of Theorem 1, we know that there is a unique solution x* ∈ Ω of Equation (12), and that the necessity of the theorem is true.
We now prove the sufficiency. To this end, suppose that lim_{n→∞}(‖x_n − x_{n+1}‖ + ‖x_n − T^n x_n‖) = 0 and sup_{n≥1} ‖(T^n − f)x_n‖ < ∞. We proceed in the following steps.
Step 1. We claim the boundedness of {x_n}. In fact, using the same reasoning as in Step 1 of the proof of Theorem 1, we obtain that inequalities Equations (13)–(17) hold. Noticing lim_{n→∞} θ_n(2 + θ_n)/(α_n(1 − β_n)) = 0, we infer that there exists n_0 ≥ 1 such that
θ_n(2 + θ_n) ≤ α_n(1 − β_n)(1 − δ)/2 (≤ α_n(1 − δ)/2), ∀n ≥ n_0.
So, from Algorithm 4 and Equation (17) it follows that for all n ≥ n_0,
‖z_n − p‖ ≤ α_n δ‖x_n − p‖ + (1 − α_n)[(1 + θ_n)‖x_n − p‖ + α_n M_1] + α_n‖(f − I)p‖ ≤ [1 − α_n(1 − δ)/2]‖x_n − p‖ + α_n(M_1 + ‖(f − I)p‖),
which together with Lemma 4 and (γ_n + δ_n)ζ ≤ γ_n, implies that for all n ≥ n_0,
‖x_{n+1} − p‖ = ‖β_n(T^n w_n − p) + γ_n(z_n − p) + δ_n(S z_n − p) + γ_n(u_n − z_n)‖ ≤ β_n(1 + θ_n)‖w_n − p‖ + (1 − β_n)‖z_n − p‖ + γ_n α_n‖u_n − f(x_n)‖ ≤ [1 − α_n(1 − β_n)(1 − δ)/2 + β_n θ_n(2 + θ_n)]‖x_n − p‖ + β_n(1 + θ_n)α_n M_1 + α_n(1 − β_n)(M_0 + M_1 + ‖(f − I)p‖) ≤ [1 − α_n(1 − β_n)(1 − δ)/2 + β_n α_n(1 − β_n)(1 − δ)/2]‖x_n − p‖ + α_n(1 − β_n)(M_0 + M_1(1 + θ_n)/(1 − β_n) + ‖(f − I)p‖) = [1 − α_n(1 − β_n)²(1 − δ)/2]‖x_n − p‖ + α_n(1 − β_n)²(1 − δ)/2 · 2(M_0 + M_1(1 + θ_n)/(1 − β_n) + ‖(f − I)p‖)/((1 − δ)(1 − β_n)).
Hence,
‖x_n − p‖ ≤ max{‖x_{n_0} − p‖, 2(M_0 + 2M_1/(1 − b) + ‖(f − I)p‖)/((1 − δ)(1 − b))}, ∀n ≥ n_0.
Thus, the sequence {x_n} is bounded.
Step 2. We claim that, for all $n \ge n_0$,
$$\|x_{n+1} - p\|^2 \le \|x_n - p\|^2 + \alpha_n M_4 - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2]$$
for some constant $M_4 > 0$. Indeed, utilizing Lemmas 4 and 7 and the convexity of $\|\cdot\|^2$, one reaches
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\beta_n(T^n w_n - p) + \gamma_n(z_n - p) + \delta_n(S z_n - p) + \gamma_n(u_n - z_n)\|^2 \\ &\le \beta_n\|T^n w_n - p\|^2 + (1-\beta_n)\Big\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(S z_n - p)]\Big\|^2 + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\| \\ &\le \beta_n(1+\theta_n)^2\|w_n - p\|^2 + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] \\ &\quad + 2\alpha_n\langle (f - I)p, z_n - p\rangle\} + 2(1-\beta_n)\alpha_n\|u_n - f(x_n)\|\|x_{n+1} - p\| \\ &\le \beta_n(1+\theta_n)^2(\|x_n - p\|^2 + \Lambda_n) + (1-\beta_n)\{[1 - \alpha_n(1-\delta)]\|x_n - p\|^2 + (1-\alpha_n)\Lambda_n \\ &\quad - (1-\alpha_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_2\}, \end{aligned}$$
where $\Lambda_n := \theta_n(2+\theta_n)\|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|[2(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|]$ and $\sup_{n \ge 1} 2(\|(f - I)p\|\|z_n - p\| + \|u_n - f(x_n)\|\|x_{n+1} - p\|) \le M_2$ for some $M_2 > 0$. Also, from Equation (16) we have
$$\Lambda_n = \theta_n(2+\theta_n)\|x_n - p\|^2 + \sigma_n\|x_n - x_{n-1}\|[2(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|] \le \alpha_n\Big\{\frac{\theta_n}{\alpha_n}(2+\theta_n)\|x_n - p\|^2 + M_1[2(1+\theta_n)\|x_n - p\| + \alpha_n M_1]\Big\} \le \alpha_n M_3,$$
where $\sup_{n \ge 1}\{\frac{\theta_n}{\alpha_n}(2+\theta_n)\|x_n - p\|^2 + M_1[2(1+\theta_n)\|x_n - p\| + \alpha_n M_1]\} \le M_3$ for some $M_3 > 0$. Note that $\theta_n(2+\theta_n) \le \frac{\alpha_n(1-\beta_n)(1-\delta)}{2}$ for all $n \ge n_0$. Substituting Equation (28) into Equation (27), we obtain that, for all $n \ge n_0$,
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le [1 - \alpha_n(1-\beta_n)(1-\delta) + \beta_n\theta_n(2+\theta_n)]\|x_n - p\|^2 + \beta_n(1+\theta_n)^2\alpha_n M_3 + (1-\beta_n)(1-\alpha_n)\alpha_n M_3 \\ &\quad - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + (1-\beta_n)\alpha_n M_2 \\ &\le \|x_n - p\|^2 - (1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] + \alpha_n M_4, \end{aligned}$$
where $M_4 := M_2 + 4M_3$. This immediately implies that, for all $n \ge n_0$,
$$(1-\alpha_n)(1-\beta_n)(1-\mu)[\|w_n - y_n\|^2 + \|u_n - y_n\|^2] \le \|x_n - p\|^2 - \|x_{n+1} - p\|^2 + \alpha_n M_4.$$
Step 3. We claim that there exists $M > 0$ such that
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \Big[1 - \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n}\Big]\|x_n - p\|^2 + \frac{[(1-2\delta)\delta_n - \gamma_n]\alpha_n}{1 - \alpha_n\gamma_n} \cdot \Big\{\frac{2\gamma_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_{n+1}\| \\ &\quad + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\|f(x_n) - p\|\|z_n - x_n\| + \frac{2\delta_n}{(1-2\delta)\delta_n - \gamma_n}\langle f(p) - p, x_n - p\rangle \\ &\quad + \frac{\gamma_n + \delta_n}{(1-2\delta)\delta_n - \gamma_n}\Big(\frac{\theta_n}{\alpha_n} \cdot \frac{2M^2(1 + b(1+\theta_n)^2)}{1-b} + \frac{\sigma_n}{\alpha_n}\|x_n - x_{n-1}\| \cdot \frac{3M(1 + b\theta_n(2+\theta_n))}{1-b}\Big)\Big\}. \end{aligned}$$
In fact, we get
$$\|w_n - p\|^2 \le [(1+\theta_n)\|x_n - p\| + \sigma_n\|x_n - x_{n-1}\|]^2 \le \|x_n - p\|^2 + \theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M,$$
where $M > 0$ is such that $\sup_{n \ge 1}\{(1+\theta_n)\|x_n - p\|, \sigma_n\|x_n - x_{n-1}\|\} \le M$. From Algorithm 4 and the convexity of $\|\cdot\|^2$, we have
$$\begin{aligned} \|x_{n+1} - p\|^2 &= \|\beta_n(T^n w_n - p) + \gamma_n(z_n - p) + \delta_n(S z_n - p) + \gamma_n(u_n - z_n)\|^2 \\ &\le \beta_n\|T^n w_n - p\|^2 + (1-\beta_n)\Big\|\frac{1}{1-\beta_n}[\gamma_n(z_n - p) + \delta_n(S z_n - p)]\Big\|^2 \\ &\quad + 2\gamma_n\alpha_n\langle u_n - p, x_{n+1} - p\rangle + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle, \end{aligned}$$
which, together with Lemma 4, leads to
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le \beta_n(1+\theta_n)^2\|w_n - p\|^2 + (1-\beta_n)\|z_n - p\|^2 + 2\gamma_n\alpha_n\|u_n - p\|\|x_{n+1} - p\| + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle \\ &\le \beta_n(1+\theta_n)^2(\|x_n - p\|^2 + \theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M) + (1-\beta_n)[(1-\alpha_n)\|u_n - p\|^2 + 2\alpha_n\langle f(x_n) - p, z_n - p\rangle] \\ &\quad + \gamma_n\alpha_n(\|u_n - p\|^2 + \|x_{n+1} - p\|^2) + 2\gamma_n\alpha_n\langle p - f(x_n), x_{n+1} - p\rangle. \end{aligned}$$
By Step 3 of Algorithm 4 and Equation (30), we know that $\|u_n - p\|^2 \le \|x_n - p\|^2 + \theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M$. Hence, we have
$$\begin{aligned} \|x_{n+1} - p\|^2 &\le [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + \beta_n\theta_n 2M^2 + (1-\beta_n)(1-\alpha_n)(\theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M) \\ &\quad + 2\alpha_n\delta_n\langle f(x_n) - p, z_n - p\rangle + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\alpha_n(\theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M) \\ &\quad + 2\gamma_n\alpha_n\langle f(x_n) - p, z_n - x_{n+1}\rangle + \beta_n(1+\theta_n)^2(\theta_n 2M^2 + \sigma_n\|x_n - x_{n-1}\| 3M) \\ &\le [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n\|f(x_n) - p\|\|z_n - x_{n+1}\| + 2\alpha_n\delta_n\langle f(x_n) - p, x_n - p\rangle + 2\alpha_n\delta_n\langle f(x_n) - p, z_n - x_n\rangle \\ &\quad + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) + (1-\beta_n)\Big[\frac{\theta_n 2M^2(1 + \beta_n(1+\theta_n)^2)}{1-\beta_n} + \frac{\sigma_n\|x_n - x_{n-1}\| 3M(1 + \beta_n\theta_n(2+\theta_n))}{1-\beta_n}\Big] \\ &\le [1 - \alpha_n(1-\beta_n)]\|x_n - p\|^2 + 2\gamma_n\alpha_n\|f(x_n) - p\|\|z_n - x_{n+1}\| + 2\alpha_n\delta_n\delta\|x_n - p\|^2 + 2\alpha_n\delta_n\langle f(p) - p, x_n - p\rangle \\ &\quad + 2\alpha_n\delta_n\|f(x_n) - p\|\|z_n - x_n\| + \gamma_n\alpha_n(\|x_n - p\|^2 + \|x_{n+1} - p\|^2) \\ &\quad + (1-\beta_n)\Big[\frac{\theta_n 2M^2(1 + b(1+\theta_n)^2)}{1-b} + \frac{\sigma_n\|x_n - x_{n-1}\| 3M(1 + b\theta_n(2+\theta_n))}{1-b}\Big], \end{aligned}$$
which immediately yields Equation (29).
Step 4. We claim that $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega$ of the HVI Equation (12). Indeed, using the same reasoning as in Step 4 of the proof of Theorem 1, we derive the desired conclusion. This completes the proof. □
Next, we show how to solve the VIP and CFPP in the following illustrative example.
The initial point $x_0 = x_1$ is chosen randomly in $\mathbb{R} = (-\infty, \infty)$. Take $f(x) = \frac{1}{4}\sin x$, $\gamma = l = \mu = \frac{1}{2}$, $\sigma_n = \alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{1}{3}$, $\gamma_n = \frac{1}{6}$, and $\delta_n = \frac{1}{2}$. Then we know that $\delta = \frac{1}{4}$ and $f(\mathbb{R}) \subset [-\frac{1}{4}, \frac{1}{4}]$.
We first provide an example of a Lipschitz continuous and pseudomonotone mapping $A$, an asymptotically nonexpansive mapping $T$ and a strictly pseudocontractive mapping $S$ with $\Omega = \mathrm{Fix}(T) \cap \mathrm{Fix}(S) \cap \mathrm{VI}(C, A) \ne \emptyset$. Let $C = [-1.5, 1]$ and $H = \mathbb{R}$ with the inner product $\langle a, b \rangle = ab$ and induced norm $\|\cdot\| = |\cdot|$. Let $A, T, S : H \to H$ be defined as
$$Ax := \frac{1}{1+|\sin x|} - \frac{1}{1+|x|}, \quad Tx := \frac{4}{5}\sin x, \quad Sx := \frac{1}{3}x + \frac{1}{2}\sin x \quad \text{for all } x \in H.$$
We first show that $A$ is pseudomonotone and Lipschitz continuous with $L = 2$, and that $A(C)$ is bounded. Indeed, it is clear that $A(C)$ is bounded. Moreover, for all $x, y \in H$ we have
$$\|Ax - Ay\| = \Big|\frac{1}{1+|\sin x|} - \frac{1}{1+|x|} - \frac{1}{1+|\sin y|} + \frac{1}{1+|y|}\Big| \le \Big|\frac{|\sin y| - |\sin x|}{(1+|\sin x|)(1+|\sin y|)}\Big| + \Big|\frac{|y| - |x|}{(1+|x|)(1+|y|)}\Big| \le \|\sin x - \sin y\| + \|x - y\| \le 2\|x - y\|.$$
This implies that $A$ is Lipschitz continuous with $L = 2$. Next, we show that $A$ is pseudomonotone. For any given $x, y \in H$, the following implication holds:
$$\langle Ax, y - x \rangle = \Big(\frac{1}{1+|\sin x|} - \frac{1}{1+|x|}\Big)(y - x) \ge 0 \ \Longrightarrow \ \langle Ay, y - x \rangle = \Big(\frac{1}{1+|\sin y|} - \frac{1}{1+|y|}\Big)(y - x) \ge 0.$$
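As a quick numerical sanity check (ours, not part of the paper), one can sample the difference quotient of $A$ and confirm that it never exceeds the Lipschitz constant $L = 2$ derived above:

```python
import math
import random

def A(x):
    # A x = 1/(1 + |sin x|) - 1/(1 + |x|)
    return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))

random.seed(0)
worst = 0.0
for _ in range(100_000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    if x != y:
        # Lipschitz (difference) quotient |A(x) - A(y)| / |x - y|
        worst = max(worst, abs(A(x) - A(y)) / abs(x - y))

print(worst)  # empirically stays below the proven bound L = 2
```

Note also that $A(0) = 0$, consistent with $0 \in \mathrm{VI}(C, A)$ below.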
Furthermore, it is easy to see that $T$ is asymptotically nonexpansive with $\theta_n = (\frac{4}{5})^n$, $n \ge 1$, and that $\|T^{n+1}x_n - T^n x_n\| \to 0$ as $n \to \infty$. Indeed, we observe that
$$\|T^n x - T^n y\| \le \frac{4}{5}\|T^{n-1}x - T^{n-1}y\| \le \cdots \le \Big(\frac{4}{5}\Big)^n \|x - y\| \le (1+\theta_n)\|x - y\|,$$
and
$$\|T^{n+1}x_n - T^n x_n\| \le \Big(\frac{4}{5}\Big)^{n-1}\|T^2 x_n - T x_n\| = \Big(\frac{4}{5}\Big)^{n-1}\Big\|\frac{4}{5}\sin(T x_n) - \frac{4}{5}\sin x_n\Big\| \le 2\Big(\frac{4}{5}\Big)^n \to 0 \quad (n \to \infty).$$
It is clear that $\mathrm{Fix}(T) = \{0\}$ and
$$\lim_{n \to \infty} \frac{\theta_n}{\alpha_n} = \lim_{n \to \infty} \frac{(4/5)^n}{1/(n+1)} = 0.$$
Moreover, it is readily seen that $\sup_{n \ge 1} |(T^n - f)x_n| = \sup_{n \ge 1} |\frac{4}{5}\sin(T^{n-1}x_n) - \frac{1}{4}\sin x_n| \le \frac{21}{20} < \infty$. In addition, $S$ is strictly pseudocontractive with constant $\zeta = \frac{1}{4}$. Indeed, we observe that for all $x, y \in H$,
$$\|Sx - Sy\|^2 \le \Big[\frac{1}{3}\|x - y\| + \frac{1}{2}\|\sin x - \sin y\|\Big]^2 \le \|x - y\|^2 + \frac{1}{4}\|(I - S)x - (I - S)y\|^2.$$
It is clear that $(\gamma_n + \delta_n)\zeta = (\frac{1}{6} + \frac{1}{2}) \cdot \frac{1}{4} \le \frac{1}{6} = \gamma_n < (1 - 2\delta)\delta_n = (1 - 2 \cdot \frac{1}{4}) \cdot \frac{1}{2} = \frac{1}{4}$ for all $n \ge 1$. Therefore, $\Omega = \mathrm{Fix}(T) \cap \mathrm{Fix}(S) \cap \mathrm{VI}(C, A) = \{0\}$. In this case, Algorithm 3 can be rewritten as follows:
$$\begin{cases} w_n = T^n x_n + \frac{1}{n+1}(x_n - x_{n-1}), \\ y_n = P_C(w_n - \tau_n A w_n), \\ z_n = \frac{1}{n+1} f(x_n) + \frac{n}{n+1} P_{C_n}(w_n - \tau_n A y_n), \\ x_{n+1} = \frac{1}{3} T^n x_n + \frac{1}{6} P_{C_n}(w_n - \tau_n A y_n) + \frac{1}{2} S z_n, \end{cases} \quad n \ge 1,$$
where $C_n$ and $\tau_n$ are chosen as in Algorithm 3. Thus, by Theorem 1, we know that $\{x_n\}$ converges to $0 \in \Omega$ if and only if $|x_n - x_{n+1}| + |x_n - T^n x_n| \to 0$ as $n \to \infty$.
On the other hand, Algorithm 4 can be rewritten as follows:
$$\begin{cases} w_n = T^n x_n + \frac{1}{n+1}(x_n - x_{n-1}), \\ y_n = P_C(w_n - \tau_n A w_n), \\ z_n = \frac{1}{n+1} f(x_n) + \frac{n}{n+1} P_{C_n}(w_n - \tau_n A y_n), \\ x_{n+1} = \frac{1}{3} T^n w_n + \frac{1}{6} P_{C_n}(w_n - \tau_n A y_n) + \frac{1}{2} S z_n, \end{cases} \quad n \ge 1,$$
where $C_n$ and $\tau_n$ are chosen as in Algorithm 4. Thus, by Theorem 2, we know that $\{x_n\}$ converges to $0 \in \Omega$ if and only if $|x_n - x_{n+1}| + |x_n - T^n x_n| \to 0$ as $n \to \infty$.
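The concrete instance of Algorithm 4 above can be run directly. The sketch below is our illustration, not the authors' code: it assumes an Armijo-type step-size rule for $\tau_n$ (the largest $\tau \in \{\gamma, \gamma l, \gamma l^2, \ldots\}$ with $\tau|Aw_n - Ay_n| \le \mu|w_n - y_n|$, where $y_n = P_C(w_n - \tau A w_n)$), since Step 1 of the algorithm is stated earlier in the paper and not reproduced in this excerpt; all function names are ours.

```python
import math

def A(x): return 1.0 / (1.0 + abs(math.sin(x))) - 1.0 / (1.0 + abs(x))
def f(x): return 0.25 * math.sin(x)
def S(x): return x / 3.0 + 0.5 * math.sin(x)
def T(x): return 0.8 * math.sin(x)

def Tpow(x, n):                        # T^n x by n-fold composition
    for _ in range(n):
        x = T(x)
    return x

def proj_C(x, lo=-1.5, hi=1.0):        # P_C for C = [-1.5, 1]
    return min(max(x, lo), hi)

def proj_Cn(x, a, y):                  # P_{C_n}, C_n = { z : a*(z - y) <= 0 }
    v = a * (x - y)
    return x if (a == 0.0 or v <= 0.0) else x - v / a  # clip to boundary z = y

def line_search(w, gamma=0.5, l=0.5, mu=0.5, max_halvings=50):
    # assumed Armijo-type rule from Step 1 (not shown in this excerpt);
    # it terminates quickly here since A is 2-Lipschitz and mu = 1/2
    tau = gamma
    for _ in range(max_halvings):
        y = proj_C(w - tau * A(w))
        if tau * abs(A(w) - A(y)) <= mu * abs(w - y):
            return tau, y
        tau *= l
    return tau, proj_C(w - tau * A(w))

x_prev = x = 0.9                       # x_0 = x_1, chosen arbitrarily here
for n in range(1, 60):
    w = Tpow(x, n) + (x - x_prev) / (n + 1)
    tau, y = line_search(w)
    a = w - tau * A(w) - y             # normal of the half-space C_n
    u = proj_Cn(w - tau * A(y), a, y)  # P_{C_n}(w_n - tau_n * A y_n)
    z = f(x) / (n + 1) + n / (n + 1) * u
    x_prev, x = x, Tpow(w, n) / 3.0 + u / 6.0 + S(z) / 2.0

print(x)  # the iterates approach the common solution 0
```

Running the loop shows the iterates shrinking toward $0 = \Omega$, which matches the conclusion of Theorem 2, since here $|x_n - x_{n+1}| + |x_n - T^n x_n| \to 0$.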

Author Contributions

The authors made equal contributions to this paper. Conceptualization, methodology, formal analysis and investigation: L.-C.C., A.P., C.-F.W. and J.-C.Y.; writing—original draft preparation: L.-C.C. and A.P.; writing—review and editing: C.-F.W. and J.-C.Y.

Funding

This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). This research was also supported by the Ministry of Science and Technology, Taiwan [grant number: 107-2115-M-037-001].

Conflicts of Interest

The authors declare no conflict of interest.

