Article

On Mann-Type Subgradient-like Extragradient Method with Linear-Search Process for Hierarchical Variational Inequalities for Asymptotically Nonexpansive Mappings

1
Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2
Research Center for Interneural Computing, China Medical University, Taichung 40402, Taiwan
3
Department of Applied Mathematics, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
4
Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(24), 3322; https://doi.org/10.3390/math9243322
Submission received: 24 November 2021 / Revised: 8 December 2021 / Accepted: 14 December 2021 / Published: 20 December 2021
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications II)

Abstract

We propose two Mann-type subgradient-like extragradient iterations with a line-search procedure for a hierarchical variational inequality (HVI) with the common fixed-point problem (CFPP) constraint of a finite family of nonexpansive mappings and an asymptotically nonexpansive mapping in a real Hilbert space. Our methods combine the Mann iteration method, the subgradient extragradient method with a line-search process, and the viscosity approximation method. Under suitable assumptions, we prove that the sequences of iterates generated by our methods converge strongly to a solution of the HVI with the CFPP constraint.

1. Introduction

Let ⟨·,·⟩ and ∥·∥ denote the inner product and the induced norm of a real Hilbert space H. Let C ⊂ H be a nonempty closed convex set, and let P_C be the nearest-point projection from H onto C. Given T : C → H, we denote by Fix(T) = {x ∈ C : x = Tx} the set of fixed points of T. We say that S : C → C is asymptotically nonexpansive if there exists a sequence {θ_n} ⊂ [0, +∞) with lim_{n→∞} θ_n = 0 such that the following is the case.
∥S^n x − S^n y∥ ≤ (1 + θ_n)∥x − y∥ ∀n ≥ 1, x, y ∈ C. (1)
S is called nonexpansive if θ_n = 0 ∀n ≥ 1.
Suppose A : H → H is a continuous mapping. The variational inequality problem (VIP) is to find x* ∈ C such that ⟨Ax*, x − x*⟩ ≥ 0 ∀x ∈ C. We denote by VI(C, A) the set of solutions of the VIP. One of the most popular methods for solving the VIP is the extragradient method [1]: x_0 ∈ C,
y_n = P_C(x_n − τAx_n), x_{n+1} = P_C(x_n − τAy_n) ∀n ≥ 0, (2)
with τ ∈ (0, 1/L), where L is the Lipschitz constant of A. If VI(C, A) ≠ ∅, then the sequence {x_n} generated by (2) converges weakly to an element of VI(C, A). The extragradient method (2) has been studied by many authors (see, e.g., [2,3,4,5,6,7,8,9,10,11,12,13] and the references therein).
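For orientation, the two-projection iteration (2) is easy to sketch numerically. The setting below is purely illustrative and not from the paper: C is the closed unit ball (so P_C has a closed form) and A(x) = Mx with a skew-symmetric M, which is monotone and Lipschitz; the unique solution of this VIP is x* = 0.

```python
import numpy as np

# Illustrative data (not from the paper): A(x) = M x with M skew-symmetric,
# so A is monotone and L-Lipschitz, and C is the closed unit ball, whose
# metric projection has a closed form. The unique solution here is x* = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x
L = np.linalg.norm(M, 2)              # Lipschitz constant of A

def proj_ball(x, r=1.0):
    """Metric projection P_C onto the closed ball of radius r at the origin."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def extragradient(x0, tau, n_iter=200):
    """The extragradient method (2): two projections onto C per iteration."""
    x = x0.copy()
    for _ in range(n_iter):
        y = proj_ball(x - tau * A(x))   # prediction step
        x = proj_ball(x - tau * A(y))   # correction step
    return x

x = extragradient(np.array([0.9, -0.4]), tau=0.5 / L)
print(np.linalg.norm(x))                # decays towards 0
```

For this rotation-type operator, plain projected gradient descent circles without converging, which is exactly the situation the extragradient correction step repairs.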
In (2), one needs to compute two projections onto C per iteration, which is a drawback. In [3], Censor et al. modified (2) and introduced the subgradient extragradient method:
y_n = P_C(x_n − τAx_n), C_n = {x ∈ H : ⟨x_n − τAx_n − y_n, x − y_n⟩ ≤ 0}, x_{n+1} = P_{C_n}(x_n − τAy_n) ∀n ≥ 0, (3)
with τ ∈ (0, 1/L), where L is the Lipschitz constant of A. In 2018, by virtue of the inertial technique, Thong and Hieu [9] studied an inertial subgradient extragradient method: x_0, x_1 ∈ H,
w_n = x_n + α_n(x_n − x_{n−1}), y_n = P_C(w_n − τAw_n), C_n = {x ∈ H : ⟨w_n − τAw_n − y_n, x − y_n⟩ ≤ 0}, x_{n+1} = P_{C_n}(w_n − τAy_n) ∀n ≥ 1, (4)
with τ ∈ (0, 1/L), where L is the Lipschitz constant of A. Under some conditions, weak convergence of {x_n} was obtained. Ceng and Shang [11] introduced a hybrid inertial subgradient extragradient method with a line-search process for solving the VIP, in which A is pseudomonotone and Lipschitz continuous, under the common fixed-point problem (CFPP) constraint of finitely many nonexpansive mappings {T_i}_{i=1}^N and an asymptotically nonexpansive mapping T in a real Hilbert space H. Let f : H → H be a contraction with constant δ ∈ [0, 1), and let F : H → H be η-strongly monotone and κ-Lipschitzian with δ < τ := 1 − √(1 − ρ(2η − ρκ²)) for ρ ∈ (0, 2η/κ²). Let {α_n} ⊂ [0, 1] and {β_n}, {γ_n} ⊂ (0, 1) with β_n + γ_n < 1 ∀n ≥ 1. Moreover, one writes T_n := T_{n mod N} for integer n ≥ 1, with the mod function taking values in {1, 2, …, N}; that is, if n = jN + q for some integers j ≥ 0 and 0 ≤ q < N, then T_n = T_N if q = 0 and T_n = T_q if 0 < q < N. Their algorithm is formulated below.
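The computational point of (3) is that the second projection is onto a halfspace, which admits a closed form, so a potentially expensive projection onto C is needed only once per iteration. A minimal sketch, reusing the same illustrative ball-and-rotation VIP as above (all problem data here are assumptions, not the paper's):

```python
import numpy as np

# Same illustrative VIP: A(x) = M x monotone, C the closed unit ball.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda x: M @ x

def proj_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(x, a, b):
    """Closed-form projection onto the halfspace {z : <a, z> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def subgrad_extragradient(x0, tau, n_iter=200):
    """Method (3): the second projection is onto the halfspace
    C_n = {x : <x_n - tau*A(x_n) - y_n, x - y_n> <= 0}, never onto C itself."""
    x = x0.copy()
    for _ in range(n_iter):
        y = proj_ball(x - tau * A(x))
        a = (x - tau * A(x)) - y        # normal vector of the halfspace C_n
        x = proj_halfspace(x - tau * A(y), a, a @ y)
    return x

x = subgrad_extragradient(np.array([0.9, -0.4]), tau=0.5)
print(np.linalg.norm(x))
```

Since C ⊂ C_n, the halfspace projection keeps the Fejér-monotonicity of (2) while removing the second projection onto C.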
Under appropriate conditions, they proved the strong convergence of Algorithm 1 to an element of Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) with T_0 := T. Meanwhile, Reich et al. [12] suggested a modified projection-type method for solving the VIP with a pseudomonotone and uniformly continuous mapping A, given a sequence {α_n} ⊂ (0, 1) and a contraction f : C → C with constant δ ∈ [0, 1). Their algorithm is formulated below.
Algorithm 1 (see [11]). Initialization: Choose γ > 0, l ∈ (0, 1), μ ∈ (0, 1). Take x_0, x_1 ∈ H arbitrarily.
Iterative Steps: Compute x_{n+1} in this manner:
Step 1. Set w_n = T_n x_n + α_n(T_n x_n − T_n x_{n−1}) and compute y_n = P_C(w_n − τ_n A w_n),
where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} satisfying
τ∥Aw_n − Ay_n∥ ≤ μ∥w_n − y_n∥.
Step 2. Compute z_n = P_{C_n}(w_n − τ_n A y_n) with C_n := {x ∈ H : ⟨w_n − τ_n A w_n − y_n, x − y_n⟩ ≤ 0}.
Step 3. Compute x_{n+1} = β_n f(x_n) + γ_n x_n + ((1 − γ_n)I − β_n ρF)T_n z_n.
Again set n := n + 1 and go to Step 1.
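The step-size rule in Step 1 is a standard Armijo-type backtracking search: start from γ and multiply by l until the stated bound holds. A minimal sketch in Python (the operator A and the projection below are placeholders, not the paper's data):

```python
import numpy as np

def armijo_stepsize(A, proj, w, gamma=1.0, l=0.5, mu=0.9, max_backtracks=60):
    """Largest tau in {gamma, gamma*l, gamma*l**2, ...} such that
    tau*||A(w) - A(y)|| <= mu*||w - y||, where y = P_C(w - tau*A(w))."""
    tau = gamma
    for _ in range(max_backtracks):
        y = proj(w - tau * A(w))
        if tau * np.linalg.norm(A(w) - A(y)) <= mu * np.linalg.norm(w - y):
            return tau, y
        tau *= l          # shrink the trial step and try again
    raise RuntimeError("line search did not terminate")

# With A = I and P_C = I (an illustrative choice), the rule rejects
# tau = 1 and accepts at tau = 0.5.
tau, y = armijo_stepsize(lambda v: v, lambda v: v, np.array([1.0, 2.0]))
print(tau)
```

Because no Lipschitz constant of A appears in the test, the same rule works for merely uniformly continuous operators, which is what the line-search algorithms below exploit.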
Under mild conditions, strong convergence of Algorithm 2 to an element of VI(C, A) was established. Inspired by the above works, we propose two Mann-type subgradient-like extragradient algorithms with a line-search process for solving a hierarchical variational inequality (HVI) with the common fixed-point problem (CFPP) constraint of a finite family of nonexpansive mappings and an asymptotically nonexpansive mapping in Hilbert spaces. We combine the Mann iteration method, the subgradient extragradient method with a line-search process, and the viscosity approximation method, and we establish strong convergence results under suitable conditions. We also illustrate our theory with an example.
We organize the paper as follows: Some definitions and preliminary results are given in Section 2. In Section 3, we provide the convergence analysis of the proposed algorithms. In Section 4, our main results are applied to solve the VIP and CFPP in an illustrative example. Finally, some concluding remarks are given in Section 5.
Algorithm 2 (see [12]).
Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ). Let x_1 ∈ C be arbitrary.
Iterative Steps: Given the current iterate x_n, calculate x_{n+1} as follows:
Step 1. Compute y_n = P_C(x_n − λAx_n) and r_λ(x_n) := x_n − y_n. If r_λ(x_n) = 0, then stop; x_n is a solution of VI(C, A). Otherwise,
Step 2. Compute w_n = x_n − τ_n r_λ(x_n), where τ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying
⟨Ax_n − A(x_n − l^j r_λ(x_n)), r_λ(x_n)⟩ ≤ (μ/2)∥r_λ(x_n)∥².
Step 3. Compute x_{n+1} = α_n f(x_n) + (1 − α_n)P_{C_n}(x_n), where C_n := {x ∈ C : h_n(x) ≤ 0} and h_n(x) = ⟨Aw_n, x − x_n⟩ + (τ_n/2λ)∥r_λ(x_n)∥².
Again set n := n + 1 and go to Step 1.

2. Preliminaries

A mapping T : C → H is called the following:
(a)
L-Lipschitz continuous (or L-Lipschitzian) if there exists L > 0 such that ∥Tx − Ty∥ ≤ L∥x − y∥ ∀x, y ∈ C;
(b)
monotone if ⟨Tx − Ty, x − y⟩ ≥ 0 ∀x, y ∈ C;
(c)
pseudomonotone if ⟨Tx, y − x⟩ ≥ 0 ⟹ ⟨Ty, y − x⟩ ≥ 0 ∀x, y ∈ C;
(d)
α-strongly monotone if there exists α > 0 such that ⟨Tx − Ty, x − y⟩ ≥ α∥x − y∥² ∀x, y ∈ C;
(e)
sequentially weakly continuous if for each {x_n} ⊂ C, the relation holds: x_n ⇀ x ⟹ Tx_n ⇀ Tx.
It is known that every monotone operator is pseudomonotone; however, the converse fails. For each x ∈ H, there exists a unique nearest point in C, denoted by P_C x, such that ∥x − P_C x∥ ≤ ∥x − y∥ ∀y ∈ C; P_C is called the metric projection of H onto C. According to [14], we know that the following hold:
(a)
⟨x − y, P_C x − P_C y⟩ ≥ ∥P_C x − P_C y∥² ∀x, y ∈ H;
(b)
⟨x − P_C x, y − P_C x⟩ ≤ 0 ∀x ∈ H, y ∈ C;
(c)
∥x − y∥² ≥ ∥x − P_C x∥² + ∥y − P_C x∥² ∀x ∈ H, y ∈ C;
(d)
∥x − y∥² = ∥x∥² − ∥y∥² − 2⟨x − y, y⟩ ∀x, y ∈ H;
(e)
∥λx + μy∥² = λ∥x∥² + μ∥y∥² − λμ∥x − y∥² ∀x, y ∈ H, λ, μ ∈ [0, 1] with λ + μ = 1.
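Properties (a)-(e) are easy to sanity-check numerically; the following sketch does so for (b) and (e) with the unit-ball projection (an illustrative choice of C, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(x, r=1.0):
    """Metric projection onto the closed unit ball, a closed convex set."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = 2.0 * rng.standard_normal(3)                # arbitrary point of H
y = proj_ball(2.0 * rng.standard_normal(3))     # arbitrary point of C
px = proj_ball(x)

# Property (b): <x - P_C x, y - P_C x> <= 0 for every y in C.
assert (x - px) @ (y - px) <= 1e-12

# Property (e): ||l*u + m*v||^2 = l*||u||^2 + m*||v||^2 - l*m*||u - v||^2
# for l, m >= 0 with l + m = 1 (an algebraic identity, no projection needed).
u, v, lam = rng.standard_normal(3), rng.standard_normal(3), 0.3
lhs = np.linalg.norm(lam * u + (1 - lam) * v) ** 2
rhs = (lam * np.linalg.norm(u) ** 2 + (1 - lam) * np.linalg.norm(v) ** 2
       - lam * (1 - lam) * np.linalg.norm(u - v) ** 2)
assert abs(lhs - rhs) < 1e-10
```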
Lemma 1
(see [13]). Let H_1 and H_2 be two real Hilbert spaces. Suppose that A : H_1 → H_2 is uniformly continuous on bounded subsets of H_1 and M is a bounded subset of H_1. Then A(M) is bounded.
    The following inequality is an immediate consequence of the subdifferential inequality of the function (1/2)∥·∥²:
∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩ ∀x, y ∈ H.
Lemma 2
(see [15]). Let h be a real-valued function on H and define K := {x ∈ C : h(x) ≤ 0}. If K is nonempty and h is Lipschitz continuous on C with modulus θ > 0, then dist(x, K) ≥ θ^{−1} max{h(x), 0} ∀x ∈ C, where dist(x, K) denotes the distance from x to K.
Lemma 3
(see [3], Lemma 1). Let A : C → H be pseudomonotone and continuous. Then x* ∈ C is a solution of the VIP ⟨Ax*, x − x*⟩ ≥ 0 ∀x ∈ C, if and only if ⟨Ax, x − x*⟩ ≥ 0 ∀x ∈ C.
Lemma 4
(see [16]). Let {a_n} be a sequence of nonnegative numbers satisfying a_{n+1} ≤ (1 − λ_n)a_n + λ_nγ_n ∀n ≥ 1, where {λ_n} and {γ_n} are sequences of real numbers such that (i) {λ_n} ⊂ [0, 1] and ∑_{n=1}^∞ λ_n = ∞, and (ii) lim sup_{n→∞} γ_n ≤ 0 or ∑_{n=1}^∞ |λ_nγ_n| < ∞. Then lim_{n→∞} a_n = 0.
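Lemma 4 is the standard tool for deducing a_n → 0 from such a recursive inequality. A quick numerical illustration with particular sequences chosen purely for the demo (an assumption, not from the paper):

```python
import math

# Take lam_n = 1/(n+1), so sum lam_n diverges, and g_n = 1/sqrt(n) -> 0,
# hence limsup g_n <= 0; Lemma 4 then forces a_n -> 0 for any a_1 >= 0.
a = 1.0
for n in range(1, 200_000):
    lam = 1.0 / (n + 1)
    g = 1.0 / math.sqrt(n)
    a = (1 - lam) * a + lam * g       # the recursion of Lemma 4 (with equality)
print(a)                               # tends to 0 as the horizon grows
```

The point of condition (i) is visible here: ∑ λ_n = ∞ is what lets the recursion forget the initial value a_1.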
Lemma 5
(see [17]). Let X be a Banach space that admits a weakly continuous duality mapping, let C be a nonempty closed convex subset of X, and let T : C → C be an asymptotically nonexpansive mapping with Fix(T) ≠ ∅. Then I − T is demiclosed at zero; i.e., if {x_n} is a sequence in C such that x_n ⇀ x ∈ C and (I − T)x_n → 0, then (I − T)x = 0, where I is the identity mapping of X.

3. Main Results

In this section, we assume the following.
T : C → C is an asymptotically nonexpansive mapping and T_i : C → C is a nonexpansive mapping for i = 1, …, N, with the sequence {T_n}_{n=1}^∞ defined as in Algorithm 1.
A : H → H is pseudomonotone and uniformly continuous on C, such that ∥Az∥ ≤ lim inf_{n→∞} ∥Ax_n∥ for each {x_n} ⊂ C with x_n ⇀ z.
f : C → C is a contraction with constant δ ∈ [0, 1), and Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) ≠ ∅ with T_0 := T.
{σ_n} ⊂ [0, 1] and {α_n}, {β_n}, {γ_n} ⊂ (0, 1) are such that the following is the case:
(i)
α_n + β_n + γ_n = 1 and 0 < lim inf_{n→∞} γ_n;
(ii)
lim_{n→∞} α_n = 0 and ∑_{n=1}^∞ α_n = ∞;
(iii)
0 < lim inf_{n→∞} σ_n and lim_{n→∞} (θ_n/α_n) = 0.
Lemma 6.
The Armijo-type search rule (5) is well defined, and the following inequality holds: ⟨Aw_n, r_λ(w_n)⟩ ≥ λ^{−1}∥r_λ(w_n)∥². Recall that the Armijo-type search rule is a backtracking line search: it starts with a relatively large step size along the search direction and iteratively shrinks the step size until the decrease condition in (5) is observed.
Proof. 
Since l ∈ (0, 1) and A is uniformly continuous on C, one has lim_{j→∞} ⟨Aw_n − A(w_n − l^j r_λ(w_n)), r_λ(w_n)⟩ = 0. If r_λ(w_n) = 0, then it is clear that j_n = 0. If r_λ(w_n) ≠ 0, then there exists an integer j_n ≥ 0 satisfying (5). Moreover, since P_C is firmly nonexpansive, one knows that ∥x − P_C y∥² ≤ ⟨x − y, x − P_C y⟩ ∀x ∈ C, y ∈ H. Placing y = w_n − λAw_n and x = w_n, one obtains ∥w_n − P_C(w_n − λAw_n)∥² ≤ λ⟨Aw_n, w_n − P_C(w_n − λAw_n)⟩, and hence ⟨Aw_n, r_λ(w_n)⟩ ≥ λ^{−1}∥r_λ(w_n)∥². □
Lemma 7.
Let p ∈ Ω and let the function h_n be defined by (6). Then h_n(w_n) = (τ_n/2λ)∥r_λ(w_n)∥² and h_n(p) ≤ 0. In particular, if r_λ(w_n) ≠ 0, then h_n(w_n) > 0.
Proof. 
The first assertion is obvious; let us show the second one. Indeed, let p ∈ Ω. Then, by Lemma 3, one has ⟨At_n, t_n − p⟩ ≥ 0. Thus, the following is the case.
h_n(p) = ⟨At_n, p − w_n⟩ + (τ_n/2λ)∥r_λ(w_n)∥² = −⟨At_n, w_n − t_n⟩ − ⟨At_n, t_n − p⟩ + (τ_n/2λ)∥r_λ(w_n)∥² ≤ −τ_n⟨At_n, r_λ(w_n)⟩ + (τ_n/2λ)∥r_λ(w_n)∥². (7)
On the other hand, by (5) one has the following.
⟨Aw_n − At_n, r_λ(w_n)⟩ ≤ (μ/2)∥r_λ(w_n)∥². (8)
Thus, by Lemma 6, we obtain the following.
⟨At_n, r_λ(w_n)⟩ ≥ ⟨Aw_n, r_λ(w_n)⟩ − (μ/2)∥r_λ(w_n)∥² ≥ (1/λ − μ/2)∥r_λ(w_n)∥².
Combining (7) and (8), we obtain the following.
h_n(p) ≤ −(τ_n/2)(1/λ − μ)∥r_λ(w_n)∥². (9)
Consequently, since λ ∈ (0, 1/μ), we get h_n(p) ≤ 0, as asserted. □
Lemma 8.
Let {w_n}, {x_n}, {y_n}, {z_n} be bounded sequences generated by Algorithm 3. If x_n − x_{n+1} → 0, w_n − x_n → 0, w_n − y_n → 0, w_n − z_n → 0 and T_n z_n − T_{n+1} z_n → 0, and if there exists a subsequence {w_{n_k}} ⊂ {w_n} such that w_{n_k} ⇀ z ∈ C, then z ∈ Ω.
Algorithm 3. Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ). Pick x_1 ∈ C arbitrarily.
Iterative Steps: Given the current iterate x_n, calculate x_{n+1} as follows:
Step 1. Set w_n = (1 − σ_n)x_n + σ_n T_n x_n, and compute y_n = P_C(w_n − λAw_n) and r_λ(w_n) := w_n − y_n.
Step 2. Compute t_n = w_n − τ_n r_λ(w_n), where τ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying
⟨Aw_n − A(w_n − l^j r_λ(w_n)), w_n − y_n⟩ ≤ (μ/2)∥r_λ(w_n)∥². (5)
Step 3. Compute z_n = P_{C_n}(w_n) and x_{n+1} = α_n f(x_n) + β_n x_n + γ_n T_n z_n, where C_n := {x ∈ C : h_n(x) ≤ 0} and
h_n(x) = ⟨At_n, x − w_n⟩ + (τ_n/2λ)∥r_λ(w_n)∥². (6)
Set n := n + 1 and return to Step 1.
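To make the structure of Algorithm 3 concrete, here is a deliberately simplified numerical sketch. Everything in it is an illustrative assumption rather than the paper's setting: C is the unit ball, A(x) = Mx is a monotone rotation (hence pseudomonotone and uniformly continuous), every T_n is the identity (so θ_n = 0), f(x) = x/2, and the projection onto C_n = C ∩ {h_n ≤ 0} is only approximated by a few alternating projections.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])    # skew-symmetric: A is monotone
A = lambda x: M @ x
T = lambda x: x                             # all T_n = identity, theta_n = 0
f = lambda x: 0.5 * x                       # contraction with delta = 0.5

def proj_ball(x, r=1.0):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_Cn(x, a, b, sweeps=50):
    """Approximate projection onto C_n = {z in C : <a, z> <= b} by
    alternating halfspace/ball projections (good enough for a sketch)."""
    z = x.copy()
    for _ in range(sweeps):
        viol = a @ z - b
        if viol > 0 and a @ a > 0:
            z = z - (viol / (a @ a)) * a
        z = proj_ball(z)
    return z

def algorithm3(x, lam=0.5, mu=0.5, l=0.5, n_iter=2000):
    for n in range(1, n_iter + 1):
        sigma, alpha = 0.5, 1.0 / (n + 1)
        beta = gamma = (1.0 - alpha) / 2.0
        w = (1 - sigma) * x + sigma * T(x)              # Step 1
        y = proj_ball(w - lam * A(w))
        r = w - y
        tau = 1.0                                       # Step 2: Armijo rule (5)
        while tau > 1e-12 and (A(w) - A(w - tau * r)) @ r > (mu / 2) * (r @ r):
            tau *= l
        t = w - tau * r
        # Step 3: z_n = P_{C_n}(w_n), C_n = {x in C : h_n(x) <= 0} with
        # h_n(x) = <A(t_n), x - w_n> + (tau_n / (2*lam)) * ||r||^2.
        a = A(t)
        b = a @ w - (tau / (2 * lam)) * (r @ r)
        z = proj_Cn(w, a, b) if r @ r > 0 else w
        x = alpha * f(x) + beta * x + gamma * T(z)      # Mann/viscosity step
    return x

x = algorithm3(np.array([0.9, -0.4]))
print(np.linalg.norm(x))   # drifts towards x* = 0, the solution of the HVI
```

With these choices, Ω = {0} and the hierarchical solution is x* = 0; the viscosity term α_n f(x_n) is what drags the Fejér-monotone iterates toward that particular solution.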
Proof. 
By Algorithm 3, w_n − x_n = σ_n(T_n x_n − x_n) ∀n ≥ 1, and hence ∥w_n − x_n∥ = σ_n∥T_n x_n − x_n∥. Utilizing the assumptions lim inf_{n→∞} σ_n > 0 and w_n − x_n → 0, we have the following.
lim_{n→∞} ∥x_n − T_n x_n∥ = 0. (10)
By Algorithm 3, we obtain x_{n+1} − z_n = α_n(f(x_n) − z_n) + β_n(x_n − z_n) + γ_n(T_n z_n − z_n), which immediately yields the following.
γ_n∥T_n z_n − z_n∥ ≤ ∥x_{n+1} − z_n∥ + α_n(∥f(x_n)∥ + ∥z_n∥) + β_n∥x_n − z_n∥ ≤ ∥x_{n+1} − x_n∥ + 2(∥x_n − w_n∥ + ∥w_n − z_n∥) + α_n(∥f(x_n)∥ + ∥z_n∥).
Since x_n − x_{n+1} → 0, w_n − x_n → 0, w_n − z_n → 0, α_n → 0, lim inf_{n→∞} γ_n > 0 and {x_n}, {z_n} are bounded, we obtain lim_{n→∞} ∥z_n − T_n z_n∥ = 0, which together with T_n z_n − T_{n+1} z_n → 0 implies the following.
∥z_n − Tz_n∥ ≤ ∥z_n − T_n z_n∥ + ∥T_n z_n − T_{n+1} z_n∥ + ∥T_{n+1} z_n − Tz_n∥ ≤ ∥z_n − T_n z_n∥ + ∥T_n z_n − T_{n+1} z_n∥ + (1 + θ_1)∥T_n z_n − z_n∥ = (2 + θ_1)∥z_n − T_n z_n∥ + ∥T_n z_n − T_{n+1} z_n∥ → 0 (n → ∞). (11)
Moreover, from y_n = P_C(w_n − λAw_n), we have ⟨w_n − λAw_n − y_n, x − y_n⟩ ≤ 0 ∀x ∈ C, and hence the following is the case.
(1/λ)⟨w_n − y_n, x − y_n⟩ + ⟨Aw_n, y_n − w_n⟩ ≤ ⟨Aw_n, x − w_n⟩ ∀x ∈ C. (12)
According to the uniform continuity of A on C, one knows that {Aw_n} is bounded (due to Lemma 1). Note that {y_n} is bounded as well. Thus, from (12), we obtain lim inf_{k→∞} ⟨Aw_{n_k}, x − w_{n_k}⟩ ≥ 0 ∀x ∈ C. Meanwhile, observe that ⟨Ay_n, x − y_n⟩ = ⟨Ay_n − Aw_n, x − w_n⟩ + ⟨Aw_n, x − w_n⟩ + ⟨Ay_n, w_n − y_n⟩. Since w_n − y_n → 0, the uniform continuity of A yields ∥Aw_n − Ay_n∥ → 0, which together with (12) gives lim inf_{k→∞} ⟨Ay_{n_k}, x − y_{n_k}⟩ ≥ 0 ∀x ∈ C.
Next, we show that lim_{n→∞} ∥x_n − T_r x_n∥ = 0 for r = 1, …, N. Indeed, note that for i = 1, …, N, the following is the case.
∥x_n − T_{n+i} x_n∥ ≤ ∥x_n − x_{n+i}∥ + ∥x_{n+i} − T_{n+i} x_{n+i}∥ + ∥T_{n+i} x_{n+i} − T_{n+i} x_n∥ ≤ 2∥x_n − x_{n+i}∥ + ∥x_{n+i} − T_{n+i} x_{n+i}∥.
Hence, from (10) and the assumption x_n − x_{n+1} → 0, we obtain lim_{n→∞} ∥x_n − T_{n+i} x_n∥ = 0 for i = 1, …, N. This immediately implies that the following is the case.
lim_{n→∞} ∥x_n − T_r x_n∥ = 0 for r = 1, …, N. (13)
We now take a sequence {ε_k} ⊂ (0, 1) satisfying ε_k ↓ 0 as k → ∞. For each k ≥ 1, we denote by m_k the smallest positive integer such that the following is the case.
⟨Ay_{n_j}, x − y_{n_j}⟩ + ε_k ≥ 0 ∀j ≥ m_k. (14)
Since {ε_k} is decreasing, it is clear that {m_k} is increasing. Noticing that Ay_{m_k} ≠ 0 ∀k ≥ 1 (otherwise, y_{m_k} would be a solution), we set u_{m_k} = Ay_{m_k}/∥Ay_{m_k}∥², and we obtain ⟨Ay_{m_k}, u_{m_k}⟩ = 1 ∀k ≥ 1. Thus, from (14), we obtain ⟨Ay_{m_k}, x + ε_k u_{m_k} − y_{m_k}⟩ ≥ 0 ∀k ≥ 1. Again from the pseudomonotonicity of A, we have ⟨A(x + ε_k u_{m_k}), x + ε_k u_{m_k} − y_{m_k}⟩ ≥ 0 ∀k ≥ 1. This immediately results in
⟨Ax, x − y_{m_k}⟩ ≥ ⟨Ax − A(x + ε_k u_{m_k}), x + ε_k u_{m_k} − y_{m_k}⟩ − ε_k⟨Ax, u_{m_k}⟩ ∀k ≥ 1. (15)
We claim that lim_{k→∞} ε_k u_{m_k} = 0. Indeed, from w_{n_k} ⇀ z ∈ C and w_n − y_n → 0, we obtain y_{n_k} ⇀ z. Using the assumption on A, in place of the sequentially weak continuity of A, we obtain 0 < ∥Az∥ ≤ lim inf_{k→∞} ∥Ay_{n_k}∥ (otherwise, if Az = 0, then z is a solution). Note that {y_{m_k}} ⊂ {y_{n_k}} and ε_k → 0 as k → ∞. Thus, it follows that 0 ≤ lim sup_{k→∞} ∥ε_k u_{m_k}∥ = lim sup_{k→∞} (ε_k/∥Ay_{m_k}∥) ≤ lim sup_{k→∞} ε_k / lim inf_{k→∞} ∥Ay_{n_k}∥ = 0. Hence, we obtain ε_k u_{m_k} → 0 as k → ∞.
Next, we show that z ∈ Ω. Indeed, from w_n − x_n → 0 and w_{n_k} ⇀ z, we obtain x_{n_k} ⇀ z. From (13), we have x_{n_k} − T_r x_{n_k} → 0 for r = 1, …, N. Note that Lemma 5 guarantees the demiclosedness of I − T_r at zero for r = 1, …, N. Thus, z ∈ Fix(T_r). Since r is an arbitrary element of the finite set {1, …, N}, we obtain z ∈ ⋂_{r=1}^N Fix(T_r). Simultaneously, from w_n − z_n → 0 and w_{n_k} ⇀ z, we obtain z_{n_k} ⇀ z. From (11), we have z_{n_k} − Tz_{n_k} → 0. From Lemma 5, it follows that I − T is demiclosed at zero, and hence we obtain (I − T)z = 0, i.e., z ∈ Fix(T). On the other hand, letting k → ∞, we deduce that the right-hand side of (15) tends to zero by the uniform continuity of A, the boundedness of {y_{m_k}}, {u_{m_k}}, and the limit lim_{k→∞} ε_k u_{m_k} = 0. Thus, we obtain ⟨Ax, x − z⟩ = lim inf_{k→∞} ⟨Ax, x − y_{m_k}⟩ ≥ 0 ∀x ∈ C. By Lemma 3, we have z ∈ VI(C, A). Therefore, z ∈ ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) = Ω. □
Lemma 9.
Let {w_n} be the sequence constructed by Algorithm 3. Then, the following is the case.
lim_{n→∞} τ_n∥r_λ(w_n)∥² = 0 ⟹ lim_{n→∞} ∥w_n − y_n∥ = 0. (16)
Proof. 
To show the conclusion, we consider two cases. In the case when lim inf_{n→∞} τ_n > 0, we may assume that there exists a constant τ > 0 such that τ_n ≥ τ > 0 ∀n ≥ 1, which hence yields the following.
∥w_n − y_n∥² = (1/τ_n)·τ_n∥w_n − y_n∥² ≤ (1/τ)·τ_n∥w_n − y_n∥² = (1/τ)·τ_n∥r_λ(w_n)∥².
This together with lim_{n→∞} τ_n∥r_λ(w_n)∥² = 0 results in lim_{n→∞} ∥w_n − y_n∥ = 0.
In the case when lim inf_{n→∞} τ_n = 0, we may pick a subsequence {n_k} of {n} such that the following is the case.
lim_{k→∞} τ_{n_k} = 0 and lim_{k→∞} ∥w_{n_k} − y_{n_k}∥ = a > 0. (17)
Let υ_{n_k} = (1/l)τ_{n_k} y_{n_k} + (1 − (1/l)τ_{n_k})w_{n_k}. Then υ_{n_k} = w_{n_k} − (1/l)τ_{n_k}(w_{n_k} − y_{n_k}). Since lim_{n→∞} τ_n∥r_λ(w_n)∥² = 0, we have the following.
lim_{k→∞} ∥υ_{n_k} − w_{n_k}∥² = lim_{k→∞} (1/l²)τ_{n_k}·τ_{n_k}∥w_{n_k} − y_{n_k}∥² = 0. (18)
From the step-size rule (5) (the exponent j_{n_k} − 1 violates it) and the definition of υ_{n_k}, it follows that the following is the case.
⟨Aw_{n_k} − Aυ_{n_k}, w_{n_k} − y_{n_k}⟩ > (μ/2)∥w_{n_k} − y_{n_k}∥².
Since A is uniformly continuous on bounded subsets of C, (18) ensures the following.
lim_{k→∞} ∥Aw_{n_k} − Aυ_{n_k}∥ = 0.
This, however, contradicts (17). Thus, it follows that lim_{n→∞} ∥w_n − y_n∥ = 0. □
Theorem 1.
Let {x_n} be the sequence constructed by Algorithm 3. Assume that T_n z_n − T_{n+1} z_n → 0. Then, the following is the case:
x_n → x* ∈ Ω ⟺ x_n − x_{n+1} → 0 and x_n − y_n → 0,
where x* ∈ Ω is the unique solution of the VIP: ⟨(I − f)x*, p − x*⟩ ≥ 0 ∀p ∈ Ω.
Proof. 
Since 0 < lim inf_{n→∞} γ_n and lim_{n→∞} θ_n/α_n = 0, we may assume, without loss of generality, that {γ_n} ⊂ [a, 1) ⊂ (0, 1) and θ_n ≤ α_n(1 − δ)/2 ∀n ≥ 1. We claim that P_Ω f : C → C is a contraction. Indeed, it is clear that ∥P_Ω f(x) − P_Ω f(y)∥ ≤ δ∥x − y∥ ∀x, y ∈ C, which implies that P_Ω f is a contraction. Banach's Contraction Mapping Principle guarantees that P_Ω f has a unique fixed point x* ∈ C, that is, x* = P_Ω f(x*). Thus, there exists a unique solution x* ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) of the VIP
⟨(I − f)x*, p − x*⟩ ≥ 0 ∀p ∈ Ω. (20)
First, we show the necessity. If x_n → x* ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A), then x* = T_i x* for i = 0, 1, …, N and x* = P_C(x* − λAx*), which together with Algorithm 3 imply the following.
∥w_n − x*∥ = ∥(1 − σ_n)(x_n − x*) + σ_n(T_n x_n − T_n x*)∥ ≤ ∥x_n − x*∥ → 0 (n → ∞).
Hence, using the uniform continuity of A on C, we obtain ∥Aw_n − Ax*∥ → 0 and the following is the case.
∥y_n − x_n∥ ≤ ∥y_n − x*∥ + ∥x_n − x*∥ = ∥P_C(w_n − λAw_n) − P_C(x* − λAx*)∥ + ∥x_n − x*∥ ≤ ∥w_n − x*∥ + λ∥Aw_n − Ax*∥ + ∥x_n − x*∥ → 0 (n → ∞).
In addition, it is clear that the following is obtained.
∥x_n − x_{n+1}∥ ≤ ∥x_n − x*∥ + ∥x_{n+1} − x*∥ → 0 (n → ∞).
Next, we show the sufficiency. To this end, we assume lim_{n→∞}(∥x_n − x_{n+1}∥ + ∥x_n − y_n∥) = 0 and divide the proof of the sufficiency into several steps.
Step 1. We show that {x_n} is bounded. Indeed, take an arbitrary p ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A). Then Tp = p and T_n p = p ∀n ≥ 1. We claim that the following inequality holds.
∥z_n − p∥² ≤ ∥w_n − p∥² − dist²(w_n, C_n) ∀p ∈ Ω. (21)
Indeed, one has the following.
∥z_n − p∥² = ∥P_{C_n} w_n − p∥² ≤ ∥w_n − p∥² − ∥P_{C_n} w_n − w_n∥² = ∥w_n − p∥² − dist²(w_n, C_n).
Thus, the following is the case.
∥z_n − p∥ ≤ ∥w_n − p∥ ∀n ≥ 1. (22)
Then, the following is obtained:
∥w_n − p∥ ≤ (1 − σ_n)∥x_n − p∥ + σ_n∥T_n x_n − p∥ ≤ ∥x_n − p∥,
which together with (22) yields the following.
∥z_n − p∥ ≤ ∥w_n − p∥ ≤ ∥x_n − p∥ ∀n ≥ 1. (23)
Thus, from (23) and α_n + β_n + γ_n = 1 ∀n ≥ 1, the following is the case.
∥x_{n+1} − p∥ = ∥α_n f(x_n) + β_n x_n + γ_n T_n z_n − p∥ ≤ α_n∥f(x_n) − p∥ + β_n∥x_n − p∥ + γ_n∥T_n z_n − p∥ ≤ α_n(∥f(x_n) − f(p)∥ + ∥f(p) − p∥) + β_n∥x_n − p∥ + γ_n(1 + θ_n)∥z_n − p∥ ≤ α_n(δ∥x_n − p∥ + ∥f(p) − p∥) + β_n∥x_n − p∥ + γ_n∥z_n − p∥ + θ_n∥z_n − p∥ ≤ α_n(δ∥x_n − p∥ + ∥f(p) − p∥) + β_n∥x_n − p∥ + γ_n∥x_n − p∥ + (α_n(1 − δ)/2)∥x_n − p∥ = [1 − α_n(1 − δ)/2]∥x_n − p∥ + α_n∥f(p) − p∥ = [1 − α_n(1 − δ)/2]∥x_n − p∥ + (α_n(1 − δ)/2)·(2∥f(p) − p∥/(1 − δ)) ≤ max{∥x_n − p∥, 2∥f(p) − p∥/(1 − δ)}.
Therefore, we obtain ∥x_n − p∥ ≤ max{∥x_1 − p∥, 2∥f(p) − p∥/(1 − δ)} ∀n ≥ 1. Thus, {x_n} is bounded, and so are the sequences {w_n}, {y_n}, {z_n}, {f(x_n)}, {At_n}, {T_n z_n}, {T_n x_n}.
Step 2. We show that the following is the case.
γ_n∥z_n − w_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩.
To prove this, we first note that the following is the case:
∥x_{n+1} − p∥² = ∥α_n(f(x_n) − p) + β_n(x_n − p) + γ_n(T_n z_n − p)∥² ≤ ∥β_n(x_n − p) + γ_n(T_n z_n − p)∥² + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩ ≤ β_n∥x_n − p∥² + γ_n(1 + θ_n)²∥z_n − p∥² + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩ ≤ β_n∥x_n − p∥² + γ_n∥z_n − p∥² + θ_n(2 + θ_n)∥z_n − p∥² + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩ ≤ β_n∥x_n − p∥² + γ_n∥z_n − p∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩, (24)
where M_1 > 0 is such that sup_{n≥1} (2 + θ_n)∥z_n − p∥² ≤ M_1. On the other hand, from (21) and (23) one has the following.
∥z_n − p∥² = ∥P_{C_n} w_n − p∥² ≤ ∥w_n − p∥² − ∥z_n − w_n∥² ≤ ∥x_n − p∥² − ∥z_n − w_n∥². (25)
Substituting (25) into (24), one obtains the following.
∥x_{n+1} − p∥² ≤ β_n∥x_n − p∥² + γ_n(∥x_n − p∥² − ∥z_n − w_n∥²) + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩ = (1 − α_n)∥x_n − p∥² − γ_n∥z_n − w_n∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩ ≤ ∥x_n − p∥² − γ_n∥z_n − w_n∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩.
This immediately implies the following.
γ_n∥z_n − w_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩.
Step 3. We show the following.
γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n∥f(x_n) − p∥² + θ_n M_1.
Indeed, we claim that for some L > 0, the following holds.
∥z_n − p∥² ≤ ∥w_n − p∥² − [(τ_n/2λL)∥r_λ(w_n)∥²]². (26)
Since the sequence {At_n} is bounded, there exists L > 0 such that ∥At_n∥ ≤ L ∀n ≥ 1. This ensures that for all u, v ∈ C_n, the following is the case:
|h_n(u) − h_n(v)| = |⟨At_n, u − v⟩| ≤ ∥At_n∥∥u − v∥ ≤ L∥u − v∥,
which hence implies that h_n(·) is L-Lipschitz continuous on C_n. By Lemmas 2 and 7, we obtain
dist(w_n, C_n) ≥ (1/L)h_n(w_n) = (τ_n/2λL)∥r_λ(w_n)∥². (27)
Combining (21) and (27), we obtain (26). From Algorithm 3, (23), and (26), the following is obtained.
∥x_{n+1} − p∥² ≤ α_n∥f(x_n) − p∥² + β_n∥x_n − p∥² + γ_n∥T_n z_n − p∥² ≤ α_n∥f(x_n) − p∥² + β_n∥x_n − p∥² + γ_n(1 + θ_n)²∥z_n − p∥² ≤ α_n∥f(x_n) − p∥² + β_n∥x_n − p∥² + γ_n∥z_n − p∥² + θ_n(2 + θ_n)∥z_n − p∥² ≤ α_n∥f(x_n) − p∥² + β_n∥x_n − p∥² + γ_n[∥w_n − p∥² − [(τ_n/2λL)∥r_λ(w_n)∥²]²] + θ_n M_1 ≤ α_n∥f(x_n) − p∥² + (1 − α_n)∥x_n − p∥² − γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]² + θ_n M_1 ≤ α_n∥f(x_n) − p∥² + θ_n M_1 + ∥x_n − p∥² − γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]².
This immediately yields the following.
γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n∥f(x_n) − p∥² + θ_n M_1.
Step 4. We show the following.
∥x_{n+1} − p∥² ≤ (1 − α_n(1 − δ))∥x_n − p∥² + α_n(1 − δ)[2⟨f(p) − p, x_{n+1} − p⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)]. (28)
Indeed, from Algorithm 3 and (23), one obtains the following.
∥x_{n+1} − p∥² = ∥α_n(f(x_n) − f(p)) + β_n(x_n − p) + γ_n(T_n z_n − p) + α_n(f(p) − p)∥² ≤ ∥α_n(f(x_n) − f(p)) + β_n(x_n − p) + γ_n(T_n z_n − p)∥² + 2α_n⟨f(p) − p, x_{n+1} − p⟩ ≤ α_n∥f(x_n) − f(p)∥² + β_n∥x_n − p∥² + γ_n(1 + θ_n)²∥z_n − p∥² + 2α_n⟨f(p) − p, x_{n+1} − p⟩ ≤ δα_n∥x_n − p∥² + β_n∥x_n − p∥² + γ_n∥z_n − p∥² + θ_n(2 + θ_n)∥z_n − p∥² + 2α_n⟨f(p) − p, x_{n+1} − p⟩ ≤ δα_n∥x_n − p∥² + β_n∥x_n − p∥² + γ_n∥x_n − p∥² + θ_n M_1 + 2α_n⟨f(p) − p, x_{n+1} − p⟩ = [1 − α_n(1 − δ)]∥x_n − p∥² + θ_n M_1 + 2α_n⟨f(p) − p, x_{n+1} − p⟩ = (1 − α_n(1 − δ))∥x_n − p∥² + α_n(1 − δ)[2⟨f(p) − p, x_{n+1} − p⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)].
Step 5. Letting p = x*, we deduce from (28) that the following is the case.
∥x_{n+1} − x*∥² ≤ (1 − α_n(1 − δ))∥x_n − x*∥² + α_n(1 − δ)[2⟨f(x*) − x*, x_{n+1} − x*⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)]. (29)
We need to show that lim sup_{n→∞} ⟨f(x*) − x*, x_{n+1} − x*⟩ ≤ 0. Substituting p = x* in Step 2, we obtain the following.
γ_n∥z_n − w_n∥² ≤ ∥x_n − x*∥² − ∥x_{n+1} − x*∥² + θ_n M_1 + 2α_n⟨f(x_n) − x*, x_{n+1} − x*⟩ ≤ ∥x_n − x_{n+1}∥(∥x_n − x*∥ + ∥x_{n+1} − x*∥) + θ_n M_1 + 2α_n∥f(x_n) − x*∥∥x_{n+1} − x*∥.
Since 0 < lim inf_{n→∞} γ_n, θ_n → 0, α_n → 0 and x_n − x_{n+1} → 0, from the boundedness of {x_n} one obtains the following.
lim_{n→∞} ∥w_n − z_n∥ = 0. (30)
Substituting p = x* in Step 3, we obtain the following.
γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]² ≤ ∥x_n − x*∥² − ∥x_{n+1} − x*∥² + α_n∥f(x_n) − x*∥² + θ_n M_1 ≤ ∥x_n − x_{n+1}∥(∥x_n − x*∥ + ∥x_{n+1} − x*∥) + θ_n M_1 + α_n∥f(x_n) − x*∥².
Since 0 < lim inf_{n→∞} γ_n, θ_n → 0, α_n → 0, and x_n − x_{n+1} → 0 (due to the assumption), from the boundedness of {x_n} one obtains lim_{n→∞} [(τ_n/2λL)∥r_λ(w_n)∥²]² = 0, and hence lim_{n→∞} τ_n∥r_λ(w_n)∥² = 0.
Hence, by Lemma 9, we deduce the following.
lim_{n→∞} ∥w_n − y_n∥ = 0. (31)
Obviously, the assumption x_n − y_n → 0 together with (31) implies the following.
∥w_n − x_n∥ ≤ ∥w_n − y_n∥ + ∥y_n − x_n∥ → 0 (n → ∞). (32)
From the boundedness of {x_n}, it follows that there exists a subsequence {x_{n_k}} of {x_n} such that the following is the case.
lim sup_{n→∞} ⟨f(x*) − x*, x_n − x*⟩ = lim_{k→∞} ⟨f(x*) − x*, x_{n_k} − x*⟩. (33)
Since H is reflexive and {x_n} is bounded, we may assume, without loss of generality, that x_{n_k} ⇀ x̃. Thus, from (33), one obtains the following.
lim sup_{n→∞} ⟨f(x*) − x*, x_n − x*⟩ = lim_{k→∞} ⟨f(x*) − x*, x_{n_k} − x*⟩ = ⟨f(x*) − x*, x̃ − x*⟩. (34)
Thus, it follows from w_n − x_n → 0 (due to (32)) and x_{n_k} ⇀ x̃ that w_{n_k} ⇀ x̃. Since x_n − x_{n+1} → 0, w_n − x_n → 0, w_n − y_n → 0, w_n − z_n → 0 and w_{n_k} ⇀ x̃, by Lemma 8, we infer that x̃ ∈ Ω. Hence, from (20) and (34), one obtains the following:
lim sup_{n→∞} ⟨f(x*) − x*, x_n − x*⟩ = ⟨f(x*) − x*, x̃ − x*⟩ ≤ 0,
which immediately results in the following.
lim sup_{n→∞} ⟨f(x*) − x*, x_{n+1} − x*⟩ = lim sup_{n→∞} [⟨f(x*) − x*, x_{n+1} − x_n⟩ + ⟨f(x*) − x*, x_n − x*⟩] ≤ lim sup_{n→∞} [∥f(x*) − x*∥∥x_{n+1} − x_n∥ + ⟨f(x*) − x*, x_n − x*⟩] ≤ 0.
Note that {α_n(1 − δ)} ⊂ [0, 1], ∑_{n=1}^∞ α_n(1 − δ) = ∞, and the following is the case.
lim sup_{n→∞} [2⟨f(x*) − x*, x_{n+1} − x*⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)] ≤ 0.
Consequently, by applying Lemma 4 to (29), one has lim_{n→∞} ∥x_n − x*∥ = 0. This completes the proof. □
Theorem 2.
Let T : C → C be nonexpansive and let the sequence {x_n} be constructed by the modified version of Algorithm 3; that is, for any initial x_1 ∈ C, the following is the case:
w_n = (1 − σ_n)x_n + σ_n T_n x_n, y_n = P_C(w_n − λAw_n), t_n = (1 − τ_n)w_n + τ_n y_n, z_n = P_{C_n}(w_n), x_{n+1} = α_n f(x_n) + β_n x_n + γ_n Tz_n ∀n ≥ 1, (35)
where, for each n ≥ 1, C_n and τ_n are chosen as in Algorithm 3. Then, the following is the case:
x_n → x* ∈ Ω ⟺ x_n − x_{n+1} → 0 and x_n − y_n → 0,
where x* ∈ Ω is the unique solution of the VIP: ⟨(I − f)x*, p − x*⟩ ≥ 0 ∀p ∈ Ω.
Proof. 
The necessity is obvious. Thus, we show the sufficiency. Assume lim_{n→∞}(∥x_n − x_{n+1}∥ + ∥x_n − y_n∥) = 0 and divide the rest of the proof into several steps.
Step 1. We show that {x_n} is bounded. Indeed, using the same argument as in Step 1 of the proof of Theorem 1, we obtain the desired assertion.
Step 2. We obtain the following:
γ_n∥z_n − w_n∥² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + θ_n M_1 + 2α_n⟨f(x_n) − p, x_{n+1} − p⟩, (36)
for some M 1 > 0 . Indeed, using the same arguments as in Step 2 of the proof of Theorem 1, we obtain the desired assertion.
Step 3. We prove that the following is the case.
γ_n[(τ_n/2λL)∥r_λ(w_n)∥²]² ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n∥f(x_n) − p∥² + θ_n M_1. (37)
Indeed, arguments similar to those in Step 3 of the proof of Theorem 1 provide the assertion.
Step 4. We show the following.
∥x_{n+1} − p∥² ≤ (1 − α_n(1 − δ))∥x_n − p∥² + α_n(1 − δ)[2⟨f(p) − p, x_{n+1} − p⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)].
Indeed, arguing as in Step 4 of the proof of Theorem 1, we obtain the desired conclusion.
Step 5. {x_n} converges strongly to the unique solution x* ∈ Ω of the VIP (20). Indeed, substituting p = x*, we deduce from Step 4 that the following is the case.
∥x_{n+1} − x*∥² ≤ (1 − α_n(1 − δ))∥x_n − x*∥² + α_n(1 − δ)[2⟨f(x*) − x*, x_{n+1} − x*⟩/(1 − δ) + (θ_n/α_n)·M_1/(1 − δ)].
We show that lim sup_{n→∞} ⟨f(x*) − x*, x_{n+1} − x*⟩ ≤ 0. Using the same arguments as those for (30) and (31), we obtain the following.
lim_{n→∞} ∥w_n − z_n∥ = 0 and lim_{n→∞} ∥w_n − y_n∥ = 0. (38)
Now, the following is obtained.
x_{n+1} − x_n = α_n(f(x_n) − x_n) + γ_n(Tz_n − x_n) = α_n(f(x_n) − x_n) + γ_n(Tz_n − z_n) + γ_n(z_n − w_n + w_n − y_n + y_n − x_n).
From (38), α_n → 0, x_n − x_{n+1} → 0, x_n − y_n → 0, {γ_n} ⊂ [a, 1) ⊂ (0, 1), and the boundedness of {x_n}, {f(x_n)}, it follows that, as n → ∞,
∥Tz_n − z_n∥ = (1/γ_n)∥x_{n+1} − x_n − α_n(f(x_n) − x_n) − γ_n(z_n − w_n + w_n − y_n + y_n − x_n)∥ ≤ (1/a)[∥x_{n+1} − x_n∥ + α_n(∥f(x_n)∥ + ∥x_n∥) + ∥z_n − w_n∥ + ∥w_n − y_n∥ + ∥y_n − x_n∥] → 0.
Obviously, combining (38) and x_n − y_n → 0 guarantees the following.
∥w_n − x_n∥ ≤ ∥w_n − y_n∥ + ∥y_n − x_n∥ → 0 (n → ∞).
The rest of the proof is similar to the arguments in Step 5 of the proof of Theorem 1. □
Next, we introduce a modified Mann-type subgradient-like extragradient algorithm.
Note that Lemmas 6–9 remain valid for Algorithm 4.
Algorithm 4. Initialization: Given μ > 0, l ∈ (0, 1), λ ∈ (0, 1/μ). Let x_1 ∈ C be arbitrary.
Iterative Steps: Given the current iterate x_n, calculate x_{n+1} as follows:
Step 1. Set w_n = (1 − σ_n)x_n + σ_n T_n x_n, and compute y_n = P_C(w_n − λAw_n) and r_λ(w_n) := w_n − y_n.
Step 2. Compute t_n = w_n − τ_n r_λ(w_n), where τ_n := l^{j_n} and j_n is the smallest nonnegative integer j satisfying
⟨Aw_n − A(w_n − l^j r_λ(w_n)), w_n − y_n⟩ ≤ (μ/2)∥r_λ(w_n)∥².
Step 3. Compute z_n = P_{C_n}(w_n) and x_{n+1} = α_n f(x_n) + β_n w_n + γ_n T_n z_n, where C_n := {x ∈ C : h_n(x) ≤ 0} and
h_n(x) = ⟨At_n, x − w_n⟩ + (τ_n/2λ)∥r_λ(w_n)∥².
Again set n := n + 1 and go to Step 1.
Theorem 3.
Let {x_n} be the sequence constructed by Algorithm 4. Assume that T_n z_n − T_{n+1} z_n → 0. Then, the following is the case:
x_n → x* ∈ Ω ⟺ x_n − x_{n+1} → 0 and x_n − y_n → 0,
where x* ∈ Ω is the unique solution of the VIP: ⟨(I − f)x*, p − x*⟩ ≥ 0 ∀p ∈ Ω.
Proof. 
Using the same arguments as in the proof of Theorem 1, we deduce that there exists a unique solution x* ∈ Ω = ⋂_{i=0}^N Fix(T_i) ∩ VI(C, A) of the VIP (20) and that the necessity of the theorem is valid.
For the sufficiency, assume lim_{n→∞}(∥x_n − x_{n+1}∥ + ∥x_n − y_n∥) = 0 and consider the following steps.
Step 1. We show that {x_n} is bounded. Indeed, using the same arguments as in Step 1 of the proof of Theorem 1, we obtain that inequalities (21)–(23) hold. Thus, from (23) and α_n + β_n + γ_n = 1 ∀n ≥ 1, the following is the case.
∥x_{n+1} − p∥ ≤ α_n(∥f(x_n) − f(p)∥ + ∥f(p) − p∥) + β_n∥w_n − p∥ + γ_n(1 + θ_n)∥z_n − p∥ ≤ α_n(δ∥x_n − p∥ + ∥f(p) − p∥) + β_n∥w_n − p∥ + γ_n∥z_n − p∥ + θ_n∥z_n − p∥ ≤ α_n(δ∥x_n − p∥ + ∥f(p) − p∥) + β_n∥x_n − p∥ + γ_n∥x_n − p∥ + (α_n(1 − δ)/2)∥x_n − p∥ = [1 − α_n(1 − δ)/2]∥x_n − p∥ + (α_n(1 − δ)/2)·(2∥f(p) − p∥/(1 − δ)) ≤ max{∥x_n − p∥, 2∥f(p) − p∥/(1 − δ)}.
Hence, ∥x_n − p∥ ≤ max{∥x_1 − p∥, 2∥f(p) − p∥/(1 − δ)} ∀n ≥ 1. Thus, {x_n} is bounded.
Step 2. We show the following:
$$\gamma_n \| z_n - w_n \|^2 \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + \theta_n M_1 + 2 \alpha_n \langle f(x_n) - p, x_{n+1} - p \rangle .$$
To prove this, we first note that the following is the case:
$$\begin{aligned}
\| x_{n+1} - p \|^2 &= \| \alpha_n ( f(x_n) - p ) + \beta_n ( w_n - p ) + \gamma_n ( T^n z_n - p ) \|^2 \\
&\le \| \beta_n ( w_n - p ) + \gamma_n ( T^n z_n - p ) \|^2 + 2 \alpha_n \langle f(x_n) - p, x_{n+1} - p \rangle \\
&\le \beta_n \| w_n - p \|^2 + \gamma_n ( 1 + \theta_n )^2 \| z_n - p \|^2 + 2 \alpha_n \langle f(x_n) - p, x_{n+1} - p \rangle \\
&\le \beta_n \| x_n - p \|^2 + \gamma_n \| z_n - p \|^2 + \theta_n ( 2 + \theta_n ) \| z_n - p \|^2 + 2 \alpha_n \langle f(x_n) - p, x_{n+1} - p \rangle \\
&\le \beta_n \| x_n - p \|^2 + \gamma_n \| z_n - p \|^2 + \theta_n M_1 + 2 \alpha_n \langle f(x_n) - p, x_{n+1} - p \rangle .
\end{aligned}$$
The desired conclusion follows from Step 2 of the proof of Theorem 1.
Step 3. We show the following:
$$\gamma_n \Big[ \frac{\tau_n}{2 \lambda L} \| r_\lambda(w_n) \|^2 \Big]^2 \le \| x_n - p \|^2 - \| x_{n+1} - p \|^2 + \alpha_n \| f(x_n) - p \|^2 + \theta_n M_1 .$$
Indeed, using the same argument as that of (28), we obtain that for some $L > 0$, the following is the case:
$$\| z_n - p \|^2 \le \| w_n - p \|^2 - \Big[ \frac{\tau_n}{2 \lambda L} \| r_\lambda(w_n) \|^2 \Big]^2 .$$
From Algorithm 4, (23), and (44), the following is obtained:
$$\begin{aligned}
\| x_{n+1} - p \|^2 &\le \alpha_n \| f(x_n) - p \|^2 + \beta_n \| w_n - p \|^2 + \gamma_n \| z_n - p \|^2 + \theta_n ( 2 + \theta_n ) \| z_n - p \|^2 \\
&\le \alpha_n \| f(x_n) - p \|^2 + \beta_n \| w_n - p \|^2 + \gamma_n \Big[ \| w_n - p \|^2 - \Big[ \frac{\tau_n}{2 \lambda L} \| r_\lambda(w_n) \|^2 \Big]^2 \Big] + \theta_n M_1 \\
&\le \alpha_n \| f(x_n) - p \|^2 + \theta_n M_1 + \| x_n - p \|^2 - \gamma_n \Big[ \frac{\tau_n}{2 \lambda L} \| r_\lambda(w_n) \|^2 \Big]^2 .
\end{aligned}$$
By rearranging, we obtain the desired inequality.
Step 4. We show the following:
$$\| x_{n+1} - p \|^2 \le ( 1 - \alpha_n ( 1 - \delta ) ) \| x_n - p \|^2 + \alpha_n ( 1 - \delta ) \Big[ \frac{2 \langle f(p) - p, x_{n+1} - p \rangle}{1 - \delta} + \frac{\theta_n}{\alpha_n} \cdot \frac{M_1}{1 - \delta} \Big] .$$
Indeed, from Algorithm 4 and (23), one obtains the following:
$$\begin{aligned}
\| x_{n+1} - p \|^2 &\le \| \alpha_n ( f(x_n) - f(p) ) + \beta_n ( w_n - p ) + \gamma_n ( T^n z_n - p ) \|^2 + 2 \alpha_n \langle f(p) - p, x_{n+1} - p \rangle \\
&\le \delta \alpha_n \| x_n - p \|^2 + \beta_n \| w_n - p \|^2 + \gamma_n \| z_n - p \|^2 + \theta_n ( 2 + \theta_n ) \| z_n - p \|^2 + 2 \alpha_n \langle f(p) - p, x_{n+1} - p \rangle \\
&\le \delta \alpha_n \| x_n - p \|^2 + \beta_n \| x_n - p \|^2 + \gamma_n \| x_n - p \|^2 + \theta_n M_1 + 2 \alpha_n \langle f(p) - p, x_{n+1} - p \rangle \\
&= [ 1 - \alpha_n ( 1 - \delta ) ] \| x_n - p \|^2 + \theta_n M_1 + 2 \alpha_n \langle f(p) - p, x_{n+1} - p \rangle ,
\end{aligned}$$
which, hence, results in the desired assertion.
Step 5. We show that $\{x_n\}$ converges strongly to the unique solution $x^* \in \Omega$ of the VIP (20). Indeed, Step 5 of the proof of Theorem 1 provides the result. □
Theorem 4.
Let $T : C \to C$ be nonexpansive and let the sequence $\{x_n\}$ be constructed from $x_1 \in C$ by
$$\begin{cases}
w_n = ( 1 - \sigma_n ) x_n + \sigma_n T_n x_n , \\
y_n = P_C ( w_n - \lambda A w_n ) , \\
t_n = ( 1 - \tau_n ) w_n + \tau_n y_n , \\
z_n = P_{C_n} ( w_n ) , \\
x_{n+1} = \alpha_n f(x_n) + \beta_n w_n + \gamma_n T z_n \quad \forall n \ge 1 ,
\end{cases}$$
where, for each $n \ge 1$, $C_n$ and $\tau_n$ are chosen as in Algorithm 4. Then, the following is the case:
$$x_n \to x^* \in \Omega \iff \big( x_n - x_{n+1} \to 0 \ \text{and} \ x_n - y_n \to 0 \big),$$
where $x^* \in \Omega$ is the unique solution of the VIP: $\langle (I - f) x^*, p - x^* \rangle \ge 0 \ \forall p \in \Omega$.
Proof. 
Arguments similar to those in the proof of Theorem 2 and in Step 5 of the proof of Theorem 3 provide the conclusions. □
Remark 1.
Our results complement the results in Kraikaew and Saejung [10], Ceng and Shang [11], and Reich et al. [12] in the following ways:
(i)
The problem of finding an element of $\mathrm{VI}(C, A)$ in [10] is extended to our problem of finding an element of $\bigcap_{i=0}^{N} \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$, where $T_i$ is nonexpansive for $i = 1, \dots, N$ and $T_0 = T$ is asymptotically nonexpansive. The Halpern subgradient extragradient method for solving the VIP in [10] is extended to our Mann-type subgradient-like extragradient method with a line-search process for solving the VIP and the CFPP, which combines the Mann iteration method, the subgradient extragradient method with a line-search process, and the viscosity approximation method.
(ii)
The results in [12] are extended to finding an element of $\bigcap_{i=0}^{N} \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$. The modified projection-type method with a line-search process for solving the VIP in [12] is extended to our Mann-type subgradient-like extragradient method with a line-search process for solving the VIP and the CFPP, which combines the Mann iteration method, the subgradient extragradient method with a line-search process, and the viscosity approximation method.
(iii)
The problem of finding an element of $\bigcap_{i=0}^{N} \mathrm{Fix}(T_i) \cap \mathrm{VI}(C, A)$ with a Lipschitz continuous and sequentially weakly continuous mapping $A$ in [11] is extended to the case where $A$ is uniformly continuous and satisfies $\| A z \| \le \liminf_{n \to \infty} \| A x_n \|$ for each $\{x_n\} \subset C$ with $x_n \rightharpoonup z \in C$. The hybrid inertial subgradient extragradient method with a line-search process in [11] is generalized to our Mann-type subgradient-like extragradient method with a line-search process; e.g., the original inertial step $w_n = T^n x_n + \alpha_n ( T^n x_n - T^n x_{n-1} )$ is replaced by our Mann iteration step $w_n = ( 1 - \sigma_n ) x_n + \sigma_n T_n x_n$, and the original iterative step $x_{n+1} = \beta_n f(x_n) + \gamma_n x_n + ( ( 1 - \gamma_n ) I - \beta_n \rho F ) T^n z_n$ is replaced by our simpler step $x_{n+1} = \alpha_n f(x_n) + \beta_n x_n + \gamma_n T^n z_n$. It is worth mentioning that the definition of $z_n$ in the former formulation of $x_{n+1}$ is very different from that in the latter.
(iv)
The method in [10] combines the Halpern approximation method, the subgradient extragradient method, and Mann iteration to find a common solution to variational inequalities and a common fixed point problem involving a quasi-nonexpansive mapping, with strong convergence results obtained. The method in [11] solves the problem of finding a common solution to variational inequalities and a common fixed point problem in which one of the operators is asymptotically nonexpansive and the others are nonexpansive; it combines the subgradient extragradient method, viscosity approximation, and the hybrid steepest-descent method, and strong convergence results are obtained. In [12], a strongly convergent method combining a projection-type method and the viscosity approximation method is proposed to solve variational inequalities. The methods proposed in this paper solve variational inequalities and a common fixed point problem for which one of the operators is asymptotically nonexpansive and the others are nonexpansive, and the operator $A$ in the variational inequality is pseudomonotone and uniformly continuous (unlike [11], where $A$ is Lipschitz continuous). One of our methods combines the method proposed in [12] with viscosity approximation. In essence, our results reduce to those in [12] when the operators in the common fixed point problem are identity mappings. Furthermore, our methods do not involve the hybrid steepest-descent method and the subgradient extragradient method used in [11]. Our results also extend the results obtained in [10] in the setting of variational inequalities.

4. Applications

In this section, our main results are applied to solve the VIP and the CFPP in an illustrative example. We take $\mu = l$, $l = \lambda = \frac{1}{3}$, $\sigma_n = \frac{1}{2}$, $\alpha_n = \frac{1}{2(n+1)}$, $\beta_n = \frac{n}{2(n+1)}$, and $\gamma_n = \frac{1}{2}$.
We first provide an example of a Lipschitz continuous, pseudomonotone mapping $A$, an asymptotically nonexpansive mapping $T$, and a nonexpansive mapping $T_1$ with $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$. Let $C = [-3, 4]$ and $H = \mathbb{R}$ with the inner product $\langle a, b \rangle = ab$ and induced norm $\| \cdot \| = | \cdot |$. The initial point $x_1$ is randomly chosen in $C$. Take $f(x) = \frac{1}{2} x \ \forall x \in C$, with $\delta = \frac{1}{2}$. Let $A : H \to H$ and $T, T_1 : C \to C$ be defined as $A x := \frac{1}{1 + | \sin x |} - \frac{1}{1 + | x |}$, $T x := \frac{3}{4} \sin x$, and $T_1 x := \sin x$ for all $x \in C$. We first show that $A$ is pseudomonotone and Lipschitz continuous. Indeed, for all $x, y \in H$, we have the following.
$$\begin{aligned}
\| A x - A y \| &= \Big| \frac{1}{1 + | \sin x |} - \frac{1}{1 + | x |} - \frac{1}{1 + | \sin y |} + \frac{1}{1 + | y |} \Big| \\
&\le \Big| \frac{| y | - | x |}{( 1 + | x | )( 1 + | y | )} \Big| + \Big| \frac{| \sin y | - | \sin x |}{( 1 + | \sin x | )( 1 + | \sin y | )} \Big| \\
&\le \| x - y \| + \| \sin x - \sin y \| \le 2 \| x - y \| .
\end{aligned}$$
This implies that $A$ is Lipschitz continuous. Next, we show that $A$ is pseudomonotone. For each $x, y \in H$, it is easy to see that the following is the case:
$$\langle A x, y - x \rangle = \Big( \frac{1}{1 + | \sin x |} - \frac{1}{1 + | x |} \Big) ( y - x ) \ge 0 \ \Longrightarrow \ \langle A y, y - x \rangle = \Big( \frac{1}{1 + | \sin y |} - \frac{1}{1 + | y |} \Big) ( y - x ) \ge 0 .$$
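As a quick numerical cross-check of the Lipschitz bound $\| A x - A y \| \le 2 \| x - y \|$ derived above (our own sanity test, not part of the paper), one can sample the difference quotient of $A$ over an interval:

```python
import math
import random

# the mapping A from the example
A = lambda x: 1 / (1 + abs(math.sin(x))) - 1 / (1 + abs(x))

random.seed(0)
pairs = ((random.uniform(-3, 4), random.uniform(-3, 4)) for _ in range(10_000))
# largest observed difference quotient |Ax - Ay| / |x - y|
worst = max(abs(A(x) - A(y)) / abs(x - y) for x, y in pairs if x != y)
# worst never exceeds the Lipschitz constant 2 derived above
```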
Furthermore, it is easy to see that $T$ is asymptotically nonexpansive with $\theta_n = ( \frac{3}{4} )^n \ \forall n \ge 1$, and that $\| T^{n+1} z_n - T^n z_n \| \to 0$ as $n \to \infty$. Indeed, we observe that the following is the case:
$$\| T^n x - T^n y \| \le \frac{3}{4} \| T^{n-1} x - T^{n-1} y \| \le \cdots \le \Big( \frac{3}{4} \Big)^n \| x - y \| \le ( 1 + \theta_n ) \| x - y \| ,$$
and the following holds:
$$\| T^{n+1} z_n - T^n z_n \| \le \Big( \frac{3}{4} \Big)^{n-1} \| T^2 z_n - T z_n \| = \Big( \frac{3}{4} \Big)^{n-1} \Big\| \frac{3}{4} \sin ( T z_n ) - \frac{3}{4} \sin z_n \Big\| \le 2 \Big( \frac{3}{4} \Big)^n \to 0 \quad ( n \to \infty ) .$$
It is clear that $\mathrm{Fix}(T) = \{ 0 \}$ and
$$\lim_{n \to \infty} \frac{\theta_n}{\alpha_n} = \lim_{n \to \infty} \frac{( 3/4 )^n}{1 / [ 2 ( n + 1 ) ]} = 0 .$$
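The limit above can also be confirmed numerically; the ratio equals $2 ( n + 1 ) ( 3/4 )^n$, which decays geometrically:

```python
# ratio theta_n / alpha_n = (3/4)^n / (1 / (2(n+1))) = 2(n+1)(3/4)^n
ratios = [2 * (n + 1) * (3 / 4) ** n for n in (1, 10, 50, 100)]
# ratios[0] = 3.0, and ratios[-1] is already below 1e-9
```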
In addition, it is clear that $T_1$ is nonexpansive and $\mathrm{Fix}(T_1) = \{ 0 \}$. Therefore, $\Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A) = \{ 0 \}$. In this case, Algorithm 3 can be rewritten as follows:
$$\begin{cases}
w_n = \frac{1}{2} x_n + \frac{1}{2} T_1 x_n , \\
y_n = P_C \big( w_n - \frac{1}{3} A w_n \big) , \\
t_n = ( 1 - \tau_n ) w_n + \tau_n y_n , \\
z_n = P_{C_n} ( w_n ) , \\
x_{n+1} = \frac{1}{2 ( n + 1 )} \cdot \frac{1}{2} x_n + \frac{n}{2 ( n + 1 )} x_n + \frac{1}{2} T^n z_n \quad \forall n \ge 1 ,
\end{cases}$$
where, for each $n \ge 1$, $C_n$ and $\tau_n$ are chosen as in Algorithm 3. Then, by Theorem 1, we know that $\{x_n\}$ converges to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$ if and only if $| x_n - x_{n+1} | + | x_n - y_n | \to 0$ as $n \to \infty$.
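The rewritten scheme can be simulated directly in Python. In the sketch below, the starting point $x_1 = 2$, the 200-iteration budget, and the closed-form one-dimensional projection onto the subinterval $C_n$ are our own choices, and we assume $C_n$ and $\tau_n$ take the same form as in Algorithm 4:

```python
import math

def proj(x, lo=-3.0, hi=4.0):           # metric projection onto C = [lo, hi]
    return min(max(x, lo), hi)

A  = lambda x: 1 / (1 + abs(math.sin(x))) - 1 / (1 + abs(x))
T  = lambda x: 0.75 * math.sin(x)       # asymptotically nonexpansive mapping
T1 = lambda x: math.sin(x)              # nonexpansive mapping
mu = l = lam = 1 / 3

x = 2.0                                 # x_1 (our choice of starting point)
for n in range(1, 201):
    w = 0.5 * x + 0.5 * T1(x)           # Mann step with T_1
    y = proj(w - lam * A(w))
    r = w - y                           # r_lambda(w_n)
    tau = 1.0                           # Armijo search for tau_n = l**j_n
    while (A(w) - A(w - tau * r)) * r > (mu / 2) * r * r + 1e-15:
        tau *= l
    t = w - tau * r
    a, c = A(t), tau / (2 * lam) * r * r
    # C_n = C ∩ {x : a (x - w_n) + c <= 0} is a subinterval of C in 1-D,
    # so P_{C_n}(w_n) has a closed form:
    if a > 0:
        z = proj(min(w, w - c / a))
    elif a < 0:
        z = proj(max(w, w - c / a))
    else:
        z = w                           # degenerate case: keep w_n
    Tn_z = z
    for _ in range(n):                  # apply T n times: T^n z_n
        Tn_z = T(Tn_z)
    x = (1 / (2 * (n + 1))) * 0.5 * x + (n / (2 * (n + 1))) * x + 0.5 * Tn_z
# the iterates approach the unique point 0 of Omega
```

The run is consistent with Theorem 1's characterization: the iterates settle down, and both $| x_n - x_{n+1} |$ and $| x_n - y_n |$ vanish as $x_n \to 0$.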
In particular, since $T x := \frac{3}{4} \sin x$ is also nonexpansive, we consider the modified version of Algorithm 3:
$$\begin{cases}
w_n = \frac{1}{2} x_n + \frac{1}{2} T_1 x_n , \\
y_n = P_C \big( w_n - \frac{1}{3} A w_n \big) , \\
t_n = ( 1 - \tau_n ) w_n + \tau_n y_n , \\
z_n = P_{C_n} ( w_n ) , \\
x_{n+1} = \frac{1}{2 ( n + 1 )} \cdot \frac{1}{2} x_n + \frac{n}{2 ( n + 1 )} x_n + \frac{1}{2} T z_n \quad \forall n \ge 1 ,
\end{cases}$$
where, for each $n \ge 1$, $C_n$ and $\tau_n$ are chosen as stated above. Then, by Theorem 2, we know that $\{x_n\}$ converges to $0 \in \Omega = \mathrm{Fix}(T_1) \cap \mathrm{Fix}(T) \cap \mathrm{VI}(C, A)$ if and only if $| x_n - x_{n+1} | + | x_n - y_n | \to 0$ as $n \to \infty$.

5. Conclusions

We have introduced two Mann-type subgradient-like extragradient algorithms that combine a projection-type method, viscosity approximation, and an Armijo-type line-search procedure to solve variational inequalities and the common fixed-point problem of finitely many nonexpansive mappings and an asymptotically nonexpansive mapping in a real Hilbert space. We obtained strong convergence of the sequences of iterates generated by our proposed methods under standard conditions. We also gave an illustrative example to support the theoretical analysis. Part of our future research aims to obtain strong convergence results for modifications of our proposed methods with a Nesterov inertial extrapolation step and self-adaptive step sizes.

Author Contributions

All authors contributed equally to this work, which included mathematical theory and analysis. All authors have read and agreed to the published version of the manuscript.

Funding

Lu-Chuan Ceng is partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002), and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100). The research of Jen-Chih Yao was supported by the grant MOST 108-2115-M-039- 005-MY3.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Mat. Metod. 1976, 12, 747–756.
  2. Yao, Y.; Liou, Y.C.; Kang, S.M. Approach to common elements of variational inequality problems and fixed point problems via a relaxed extragradient method. Comput. Math. Appl. 2010, 59, 3472–3480.
  3. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  4. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765.
  5. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258.
  6. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409.
  7. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
  8. Thong, D.V.; Dong, Q.L.; Liu, L.L.; Triet, N.A.; Lan, N.P. Two new inertial subgradient extragradient methods with variable step sizes for solving pseudomonotone variational inequality problems in Hilbert spaces. J. Comput. Appl. Math. 2021, 9, 5.
  9. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
  10. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
  11. Ceng, L.C.; Shang, M.J. Hybrid inertial subgradient extragradient methods for variational inequalities and fixed point problems involving asymptotically nonexpansive mappings. Optimization 2021, 70, 715–740.
  12. Reich, S.; Thong, D.V.; Dong, Q.L.; Li, X.H.; Dung, V.T. New algorithms and convergence theorems for solving variational inequalities with non-Lipschitz mappings. Numer. Algorithms 2021, 87, 527–549.
  13. Iusem, A.N.; Nasri, M. Korpelevich's method for variational inequality problems in Banach spaces. J. Glob. Optim. 2011, 50, 59–76.
  14. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
  15. He, Y.R. A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185, 166–173.
  16. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201.
  17. Ceng, L.C.; Xu, H.K.; Yao, J.C. The viscosity approximation method for asymptotically nonexpansive mappings in Banach spaces. Nonlinear Anal. 2008, 69, 1402–1412.

Share and Cite

MDPI and ACS Style

Ceng, L.-C.; Yao, J.-C.; Shehu, Y. On Mann-Type Subgradient-like Extragradient Method with Linear-Search Process for Hierarchical Variational Inequalities for Asymptotically Nonexpansive Mappings. Mathematics 2021, 9, 3322. https://doi.org/10.3390/math9243322


