Article

On Mann Viscosity Subgradient Extragradient Algorithms for Fixed Point Problems of Finitely Many Strict Pseudocontractions and Variational Inequalities

1
Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2
Department of Mathematics Babeş-Bolyai University, Cluj-Napoca 400084, Romania
3
Academy of Romanian Scientists, Bucharest 050044, Romania
4
Research Center for Interneural Computing, China Medical University Hospital, Taichung 40447, Taiwan
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 925; https://doi.org/10.3390/math7100925
Submission received: 21 August 2019 / Revised: 24 September 2019 / Accepted: 26 September 2019 / Published: 4 October 2019
(This article belongs to the Special Issue Applied Functional Analysis and Its Applications)

Abstract

In a real Hilbert space, we write CFPP for the common fixed point problem of finitely many strict pseudocontractions and VIP for a variational inequality problem for a Lipschitzian, pseudomonotone operator. This paper is devoted to finding a common solution of the CFPP and the VIP. To this end, we propose Mann viscosity algorithms with a line-search process built on subgradient extragradient techniques. The designed algorithms combine the Mann approximation approach, the viscosity iteration algorithm, and the inertial subgradient extragradient technique with line search. Under suitable assumptions, it is proven that the sequences generated by the designed algorithms converge strongly to a common solution of the CFPP and the VIP, which is the unique solution to a hierarchical variational inequality (HVI).

1. Introduction and Preliminaries

Throughout this article, we suppose that H is a real Hilbert space and that C is a nonempty, closed, convex subset of H. An operator S : C → H is called:
(i) L-Lipschitzian if there exists L > 0 such that ∥Su − Sv∥ ≤ L∥u − v∥ for all u, v ∈ C;
(ii) sequentially weakly continuous if for any {w_n} ⊂ C, the following implication holds: w_n ⇀ w ⇒ Sw_n ⇀ Sw;
(iii) pseudomonotone if ⟨Su, v − u⟩ ≥ 0 ⇒ ⟨Sv, v − u⟩ ≥ 0 for all u, v ∈ C;
(iv) monotone if ⟨Su − Sv, u − v⟩ ≥ 0 for all u, v ∈ C;
(v) γ-strongly monotone if there exists γ > 0 such that ⟨Su − Sw, u − w⟩ ≥ γ∥u − w∥² for all u, w ∈ C.
It is not difficult to observe that monotonicity implies pseudomonotonicity. A self-mapping S : C → C is called an η-strict pseudocontraction if there exists η ∈ [0, 1) such that ⟨Su − Sv, u − v⟩ ≤ ∥u − v∥² − ((1 − η)/2)∥(I − S)u − (I − S)v∥² for all u, v ∈ C. By [1] we know that, when S is η-strictly pseudocontractive, S is Lipschitzian, i.e., ∥Su − Sv∥ ≤ ((1 + η)/(1 − η))∥u − v∥ for all u, v ∈ C. Clearly, the class of strict pseudocontractions includes the class of nonexpansive operators, i.e., operators with ∥Su − Sv∥ ≤ ∥u − v∥ for all u, v ∈ C. Both classes of nonlinear operators have received much attention, and many numerical algorithms have been designed for computing their fixed points in Hilbert or Banach spaces; see, e.g., [2,3,4,5,6,7,8,9,10,11].
Let A be a self-mapping on H. The classical variational inequality problem (VIP) is to find z ∈ C such that ⟨Az, y − z⟩ ≥ 0 for all y ∈ C. The solution set of this VIP is denoted by VI(C, A). To the best of our knowledge, one of the most effective methods for solving the VIP is the gradient-projection method. Recently, many authors have investigated the VIP numerically in finite-dimensional spaces, Hilbert spaces, or Banach spaces; see, e.g., [12,13,14,15,16,17,18,19,20].
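As a concrete illustration of the gradient-projection method mentioned above, the sketch below runs the basic iteration z_{k+1} = P_C(z_k − λ A z_k) in one dimension. The operator A(z) = z − 0.3, the set C = [0, 1], and the step size λ = 0.1 are our illustrative choices, not taken from the paper.

```python
def project_interval(z, lo, hi):
    """Metric projection P_C onto the interval C = [lo, hi]."""
    return max(lo, min(hi, z))

def gradient_projection(A, proj, z0, lam, iters):
    """Basic gradient-projection iteration z_{k+1} = P_C(z_k - lam * A(z_k))."""
    z = z0
    for _ in range(iters):
        z = proj(z - lam * A(z))
    return z

# Illustrative strongly monotone operator: A(z) = z - 0.3 on C = [0, 1].
# The unique VIP solution is z* = 0.3, since A(z*) = 0 and z* lies in C.
A = lambda z: z - 0.3
z_star = gradient_projection(A, lambda z: project_interval(z, 0.0, 1.0),
                             z0=1.0, lam=0.1, iters=200)
print(abs(z_star - 0.3) < 1e-6)  # True: the iterates contract toward 0.3
```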
In 2014, Kraikaew and Saejung [21] suggested a Halpern-type subgradient extragradient algorithm for solving the VIP:

v_k = P_C(u_k − τA u_k),
C_k = {v ∈ H : ⟨u_k − τA u_k − v_k, v_k − v⟩ ≥ 0},
w_k = P_{C_k}(u_k − τA v_k),
u_{k+1} = ϱ_k u_0 + (1 − ϱ_k)w_k for all k ≥ 0,

where τ ∈ (0, 1/L), {ϱ_k} ⊂ (0, 1), lim_{k→∞} ϱ_k = 0 and Σ_{k=1}^∞ ϱ_k = +∞, and they established strong convergence theorems for approximating solutions in Hilbert spaces. Later, Thong and Hieu [22] designed an inertial algorithm: for arbitrarily given u_0, u_1 ∈ H, the sequence {u_k} is constructed by

z_k = u_k + ϱ_k(u_k − u_{k−1}),
v_k = P_C(z_k − τA z_k),
C_k = {v ∈ H : ⟨z_k − τA z_k − v_k, v_k − v⟩ ≥ 0},
u_{k+1} = P_{C_k}(z_k − τA v_k) for all k ≥ 1,

with τ ∈ (0, 1/L). Under mild assumptions, they proved that {u_k} converges weakly to a point of VI(C, A). Very recently, Thong and Hieu [23] suggested two inertial algorithms with a line-search process to solve the VIP for a Lipschitzian, monotone operator A and the FPP for a quasi-nonexpansive operator S satisfying a demiclosedness property in H. Under appropriate assumptions, they proved that the sequences constructed by the suggested algorithms converge weakly to a point of Fix(S) ∩ VI(C, A). For further research on common solution problems, we refer the readers to [24,25,26,27,28,29,30,31,32,33,34,35,36,37,38].
In this paper, we first introduce Mann viscosity algorithms via subgradient extragradient techniques, and then establish some strong convergence theorems in Hilbert spaces. Notably, our algorithms involve a line-search process.
The following lemmas are useful for the convergence analysis of our algorithms in the sequel.
Lemma 1.
[39] Let the operator A be pseudomonotone and continuous on C, and let w ∈ C be given. Then the following equivalence holds: ⟨Aw, y − w⟩ ≥ 0 for all y ∈ C ⇔ ⟨Ay, y − w⟩ ≥ 0 for all y ∈ C.
Lemma 2.
[40] Suppose that {s_k} is a sequence in [0, +∞) such that s_{k+1} ≤ (1 − t_k)s_k + t_k b_k for all k ≥ 1, where {t_k} and {b_k} are real sequences such that:
(a) {t_k} ⊂ [0, 1] and Σ_{k=1}^∞ t_k = ∞;
(b) lim sup_{k→∞} b_k ≤ 0 or Σ_{k=1}^∞ |t_k b_k| < ∞. Then s_k → 0 as k → ∞.
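Lemma 2 is easy to check numerically. The sketch below uses the illustrative choices t_k = b_k = 1/(k + 1) (so that Σ t_k diverges and b_k → 0, satisfying conditions (a) and (b)) and runs the recursion with equality, s_{k+1} = (1 − t_k)s_k + t_k b_k, confirming that s_k decays toward zero.

```python
def run_recursion(s1, K):
    """Iterate s_{k+1} = (1 - t_k) s_k + t_k b_k with t_k = b_k = 1/(k+1).

    These sequences satisfy conditions (a) and (b) of Lemma 2
    (sum of t_k diverges, b_k -> 0), so s_k must tend to 0.
    """
    s = s1
    for k in range(1, K + 1):
        t = 1.0 / (k + 1)
        b = 1.0 / (k + 1)
        s = (1 - t) * s + t * b
    return s

print(run_recursion(1.0, 100_000))  # a small positive number close to 0
```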
Following Ceng et al. [2], it is not difficult to verify that the following lemmas hold.
Lemma 3.
Let Γ be an η-strictly pseudocontractive self-mapping on C. Then I − Γ is demiclosed at zero.
Lemma 4.
For l = 1, …, N, let Γ_l be an η_l-strictly pseudocontractive self-mapping on C. Then each Γ_l (l = 1, …, N) is an η-strict pseudocontraction with η = max{η_l : 1 ≤ l ≤ N}, such that

∥Γ_l u − Γ_l v∥ ≤ ((1 + η)/(1 − η))∥u − v∥ for all u, v ∈ C.
Lemma 5.
Let Γ be an η-strictly pseudocontractive self-mapping on C, and let γ, β ∈ [0, +∞) be two reals. If (γ + β)η ≤ γ, then ∥γ(u − v) + β(Γu − Γv)∥ ≤ (γ + β)∥u − v∥ for all u, v ∈ C.

2. Main Results

Our first algorithm is specified below.
Algorithm 1
Initial Step: Given x_0, x_1 ∈ H arbitrarily, let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iteration Steps: Compute x_{n+1} as follows:
Step 1. Put v_n = x_n − σ_n(x_{n−1} − x_n) and calculate u_n = P_C(v_n − τ_n A v_n), where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} such that

τ∥A v_n − A u_n∥ ≤ μ∥v_n − u_n∥. (1)

Step 2. Calculate z_n = (1 − α_n)P_{C_n}(v_n − τ_n A u_n) + α_n f(x_n), with C_n := {v ∈ H : ⟨v_n − τ_n A v_n − u_n, u_n − v⟩ ≥ 0}.
Step 3. Calculate

x_{n+1} = γ_n P_{C_n}(v_n − τ_n A u_n) + δ_n T_n z_n + β_n x_n.

Update n := n + 1 and return to Step 1.
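The only nonstandard operation in Algorithm 1 is the projection onto the half-space C_n. Writing a_n := v_n − τ_n A v_n − u_n, the set C_n = {v : ⟨a_n, u_n − v⟩ ≥ 0} equals {v : ⟨a_n, v⟩ ≤ ⟨a_n, u_n⟩}, whose projection has a well-known closed form. The NumPy sketch below (variable names are ours) implements it.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {v : <a, v> <= b}.

    If x already satisfies the constraint it is left unchanged;
    otherwise it is moved along a onto the boundary hyperplane.
    """
    nrm2 = float(np.dot(a, a))
    if nrm2 == 0.0:            # a = 0: the half-space is the whole space
        return x
    viol = float(np.dot(a, x)) - b
    if viol <= 0.0:
        return x
    return x - (viol / nrm2) * a

# C_n = {v : <a, u_n - v> >= 0} = {v : <a, v> <= <a, u_n>}, so
# P_{C_n}(x) = project_halfspace(x, a, np.dot(a, u_n)).
a = np.array([1.0, 0.0])
u_n = np.array([0.5, 0.5])
x = np.array([2.0, 1.0])
h = project_halfspace(x, a, float(np.dot(a, u_n)))
print(h)  # [0.5 1. ]: the first coordinate is clipped to <a, u_n> = 0.5
```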
In this section, we always suppose that the following hypotheses hold:
T_k is a ζ_k-strictly pseudocontractive self-mapping on H for k = 1, …, N, and ζ := max{ζ_k : 1 ≤ k ≤ N} ∈ [0, 1).
A is an L-Lipschitzian, pseudomonotone self-mapping on H that is sequentially weakly continuous on C, such that Ω := ∩_{k=1}^N Fix(T_k) ∩ VI(C, A) ≠ ∅.
f : H → C is a δ-contraction with δ ∈ [0, 1/2).
{σ_n} ⊂ [0, 1] and {α_n}, {β_n}, {γ_n}, {δ_n} ⊂ (0, 1) are such that:
(i) β_n + γ_n + δ_n = 1 and sup_{n≥1} σ_n/α_n < ∞;
(ii) (1 − 2δ)δ_n > γ_n ≥ (γ_n + δ_n)ζ for all n ≥ 1 and lim inf_{n→∞} ((1 − 2δ)δ_n − γ_n) > 0;
(iii) lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞;
(iv) lim inf_{n→∞} β_n > 0, lim inf_{n→∞} δ_n > 0 and lim sup_{n→∞} β_n < 1.
Following Xu and Kim [40], we set T_n := T_{n mod N} for n ≥ 1, where the mod function takes values in {1, 2, …, N}; that is, if n = jN + q for some j ≥ 0 and 0 ≤ q < N, then T_n = T_N when q = 0 and T_n = T_q when 0 < q < N.
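The cyclic index convention above (mod taking values in {1, …, N} rather than {0, …, N − 1}) is easy to get wrong in code, so a two-line helper makes it explicit.

```python
def cyclic_index(n, N):
    """Return the index k in {1, ..., N} with T_n = T_k, i.e. n mod N
    with the convention that a remainder of 0 maps to N."""
    q = n % N
    return N if q == 0 else q

print([cyclic_index(n, 3) for n in range(1, 8)])  # [1, 2, 3, 1, 2, 3, 1]
```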
Lemma 6.
The Armijo-like search rule (1) is well defined, and min{γ, μl/L} ≤ τ_n ≤ γ.
Proof. 
Obviously, (1) holds for all τ ≤ μ/L, i.e., for all trial steps γl^m ≤ μ/L. So τ_n is well defined and τ_n ≤ γ. If τ_n = γ, the inequality is trivially true. If τ_n < γ, then the search rule fails at the previous trial step τ_n/l, that is, ∥A v_n − A P_C(v_n − (τ_n/l)A v_n)∥ > (μl/τ_n)∥v_n − P_C(v_n − (τ_n/l)A v_n)∥. The L-Lipschitz property of A then yields τ_n > μl/L.  □
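The search rule (1) transcribes directly into a backtracking loop that starts at γ and shrinks by the factor l. In the sketch below, the concrete operator A(v) = 2v on C = [−1, 1] is only an illustration (L = 2), chosen so that Lemma 6's lower bound min{γ, μl/L} can be checked.

```python
def armijo_step(A, proj_C, v, gamma, l, mu, max_backtracks=60):
    """Find the largest tau in {gamma, gamma*l, gamma*l^2, ...} satisfying
    tau * |A(v) - A(P_C(v - tau*A(v)))| <= mu * |v - P_C(v - tau*A(v))|."""
    Av = A(v)
    tau = gamma
    for _ in range(max_backtracks):
        u = proj_C(v - tau * Av)
        if tau * abs(Av - A(u)) <= mu * abs(v - u):
            return tau, u
        tau *= l
    raise RuntimeError("line search did not terminate")

# Illustrative data: A(v) = 2v is 2-Lipschitz and monotone, C = [-1, 1].
A = lambda v: 2.0 * v
proj_C = lambda v: max(-1.0, min(1.0, v))
tau, u = armijo_step(A, proj_C, v=0.5, gamma=1.0, l=0.5, mu=0.5)
print(tau)  # 0.25, which lies in [mu*l/L, gamma] = [0.125, 1.0] as Lemma 6 predicts
```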
Lemma 7.
Let {v_n}, {u_n} and {z_n} be the sequences constructed by Algorithm 1. Then, for all ω ∈ Ω,

∥z_n − ω∥² ≤ (1 − α_n)∥v_n − ω∥² + α_n δ∥x_n − ω∥² − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + 2α_n⟨(f − I)ω, z_n − ω⟩,

where h_n := P_{C_n}(v_n − τ_n A u_n) for all n ≥ 1.
Proof. 
First, take an arbitrary p ∈ Ω ⊂ C ⊂ C_n. Since h_n = P_{C_n}(v_n − τ_n A u_n) and p ∈ C_n, we observe that

2∥h_n − p∥² ≤ 2⟨h_n − p, v_n − τ_n A u_n − p⟩ = ∥h_n − p∥² + ∥v_n − p∥² − ∥h_n − v_n∥² − 2τ_n⟨A u_n, h_n − p⟩.

So, it follows that ∥h_n − p∥² ≤ ∥v_n − p∥² − ∥h_n − v_n∥² − 2τ_n⟨A u_n, h_n − p⟩. Since p ∈ VI(C, A) and A is pseudomonotone, we have ⟨A u_n, u_n − p⟩ ≥ 0, and hence

∥h_n − p∥² ≤ ∥v_n − p∥² − ∥h_n − v_n∥² + 2τ_n(⟨A u_n, p − u_n⟩ + ⟨A u_n, u_n − h_n⟩) ≤ ∥v_n − p∥² − ∥u_n − h_n∥² − ∥v_n − u_n∥² + 2⟨u_n − v_n + τ_n A u_n, u_n − h_n⟩. (4)

Since h_n = P_{C_n}(v_n − τ_n A u_n) with C_n := {v ∈ H : ⟨v_n − τ_n A v_n − u_n, u_n − v⟩ ≥ 0}, we have ⟨u_n − v_n + τ_n A v_n, u_n − h_n⟩ ≤ 0, which together with (1) implies that

2⟨u_n − v_n + τ_n A u_n, u_n − h_n⟩ = 2⟨u_n − v_n + τ_n A v_n, u_n − h_n⟩ + 2τ_n⟨A v_n − A u_n, h_n − u_n⟩ ≤ 2μ∥u_n − v_n∥∥u_n − h_n∥ ≤ μ(∥v_n − u_n∥² + ∥h_n − u_n∥²).

Therefore, substituting the last inequality into (4), we infer that

∥h_n − p∥² ≤ ∥v_n − p∥² − (1 − μ)∥v_n − u_n∥² − (1 − μ)∥h_n − u_n∥² for all p ∈ Ω. (5)

In addition, we have

z_n − p = (1 − α_n)(h_n − p) + α_n(f − I)p + α_n(f(x_n) − f(p)).

Using the convexity of the function h(t) = t², t ∈ R, from (5) we get

∥z_n − p∥² ≤ [α_n δ∥x_n − p∥ + (1 − α_n)∥h_n − p∥]² + 2α_n⟨(f − I)p, z_n − p⟩ ≤ α_n δ∥x_n − p∥² + (1 − α_n)∥h_n − p∥² + 2α_n⟨(f − I)p, z_n − p⟩ ≤ α_n δ∥x_n − p∥² + (1 − α_n)∥v_n − p∥² − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + 2α_n⟨(f − I)p, z_n − p⟩.
 □
Lemma 8.
Let {x_n}, {u_n}, and {z_n} be bounded sequences constructed by Algorithm 1. If x_n − x_{n+1} → 0, v_n − u_n → 0, v_n − z_n → 0, and there exists a subsequence {v_{n_i}} ⊂ {v_n} such that v_{n_i} ⇀ z ∈ H, then z ∈ Ω.
Proof. 
According to Algorithm 1, we have v_n − x_n = σ_n(x_n − x_{n−1}) for all n ≥ 1, and hence ∥v_n − x_n∥ ≤ ∥x_n − x_{n−1}∥. Using the assumption x_n − x_{n+1} → 0, we have

lim_{n→∞} ∥v_n − x_n∥ = 0.

So,

∥z_n − x_n∥ ≤ ∥v_n − z_n∥ + ∥v_n − x_n∥ → 0.

Since {x_n} is bounded, from v_n = x_n − σ_n(x_{n−1} − x_n) we know that {v_n} is a bounded vector sequence. According to (5), h_n := P_{C_n}(v_n − τ_n A u_n) is a bounded vector sequence as well. Also, by Algorithm 1 we get z_n − x_n = α_n f(x_n) + h_n − x_n − α_n h_n. So, the boundedness of {x_n}, {h_n} guarantees that, as n → ∞,

∥h_n − x_n∥ = ∥z_n − x_n − α_n f(x_n) + α_n h_n∥ ≤ ∥z_n − x_n∥ + α_n(∥f(x_n)∥ + ∥h_n∥) → 0.

It follows that

x_{n+1} − z_n = γ_n(h_n − x_n) + δ_n(T_n z_n − z_n) + (1 − δ_n)(x_n − z_n),

which immediately yields

δ_n∥T_n z_n − z_n∥ = ∥x_{n+1} − x_n + x_n − z_n − (1 − δ_n)(x_n − z_n) − γ_n(h_n − x_n)∥ = ∥x_{n+1} − x_n + δ_n(x_n − z_n) − γ_n(h_n − x_n)∥ ≤ ∥x_{n+1} − x_n∥ + ∥x_n − z_n∥ + ∥h_n − x_n∥.

Since x_n − x_{n+1} → 0, z_n − x_n → 0, h_n − x_n → 0 and lim inf_{n→∞} δ_n > 0, we obtain ∥z_n − T_n z_n∥ → 0 as n → ∞. This further implies that

∥x_n − T_n x_n∥ ≤ ∥x_n − z_n∥ + ∥z_n − T_n z_n∥ + ((1 + ζ)/(1 − ζ))∥z_n − x_n∥ ≤ (2/(1 − ζ))∥x_n − z_n∥ + ∥z_n − T_n z_n∥ → 0 (n → ∞).

Since u_n = P_C(v_n − τ_n A v_n), we have ⟨v_n − τ_n A v_n − u_n, v − u_n⟩ ≤ 0 for all v ∈ C, and hence

(1/τ_n)⟨v_n − u_n, v − u_n⟩ + ⟨A v_n, u_n − v_n⟩ ≤ ⟨A v_n, v − v_n⟩ for all v ∈ C.

Note that τ_n ≥ min{γ, μl/L}. So, using v_n − u_n → 0, we get lim inf_{i→∞} ⟨A v_{n_i}, v − v_{n_i}⟩ ≥ 0 for all v ∈ C. This yields lim inf_{i→∞} ⟨A u_{n_i}, v − u_{n_i}⟩ ≥ 0 for all v ∈ C. Since v_n − x_n → 0 and v_{n_i} ⇀ z, we get x_{n_i} ⇀ z. We may assume k = n_i mod N for all i. By the assumption x_n − x_{n+k} → 0, we have x_{n_i + j} ⇀ z for all j ≥ 1. Hence, x_{n_i+j} − T_{k+j} x_{n_i+j} = x_{n_i+j} − T_{n_i+j} x_{n_i+j} → 0. Then the demiclosedness principle (Lemma 3) implies that z ∈ Fix(T_{k+j}) for all j. This ensures that

z ∈ ∩_{k=1}^N Fix(T_k). (9)

We now take a sequence {ς_i} ⊂ (0, 1) satisfying ς_i ↓ 0 as i → ∞. For each i ≥ 1, we denote by m_i the smallest natural number such that

⟨A u_{n_j}, v − u_{n_j}⟩ + ς_i ≥ 0 for all j ≥ m_i. (10)

Since {ς_i} is decreasing, it is clear that {m_i} is increasing. Noticing that A u_{m_i} ≠ 0 for all i ≥ 1 (since {u_{m_i}} ⊂ C), we set e_{m_i} = A u_{m_i}/∥A u_{m_i}∥², so that ⟨A u_{m_i}, e_{m_i}⟩ = 1 for all i ≥ 1. So, from (10) we get ⟨A u_{m_i}, v + ς_i e_{m_i} − u_{m_i}⟩ ≥ 0 for all i ≥ 1. Also, the pseudomonotonicity of A implies ⟨A(v + ς_i e_{m_i}), v + ς_i e_{m_i} − u_{m_i}⟩ ≥ 0 for all i ≥ 1. This immediately leads to

⟨A v, v − u_{m_i}⟩ ≥ ⟨A v − A(v + ς_i e_{m_i}), v + ς_i e_{m_i} − u_{m_i}⟩ − ς_i⟨A v, e_{m_i}⟩ for all i ≥ 1. (11)

We claim that lim_{i→∞} ς_i∥e_{m_i}∥ = 0. Indeed, from v_{n_i} ⇀ z and v_n − u_n → 0, we obtain u_{n_i} ⇀ z. So, {u_n} ⊂ C ensures z ∈ C. Also, the sequential weak continuity of A guarantees that A u_{n_i} ⇀ A z. Thus, we may assume A z ≠ 0 (otherwise, z is a solution and we are done). Moreover, the sequential weak lower semicontinuity of ∥·∥ ensures 0 < ∥A z∥ ≤ lim inf_{i→∞} ∥A u_{n_i}∥. Since {u_{m_i}} ⊂ {u_{n_i}} and ς_i → 0 as i → ∞, we deduce that 0 ≤ lim sup_{i→∞} ς_i∥e_{m_i}∥ = lim sup_{i→∞} (ς_i/∥A u_{m_i}∥) ≤ lim sup_{i→∞} ς_i / lim inf_{i→∞} ∥A u_{n_i}∥ = 0. Hence ς_i e_{m_i} → 0.

Finally, we claim that z ∈ Ω. In fact, letting i → ∞, we conclude that the right-hand side of (11) tends to zero by the Lipschitz property of A, the boundedness of {u_{m_i}}, {e_{m_i}} and the limit lim_{i→∞} ς_i e_{m_i} = 0. Thus, we get ⟨A v, v − z⟩ = lim inf_{i→∞} ⟨A v, v − u_{m_i}⟩ ≥ 0 for all v ∈ C. By Lemma 1, z ∈ VI(C, A). Therefore, from (9) we have z ∈ ∩_{k=1}^N Fix(T_k) ∩ VI(C, A) = Ω.  □
Theorem 1.
Assume A(C) is bounded and let {x_n} be constructed by Algorithm 1. Then

x_n → x* ∈ Ω ⇔ x_n − x_{n+1} → 0 and sup_{n≥1} ∥x_n − f(x_n)∥ < ∞,

where x* ∈ Ω is the unique solution to the hierarchical variational inequality (HVI): ⟨(I − f)x*, x* − ω⟩ ≤ 0 for all ω ∈ Ω.
Proof. 
Taking into account condition (iv) on {β_n}, we may suppose that {β_n} ⊂ [a, b] ⊂ (0, 1). Applying Banach's Contraction Principle to the mapping P_Ω f, we obtain the existence and uniqueness of a fixed point x* ∈ H, which means that x* = P_Ω f(x*). Hence, the HVI

⟨(I − f)x*, x* − ω⟩ ≤ 0 for all ω ∈ Ω (12)

has a unique solution x* ∈ Ω := ∩_{k=1}^N Fix(T_k) ∩ VI(C, A).
The necessity is now obvious. In fact, if x_n → x* ∈ Ω, then sup_{n≥1} ∥x_n − f(x_n)∥ ≤ sup_{n≥1} (∥x_n − x*∥ + ∥x* − f(x*)∥ + ∥f(x*) − f(x_n)∥) < ∞ and

∥x_n − x_{n+1}∥ ≤ ∥x_n − x*∥ + ∥x_{n+1} − x*∥ → 0 (n → ∞).

For the sufficiency, let us suppose x_n − x_{n+1} → 0 and sup_{n≥1} ∥(I − f)x_n∥ < ∞. The sufficiency of our conclusion is proved in the following steps.  □
Step 1. We show the boundedness of {x_n}. In fact, let p be an arbitrary point in Ω. Then T_n p = p for all n ≥ 1, and, by (5),

∥h_n − p∥² ≤ ∥v_n − p∥² − (1 − μ)∥v_n − u_n∥² − (1 − μ)∥h_n − u_n∥², (13)

which hence leads to

∥h_n − p∥ ≤ ∥v_n − p∥ for all n ≥ 1. (14)

By the definition of v_n, we have

∥v_n − p∥ ≤ ∥x_n − p∥ + σ_n∥x_n − x_{n−1}∥ = ∥x_n − p∥ + α_n · (σ_n/α_n)∥x_n − x_{n−1}∥. (15)

Noticing sup_{n≥1} σ_n/α_n < ∞ and sup_{n≥1} ∥x_n − x_{n−1}∥ < ∞, we obtain sup_{n≥1} (σ_n/α_n)∥x_n − x_{n−1}∥ < ∞. This ensures the existence of M_1 > 0 such that

(σ_n/α_n)∥x_n − x_{n−1}∥ ≤ M_1 for all n ≥ 1. (16)

Combining (14)–(16), we get

∥h_n − p∥ ≤ ∥v_n − p∥ ≤ ∥x_n − p∥ + α_n M_1 for all n ≥ 1. (17)

Note that A(C) is bounded, u_n = P_C(v_n − τ_n A v_n), f(H) ⊂ C ⊂ C_n and h_n = P_{C_n}(v_n − τ_n A u_n). Hence we know that {A u_n} is bounded. So, from sup_{n≥1} ∥(I − f)x_n∥ < ∞, it follows that

∥h_n − f(x_n)∥ ≤ ∥v_n − τ_n A u_n − f(x_n)∥ ≤ ∥x_n − x_{n−1}∥ + ∥x_n − f(x_n)∥ + γ∥A u_n∥ ≤ M_0,

where M_0 > 0 is such that M_0 ≥ sup_{n≥1} (∥x_n − x_{n−1}∥ + ∥x_n − f(x_n)∥ + γ∥A u_n∥) (this supremum is finite due to the assumption x_n − x_{n+1} → 0). Consequently,

∥z_n − p∥ ≤ α_n δ∥x_n − p∥ + (1 − α_n)∥h_n − p∥ + α_n∥(f − I)p∥ ≤ (1 − α_n(1 − δ))∥x_n − p∥ + α_n(M_1 + ∥(f − I)p∥),

which together with (γ_n + δ_n)ζ ≤ γ_n and Lemma 5, yields

∥x_{n+1} − p∥ ≤ β_n∥x_n − p∥ + (1 − β_n)∥(1/(1 − β_n))[γ_n(z_n − p) + δ_n(T_n z_n − p)]∥ + γ_n∥h_n − z_n∥ ≤ β_n∥x_n − p∥ + (1 − β_n)[(1 − α_n(1 − δ))∥x_n − p∥ + α_n(M_0 + M_1 + ∥(f − I)p∥)] = [1 − α_n(1 − β_n)(1 − δ)]∥x_n − p∥ + α_n(1 − β_n)(1 − δ) · (M_0 + M_1 + ∥(f − I)p∥)/(1 − δ).

This shows that ∥x_n − p∥ ≤ max{∥x_1 − p∥, (M_0 + M_1 + ∥(I − f)p∥)/(1 − δ)} for all n ≥ 1. Thus, {x_n} is bounded, and so are the sequences {h_n}, {v_n}, {u_n}, {z_n}, {T_n z_n}.
Step 2. We show that there exists M_4 > 0 such that

(1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n M_4.

In fact, using Lemma 7 and the convexity of ∥·∥², we get

∥x_{n+1} − p∥² ≤ ∥β_n(x_n − p) + γ_n(z_n − p) + δ_n(T_n z_n − p)∥² + 2γ_n α_n⟨h_n − f(x_n), x_{n+1} − p⟩ ≤ β_n∥x_n − p∥² + (1 − β_n)∥z_n − p∥² + 2(1 − β_n)α_n∥h_n − f(x_n)∥∥x_{n+1} − p∥ ≤ β_n∥x_n − p∥² + (1 − β_n){α_n δ∥x_n − p∥² + (1 − α_n)∥v_n − p∥² − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_2}, (18)

where M_2 > 0 is such that M_2 ≥ sup_{n≥1} 2(∥(f − I)p∥∥z_n − p∥ + ∥h_n − f(x_n)∥∥x_{n+1} − p∥). Also,

∥v_n − p∥² ≤ ∥x_n − p∥² + α_n(2M_1∥x_n − p∥ + α_n M_1²) ≤ ∥x_n − p∥² + α_n M_3, (19)

where M_3 > 0 is such that M_3 ≥ sup_{n≥1} (2M_1∥x_n − p∥ + α_n M_1²). Substituting (19) into (18), we have

∥x_{n+1} − p∥² ≤ β_n∥x_n − p∥² + (1 − β_n){(1 − α_n(1 − δ))∥x_n − p∥² + (1 − α_n)α_n M_3 − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_2} ≤ ∥x_n − p∥² − (1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_4,

where M_4 := M_2 + M_3. This immediately implies that

(1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n M_4. (21)
Step 3. We show that there exists M > 0 such that

∥x_{n+1} − p∥² ≤ [1 − ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)]∥x_n − p∥² + (((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)) · {(2γ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − p∥∥z_n − x_{n+1}∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − p∥∥z_n − x_n∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))⟨f(p) − p, x_n − p⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n)) · (σ_n/α_n)∥x_n − x_{n−1}∥ · 3M}. (23)

In fact, we get

∥v_n − p∥² ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥(2∥x_n − p∥ + σ_n∥x_n − x_{n−1}∥) ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥ · 3M, (22)

where M > 0 is such that M ≥ sup_{n≥1} max{∥x_n − p∥, σ_n∥x_n − x_{n−1}∥}. By Algorithm 1 and the convexity of ∥·∥², we have

∥x_{n+1} − p∥² ≤ ∥β_n(x_n − p) + γ_n(z_n − p) + δ_n(T_n z_n − p)∥² + 2γ_n α_n⟨h_n − f(x_n), x_{n+1} − p⟩ ≤ β_n∥x_n − p∥² + (1 − β_n)∥(1/(1 − β_n))[γ_n(z_n − p) + δ_n(T_n z_n − p)]∥² + 2γ_n α_n⟨h_n − p, x_{n+1} − p⟩ + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩,

which leads to

∥x_{n+1} − p∥² ≤ β_n∥x_n − p∥² + (1 − β_n)[(1 − α_n)∥h_n − p∥² + 2α_n⟨f(x_n) − p, z_n − p⟩] + γ_n α_n(∥h_n − p∥² + ∥x_{n+1} − p∥²) + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩.

Using (17) and (22), we obtain ∥h_n − p∥² ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥ · 3M. Hence,

∥x_{n+1} − p∥² ≤ [1 − α_n(1 − β_n)]∥x_n − p∥² + (1 − β_n)(1 − α_n)σ_n∥x_n − x_{n−1}∥ · 3M + 2α_n δ_n⟨f(x_n) − p, z_n − p⟩ + γ_n α_n(∥x_n − p∥² + ∥x_{n+1} − p∥²) + (1 − β_n)α_n σ_n∥x_n − x_{n−1}∥ · 3M + 2γ_n α_n⟨f(x_n) − p, z_n − x_{n+1}⟩ ≤ [1 − α_n(1 − β_n)]∥x_n − p∥² + 2γ_n α_n∥f(x_n) − p∥∥z_n − x_{n+1}∥ + 2α_n δ_n⟨f(x_n) − p, x_n − p⟩ + 2α_n δ_n⟨f(x_n) − p, z_n − x_n⟩ + γ_n α_n(∥x_n − p∥² + ∥x_{n+1} − p∥²) + (1 − β_n)σ_n∥x_n − x_{n−1}∥ · 3M ≤ [1 − α_n(1 − β_n)]∥x_n − p∥² + 2γ_n α_n∥f(x_n) − p∥∥z_n − x_{n+1}∥ + 2α_n δ_n δ∥x_n − p∥² + 2α_n δ_n⟨f(p) − p, x_n − p⟩ + 2α_n δ_n∥f(x_n) − p∥∥z_n − x_n∥ + γ_n α_n(∥x_n − p∥² + ∥x_{n+1} − p∥²) + (1 − β_n)σ_n∥x_n − x_{n−1}∥ · 3M,

which, after moving γ_n α_n∥x_{n+1} − p∥² to the left-hand side and using 1 − β_n = γ_n + δ_n, immediately yields (23).
Step 4. We show that x_n → x* ∈ Ω, where x* is the unique solution of (12). Indeed, putting p = x* in (23), we infer that

∥x_{n+1} − x*∥² ≤ [1 − ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)]∥x_n − x*∥² + (((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)) · {(2γ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − x*∥∥z_n − x_{n+1}∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − x*∥∥z_n − x_n∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))⟨f(x*) − x*, x_n − x*⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n)) · (σ_n/α_n)∥x_n − x_{n−1}∥ · 3M}.

It is sufficient to show that lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ ≤ 0. From (21), x_n − x_{n+1} → 0, α_n → 0 and {β_n} ⊂ [a, b] ⊂ (0, 1), we get

lim sup_{n→∞} (1 − α_n)(1 − b)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] ≤ lim sup_{n→∞} [(∥x_n − p∥ + ∥x_{n+1} − p∥)∥x_n − x_{n+1}∥ + α_n M_4] = 0.

This ensures that

lim_{n→∞} ∥v_n − u_n∥ = 0 and lim_{n→∞} ∥h_n − u_n∥ = 0. (25)

Consequently,

∥x_n − u_n∥ ≤ ∥x_n − v_n∥ + ∥v_n − u_n∥ → 0 (n → ∞).

Since z_n = α_n f(x_n) + (1 − α_n)h_n with h_n := P_{C_n}(v_n − τ_n A u_n), we get

∥z_n − u_n∥ = ∥α_n f(x_n) − α_n h_n + h_n − u_n∥ ≤ α_n(∥f(x_n)∥ + ∥h_n∥) + ∥h_n − u_n∥ → 0 (n → ∞), (26)

and hence

∥z_n − x_n∥ ≤ ∥z_n − u_n∥ + ∥u_n − x_n∥ → 0 (n → ∞).

Obviously, combining (25) and (26) guarantees that

∥v_n − z_n∥ ≤ ∥v_n − u_n∥ + ∥u_n − z_n∥ → 0 (n → ∞).

From the boundedness of {x_n}, it follows that there exists a subsequence {x_{n_i}} ⊂ {x_n} such that

lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = lim_{i→∞} ⟨(f − I)x*, x_{n_i} − x*⟩. (28)

Since {x_n} is bounded, we may suppose that x_{n_i} ⇀ x̃. Hence from (28) we get

lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = lim_{i→∞} ⟨(f − I)x*, x_{n_i} − x*⟩ = ⟨(f − I)x*, x̃ − x*⟩. (29)

It is easy to see from v_n − x_n → 0 and x_{n_i} ⇀ x̃ that v_{n_i} ⇀ x̃. Since x_n − x_{n+1} → 0, v_n − u_n → 0, v_n − z_n → 0 and v_{n_i} ⇀ x̃, by Lemma 8 we infer that x̃ ∈ Ω. Therefore, from (12) and (29) we conclude that

lim sup_{n→∞} ⟨(f − I)x*, x_n − x*⟩ = ⟨(f − I)x*, x̃ − x*⟩ ≤ 0.

Note that lim inf_{n→∞} ((1 − 2δ)δ_n − γ_n)/(1 − α_n γ_n) > 0. It follows that Σ_{n=1}^∞ ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n) = ∞. It is clear that

lim sup_{n→∞} {(2γ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − x*∥∥z_n − x_{n+1}∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − x*∥∥z_n − x_n∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))⟨f(x*) − x*, x_n − x*⟩ + ((γ_n + δ_n)/((1 − 2δ)δ_n − γ_n)) · (σ_n/α_n)∥x_n − x_{n−1}∥ · 3M} ≤ 0.

Therefore, applying Lemma 2 to the inequality above, we immediately deduce that x_n → x*.
Next, we introduce another Mann viscosity algorithm with line-search process by the subgradient extragradient technique.
Algorithm 2
Initial Step: Given x_0, x_1 ∈ H arbitrarily, let γ > 0, l ∈ (0, 1), μ ∈ (0, 1).
Iteration Steps: Compute x_{n+1} as follows:
Step 1. Put v_n = x_n − σ_n(x_{n−1} − x_n) and calculate u_n = P_C(v_n − τ_n A v_n), where τ_n is chosen to be the largest τ ∈ {γ, γl, γl², …} such that

τ∥A v_n − A u_n∥ ≤ μ∥v_n − u_n∥.

Step 2. Calculate z_n = (1 − α_n)P_{C_n}(v_n − τ_n A u_n) + α_n f(x_n), with C_n := {v ∈ H : ⟨v_n − τ_n A v_n − u_n, u_n − v⟩ ≥ 0}.
Step 3. Calculate

x_{n+1} = γ_n P_{C_n}(v_n − τ_n A u_n) + δ_n T_n z_n + β_n v_n.

Update n := n + 1 and return to Step 1.
It is worth noting that Lemmas 6–8 remain valid for Algorithm 2.
Theorem 2.
Assume A(C) is bounded and let {x_n} be constructed by Algorithm 2. Then

x_n → x* ∈ Ω ⇔ x_n − x_{n+1} → 0 and sup_{n≥1} ∥(I − f)x_n∥ < ∞,

where x* ∈ Ω is the unique solution of the HVI: ⟨(I − f)x*, x* − ω⟩ ≤ 0 for all ω ∈ Ω.
Proof. 
For the necessity, by an approach similar to that in the proof of Theorem 1, we obtain that there is a unique solution x* ∈ Ω of (12), and the necessity follows as before.
We show the sufficiency below. To this end, we suppose x_n − x_{n+1} → 0 and sup_{n≥1} ∥(I − f)x_n∥ < ∞, and prove the sufficiency in the following steps.  □
Step 1. We show the boundedness of {x_n}. In fact, by an inference similar to that in Step 1 of the proof of Theorem 1, we obtain that (13)–(17) hold. So, using Algorithm 2 and (17) we obtain

∥z_n − p∥ ≤ (1 − α_n(1 − δ))∥x_n − p∥ + α_n(M_1 + ∥(f − I)p∥),

which together with (γ_n + δ_n)ζ ≤ γ_n and Lemma 5, yields

∥x_{n+1} − p∥ ≤ β_n∥v_n − p∥ + (1 − β_n)∥(1/(1 − β_n))[γ_n(z_n − p) + δ_n(T_n z_n − p)]∥ + γ_n∥h_n − z_n∥ ≤ β_n(∥x_n − p∥ + α_n M_1) + (1 − β_n)[(1 − α_n(1 − δ))∥x_n − p∥ + α_n(M_0 + M_1 + ∥(f − I)p∥)] ≤ [1 − α_n(1 − β_n)(1 − δ)]∥x_n − p∥ + α_n(1 − β_n)(1 − δ) · (M_0 + (1/(1 − β_n))M_1 + ∥(f − I)p∥)/(1 − δ).

Therefore, we get the boundedness of {x_n}, and hence that of the sequences {h_n}, {v_n}, {u_n}, {z_n}, {T_n z_n}.
Step 2. We show that there exists M_4 > 0 such that

(1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n M_4.

In fact, by Lemma 7 and the convexity of ∥·∥², we get

∥x_{n+1} − p∥² ≤ ∥β_n(v_n − p) + γ_n(z_n − p) + δ_n(T_n z_n − p)∥² + 2γ_n α_n⟨h_n − f(x_n), x_{n+1} − p⟩ ≤ β_n∥v_n − p∥² + (1 − β_n)∥z_n − p∥² + 2(1 − β_n)α_n∥h_n − f(x_n)∥∥x_{n+1} − p∥ ≤ β_n∥v_n − p∥² + (1 − β_n){α_n δ∥x_n − p∥² + (1 − α_n)∥v_n − p∥² − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_2}, (34)

where M_2 > 0 is such that M_2 ≥ sup_{n≥1} 2(∥(f − I)p∥∥z_n − p∥ + ∥h_n − f(x_n)∥∥x_{n+1} − p∥). Also,

∥v_n − p∥² ≤ ∥x_n − p∥² + α_n(2M_1∥x_n − p∥ + α_n M_1²) ≤ ∥x_n − p∥² + α_n M_3, (35)

where M_3 > 0 is such that M_3 ≥ sup_{n≥1} (2M_1∥x_n − p∥ + α_n M_1²). Substituting (35) into (34), we have

∥x_{n+1} − p∥² ≤ β_n∥x_n − p∥² + (1 − β_n){(1 − α_n(1 − δ))∥x_n − p∥² + (1 − α_n)α_n M_3 − (1 − α_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_2} + β_n α_n M_3 ≤ ∥x_n − p∥² − (1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] + α_n M_4,

where M_4 := M_2 + M_3. This ensures that

(1 − α_n)(1 − β_n)(1 − μ)[∥v_n − u_n∥² + ∥h_n − u_n∥²] ≤ ∥x_n − p∥² − ∥x_{n+1} − p∥² + α_n M_4.
Step 3. We show that there exists M > 0 such that

∥x_{n+1} − p∥² ≤ [1 − ((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)]∥x_n − p∥² + (((1 − 2δ)δ_n − γ_n)α_n/(1 − α_n γ_n)) · {(2γ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − p∥∥z_n − x_{n+1}∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))∥f(x_n) − p∥∥z_n − x_n∥ + (2δ_n/((1 − 2δ)δ_n − γ_n))⟨f(p) − p, x_n − p⟩ + (1/((1 − 2δ)δ_n − γ_n)) · (σ_n/α_n)∥x_n − x_{n−1}∥ · 3M}.

In fact, we get

∥v_n − p∥² ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥(2∥x_n − p∥ + σ_n∥x_n − x_{n−1}∥) ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥ · 3M, (38)

where M > 0 is such that M ≥ sup_{n≥1} max{∥x_n − p∥, σ_n∥x_n − x_{n−1}∥}. Using Algorithm 2 and the convexity of ∥·∥², we get

∥x_{n+1} − p∥² ≤ ∥β_n(v_n − p) + γ_n(z_n − p) + δ_n(T_n z_n − p)∥² + 2γ_n α_n⟨h_n − f(x_n), x_{n+1} − p⟩ ≤ β_n∥v_n − p∥² + (1 − β_n)∥(1/(1 − β_n))[γ_n(z_n − p) + δ_n(T_n z_n − p)]∥² + 2γ_n α_n⟨h_n − p, x_{n+1} − p⟩ + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩,

which leads to

∥x_{n+1} − p∥² ≤ β_n∥v_n − p∥² + (1 − β_n)[(1 − α_n)∥h_n − p∥² + 2α_n⟨f(x_n) − p, z_n − p⟩] + γ_n α_n(∥h_n − p∥² + ∥x_{n+1} − p∥²) + 2γ_n α_n⟨p − f(x_n), x_{n+1} − p⟩.

Using (17) and (38), we deduce that ∥h_n − p∥² ≤ ∥v_n − p∥² ≤ ∥x_n − p∥² + σ_n∥x_n − x_{n−1}∥ · 3M. Hence,

∥x_{n+1} − p∥² ≤ [1 − α_n(1 − β_n)]∥x_n − p∥² + [1 − α_n(1 − β_n)]σ_n∥x_n − x_{n−1}∥ · 3M + 2α_n δ_n⟨f(x_n) − p, z_n − p⟩ + γ_n α_n(∥x_n − p∥² + ∥x_{n+1} − p∥²) + (1 − β_n)α_n σ_n∥x_n − x_{n−1}∥ · 3M + 2γ_n α_n⟨f(x_n) − p, z_n − x_{n+1}⟩ ≤ [1 − α_n(1 − β_n)]∥x_n − p∥² + 2γ_n α_n∥f(x_n) − p∥∥z_n − x_{n+1}∥ + 2α_n δ_n δ∥x_n − p∥² + 2α_n δ_n⟨f(p) − p, x_n − p⟩ + 2α_n δ_n∥f(x_n) − p∥∥z_n − x_n∥ + γ_n α_n(∥x_n − p∥² + ∥x_{n+1} − p∥²) + σ_n∥x_n − x_{n−1}∥ · 3M,

which immediately yields the claimed inequality.
Step 4. In order to show that x n x * Ω , which is the unique solution of (12), we can follow a similar method to that in Step 4 for the proof of Theorem 1.
Finally, we apply our main results to solve the VIP and the common fixed point problem (CFPP) in the following illustrative example.
The starting point x_0 = x_1 is randomly chosen in the real line. Put f(u) = (1/8)sin u, γ = l = μ = 1/2, σ_n = α_n = 1/(n + 1), β_n = 1/3, γ_n = 1/6 and δ_n = 1/2.
We first provide an example of a Lipschitzian, pseudomonotone self-mapping A satisfying the boundedness of A(C) and a strictly pseudocontractive self-mapping T_1 with Ω = Fix(T_1) ∩ VI(C, A) ≠ ∅. Let C = [−1, 2] and let H be the real line with the inner product ⟨a, b⟩ = ab and induced norm ∥·∥ = |·|. Then f is a δ-contractive map with δ = 1/8 ∈ [0, 1/2) and f(H) ⊂ C, because ∥f(u) − f(v)∥ = (1/8)∥sin u − sin v∥ ≤ (1/8)∥u − v∥ for all u, v ∈ H.
Let A : H → H and T_1 : H → H be defined as Au := 1/(1 + |sin u|) − 1/(1 + |u|) and T_1 u := (1/2)u − (3/8)sin u for all u ∈ H. We first show that A is an L-Lipschitzian, pseudomonotone operator with L = 2 such that A(C) is bounded. In fact, for all u, v ∈ H we get

∥Au − Av∥ ≤ |1/(1 + |u|) − 1/(1 + |v|)| + |1/(1 + |sin u|) − 1/(1 + |sin v|)| = |(|v| − |u|)/((1 + |u|)(1 + |v|))| + |(|sin v| − |sin u|)/((1 + |sin u|)(1 + |sin v|))| ≤ ∥u − v∥/((1 + |u|)(1 + |v|)) + ∥sin u − sin v∥/((1 + |sin u|)(1 + |sin v|)) ≤ 2∥u − v∥.
This implies that A is 2-Lipschitzian. Next, we show that A is pseudomonotone. For any given u, v ∈ H, it is clear that the following implication holds:

⟨Au, u − v⟩ = (1/(1 + |sin u|) − 1/(1 + |u|))(u − v) ≥ 0 ⇒ ⟨Av, u − v⟩ = (1/(1 + |sin v|) − 1/(1 + |v|))(u − v) ≥ 0.
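Pseudomonotonicity of this A is strictly weaker than monotonicity, which a quick numerical check (ours, not from the paper) makes concrete: A fails the monotonicity inequality for the pair u = π, v = 3π/2, while the pseudomonotonicity implication holds on a grid of sampled pairs.

```python
import math, itertools

def A(u):  # the operator from the example
    return 1.0 / (1.0 + abs(math.sin(u))) - 1.0 / (1.0 + abs(u))

# A is NOT monotone: it decreases between u = pi and u = 3*pi/2, so
# (A(u1) - A(u2)) * (u1 - u2) < 0 for this pair.
u1, u2 = math.pi, 1.5 * math.pi
print((A(u1) - A(u2)) * (u1 - u2) < 0)  # True

# Yet the pseudomonotonicity implication holds on sampled pairs (in R:
# A(u)*(u - v) >= 0 implies A(v)*(u - v) >= 0).
grid = [k * 0.37 - 5.0 for k in range(28)]
ok = all(A(v) * (u - v) >= 0
         for u, v in itertools.product(grid, grid)
         if A(u) * (u - v) >= 0)
print(ok)  # True
```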
Furthermore, it is easy to see that T_1 is strictly pseudocontractive with constant ζ_1 = 1/4. In fact, we observe that for all u, v ∈ H,

∥T_1 u − T_1 v∥ ≤ (1/2)∥u − v∥ + (3/8)∥sin u − sin v∥ ≤ ∥u − v∥ + (1/4)∥(I − T_1)u − (I − T_1)v∥.

It is clear that (γ_n + δ_n)ζ_1 = (1/6 + 1/2) · (1/4) = 1/6 = γ_n < (1 − 2δ)δ_n = (1 − 2 · (1/8)) · (1/2) = 3/8 for all n ≥ 1. In addition, it is clear that Fix(T_1) = {0} and A0 = 0, since (I − T_1)u = u/2 + (3/8)sin u has derivative 1/2 + (3/8)cos u > 0 and hence vanishes only at u = 0. Therefore, Ω = {0}. In this case, Algorithm 1 can be rewritten as follows:
v_n = x_n − (1/(n + 1))(x_{n−1} − x_n),
u_n = P_C(v_n − τ_n A v_n),
z_n = (1/(n + 1))f(x_n) + (n/(n + 1))P_{C_n}(v_n − τ_n A u_n),
x_{n+1} = (1/3)x_n + (1/6)P_{C_n}(v_n − τ_n A u_n) + (1/2)T_1 z_n for all n ≥ 1,

with {C_n} and {τ_n} selected as in Algorithm 1. Then, by Theorem 1, we know that x_n → 0 ∈ Ω if and only if x_n − x_{n+1} → 0 (n → ∞) and sup_{n≥1} |x_n − (1/8)sin x_n| < ∞.
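This concrete instance is easy to run. The sketch below implements it directly (C = [−1, 2], γ = l = μ = 1/2; in R, the half-space C_n reduces to a half-line, so its projection is a one-sided clip). Variable names and the starting point x_0 = x_1 = 1.5 are our choices.

```python
import math

def A(u):     # pseudomonotone, 2-Lipschitz operator from the example
    return 1.0 / (1.0 + abs(math.sin(u))) - 1.0 / (1.0 + abs(u))

def T1(u):    # strictly pseudocontractive mapping, Fix(T1) = {0}
    return 0.5 * u - 0.375 * math.sin(u)

def f(u):     # viscosity contraction, delta = 1/8
    return 0.125 * math.sin(u)

def proj_C(u):  # P_C for C = [-1, 2]
    return max(-1.0, min(2.0, u))

def proj_Cn(p, a, u_n):
    """P_{C_n} for C_n = {v : a*(u_n - v) >= 0}, a half-line in R."""
    if a > 0:    # C_n = (-inf, u_n]
        return min(p, u_n)
    if a < 0:    # C_n = [u_n, +inf)
        return max(p, u_n)
    return p     # a = 0: C_n is the whole real line

def iterate(x_prev, x, n, gamma=0.5, l=0.5, mu=0.5):
    """One step of Algorithm 1 for the example data."""
    sigma = alpha = 1.0 / (n + 1)
    v = x - sigma * (x_prev - x)
    tau = gamma                        # Armijo backtracking for rule (1)
    u = proj_C(v - tau * A(v))
    while tau * abs(A(v) - A(u)) > mu * abs(v - u):
        tau *= l
        u = proj_C(v - tau * A(v))
    a = v - tau * A(v) - u             # "normal" defining C_n
    h = proj_Cn(v - tau * A(u), a, u)  # P_{C_n}(v - tau*A(u))
    z = alpha * f(x) + (1.0 - alpha) * h
    return x / 3.0 + h / 6.0 + 0.5 * T1(z)

x_prev, x = 1.5, 1.5
for n in range(1, 200):
    x_prev, x = x, iterate(x_prev, x, n)
print(x)  # a value very close to 0, the unique point of Omega
```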
On the other hand, Algorithm 2 can be rewritten as follows:

v_n = x_n − (1/(n + 1))(x_{n−1} − x_n),
u_n = P_C(v_n − τ_n A v_n),
z_n = (1/(n + 1))f(x_n) + (n/(n + 1))P_{C_n}(v_n − τ_n A u_n),
x_{n+1} = (1/3)v_n + (1/6)P_{C_n}(v_n − τ_n A u_n) + (1/2)T_1 z_n for all n ≥ 1,

with {C_n} and {τ_n} selected as in Algorithm 2. Then, by Theorem 2, we know that x_n → 0 ∈ Ω if and only if x_n − x_{n+1} → 0 (n → ∞) and sup_{n≥1} |x_n − (1/8)sin x_n| < ∞.

Author Contributions

All authors contributed equally to this manuscript.

Funding

This research was partially supported by the Innovation Program of Shanghai Municipal Education Commission (15ZZ068), Ph.D. Program Foundation of Ministry of Education of China (20123127110002) and Program for Outstanding Academic Leaders in Shanghai City (15XD1503100).

Conflicts of Interest

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial or non-financial interest in the subject matter discussed in this manuscript.

References

  1. Browder, F.E.; Petryshyn, W.V. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228.
  2. Ceng, L.C.; Kong, Z.R.; Wen, C.F. On general systems of variational inequalities. Comput. Math. Appl. 2013, 66, 1514–1532.
  3. Nguyen, L.V. Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. 2019.
  4. Bin Dehaish, B.A. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336.
  5. Qin, X.; Cho, S.Y.; Wang, L. Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 2018, 67, 1377–1388.
  6. Liu, L. A hybrid steepest method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488.
  7. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
  8. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642.
  9. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Relaxed extragradient methods for finding minimum-norm solutions of the split feasibility problem. Nonlinear Anal. 2012, 75, 2116–2125.
  10. Qin, X.; Cho, S.Y.; Yao, J.C. Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 2019.
  11. Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013, 199.
  12. Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31.
  13. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–501.
  14. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
  15. Cho, S.Y.; Kang, S.M. Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24, 224–228.
  16. Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618.
  17. Ceng, L.C.; Yuan, Q. Hybrid Mann viscosity implicit iteration methods for triple hierarchical variational inequalities, systems of variational inequalities and fixed point problems. Mathematics 2019, 7, 142.
  18. Qin, X.; Yao, J.C. Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 2017, 18, 925–935.
  19. Qin, X.; Yao, J.C. Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 2016, 232.
  20. Ceng, L.C.; Wong, M.M.; Yao, J.C. A hybrid extragradient-like approximation method with regularization for solving split feasibility and fixed point problems. J. Nonlinear Convex Anal. 2013, 14, 163–182.
  21. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
  22. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610.
  23. Thong, D.V.; Hieu, D.V. Inertial subgradient extragradient algorithms with line-search process for solving variational inequality problems and fixed point problems. Numer. Algorithms 2019, 80, 1283–1307.
  24. Takahahsi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
  25. Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25. [Google Scholar] [CrossRef]
  26. Qin, X.; Cho, S.Y.; Wang, L. Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013, 148. [Google Scholar] [CrossRef] [Green Version]
  27. Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
  28. Zhao, X.; Ng, K.F.; Li, C.; Yao, J.C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641. [Google Scholar] [CrossRef]
  29. Cho, S.Y.; Bin Dehaish, B.A. Weak convergence of a splitting algorithm in Hilbert spaces. J. Appl. Anal. Comput. 2017, 7, 427–438. [Google Scholar]
  30. Cho, S.Y.; Qin, X. On the strong convergence of an iterative process for asymptotically strict pseudocontractions and equilibrium problems. Appl. Math. Comput. 2014, 235, 430–438. [Google Scholar] [CrossRef]
  31. Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
  32. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118. [Google Scholar] [CrossRef]
  33. Qin, X.; Cho, S.Y.; Wang, L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, 2014, 75. [Google Scholar] [CrossRef] [Green Version]
  34. Ceng, L.C.; Ansari, Q.H.; Wong, N.C.; Yao, J.C. An extragradient-like approximation method for variational inequalities and fixed point problems. Fixed Point Theory Appl. 2011, 2011, 18. [Google Scholar] [CrossRef]
  35. Ceng, L.C.; Petrusel, A.; Yao, J.C. Composite viscosity approximation methods for equilibrium problem, variational inequality and common fixed points. J. Nonlinear Convex Anal. 2014, 15, 219–240. [Google Scholar]
  36. Ceng, L.C.; Plubtieng, S.; Wong, M.M.; Yao, J.C. System of variational inequalities with constraints of mixed equilibria, variational inequalities, and convex minimization and fixed point problems. J. Nonlinear Convex Anal. 2015, 16, 385–421. [Google Scholar]
  37. Ceng, L.C.; Gupta, H.; Ansari, Q.H. Implicit and explicit algorithms for a system of nonlinear variational inequalities in Banach spaces. J. Nonlinear Convex Anal. 2015, 16, 965–984. [Google Scholar]
  38. Ceng, L.C.; Guu, S.M.; Yao, J.C. Hybrid iterative method for finding common solutions of generalized mixed equilibrium and fixed point problems. Fixed Point Theory Appl. 2012, 2012, 92. [Google Scholar] [CrossRef] [Green Version]
  39. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
  40. Xu, H.K.; Kim, T.H. Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119, 185–201. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Ceng, L.-C.; Petruşel, A.; Yao, J.-C. On Mann Viscosity Subgradient Extragradient Algorithms for Fixed Point Problems of Finitely Many Strict Pseudocontractions and Variational Inequalities. Mathematics 2019, 7, 925. https://doi.org/10.3390/math7100925


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
