Article

Generalized Mann Viscosity Implicit Rules for Solving Systems of Variational Inequalities with Constraints of Variational Inclusions and Fixed Point Problems

1 Department of Mathematics, Shanghai Normal University, Shanghai 200234, China
2 College of Science, Shijiazhuang University, Shijiazhuang 266100, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 933; https://doi.org/10.3390/math7100933
Submission received: 1 September 2019 / Revised: 4 October 2019 / Accepted: 7 October 2019 / Published: 10 October 2019
(This article belongs to the Special Issue Variational Inequality)

Abstract

In this work, let X be a Banach space which is both uniformly convex and q-uniformly smooth, where 1 < q ≤ 2. We introduce and consider a generalized Mann-like viscosity implicit rule for treating a general system of variational inequalities, a variational inclusion and a common fixed point problem of a countable family of nonexpansive mappings in X. The generalized Mann-like viscosity implicit rule investigated in this work combines Korpelevich's extragradient technique, the implicit viscosity iterative method and Mann's iteration method. We show that the iterative sequences generated by this rule converge strongly to a solution of the general optimization system.

1. Introduction

Throughout this work, we always assume that C is a nonempty, convex and closed subset of a real Banach space X, and X* denotes the dual space of X. The norms of X and X* are both denoted by ‖·‖. Let T be a nonlinear self-mapping on C with a nonempty set of fixed points.
We use ⟨·,·⟩ to denote the duality pairing. The (possibly set-valued) normalized duality mapping J : X → 2^{X*} is defined by
J(x) := {φ ∈ X* : ⟨x, φ⟩ = ‖x‖² = ‖φ‖²}, ∀x ∈ X.
A Banach space X is said to be smooth (to have a Gâteaux differentiable norm) if lim_{t→0⁺} (‖x + ty‖ − ‖x‖)/t exists for all ‖x‖ = ‖y‖ = 1; in such a space J is single-valued and norm-to-weak* continuous. X is said to be uniformly smooth (to have a uniformly Fréchet differentiable norm) if the above limit is attained uniformly for ‖x‖ = ‖y‖ = 1; in such a space J is norm-to-norm uniformly continuous on bounded sets. X is said to be strictly convex if ‖(1 − λ)x + λy‖ < 1 for all λ ∈ (0, 1) and all ‖x‖ = ‖y‖ = 1 with x ≠ y. X is said to be uniformly convex if, for each ε ∈ (0, 2], there is a constant δ > 0 such that ‖(x + y)/2‖ > 1 − δ implies ‖x − y‖ < ε for all ‖x‖ = ‖y‖ = 1. Every uniformly convex Banach space is strictly convex. In the reflexive setting, X is strictly convex if and only if X* is smooth.
Next, we suppose X is smooth, i.e., J is single-valued. Let A_1, A_2 : C → X be two nonlinear single-valued mappings. One is concerned with the problem of finding (x*, y*) ∈ C × C such that
⟨x* − y* + μ_1 A_1 y*, J(x* − x)⟩ ≤ 0, ∀x ∈ C,
⟨y* − x* + μ_2 A_2 x*, J(y* − x)⟩ ≤ 0, ∀x ∈ C,   (1)
where μ_1 and μ_2 are two positive real constants. This optimization system is called a general system of variational inequalities (GSVI). In particular, when X = H is a Hilbert space, GSVI (1) reduces to the following GSVI of finding (x*, y*) ∈ C × C such that
⟨x* − y* + μ_1 A_1 y*, x* − x⟩ ≤ 0, ∀x ∈ C,
⟨y* − x* + μ_2 A_2 x*, y* − x⟩ ≤ 0, ∀x ∈ C,
where μ_1 and μ_2 are two positive real constants. This system was introduced and studied in [1]. Additionally, if A = A_1 = A_2 and x* = y*, then GSVI (1) becomes the variational inequality problem of finding x* ∈ C such that ⟨Ax*, J(x − x*)⟩ ≥ 0, ∀x ∈ C. In 2006, Aoyama, Iiduka and Takahashi [2] proposed an iterative scheme for approximating its solutions and proved weak convergence of the iterative sequences generated by the proposed algorithm. Recently, many researchers have investigated the variational inequality problem through gradient-based or splitting-based methods; see, e.g., [3,4,5,6,7,8,9,10,11,12,13]. Some stability results can be found in [14,15].
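For intuition, the Hilbert-space case can be illustrated numerically. The sketch below is not the algorithm of this paper; it is only a Picard iteration, in ℝ, of the composite mapping G = P_C(I − μ_1 A_1)P_C(I − μ_2 A_2) whose fixed points solve the GSVI. All data are made up for the illustration: A_1 = A_2 = A with A(x) = x − 0.3 (strongly monotone, so G is a contraction), C = [0, 1] and μ = 0.5.

```python
# Toy GSVI illustration in a Hilbert space (here R). With A1 = A2 = A and
# mu1 = mu2 = mu, a solution is a fixed point of
#   G(x) = P_C(y - mu*A(y)),  y = P_C(x - mu*A(x)).
# Made-up data: C = [0, 1], A(x) = x - 0.3, mu = 0.5; solution x* = 0.3.

def P_C(x, lo=0.0, hi=1.0):
    # metric projection onto C = [lo, hi]
    return max(lo, min(hi, x))

def A(x):
    return x - 0.3

def G(x, mu=0.5):
    y = P_C(x - mu * A(x))        # inner step with A2
    return P_C(y - mu * A(y))     # outer step with A1

x = 1.0
for _ in range(200):              # Picard iteration of the contraction G
    x = G(x)
print(round(x, 6))                # prints 0.3, the VI solution
```

Here strong monotonicity of A makes G a strict contraction, so plain Picard iteration suffices; for merely inverse-strongly monotone operators one needs the averaged/viscosity schemes discussed below.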
In 2013, Ceng, Latif and Yao [16] introduced and analyzed an implicit computing method based on a double-step relaxed gradient idea in the setting of a uniformly convex and 2-uniformly smooth space X with 2-uniform smoothness coefficient κ_2. Let Π_C : X → C be a sunny nonexpansive retraction. Let f : C → C be a contraction with constant δ ∈ (0, 1). Let the mapping A_i : C → X be α_i-inverse-strongly accretive for i = 1, 2. Let {S_n}_{n=0}^∞ be a countable family of single-valued nonexpansive self-mappings on C such that Ω = ⋂_{n=0}^∞ Fix(S_n) ∩ GSVI(C, A_1, A_2) ≠ ∅, where GSVI(C, A_1, A_2) stands for the set of fixed points of the mapping G := Π_C(I − μ_1 A_1)Π_C(I − μ_2 A_2). For an arbitrary initial x_0 ∈ C, let {x_n} be the sequence generated by
y_n = α_n f(y_n) + (1 − α_n)Π_C(I − μ_1 A_1)Π_C(I − μ_2 A_2)x_n,
x_{n+1} = β_n x_n + (1 − β_n)S_n y_n, ∀n ≥ 0,
with 0 < μ_i < 2α_i/κ_2 for i = 1, 2, where {α_n} and {β_n} are sequences of real numbers in (0, 1) satisfying the restrictions ∑_{n=0}^∞ α_n = ∞, lim_{n→∞} α_n = 0, lim sup_{n→∞} β_n < 1 and lim inf_{n→∞} β_n > 0. They proved strong convergence of {x_n} to a point x* ∈ Ω, which solves the variational inequality ⟨(I − f)x*, J(x* − p)⟩ ≤ 0, ∀p ∈ Ω. Recently, projection-like methods, including sunny nonexpansive retractions, have been studied extensively in Hilbert and Banach spaces; see, e.g., [17,18,19,20,21,22,23,24] and the references therein.

2. Preliminaries

Next, let X be a uniformly convex and q-uniformly smooth Banach space. Then the following inequality holds:
‖x + y‖^q ≤ ‖x‖^q + q⟨y, J_q(x)⟩ + κ_q‖y‖^q, ∀x, y ∈ X,
where κ_q is the q-uniform smoothness coefficient. Let Π_C, A_1, A_2, G, {S_n}_{n=0}^∞ be the same mappings as above, and assume Ω = ⋂_{n=0}^∞ Fix(S_n) ∩ GSVI(C, A_1, A_2) ≠ ∅. Suppose that F : C → X is η-strongly accretive and k-Lipschitzian with constants η, k > 0, and f : C → X is an L-Lipschitzian mapping. Assume 0 < ρ < (qη/(κ_q k^q))^{1/(q−1)}, 0 < μ_i < (qα_i/κ_q)^{1/(q−1)} for i = 1, 2, and 0 ≤ γL < τ, where τ = ρ(η − κ_q ρ^{q−1} k^q/q). Recently, Song and Ceng [25] proposed and considered a very general iterative scheme by the modified relaxed extragradient method: for an arbitrary initial x_0 ∈ C, generate {x_n} by
y_n = (1 − β_n)x_n + β_n Π_C(I − μ_1 A_1)Π_C(I − μ_2 A_2)x_n,
x_{n+1} = Π_C[((1 − γ_n)I − α_n ρF)S_n y_n + γα_n f(x_n) + γ_n x_n], ∀n ≥ 0,
where {α_n}, {β_n}, {γ_n} ⊂ (0, 1) satisfy the conditions: (i) ∑_{n=0}^∞ α_n = ∞, ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞ and α_n → 0; (ii) lim sup_{n→∞} γ_n < 1, lim inf_{n→∞} γ_n > 0 and ∑_{n=0}^∞ |γ_{n+1} − γ_n| < ∞; and (iii) ∑_{n=0}^∞ |β_{n+1} − β_n| < ∞ and lim inf_{n→∞} β_n > 0. The authors proved strong convergence of {x_n} to a point x* ∈ Ω, which solves the variational inequality ⟨(ρF − γf)x*, J(x* − p)⟩ ≤ 0, ∀p ∈ Ω.
On the other hand, let A : X → X be an α-inverse-strongly accretive operator, B : X → 2^X an m-accretive operator, and f : X → X a contraction with constant δ ∈ (0, 1). Assume that the inclusion problem of finding x* ∈ X such that 0 ∈ (A + B)x* has a solution, i.e., Ω = (A + B)^{−1}0 ≠ ∅. In 2017, Chang et al. [26] introduced and studied a generalized viscosity implicit rule: for an arbitrary initial x_0 ∈ X, generate {x_n} by
x_{n+1} = α_n f(x_n) + (1 − α_n)J_λ^B(I − λA)(t_n x_n + (1 − t_n)x_{n+1}), ∀n ≥ 0,
where J_λ^B = (I + λB)^{−1}, {t_n}, {α_n} ⊂ [0, 1] and λ ∈ (0, ∞) satisfy the conditions: (i) ∑_{n=0}^∞ α_n = ∞ and lim_{n→∞} α_n = 0; (ii) ∑_{n=0}^∞ |α_{n+1} − α_n| < ∞; (iii) 0 < ε ≤ t_n ≤ t_{n+1} < 1, ∀n ≥ 0; and (iv) 0 < λ ≤ (qα/κ_q)^{1/(q−1)}. The authors proved strong convergence of {x_n} to a point x* ∈ Ω, which solves the variational inequality ⟨(I − f)x*, J(x* − p)⟩ ≤ 0, ∀p ∈ Ω. For recent results, we refer the reader to [27,28,29,30,31,32,33,34]. The purpose of this work is to approximate a common solution of GSVI (1), a variational inclusion and a common fixed point problem (CFPP) of a countable family of nonexpansive mappings in uniformly convex and q-uniformly smooth Banach spaces. This paper introduces and considers a generalized Mann viscosity implicit rule based on Korpelevich's extragradient method, the implicit approximation method and Mann's iteration method. We investigate norm convergence of the sequences generated by the generalized Mann viscosity implicit rule to a common solution of the GSVI, the variational inclusion and the CFPP, which also solves a hierarchical variational inequality. Our results improve and extend recently reported results, e.g., those of Ceng et al. [16], Song and Ceng [25] and Chang et al. [26].
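The implicit rule of Chang et al. can be mimicked numerically in ℝ, viewed as a Hilbert space. The data below are invented purely for illustration: A(x) = x − 1 is inverse-strongly monotone, B is the normal cone of C = [0, 0.6] so that J_λ^B is the projection onto C and (A + B)^{−1}0 = {0.6}, and f(x) = x/2 is a 1/2-contraction. The implicit equation for x_{n+1} is solved at every step by an inner fixed-point loop, which converges because the underlying map is a contraction in the unknown.

```python
# Numerical sketch of the generalized viscosity implicit rule in R.
# Made-up data: A(x) = x - 1, B = N_C with C = [0, 0.6] (so the resolvent
# J_lam^B is the projection P_C and (A + B)^{-1}0 = {0.6}), f(x) = x/2.

def P_C(x):                        # resolvent J_lam^B of B = N_C
    return max(0.0, min(0.6, x))

def T(z, lam=0.5):                 # forward-backward map J_lam^B (I - lam*A)
    return P_C(z - lam * (z - 1.0))

def f(x):
    return 0.5 * x

x = 0.0
for n in range(1, 200):
    a, t = 1.0 / (n + 1), 0.5      # alpha_n -> 0, sum alpha_n = inf
    w = x
    for _ in range(60):            # inner loop for the implicit x_{n+1}:
        w = a * f(x) + (1 - a) * T(t * x + (1 - t) * w)
    x = w
print(round(x, 2))                 # tends to the zero 0.6 of A + B
```

The inner map w ↦ α_n f(x_n) + (1 − α_n)T(t_n x_n + (1 − t_n)w) has Lipschitz constant at most (1 − α_n)(1 − t_n) < 1, which is exactly why the implicit step is well defined.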
Next, for simplicity, we write x_n ⇀ x (resp., x_n → x) to indicate the weak (resp., strong) convergence of the sequence {x_n} to x. It is known that J(tx) = tJ(x) and J(−x) = −J(x) for all t > 0 and x ∈ X. The modulus of convexity of X is defined by δ_X(ε) = inf{1 − ‖x + y‖/2 : x, y ∈ U, ‖x − y‖ ≥ ε}, ε ∈ [0, 2], where U = {x ∈ X : ‖x‖ = 1} is the unit sphere. X is said to be uniformly convex if δ_X(0) = 0 and δ_X(ε) > 0 for each ε ∈ (0, 2]. Let q be a fixed real number with q > 1. A Banach space X is said to be q-uniformly convex if there is c > 0 such that δ_X(t) ≥ ct^q, ∀t ∈ (0, 2]. Each Hilbert space H is 2-uniformly convex, while L^p and ℓ^p spaces are max{2, p}-uniformly convex for each p > 1.
Proposition 1.
[35] Let X be a smooth and uniformly convex Banach space, and let r > 0. Then there exists a continuous, strictly increasing and convex function g : [0, 2r] → ℝ with g(0) = 0 such that g(‖x − y‖) ≤ ‖x‖² − 2⟨x, J(y)⟩ + ‖y‖² for all x, y ∈ B_r = {y ∈ X : ‖y‖ ≤ r}.
Let ρ_X : [0, ∞) → [0, ∞) be the modulus of smoothness of X defined by
ρ_X(t) = sup{(‖x + y‖ + ‖x − y‖ − 2)/2 : ‖x‖ = 1, ‖y‖ ≤ t}.
A Banach space X is said to be q-uniformly smooth if there is c > 0 such that ρ_X(t) ≤ ct^q, ∀t > 0. It is known that Hilbert spaces and the L^p and ℓ^p spaces (p > 1) are uniformly smooth. More precisely, each Hilbert space H is 2-uniformly smooth, while L^p and ℓ^p spaces are min{2, p}-uniformly smooth for each p > 1. Let q > 1. The generalized duality mapping J_q : X → 2^{X*} is defined by
J_q(x) := {φ ∈ X* : ⟨x, φ⟩ = ‖x‖^q and ‖φ‖ = ‖x‖^{q−1}}, ∀x ∈ X.
It is easy to see that J_q(x) = ‖x‖^{q−2}J(x) for x ≠ 0, and that if X = H is a Hilbert space, then J_2 = J = I, the identity mapping of H.
Proposition 2.
[35] Let q ∈ (1, 2] be a given real number and let X be q-uniformly smooth. Then ‖x + y‖^q ≤ ‖x‖^q + q⟨y, J_q(x)⟩ + κ_q‖y‖^q, ∀x, y ∈ X, where κ_q is the q-uniform smoothness coefficient. In particular, if X is 2-uniformly smooth, then ‖x + y‖² ≤ ‖x‖² + 2⟨y, J(x)⟩ + κ_2‖y‖², ∀x, y ∈ X.
Using the subdifferential structure of the convex function ‖·‖^q, we obtain the following tool.
Lemma 1.
Let q > 1 and let X be a real normed space with generalized duality mapping J_q. Then, for any given x, y ∈ X, ‖x + y‖^q ≤ ‖x‖^q + q⟨y, j_q(x + y)⟩, ∀j_q(x + y) ∈ J_q(x + y).
Let D be a subset of C and let Π be a mapping of C into D. We say that Π is sunny if Π(x) = Π[Π(x) + t(x − Π(x))] whenever Π(x) + t(x − Π(x)) ∈ C for x ∈ C and t ≥ 0. We say Π is a retraction if Π² = Π. A subset D of C is called a sunny nonexpansive retract of C if there exists a sunny nonexpansive retraction from C onto D.
Proposition 3.
[36] Let X be smooth, let D be a nonempty subset of C and let Π be a retraction of C onto D. Then the following statements are equivalent: (i) Π is sunny and nonexpansive; (ii) ‖Π(x) − Π(y)‖² ≤ ⟨x − y, J(Π(x) − Π(y))⟩, ∀x, y ∈ C; (iii) ⟨x − Π(x), J(y − Π(x))⟩ ≤ 0, ∀x ∈ C, y ∈ D.
Let A : C → 2^X be a set-valued operator with Ax ≠ ∅ for all x ∈ C, and let q > 1. The operator A is said to be accretive if for each x, y ∈ C, there exists j_q(x − y) ∈ J_q(x − y) such that ⟨u − v, j_q(x − y)⟩ ≥ 0, ∀u ∈ Ax, v ∈ Ay. An accretive operator A is inverse-strongly accretive of order q, i.e., α-inverse-strongly accretive, if for each x, y ∈ C there exist α > 0 and j_q(x − y) ∈ J_q(x − y) such that ⟨u − v, j_q(x − y)⟩ ≥ α‖u − v‖^q, ∀u ∈ Ax, v ∈ Ay. In a Hilbert space H, such an A : C → H is called α-inverse-strongly monotone.
An operator A is said to be m-accretive if and only if A is accretive and (I + λA)C = X for all λ > 0. For each λ > 0, one defines the mapping J_λ^A : (I + λA)C → C by J_λ^A = (I + λA)^{−1}; such J_λ^A is called the resolvent of A.
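As a concrete instance (not taken from the paper): in X = ℝ, the subdifferential of the absolute value is m-accretive (equivalently, maximal monotone), and its resolvent (I + λ∂|·|)^{−1} is the familiar soft-thresholding map. The snippet below checks the defining inclusion v − x ∈ λ∂|x| numerically.

```python
# The resolvent J_lam^A = (I + lam*A)^{-1} in a concrete case: for
# A = subdifferential of |.| on R (m-accretive / maximal monotone),
# the resolvent is the soft-thresholding map.

def resolvent_abs(v, lam):
    # unique x solving v in x + lam * d|x|
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

# check the defining inclusion: v - x must lie in lam * d|x|,
# i.e. |v - x| <= lam, with |v - x| = lam exactly when x != 0
for v in (-2.0, -0.3, 0.0, 0.7, 5.0):
    x = resolvent_abs(v, lam=1.0)
    assert abs(v - x) <= 1.0 + 1e-12
    if x != 0.0:
        assert abs(abs(v - x) - 1.0) < 1e-12
```

The single-valuedness and nonexpansivity asserted in Lemma 2 below are visible here: soft-thresholding is 1-Lipschitz, and its fixed points are exactly A^{−1}0 = {0}.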
Lemma 2.
[37] The following statements hold:
(i) 
the resolvent identity: J_λ^A x = J_μ^A((μ/λ)x + (1 − μ/λ)J_λ^A x), ∀λ, μ > 0, x ∈ X;
(ii) 
if J_λ^A is the resolvent of A for λ > 0, then J_λ^A is a single-valued nonexpansive mapping with Fix(J_λ^A) = A^{−1}0, where A^{−1}0 = {x ∈ C : 0 ∈ Ax};
(iii) 
in a Hilbert space H, an operator A is maximal monotone if and only if it is m-accretive.
Let A : C → X be an α-inverse-strongly accretive mapping and B : C → 2^X an m-accretive operator. In the sequel, we use the notation T_λ := J_λ^B(I − λA) = (I + λB)^{−1}(I − λA), λ > 0. The following statements hold (see [38]):
(i)
Fix(T_λ) = (A + B)^{−1}0, ∀λ > 0;
(ii)
‖x − T_λ x‖ ≤ 2‖x − T_s x‖ for 0 < λ ≤ s and x ∈ X.
Proposition 4.
[38] Let X be a uniformly convex and q-uniformly smooth Banach space with 1 < q ≤ 2. Assume that A : C → X is an α-inverse-strongly accretive single-valued mapping and B : C → 2^X is an m-accretive operator. Then
‖T_λ x − T_λ y‖^q ≤ ‖x − y‖^q − λ(αq − λ^{q−1}κ_q)‖Ax − Ay‖^q − φ(‖(I − J_λ^B)(I − λA)x − (I − J_λ^B)(I − λA)y‖),
for all x, y ∈ B̃_r := {x ∈ C : ‖x‖ ≤ r}, where φ : ℝ⁺ → ℝ⁺ with φ(0) = 0 is a convex, strictly increasing and continuous function, λ and r are two positive real constants, κ_q is the q-uniform smoothness coefficient of X, and T_λ and J_λ^B are the resolvent operators defined above. In particular, if 0 < λ ≤ (qα/κ_q)^{1/(q−1)}, then T_λ is nonexpansive.
Lemma 3.
[39] Let X be uniformly smooth, let T be a nonexpansive self-mapping on C with Fix(T) ≠ ∅, and let f : C → C be a contraction. For each t ∈ (0, 1), let z_t ∈ C denote the unique fixed point of the contraction z ↦ tf(z) + (1 − t)Tz on C, i.e., z_t = tf(z_t) + (1 − t)Tz_t. Then {z_t} converges in norm, as t → 0, to a point x* ∈ Fix(T), which solves the variational inequality ⟨(I − f)x*, J(x* − p)⟩ ≤ 0, ∀p ∈ Fix(T).
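Lemma 3 can be illustrated numerically in ℝ with made-up data: T = P_{[0.2, 0.8]} is nonexpansive with Fix(T) = [0.2, 0.8], and f ≡ 0 is a (trivial) contraction. Each z_t is found by Picard iteration of the contraction z ↦ tf(z) + (1 − t)Tz, and as t → 0 the path z_t approaches x* = 0.2, the point of Fix(T) solving the limiting variational inequality.

```python
# Viscosity approximation path of Lemma 3 in R (toy data):
# T = P_[0.2, 0.8] (nonexpansive, Fix(T) = [0.2, 0.8]), f = 0.

def T(z):
    return max(0.2, min(0.8, z))

def f(z):
    return 0.0

def z_t(t, iters=300):
    z = 0.5
    for _ in range(iters):         # Picard iteration of the contraction
        z = t * f(z) + (1 - t) * T(z)
    return z

for t in (0.5, 0.1, 0.01):
    print(round(z_t(t), 4))        # prints 0.1, 0.18, 0.198 -> approaches 0.2
```

Here z_t = 0.2(1 − t) for small t, so the path visibly selects the fixed point of T closest to the "anchor" f, matching the variational inequality characterization.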
Lemma 4.
[25] Let X be a q-uniformly smooth Banach space. Suppose that Π_C is a sunny nonexpansive retraction from X onto C. Let the mapping A_i : C → X be α_i-inverse-strongly accretive of order q for i = 1, 2, and define G : C → C by Gx := Π_C(I − μ_1 A_1)Π_C(I − μ_2 A_2)x, ∀x ∈ C. If 0 < μ_i ≤ (qα_i/κ_q)^{1/(q−1)} for i = 1, 2, then G : C → C is nonexpansive. Moreover, for given (x*, y*) ∈ C × C, (x*, y*) is a solution of GSVI (1) if and only if x* = Π_C(y* − μ_1 A_1 y*), where y* = Π_C(x* − μ_2 A_2 x*); that is, x* = Gx*.
Lemma 5.
[40] Let {S_n}_{n=0}^∞ be a sequence of mappings on C. Suppose that ∑_{n=1}^∞ sup{‖S_n x − S_{n−1} x‖ : x ∈ C} < ∞. Then {S_n x} converges in norm to some point of C for each x ∈ C. Moreover, define the self-mapping S on C by Sx = lim_{n→∞} S_n x, ∀x ∈ C. Then lim_{n→∞} sup{‖S_n x − Sx‖ : x ∈ C} = 0.
Lemma 6.
[41] Let X be a Banach space. Let {α_n} be a real sequence in (0, 1) with lim sup_{n→∞} α_n < 1 and lim inf_{n→∞} α_n > 0, and let {x_n} and {y_n} be bounded sequences in X such that x_{n+1} = α_n x_n + (1 − α_n)y_n, ∀n ≥ 0, and lim sup_{n→∞}(‖y_{n+1} − y_n‖ − ‖x_{n+1} − x_n‖) ≤ 0. Then lim_{n→∞} ‖y_n − x_n‖ = 0.
Lemma 7.
[42] Let X be strictly convex, and let {T_n}_{n=0}^∞ be a sequence of nonexpansive mappings on C with ⋂_{n=0}^∞ Fix(T_n) ≠ ∅. Let {λ_n} be a sequence of positive numbers with ∑_{n=0}^∞ λ_n = 1. Then the mapping S on C defined by Sx = ∑_{n=0}^∞ λ_n T_n x for x ∈ C is well defined and nonexpansive, and Fix(S) = ⋂_{n=0}^∞ Fix(T_n).
Lemma 8.
[43] Let {a_n} be a sequence of nonnegative numbers satisfying a_{n+1} ≤ (1 − λ_n)a_n + λ_n γ_n, ∀n ≥ 1, where {λ_n} and {γ_n} are sequences of real numbers such that (a) {λ_n} ⊂ [0, 1] and ∑_{n=1}^∞ λ_n = ∞, and (b) ∑_{n=1}^∞ |λ_n γ_n| < ∞ or lim sup_{n→∞} γ_n ≤ 0. Then a_n → 0 as n → ∞.
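The conclusion of Lemma 8 is easy to see numerically; the sequences below (λ_n = γ_n = 1/(n + 1)) are chosen arbitrarily for the illustration and satisfy (a) and lim sup γ_n ≤ 0.

```python
# Numerical check of Lemma 8 with made-up data lam_n = g_n = 1/(n+1):
# sum lam_n = inf and g_n -> 0, so a_n must tend to 0 even when the
# recursion holds with equality (the worst case).

a = 1.0
for n in range(1, 200000):
    lam = 1.0 / (n + 1)
    g = 1.0 / (n + 1)
    a = (1 - lam) * a + lam * g    # a_{n+1} = (1 - lam_n) a_n + lam_n g_n
print(a < 1e-2)                    # prints True: a_n has decayed toward 0
```

The decay is slow here (roughly like (log n)/n), which is typical of viscosity-type strong convergence results driven by this lemma.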
Lemma 9.
[7,35] Let X be uniformly convex and let B_r = {x ∈ X : ‖x‖ ≤ r}, r > 0. Then there exists a continuous, strictly increasing and convex function g : [0, ∞) → [0, ∞) with g(0) = 0 such that
‖αx + βy + γz‖² + αβ g(‖x − y‖) ≤ α‖x‖² + β‖y‖² + γ‖z‖²
for all x, y, z ∈ B_r and α, β, γ ∈ [0, 1] with α + β + γ = 1.

3. Iterative Algorithms and Convergence Criteria

Throughout this section, X denotes a real Banach space with topological dual X*, and C is a nonempty, convex and closed subset of X. We are now ready to state and prove the main results of this paper.
Theorem 1.
Let X be a uniformly convex and q-uniformly smooth Banach space with 1 < q ≤ 2. Let Π_C be a sunny nonexpansive retraction from X onto C. Assume that the mappings A, A_i : C → X are α-inverse-strongly accretive of order q and α_i-inverse-strongly accretive of order q, respectively, for i = 1, 2. Let B : C → 2^X be an m-accretive operator, and let {S_n}_{n=0}^∞ be a countable family of single-valued nonexpansive self-mappings on C such that Ω = ⋂_{n=0}^∞ Fix(S_n) ∩ GSVI(C, A_1, A_2) ∩ (A + B)^{−1}0 ≠ ∅, where GSVI(C, A_1, A_2) is the fixed point set of G := Π_C(I − μ_1 A_1)Π_C(I − μ_2 A_2) with 0 < μ_i < (qα_i/κ_q)^{1/(q−1)} for i = 1, 2. Let f : C → C be a contraction with constant δ ∈ (0, 1). For an arbitrary initial x_0 ∈ C, let {x_n} be the sequence generated by
y_n = α_n f(y_n) + γ_n J_{λ_n}^B(I − λ_n A)(t_n x_n + (1 − t_n)y_n) + β_n x_n,
v_n = Π_C(I − μ_1 A_1)Π_C(y_n − μ_2 A_2 y_n),
x_{n+1} = (1 − δ_n)S_n v_n + δ_n x_n, ∀n ≥ 0,   (2)
where {λ_n} ⊂ (0, (qα/κ_q)^{1/(q−1)}) and {α_n}, {β_n}, {γ_n}, {δ_n}, {t_n} ⊂ (0, 1) satisfy:
(i) 
∑_{n=0}^∞ α_n = ∞, lim_{n→∞} α_n = 0 and α_n + β_n + γ_n = 1;
(ii) 
lim_{n→∞} |β_n − β_{n−1}| = lim_{n→∞} |γ_n − γ_{n−1}| = 0;
(iii) 
lim_{n→∞} |t_n − t_{n−1}| = 0 and lim inf_{n→∞} γ_n(1 − t_n) > 0;
(iv) 
lim inf_{n→∞} β_n γ_n > 0, lim sup_{n→∞} δ_n < 1 and lim inf_{n→∞} δ_n > 0;
(v) 
0 < λ̄ ≤ λ_n, ∀n ≥ 0, and lim_{n→∞} λ_n = λ < (qα/κ_q)^{1/(q−1)}.
Assume that ∑_{n=1}^∞ sup_{x∈D} ‖(S_n − S_{n−1})x‖ < ∞ for any bounded subset D of C, let S be the self-mapping on C defined by Sx = lim_{n→∞} S_n x, ∀x ∈ C, and suppose that Fix(S) = ⋂_{n=0}^∞ Fix(S_n). Then x_n → x* ∈ Ω, where x* is the unique solution of the variational inequality ⟨(I − f)x*, J(x* − p)⟩ ≤ 0, ∀p ∈ Ω.
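A deliberately degenerate toy run of scheme (2) in ℝ (viewed as a Hilbert space) may help fix ideas before the proof; it is not a faithful test of the theorem. All data are made up: S_n = I, A_1 = A_2 = 0 (so G reduces to Π_C = P_C), C = [0, 1], A(x) = x − 0.4, B = N_C and f(x) = x/2, so that Ω = {0.4}. The implicit equation defining y_n is solved at each step by an inner fixed-point loop.

```python
# Toy instance of scheme (2) in R with made-up, degenerate data:
# S_n = I, A1 = A2 = 0 (so G = P_C), C = [0, 1], A(x) = x - 0.4,
# B = N_C, f(x) = x/2; then Omega = {0.4}.

def P_C(x):
    return max(0.0, min(1.0, x))

def T(z, lam=0.5):                 # T_n = J_lam^B (I - lam*A) = P_C(I - lam*A)
    return P_C(z - lam * (z - 0.4))

def f(x):
    return 0.5 * x

x = 1.0
for n in range(1, 300):
    a = 1.0 / (n + 1)              # alpha_n -> 0, sum alpha_n = inf
    b = g = (1.0 - a) / 2.0        # beta_n + gamma_n = 1 - alpha_n
    t, d = 0.5, 0.5                # t_n, delta_n
    y = x
    for _ in range(80):            # solve the implicit equation for y_n
        y = a * f(y) + b * x + g * T(t * x + (1 - t) * y)
    v = P_C(y)                     # v_n = G(y_n) since A1 = A2 = 0
    x = d * x + (1 - d) * v        # x_{n+1}, with S_n = I
print(round(x, 2))                 # tends to the common solution 0.4
```

The inner loop converges because the map y ↦ α_n f(y) + β_n x_n + γ_n T_n(t_n x_n + (1 − t_n)y) is a strict contraction with constant at most α_n δ + γ_n(1 − t_n) < 1, exactly the estimate used at the start of the proof.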
Proof. 
Set u_n = Π_C(y_n − μ_2 A_2 y_n). It is not hard to see that our scheme can be rewritten as
y_n = α_n f(y_n) + β_n x_n + γ_n T_n(t_n x_n + (1 − t_n)y_n), x_{n+1} = (1 − δ_n)S_n Gy_n + δ_n x_n, ∀n ≥ 0,
where T_n := J_{λ_n}^B(I − λ_n A), ∀n ≥ 0. By condition (v) and Proposition 4, T_n : C → C is a nonexpansive mapping for each n ≥ 0. Since α_n + β_n + γ_n = 1, we know that
α_n δ + γ_n(1 − t_n) + β_n + γ_n t_n = α_n δ + γ_n + β_n = 1 − α_n(1 − δ), ∀n ≥ 0.
We first claim that the sequence {x_n} generated by (2) is well defined. Indeed, for each fixed x_n ∈ C, define a mapping F_n : C → C by F_n(x) = α_n f(x) + β_n x_n + γ_n T_n(t_n x_n + (1 − t_n)x), ∀x ∈ C. Then, for any x, y ∈ C,
‖F_n(x) − F_n(y)‖ ≤ α_n‖f(x) − f(y)‖ + γ_n‖T_n(t_n x_n + (1 − t_n)x) − T_n(t_n x_n + (1 − t_n)y)‖ ≤ α_n δ‖x − y‖ + γ_n(1 − t_n)‖x − y‖ = (α_n δ + γ_n(1 − t_n))‖x − y‖ ≤ (1 − α_n(1 − δ))‖x − y‖.
This implies that F_n is a strict contraction. Hence the Banach fixed-point theorem guarantees that F_n has a unique fixed point y_n ∈ C satisfying
y_n = α_n f(y_n) + β_n x_n + γ_n T_n(t_n x_n + (1 − t_n)y_n).
Next, we claim that {x_n} is bounded. Indeed, take an arbitrary fixed p ∈ Ω = ⋂_{n=0}^∞ Fix(S_n) ∩ GSVI(C, A_1, A_2) ∩ (A + B)^{−1}0. Then S_n p = p, Gp = p and T_n p = p. Moreover, by the nonexpansivity of T_n, we have
‖y_n − p‖ ≤ α_n(‖f(y_n) − f(p)‖ + ‖f(p) − p‖) + β_n‖x_n − p‖ + γ_n‖T_n(t_n x_n + (1 − t_n)y_n) − p‖ ≤ α_n(δ‖y_n − p‖ + ‖f(p) − p‖) + β_n‖x_n − p‖ + γ_n[t_n‖x_n − p‖ + (1 − t_n)‖y_n − p‖] = (α_n δ + γ_n(1 − t_n))‖y_n − p‖ + (β_n + γ_n t_n)‖x_n − p‖ + α_n‖f(p) − p‖,
which hence implies that
‖y_n − p‖ ≤ (β_n + γ_n t_n)/(1 − (α_n δ + γ_n(1 − t_n))) · ‖x_n − p‖ + α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖ = (1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖ + α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖.   (3)
Thus, from (2) and (3), we have
‖x_{n+1} − p‖ ≤ δ_n‖x_n − p‖ + (1 − δ_n)‖S_n Gy_n − p‖ ≤ δ_n‖x_n − p‖ + (1 − δ_n)‖y_n − p‖ ≤ δ_n‖x_n − p‖ + (1 − δ_n){(1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖ + α_n/(1 − (α_n δ + γ_n(1 − t_n)))‖f(p) − p‖} = [1 − (1 − δ_n)(1 − δ)α_n/(1 − (α_n δ + γ_n(1 − t_n)))]‖x_n − p‖ + (1 − δ_n)(1 − δ)α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖/(1 − δ) ≤ max{‖x_n − p‖, ‖f(p) − p‖/(1 − δ)}.
By induction, ‖x_n − p‖ ≤ max{‖x_0 − p‖, ‖f(p) − p‖/(1 − δ)} for all n ≥ 0, so {x_n} is bounded. Please note that G is nonexpansive thanks to Lemma 4. Using (3) and the nonexpansivity of I − μ_1 A_1, I − μ_2 A_2, S_n, T_n and G, it follows that {u_n}, {v_n}, {y_n}, {Gy_n}, {S_n v_n} and {T_n z_n} are bounded too, where u_n := Π_C(I − μ_2 A_2)y_n, v_n := Π_C(I − μ_1 A_1)u_n and z_n := t_n x_n + (1 − t_n)y_n for all n ≥ 0. Thanks to (2), we have
z_n = t_n(x_n − y_n) + y_n, z_{n−1} = t_{n−1}(x_{n−1} − y_{n−1}) + y_{n−1}, ∀n ≥ 1,
and
y_n = α_n f(y_n) + β_n x_n + γ_n T_n z_n, y_{n−1} = α_{n−1} f(y_{n−1}) + β_{n−1} x_{n−1} + γ_{n−1} T_{n−1} z_{n−1}, ∀n ≥ 1.
Simple calculations show that
z_n − z_{n−1} = (t_n − t_{n−1})(x_{n−1} − y_{n−1}) + (1 − t_n)(y_n − y_{n−1}) + t_n(x_n − x_{n−1}),
and
y_n − y_{n−1} = (α_n − α_{n−1})f(y_{n−1}) + α_n(f(y_n) − f(y_{n−1})) + β_n(x_n − x_{n−1}) + (β_n − β_{n−1})x_{n−1} + γ_n(T_n z_n − T_{n−1} z_{n−1}) + (γ_n − γ_{n−1})T_{n−1} z_{n−1}.   (4)
It follows from the resolvent identity that
‖T_n z_n − T_{n−1} z_{n−1}‖ ≤ ‖T_n z_n − T_n z_{n−1}‖ + ‖T_n z_{n−1} − T_{n−1} z_{n−1}‖
≤ ‖z_n − z_{n−1}‖ + ‖J_{λ_n}^B(I − λ_n A)z_{n−1} − J_{λ_{n−1}}^B(I − λ_n A)z_{n−1}‖ + ‖J_{λ_{n−1}}^B(I − λ_n A)z_{n−1} − J_{λ_{n−1}}^B(I − λ_{n−1} A)z_{n−1}‖
= ‖z_n − z_{n−1}‖ + ‖J_{λ_{n−1}}^B((λ_{n−1}/λ_n)I + (1 − λ_{n−1}/λ_n)J_{λ_n}^B)(I − λ_n A)z_{n−1} − J_{λ_{n−1}}^B(I − λ_n A)z_{n−1}‖ + ‖J_{λ_{n−1}}^B(I − λ_n A)z_{n−1} − J_{λ_{n−1}}^B(I − λ_{n−1} A)z_{n−1}‖
≤ ‖z_n − z_{n−1}‖ + |1 − λ_{n−1}/λ_n| ‖J_{λ_n}^B(I − λ_n A)z_{n−1} − (I − λ_n A)z_{n−1}‖ + |λ_n − λ_{n−1}| ‖Az_{n−1}‖
≤ t_n‖x_n − x_{n−1}‖ + |t_n − t_{n−1}|‖x_{n−1} − y_{n−1}‖ + (1 − t_n)‖y_n − y_{n−1}‖ + |λ_n − λ_{n−1}|M_1,   (5)
where
sup_{n≥1}{‖J_{λ_n}^B(I − λ_n A)z_{n−1} − (I − λ_n A)z_{n−1}‖/λ̄ + ‖Az_{n−1}‖} ≤ M_1
for some M_1 > 0. This, together with (4), implies that
‖y_n − y_{n−1}‖ ≤ α_n‖f(y_n) − f(y_{n−1})‖ + |α_n − α_{n−1}|‖f(y_{n−1})‖ + β_n‖x_n − x_{n−1}‖ + |β_n − β_{n−1}|‖x_{n−1}‖ + γ_n‖T_n z_n − T_{n−1} z_{n−1}‖ + |γ_n − γ_{n−1}|‖T_{n−1} z_{n−1}‖
≤ α_n δ‖y_n − y_{n−1}‖ + |α_n − α_{n−1}|‖f(y_{n−1})‖ + β_n‖x_n − x_{n−1}‖ + |β_n − β_{n−1}|‖x_{n−1}‖ + γ_n{t_n‖x_n − x_{n−1}‖ + |t_n − t_{n−1}|‖x_{n−1} − y_{n−1}‖ + (1 − t_n)‖y_n − y_{n−1}‖ + |λ_n − λ_{n−1}|M_1} + |γ_n − γ_{n−1}|‖T_{n−1} z_{n−1}‖
≤ (α_n δ + γ_n(1 − t_n))‖y_n − y_{n−1}‖ + (β_n + γ_n t_n)‖x_n − x_{n−1}‖ + (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2,
where
sup_{n≥0}{‖f(y_n)‖ + ‖x_n‖ + ‖y_n‖ + M_1 + ‖T_n z_n‖} ≤ M_2
for some M 2 > 0 . So it follows that
‖y_n − y_{n−1}‖ ≤ (β_n + γ_n t_n)/(1 − (α_n δ + γ_n(1 − t_n))) · ‖x_n − x_{n−1}‖ + 1/(1 − (α_n δ + γ_n(1 − t_n))) · (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2 = (1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − x_{n−1}‖ + 1/(1 − (α_n δ + γ_n(1 − t_n))) · (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2 ≤ ‖x_n − x_{n−1}‖ + 1/(1 − (α_n δ + γ_n(1 − t_n))) · (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2.
Hence we get
‖S_n Gy_n − S_{n−1} Gy_{n−1}‖ ≤ ‖S_n Gy_n − S_n Gy_{n−1}‖ + ‖S_n Gy_{n−1} − S_{n−1} Gy_{n−1}‖ ≤ ‖y_n − y_{n−1}‖ + ‖S_n Gy_{n−1} − S_{n−1} Gy_{n−1}‖ ≤ ‖x_n − x_{n−1}‖ + 1/(1 − (α_n δ + γ_n(1 − t_n))) · (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2 + ‖S_n Gy_{n−1} − S_{n−1} Gy_{n−1}‖.
Consequently,
‖S_n Gy_n − S_{n−1} Gy_{n−1}‖ − ‖x_n − x_{n−1}‖ ≤ 1/(1 − (α_n δ + γ_n(1 − t_n))) · (|α_n − α_{n−1}| + |β_n − β_{n−1}| + |γ_n − γ_{n−1}| + |t_n − t_{n−1}| + |λ_n − λ_{n−1}|)M_2 + ‖S_n Gy_{n−1} − S_{n−1} Gy_{n−1}‖.
Since ∑_{n=1}^∞ sup_{x∈D}‖(S_n − S_{n−1})x‖ < ∞ for the bounded subset D = {Gy_n : n ≥ 0} of C (by assumption), we know that lim_{n→∞}‖(S_n G − S_{n−1} G)y_{n−1}‖ = 0. Please note that lim_{n→∞} α_n = 0, lim_{n→∞} λ_n = λ and lim inf_{n→∞} γ_n(1 − t_n) > 0. Thus, from |β_n − β_{n−1}| → 0, |γ_n − γ_{n−1}| → 0 and |t_n − t_{n−1}| → 0 as n → ∞ (conditions (ii) and (iii)), we get
lim sup_{n→∞}(‖S_n Gy_n − S_{n−1} Gy_{n−1}‖ − ‖x_n − x_{n−1}‖) ≤ 0.
So it follows from condition (iv) and Lemma 6 that lim_{n→∞}‖S_n Gy_n − x_n‖ = 0. Hence we obtain
lim_{n→∞}‖x_{n+1} − x_n‖ = lim_{n→∞}(1 − δ_n)‖S_n Gy_n − x_n‖ = 0.   (6)
Let p̄ := Π_C(I − μ_2 A_2)p. Recall that u_n = Π_C(I − μ_2 A_2)y_n and v_n = Π_C(I − μ_1 A_1)u_n, so v_n = Gy_n. From Proposition 4 (see also Lemma 2.13 in [25]), we have
‖u_n − p̄‖^q = ‖Π_C(I − μ_2 A_2)y_n − Π_C(I − μ_2 A_2)p‖^q ≤ ‖(I − μ_2 A_2)y_n − (I − μ_2 A_2)p‖^q ≤ ‖y_n − p‖^q − μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q,   (7)
and
‖v_n − p‖^q = ‖Π_C(I − μ_1 A_1)u_n − Π_C(I − μ_1 A_1)p̄‖^q ≤ ‖(I − μ_1 A_1)u_n − (I − μ_1 A_1)p̄‖^q ≤ ‖u_n − p̄‖^q − μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q.   (8)
Substituting (7) into (8), we obtain
‖v_n − p‖^q ≤ ‖y_n − p‖^q − μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q − μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q.   (9)
According to Proposition 4 and the convexity of ‖·‖^q, we obtain from (2) that ‖T_n z_n − p‖^q ≤ ‖z_n − p‖^q ≤ t_n‖x_n − p‖^q + (1 − t_n)‖y_n − p‖^q, and hence
‖y_n − p‖^q = ‖α_n(f(y_n) − f(p)) + β_n(x_n − p) + γ_n(T_n z_n − p) + α_n(f(p) − p)‖^q ≤ ‖α_n(f(y_n) − f(p)) + β_n(x_n − p) + γ_n(T_n z_n − p)‖^q + qα_n⟨f(p) − p, J_q(y_n − p)⟩ ≤ α_n‖f(y_n) − f(p)‖^q + β_n‖x_n − p‖^q + γ_n‖T_n z_n − p‖^q + qα_n⟨f(p) − p, J_q(y_n − p)⟩ ≤ α_n δ‖y_n − p‖^q + β_n‖x_n − p‖^q + γ_n[t_n‖x_n − p‖^q + (1 − t_n)‖y_n − p‖^q] + qα_n‖f(p) − p‖‖y_n − p‖^{q−1},
which immediately yields
‖y_n − p‖^q ≤ (1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖^q + qα_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖^{q−1}.
This, together with the convexity of ‖·‖^q and (9), leads to
‖x_{n+1} − p‖^q = ‖δ_n(x_n − p) + (1 − δ_n)(S_n Gy_n − p)‖^q ≤ δ_n‖x_n − p‖^q + (1 − δ_n)‖S_n v_n − p‖^q
≤ δ_n‖x_n − p‖^q + (1 − δ_n){‖y_n − p‖^q − μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q − μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q}
≤ δ_n‖x_n − p‖^q + (1 − δ_n){(1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖^q + qα_n‖f(p) − p‖‖y_n − p‖^{q−1}/(1 − (α_n δ + γ_n(1 − t_n))) − μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q − μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q}
= (1 − (1 − δ_n)(1 − δ)α_n/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖^q + q(1 − δ_n)α_n‖f(p) − p‖‖y_n − p‖^{q−1}/(1 − (α_n δ + γ_n(1 − t_n))) − (1 − δ_n)[μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q + μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q]
≤ ‖x_n − p‖^q − (1 − δ_n)[μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q + μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q] + α_n M_3,   (10)
where
sup_{n≥0}{q(1 − δ_n)/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖^{q−1}} ≤ M_3
for some M 3 > 0 . So it follows from (10) and Proposition 2 that
(1 − δ_n)[μ_2(qα_2 − κ_q μ_2^{q−1})‖A_2 y_n − A_2 p‖^q + μ_1(qα_1 − κ_q μ_1^{q−1})‖A_1 u_n − A_1 p̄‖^q] ≤ ‖x_n − p‖^q − ‖x_{n+1} − p‖^q + α_n M_3 ≤ q‖x_n − x_{n+1}‖‖x_{n+1} − p‖^{q−1} + κ_q‖x_n − x_{n+1}‖^q + α_n M_3.
Since 0 < μ_i < (qα_i/κ_q)^{1/(q−1)} for i = 1, 2, from conditions (i), (iv) and (6) we get
lim_{n→∞}‖A_2 y_n − A_2 p‖ = 0 and lim_{n→∞}‖A_1 u_n − A_1 p̄‖ = 0.   (11)
Using Proposition 1, we have
‖u_n − p̄‖² = ‖Π_C(I − μ_2 A_2)y_n − Π_C(I − μ_2 A_2)p‖² ≤ ⟨(I − μ_2 A_2)y_n − (I − μ_2 A_2)p, J(u_n − p̄)⟩ = ⟨y_n − p, J(u_n − p̄)⟩ + μ_2⟨A_2 p − A_2 y_n, J(u_n − p̄)⟩ ≤ (1/2)[‖y_n − p‖² + ‖u_n − p̄‖² − g_1(‖y_n − u_n − (p − p̄)‖)] + μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖,
where g 1 is given by Proposition 1. This yields
‖u_n − p̄‖² ≤ ‖y_n − p‖² − g_1(‖y_n − u_n − (p − p̄)‖) + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖.   (12)
In the same way, we derive
‖v_n − p‖² = ‖Π_C(I − μ_1 A_1)u_n − Π_C(I − μ_1 A_1)p̄‖² ≤ ⟨(I − μ_1 A_1)u_n − (I − μ_1 A_1)p̄, J(v_n − p)⟩ = ⟨u_n − p̄, J(v_n − p)⟩ + μ_1⟨A_1 p̄ − A_1 u_n, J(v_n − p)⟩ ≤ (1/2)[‖u_n − p̄‖² + ‖v_n − p‖² − g_2(‖u_n − v_n + (p − p̄)‖)] + μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖,
where g 2 is given by Proposition 1. This yields
‖v_n − p‖² ≤ ‖u_n − p̄‖² − g_2(‖u_n − v_n + (p − p̄)‖) + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖.   (13)
Substituting (12) into (13), we get
‖v_n − p‖² ≤ ‖y_n − p‖² − g_1(‖y_n − u_n − (p − p̄)‖) − g_2(‖u_n − v_n + (p − p̄)‖) + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖.   (14)
Please note that ‖·‖² is convex. Using Proposition 1 and Lemmas 1 and 9, one concludes that
‖y_n − p‖² ≤ ‖α_n(f(y_n) − f(p)) + β_n(x_n − p) + γ_n(T_n z_n − p)‖² + 2α_n⟨f(p) − p, J(y_n − p)⟩ ≤ α_n‖f(y_n) − f(p)‖² + β_n‖x_n − p‖² + γ_n‖T_n z_n − p‖² − β_n γ_n g_3(‖x_n − T_n z_n‖) + 2α_n⟨f(p) − p, J(y_n − p)⟩ ≤ α_n δ‖y_n − p‖² + β_n‖x_n − p‖² + γ_n(t_n‖x_n − p‖² + (1 − t_n)‖y_n − p‖²) + 2α_n‖f(p) − p‖‖y_n − p‖ − β_n γ_n g_3(‖x_n − T_n z_n‖),
which immediately yields
‖y_n − p‖² ≤ (1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖² + 2α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖ − β_n γ_n/(1 − (α_n δ + γ_n(1 − t_n))) · g_3(‖x_n − T_n z_n‖).
This together with (2) and (14) leads to
‖x_{n+1} − p‖² = ‖δ_n(x_n − p) + (1 − δ_n)(S_n Gy_n − p)‖² ≤ δ_n‖x_n − p‖² + (1 − δ_n)‖S_n v_n − p‖²
≤ δ_n‖x_n − p‖² + (1 − δ_n){‖y_n − p‖² − g_1(‖y_n − u_n − (p − p̄)‖) − g_2(‖u_n − v_n + (p − p̄)‖) + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖}
≤ δ_n‖x_n − p‖² + (1 − δ_n){(1 − α_n(1 − δ)/(1 − (α_n δ + γ_n(1 − t_n))))‖x_n − p‖² + 2α_n/(1 − (α_n δ + γ_n(1 − t_n)))‖f(p) − p‖‖y_n − p‖ − β_n γ_n/(1 − (α_n δ + γ_n(1 − t_n)))g_3(‖x_n − T_n z_n‖) − g_1(‖y_n − u_n − (p − p̄)‖) − g_2(‖u_n − v_n + (p − p̄)‖) + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖}
≤ ‖x_n − p‖² − (1 − δ_n)[β_n γ_n g_3(‖x_n − T_n z_n‖)/(1 − (α_n δ + γ_n(1 − t_n))) + g_1(‖y_n − u_n − (p − p̄)‖) + g_2(‖u_n − v_n + (p − p̄)‖)] + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖ + 2α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖,
which immediately yields
(1 − δ_n)[β_n γ_n g_3(‖x_n − T_n z_n‖)/(1 − (α_n δ + γ_n(1 − t_n))) + g_1(‖y_n − u_n − (p − p̄)‖) + g_2(‖u_n − v_n + (p − p̄)‖)] ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖ + 2α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖ ≤ (‖x_n − p‖ + ‖x_{n+1} − p‖)‖x_n − x_{n+1}‖ + 2μ_2‖A_2 p − A_2 y_n‖‖u_n − p̄‖ + 2μ_1‖A_1 p̄ − A_1 u_n‖‖v_n − p‖ + 2α_n/(1 − (α_n δ + γ_n(1 − t_n))) · ‖f(p) − p‖‖y_n − p‖.
Using (6) and (11), from $\liminf_{n\to\infty}\beta_n\gamma_n>0$ and $\liminf_{n\to\infty}(1-\delta_n)>0$, we have
\[
\lim_{n\to\infty}g_1(\|y_n-u_n-(p-\bar p)\|)=\lim_{n\to\infty}g_2(\|u_n-v_n+(p-\bar p)\|)=\lim_{n\to\infty}g_3(\|x_n-T_nz_n\|)=0.
\]
Using the properties of g 1 , g 2 and g 3 , we deduce that
\[
\lim_{n\to\infty}\|y_n-u_n-(p-\bar p)\|=\lim_{n\to\infty}\|u_n-v_n+(p-\bar p)\|=\lim_{n\to\infty}\|x_n-T_nz_n\|=0.
\]
From (15) we get
\[
\|y_n-Gy_n\|=\|y_n-v_n\|\le\|y_n-u_n-(p-\bar p)\|+\|u_n-v_n+(p-\bar p)\|\to 0\quad(n\to\infty).
\]
Meanwhile, again from (2) we have $y_n-x_n=\alpha_n(f(y_n)-x_n)+\gamma_n(T_nz_n-x_n)$. Hence from (15) we get $\|y_n-x_n\|\le\alpha_n\|f(y_n)-x_n\|+\gamma_n\|T_nz_n-x_n\|\to 0$ as $n\to\infty$. This, together with (16), implies that
\[
\|x_n-Gx_n\|\le\|x_n-y_n\|+\|y_n-Gy_n\|+\|Gy_n-Gx_n\|\le\|y_n-Gy_n\|+2\|x_n-y_n\|\to 0\quad(n\to\infty).
\]
Next, one claims that $\|x_n-Sx_n\|\to 0$, $\|x_n-T_\lambda x_n\|\to 0$ and $\|x_n-Wx_n\|\to 0$ as $n\to\infty$, where $Sx=\lim_{n\to\infty}S_nx$ for $x\in C$, $T_\lambda=J_\lambda^B(I-\lambda A)$ and $Wx=\theta_1Sx+\theta_2Gx+\theta_3T_\lambda x$ for $x\in C$, with constants $\theta_1,\theta_2,\theta_3\in(0,1)$ satisfying $\theta_1+\theta_2+\theta_3=1$. Indeed, since $x_{n+1}=\delta_nx_n+(1-\delta_n)S_nGy_n$ gives $\|S_nGy_n-x_n\|=\frac{1}{1-\delta_n}\|x_{n+1}-x_n\|$, we deduce from (17), $\liminf_{n\to\infty}(1-\delta_n)>0$ and $\|x_n-y_n\|\to 0$ that
\[
\|S_nx_n-x_n\|\le\|S_nx_n-S_nGx_n\|+\|S_nGx_n-S_nGy_n\|+\|S_nGy_n-x_n\|\le\|x_n-Gx_n\|+\|x_n-y_n\|+\frac{1}{1-\delta_n}\|x_{n+1}-x_n\|\to 0\quad(n\to\infty),
\]
which implies that
\[
\|Sx_n-x_n\|\le\|Sx_n-S_nx_n\|+\|S_nx_n-x_n\|\to 0\quad(n\to\infty).
\]
Furthermore, using arguments similar to those of (5), we obtain
\[
\|T_nz_n-T_\lambda z_n\|\le\Bigl|1-\frac{\lambda}{\lambda_n}\Bigr|\,\|J_{\lambda_n}^B(I-\lambda_nA)z_n-(I-\lambda_nA)z_n\|+|\lambda_n-\lambda|\,\|Az_n\|=\Bigl|1-\frac{\lambda}{\lambda_n}\Bigr|\,\|T_nz_n-(I-\lambda_nA)z_n\|+|\lambda_n-\lambda|\,\|Az_n\|.
\]
Since lim n λ n = λ and the sequences { z n } , { T n z n } , { A z n } are bounded, we get
\[
\lim_{n\to\infty}\|T_nz_n-T_\lambda z_n\|=0.
\]
Taking into account condition (v), i.e., $0<\bar\lambda\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n=\lambda<(\frac{q\alpha}{\kappa_q})^{\frac{1}{q-1}}$, we know that $0<\bar\lambda\le\lambda<(\frac{q\alpha}{\kappa_q})^{\frac{1}{q-1}}$. So it follows from Proposition 4 that $\mathrm{Fix}(T_\lambda)=(A+B)^{-1}0$ and that $T_\lambda:C\to C$ is nonexpansive. Therefore, we deduce from (15), (19) and $\|x_n-y_n\|\to 0$ that
\[
\|T_\lambda x_n-x_n\|\le\|T_\lambda x_n-T_\lambda z_n\|+\|T_\lambda z_n-T_nz_n\|+\|T_nz_n-x_n\|\le\|x_n-z_n\|+\|T_\lambda z_n-T_nz_n\|+\|T_nz_n-x_n\|\le\|x_n-y_n\|+\|T_\lambda z_n-T_nz_n\|+\|T_nz_n-x_n\|\to 0\quad(n\to\infty).
\]
We now define the mapping $Wx=\theta_1Sx+\theta_2Gx+\theta_3T_\lambda x$ for $x\in C$, with constants $\theta_1,\theta_2,\theta_3\in(0,1)$ satisfying $\theta_1+\theta_2+\theta_3=1$. By Lemma 7, we know that $\mathrm{Fix}(W)=\mathrm{Fix}(S)\cap\mathrm{Fix}(G)\cap\mathrm{Fix}(T_\lambda)=\Omega$. One observes that
\[
\|x_n-Wx_n\|=\|\theta_1(x_n-Sx_n)+\theta_2(x_n-Gx_n)+\theta_3(x_n-T_\lambda x_n)\|\le\theta_1\|x_n-Sx_n\|+\theta_2\|x_n-Gx_n\|+\theta_3\|x_n-T_\lambda x_n\|.
\]
From (17), (18), (20) and (21), we get
\[
\lim_{n\to\infty}\|x_n-Wx_n\|=0.
\]
The next step is to claim
\[
\limsup_{n\to\infty}\langle f(x^*)-x^*,\,J(x_n-x^*)\rangle\le 0,
\]
with $x^*=\mathrm{s}\text{-}\lim_{t\to 0}x_t$, where $x_t$ is the fixed point of $x\mapsto tf(x)+(1-t)Wx$ for each $t\in(0,1)$. Please note that the existence of $x^*$ (with $x^*\in\mathrm{Fix}(W)$) follows from Lemma 3. Indeed, the Banach contraction mapping principle guarantees that for each $t\in(0,1)$, $x_t$ satisfies $x_t=tf(x_t)+(1-t)Wx_t$. Hence we have $x_t-x_n=(1-t)(Wx_t-x_n)+t(f(x_t)-x_n)$. Using the known subdifferential inequality (see [7]), we conclude that
\[
\begin{aligned}
\|x_n-x_t\|^2 &\le 2t\langle x_n-f(x_t),J(x_n-x_t)\rangle+(1-t)^2\|Wx_t-x_n\|^2\\
&\le 2t\langle x_n-f(x_t),J(x_n-x_t)\rangle+(1-t)^2(\|Wx_n-x_n\|+\|Wx_t-Wx_n\|)^2\\
&\le 2t\langle x_n-f(x_t),J(x_n-x_t)\rangle+(1-t)^2(\|x_n-x_t\|+\|x_n-Wx_n\|)^2\\
&= (t^2-2t+1)\|x_n-x_t\|^2+2t\langle x_t-f(x_t),J(x_n-x_t)\rangle+f_n(t)+2t\|x_n-x_t\|^2,
\end{aligned}
\]
where
\[
f_n(t)=(1-t)^2(\|x_n-Wx_n\|+2\|x_n-x_t\|)\|x_n-Wx_n\|\to 0\quad(n\to\infty).
\]
It follows from (23) that
\[
\langle x_t-f(x_t),J(x_t-x_n)\rangle\le\frac{t}{2}\|x_t-x_n\|^2+\frac{1}{2t}f_n(t).
\]
Using both (25) and (24), we derive
\[
\limsup_{n\to\infty}\langle x_t-f(x_t),J(x_t-x_n)\rangle\le\frac{t}{2}M_4,
\]
where $M_4>0$ is a constant such that $\sup\{\|x_t-x_n\|^2:t\in(0,1)\ \text{and}\ n\ge 0\}\le M_4$. Taking $t\to 0$ in (26), we have
\[
\limsup_{t\to 0}\limsup_{n\to\infty}\langle f(x_t)-x_t,J(x_n-x_t)\rangle\le 0.
\]
On the other hand, we have
\[
\begin{aligned}
\langle f(x^*)-x^*,J(x_n-x^*)\rangle &= \langle f(x^*)-x^*,J(x_n-x^*)\rangle-\langle f(x^*)-x^*,J(x_n-x_t)\rangle\\
&\quad+\langle f(x^*)-x^*,J(x_n-x_t)\rangle-\langle f(x^*)-x_t,J(x_n-x_t)\rangle\\
&\quad+\langle f(x^*)-x_t,J(x_n-x_t)\rangle-\langle f(x_t)-x_t,J(x_n-x_t)\rangle+\langle f(x_t)-x_t,J(x_n-x_t)\rangle\\
&= \langle f(x^*)-x^*,J(x_n-x^*)-J(x_n-x_t)\rangle+\langle x_t-x^*,J(x_n-x_t)\rangle\\
&\quad+\langle f(x^*)-f(x_t),J(x_n-x_t)\rangle+\langle f(x_t)-x_t,J(x_n-x_t)\rangle.
\end{aligned}
\]
So it follows that
\[
\begin{aligned}
\limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)\rangle &\le \limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)-J(x_n-x_t)\rangle\\
&\quad+(1+\delta)\|x_t-x^*\|\limsup_{n\to\infty}\|x_n-x_t\|+\limsup_{n\to\infty}\langle f(x_t)-x_t,J(x_n-x_t)\rangle.
\end{aligned}
\]
Taking into account that $x_t\to x^*$ as $t\to 0$ yields
\[
\limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)\rangle=\limsup_{t\to 0}\limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)\rangle\le\limsup_{t\to 0}\limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)-J(x_n-x_t)\rangle.
\]
Using the norm-to-norm uniform continuity of the duality mapping $J$ on bounded subsets of $X$ yields (22). Please note that $\|x_n-y_n\|\to 0$ implies $\|J(y_n-x^*)-J(x_n-x^*)\|\to 0$. Thus, we conclude from (22) that
\[
\limsup_{n\to\infty}\langle f(x^*)-x^*,J(y_n-x^*)\rangle=\limsup_{n\to\infty}\langle f(x^*)-x^*,J(x_n-x^*)\rangle\le 0.
\]
One observes that
\[
\begin{aligned}
\|y_n-x^*\|^2 &= \|\alpha_n(f(y_n)-f(x^*))+\beta_n(x_n-x^*)+\gamma_n(T_nz_n-x^*)+\alpha_n(f(x^*)-x^*)\|^2\\
&\le \alpha_n\|f(y_n)-f(x^*)\|^2+\beta_n\|x_n-x^*\|^2+\gamma_n\|z_n-x^*\|^2+2\alpha_n\langle f(x^*)-x^*,J(y_n-x^*)\rangle\\
&\le \alpha_n\delta\|y_n-x^*\|^2+\beta_n\|x_n-x^*\|^2+\gamma_n\bigl(t_n\|x_n-x^*\|^2+(1-t_n)\|y_n-x^*\|^2\bigr)\\
&\quad+2\alpha_n\langle f(x^*)-x^*,J(y_n-x^*)\rangle,
\end{aligned}
\]
which hence yields
\[
\|y_n-x^*\|^2\le\Bigl(1-\frac{\alpha_n(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\Bigr)\|x_n-x^*\|^2+\frac{2\alpha_n}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\langle f(x^*)-x^*,J(y_n-x^*)\rangle.
\]
By the convexity of $\|\cdot\|^2$, the nonexpansivity of $S_nG$ and (28), we get
\[
\begin{aligned}
\|x_{n+1}-x^*\|^2 &= \|\delta_n(x_n-x^*)+(1-\delta_n)(S_nGy_n-x^*)\|^2\\
&\le \delta_n\|x_n-x^*\|^2+(1-\delta_n)\Bigl\{\Bigl(1-\frac{\alpha_n(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\Bigr)\|x_n-x^*\|^2\\
&\quad+\frac{2\alpha_n}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\langle f(x^*)-x^*,J(y_n-x^*)\rangle\Bigr\}\\
&= \Bigl(1-\frac{\alpha_n(1-\delta_n)(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\Bigr)\|x_n-x^*\|^2+\frac{\alpha_n(1-\delta_n)(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\cdot\frac{2\langle f(x^*)-x^*,J(y_n-x^*)\rangle}{1-\delta}.
\end{aligned}
\]
Since $\liminf_{n\to\infty}\frac{(1-\delta_n)(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}>0$, $\{\frac{\alpha_n(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\}\subset(0,1)$ and $\sum_{n=0}^\infty\alpha_n=\infty$, we know that $\{\frac{\alpha_n(1-\delta_n)(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}\}\subset(0,1)$ and $\sum_{n=0}^\infty\frac{\alpha_n(1-\delta_n)(1-\delta)}{1-(\alpha_n\delta+\gamma_n(1-t_n))}=\infty$. Using (27) and Lemma 8, we conclude from (29) that $\|x_n-x^*\|\to 0$ as $n\to\infty$. The proof is now complete.
Remark 1.
Our results extend and improve the related results of Ceng et al. [16] and Song and Ceng [25] in the following ways:
(i) The problem of finding an element of $\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{GSVI}(C,A_1,A_2)$ in [16], Theorem 3.1, is extended to our problem of finding an element of $\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{GSVI}(C,A_1,A_2)\cap(A+B)^{-1}0$, where $(A+B)^{-1}0$ is the solution set of the variational inclusion $0\in(A+B)x$. The implicit (two-step) relaxed extragradient method in [16], Theorem 3.1, is extended to the generalized Mann viscosity implicit rule in Theorem 1. That is, the two iterative steps $y_n=(1-\alpha_n)Gx_n+\alpha_nf(y_n)$ and $x_{n+1}=(1-\beta_n)S_ny_n+\beta_nx_n$ in [16], Theorem 3.1, are refined to our two iterative steps $y_n=\alpha_nf(y_n)+\beta_nx_n+\gamma_nT_n(t_nx_n+(1-t_n)y_n)$ and $x_{n+1}=\delta_nx_n+(1-\delta_n)S_nGy_n$, where $T_n=J_{\lambda_n}^B(I-\lambda_nA)$. In addition, the uniformly convex and 2-uniformly smooth structure in [16], Theorem 3.1, is generalized to a uniformly convex and q-uniformly smooth structure for $1<q\le 2$.
(ii) The problem of finding an element of $\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{GSVI}(C,A_1,A_2)$ in [25], Theorem 3.1, is generalized to our problem of finding an element of $\bigcap_{n=0}^\infty\mathrm{Fix}(S_n)\cap\mathrm{GSVI}(C,A_1,A_2)\cap(A+B)^{-1}0$, where $(A+B)^{-1}0$ is the solution set of the variational inclusion $0\in(A+B)x$. The modified relaxed extragradient method in [25], Theorem 3.1, is extended to the generalized Mann viscosity implicit rule in Theorem 1. That is, the two iterative steps $y_n=(1-\beta_n)x_n+\beta_nGx_n$ and $x_{n+1}=\Pi_C[\alpha_n\gamma f(x_n)+\gamma_nx_n+((1-\gamma_n)I-\alpha_n\rho F)S_ny_n]$ in [25], Theorem 3.1, are extended to our two iterative steps $y_n=\alpha_nf(y_n)+\beta_nx_n+\gamma_nT_n(t_nx_n+(1-t_n)y_n)$ and $x_{n+1}=\delta_nx_n+(1-\delta_n)S_nGy_n$, where $T_n=J_{\lambda_n}^B(I-\lambda_nA)$.
Next, Theorem 1 is applied to solve the GSVI, VIP and FPP in an illustrative example. Let $C=[-2,2]$ and $H=\mathbb{R}$ with the inner product $\langle a,b\rangle=ab$ and induced norm $\|\cdot\|=|\cdot|$. The initial point $x_0$ is randomly chosen in $C$. We define $f(x)=\frac12x$, $Sx=\sin x$ and $A_1x=A_2x=Ax=\frac23x+\frac14\sin x$ for all $x\in C$. Then $f$ is a $\frac12$-contraction, $S$ is a nonexpansive self-mapping on $C$ with $\mathrm{Fix}(S)=\{0\}$, and $A$ is an $\frac{11}{12}$-Lipschitz and $\frac{5}{12}$-strongly monotone mapping. Indeed, we observe that
\[
\|Ax-Ay\|\le\frac23\|x-y\|+\frac14\|\sin x-\sin y\|\le\Bigl(\frac23+\frac14\Bigr)\|x-y\|=\frac{11}{12}\|x-y\|,
\]
and
\[
\langle Ax-Ay,x-y\rangle=\frac23\langle x-y,x-y\rangle+\frac14\langle\sin x-\sin y,x-y\rangle\ge\Bigl(\frac23-\frac14\Bigr)\|x-y\|^2=\frac{5}{12}\|x-y\|^2.
\]
This ensures that $\langle Ax-Ay,x-y\rangle\ge\frac{60}{121}\|Ax-Ay\|^2$; indeed, since $\|Ax-Ay\|\le\frac{11}{12}\|x-y\|$, we get $\langle Ax-Ay,x-y\rangle\ge\frac{5}{12}\|x-y\|^2\ge\frac{5}{12}\cdot(\frac{12}{11})^2\|Ax-Ay\|^2=\frac{60}{121}\|Ax-Ay\|^2$. So it follows that $A_1=A_2=A$ is $\frac{60}{121}$-inverse-strongly monotone, and hence $\alpha_1=\alpha_2=\alpha=\frac{60}{121}$. Therefore, it is easy to see that $\Omega=\mathrm{Fix}(S)\cap\mathrm{GSVI}(C,A_1,A_2)\cap\mathrm{VI}(C,A)=\{0\}$. Let $\mu_1=\mu_2=\alpha=\frac{60}{121}$. Putting $\alpha_n=\frac{1}{2(n+2)}$, $\beta_n=\frac12-\frac{1}{2(n+2)}$, $\gamma_n=\frac12$, $\delta_n=\frac12$, $t_n=\frac12$ and $\lambda_n=\bar\lambda=\alpha=\frac{60}{121}$, we know that conditions (i)–(v) on the parameter sequences $\{\lambda_n\}\subset(0,2\alpha)$ and $\{\alpha_n\},\{\beta_n\},\{\gamma_n\},\{\delta_n\},\{t_n\}\subset(0,1)$ are all satisfied. In this case, the iterative scheme in Theorem 1 can be rewritten as follows:
\[
\begin{cases}
y_n=\dfrac{1}{2(n+2)}\cdot\dfrac12y_n+\Bigl(\dfrac12-\dfrac{1}{2(n+2)}\Bigr)x_n+\dfrac12P_C\Bigl(I-\dfrac{60}{121}A\Bigr)\Bigl(\dfrac{x_n+y_n}{2}\Bigr),\\[2mm]
x_{n+1}=\dfrac{x_n+SGy_n}{2},
\end{cases}
\]
where $G:=P_C(I-\frac{60}{121}A)P_C(I-\frac{60}{121}A)$. Then, by Theorem 1, we know that $\{x_n\}$ converges to $0\in\Omega=\mathrm{Fix}(S)\cap\mathrm{GSVI}(C,A_1,A_2)\cap\mathrm{VI}(C,A)$.
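The rewritten scheme above can be run numerically. The following Python sketch (our own illustration, not part of the paper: the inner fixed-point loop used to resolve the implicit $y_n$-step, the tolerance and the iteration counts are arbitrary choices; the inner loop converges because the right-hand side is a contraction in $y_n$) implements it on $C=[-2,2]$:

```python
import math

def proj_C(x, lo=-2.0, hi=2.0):
    # metric projection onto C = [-2, 2]
    return max(lo, min(hi, x))

def A(x):
    # A x = (2/3) x + (1/4) sin x
    return (2.0 / 3.0) * x + 0.25 * math.sin(x)

def forward_step(x, lam=60.0 / 121.0):
    # P_C (I - lam * A) x
    return proj_C(x - lam * A(x))

def G(x):
    # G = P_C(I - mu1 A1) P_C(I - mu2 A2) with mu1 = mu2 = 60/121
    return forward_step(forward_step(x))

def solve_implicit(xn, an, bn, tol=1e-12, max_inner=200):
    # the step y = an*(y/2) + bn*xn + (1/2)*P_C(I - lam A)((xn + y)/2)
    # is implicit in y; plain fixed-point iteration converges since the
    # right-hand side is a contraction in y (Lipschitz constant <= an/2 + 1/4)
    y = xn
    for _ in range(max_inner):
        y_new = an * 0.5 * y + bn * xn + 0.5 * forward_step(0.5 * (xn + y))
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

def iterate(x0, steps=200):
    x = x0
    for n in range(steps):
        an = 1.0 / (2 * (n + 2))
        bn = 0.5 - an
        y = solve_implicit(x, an, bn)
        x = 0.5 * (x + math.sin(G(y)))  # x_{n+1} = (x_n + S G y_n)/2 with S = sin
    return x

print(iterate(1.5))  # converges to 0, the unique element of Omega
```

Running the loop from any starting point in $C$ drives the iterates toward $0$, in agreement with the conclusion of Theorem 1 for this example.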

4. Applications

In this section, we apply the main result of this paper to solve some important optimization problems in the setting of Hilbert spaces.

4.1. Variational Inequality Problems

Let $A:C\to H$ be a single-valued nonself mapping. Recall the monotone variational inequality of finding $x^*\in C$ with $\langle Ax^*,x-x^*\rangle\ge 0$ for all $x\in C$, whose solution set is denoted by $\mathrm{VI}(C,A)$. Let $I_C$ be the indicator function of $C$, given by
\[
I_C(y)=\begin{cases}0, & y\in C,\\ +\infty, & y\notin C.\end{cases}
\]
We denote by $N_C(u)$ the normal cone of $C$ at $u$, i.e., $N_C(u)=\{w\in H:\langle w,v-u\rangle\le 0,\ \forall v\in C\}$. It is known that $I_C$ is a proper, convex and lower semicontinuous function and that its subdifferential $\partial I_C$ is maximally monotone. For $\lambda>0$, the resolvent of $\partial I_C$ is denoted by $J_\lambda^{\partial I_C}$, i.e., $J_\lambda^{\partial I_C}=(I+\lambda\partial I_C)^{-1}$. Please note that
\[
\partial I_C(u)=\{w\in H: I_C(u)+\langle w,v-u\rangle\le I_C(v),\ \forall v\in H\}=\{w\in H:\langle w,v-u\rangle\le 0,\ \forall v\in C\}=N_C(u),\quad u\in C.
\]
So we know that $u=J_\lambda^{\partial I_C}(x)\iff x-u\in\lambda N_C(u)\iff\langle x-u,v-u\rangle\le 0,\ \forall v\in C\iff u=P_C(x)$. Hence we get $(A+\partial I_C)^{-1}0=\mathrm{VI}(C,A)$.
Next, putting $B=\partial I_C$ in Theorem 1, we can obtain the following result.
Theorem 2.
Let C be a nonempty closed convex subset of a Hilbert space H, as stated in Theorem 1. For $i=1,2$, let the mappings $A,A_i:C\to H$ be α-inverse-strongly monotone and $\alpha_i$-inverse-strongly monotone, respectively. Let S be a nonexpansive single-valued self-mapping on C. Suppose that $\Omega=\mathrm{Fix}(S)\cap\mathrm{GSVI}(C,A_1,A_2)\cap\mathrm{VI}(C,A)\ne\emptyset$, where $\mathrm{GSVI}(C,A_1,A_2)$ is the fixed-point set of $G:=P_C(I-\mu_1A_1)P_C(I-\mu_2A_2)$ with $0<\mu_i<2\alpha_i$ for $i=1,2$. Let $f:C\to C$ be a strict contraction with constant $\delta\in(0,1)$. For an arbitrary initial point $x_0\in C$, define $\{x_n\}$ by
\[
\begin{cases}
y_n=\alpha_nf(y_n)+\gamma_nP_C(I-\lambda_nA)(t_nx_n+(1-t_n)y_n)+\beta_nx_n,\\
x_{n+1}=(1-\delta_n)SGy_n+\delta_nx_n,\quad n\ge 0,
\end{cases}
\]
where $0<\bar\lambda\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n=\lambda<2\alpha$, and $\{\alpha_n\},\{\beta_n\},\{\gamma_n\},\{\delta_n\},\{t_n\}\subset(0,1)$ satisfy conditions (i)–(iii) as in Theorem 1 in Section 2. Then $x_n\to x^*\in\Omega$, which is the unique solution of the variational inequality $\langle(I-f)x^*,x^*-p\rangle\le 0$ for all $p\in\Omega$.

4.2. Convex Minimization Problems

Let $g:H\to\mathbb{R}$ and $h:H\to\mathbb{R}$ be two functions, where g is convex and smooth and h is proper, convex and lower semicontinuous. The associated minimization problem is to find $x^*\in H$ such that
\[
g(x^*)+h(x^*)=\min_{x\in H}\{g(x)+h(x)\}.
\]
By Fermat’s rule, problem (30) is equivalent to finding $x^*\in H$ such that $0\in\nabla g(x^*)+\partial h(x^*)$, where $\nabla g$ is the gradient of g and $\partial h$ is the subdifferential of h. It is also known that if $\nabla g$ is $\frac{1}{\alpha}$-Lipschitz continuous, then it is $\alpha$-inverse-strongly monotone. Next, putting $A=\nabla g$ and $B=\partial h$ in Theorem 1, we can obtain the following result.
Theorem 3.
Let $g:H\to\mathbb{R}$ be a convex and differentiable function whose gradient $\nabla g$ is $\frac{1}{\alpha}$-Lipschitz continuous, and let $h:H\to\mathbb{R}$ be a convex and lower semicontinuous function. Suppose $A_i:C\to H$ is $\alpha_i$-inverse-strongly monotone for $i=1,2$. Let S be a nonexpansive single-valued self-mapping on C such that $\Omega=\mathrm{Fix}(S)\cap\mathrm{GSVI}(C,A_1,A_2)\cap(\nabla g+\partial h)^{-1}0\ne\emptyset$, where $(\nabla g+\partial h)^{-1}0$ is the set of minimizers of $g+h$, and $\mathrm{GSVI}(C,A_1,A_2)$ is the fixed-point set of $G:=P_C(I-\mu_1A_1)P_C(I-\mu_2A_2)$ with $0<\mu_i<2\alpha_i$ for $i=1,2$. Let $f:C\to C$ be a strict contraction with constant $\delta\in(0,1)$. For an arbitrary initial point $x_0\in C$, define $\{x_n\}$ by
\[
\begin{cases}
y_n=\alpha_nf(y_n)+\gamma_nJ_{\lambda_n}^{\partial h}(I-\lambda_n\nabla g)(t_nx_n+(1-t_n)y_n)+\beta_nx_n,\\
x_{n+1}=(1-\delta_n)SGy_n+\delta_nx_n,\quad n\ge 0,
\end{cases}
\]
where $0<\bar\lambda\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n=\lambda<2\alpha$, and $\{\alpha_n\},\{\beta_n\},\{\gamma_n\},\{\delta_n\},\{t_n\}\subset(0,1)$ satisfy conditions (i)–(iii) as in Theorem 1 in Section 2. Then $x_n\to x^*\in\Omega$, which uniquely solves $\langle(I-f)x^*,x^*-p\rangle\le 0$ for all $p\in\Omega$.
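The forward-backward ingredient $J_{\lambda}^{\partial h}(I-\lambda\nabla g)$ of the scheme above can be made concrete in a scalar toy problem. The Python sketch below (our own illustration, not from the paper: the choices $g(x)=\frac12(x-b)^2$ and $h(x)=\tau|x|$, the step size and the iteration count are ours) uses the fact that the resolvent of $\partial h$ is then the classical soft-thresholding operator:

```python
def soft(x, t):
    # resolvent of t*|.|, i.e. the soft-thresholding operator
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def prox_grad(b, tau, lam=0.8, steps=300, x0=0.0):
    # plain forward-backward iteration x <- J_lam^{dh}(x - lam * grad g(x))
    # for g(x) = (1/2)(x - b)^2 (gradient x - b, 1-Lipschitz, so alpha = 1)
    # and h(x) = tau*|x|; lam = 0.8 < 2*alpha
    x = x0
    for _ in range(steps):
        grad = x - b
        x = soft(x - lam * grad, lam * tau)
    return x

print(prox_grad(b=2.0, tau=0.5))  # converges to soft(b, tau) = 1.5
```

The iterates converge to the closed-form minimizer of $\frac12(x-b)^2+\tau|x|$, namely $\mathrm{soft}(b,\tau)$, which serves as an independent check of the forward-backward construction.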

4.3. Split Feasibility Problems

Let C and Q be nonempty closed convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively. Let $T:H_1\to H_2$ be a bounded linear operator with adjoint $T^*$. Consider the split feasibility problem (SFP) of finding a point $x^*\in C$ such that $Tx^*\in Q$. The SFP can be used to model problems arising in radiation therapy. It is clear that the solution set of the SFP is $C\cap T^{-1}Q$. To solve the SFP, we can rewrite it as the following convexly constrained minimization problem:
\[
\min_{x\in C}\,g(x):=\frac12\|Tx-P_QTx\|^2.
\]
Please note that the function g is convex and differentiable with Lipschitz-continuous gradient $\nabla g=T^*(I-P_Q)T$. Furthermore, $\nabla g$ is $\frac{1}{\|T\|^2}$-inverse-strongly monotone, where $\|T\|^2$ is the spectral radius of $T^*T$. Thus, $x^*$ solves the SFP if and only if $x^*\in H_1$ satisfies
\[
0\in\nabla g(x^*)+\partial I_C(x^*)\iff x^*-\lambda\nabla g(x^*)\in(I+\lambda\partial I_C)x^*\iff x^*=J_\lambda^{\partial I_C}(x^*-\lambda\nabla g(x^*))\iff x^*=P_C(x^*-\lambda\nabla g(x^*)).
\]
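The last equivalence suggests the simple gradient-projection iteration $x_{k+1}=P_C(x_k-\lambda\nabla g(x_k))$. The Python sketch below (a toy one-dimensional instance of our own devising, not from the paper: $Tx=3x$, $C=[0,1]$, $Q=[1,2]$ and $\lambda=0.2<\frac{2}{\|T\|^2}=\frac29$) drives the iterates into $C\cap T^{-1}Q$:

```python
def proj(x, lo, hi):
    # metric projection onto an interval
    return max(lo, min(hi, x))

def sfp_step(x, lam=0.2):
    # one gradient-projection step for the SFP with T x = 3x, C = [0,1], Q = [1,2];
    # grad g(x) = T*(I - P_Q)T x = 3 * (3x - P_Q(3x))
    Tx = 3.0 * x
    grad = 3.0 * (Tx - proj(Tx, 1.0, 2.0))
    return proj(x - lam * grad, 0.0, 1.0)

x = 0.0
for _ in range(200):
    x = sfp_step(x)
# x now lies in C and Tx lies (up to numerical tolerance) in Q
print(x, 3.0 * x)
```

Starting from $x=0$ (infeasible, since $T0=0\notin Q$), the iterates enter the solution set $C\cap T^{-1}Q=[\frac13,\frac23]$ of this toy instance.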
Next, putting $A=\nabla g$ and $B=\partial I_C$ in Theorem 1, we can obtain the following result:
Theorem 4.
Let C and Q be nonempty closed convex subsets of $H_1$ and $H_2$, respectively. Let $T:H_1\to H_2$ be a bounded linear operator with adjoint $T^*$. Let the mapping $A_i:C\to H_1$ be $\alpha_i$-inverse-strongly monotone for $i=1,2$. Let S be a nonexpansive self-mapping on C such that $\Omega=\mathrm{Fix}(S)\cap\mathrm{GSVI}(C,A_1,A_2)\cap(C\cap T^{-1}Q)\ne\emptyset$, where $\mathrm{GSVI}(C,A_1,A_2)$ is the fixed-point set of $G:=P_C(I-\mu_1A_1)P_C(I-\mu_2A_2)$ with $0<\mu_i<2\alpha_i$ for $i=1,2$. Let $f:C\to C$ be a contraction with constant $\delta\in(0,1)$. For an arbitrarily given $x_0\in C$, let $\{x_n\}$ be the sequence generated by
\[
\begin{cases}
y_n=\alpha_nf(y_n)+\gamma_nP_C(I-\lambda_nT^*(I-P_Q)T)(t_nx_n+(1-t_n)y_n)+\beta_nx_n,\\
x_{n+1}=\delta_nx_n+(1-\delta_n)SGy_n,\quad n\ge 0,
\end{cases}
\]
where $0<\bar\lambda\le\lambda_n$ for all $n\ge 0$ and $\lim_{n\to\infty}\lambda_n=\lambda<\frac{2}{\|T\|^2}$, and $\{\alpha_n\},\{\beta_n\},\{\gamma_n\},\{\delta_n\},\{t_n\}\subset(0,1)$ satisfy conditions (i)–(iii) as in Theorem 1 in Section 2. Then $x_n\to x^*\in\Omega$, which uniquely solves $\langle(I-f)x^*,x^*-p\rangle\le 0$ for all $p\in\Omega$.

Author Contributions

All the authors contributed equally to this work.

Funding

This paper was supported by the National Natural Science Foundation of China under Grant 11601348.

Acknowledgments

The two authors are very grateful to the reviewers for useful and important suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ceng, L.C.; Wang, C.Y.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
  2. Aoyama, K.; Iiduka, H.; Takahashi, W. Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, 2006, 35390.
  3. Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618.
  4. Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013, 199.
  5. Qin, X.; Petrusel, A.; Yao, J.C. CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 157–165.
  6. Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506.
  7. Qin, X.; Cho, S.Y.; Yao, J.C. Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 2019.
  8. Bin Dehaish, B.A. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336.
  9. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
  10. Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25.
  11. Nguyen, L.V. Some results on strongly pseudomonotone quasi-variational inequalities. Set-Valued Var. Anal. 2019.
  12. Cho, S.Y.; Bin Dehaish, B.A. Weak convergence of a splitting algorithm in Hilbert spaces. J. Appl. Anal. Comput. 2017, 7, 427–438.
  13. Qin, X.; Yao, J.C. Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 2016, 232.
  14. Sen, M.D.L. Stable iteration procedures in metric spaces which generalize a Picard-type iteration. Fixed Point Theory Appl. 2000, 2000, 953091.
  15. Abed, S.S.; Taresh, N.S. On stability of iterative sequences with error. Mathematics 2019, 7, 765.
  16. Ceng, L.C.; Latif, A.; Yao, J.C. On solutions of a system of variational inequalities and fixed point problems in Banach spaces. Fixed Point Theory Appl. 2013, 2013, 176.
  17. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–501.
  18. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–133.
  19. Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019.
  20. Cho, S.Y.; Kang, S.M. Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24, 224–228.
  21. Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419.
  22. Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195.
  23. Qin, X.; Cho, S.Y.; Wang, L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, 2014, 75.
  24. Qin, X.; Wang, L. Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 2018, 67, 1377–1388.
  25. Song, Y.; Ceng, L.C. A general iteration scheme for variational inequality problem and common fixed point problems of nonexpansive mappings in q-uniformly smooth Banach spaces. J. Global Optim. 2013, 57, 1327–1348.
  26. Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196.
  27. Qin, X.; Cho, S.Y. Convergence analysis of a monotone projection algorithm in reflexive Banach spaces. Acta Math. Sci. 2017, 37, 488–502.
  28. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malaysian Math. Sci. Soc. 2019, 42, 105–118.
  29. Zhao, X. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641.
  30. Ceng, L.C. Variational inequalities approaches to minimization problems with constraints of generalized mixed equilibria and variational inclusions. Mathematics 2019, 7, 270.
  31. Ceng, L.C.; Postolache, M.; Yao, Y. Iterative algorithms for a system of variational inclusions in Banach spaces. Symmetry 2019, 11, 811.
  32. Qin, X.; Wang, L. Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013, 148.
  33. Chen, J.; Kobis, E. Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 2019, 181, 411–436.
  34. Cho, S.Y. On the strong convergence of an iterative process for asymptotically strict pseudocontractions and equilibrium problems. Appl. Math. Comput. 2014, 235, 430–438.
  35. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  36. Bruck, R.E. Nonexpansive projections on subsets of Banach spaces. Pacific J. Math. 1973, 47, 341–355.
  37. Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Springer Netherlands: Amsterdam, The Netherlands, 1976.
  38. Lopez, G.; Martin-Marquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abst. Appl. Anal. 2012, 2012, 109236.
  39. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
  40. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360.
  41. Suzuki, T. Strong convergence of Krasnoselskii and Mann’s type sequences for one-parameter nonexpansive semigroups without Bochner integrals. J. Math. Anal. Appl. 2005, 305, 227–239.
  42. Bruck, R.E. Properties of fixed point sets of nonexpansive mappings in Banach spaces. Trans. Amer. Math. Soc. 1973, 179, 251–262.
  43. Xu, H.K. Iterative algorithms for nonlinear operators. J. London Math. Soc. 2002, 66, 240–256.

Ceng, L.-C.; Shang, M. Generalized Mann Viscosity Implicit Rules for Solving Systems of Variational Inequalities with Constraints of Variational Inclusions and Fixed Point Problems. Mathematics 2019, 7, 933. https://doi.org/10.3390/math7100933