Article

Shrinking Extragradient Method for Pseudomonotone Equilibrium Problems and Quasi-Nonexpansive Mappings

by Manatchanok Khonchaliew 1, Ali Farajzadeh 2 and Narin Petrot 1,3,*
1 Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
2 Department of Mathematics, Razi University, Kermanshah 67149, Iran
3 Centre of Excellence in Nonlinear Analysis and Optimization, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 480; https://doi.org/10.3390/sym11040480
Submission received: 8 March 2019 / Revised: 29 March 2019 / Accepted: 29 March 2019 / Published: 3 April 2019
(This article belongs to the Special Issue Fixed Point Theory and Fractional Calculus with Applications)

Abstract: This paper presents two shrinking extragradient algorithms that can simultaneously find the solution sets of equilibrium problems for pseudomonotone bifunctions and the sets of fixed points of quasi-nonexpansive mappings in a real Hilbert space. Under suitable constraint qualifications on the scalar sequences, the strong convergence of both algorithms is established. Some numerical experiments are presented to demonstrate the new algorithms. Finally, the two introduced algorithms are compared with a standard, well-known algorithm.

1. Introduction

The equilibrium problem started to gain interest after the publication of a paper by Blum and Oettli [1], which discussed the problem of finding a point $x^* \in C$ such that
$$f(x^*, y) \ge 0, \quad \forall y \in C, \qquad (1)$$
where C is a nonempty closed convex subset of a real Hilbert space H, and $f : C \times C \to (-\infty, +\infty)$ is a bifunction. This well-known equilibrium model (1) has been used for studying a variety of mathematical models in physics, chemistry, engineering, and economics. In addition, the equilibrium problem (1) can be applied to many mathematical problems, such as optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems, saddle point problems, and fixed point problems; see [1,2,3,4] and the references therein.
In order to solve the equilibrium problem (1) when f is a monotone bifunction, approximate solutions are frequently based on the proximal point method. That is, given $x_k$, at each step the next iterate $x_{k+1}$ is found by solving the following regularized equilibrium problem: find $x \in C$ such that
$$f(x, y) + \frac{1}{r_k} \langle y - x, x - x_k \rangle \ge 0, \quad \forall y \in C, \qquad (2)$$
where $\{r_k\} \subset (0, \infty)$. Note that the existence of each $x_k$ is guaranteed because the subproblem (2) is a strongly monotone problem (see [5,6]). However, if f is a pseudomonotone bifunction (a property weaker than monotonicity), the strong monotonicity of the problem (2) cannot be guaranteed, and therefore the sequence $\{x_k\}$ may not be well-defined. To overcome this drawback, Tran et al. [7] proposed the following extragradient method for solving the equilibrium problem when the considered bifunction f is pseudomonotone and Lipschitz-type continuous with positive constants $L_1$ and $L_2$:
$$x_0 \in C,$$
$$y_k = \arg\min\{\rho f(x_k, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\},$$
$$x_{k+1} = \arg\min\{\rho f(y_k, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\}, \qquad (3)$$
where $0 < \rho < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$. Tran et al. guaranteed that the sequence $\{x_k\}$ generated by (3) converges weakly to a solution of the equilibrium problem (1).
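For bifunctions of the special form $f(x, y) = \langle F(x), y - x \rangle$, each argmin step in (3) has a closed form: $\arg\min\{\rho f(x, y) + \tfrac{1}{2}\|x - y\|^2 : y \in C\} = P_C(x - \rho F(x))$. The following Python sketch illustrates the two-projection structure of the method under that assumption; the operator F, the box C, and the step size are hypothetical illustrations, not data from the paper.

```python
import numpy as np

def project_box(x, lo, hi):
    """Metric projection onto the box [lo, hi] (the feasible set C)."""
    return np.clip(x, lo, hi)

def extragradient(F, x0, rho, lo, hi, iters=200):
    """Extragradient iteration (3) for f(x, y) = <F(x), y - x>.

    For this bifunction each argmin step reduces to a projection:
    argmin{rho*f(x, y) + 0.5*||x - y||^2 : y in C} = P_C(x - rho*F(x)).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - rho * F(x), lo, hi)   # first argmin step
        x = project_box(x - rho * F(y), lo, hi)   # second argmin step
    return x

# Hypothetical (pseudo)monotone example: F(x) = x on C = [0, 1],
# whose solution is x* = 0.
x_star = extragradient(lambda x: x, np.array([0.9]), 0.2, 0.0, 1.0)
```

Here the two projections per iteration mirror the two strongly convex programs of (3).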
On the other hand, for a nonempty closed convex subset C of H and a mapping $T : C \to C$, the fixed point problem is the problem of finding a point $x \in C$ such that $Tx = x$. This fixed point problem has many important applications, such as optimization problems, variational inequality problems, minimax problems, and saddle point problems; see [8,9,10,11] and the references therein. The set of fixed points of a mapping T will be denoted by $\mathrm{Fix}(T)$.
An iteration method for finding fixed points of the mapping T was proposed by Mann [12] as follows:
$$x_0 \in C, \quad x_{k+1} = (1 - \alpha_k) x_k + \alpha_k T x_k, \qquad (4)$$
where $\{\alpha_k\} \subset (0, 1)$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$. If T is a nonexpansive mapping and has a fixed point, then the sequence $\{x_k\}$ generated by (4) converges weakly to a fixed point of T. In addition, in 1994, Park and Jeong [13] showed that if T is a quasi-nonexpansive mapping with $I - T$ demiclosed at 0, then the sequence generated by the Mann iteration method converges weakly to a fixed point of T.
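The Mann scheme (4) is a one-line averaging loop. Here is a small Python sketch with a hypothetical nonexpansive map $T = \cos$ on $\mathbb{R}$; the step rule $\alpha_k = 1/(k+2)$ satisfies $\{\alpha_k\} \subset (0, 1)$ and $\sum_{k=0}^{\infty} \alpha_k = \infty$.

```python
import math

def mann(T, x0, iters=500):
    """Mann iteration x_{k+1} = (1 - a_k) x_k + a_k T(x_k).

    The hypothetical choice a_k = 1/(k+2) lies in (0, 1) and its series
    diverges, as required by (4).
    """
    x = x0
    for k in range(iters):
        a = 1.0 / (k + 2)
        x = (1 - a) * x + a * T(x)
    return x

# T = cos is nonexpansive on R (|cos'| <= 1); its unique fixed point is
# the Dottie number, approximately 0.739085.
p = mann(math.cos, 1.0)
```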
Furthermore, in order to obtain a strong convergence result for the Mann iteration method, Nakajo and Takahashi [14] proposed the following hybrid method:
$$x_0 \in C,$$
$$y_k = \alpha_k x_k + (1 - \alpha_k) T x_k,$$
$$C_k = \{x \in C : \|y_k - x\| \le \|x_k - x\|\},$$
$$Q_k = \{x \in C : \langle x_0 - x_k, x - x_k \rangle \le 0\},$$
$$x_{k+1} = P_{C_k \cap Q_k}(x_0), \qquad (5)$$
where $\{\alpha_k\} \subset [0, 1]$ is such that $\alpha_k \le 1 - \bar{\alpha}$ for some $\bar{\alpha} \in (0, 1]$, and $P_{C_k \cap Q_k}$ is the metric projection onto $C_k \cap Q_k$. Nakajo and Takahashi proved that if T is a nonexpansive mapping, then the sequence $\{x_k\}$ generated by (5) converges strongly to $P_{\mathrm{Fix}(T)}(x_0)$.
In addition, in 1974, Ishikawa [15] proposed the following method for finding fixed points of a Lipschitz pseudocontractive mapping T:
$$x_0 \in C,$$
$$y_k = (1 - \alpha_k) x_k + \alpha_k T x_k,$$
$$x_{k+1} = (1 - \beta_k) x_k + \beta_k T y_k, \qquad (6)$$
where $0 \le \beta_k \le \alpha_k \le 1$, $\lim_{k \to \infty} \alpha_k = 0$ and $\sum_{k=0}^{\infty} \alpha_k \beta_k = \infty$. If C is a convex compact subset of H, then the sequence $\{x_k\}$ generated by (6) converges strongly to a fixed point of T. It has been previously shown that the Mann iteration method is generally not applicable for finding fixed points of a Lipschitz pseudocontractive mapping in a Hilbert space; see, for example, [16].
In 2008, by using Ishikawa's iteration concept, Takahashi et al. [17] proposed the following hybrid method, called the shrinking projection method, which is different from Nakajo and Takahashi's method [14]:
$$u_0 \in H, \quad C_1 = C, \quad x_1 = P_{C_1}(u_0),$$
$$y_k = \alpha_k x_k + (1 - \alpha_k) T x_k,$$
$$z_k = \beta_k x_k + (1 - \beta_k) T y_k,$$
$$C_{k+1} = \{x \in C_k : \|z_k - x\| \le \|x_k - x\|\},$$
$$x_{k+1} = P_{C_{k+1}}(x_0), \qquad (7)$$
where $\{\alpha_k\} \subset [\underline{\alpha}, \bar{\alpha}]$ with $0 < \underline{\alpha} \le \bar{\alpha} < 1$, and $\{\beta_k\} \subset [0, 1 - \bar{\beta}]$ for some $\bar{\beta} \in (0, 1)$. Takahashi et al. proved that if T is a nonexpansive mapping, then the sequence $\{x_k\}$ generated by (7) converges strongly to $P_{\mathrm{Fix}(T)}(x_0)$.
In recent years, many algorithms have been proposed for finding a common element of the set of solutions of the equilibrium problem and the set of solutions of the fixed point problem; see, for instance, [8,11,18,19,20,21,22,23] and the references therein. In 2016, by using both hybrid and extragradient methods together in combination with Ishikawa's iteration concept, Dinh and Kim [24] proposed the following iteration method for finding a common element of the set of fixed points of a symmetric generalized hybrid mapping T and the set of solutions of the equilibrium problem, when the bifunction f is pseudomonotone and Lipschitz-type continuous with positive constants $L_1$ and $L_2$:
$$x_0 \in C,$$
$$y_k = \arg\min\{\rho_k f(x_k, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\},$$
$$z_k = \arg\min\{\rho_k f(y_k, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\},$$
$$t_k = \alpha_k x_k + (1 - \alpha_k) T x_k,$$
$$u_k = \beta_k t_k + (1 - \beta_k) T z_k,$$
$$C_k = \{x \in H : \|x - u_k\| \le \|x - x_k\|\},$$
$$Q_k = \{x \in H : \langle x - x_k, x_0 - x_k \rangle \le 0\},$$
$$x_{k+1} = P_{C_k \cap Q_k \cap C}(x_0), \qquad (8)$$
where $\{\rho_k\} \subset [\underline{\rho}, \bar{\rho}]$ with $0 < \underline{\rho} \le \bar{\rho} < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$, $\{\alpha_k\} \subset [0, 1]$ is such that $\lim_{k \to \infty} \alpha_k = 1$, and $\{\beta_k\} \subset [0, 1 - \bar{\beta}]$ for some $\bar{\beta} \in (0, 1)$. Dinh and Kim proved that the sequence $\{x_k\}$ generated by (8) converges strongly to $P_{EP(f, C) \cap \mathrm{Fix}(T)}(x_0)$, where $EP(f, C)$ is the solution set of the equilibrium problem.
Now, let us consider the problem of finding a common solution of a finite family of equilibrium problems (CSEP). Let C be a nonempty closed convex subset of H and let $f_i : C \times C \to (-\infty, +\infty)$, $i = 1, \ldots, N$, be bifunctions satisfying $f_i(x, x) = 0$ for each $x \in C$. The problem CSEP is to find $x^* \in C$ such that
$$f_i(x^*, y) \ge 0, \quad \forall y \in C, \; i = 1, \ldots, N. \qquad (9)$$
The solution set of the problem CSEP will be denoted by $\bigcap_{i=1}^{N} EP(f_i, C)$. It is worth pointing out that the problem CSEP is a generalization of many mathematical models, such as common solutions to variational inequality problems, convex feasibility problems and common fixed point problems; see [1,25,26,27] for more details. In 2016, Hieu et al. [28] considered the following problem:
$$\text{find a point } x^* \in C \text{ such that } T_j x^* = x^*, \; j = 1, \ldots, M, \text{ and } f_i(x^*, y) \ge 0, \; \forall y \in C, \; i = 1, \ldots, N, \qquad (10)$$
where C is a nonempty closed convex subset of H, $T_j : C \to C$, $j = 1, \ldots, M$, are mappings, and $f_i : C \times C \to (-\infty, +\infty)$, $i = 1, \ldots, N$, are bifunctions satisfying $f_i(x, x) = 0$ for each $x \in C$. From now on, the solution set of problem (10) will be denoted by S. That is:
$$S := \Big( \bigcap_{j=1}^{M} \mathrm{Fix}(T_j) \Big) \cap \Big( \bigcap_{i=1}^{N} EP(f_i, C) \Big).$$
By using both hybrid and extragradient methods together in combination with Mann's iteration concept and parallel splitting-up techniques (see [25,29]), they proposed the following algorithm for finding the solution set of problem (10), when the mappings are nonexpansive, and the bifunctions are pseudomonotone and Lipschitz-type continuous with positive constants $L_1$ and $L_2$:
$$x_0 \in C,$$
$$y_k^i = \arg\min\{\rho f_i(x_k, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\}, \quad i = 1, 2, \ldots, N,$$
$$z_k^i = \arg\min\{\rho f_i(y_k^i, y) + \tfrac{1}{2}\|x_k - y\|^2 : y \in C\}, \quad i = 1, 2, \ldots, N,$$
$$\bar{z}_k = \arg\max\{\|z_k^i - x_k\| : i = 1, 2, \ldots, N\},$$
$$u_k^j = \alpha_k x_k + (1 - \alpha_k) T_j \bar{z}_k, \quad j = 1, 2, \ldots, M,$$
$$\bar{u}_k = \arg\max\{\|u_k^j - x_k\| : j = 1, 2, \ldots, M\},$$
$$C_k = \{x \in C : \|x - \bar{u}_k\| \le \|x - x_k\|\},$$
$$Q_k = \{x \in C : \langle x - x_k, x_0 - x_k \rangle \le 0\},$$
$$x_{k+1} = P_{C_k \cap Q_k}(x_0), \qquad (11)$$
where $0 < \rho < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$, and $\{\alpha_k\} \subset (0, 1)$ is such that $\limsup_{k \to \infty} \alpha_k < 1$. Hieu et al. proved that the sequence $\{x_k\}$ generated by (11) converges strongly to $P_S(x_0)$. Algorithm (11) is called the PHMEM method.
The current study continues this line of development: new iterative algorithms will be introduced for finding the solution set of problem (10). Some numerical examples will be considered, and the introduced methods will be discussed and compared with the PHMEM algorithm.
This paper is organized as follows: In Section 2, some relevant definitions and properties will be reviewed for use in subsequent sections. Section 3 will present two shrinking extragradient algorithms and prove their convergence. Finally, in Section 4, the performance of the introduced algorithms will be compared to the performance of the PHMEM algorithm and discussed.

2. Preliminaries

This section will present some definitions and properties that will be used subsequently. First, let H be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. The symbols → and ⇀ will be used here to denote strong convergence and weak convergence in H, respectively.
We now recall the definitions of the nonlinear mappings related to this work.
Definition 1 
([30,31]). Let C be a nonempty closed convex subset of H. A mapping $T : C \to C$ is said to be:
(i) pseudocontractive if
$$\|Tx - Ty\|^2 \le \|x - y\|^2 + \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in C,$$
where I denotes the identity operator on H.
(ii) Lipschitzian if there exists $L \ge 0$ such that
$$\|Tx - Ty\| \le L \|x - y\|, \quad \forall x, y \in C.$$
In particular, if $L = 1$, then T is said to be nonexpansive.
(iii) quasi-nonexpansive if $\mathrm{Fix}(T)$ is nonempty and
$$\|Tx - p\| \le \|x - p\|, \quad \forall x \in C, \; p \in \mathrm{Fix}(T).$$
(iv) $(\alpha, \beta, \gamma, \delta)$-symmetric generalized hybrid if there exist $\alpha, \beta, \gamma, \delta \in (-\infty, +\infty)$ such that
$$\alpha \|Tx - Ty\|^2 + \beta (\|x - Ty\|^2 + \|y - Tx\|^2) + \gamma \|x - y\|^2 + \delta (\|x - Tx\|^2 + \|y - Ty\|^2) \le 0, \quad \forall x, y \in C.$$
Definition 2 
(see [32]). Let C be a nonempty closed convex subset of H and $T : C \to H$ be a mapping. The mapping T is said to be demiclosed at $y \in H$ if for any sequence $\{x_k\} \subset C$ with $x_k \rightharpoonup x^* \in C$ and $Tx_k \to y$, it follows that $Tx^* = y$.
Note that the class of pseudocontractive mappings includes the class of nonexpansive mappings. In addition, a nonexpansive mapping with at least one fixed point is a quasi-nonexpansive mapping, but the converse is not true; see, for example, [33]. Moreover, if an $(\alpha, \beta, \gamma, \delta)$-symmetric generalized hybrid mapping T satisfies (1) $\alpha + 2\beta + \gamma \ge 0$, (2) $\alpha + \beta > 0$ and (3) $\delta \ge 0$, then T is quasi-nonexpansive and $I - T$ is demiclosed at 0 (see [34,35]). Furthermore, $\mathrm{Fix}(T)$ is closed and convex when T is a quasi-nonexpansive mapping (see [36]).
Next, we recall definitions and facts needed for considering the equilibrium problems.
Definition 3 
([1,4,37]). Let C be a nonempty closed convex subset of H and $f : C \times C \to (-\infty, +\infty)$ be a bifunction. The bifunction f is said to be:
(i) strongly monotone on C if there exists a constant $\gamma > 0$ such that
$$f(x, y) + f(y, x) \le -\gamma \|x - y\|^2, \quad \forall x, y \in C;$$
(ii) monotone on C if
$$f(x, y) + f(y, x) \le 0, \quad \forall x, y \in C;$$
(iii) pseudomonotone on C if
$$\forall x, y \in C, \quad f(x, y) \ge 0 \implies f(y, x) \le 0;$$
(iv) Lipschitz-type continuous on C with constants $L_1 > 0$ and $L_2 > 0$ if
$$f(x, y) + f(y, z) \ge f(x, z) - L_1 \|x - y\|^2 - L_2 \|y - z\|^2, \quad \forall x, y, z \in C.$$
Remark 1.
From Definition 3, we observe that (i) ⇒ (ii) ⇒ (iii). However, a pseudomonotone bifunction need not be monotone on C; see, for example, [38].
Let C be a nonempty closed convex subset of H and $f : C \times C \to (-\infty, +\infty)$ a bifunction satisfying $f(x, x) = 0$ for each $x \in C$. In this paper, we are concerned with the following assumptions:
(A1) f is weakly continuous on $C \times C$ in the sense that, if $x, y \in C$ and $\{x_k\}, \{y_k\}$ are two sequences in C converging weakly to x and y respectively, then $f(x_k, y_k)$ converges to $f(x, y)$;
(A2) $f(x, \cdot)$ is convex and subdifferentiable on C for each fixed $x \in C$;
(A3) f is pseudomonotone on C;
(A4) f is Lipschitz-type continuous on C with constants $L_1 > 0$ and $L_2 > 0$.
It is well-known that the solution set $EP(f, C)$ is closed and convex when the bifunction f satisfies assumptions (A1)–(A3); see, for instance, [7,39,40].
The following facts are very important in order to obtain our main results.
Lemma 1 
([18]). Let $f : C \times C \to (-\infty, +\infty)$ satisfy (A2)–(A4), suppose that $EP(f, C)$ is nonempty, and let $0 < \rho_0 < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$. Let $x_0 \in C$. If $y_0$ and $z_0$ are defined by
$$y_0 = \arg\min\{\rho_0 f(x_0, w) + \tfrac{1}{2}\|w - x_0\|^2 : w \in C\},$$
$$z_0 = \arg\min\{\rho_0 f(y_0, w) + \tfrac{1}{2}\|w - x_0\|^2 : w \in C\},$$
then,
(i) $\rho_0 [f(x_0, w) - f(x_0, y_0)] \ge \langle y_0 - x_0, y_0 - w \rangle$, for all $w \in C$;
(ii) $\|z_0 - q\|^2 \le \|x_0 - q\|^2 - (1 - 2\rho_0 L_1)\|x_0 - y_0\|^2 - (1 - 2\rho_0 L_2)\|y_0 - z_0\|^2$, for all $q \in EP(f, C)$.
This section closes by recalling the projection mapping and some calculus concepts in Hilbert space.
Let C be a nonempty closed convex subset of H. For each $x \in H$, we denote the metric projection of x onto C by $P_C(x)$, that is,
$$\|x - P_C(x)\| \le \|y - x\|, \quad \forall y \in C.$$
The following facts will also be used in this paper.
Lemma 2 
(see, for instance, [41,42]). Let C be a nonempty closed convex subset of H. Then
(i) $P_C(x)$ is well-defined and single-valued for each $x \in H$;
(ii) $z = P_C(x)$ if and only if $\langle x - z, y - z \rangle \le 0$, $\forall y \in C$;
(iii) $\|P_C(x) - P_C(y)\|^2 \le \|x - y\|^2 - \|P_C(x) - x + y - P_C(y)\|^2$, $\forall x, y \in H$.
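The characterization in Lemma 2 (ii) is easy to check numerically. The sketch below uses a hypothetical choice of C, the closed unit ball (for which the projection has a closed form), and verifies the variational inequality at random points of C:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

# Check Lemma 2 (ii): z = P_C(x) iff <x - z, y - z> <= 0 for all y in C.
rng = np.random.default_rng(0)
x = np.array([3.0, 4.0])
z = project_ball(x, np.zeros(2), 1.0)      # z = (0.6, 0.8)
for _ in range(100):
    y = project_ball(rng.normal(size=2), np.zeros(2), 1.0)  # a point of C
    assert np.dot(x - z, y - z) <= 1e-12
```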
For a nonempty closed convex subset C of H and a convex function $g : C \to \mathbb{R}$, the subdifferential of g at $z \in C$ is defined by
$$\partial g(z) = \{w \in H : g(y) - g(z) \ge \langle w, y - z \rangle, \; \forall y \in C\}.$$
The function g is said to be subdifferentiable at z if $\partial g(z) \neq \emptyset$.

3. Main Result

In this section, we propose two shrinking extragradient algorithms for finding a solution of problem (10), when each mapping $T_j$, $j = 1, 2, \ldots, M$, is quasi-nonexpansive with $I - T_j$ demiclosed at 0, and each bifunction $f_i$, $i = 1, 2, \ldots, N$, satisfies assumptions (A1)–(A4). We start by observing that if each bifunction $f_i$, $i = 1, 2, \ldots, N$, is Lipschitz-type continuous on C with constants $L_1^i > 0$ and $L_2^i > 0$, then
$$f_i(x, y) + f_i(y, z) \ge f_i(x, z) - L_1^i \|x - y\|^2 - L_2^i \|y - z\|^2 \ge f_i(x, z) - L_1 \|x - y\|^2 - L_2 \|y - z\|^2,$$
where $L_1 = \max\{L_1^i : i = 1, 2, \ldots, N\}$ and $L_2 = \max\{L_2^i : i = 1, 2, \ldots, N\}$. This means the bifunctions $f_i$, $i = 1, 2, \ldots, N$, are Lipschitz-type continuous on C with the common constants $L_1 > 0$ and $L_2 > 0$; we will use this notation throughout the paper. Moreover, for each $N \in \mathbb{N}$ and $k \in \mathbb{N} \cup \{0\}$, we write $[k]_N$ for the modulo function at k with respect to N, that is,
$$[k]_N = (k \bmod N) + 1.$$
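In code, this cyclic index convention (values cycling through $1, \ldots, N$) is a direct transcription of the formula above:

```python
def mod_index(k, N):
    """The index [k]_N = (k mod N) + 1, cycling through 1, ..., N."""
    return k % N + 1

# For N = 3 the indices cycle 1, 2, 3, 1, 2, ...
indices = [mod_index(k, 3) for k in range(5)]
```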
Now, we propose the following cyclic algorithm.
CSEM Algorithm (Cyclic Shrinking Extragradient Method)
Initialization. Pick $x_0 \in C =: C_0$, and choose parameters $\{\rho_k\}$ with $0 < \inf \rho_k \le \sup \rho_k < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$, $\{\alpha_k\} \subset [0, 1]$ such that $\lim_{k \to \infty} \alpha_k = 1$, and $\{\beta_k\}$ with $0 \le \inf \beta_k \le \sup \beta_k < 1$.
Step 1. Solve the strongly convex program
$$y_k = \arg\min\{\rho_k f_{[k]_N}(x_k, y) + \tfrac{1}{2}\|y - x_k\|^2 : y \in C\}.$$
Step 2. Solve the strongly convex program
$$z_k = \arg\min\{\rho_k f_{[k]_N}(y_k, y) + \tfrac{1}{2}\|y - x_k\|^2 : y \in C\}.$$
Step 3. Compute
$$t_k = \alpha_k x_k + (1 - \alpha_k) T_{[k]_M} x_k, \quad u_k = \beta_k t_k + (1 - \beta_k) T_{[k]_M} z_k.$$
Step 4. Construct the closed convex subset of C:
$$C_{k+1} = \{x \in C_k : \|x - u_k\| \le \|x - x_k\|\}.$$
Step 5. Define the next approximation $x_{k+1}$ as the projection of $x_0$ onto $C_{k+1}$, i.e.,
$$x_{k+1} = P_{C_{k+1}}(x_0).$$
Step 6. Set $k := k + 1$ and go to Step 1.
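To make the steps concrete, here is a one-dimensional Python sketch of the CSEM loop on $C = [0, 1]$ for bifunctions of the form $f_i(x, y) = B_i(x)(y - x)$, for which Steps 1–2 reduce to projections of $x_k - \rho_k B(x_k)$, and each $C_{k+1}$ remains an interval cut at the midpoint of $u_k$ and $x_k$. The data ($B$, $T$, the parameter choices) are hypothetical illustrations, not the authors' Matlab code:

```python
def csem_1d(B_list, T_list, x0, rho=0.2, iters=60):
    """1-D sketch of the CSEM algorithm on C = [0, 1].

    The shrinking sets C_k are intervals [a, b]; the halfspace cut
    {x : |x - u_k| <= |x - x_k|} is a halfline bounded by (u_k + x_k)/2.
    """
    proj = lambda v, a, b: max(min(v, b), a)
    a, b = 0.0, 1.0                        # C_0 = C = [0, 1]
    x = x0
    N, M = len(B_list), len(T_list)
    for k in range(iters):
        alpha = 1 - 1 / (k + 2)            # alpha_k -> 1
        beta = 1 / (k + 2)                 # sup beta_k < 1
        B = B_list[k % N]                  # cyclic bifunction choice
        T = T_list[k % M]                  # cyclic mapping choice
        y = proj(x - rho * B(x), 0.0, 1.0)         # Step 1
        z = proj(x - rho * B(y), 0.0, 1.0)         # Step 2
        t = alpha * x + (1 - alpha) * T(x)         # Step 3
        u = beta * t + (1 - beta) * T(z)
        mid = 0.5 * (u + x)                        # Step 4: halfline cut
        if u < x:
            b = min(b, mid)
        elif u > x:
            a = max(a, mid)
        x = proj(x0, a, b)                         # Step 5: project x0
    return x

# Hypothetical data: one bifunction with B(x) = x and one quasi-
# nonexpansive map T(x) = x / 2; the common solution is x* = 0.
sol = csem_1d([lambda v: v], [lambda v: v / 2], x0=0.9)
```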
Before proving the strong convergence of the CSEM Algorithm, we need the following lemma.
Lemma 3.
Suppose that the solution set S is nonempty. Then, the sequence $\{x_k\}$ generated by the CSEM Algorithm is well-defined.
Proof. 
To prove the lemma, it suffices to show that $C_k$ is a nonempty closed convex subset of H for each $k \in \mathbb{N} \cup \{0\}$. Firstly, we show non-emptiness by showing that $S \subset C_k$ for each $k \in \mathbb{N} \cup \{0\}$. Obviously, $S \subset C_0$.
Now, let $q \in S$. Then, by Lemma 1 (ii), we have
$$\|z_k - q\|^2 \le \|x_k - q\|^2 - (1 - 2\rho_k L_1)\|x_k - y_k\|^2 - (1 - 2\rho_k L_2)\|y_k - z_k\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. This implies that
$$\|z_k - q\| \le \|x_k - q\|, \qquad (12)$$
for each $k \in \mathbb{N} \cup \{0\}$. On the other hand, since $q \in \mathrm{Fix}(T_j)$, it follows from the quasi-nonexpansivity of each $T_j$ ($j \in \{1, 2, \ldots, M\}$) and the definitions of $t_k, u_k$ that
$$\|t_k - q\| \le \alpha_k \|x_k - q\| + (1 - \alpha_k)\|T_{[k]_M} x_k - q\| \le \alpha_k \|x_k - q\| + (1 - \alpha_k)\|x_k - q\| = \|x_k - q\|, \qquad (13)$$
and
$$\|u_k - q\| \le \beta_k \|t_k - q\| + (1 - \beta_k)\|T_{[k]_M} z_k - q\| \le \beta_k \|t_k - q\| + (1 - \beta_k)\|z_k - q\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. The relations (12) and (13) imply that
$$\|u_k - q\| \le \beta_k \|x_k - q\| + (1 - \beta_k)\|x_k - q\| = \|x_k - q\|, \qquad (14)$$
for each $k \in \mathbb{N} \cup \{0\}$. Now, suppose that $S \subset C_k$. Then, by using (14), we see that $S \subset C_{k+1}$. So, by induction, we have $S \subset C_k$ for each $k \in \mathbb{N} \cup \{0\}$. Since S is nonempty, $C_k$ is nonempty for each $k \in \mathbb{N} \cup \{0\}$.
Next, we show that $C_k$ is closed and convex for each $k \in \mathbb{N} \cup \{0\}$. Note that $C_0$ is already closed and convex. Now, suppose that $C_k$ is closed and convex; we will show that $C_{k+1}$ is likewise. To do this, consider the set $B_k = \{x \in H : \|x - u_k\| \le \|x - x_k\|\}$. We see that
$$B_k = \{x \in H : \langle x_k - u_k, x \rangle \le \tfrac{1}{2}(\|x_k\|^2 - \|u_k\|^2)\}.$$
This means that $B_k$ is a halfspace and $C_{k+1} = C_k \cap B_k$. Thus, $C_{k+1}$ is closed and convex. By induction, we conclude that $C_k$ is closed and convex for each $k \in \mathbb{N} \cup \{0\}$. Consequently, we can guarantee that $\{x_k\}$ is well-defined.  □
Theorem 1.
Suppose that the solution set S is nonempty. Then, the sequence $\{x_k\}$ generated by the CSEM Algorithm converges strongly to $P_S(x_0)$.
Proof. 
Let $q \in S$. By the definition of $x_{k+1}$, we observe that $x_{k+1} \in C_{k+1} \subset C_k$ for each $k \in \mathbb{N} \cup \{0\}$. Since $x_k = P_{C_k}(x_0)$ and $x_{k+1} \in C_k$, we have
$$\|x_k - x_0\| \le \|x_{k+1} - x_0\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. This means that $\{\|x_k - x_0\|\}$ is a nondecreasing sequence. Similarly, since $q \in S \subset C_{k+1}$, we obtain that
$$\|x_{k+1} - x_0\| \le \|q - x_0\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. By the above inequalities, we get
$$\|x_k - x_0\| \le \|q - x_0\|, \qquad (15)$$
for each $k \in \mathbb{N} \cup \{0\}$, so $\{\|x_k - x_0\|\}$ is a bounded sequence. Consequently, $\{\|x_k - x_0\|\}$ is a convergent sequence. Moreover, $\{x_k\}$ is bounded; thus, in view of (13) and (14), $\{t_k\}$ and $\{u_k\}$ are also bounded. Suppose $k, j \in \mathbb{N} \cup \{0\}$ are such that $k > j$. It follows that $x_k \in C_k \subset C_j$. Then, by Lemma 2 (iii), we have
$$\|P_{C_j}(x_k) - P_{C_j}(x_0)\|^2 \le \|x_0 - x_k\|^2 - \|P_{C_j}(x_k) - x_k + x_0 - P_{C_j}(x_0)\|^2.$$
Consequently,
$$\|x_k - x_j\|^2 \le \|x_0 - x_k\|^2 - \|x_j - x_0\|^2.$$
Thus, by the existence of $\lim_{k \to \infty} \|x_k - x_0\|$, we get
$$\lim_{k, j \to \infty} \|x_k - x_j\| = 0.$$
That is, $\{x_k\}$ is a Cauchy sequence in C. Since C is closed, there exists $p \in C$ such that
$$\lim_{k \to \infty} x_k = p.$$
By the definition of $C_{k+1}$ and the fact that $x_{k+1} \in C_{k+1}$, we see that
$$\|x_{k+1} - u_k\| \le \|x_{k+1} - x_k\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. It follows that
$$\|u_k - x_k\| \le \|u_k - x_{k+1}\| + \|x_{k+1} - x_k\| \le \|x_{k+1} - x_k\| + \|x_{k+1} - x_k\| = 2\|x_{k+1} - x_k\|, \qquad (17)$$
for each $k \in \mathbb{N} \cup \{0\}$. Since $x_k \to p$ and $x_{k+1} \to p$ as $k \to \infty$, we obtain that
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$$
This together with (17) implies that
$$\lim_{k \to \infty} \|u_k - x_k\| = 0. \qquad (18)$$
Since $\lim_{k \to \infty} \alpha_k = 1$ and each $T_j$ ($j \in \{1, 2, \ldots, M\}$) is quasi-nonexpansive, it follows that
$$\lim_{k \to \infty} \|t_k - x_k\| = \lim_{k \to \infty} \|\alpha_k x_k + (1 - \alpha_k) T_{[k]_M} x_k - x_k\| = \lim_{k \to \infty} (1 - \alpha_k)\|x_k - T_{[k]_M} x_k\| = 0. \qquad (19)$$
Consider
$$\|u_k - q\|^2 = \|\beta_k(t_k - q) + (1 - \beta_k)(T_{[k]_M} z_k - q)\|^2 = \beta_k \|t_k - q\|^2 + (1 - \beta_k)\|T_{[k]_M} z_k - q\|^2 - \beta_k(1 - \beta_k)\|t_k - T_{[k]_M} z_k\|^2 \le \beta_k \|t_k - q\|^2 + (1 - \beta_k)\|T_{[k]_M} z_k - q\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. By using (13) and the quasi-nonexpansivity of each $T_j$ ($j \in \{1, 2, \ldots, M\}$), we obtain
$$\|u_k - q\|^2 \le \beta_k \|x_k - q\|^2 + (1 - \beta_k)\|z_k - q\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. Then, by Lemma 1 (ii), we have
$$\|u_k - q\|^2 \le \beta_k \|x_k - q\|^2 + (1 - \beta_k)[\|x_k - q\|^2 - (1 - 2\rho_k L_1)\|x_k - y_k\|^2 - (1 - 2\rho_k L_2)\|y_k - z_k\|^2] \le \|x_k - q\|^2 - (1 - \beta_k)[(1 - 2\rho_k L_1)\|x_k - y_k\|^2 + (1 - 2\rho_k L_2)\|y_k - z_k\|^2],$$
for each $k \in \mathbb{N} \cup \{0\}$. It follows that
$$(1 - \beta_k)[(1 - 2\rho_k L_1)\|x_k - y_k\|^2 + (1 - 2\rho_k L_2)\|y_k - z_k\|^2] \le \|x_k - q\|^2 - \|u_k - q\|^2 \le \|x_k - u_k\|(\|x_k - q\| + \|u_k - q\|),$$
for each $k \in \mathbb{N} \cup \{0\}$. Thus, by using (18) and the choices of $\{\beta_k\}, \{\rho_k\}$, we have
$$\lim_{k \to \infty} \|x_k - y_k\| = 0, \qquad (21)$$
and
$$\lim_{k \to \infty} \|y_k - z_k\| = 0.$$
These imply that
$$\lim_{k \to \infty} \|x_k - z_k\| = 0. \qquad (23)$$
Then, since $\lim_{k \to \infty} x_k = p$, we also have
$$\lim_{k \to \infty} y_k = p,$$
and
$$\lim_{k \to \infty} z_k = p.$$
Next, we claim that $p \in S$. From the definition of $u_k$, we see that
$$(1 - \beta_k)\|T_{[k]_M} z_k - z_k\| = \|u_k - z_k - \beta_k(t_k - z_k)\| \le \|u_k - z_k\| + \beta_k \|t_k - z_k\| \le \|u_k - x_k\| + \beta_k \|t_k - x_k\| + (1 + \beta_k)\|x_k - z_k\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. Then, by using (18), (19) and (23), we have
$$\lim_{k \to \infty} \|T_{[k]_M} z_k - z_k\| = 0. \qquad (26)$$
Furthermore, for each fixed $j \in \{1, 2, \ldots, M\}$, we observe that
$$[(j - 1) + kM]_M = j,$$
for each $k \in \mathbb{N} \cup \{0\}$. Thus, it follows from (26) that
$$0 = \lim_{k \to \infty} \|T_{[(j-1)+kM]_M} z_{(j-1)+kM} - z_{(j-1)+kM}\| = \lim_{k \to \infty} \|T_j z_{(j-1)+kM} - z_{(j-1)+kM}\|, \qquad (27)$$
for each $j \in \{1, 2, \ldots, M\}$. Since $z_k \to p$ as $k \to \infty$, for each $j \in \{1, 2, \ldots, M\}$ we get $z_{(j-1)+kM} \to p$ as $k \to \infty$. Combining this with (27), the demiclosedness at 0 of $I - T_j$ implies that
$$T_j p = p,$$
for each $j = 1, 2, \ldots, M$.
Similarly, for each fixed $i \in \{1, 2, \ldots, N\}$, we note that
$$[(i - 1) + kN]_N = i,$$
for each $k \in \mathbb{N} \cup \{0\}$. Since $x_k \to p$ and $y_k \to p$ as $k \to \infty$, for each $i \in \{1, 2, \ldots, N\}$ we have $x_{(i-1)+kN} \to p$ and $y_{(i-1)+kN} \to p$ as $k \to \infty$. By Lemma 1 (i), for each $i \in \{1, 2, \ldots, N\}$, we obtain
$$\rho_{(i-1)+kN}[f_{[(i-1)+kN]_N}(x_{(i-1)+kN}, y) - f_{[(i-1)+kN]_N}(x_{(i-1)+kN}, y_{(i-1)+kN})] \ge \langle y_{(i-1)+kN} - x_{(i-1)+kN}, y_{(i-1)+kN} - y \rangle, \quad \forall y \in C.$$
It follows that, for each $i \in \{1, 2, \ldots, N\}$, we have
$$f_{[(i-1)+kN]_N}(x_{(i-1)+kN}, y) - f_{[(i-1)+kN]_N}(x_{(i-1)+kN}, y_{(i-1)+kN}) \ge -\frac{1}{\rho_{(i-1)+kN}} \|y_{(i-1)+kN} - x_{(i-1)+kN}\| \|y_{(i-1)+kN} - y\|, \quad \forall y \in C.$$
By using (21) and the weak continuity of each $f_i$ ($i \in \{1, 2, \ldots, N\}$), we get that
$$f_i(p, y) \ge 0, \quad \forall y \in C,$$
for each $i = 1, 2, \ldots, N$. Thus, we have shown that $p \in S$.
Finally, we show that $p = P_S(x_0)$. Indeed, since $P_S(x_0) \in S$, it follows from (15) that
$$\|x_k - x_0\| \le \|P_S(x_0) - x_0\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. Then, by the continuity of the norm and $\lim_{k \to \infty} x_k = p$, we see that
$$\|p - x_0\| = \lim_{k \to \infty} \|x_k - x_0\| \le \|P_S(x_0) - x_0\|.$$
Thus, by the definition of $P_S(x_0)$ and $p \in S$, we obtain that $p = P_S(x_0)$. This completes the proof.  □
Next, by replacing the cyclic selection with a parallel one, we propose the following algorithm.
PSEM Algorithm (Parallel Shrinking Extragradient Method)
Initialization. Pick $x_0 \in C =: C_0$, and choose parameters $\{\rho_k^i\}$ with $0 < \inf \rho_k^i \le \sup \rho_k^i < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$, $i = 1, 2, \ldots, N$, $\{\alpha_k\} \subset [0, 1]$ such that $\lim_{k \to \infty} \alpha_k = 1$, and $\{\beta_k\}$ with $0 \le \inf \beta_k \le \sup \beta_k < 1$.
Step 1. Solve the N strongly convex programs
$$y_k^i = \arg\min\{\rho_k^i f_i(x_k, y) + \tfrac{1}{2}\|y - x_k\|^2 : y \in C\}, \quad i = 1, 2, \ldots, N.$$
Step 2. Solve the N strongly convex programs
$$z_k^i = \arg\min\{\rho_k^i f_i(y_k^i, y) + \tfrac{1}{2}\|y - x_k\|^2 : y \in C\}, \quad i = 1, 2, \ldots, N.$$
Step 3. Find the farthest element from $x_k$ among the $z_k^i$, $i = 1, 2, \ldots, N$, i.e.,
$$\bar{z}_k = \arg\max\{\|z_k^i - x_k\| : i = 1, 2, \ldots, N\}.$$
Step 4. Compute
$$t_k^j = \alpha_k x_k + (1 - \alpha_k) T_j x_k, \quad u_k^j = \beta_k t_k^j + (1 - \beta_k) T_j \bar{z}_k, \quad j = 1, 2, \ldots, M.$$
Step 5. Find the farthest element from $x_k$ among the $u_k^j$, $j = 1, 2, \ldots, M$, i.e.,
$$\bar{u}_k = \arg\max\{\|u_k^j - x_k\| : j = 1, 2, \ldots, M\}.$$
Step 6. Construct the closed convex subset of C:
$$C_{k+1} = \{x \in C_k : \|x - \bar{u}_k\| \le \|x - x_k\|\}.$$
Step 7. Define the next approximation $x_{k+1}$ as the projection of $x_0$ onto $C_{k+1}$, i.e.,
$$x_{k+1} = P_{C_{k+1}}(x_0).$$
Step 8. Set $k := k + 1$ and go to Step 1.
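A matching one-dimensional sketch of one PSEM iteration on $C = [0, 1]$, again for bifunctions of the form $f_i(x, y) = B_i(x)(y - x)$ so that Steps 1–2 reduce to projections; the farthest-element selections of Steps 3 and 5 become simple argmax calls. All data are hypothetical illustrations, not the authors' code:

```python
def psem_step(B_list, T_list, x, x0, bounds, rho, alpha, beta):
    """One 1-D PSEM iteration on C = [0, 1]; C_k is the interval `bounds`."""
    proj = lambda v, a, b: max(min(v, b), a)
    a, b = bounds
    # Steps 1-3: solve all N subproblem pairs, keep the farthest z_k^i.
    zs = []
    for B in B_list:
        y = proj(x - rho * B(x), 0.0, 1.0)
        zs.append(proj(x - rho * B(y), 0.0, 1.0))
    z_bar = max(zs, key=lambda z: abs(z - x))
    # Steps 4-5: evaluate all M mappings, keep the farthest u_k^j.
    us = [beta * (alpha * x + (1 - alpha) * T(x)) + (1 - beta) * T(z_bar)
          for T in T_list]
    u_bar = max(us, key=lambda u: abs(u - x))
    # Steps 6-7: cut C_k with {x : |x - u_bar| <= |x - x_k|} and
    # project x0 onto the result.
    mid = 0.5 * (u_bar + x)
    if u_bar < x:
        b = min(b, mid)
    elif u_bar > x:
        a = max(a, mid)
    return proj(x0, a, b), (a, b)

# Hypothetical data: two bifunctions and two mappings, common solution 0.
x, bounds = 0.9, (0.0, 1.0)
for k in range(60):
    x, bounds = psem_step([lambda v: v, lambda v: 0.5 * v],
                          [lambda v: v / 2, lambda v: v / 3],
                          x, 0.9, bounds, 0.2, 1 - 1 / (k + 2), 1 / (k + 2))
```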
Theorem 2.
Suppose that the solution set S is nonempty. Then, the sequence $\{x_k\}$ generated by the PSEM Algorithm converges strongly to $P_S(x_0)$.
Proof. 
Let $q \in S$. By the definition of $\bar{z}_k$, there exists $i_k \in \{1, 2, \ldots, N\}$ such that $z_k^{i_k} = \bar{z}_k = \arg\max\{\|z_k^i - x_k\| : i = 1, 2, \ldots, N\}$. Then, by Lemma 1 (ii), we have
$$\|\bar{z}_k - q\|^2 \le \|x_k - q\|^2 - (1 - 2\rho_k^{i_k} L_1)\|x_k - y_k^{i_k}\|^2 - (1 - 2\rho_k^{i_k} L_2)\|y_k^{i_k} - \bar{z}_k\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. This implies that
$$\|\bar{z}_k - q\| \le \|x_k - q\|, \qquad (28)$$
for each $k \in \mathbb{N} \cup \{0\}$. On the other hand, by the definition of $t_k^j$ and the quasi-nonexpansivity of each $T_j$ ($j \in \{1, 2, \ldots, M\}$), we have
$$\|t_k^j - q\| \le \alpha_k \|x_k - q\| + (1 - \alpha_k)\|T_j x_k - q\| \le \alpha_k \|x_k - q\| + (1 - \alpha_k)\|x_k - q\| = \|x_k - q\|, \qquad (29)$$
for each $k \in \mathbb{N} \cup \{0\}$. Additionally, by the definition of $\bar{u}_k$, there exists $j_k \in \{1, 2, \ldots, M\}$ such that $u_k^{j_k} = \bar{u}_k = \arg\max\{\|u_k^j - x_k\| : j = 1, 2, \ldots, M\}$. It follows from the quasi-nonexpansivity of each $T_j$ ($j \in \{1, 2, \ldots, M\}$) that
$$\|\bar{u}_k - q\| \le \beta_k \|t_k^{j_k} - q\| + (1 - \beta_k)\|T_{j_k} \bar{z}_k - q\| \le \beta_k \|t_k^{j_k} - q\| + (1 - \beta_k)\|\bar{z}_k - q\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. The relations (28) and (29) imply that
$$\|\bar{u}_k - q\| \le \beta_k \|x_k - q\| + (1 - \beta_k)\|x_k - q\| = \|x_k - q\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. Following the proof of Lemma 3 and Theorem 1, we can show that $C_k$ is a closed convex subset of H and $S \subset C_k$, for each $k \in \mathbb{N} \cup \{0\}$. Moreover, we can check that the sequence $\{x_k\}$ is convergent, say
$$\lim_{k \to \infty} x_k = p, \qquad (31)$$
for some $p \in C$.
By the definition of $C_{k+1}$ and the fact that $x_{k+1} \in C_{k+1}$, we see that
$$\|x_{k+1} - \bar{u}_k\| \le \|x_{k+1} - x_k\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. It follows that
$$\|\bar{u}_k - x_k\| \le \|\bar{u}_k - x_{k+1}\| + \|x_{k+1} - x_k\| \le \|x_{k+1} - x_k\| + \|x_{k+1} - x_k\| = 2\|x_{k+1} - x_k\|, \qquad (32)$$
for each $k \in \mathbb{N} \cup \{0\}$. Since $x_k \to p$ and $x_{k+1} \to p$ as $k \to \infty$, we obtain that
$$\lim_{k \to \infty} \|x_{k+1} - x_k\| = 0.$$
This together with (32) implies that
$$\lim_{k \to \infty} \|\bar{u}_k - x_k\| = 0.$$
Then, by the definition of $\bar{u}_k$, we have
$$\lim_{k \to \infty} \|u_k^j - x_k\| = 0, \qquad (33)$$
for each $j = 1, 2, \ldots, M$. Since $\lim_{k \to \infty} \alpha_k = 1$ and each $T_j$ ($j \in \{1, 2, \ldots, M\}$) is quasi-nonexpansive, it follows that
$$\lim_{k \to \infty} \|t_k^j - x_k\| = \lim_{k \to \infty} \|\alpha_k x_k + (1 - \alpha_k) T_j x_k - x_k\| = \lim_{k \to \infty} (1 - \alpha_k)\|x_k - T_j x_k\| = 0, \qquad (34)$$
for each $j = 1, 2, \ldots, M$. Besides, by the definition of $u_k^j$, for each $j = 1, 2, \ldots, M$, we see that
$$\|u_k^j - q\|^2 = \|\beta_k(t_k^j - q) + (1 - \beta_k)(T_j \bar{z}_k - q)\|^2 = \beta_k \|t_k^j - q\|^2 + (1 - \beta_k)\|T_j \bar{z}_k - q\|^2 - \beta_k(1 - \beta_k)\|t_k^j - T_j \bar{z}_k\|^2 \le \beta_k \|t_k^j - q\|^2 + (1 - \beta_k)\|T_j \bar{z}_k - q\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. Thus, by using (29) and the quasi-nonexpansivity of each $T_j$ ($j \in \{1, 2, \ldots, M\}$), we have
$$\|u_k^j - q\|^2 \le \beta_k \|x_k - q\|^2 + (1 - \beta_k)\|\bar{z}_k - q\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. So, by Lemma 1 (ii), for each $j = 1, 2, \ldots, M$, we get that
$$\|u_k^j - q\|^2 \le \beta_k \|x_k - q\|^2 + (1 - \beta_k)[\|x_k - q\|^2 - (1 - 2\rho_k^{i_k} L_1)\|x_k - y_k^{i_k}\|^2 - (1 - 2\rho_k^{i_k} L_2)\|y_k^{i_k} - \bar{z}_k\|^2] = \|x_k - q\|^2 - (1 - \beta_k)[(1 - 2\rho_k^{i_k} L_1)\|x_k - y_k^{i_k}\|^2 + (1 - 2\rho_k^{i_k} L_2)\|y_k^{i_k} - \bar{z}_k\|^2],$$
for each $k \in \mathbb{N} \cup \{0\}$. It follows that, for each $j = 1, 2, \ldots, M$, we have
$$(1 - \beta_k)[(1 - 2\rho_k^{i_k} L_1)\|x_k - y_k^{i_k}\|^2 + (1 - 2\rho_k^{i_k} L_2)\|y_k^{i_k} - \bar{z}_k\|^2] \le \|x_k - q\|^2 - \|u_k^j - q\|^2 \le \|x_k - u_k^j\|(\|x_k - q\| + \|u_k^j - q\|),$$
for each $k \in \mathbb{N} \cup \{0\}$. Thus, by using (33) and the choices of $\{\beta_k\}, \{\rho_k^i\}$, we see that
$$\lim_{k \to \infty} \|x_k - y_k^{i_k}\| = 0,$$
and
$$\lim_{k \to \infty} \|y_k^{i_k} - \bar{z}_k\| = 0.$$
These imply that
$$\lim_{k \to \infty} \|x_k - \bar{z}_k\| = 0. \qquad (38)$$
Then, by the definition of $\bar{z}_k$, we have
$$\lim_{k \to \infty} \|x_k - z_k^i\| = 0, \qquad (39)$$
for each $i = 1, 2, \ldots, N$. Moreover, by Lemma 1 (ii), for each $i = 1, 2, \ldots, N$, we get that
$$\|z_k^i - q\|^2 \le \|x_k - q\|^2 - (1 - 2\rho_k^i L_1)\|x_k - y_k^i\|^2 - (1 - 2\rho_k^i L_2)\|y_k^i - z_k^i\|^2,$$
for each $k \in \mathbb{N} \cup \{0\}$. It follows that, for each $i = 1, 2, \ldots, N$, we have
$$(1 - 2\rho_k^i L_1)\|x_k - y_k^i\|^2 + (1 - 2\rho_k^i L_2)\|y_k^i - z_k^i\|^2 \le \|x_k - q\|^2 - \|z_k^i - q\|^2 \le \|x_k - z_k^i\|(\|x_k - q\| + \|z_k^i - q\|),$$
for each $k \in \mathbb{N} \cup \{0\}$. Combining this with (39) implies that
$$\lim_{k \to \infty} \|x_k - y_k^i\| = 0, \qquad (40)$$
and
$$\lim_{k \to \infty} \|y_k^i - z_k^i\| = 0,$$
for each $i = 1, 2, \ldots, N$. Thus, by using (38), (40) and $\lim_{k \to \infty} x_k = p$, we have
$$\lim_{k \to \infty} \bar{z}_k = p, \qquad (42)$$
and
$$\lim_{k \to \infty} y_k^i = p, \qquad (43)$$
for each $i = 1, 2, \ldots, N$.
Next, we claim that $p \in S$. From the definition of $u_k^j$, for each $j = 1, 2, \ldots, M$, we see that
$$(1 - \beta_k)\|T_j \bar{z}_k - \bar{z}_k\| = \|u_k^j - \bar{z}_k - \beta_k(t_k^j - \bar{z}_k)\| \le \|u_k^j - \bar{z}_k\| + \beta_k \|t_k^j - \bar{z}_k\| \le \|u_k^j - x_k\| + \beta_k \|t_k^j - x_k\| + (1 + \beta_k)\|x_k - \bar{z}_k\|,$$
for each $k \in \mathbb{N} \cup \{0\}$. Thus, in view of (33), (34), and (38), we get that
$$\lim_{k \to \infty} \|T_j \bar{z}_k - \bar{z}_k\| = 0,$$
for each $j = 1, 2, \ldots, M$. Combining this with (42), the demiclosedness at 0 of $I - T_j$ implies that
$$T_j p = p,$$
for each $j = 1, 2, \ldots, M$.
On the other hand, by Lemma 1 (i), for each $i = 1, 2, \ldots, N$, we see that
$$\rho_k^i [f_i(x_k, y) - f_i(x_k, y_k^i)] \ge \langle y_k^i - x_k, y_k^i - y \rangle, \quad \forall y \in C.$$
It follows that, for each $i = 1, 2, \ldots, N$, we get
$$f_i(x_k, y) - f_i(x_k, y_k^i) \ge -\frac{1}{\rho_k^i} \|y_k^i - x_k\| \|y_k^i - y\|, \quad \forall y \in C.$$
By using (31), (40), (43) and the weak continuity of each $f_i$ ($i \in \{1, 2, \ldots, N\}$), we have
$$f_i(p, y) \ge 0, \quad \forall y \in C,$$
for each $i = 1, 2, \ldots, N$. Thus, we can conclude that $p \in S$. The rest of the proof is similar to the arguments in the proof of Theorem 1, leading to the conclusion that the sequence $\{x_k\}$ converges strongly to $P_S(x_0)$.  □
Remark 2.
We note that the PSEM algorithm solves for $y_k^i, z_k^i$, $i = 1, 2, \ldots, N$, by using all N bifunctions, and computes $t_k^j, u_k^j$, $j = 1, 2, \ldots, M$, by using all M mappings; the farthest elements from $x_k$ among the $z_k^i$ and $u_k^j$ are then chosen for the next step of the calculation. In contrast, the CSEM algorithm solves only for $y_k, z_k$ by using a single bifunction, and computes only $t_k, u_k$ by using a single mapping. After that, both algorithms construct the closed convex subset $C_{k+1}$, and the approximation $x_{k+1}$ is the projection of $x_0$ onto $C_{k+1}$. We expect the number of iterations of the PSEM algorithm to be smaller than that of the CSEM algorithm, while the computational time of the CSEM algorithm should be smaller than that of the PSEM algorithm for sufficiently large N, M. On the other hand, the PHMEM algorithm solves for $y_k^i, z_k^i$, $i = 1, 2, \ldots, N$, by using all N bifunctions, and computes $u_k^j$, $j = 1, 2, \ldots, M$, by using all M mappings; the farthest elements from $x_k$ among the $z_k^i$ and $u_k^j$ are chosen similarly to the PSEM algorithm. However, PHMEM constructs two closed convex subsets $C_k, Q_k$, and the approximation $x_{k+1}$ is the projection of $x_0$ onto $C_k \cap Q_k$, which is more difficult to compute. We will focus on these observations in the next section.

4. A Numerical Experiment

This section will compare the two introduced algorithms, CSEM and PSEM, with the PHMEM algorithm, which was presented in [28]. The following setting is taken from Hieu et al. [28]. Let H = R be a Hilbert space with the standard inner product x , y = x y and the norm x = | x | , for each x , y H . To be considered here are the nonexpansive self-mappings T j , j = 1 , 2 , , M , and the bifunctions f i , i = 1 , 2 , , N , which are given on C = [ 0 , 1 ] by
$T_j(x) = \dfrac{x^j \sin^{j-1}(x)}{2j - 1}, \quad j = 1, 2, \ldots, M,$
and
$f_i(x, y) = B_i(x)(y - x), \quad i = 1, 2, \ldots, N,$
where $B_i(x) = 0$ if $0 \leq x \leq \xi_i$, and $B_i(x) = e^{x - \xi_i} + \sin(x - \xi_i) - 1$ if $\xi_i < x \leq 1$. Moreover, $0 < \xi_1 < \xi_2 < \cdots < \xi_N < 1$. Then, the bifunctions $f_i$, $i = 1, 2, \ldots, N$, satisfy conditions (A1)–(A4) (see [28]). Indeed, the bifunctions $f_i$, $i = 1, 2, \ldots, N$, are Lipschitz-type continuous with constants $L_1 = L_2 = 2$. Note that the solution set $S$ is nonempty because $0 \in S$.
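A minimal Python sketch of the test bifunctions is given below, useful for checking the quoted properties numerically. The function names `B` and `f` are ours; the parameters match the experiment's setting $N = 1000$, $\xi_i = \frac{i}{N+1}$.

```python
import math

N = 1000
XI = [i / (N + 1) for i in range(1, N + 1)]  # xi_i = i/(N+1), increasing in (0, 1)

def B(i, x):
    """B_i(x) = 0 on [0, xi_i]; e^{x - xi_i} + sin(x - xi_i) - 1 on (xi_i, 1]."""
    xi = XI[i - 1]
    if x <= xi:
        return 0.0
    return math.exp(x - xi) + math.sin(x - xi) - 1.0

def f(i, x, y):
    """The equilibrium bifunction f_i(x, y) = B_i(x) * (y - x)."""
    return B(i, x) * (y - x)
```

Note that each $B_i$ vanishes at $x = \xi_i$ and is continuous there, and $f_i(x, x) = 0$ for all $x$, consistent with condition (A1).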
The following numerical experiment is considered with these parameters: $\rho_k = \frac{1}{5}$, $\xi_{[k]_N} = \frac{[k]_N}{N+1}$ for the CSEM algorithm; $\rho_k^i = \frac{1}{5}$, $\xi_i = \frac{i}{N+1}$, $i = 1, 2, \ldots, N$, for the PSEM algorithm, where $N = 1000$ and $M = 2000$. The following six cases of the parameters $\alpha_k$ and $\beta_k$ are considered:
Case 1. $\alpha_k = 1 - \frac{1}{k+2}$, $\beta_k = \frac{1}{k+2}$.
Case 2. $\alpha_k = 1 - \frac{1}{k+2}$, $\beta_k = 0.5 + \frac{1}{k+3}$.
Case 3. $\alpha_k = 1 - \frac{1}{k+2}$, $\beta_k = 0.99 - \frac{1}{k+2}$.
Case 4. $\alpha_k = 1$, $\beta_k = \frac{1}{k+2}$.
Case 5. $\alpha_k = 1$, $\beta_k = 0.5 + \frac{1}{k+3}$.
Case 6. $\alpha_k = 1$, $\beta_k = 0.99 - \frac{1}{k+2}$.
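The six parameter choices can be encoded as functions of the iteration index $k$; the sketch below (an encoding of ours, not the authors' code) makes it easy to verify that every case keeps $\alpha_k \in (0, 1]$ and $\beta_k \in (0, 1)$, as the convergence conditions require.

```python
# Each case maps the iteration index k to the pair (alpha_k, beta_k).
cases = {
    1: (lambda k: 1 - 1 / (k + 2), lambda k: 1 / (k + 2)),
    2: (lambda k: 1 - 1 / (k + 2), lambda k: 0.5 + 1 / (k + 3)),
    3: (lambda k: 1 - 1 / (k + 2), lambda k: 0.99 - 1 / (k + 2)),
    4: (lambda k: 1.0,             lambda k: 1 / (k + 2)),
    5: (lambda k: 1.0,             lambda k: 0.5 + 1 / (k + 3)),
    6: (lambda k: 1.0,             lambda k: 0.99 - 1 / (k + 2)),
}

# Sanity check: alpha_k in (0, 1] and beta_k in (0, 1) for every case and k.
for alpha, beta in cases.values():
    assert all(0 < alpha(k) <= 1 and 0 < beta(k) < 1 for k in range(1000))
```

In cases 4–6, $\alpha_k \equiv 1$, so the Ishikawa step degenerates to a Mann step.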
The experiment was written in Matlab R2015b and performed on a desktop PC with an Intel(R) Core(TM) i3-3240 CPU @ 3.40 GHz and 4.00 GB of RAM. The function `fmincon` in the Matlab Optimization Toolbox was used to solve for the vectors $y_k, z_k$ for the CSEM algorithm and $y_k^i, z_k^i$, $i = 1, 2, \ldots, N$, for the PSEM algorithm. The set $C_{k+1}$ was computed by using the function `solve` in the Matlab Symbolic Math Toolbox. One can see that the set $C_{k+1}$ is an interval $[a, b]$, where $a, b \in [0, 1]$, $a \leq b$. Consequently, the metric projection of a point $x_0$ onto the set $C_{k+1}$ was computed in the form
$P_{C_{k+1}}(x_0) = \max\{\min\{x_0, b\}, a\},$
see [41]. The CSEM and PSEM algorithms were tested along with the PHMEM algorithm by using the stopping criterion $|x_{k+1} - x_k| < 10^{-4}$, and the results below are presented as averages over four starting points: $x_0 = 0.01, 0.25, 0.75$, and $1$.
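The clamp formula above is a standard closed form for projecting onto an interval; a minimal Python sketch (the function name is ours):

```python
def project_interval(x0, a, b):
    """Metric projection of x0 onto [a, b]: P_[a,b](x0) = max(min(x0, b), a)."""
    return max(min(x0, b), a)

# Points left of the interval map to a, points right of it map to b,
# and interior points are left unchanged.
p_left = project_interval(0.01, 0.2, 0.8)    # -> 0.2
p_right = project_interval(0.95, 0.2, 0.8)   # -> 0.8
p_inside = project_interval(0.50, 0.2, 0.8)  # -> 0.5
```

This is why the one-dimensional test problem is cheap to project onto: each shrinking step reduces to two comparisons.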
Table 1 shows that the parameter $\beta_k = \frac{1}{k+2}$ yields faster computational times and fewer iterations than the other choices (compare cases 1–3 with each other, and cases 4–6 with each other). Meanwhile, the parameter $\alpha_k = 1$, for which the Ishikawa iteration reduces to the Mann iteration, yields slower computational times and more iterations (compare case 1 with 4, 2 with 5, and 3 with 6). Moreover, the computational times of the CSEM algorithm are faster than those of the other algorithms, while the iteration counts of the PSEM algorithm are fewer than or equal to those of the other algorithms. Finally, we see that both the computational times and the iteration counts of the CSEM and PSEM algorithms are better than or equal to those of the PHMEM algorithm.
Remark 3.
Let us consider the case of parameters $\alpha_k = 1$ and $\beta_k = 0$, in which the Ishikawa iteration reduces to the Picard iteration. We note that the convergence of the PHMEM algorithm cannot be guaranteed in this setting. The computational results of the CSEM and PSEM algorithms are shown below.
From Table 2, we see that both the computational times and the iteration counts are better than in all the cases presented in Table 1. However, it should be cautioned that the Picard iteration method may not converge to a fixed point of a nonexpansive mapping in general; see, for example, [43].
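The cautionary point is visible already on $C = [0, 1]$ with the nonexpansive map $T(x) = 1 - x$: Picard iterates oscillate between two points, while the averaged (Krasnoselskii–Mann) iterates reach the fixed point $\frac{1}{2}$. The one-dimensional illustration below is our own example, in the spirit of [43], not taken from the paper.

```python
def T(x):
    """Nonexpansive self-map of [0, 1] with unique fixed point 1/2."""
    return 1.0 - x

# Picard iteration x_{k+1} = T(x_k): alternates between (about) 0.8 and 0.2,
# so it never converges to the fixed point.
x, picard = 0.2, []
for _ in range(6):
    x = T(x)
    picard.append(x)

# Mann (averaged) iteration x_{k+1} = (x_k + T(x_k)) / 2: for this map it
# reaches 1/2 immediately and stays there.
y, mann = 0.2, []
for _ in range(6):
    y = 0.5 * (y + T(y))
    mann.append(y)
```

This is exactly why the Ishikawa/Mann averaging parameters $\alpha_k, \beta_k$ appear in the convergence conditions of the proposed algorithms.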

5. Conclusions

We introduced methods for finding a common element of the set of fixed points of a finite family of quasi-nonexpansive mappings and the solution set of equilibrium problems for a finite family of pseudomonotone bifunctions in a real Hilbert space. In fact, we combined the extragradient and shrinking projection methods with Ishikawa's iteration concept to introduce sequences that converge strongly to a common solution of the considered problems. Some numerical experiments were also provided and discussed. As future research directions, a deeper convergence analysis of the proposed algorithms and some practical applications should be considered and implemented.

Author Contributions

Conceptualization, M.K., A.F. and N.P.; methodology, M.K., A.F. and N.P.; formal analysis, M.K., A.F. and N.P.; investigation, M.K., A.F. and N.P.; writing—original draft preparation, M.K., A.F. and N.P.; writing—review and editing, M.K., A.F. and N.P.; funding acquisition, N.P.

Funding

This research was partially supported by the Faculty of Science, Naresuan University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 127–149.
  2. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Existence and solution methods for equilibria. Eur. J. Oper. Res. 2013, 227, 1–11.
  3. Daniele, P.; Giannessi, F.; Maugeri, A. Equilibrium Problems and Variational Models; Kluwer: Dordrecht, The Netherlands, 2003.
  4. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. TMA 1992, 18, 1159–1166.
  5. Combettes, P.L.; Hirstoaga, A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136.
  6. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100.
  7. Tran, D.Q.; Dung, L.M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
  8. Anh, P.N. A hybrid extragradient method for pseudomonotone equilibrium problems and fixed point problems. Bull. Malays. Math. Sci. Soc. 2013, 36, 107–116.
  9. Ansari, Q.H.; Nimana, N.; Petrot, N. Split hierarchical variational inequality problems and related problems. Fixed Point Theory Appl. 2014, 2014, 208.
  10. Iiduka, H. Convex optimization over fixed point sets of quasi-nonexpansive and nonexpansive mappings in utility-based bandwidth allocation problems with operational constraints. J. Comput. Appl. Math. 2015, 282, 225–236.
  11. Moradlou, F.; Alizadeh, S. Strong convergence theorem by a new iterative method for equilibrium problems and symmetric generalized hybrid mappings. Mediterr. J. Math. 2016, 13, 379–390.
  12. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  13. Park, J.Y.; Jeong, J.U. Weak convergence to a fixed point of the sequence of Mann type iterates. J. Math. Anal. Appl. 1994, 184, 75–81.
  14. Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379.
  15. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 40, 147–150.
  16. Chidume, C.E.; Mutangadura, S.A. An example of the Mann iteration method for Lipschitz pseudocontractions. Proc. Am. Math. Soc. 2001, 129, 2359–2363.
  17. Takahashi, W.; Takeuchi, Y.; Kubota, R. Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008, 341, 276–286.
  18. Anh, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 62, 271–283.
  19. Anh, P.N.; Muu, L.D. A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems. Optim. Lett. 2014, 8, 727–738.
  20. Ceng, L.C.; Al-Homidan, S.; Ansari, Q.H.; Yao, J.C. An iterative scheme for equilibrium problems and fixed point problems of strict pseudo-contraction mappings. J. Comput. Appl. Math. 2009, 223, 967–974.
  21. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515.
  22. Plubtieng, S.; Kumam, P. Weak convergence theorem for monotone mappings and a countable family of nonexpansive semigroups. J. Comput. Appl. Math. 2009, 224, 614–621.
  23. Vuong, P.T.; Strodiot, J.J.; Nguyen, V.H. On extragradient-viscosity methods for solving equilibrium and fixed point problems in a Hilbert space. Optimization 2015, 64, 429–451.
  24. Dinh, B.V.; Kim, D.S. Extragradient algorithms for equilibrium problems and symmetric generalized hybrid mappings. Optim. Lett. 2016, 11, 537–553.
  25. Anh, P.K.; Hieu, D.V. Parallel and sequential hybrid methods for a finite family of asymptotically quasi-ϕ-nonexpansive mappings. J. Appl. Math. Comput. 2015, 48, 241–263.
  26. Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 2012, 51, 1065–1088.
  27. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
  28. Hieu, D.V.; Muu, L.D.; Anh, P.K. Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algor. 2016, 73, 197–217.
  29. Anh, P.K.; Chung, C.V. Parallel hybrid methods for a finite family of relatively nonexpansive mappings. Numer. Funct. Anal. Optim. 2014, 35, 649–664.
  30. Browder, F.E.; Petryshyn, W.V. Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 1967, 20, 197–228.
  31. Takahashi, W.; Wong, N.C.; Yao, J.C. Fixed point theorems for new generalized hybrid mappings in Hilbert spaces and applications. Taiwan J. Math. 2013, 17, 1597–1611.
  32. Browder, F.E. Semicontractive and semiaccretive nonlinear mappings in Banach spaces. Bull. Am. Math. Soc. 1968, 74, 660–665.
  33. Dotson, W.G., Jr. Fixed points of quasi-nonexpansive mappings. J. Aust. Math. Soc. 1972, 13, 167–170.
  34. Hojo, M.; Suzuki, T.; Takahashi, W. Fixed point theorems and convergence theorems for generalized hybrid non-self mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2013, 14, 363–376.
  35. Kawasaki, T.; Takahashi, W. Existence and mean approximation of fixed points of generalized hybrid mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2013, 14, 71–87.
  36. Itoh, S.; Takahashi, W. The common fixed point theory of single-valued mappings and multi-valued mappings. Pac. J. Math. 1978, 79, 493–508.
  37. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Daniele, P., Giannessi, F., Maugeri, A., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 2003; pp. 289–298.
  38. Karamardian, S.; Schaible, S.; Crouzeix, J.P. Characterizations of generalized monotone maps. J. Optim. Theory Appl. 1993, 76, 399–413.
  39. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43.
  40. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
  41. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2012.
  42. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984.
  43. Krasnoselski, M.A. Two observations about the method of successive approximations. Uspekhi Mat. Nauk 1955, 10, 123–127.
Table 1. Numerical results for six different cases of parameters $\alpha_k$ and $\beta_k$.

| Case | CSEM Time (s) | PSEM Time (s) | PHMEM Time (s) | CSEM Iter. | PSEM Iter. | PHMEM Iter. |
|------|---------------|---------------|----------------|------------|------------|-------------|
| 1 | 4.905197 | 165.099794 | 173.347257 | 14.25 | 13.75 | 14.25 |
| 2 | 7.326055 | 287.918141 | 345.025914 | 25.25 | 24.25 | 28.25 |
| 3 | 20.371064 | 834.001035 | 2004.693844 | 91.25 | 74.25 | 177 |
| 4 | 5.079676 | 173.091716 | 173.347257 | 14.75 | 14.25 | 14.25 |
| 5 | 8.016109 | 342.870819 | 345.025914 | 28.75 | 28.25 | 28.25 |
| 6 | 42.035240 | 1986.147273 | 2004.693844 | 200 | 177 | 177 |
Table 2. Numerical results for parameters $\alpha_k = 1$ and $\beta_k = 0$.

| CSEM Time (s) | PSEM Time (s) | CSEM Iter. | PSEM Iter. |
|---------------|---------------|------------|------------|
| 4.657696 | 137.200812 | 12.50 | 11.50 |
