Article

Halpern-Subgradient Extragradient Method for Solving Equilibrium and Common Fixed Point Problems in Reflexive Banach Spaces

by
Annel Thembinkosi Bokodisa
,
Lateef Olakunle Jolaoso
* and
Maggie Aphane
Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, Pretoria, South Africa
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(7), 743; https://doi.org/10.3390/math9070743
Submission received: 12 February 2021 / Revised: 17 March 2021 / Accepted: 19 March 2021 / Published: 31 March 2021
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract

:
In this paper, using the concept of Bregman distance, we introduce a new Bregman subgradient extragradient method for solving equilibrium and common fixed point problems in a real reflexive Banach space. The algorithm is designed such that the stepsize is chosen without prior knowledge of the Lipschitz constants. We also prove a strong convergence result for the sequence generated by our algorithm under mild conditions. We apply our result to solving variational inequality problems, and finally, we give some numerical examples to illustrate the efficiency and accuracy of the algorithm.

1. Introduction

In 1994, Blum and Oettli [1] revisited the Equilibrium Problem (EP), first introduced by Ky Fan, which has since become a fundamental concept and an important mathematical tool for solving many concrete problems. The EP generalizes many nonlinear problems, such as variational inequalities, minimization problems, fixed point problems, and saddle point problems, in a unified way; see, for instance, [1,2,3,4]. It is well known that several problems arising in many fields of pure and applied mathematics, such as economics, physics, optimization theory, engineering mechanics, management sciences, and network analysis, can be modeled as an EP; see, e.g., [5] for details.
Let $E$ be a real reflexive Banach space and $C \subseteq E$ a nonempty, closed, and convex subset. Let $g : C \times C \to \mathbb{R}$ be a bifunction. The EP is defined in the following manner:
Find $u^* \in C$ such that $g(u^*, z) \ge 0$ for all $z \in C$. (1)
We denote the set of solutions of problem (1) by $EP(g)$. Because the EP and its applications are of great importance, it has provided a rich area of research for many mathematicians. Recently, many authors have proposed numerous algorithms for solving the EP (1); see, for example, [6,7,8,9]. Some of those algorithms involve proximal point methods [10,11], projection methods [12,13], extragradient methods with or without line searches [14,15,16], descent methods based on merit functions [17,18], and methods using the Bregman distance [19,20].
In 1976, Korpelevich [21] introduced the extragradient method for solving the variational inequality problem (which is really a special case of the EP) for $L$-Lipschitz continuous and monotone operators in Euclidean spaces. Korpelevich proved the convergence of the generated sequence under the assumptions of Lipschitz continuity and strong monotonicity. The method, however, requires the calculation of two projections onto the closed convex set $C$ at each iteration. Korpelevich's extragradient strategy has been widely studied in the literature for solving increasingly general problems, such as finding a common point of the solution set of a variational inequality and the set of fixed points of a nonexpansive mapping. This kind of problem arises in various theoretical and modeling contexts; see [22,23] and the references therein. Some years later, Quoc et al. [15] presented a modified version of Korpelevich's method, in which they extended the method to solve EPs for pseudomonotone and Lipschitz continuous bifunctions. They replaced the two projections onto the feasible set $C$ with two convex optimization programs solved at each iteration. In 2013, Anh [24] presented a hybrid extragradient iteration method, in which the extragradient method was extended to fixed point and equilibrium problems for a pseudomonotone and Lipschitz-type continuous bifunction in the setting of a real Hilbert space.
Recently, numerous authors have studied and improved Korpelevich's extragradient method for variational inequalities in various ways; see, for example, [25,26]. The subgradient extragradient method is one such improvement; see [25]. It replaces the second projection in Korpelevich's extragradient method with a projection onto a simple half-space. It is important to note that the projection onto a half-space can easily be calculated explicitly, unlike the projection onto the whole set $C$, which can be complicated when $C$ is not simple. This approach has motivated several improvements of extragradient-like methods in the literature; see [27,28,29,30]. Recently, Hieu [31] extended the subgradient extragradient method to equilibrium problems in real Hilbert spaces. He proved that the subgradient extragradient method converges strongly to an element $x \in EP(g)$ provided the stepsize condition
$0 < \lambda_n < \min\left\{\frac{1}{2c_1}, \frac{1}{2c_2}\right\}$
is satisfied, where $c_1$ and $c_2$ are the Lipschitz-like constants of $g$. It is important to note that the constants $c_1$ and $c_2$ are very difficult to find and, even when they can be estimated, the estimates are often too small, which deteriorates the rate of convergence of the algorithm. There has been an increasing effort to find iterative methods for solving the EP without a prior condition involving the constants $c_1$ and $c_2$; see, e.g., [32,33,34,35,36,37,38]. On the other hand, Eskandani et al. [39] introduced a hybrid extragradient method for solving the EP (1) in a real reflexive Banach space. They showed that the sequence produced by their algorithm converges strongly to a solution of (1).
Motivated by the above results, we introduce a Halpern-subgradient extragradient method for solving pseudomonotone EPs and finding common fixed points of a countable family of quasi-Bregman nonexpansive mappings in real reflexive Banach spaces. The stepsize of our algorithm is determined by a self-adaptive technique, and we prove a strong convergence result without prior estimates of the Lipschitz constants. We also provide an application of our result to variational inequality problems and give some numerical experiments to show the numerical behaviour of our algorithm. This improves the work of Eskandani et al. [39] and extends the results of [32,33,34,35,36,37] to a reflexive Banach space using Bregman distance techniques.
Throughout this paper, $E$ denotes a real Banach space with dual $E^*$; $\langle x^*, x\rangle$ denotes the duality pairing between $x \in E$ and $x^* \in E^*$; $\forall$ means "for all"; $\min A$ is the minimum of a set $A$ and $\max B$ the maximum of a set $B$; $x_n \to u$ denotes the strong convergence of a sequence $\{x_n\} \subset E$ to a point $u \in E$, while $x_n \rightharpoonup u$ denotes its weak convergence to $u$; $\|\cdot\|$ denotes the norm on $E$ and $\|\cdot\|_*$ the norm on $E^*$; EP denotes the equilibrium problem and $EP(g)$ the solution set of the equilibrium problem; $F(T)$ is the set of fixed points of a mapping $T$; $\nabla f$ is the gradient of a function $f$; and $\mathbb{R}$ is the real line.

2. Preliminaries

In this section, we recall some definitions and basic facts and notions that we will need in the sequel.
Let $E$ and $C \subseteq E$ be as defined in the introduction, and denote the dual space of $E$ by $E^*$. The function $f : E \to (-\infty, +\infty]$ is always assumed to be admissible, i.e., proper, convex, and lower semicontinuous. Let dom $f = \{u^* \in E : f(u^*) < +\infty\}$ denote the domain of $f$, and let $u^* \in$ int dom $f$. We define the subdifferential of $f$ at $u^*$ as the convex set
$\partial f(u^*) = \{\eta \in E^* : f(u^*) + \langle z - u^*, \eta\rangle \le f(z), \ \forall z \in E\},$ (2)
and the Fenchel conjugate of $f$ is the function
$f^* : E^* \to (-\infty, +\infty], \qquad f^*(\eta) = \sup\{\langle \eta, u^*\rangle - f(u^*) : u^* \in E\}.$ (3)
It is not difficult to show that $f^*$ is also an admissible function.
For any convex function $f : E \to (-\infty, +\infty]$, we denote by $f^{\circ}(u^*, z)$ the right-hand derivative of $f$ at $u^* \in$ int dom $f$ in the direction $z$, that is,
$f^{\circ}(u^*, z) := \lim_{t \to 0^+} \dfrac{f(u^* + tz) - f(u^*)}{t}.$ (4)
If the limit as $t \to 0^+$ in (4) exists for each $z$, then the function $f$ is said to be Gâteaux differentiable at $u^*$. In this case, the gradient of $f$ at $u^*$ is the linear function $\nabla f(u^*)$ defined by $\langle \nabla f(u^*), z\rangle := f^{\circ}(u^*, z)$ for all $z \in E$. The function $f$ is said to be Gâteaux differentiable if it is Gâteaux differentiable at each $u^* \in$ int dom $f$. When the limit as $t \to 0^+$ in (4) is attained uniformly for all $z \in E$ with $\|z\| = 1$, we say that $f$ is Fréchet differentiable at $u^*$. Throughout this paper, $f : E \to \mathbb{R}$ is always an admissible function; under this condition, $f$ is continuous in int dom $f$.
The function f is said to be Legendre if it satisfies the following two conditions:
L1.
int dom $f \neq \emptyset$, and the subdifferential $\partial f$ is single-valued on its domain; and
L2.
int dom $f^* \neq \emptyset$, and $\partial f^*$ is single-valued on its domain.
It is well known that in reflexive Banach spaces, $\nabla f = (\nabla f^*)^{-1}$ (see [40], p. 83). Putting conditions (L1) and (L2) together, we get
$\mathrm{ran}\,\nabla f = \mathrm{dom}\,\nabla f^* = \mathrm{int\,dom}\,f^* \quad \text{and} \quad \mathrm{ran}\,\nabla f^* = \mathrm{dom}\,\nabla f = \mathrm{int\,dom}\,f.$
It also follows that $f$ is Legendre if and only if $f^*$ is Legendre ([41], Corollary 5.5, p. 634), and that the functions $f$ and $f^*$ are Gâteaux differentiable and strictly convex in the interior of their respective domains.
In 1967, Bregman [42] introduced the concept of the Bregman distance and found it to be a rich and effective tool in the design and analysis of feasibility and optimization algorithms. From now on, we assume that $f : E \to (-\infty, +\infty]$ is also Legendre. The Bregman distance is the bifunction $D_f : \mathrm{dom}\,f \times \mathrm{int\,dom}\,f \to [0, +\infty)$ defined by
$D_f(z, u^*) = f(z) - f(u^*) - \langle \nabla f(u^*), z - u^*\rangle.$
The Bregman distance does not satisfy the usual properties of a metric: it is not symmetric in general, and the triangle inequality does not hold. However, it generalizes the law of cosines, which in this setting is known as the three point identity: for any $u^* \in \mathrm{dom}\,f$ and $y, z \in \mathrm{int\,dom}\,f$,
$D_f(u^*, z) + D_f(z, y) - D_f(u^*, y) = \langle \nabla f(y) - \nabla f(z), u^* - z\rangle.$ (5)
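To make these notions concrete, here is a small numerical sketch in Python (our illustration, not part of the paper): for the negative entropy $f(x) = \sum_i x_i\log x_i$ on the positive orthant, $\nabla f(x)_i = 1 + \log x_i$ and $D_f$ reduces to the generalized Kullback-Leibler divergence, and the asymmetry of $D_f$ and the three point identity (5) can be checked directly.
```python
import numpy as np

# Illustrative check: for f(x) = sum_i x_i*log(x_i), grad f(x)_i = 1 + log(x_i),
# and D_f(z, u) = sum_i [z_i*log(z_i/u_i) - z_i + u_i] (generalized KL divergence).

rng = np.random.default_rng(0)

def grad_f(x):
    return 1.0 + np.log(x)

def D_f(z, u):
    return np.sum(z * np.log(z / u) - z + u)

u, y, z = rng.uniform(0.1, 2.0, (3, 4))   # three random points in the positive orthant

print(D_f(u, z), D_f(z, u))               # not symmetric in general
lhs = D_f(u, z) + D_f(z, y) - D_f(u, y)
rhs = (grad_f(y) - grad_f(z)) @ (u - z)   # <grad f(y) - grad f(z), u - z>
assert np.isclose(lhs, rhs)               # the three point identity (5) holds
```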
Following [43,44], the modulus of total convexity at $u^* \in \mathrm{int\,dom}\,f$ is the function $v_f(u^*, \cdot) : [0, +\infty) \to [0, +\infty]$ given by
$v_f(u^*, s) := \inf\{D_f(z, u^*) : z \in \mathrm{int\,dom}\,f, \ \|z - u^*\| = s\}.$
The function $f$ is termed totally convex at $u^* \in \mathrm{int\,dom}\,f$ if $v_f(u^*, s)$ is positive for every $s > 0$; additionally, $f$ is termed totally convex when it is totally convex at every point $u^* \in \mathrm{int\,dom}\,f$. We note in passing that $f$ is totally convex on bounded subsets if and only if $f$ is uniformly convex on bounded subsets (see [43]). Recall that $f$ is termed sequentially consistent [45] if, for any two sequences $\{u_n\}$ and $\{z_n\}$ in $E$ such that the first is bounded,
$\lim_{n\to\infty} D_f(u_n, z_n) = 0 \implies \lim_{n\to\infty}\|u_n - z_n\| = 0.$ (6)
Lemma 1
([46]). If $f : E \to \mathbb{R}$ is uniformly Fréchet differentiable and bounded on bounded subsets of $E$, then $\nabla f$ is uniformly continuous on bounded subsets of $E$ from the strong topology of $E$ to the strong topology of $E^*$.
Lemma 2
([47]). Let $f : E \to \mathbb{R}$ be a Gâteaux differentiable and totally convex function. If $x_0 \in E$ and the sequence $\{D_f(x_n, x_0)\}$ is bounded, then the sequence $\{x_n\}$ is also bounded.
The Bregman projection [42] with respect to $f$ of $x \in \mathrm{int\,dom}\,f$ onto a nonempty, closed, and convex set $C \subset \mathrm{int\,dom}\,f$ is defined as the necessarily unique vector $\mathrm{Proj}_C^f(x) \in C$ which satisfies
$D_f(\mathrm{Proj}_C^f(x), x) = \inf\{D_f(y, x) : y \in C\}.$
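As an illustration (with a specific $f$ and $C$ of our choosing, not the paper's setting), the Bregman projection is explicitly computable in some cases: for the negative entropy and $C$ the probability simplex, $\mathrm{Proj}_C^f(x) = x/\sum_i x_i$. The sketch below checks this closed form against a generic constrained solver.
```python
import numpy as np
from scipy.optimize import minimize

# Bregman projection sketch: f = negative entropy, so D_f is the generalized
# KL divergence; C = probability simplex. Closed form: Proj_C^f(x) = x / sum(x).

def D_f(y, x):
    return np.sum(y * np.log(y / x) - y + x)

x = np.array([0.5, 2.0, 1.0])
closed_form = x / x.sum()

# Numerically minimize D_f(., x) over the simplex {y >= 0, sum(y) = 1}.
res = minimize(lambda y: D_f(y, x), np.full(3, 1 / 3),
               bounds=[(1e-9, 1.0)] * 3,
               constraints=[{"type": "eq", "fun": lambda y: y.sum() - 1.0}],
               method="SLSQP")
print(closed_form, res.x)   # both approx [0.1429, 0.5714, 0.2857]
```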
Similar to the metric projection in Hilbert spaces, the Bregman projection with respect to totally convex and Gâteaux differentiable functions has a variational characterization ([46], Corollary 4.4, p. 23).
Suppose that $f$ is Gâteaux differentiable and totally convex on $\mathrm{int\,dom}\,f$. Let $x \in \mathrm{int\,dom}\,f$ and let $C \subset \mathrm{int\,dom}\,f$ be a nonempty, closed, and convex set. If $\hat{x} \in C$, then the following conditions are equivalent:
M1.
The vector x ^ C is the Bregman projection of x onto C with respect to f.
M2.
The vector $\hat{x} \in C$ is the unique solution of the variational inequality:
$\langle \hat{x} - y, \nabla f(x) - \nabla f(\hat{x})\rangle \ge 0, \quad \forall y \in C.$ (7)
M3.
The vector x ^ is the unique solution of the inequality:
$D_f(y, \hat{x}) + D_f(\hat{x}, x) \le D_f(y, x), \quad \forall y \in C.$
Definition 1.
Let $T : C \to C$ be a mapping. A point $x \in C$ is called a fixed point of $T$ if $Tx = x$. The set of fixed points of $T$ is denoted by $F(T)$. Additionally, a point $x^* \in C$ is said to be an asymptotic fixed point of $T$ if $C$ contains a sequence $\{x_n\}_{n=1}^{\infty}$ which converges weakly to $x^*$ and satisfies $\lim_{n\to\infty}\|x_n - Tx_n\| = 0$. The set of asymptotic fixed points of $T$ is denoted by $\hat{F}(T)$.
Definition 2
([48]). Let $C$ be a nonempty, closed, and convex subset of $E$. A mapping $T : C \to \mathrm{int\,dom}\,f$ is called
i.
Bregman firmly nonexpansive (BFNE for short) if
$\langle \nabla f(Tx) - \nabla f(Ty), Tx - Ty\rangle \le \langle \nabla f(x) - \nabla f(y), Tx - Ty\rangle, \quad \text{for all } x, y \in C;$
ii.
Bregman strongly nonexpansive (BSNE) with respect to a nonempty $\hat{F}(T)$ if
$D_f(p, Tx) \le D_f(p, x)$
for all $p \in \hat{F}(T)$ and $x \in C$, and if, whenever $\{x_n\}_{n=1}^{\infty} \subset C$ is bounded, $p \in \hat{F}(T)$, and
$\lim_{n\to\infty}\big(D_f(p, x_n) - D_f(p, Tx_n)\big) = 0$, it follows that $\lim_{n\to\infty} D_f(Tx_n, x_n) = 0$;
iii.
Quasi-Bregman nonexpansive (QBNE) if $F(T) \neq \emptyset$ and
$D_f(p, Tx) \le D_f(p, x), \quad \text{for all } x \in C, \ p \in F(T).$
It was remarked in [19] that, in the case where F ^ ( T ) = F ( T ) , the following inclusion holds:
$\mathrm{BFNE} \subseteq \mathrm{BSNE} \subseteq \mathrm{QBNE}.$
Let $B$ and $S$ be the closed unit ball and the unit sphere of a Banach space $E$, respectively, and let $rB = \{z \in E : \|z\| \le r\}$ for all $r > 0$. Then the function $f : E \to \mathbb{R}$ is said to be uniformly convex on bounded subsets (see [49]) if $\rho_r(t) > 0$ for all $r, t > 0$, where $\rho_r : [0, +\infty) \to [0, +\infty]$ is defined by
$\rho_r(t) = \inf_{x, y \in rB,\ \|x - y\| = t,\ \alpha \in (0,1)} \dfrac{\alpha f(x) + (1 - \alpha)f(y) - f(\alpha x + (1 - \alpha)y)}{\alpha(1 - \alpha)}$
for all $t \ge 0$. The function $\rho_r$ is called the gauge of uniform convexity of $f$. It is known that $\rho_r$ is a nondecreasing function. If $f$ is uniformly convex, then the following lemma holds.
Lemma 3
([50]). Let $E$ be a Banach space, $r > 0$ a constant, and $f : E \to \mathbb{R}$ a uniformly convex function on bounded subsets of $E$. Then
$f\Big(\sum_{k=0}^{n} a_k x_k\Big) \le \sum_{k=0}^{n} a_k f(x_k) - a_i a_j \rho_r(\|x_i - x_j\|)$
for all $i, j \in \{0, 1, 2, \dots, n\}$, $x_k \in rB$, and $a_k \in (0, 1)$, $k = 0, 1, 2, \dots, n$, with $\sum_{k=0}^{n} a_k = 1$, where $\rho_r$ is the gauge of uniform convexity of $f$.
Lemma 4
([47]). Suppose that $f : E \to (-\infty, +\infty]$ is a Legendre function. Then $f$ is totally convex on bounded subsets if and only if $f$ is uniformly convex on bounded subsets.
For each $u \in C$, the subdifferential of the convex function $g(u, \cdot)$ at $u$ is denoted by $\partial_2 g(u, u)$, i.e.,
$\partial_2 g(u, u) = \{w \in E^* : g(u, v) \ge g(u, u) + \langle w, v - u\rangle, \ \forall v \in C\} = \{w \in E^* : g(u, v) \ge \langle w, v - u\rangle, \ \forall v \in C\}.$
Lemma 5
([51]). Let $C$ be a nonempty convex subset of $E$ and let $f : C \to \mathbb{R}$ be a convex and subdifferentiable function on $C$. Then $f$ attains its minimum at $x \in C$ if and only if $0 \in \partial f(x) + N_C(x)$, where $N_C(x)$ is the normal cone of $C$ at $x$, that is,
$N_C(x) := \{x^* \in E^* : \langle x - z, x^*\rangle \ge 0, \ \forall z \in C\}.$
Throughout this paper, we assume that the following assumptions hold on g:
A1.
g is pseudomonotone, i.e., $g(x, y) \ge 0$ implies $g(y, x) \le 0$ for all $x, y \in C$;
A2.
g satisfies a Bregman-Lipschitz-type condition, i.e., there exist two positive constants $c_1, c_2$ such that
$g(x, y) + g(y, z) \ge g(x, z) - c_1 D_f(y, x) - c_2 D_f(z, y), \quad \forall x, y, z \in C;$ (8)
A3.
$g(x, x) = 0$ for all $x \in C$;
A4.
$g(\cdot, y)$ is continuous on $C$ for every $y \in C$; and
A5.
$g(x, \cdot)$ is convex, lower semicontinuous, and subdifferentiable on $C$ for every fixed $x \in C$.
Lemma 6
([52]). Let $E$ be a reflexive Banach space, $f : E \to \mathbb{R}$ a strongly coercive Bregman function, and $V_f : E \times E^* \to [0, +\infty)$ defined by
$V_f(u, u^*) = f(u) - \langle u, u^*\rangle + f^*(u^*), \quad \forall u \in E, \ u^* \in E^*.$
Then the following assertions hold:
i.
$D_f(u, \nabla f^*(u^*)) = V_f(u, u^*)$ for all $u \in E$, $u^* \in E^*$; and
ii.
$V_f(u, u^*) + \langle \nabla f^*(u^*) - u, y^*\rangle \le V_f(u, u^* + y^*)$ for all $u \in E$ and $u^*, y^* \in E^*$.
In addition, if $f : E \to (-\infty, +\infty]$ is a proper lower semicontinuous function, then $f^* : E^* \to (-\infty, +\infty]$ is a proper, weak* lower semicontinuous, and convex function. Hence, $V_f$ is convex in the second variable. Thus, for all $z \in E$, we have
$D_f\Big(z, \nabla f^*\Big(\sum_{i=1}^{N} t_i \nabla f(x_i)\Big)\Big) \le \sum_{i=1}^{N} t_i D_f(z, x_i),$
where $\{x_i\}_{i=1}^{N} \subset E$ and $\{t_i\}_{i=1}^{N} \subset (0, 1)$ with $\sum_{i=1}^{N} t_i = 1$.
Lemma 7
([53]). Let $\{\Theta_n\}$ be a sequence of non-negative real numbers satisfying the inequality
$\Theta_{n+1} \le (1 - \alpha_n)\Theta_n + \alpha_n\delta_n, \quad n \ge 0,$
where $\{\alpha_n\} \subset (0, 1)$ and $\{\delta_n\} \subset \mathbb{R}$ are such that $\sum_{n=0}^{\infty}\alpha_n = \infty$ and either $\limsup_{n\to\infty}\delta_n \le 0$ or $\sum_{n=0}^{\infty}|\alpha_n\delta_n| < \infty$. Then $\lim_{n\to\infty}\Theta_n = 0$.
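A quick numerical sanity check of Lemma 7 (illustrative only; the parameter choices below are ours, not the paper's):
```python
# With alpha_n = 1/(n+2), so that sum(alpha_n) diverges, and delta_n = -1/(n+2),
# so that limsup(delta_n) <= 0, the recursion of Lemma 7 drives Theta_n to 0.

theta = 5.0
for n in range(200_000):
    alpha = 1.0 / (n + 2)
    theta = (1 - alpha) * theta + alpha * (-1.0 / (n + 2))
print(theta)  # ~5e-5, consistent with Theta_n -> 0
```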
Lemma 8
([54]). Let $\{\Theta_n\}$ be a sequence of real numbers such that there exists a subsequence $\{\Theta_{n_i}\}$ of $\{\Theta_n\}$ with $\Theta_{n_i} < \Theta_{n_i+1}$ for all $i \in \mathbb{N}$. Consider the sequence of integers $\{m_k\}$ defined by
$m_k = \max\{j \le k : \Theta_j < \Theta_{j+1}\}.$
Then $\{m_k\}$ is a nondecreasing sequence with $\lim_{k\to\infty}m_k = \infty$, and for all sufficiently large $k \in \mathbb{N}$, the following estimates hold:
$\Theta_{m_k} \le \Theta_{m_k+1} \quad \text{and} \quad \Theta_k \le \Theta_{m_k+1}.$

3. Main Results

In this section, we present our algorithm and establish its convergence analysis.
Let $E$ be a real reflexive Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)–(A5). For each $i \in \mathbb{N}$, let $T_i : E \to E$ be a countable family of quasi-Bregman nonexpansive mappings such that each $I - T_i$ is demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of $E$. Suppose that the solution set
$Sol = EP(g) \cap \bigcap_{i=1}^{\infty} F(T_i) \neq \emptyset.$
We assume that the control sequences satisfy the following conditions.
C1.
$\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, and $\sum_{n=0}^{\infty}\alpha_n = +\infty$;
C2.
$\{\beta_{n,i}\} \subset (0, 1)$, $\sum_{i=0}^{\infty}\beta_{n,i} = 1$, and $\liminf_{n\to\infty}\beta_{n,0}\beta_{n,i} > 0$ for each $i \in \mathbb{N}$.
Now, suppose that the sequence { x n } is generated by the following algorithm.
Remark 1.
Note that when $x_n = y_n = u_n$, we are at a common solution of the EP and of the fixed point problem for the $T_i$, $i \in \mathbb{N}$. Moreover, the following highlight some of the advantages of Algorithm 1.
(i).
Eskandani et al. [39] introduced a hybrid extragradient method whose convergence depends on the Lipschitz constants $c_1$ and $c_2$, which are very difficult to estimate. In contrast, our Algorithm 1 does not depend on the Lipschitz constants, and its second argmin subproblem can easily be solved over the half-space $D_n$.
(ii).
Hieu and Strodiot [55] proposed an extragradient method with a line search technique (their Algorithm 4.1) in 2-uniformly convex Banach spaces. It is known that such line search methods are not always efficient because they consist of an inner loop which may consume extra computation time. In Algorithm 1, the stepsize is selected self-adaptively and does not involve any inner loop.
(iii).
Our algorithm also extends the subgradient extragradient method of  [32,33,34,35,36,37] to reflexive Banach spaces using Bregman distance.
Algorithm 1: Halpern-Subgradient Extragradient Method (H-SEM)
Initialization: Choose $x_0 \in C$, $u \in E$, $\lambda_0 > 0$, $\sigma \in (0, 1)$, and set $n = 1$.
Step 1:
Compute
$y_n = \arg\min\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\}.$
If $x_n = y_n$: set $z_n = x_n$ and go to Step 3. Else, do Step 2.
Step 2:
Compute
$z_n = \arg\min\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in D_n\},$
where $D_n = \{y \in E : \langle \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n), y - y_n\rangle \le 0\}$ and $w_n \in \partial_2 g(x_n, y_n)$.
Step 3:
Compute
$u_n = \nabla f^*\big(\alpha_n \nabla f(u) + (1 - \alpha_n)\nabla f(z_n)\big).$
Step 4:
Calculate $x_{n+1}$ and $\lambda_{n+1}$ as follows:
$x_{n+1} = \nabla f^*\Big(\beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_i u_n)\Big),$
and
$\lambda_{n+1} = \begin{cases} \min\left\{\lambda_n, \dfrac{\sigma(D_f(y_n, x_n) + D_f(z_n, y_n))}{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)}\right\} & \text{if } g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$
Set $n \leftarrow n + 1$ and go to Step 1.
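For readers who want to experiment, the following Python sketch implements Algorithm 1 in the Euclidean special case $f(x) = \frac{1}{2}\|x\|^2$, where $\nabla f = \nabla f^* = I$ and $D_f(y, x) = \frac{1}{2}\|y - x\|^2$, using the affine bifunction of Example 1 in Section 5. All data and parameter choices are illustrative assumptions, not the authors' test instances, and the subproblems are solved with SciPy rather than MATLAB's quadprog.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
m = 5
Q = np.diag(rng.uniform(0.5, 1.5, m))   # symmetric positive semidefinite
P = Q + 0.5 * np.eye(m)                 # then Q - P is negative semidefinite
q = rng.uniform(-2.0, 2.0, m)
lo, hi = 2.0, 5.0                       # C = {x : 2 <= x_j <= 5}

def g(x, y):
    return (P @ x + Q @ y + q) @ (y - x)

def w_subgrad(x, y):
    # g(x, .) is smooth here, so its subgradient at y is its gradient.
    return Q @ (y - x) + (P @ x + Q @ y + q)

x = rng.uniform(lo, hi, m)              # x_0 in C
u = rng.uniform(lo, hi, m)              # Halpern anchor point
lam, sigma = 0.36, 0.5

for n in range(1, 500):
    # Step 1: y_n = argmin over C of lam*g(x_n, y) + 0.5*||y - x_n||^2.
    y = minimize(lambda v: lam * g(x, v) + 0.5 * (v - x) @ (v - x),
                 x, bounds=[(lo, hi)] * m, method="L-BFGS-B").x
    # Step 2: same objective with g(y_n, .), minimized over the half-space
    # D_n = {v : <x_n - lam*w_n - y_n, v - y_n> <= 0} (grad f = identity).
    a = x - lam * w_subgrad(x, y) - y
    z = minimize(lambda v: lam * g(y, v) + 0.5 * (v - x) @ (v - x),
                 y, constraints=[{"type": "ineq", "fun": lambda v: a @ (y - v)}],
                 method="SLSQP").x
    # Steps 3-4: Halpern step toward u, then mix with T = P_C (quasi-nonexpansive).
    alpha, beta0 = 1.0 / (10 * (n + 1)), 0.5
    un = alpha * u + (1 - alpha) * z
    x_new = beta0 * un + (1 - beta0) * np.clip(un, lo, hi)
    # Self-adaptive stepsize update.
    denom = g(x, z) - g(x, y) - g(y, z)
    if denom > 0:
        lam = min(lam, sigma * (0.5 * (y - x) @ (y - x)
                                + 0.5 * (z - y) @ (z - y)) / denom)
    if np.linalg.norm(x_new - x) < 1e-6:
        x = x_new
        break
    x = x_new

print("approximate common solution:", np.round(x, 4))
```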
We now give the convergence analysis of Algorithm 1. We begin by proving the following necessary results.
Lemma 9.
The sequence $\{\lambda_n\}$ generated by our algorithm is bounded below by
$\min\Big\{\lambda_0, \dfrac{\sigma}{\max\{c_1, c_2\}}\Big\}.$
Proof. 
We deduce from the definition of $\lambda_{n+1}$ that $\lambda_{n+1} \le \lambda_n$, so $\{\lambda_n\}$ is monotonically nonincreasing. It follows from (8) that
$g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) \le c_1 D_f(y_n, x_n) + c_2 D_f(z_n, y_n).$
Thus, whenever $g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) > 0$,
$\dfrac{\sigma(D_f(y_n, x_n) + D_f(z_n, y_n))}{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)} \ge \dfrac{\sigma(D_f(y_n, x_n) + D_f(z_n, y_n))}{c_1 D_f(y_n, x_n) + c_2 D_f(z_n, y_n)} \ge \dfrac{\sigma(D_f(y_n, x_n) + D_f(z_n, y_n))}{\max\{c_1, c_2\}(D_f(y_n, x_n) + D_f(z_n, y_n))} = \dfrac{\sigma}{\max\{c_1, c_2\}}.$
Hence, the sequence $\{\lambda_n\}$ is bounded below by $\min\{\lambda_0, \sigma/\max\{c_1, c_2\}\}$. Since $\{\lambda_n\}$ is also nonincreasing, the limit
$\lim_{n\to\infty}\lambda_n = \lambda > 0$
exists.
   □
Lemma 10.
For all $u^* \in EP(g)$, the following inequality holds:
$D_f(u^*, z_n) \le D_f(u^*, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(y_n, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(z_n, y_n), \quad \forall n \ge 0.$
Proof. 
Because $z_n \in D_n$, it follows from Algorithm 1 that
$\langle \nabla f(x_n) - \lambda_n w_n - \nabla f(y_n), z_n - y_n\rangle \le 0,$
hence
$\langle \nabla f(x_n) - \nabla f(y_n), z_n - y_n\rangle \le \lambda_n\langle w_n, z_n - y_n\rangle.$ (11)
Additionally, since $w_n \in \partial_2 g(x_n, y_n)$, we have
$g(x_n, y) - g(x_n, y_n) \ge \langle w_n, y - y_n\rangle, \quad \forall y \in E.$
This implies that
$g(x_n, z_n) - g(x_n, y_n) \ge \langle w_n, z_n - y_n\rangle.$ (12)
Combining (11) and (12), we get
$\langle \nabla f(x_n) - \nabla f(y_n), z_n - y_n\rangle \le \lambda_n\{g(x_n, z_n) - g(x_n, y_n)\}.$ (13)
Additionally, since $z_n = \arg\min\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in D_n\}$, it follows from Lemma 5 that
$0 \in \partial\big(\lambda_n g(y_n, \cdot) + D_f(\cdot, x_n)\big)(z_n) + N_{D_n}(z_n).$
This implies that there exist $\bar{w}_n \in \partial_2 g(y_n, z_n)$ and $\xi \in N_{D_n}(z_n)$ such that
$\lambda_n\bar{w}_n + \nabla f(z_n) - \nabla f(x_n) + \xi = 0.$
Because $\xi \in N_{D_n}(z_n)$, we have $\langle \xi, y - z_n\rangle \le 0$ for all $y \in D_n$. Hence,
$\lambda_n\langle\bar{w}_n, y - z_n\rangle \ge \langle \nabla f(x_n) - \nabla f(z_n), y - z_n\rangle, \quad \forall y \in D_n.$ (14)
Additionally, since $\bar{w}_n \in \partial_2 g(y_n, z_n)$,
$g(y_n, y) - g(y_n, z_n) \ge \langle\bar{w}_n, y - z_n\rangle, \quad \forall y \in E.$
Thus,
$\lambda_n\{g(y_n, y) - g(y_n, z_n)\} \ge \lambda_n\langle\bar{w}_n, y - z_n\rangle, \quad \forall y \in E.$ (15)
From (14) and (15), we get
$\langle \nabla f(x_n) - \nabla f(z_n), y - z_n\rangle \le \lambda_n\{g(y_n, y) - g(y_n, z_n)\}, \quad \forall y \in D_n.$ (16)
Now, let $y = u^* \in EP(g)$ in (16) (note that $C \subseteq D_n$); then we have
$\langle \nabla f(x_n) - \nabla f(z_n), u^* - z_n\rangle \le \lambda_n\{g(y_n, u^*) - g(y_n, z_n)\}.$
Because $g(u^*, y_n) \ge 0$ and $g$ is pseudomonotone, we have $g(y_n, u^*) \le 0$. This implies that
$\langle \nabla f(x_n) - \nabla f(z_n), u^* - z_n\rangle \le -\lambda_n g(y_n, z_n).$ (17)
Adding (13) and (17), we have
$\langle \nabla f(x_n) - \nabla f(z_n), u^* - z_n\rangle + \langle \nabla f(x_n) - \nabla f(y_n), z_n - y_n\rangle \le \lambda_n\{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\}.$
By the Bregman three point identity (5), it follows that
$D_f(u^*, z_n) \le D_f(u^*, x_n) - D_f(y_n, x_n) - D_f(z_n, y_n) + \lambda_n\{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\}.$
Additionally, from the definition of $\lambda_{n+1}$, we have
$D_f(u^*, z_n) \le D_f(u^*, x_n) - D_f(y_n, x_n) - D_f(z_n, y_n) + \dfrac{\lambda_n}{\lambda_{n+1}}\lambda_{n+1}\{g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n)\}$
$\le D_f(u^*, x_n) - D_f(y_n, x_n) - D_f(z_n, y_n) + \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big)$
$= D_f(u^*, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(y_n, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(z_n, y_n).$
   □
Lemma 11.
The sequence { x n } that is generated by Algorithm 1 is bounded.
Proof. 
Let $u^* \in Sol$; then $u^* \in EP(g)$ and $u^* \in F(T_i)$ for all $i \in \mathbb{N}$. Because $\lim_{n\to\infty}\lambda_n$ exists and is positive (see Lemma 9), we have $\lambda_n/\lambda_{n+1} \to 1$, so $1 - \frac{\lambda_n}{\lambda_{n+1}}\sigma \to 1 - \sigma > 0$, and hence there exists $N \in \mathbb{N}$ such that
$1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma > 0, \quad \forall n \ge N.$
Thus, from Lemma 10, we have
$D_f(u^*, z_n) \le D_f(u^*, x_n), \quad \forall n \ge N.$
Therefore,
$D_f(u^*, u_n) = D_f\big(u^*, \nabla f^*(\alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n))\big) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, z_n) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n).$ (18)
Additionally,
$D_f(u^*, x_{n+1}) = D_f\Big(u^*, \nabla f^*\big(\beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\big)\Big) \le \beta_{n,0}D_f(u^*, u_n) + \sum_{i=1}^{\infty}\beta_{n,i}D_f(u^*, T_iu_n) \le \beta_{n,0}D_f(u^*, u_n) + \sum_{i=1}^{\infty}\beta_{n,i}D_f(u^*, u_n) = D_f(u^*, u_n).$ (19)
Hence, by (18) and (19), we have
$D_f(u^*, x_{n+1}) \le D_f(u^*, u_n) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) \le \max\{D_f(u^*, u), D_f(u^*, x_n)\} \le \cdots \le \max\{D_f(u^*, u), D_f(u^*, x_0)\}.$
Therefore, { D f ( u * , x n ) } is bounded and, by Lemma 2, the sequence { x n } is also bounded.    □
Lemma 12.
Let $s = \sup\{\|\nabla f(u_n)\|_*, \|\nabla f(T_iu_n)\|_* : n, i \in \mathbb{N}\}$ and let $\rho_s^* : [0, +\infty) \to [0, +\infty]$ be the gauge of uniform convexity of the conjugate function $f^*$. Then
$D_f(u^*, x_{n+1}) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*\big(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*\big).$ (20)
Proof. 
From our algorithm and Lemma 6(i), we have
$D_f(u^*, x_{n+1}) = D_f\Big(u^*, \nabla f^*\big(\beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\big)\Big) = V_f\Big(u^*, \beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\Big).$
It follows that
$D_f(u^*, x_{n+1}) = f(u^*) - \Big\langle u^*, \beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\Big\rangle + f^*\Big(\beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\Big).$
By Lemma 3 (applied to $f^*$) and (18), we have
$D_f(u^*, x_{n+1}) \le \beta_{n,0}f(u^*) + \sum_{i=1}^{\infty}\beta_{n,i}f(u^*) - \beta_{n,0}\langle u^*, \nabla f(u_n)\rangle - \sum_{i=1}^{\infty}\beta_{n,i}\langle u^*, \nabla f(T_iu_n)\rangle + \beta_{n,0}f^*(\nabla f(u_n)) + \sum_{i=1}^{\infty}\beta_{n,i}f^*(\nabla f(T_iu_n)) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*)$
$= \beta_{n,0}\big[f(u^*) - \langle u^*, \nabla f(u_n)\rangle + f^*(\nabla f(u_n))\big] + \sum_{i=1}^{\infty}\beta_{n,i}\big[f(u^*) - \langle u^*, \nabla f(T_iu_n)\rangle + f^*(\nabla f(T_iu_n))\big] - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*)$
$= \beta_{n,0}D_f(u^*, u_n) + \sum_{i=1}^{\infty}\beta_{n,i}D_f(u^*, T_iu_n) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*)$
$\le D_f(u^*, u_n) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*)$
$\le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*).$
   □
Next, we prove the strong convergence of the sequence that is generated by our algorithm.
Theorem 1.
The sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to $z = \mathrm{Proj}^f_{Sol}(u)$.
Proof. 
Let $u^* = \mathrm{Proj}^f_{Sol}(u)$ and put $\Gamma_n = D_f(u^*, x_n)$. We divide the proof into two cases.
Case 1: suppose that there exists $N \in \mathbb{N}$ such that $\{\Gamma_n\}$ is monotonically nonincreasing for all $n \ge N$. Then $\lim_{n\to\infty}\Gamma_n$ exists, since $\{\Gamma_n\}$ is bounded (Lemma 11), and thus
$\Gamma_n - \Gamma_{n+1} \to 0 \quad \text{as } n \to \infty.$
From (18), (19), and Lemma 10, we have the following:
$D_f(u^*, x_{n+1}) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, z_n)$
$\le \alpha_n D_f(u^*, u) + (1 - \alpha_n)\Big\{D_f(u^*, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(y_n, x_n) - \Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)D_f(z_n, y_n)\Big\}$
$= \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - (1 - \alpha_n)\Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big).$
Thus, we have
$(1 - \alpha_n)\Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - D_f(u^*, x_{n+1}).$
Because $\alpha_n \to 0$ and $\Gamma_n - \Gamma_{n+1} \to 0$, it follows that
$\Big(1 - \dfrac{\lambda_n}{\lambda_{n+1}}\sigma\Big)\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big) \to 0 \quad \text{as } n \to \infty.$ (21)
Note that $\lambda_n/\lambda_{n+1} \to 1$. Thus, from (21), we obtain
$(1 - \sigma)\big(D_f(y_n, x_n) + D_f(z_n, y_n)\big) \to 0,$
and since $\sigma \in (0, 1)$,
$D_f(y_n, x_n) + D_f(z_n, y_n) \to 0.$
Therefore,
$\lim_{n\to\infty}D_f(y_n, x_n) = 0 \quad \text{and} \quad \lim_{n\to\infty}D_f(z_n, y_n) = 0.$
By (6), we have
$\lim_{n\to\infty}\|y_n - x_n\| = 0 = \lim_{n\to\infty}\|z_n - y_n\|.$
Accordingly,
$\|z_n - x_n\| \le \|z_n - y_n\| + \|y_n - x_n\| \to 0 \quad \text{as } n \to \infty.$
Moreover, since $f$ is uniformly Fréchet differentiable and bounded on bounded subsets of $E$, $\nabla f$ is uniformly continuous on bounded subsets of $E$ (Lemma 1). Hence, we have
$\|\nabla f(u_n) - \nabla f(z_n)\|_* = \|\alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n) - \nabla f(z_n)\|_* = \alpha_n\|\nabla f(u) - \nabla f(z_n)\|_* \to 0 \quad \text{as } n \to \infty.$
Additionally, $\nabla f^*$ is uniformly continuous on bounded subsets of $E^*$. Hence,
$\lim_{n\to\infty}\|u_n - z_n\| = 0.$
Recall from (20) that
$D_f(u^*, x_{n+1}) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - \beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*).$
This implies that
$\beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*) \le \alpha_n D_f(u^*, u) + (1 - \alpha_n)D_f(u^*, x_n) - D_f(u^*, x_{n+1}) \to 0.$
Therefore,
$\lim_{n\to\infty}\beta_{n,0}\sum_{i=1}^{\infty}\beta_{n,i}\rho_s^*(\|\nabla f(u_n) - \nabla f(T_iu_n)\|_*) = 0.$
It follows from (C2) and the properties of the gauge $\rho_s^*$ that
$\lim_{n\to\infty}\|\nabla f(u_n) - \nabla f(T_iu_n)\|_* = 0 \quad \text{for each } i \in \mathbb{N}.$
Hence, by the uniform continuity of $\nabla f^*$ on bounded subsets of $E^*$,
$\lim_{n\to\infty}\|u_n - T_iu_n\| = 0 \quad \text{for each } i \in \mathbb{N}.$
Because $\{x_n\}$ is bounded and $E$ is reflexive, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z$. Since $\lim_{n\to\infty}\|y_n - x_n\| = 0$, we also have $y_{n_k} \rightharpoonup z$; as $y_n \in C$ and $C$ is weakly closed, $z \in C$. Additionally, since
$y_n = \arg\min\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\},$
it follows from Lemma 5 that
$0 \in \partial\big(\lambda_n g(x_n, \cdot) + D_f(\cdot, x_n)\big)(y_n) + N_C(y_n).$
This implies that
$\lambda_n w_n + \nabla f(y_n) - \nabla f(x_n) + \xi = 0, \quad \text{where } w_n \in \partial_2 g(x_n, y_n) \text{ and } \xi \in N_C(y_n).$ (25)
Note that
$\langle \xi, y - y_n\rangle \le 0, \quad \forall y \in C.$
Hence, from (25), we get
$\lambda_n\langle w_n, y - y_n\rangle + \langle \xi, y - y_n\rangle = \langle \nabla f(x_n) - \nabla f(y_n), y - y_n\rangle,$
which implies that
$\lambda_n\langle w_n, y - y_n\rangle \ge \langle \nabla f(x_n) - \nabla f(y_n), y - y_n\rangle, \quad \forall y \in C.$ (26)
Additionally, since $w_n \in \partial_2 g(x_n, y_n)$,
$g(x_n, y) - g(x_n, y_n) \ge \langle w_n, y - y_n\rangle, \quad \forall y \in C.$ (27)
From (26) and (27), we have
$\lambda_n\big(g(x_n, y) - g(x_n, y_n)\big) \ge \langle \nabla f(x_n) - \nabla f(y_n), y - y_n\rangle, \quad \forall y \in C.$ (28)
Because $\lim_{n\to\infty}\|x_n - y_n\| = 0$ and $f$ is uniformly Fréchet differentiable, $\nabla f$ is uniformly continuous on bounded subsets of $E$. Hence,
$\lim_{n\to\infty}\|\nabla f(x_n) - \nabla f(y_n)\|_* = 0.$ (29)
Therefore, passing to the limit in (28) along the subsequence $\{n_k\}$ and using (29), (A4), and $\lim_{n\to\infty}\lambda_n = \lambda > 0$, we have
$g(z, y) \ge 0, \quad \forall y \in C.$
Hence,
$z \in EP(g).$
Furthermore, since $\|u_n - x_n\| \le \|u_n - z_n\| + \|z_n - x_n\| \to 0$, we have $u_{n_k} \rightharpoonup z$; together with $\|u_n - T_iu_n\| \to 0$, this gives $z \in \hat{F}(T_i)$. By the demiclosedness of $I - T_i$ at zero, we have $z \in F(T_i)$ for all $i \in \mathbb{N}$. Therefore, it follows that
$z \in \bigcap_{i=1}^{\infty}F(T_i),$
and, combining this with $z \in EP(g)$, we have
$z \in Sol = EP(g) \cap \bigcap_{i=1}^{\infty}F(T_i).$
Now, we prove that $\{x_n\}$ converges strongly to $u^*$. Using Lemma 6, we have
$D_f(u^*, x_{n+1}) \le D_f(u^*, u_n) = D_f\big(u^*, \nabla f^*(\alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n))\big) = V_f\big(u^*, \alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n)\big)$
$\le V_f\big(u^*, \alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n) - \alpha_n(\nabla f(u) - \nabla f(u^*))\big) + \alpha_n\big\langle \nabla f^*\big(\alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n)\big) - u^*, \nabla f(u) - \nabla f(u^*)\big\rangle$
$= V_f\big(u^*, (1 - \alpha_n)\nabla f(z_n) + \alpha_n\nabla f(u^*)\big) + \alpha_n\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle$
$\le (1 - \alpha_n)V_f(u^*, \nabla f(z_n)) + \alpha_nV_f(u^*, \nabla f(u^*)) + \alpha_n\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle$
$= (1 - \alpha_n)D_f(u^*, z_n) + \alpha_nD_f(u^*, u^*) + \alpha_n\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle$
$= (1 - \alpha_n)D_f(u^*, z_n) + \alpha_n\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle$
$\le (1 - \alpha_n)D_f(u^*, x_n) + \alpha_n\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle.$ (32)
Let $b_n = \langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle$. In view of Lemma 7, it suffices to show that $\limsup_{n\to\infty}b_n \le 0$. Choosing the subsequence $\{x_{n_k}\}$ so that it attains the $\limsup$ and converges weakly to $z \in Sol$ as above, it follows from (7), with $u^* = \mathrm{Proj}^f_{Sol}(u)$, that
$\limsup_{n\to\infty}\langle x_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle = \lim_{k\to\infty}\langle x_{n_k} - u^*, \nabla f(u) - \nabla f(u^*)\rangle = \langle z - u^*, \nabla f(u) - \nabla f(u^*)\rangle \le 0.$
Hence,
$\limsup_{n\to\infty}\langle x_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle \le 0.$
Because $\|u_n - x_n\| \to 0$, then
$\limsup_{n\to\infty}\langle u_n - u^*, \nabla f(u) - \nabla f(u^*)\rangle \le 0.$ (33)
Therefore, applying Lemma 7 to (32) and using (33), we have $D_f(u^*, x_n) \to 0$. This implies that $\lim_{n\to\infty}\|x_n - u^*\| = 0$. Hence, $\{x_n\}$ converges strongly to $u^*$.
Case 2: suppose that $\{\Gamma_n\}$ is not eventually nonincreasing; that is, there exists a subsequence $\{\Gamma_{n_i}\}$ such that
$\Gamma_{n_i} < \Gamma_{n_i+1} \quad \text{for all } i \in \mathbb{N}.$
Then, by Lemma 8, there exists a nondecreasing sequence $\{m_n\} \subset \mathbb{N}$ with $m_n \to \infty$ such that $\Gamma_{m_n} \le \Gamma_{m_n+1}$ and $\Gamma_n \le \Gamma_{m_n+1}$ for all sufficiently large $n$, where $m_n = \max\{j \le n : \Gamma_j < \Gamma_{j+1}\}$. Following an analysis similar to that of Case 1, we have
$\lim_{n\to\infty}\|y_{m_n} - x_{m_n}\| = 0, \quad \lim_{n\to\infty}\|z_{m_n} - x_{m_n}\| = 0, \quad \lim_{n\to\infty}\|u_{m_n} - x_{m_n}\| = 0,$
and
$\lim_{n\to\infty}\|u_{m_n} - T_iu_{m_n}\| = 0.$
Additionally, we have
$\limsup_{n\to\infty}\langle u_{m_n} - u^*, \nabla f(u) - \nabla f(u^*)\rangle \le 0,$ (34)
and
$D_f(u^*, x_{m_n+1}) \le (1 - \alpha_{m_n})D_f(u^*, x_{m_n}) + \alpha_{m_n}\langle u_{m_n} - u^*, \nabla f(u) - \nabla f(u^*)\rangle.$
Because $\Gamma_{m_n} \le \Gamma_{m_n+1}$, we have
$0 \le D_f(u^*, x_{m_n+1}) - D_f(u^*, x_{m_n}).$
Therefore,
$D_f(u^*, x_{m_n}) \le D_f(u^*, x_{m_n+1}) \le (1 - \alpha_{m_n})D_f(u^*, x_{m_n}) + \alpha_{m_n}\langle u_{m_n} - u^*, \nabla f(u) - \nabla f(u^*)\rangle.$
Hence, since $\alpha_{m_n} > 0$,
$D_f(u^*, x_{m_n}) \le \langle u_{m_n} - u^*, \nabla f(u) - \nabla f(u^*)\rangle.$
It follows from (34) that
$\limsup_{n\to\infty}D_f(u^*, x_{m_n}) \le 0,$
and therefore
$\lim_{n\to\infty}D_f(u^*, x_{m_n}) = 0.$
This, together with the recursion above, also gives $\lim_{n\to\infty}D_f(u^*, x_{m_n+1}) = 0$. Consequently,
$\lim_{n\to\infty}D_f(u^*, x_n) \le \lim_{n\to\infty}D_f(u^*, x_{m_n+1}) = 0.$
Hence, { x n } converges strongly to u * . This completes the proof.    □
The following can be obtained as consequences of our main theorem.
Corollary 1.
Let $E$ be a real reflexive Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)–(A5). For each $i \in \mathbb{N}$, let $T_i : E \to E$ be a countable family of Bregman strongly nonexpansive mappings such that $\hat{F}(T_i) = F(T_i)$. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of $E$. Suppose that the solution set
$Sol = EP(g) \cap \bigcap_{i=1}^{\infty}F(T_i) \neq \emptyset.$
Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $u^* \in Sol$.
Corollary 2.
Let $E$ be a real reflexive Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $g : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)–(A5), and let $T : E \to E$ be a quasi-Bregman nonexpansive mapping such that $I - T$ is demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of $E$. Suppose that the solution set
$Sol = EP(g) \cap F(T) \neq \emptyset.$
Then the sequence $\{x_n\}$ generated by Algorithm 1 converges strongly to a point $u^* \in Sol$.

4. Application: Variational Inequality

In this section, we consider the classical variational inequality problem, which is a particular case of the equilibrium problem.
Let $A : C \to E^*$ be a mapping. The variational inequality problem, denoted by VIP, is to find
$z \in C \ \text{such that} \ \langle Az, y - z\rangle \ge 0, \quad \forall y \in C.$
We denote the solution set of the VIP by $VI(C, A)$. Variational inequalities are important mathematical tools for solving many problems arising in the applied sciences, such as optimization, network equilibrium, mechanics, engineering, and economics (see, for example, [28,29,30] and the references therein).
The following results are important in this section.
Lemma 13
([48]). Let $E$ be a real reflexive Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $g : C \times C \to \mathbb{R}$ be a bifunction such that $g(x, x) = 0$, and let $f : E \to \mathbb{R}$ be a Legendre and totally convex coercive function. Then a point $x^* \in EP(g)$ if and only if $x^*$ solves the following minimization problem:
$\min\{\lambda g(x^*, y) + D_f(y, x^*) : y \in C\}, \quad \text{where } \lambda > 0.$
Lemma 14
([39]). Let $C$ be a nonempty, closed, and convex subset of a reflexive Banach space $E$, let $A : C \to E^*$ be a mapping, and let $f : E \to \mathbb{R}$ be a Legendre function. Then
$\mathrm{Proj}_C^f\big(\nabla f^*(\nabla f(x) - \lambda A(y))\big) = \arg\min_{w \in C}\{\lambda\langle w - y, A(y)\rangle + D_f(w, x)\}$
for all $x \in E$, $y \in C$, and $\lambda \in (0, +\infty)$.
Now, by setting $g(x, y) = \langle Ax, y - x\rangle$ for all $x, y \in C$, it follows from Lemmas 13 and 14 that
$\arg\min\{\lambda_n g(x_n, y) + D_f(y, x_n) : y \in C\} = \arg\min\{\lambda_n\langle Ax_n, y - x_n\rangle + D_f(y, x_n) : y \in C\} = \mathrm{Proj}_C^f\big(\nabla f^*(\nabla f(x_n) - \lambda_nAx_n)\big).$
Similarly,
$\arg\min\{\lambda_n g(y_n, y) + D_f(y, x_n) : y \in T_n\} = \mathrm{Proj}_{T_n}^f\big(\nabla f^*(\nabla f(x_n) - \lambda_nAy_n)\big).$
Note that
$g(x_n, z_n) - g(x_n, y_n) - g(y_n, z_n) = \langle Ax_n, z_n - x_n\rangle - \langle Ax_n, y_n - x_n\rangle - \langle Ay_n, z_n - y_n\rangle = \langle Ax_n, z_n - y_n\rangle - \langle Ay_n, z_n - y_n\rangle = \langle Ax_n - Ay_n, z_n - y_n\rangle.$
We assume that $A : C \to E^*$ satisfies the following assumptions.
(B1)
A is pseudomonotone, i.e., for $x, y \in C$ we have
$\langle Ax, y - x\rangle \ge 0 \implies \langle Ay, y - x\rangle \ge 0;$
(B2)
A is $L$-Lipschitz continuous with respect to $D_f$, i.e., there exists $L > 0$ such that
$D_f(Ax, Ay) \le L\,D_f(x, y), \quad \forall x, y \in C;$
(B3)
A is weakly sequentially continuous, i.e., for any sequence $\{x_n\} \subset C$ such that $x_n \rightharpoonup \bar{x} \in C$, we have $Ax_n \rightharpoonup A\bar{x}$.
Therefore, we can apply our result to solving the VIP as follows:
Theorem 2.
Let $E$ be a real reflexive Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $A : C \to E^*$ be a mapping satisfying (B1)–(B3). For each $i \in \mathbb{N}$, let $T_i : E \to E$ be a countable family of quasi-Bregman nonexpansive mappings such that each $I - T_i$ is demiclosed at zero. Let $f : E \to \mathbb{R}$ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of $E$. Suppose that the solution set
$Sol = VI(C, A) \cap \bigcap_{i=1}^{\infty}F(T_i) \neq \emptyset.$
Then the sequence $\{x_n\}$ generated by the following algorithm converges strongly to a point $u^* \in Sol$.
For $x_0, u \in C$ and $\lambda_0 > 0$, compute
$y_n = \mathrm{Proj}_C^f\big(\nabla f^*(\nabla f(x_n) - \lambda_nAx_n)\big),$
$T_n = \{y \in E : \langle \nabla f(x_n) - \lambda_nAx_n - \nabla f(y_n), y - y_n\rangle \le 0\},$
$z_n = \mathrm{Proj}_{T_n}^f\big(\nabla f^*(\nabla f(x_n) - \lambda_nAy_n)\big),$
$u_n = \nabla f^*\big(\alpha_n\nabla f(u) + (1 - \alpha_n)\nabla f(z_n)\big),$
$x_{n+1} = \nabla f^*\Big(\beta_{n,0}\nabla f(u_n) + \sum_{i=1}^{\infty}\beta_{n,i}\nabla f(T_iu_n)\Big),$
$\lambda_{n+1} = \begin{cases} \min\Big\{\lambda_n, \dfrac{\sigma(D_f(y_n, x_n) + D_f(z_n, y_n))}{\langle Ax_n - Ay_n, z_n - y_n\rangle}\Big\} & \text{if } \langle Ax_n - Ay_n, z_n - y_n\rangle > 0, \\ \lambda_n & \text{otherwise}, \end{cases}$
where $\{\alpha_n\}$ and $\{\beta_{n,i}\}$ are sequences in $(0, 1)$ such that conditions (C1) and (C2) are satisfied.
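In the Euclidean case $f(x) = \frac{1}{2}\|x\|^2$, both projections in the scheme above are explicit: $\mathrm{Proj}_C^f$ is the metric projection, and the projection onto the half-space $T_n = \{v : \langle a, v\rangle \le b\}$ has the closed form $P_{T_n}(x) = x - \frac{\max\{0, \langle a, x\rangle - b\}}{\|a\|^2}a$. The sketch below (with an illustrative monotone operator and box constraint of our choosing, not the paper's data) shows the resulting iteration:
```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
M = rng.uniform(-1, 1, (m, m))
A = lambda x: (M - M.T) @ x + x        # monotone: skew-symmetric part + identity
lo, hi = -2.0, 5.0
P_C = lambda x: np.clip(x, lo, hi)     # metric projection onto the box C

def P_halfspace(x, a, b):
    # projection onto the half-space {v : <a, v> <= b}
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

x, u, lam, sigma = rng.uniform(lo, hi, m), np.zeros(m), 0.5, 0.4
for n in range(1, 2000):
    y = P_C(x - lam * A(x))
    a = x - lam * A(x) - y             # T_n = {v : <a, v - y_n> <= 0}
    z = P_halfspace(x - lam * A(y), a, a @ y)
    alpha = 1.0 / (10 * (n + 1))
    un = alpha * u + (1 - alpha) * z   # Halpern step toward u
    x_new = 0.5 * un + 0.5 * P_C(un)   # single map T_1 = P_C, beta_{n,0} = 1/2
    denom = (A(x) - A(y)) @ (z - y)    # self-adaptive stepsize update
    if denom > 0:
        lam = min(lam, sigma * 0.5 * ((y - x) @ (y - x) + (z - y) @ (z - y)) / denom)
    if np.linalg.norm(x_new - x) < 1e-8:
        x = x_new
        break
    x = x_new
print("approximate solution of the VIP:", np.round(x, 4))
```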

5. Numerical Examples

In this section, we perform some numerical experiments to illustrate the performance of the proposed method and compare it with the method proposed by Eskandani et al. [39] (shortly, H-EGM), Algorithm 3.1 of Hieu and Strodiot [55] (shortly, HS-ALG. I), and Algorithm 4.1 of Hieu and Strodiot [55] (shortly, HS-ALG. II). All of the optimization subproblems are effectively solved by the quadprog function in MATLAB. The computations are carried out in MATLAB on a Lenovo X250 with an Intel(R) Core i7 vPro processor and 8.00 GB of RAM.
Example 1.
First, we consider the generalized Nash equilibrium problem described as follows:
Assume that there are $m$ companies that produce a specific item. Let $x$ denote the vector whose entry $x_j$ represents the quantity of the item produced by company $j$. We assume that the price $p_j(s)$ is a decreasing affine function of $s$ with $s = \sum_{j=1}^{m}x_j$, i.e., $p_j(s) = \alpha_j - \beta_js$, where $\alpha_j > 0$ and $\beta_j > 0$. It follows that the profit generated by company $j$ is given by
$g_j(x) = p_j(s)x_j - c_j(x_j),$
where $c_j(x_j)$ is the cost of producing $x_j$. Assume that $C_j = [x_j^{\min}, x_j^{\max}]$ is the strategy set of company $j$; then the strategy set of the model is $C := C_1 \times C_2 \times \cdots \times C_m$. Each company seeks to maximize its profit by choosing the corresponding production level, under the assumption that the production of the other companies is a parametric input. The renowned Nash equilibrium concept provides a commonly used approach to this model.
We recall that a point $x^* \in C = C_1 \times C_2 \times \cdots \times C_m$ is called an equilibrium point of the model if
$g_j(x^*) \ge g_j(x^*[x_j]) \quad \text{for all } x_j \in C_j, \ j = 1, 2, \dots, m,$
where the vector $x^*[x_j]$ stands for the vector obtained from $x^*$ by replacing $x_j^*$ with $x_j$. By taking $g(x, y) := \psi(x, y) - \psi(x, x)$ with $\psi(x, y) := -\sum_{j=1}^{m}g_j(x[y_j])$, the problem of finding a Nash equilibrium point of the model can be formulated as follows:
Find $x^* \in C : g(x^*, x) \ge 0$ for all $x \in C$.
Now, suppose that the cost function $c_j(x_j)$ is increasing and affine for every $j$. This assumption means that the cost of producing a unit increases as the quantity of production gets larger. In this case, the bifunction $g$ can be written in the form
$g(x, y) = \langle Px + Qy + q, y - x\rangle,$
where $q \in \mathbb{R}^m$ and $P, Q$ are two matrices of order $m$ such that $Q$ is symmetric positive semidefinite and $Q - P$ is symmetric negative semidefinite. This shows that $g$ is pseudomonotone. Moreover, it is easy to show that $g$ satisfies the Lipschitz-type condition with $c_1 = c_2 = \frac{\|P - Q\|}{2}$.
We suppose that the set C has the form
$C = \{x \in \mathbb{R}^m : 2 \le x_j \le 5, \ j = 1, 2, \dots, m\}.$
The matrices $P, Q$ are randomly generated such that their required properties are satisfied, and the vector $q$ is generated randomly with entries in $(-2, 2)$. The mapping $T_i : \mathbb{R}^m \to \mathbb{R}^m$ is defined as the projection $P_C$, and the initial vector $x_0 \in \mathbb{R}^m$ is generated randomly for $m = 10, 30, 50, 100$. For Algorithm 1, we choose $u \in \mathbb{R}^m$, $\lambda_0 = 0.36$, $\alpha_n = \frac{1}{10(n+1)}$, and, for each $n \in \mathbb{N}$ and $i \ge 0$, $\{\beta_{n,i}\}$ defined by
$\beta_{n,i} = \begin{cases} 0 & \text{if } n < i, \\ 1 - \dfrac{n}{n+1}\sum_{k=1}^{n}\dfrac{1}{2^k} & \text{if } n = i, \\ \dfrac{1}{2^{i+1}}\cdot\dfrac{n}{n+1} & \text{if } n > i. \end{cases}$
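As a sanity check (our own, under the reconstruction of the formula above), the weights $\beta_{n,i}$ are nonnegative and sum to 1 over $i = 0, \dots, n$ for each $n$, as required by condition (C2):
```python
def beta(n, i):
    # The weights beta_{n,i} defined above (reconstructed formula).
    if n < i:
        return 0.0
    if n == i:
        return 1.0 - n / (n + 1) * sum(1.0 / 2 ** k for k in range(1, n + 1))
    return (1.0 / 2 ** (i + 1)) * n / (n + 1)

for n in range(1, 8):
    total = sum(beta(n, i) for i in range(0, n + 1))
    assert abs(total - 1.0) < 1e-12, (n, total)
print("beta_{n,i} sums to 1 over i = 0,...,n for each n tested")
```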
For H-EGM, we take $N = 1$, $M = 5$, $\alpha_n = \frac{1}{10(n+1)}$, $\beta_{n,r} = \frac{1}{6}$, and choose the best stepsize $\lambda_n = \frac{1}{2.02c}$ for the algorithm. Also, for HS-ALG. I, we take $\alpha_n = \frac{1}{10(n+1)}$, $\beta_n = \frac{2n}{5n+8}$, $\lambda_n = \frac{1}{2.02c}$. Similarly, for HS-ALG. II, we choose $\alpha_n = \frac{1}{10(n+1)}$, $\beta_n = \frac{2n}{5n+8}$, $\gamma = 0.08$, $\alpha = 0.64$, $\nu = 0.8$.
We use the stopping criterion $\|x_{n+1} - x_n\|^2 < \epsilon$ with $\epsilon = 10^{-5}$ to illustrate the convergence of the algorithms. The numerical results are shown in Table 1 and Figure 1.
From Table 1 and Figure 1, we see that Algorithm 1 performs better than H-EGM, HS-ALG. I, and HS-ALG. II. This is because Algorithm 1 uses a self-adaptive technique to select its stepsize, while H-EGM and HS-ALG. I use the fixed choice $\lambda_n = \frac{1}{2.02c}$, which deteriorates the performance of these algorithms as the value of $m$ increases. Furthermore, HS-ALG. II uses a computationally expensive line search procedure to determine its stepsize at each iteration; this technique involves an inner loop and consumes additional computation time.
Example 2.
Let $E = \ell_2(\mathbb{R})$ be the linear space whose elements are all 2-summable sequences $\{x_j\}_{j=1}^{\infty}$ of scalars in $\mathbb{R}$, that is,
$\ell_2(\mathbb{R}) := \Big\{x = (x_1, x_2, \dots, x_j, \dots) : x_j \in \mathbb{R} \ \text{and} \ \sum_{j=1}^{\infty}|x_j|^2 < \infty\Big\},$
with inner product $\langle\cdot,\cdot\rangle : \ell_2 \times \ell_2 \to \mathbb{R}$ and norm $\|\cdot\| : \ell_2 \to \mathbb{R}$ defined by $\langle x, y\rangle := \sum_{j=1}^{\infty}x_jy_j$ and $\|x\| = \big(\sum_{j=1}^{\infty}|x_j|^2\big)^{1/2}$, where $x = \{x_j\}_{j=1}^{\infty}$ and $y = \{y_j\}_{j=1}^{\infty}$. Let $C = \{x \in E : \|x\| \le 1\}$. Define the bifunction $g : C \times C \to \mathbb{R}$ by
$g(x, y) = (3 - \|x\|)\langle x, y - x\rangle, \quad \forall x, y \in C.$
It is easy to show that $g$ is a pseudomonotone bifunction that is not monotone and that $g$ satisfies conditions (A1)–(A5) with Lipschitz-like constants $c_1 = c_2 = \frac{5}{2}$. We define the mapping $T_i : \ell_2 \to \ell_2$ by $T_ix = \big(\frac{x_1}{2}, \frac{x_2}{2}, \dots, \frac{x_i}{2}, x_{i+1}, x_{i+2}, \dots\big)$. Then $T_i$ is QBNE with $F(T_i) = \{0\}$, and thus $Sol = \{0\}$. We use parameters and a stopping criterion similar to those in Example 1 for the algorithms, with the following initial values:
Case I:   x 0 = ( 5 , 5 , 5 , , 5 , ) ,
Case II:   x 0 = ( 2 , 4 , 0 , , 0 , ) ,
Case III:   x 0 = ( 3 , 1 , 3 , , 3 , ) ,
Case IV:   x 0 = ( 2 , 2 , 0 , , 0 , ) .
Table 2 and Figure 2 show the numerical results.
From Figure 2 and Table 2, we see that Algorithm 1 performs better than H-EGM, HS-ALG. I, and HS-ALG. II. Note that the Lipschitz-like constant for the cost bifunction in this example is $c = \frac{5}{2}$, so the prior estimate of the stepsize for H-EGM and HS-ALG. I can easily be obtained and is fixed for every iteration. Moreover, HS-ALG. II uses a line search method to determine an appropriate stepsize at each iteration, whereas Algorithm 1 updates its stepsize at every iteration using a computationally inexpensive rule.
Example 3.
In this example, we take $E = L_2([0, 1])$ with inner product $\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt$ and norm $\|x\| = \big(\int_0^1 x^2(t)\,dt\big)^{1/2}$ for all $x, y \in L_2([0, 1])$. The set $C$ is defined by $C = \{x \in E : \int_0^1(t^2 + 1)x(t)\,dt \le 1\}$, and the bifunction $g : C \times C \to \mathbb{R}$ is given by $g(x, y) = \langle Ax, y - x\rangle$, where $Ax(t) = \max\{0, x(t)\}$, $t \in [0, 1]$, for all $x \in E$. We define the mapping $T : L_2([0, 1]) \to L_2([0, 1])$ by $T(x) = \int_0^1 \frac{x(t)}{2}\,dt$. It is not difficult to show that $T$ is QBNE and $Sol = \{0\}$. We take $\alpha_n = \frac{1}{n+1}$ and $\beta_{n,i} = \frac{3n}{8n+11}$ for all the algorithms. For Algorithm 1, we take $\lambda_0 = 0.28$ and $u = \sin(3t)$. For H-EGM and HS-ALG. I, we take $N = M = 1$ and $\lambda_n = \frac{1}{3}$. Additionally, for HS-ALG. II, we take $\gamma = 0.05$, $\alpha = 0.28$, $\nu = 0.5$. We test the algorithms with the following initial values:
Case I:   $x_0 = t^2 + 1$,
Case II:   $x_0 = \frac{\cos(4t)}{4}$,
Case III:   $x_0 = \frac{\exp(3t)}{3}$,
Case IV:   x 0 = cos ( 2 t ) .
We use $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion for the numerical computation and plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. Table 3 and Figure 3 present the numerical results.
From Table 3 and Figure 3, we see that Algorithm 1 also performs better than H-EGM, HS-ALG I, and HS-ALG II. The reason for this advantage is similar to that in Example 2.

6. Conclusions

In this paper, we introduced a Halpern-type Bregman subgradient extragradient method for solving the pseudomonotone equilibrium problem in a real reflexive Banach space. The stepsize of the algorithm is chosen by a self-adaptive method that does not require a prior estimate of the Lipschitz-like constants of the cost bifunction. We also proved that the sequence generated by our algorithm converges strongly to a common solution of the equilibrium and fixed point problems. Finally, we presented some numerical experiments to illustrate the performance and efficiency of the proposed method. The numerical results show that the proposed algorithm performs better than other related methods in the literature in terms of the number of iterations and the CPU time taken for the computation.

Author Contributions

Conceptualization, L.O.J.; methodology, A.T.B. and L.O.J.; validation, M.A. and L.O.J.; formal analysis, A.T.B. and L.O.J.; writing—original draft preparation, A.T.B.; writing—review and editing, L.O.J. and M.A.; visualization, L.O.J.; supervision, L.O.J. and M.A.; project administration, L.O.J. and M.A.; funding acquisition, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sefako Makgatho Health Sciences University Postdoctoral Research Fund, and the APC was funded by the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria, South Africa.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge with thanks, the Department of Mathematics and Applied Mathematics at the Sefako Makgatho Health Sciences University for making their facilities available for the research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

1. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–146.
2. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin, Germany, 2000.
3. Konnov, I.V. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007.
4. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 1992, 18, 1159–1166.
5. Bigi, G.; Castellani, M.; Pappalardo, M.; Passacantando, M. Nonlinear Programming Techniques for Equilibria; Springer Nature: Cham, Switzerland, 2019.
6. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Kluwer: Dordrecht, The Netherlands, 2001.
7. Iusem, A.N.; Sosa, W. Iterative algorithms for equilibrium problems. Optimization 2003, 52, 301–316.
8. Mastroeni, G. Gap functions for equilibrium problems. J. Glob. Optim. 2003, 27, 411–426.
9. Muu, L.D. Stability property of a class of variational inequalities. Optimization 1984, 15, 347–353.
10. Dadashi, V.; Khatibzadeh, H. On the weak and strong convergence of the proximal point algorithm in reflexive Banach spaces. Optimization 2017, 9, 1487–1494.
11. Dadashi, V.; Postolache, M. Hybrid proximal point algorithm and applications to equilibrium problems and convex programming. J. Optim. Theory Appl. 2017, 174, 518–529.
12. Scheimberg, S.; Santos, P. A relaxed projection method for finite dimensional equilibrium problems. Optimization 2011, 60, 1193–1208.
13. Shehu, Y. Iterative procedures for left Bregman strongly relatively nonexpansive mappings with application to equilibrium problems. Fixed Point Theory 2016, 1, 173–188.
14. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159.
15. Quoc, T.D.; Muu, L.D.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776.
16. Van, N.T.; Strodiot, J.J.; Nguyen, V.H. The interior proximal extragradient method for solving equilibrium problems. J. Glob. Optim. 2009, 44, 175–192.
17. Bigi, G.; Passacantando, M. Descent and penalization techniques for equilibrium problems with nonlinear constraints. J. Optim. Theory Appl. 2015, 164, 804–818.
18. Chadli, O.; Konnov, I.V.; Yao, J.C. Descent methods for equilibrium problems in a Banach space. Comput. Math. Appl. 2004, 48, 609–616.
19. Kassay, G.; Reich, S.; Sabach, S. Iterative methods for solving systems of variational inequalities in reflexive Banach spaces. SIAM J. Optim. 2011, 21, 1319–1344.
20. Reich, S.; Sabach, S. Three strong convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 2012, 568, 225–240.
21. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Matem. Metody 1976, 12, 747–756.
22. Ceng, L.C.; Yao, J.C. An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190, 206–215.
23. Ceng, L.C.; Yao, J.C. Strong convergence theorem by an extragradient method for fixed point problems and variational inequality problems. Taiwan. J. Math. 2006, 10, 1293–1303.
24. Anh, P.N. A hybrid extragradient method extended to fixed point problems and equilibrium problems. Optimization 2013, 2, 271–283.
25. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 2, 318–335.
26. Malitsky, Y.V. Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 2015, 1, 502–520.
27. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. A parallel combination extragradient method with Armijo line searching for finding common solutions of finite families of equilibrium and fixed point problems. Rend. Circ. Mat. Palermo Ser. 2 2020, 69, 711–735.
28. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A self adaptive inertial subgradient extragradient algorithm for variational inequality and common fixed point of multivalued mappings in Hilbert spaces. Demonstr. Math. 2019, 52, 183–203.
29. Cholamjiak, P.; Thong, D.V.; Cho, Y.J. A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 2020.
30. Sunthrayuth, P.; Cholamjiak, P. A modified extragradient method for variational inclusion and fixed point problems in Banach spaces. Appl. Anal. 2019, 1–20.
31. Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. RACSAM 2017, 111, 823–840.
32. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. An inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert spaces. Optimization 2021, 70, 387–412.
33. Jolaoso, L.O.; Aphane, M. A self-adaptive inertial subgradient extragradient method for pseudomonotone equilibrium and common fixed point problems. Fixed Point Theory Appl. 2020, 2020, 9.
34. Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibrium problems. J. Inequal. Appl. 2019, 1, 1–25.
35. Yang, J.; Liu, H. The subgradient extragradient method extended to pseudomonotone equilibrium problems and fixed point problems in Hilbert space. Optim. Lett. 2020, 14, 1803–1816.
36. Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 2020, 3, 463.
37. Dadashi, V.; Iyiola, S.; Shehu, Y. The subgradient extragradient method for pseudomonotone equilibrium problems. Optimization 2019, 69, 901–923.
38. Yao, Y.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 201.
39. Eskandani, G.Z.; Raeisi, M.; Rassias, T.M. A hybrid extragradient method for solving pseudomonotone equilibrium problems using Bregman distance. J. Fixed Point Theory Appl. 2018, 20, 132.
40. Bonnans, J.F.; Shapiro, A. Perturbation Analysis of Optimization Problems; Springer: New York, NY, USA, 2000.
41. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces. Commun. Contemp. Math. 2001, 3, 615–647.
42. Bregman, L.M. A relaxation method for finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7, 200–217.
43. Butnariu, D.; Iusem, A.N. Totally Convex Functions for Fixed Points Computation and Infinite Dimensional Optimization; Kluwer Academic: Dordrecht, The Netherlands, 2000; Volume 40.
44. Butnariu, D.; Censor, Y.; Reich, S. Iterative averaging of entropic projections for solving stochastic convex feasibility problems. Comput. Optim. Appl. 1997, 8, 21–39.
45. Bauschke, H.H.; Borwein, J.M.; Combettes, P.L. Bregman monotone optimization algorithms. SIAM J. Control Optim. 2003, 42, 596–636.
46. Reich, S.; Sabach, S. A strong convergence theorem for a proximal-type algorithm in reflexive Banach spaces. J. Nonlinear Convex Anal. 2009, 10, 471–485.
47. Reich, S.; Sabach, S. Two strong convergence theorems for a proximal method in reflexive Banach spaces. Numer. Funct. Anal. Optim. 2010, 31, 22–44.
48. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A strong convergence theorem for solving pseudo-monotone variational inequalities using projection methods. J. Optim. Theory Appl. 2020, 185, 744–766.
49. Zălinescu, C. Convex Analysis in General Vector Spaces; World Scientific Publishing: Singapore, 2002.
50. Naraghirad, E.; Yao, J.C. Bregman weak relatively nonexpansive mappings in Banach spaces. Fixed Point Theory Appl. 2013, 1, 141.
51. Tiel, J.V. Convex Analysis: An Introductory Text; Wiley: New York, NY, USA, 1984.
52. Kohsaka, F.; Takahashi, W. Proximal point algorithms with Bregman functions in Banach spaces. J. Nonlinear Convex Anal. 2005, 6, 505–523.
53. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
54. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
55. Hieu, D.V.; Strodiot, J.J. Strong convergence theorems for equilibrium problems and fixed point problems in Banach spaces. J. Fixed Point Theory Appl. 2018, 20, 131.
Figure 1. Example 1. Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Figure 2. Example 2. Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Figure 3. Example 3. Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Table 1. Computational results for Example 1.

                           Algorithm 1    H-EGM     HS-ALG. I    HS-ALG. II
m = 10     No. of Iter.    8              22        45           36
           Time (s)        0.2286         0.3976    1.2179       0.7832
m = 30     No. of Iter.    9              23        114          37
           Time (s)        0.6215         0.8026    4.2133       1.9558
m = 50     No. of Iter.    11             26        263          37
           Time (s)        0.6991         1.3149    6.5156       2.3046
m = 100    No. of Iter.    11             27        391          41
           Time (s)        0.5226         1.3937    8.8943       2.5175
Table 2. Computational results for Example 2.

                           Algorithm 1    H-EGM     HS-ALG. I    HS-ALG. II
Case I     No. of Iter.    17             40        28           52
           Time (s)        0.5182         0.5853    0.5489       0.6420
Case II    No. of Iter.    23             43        29           51
           Time (s)        0.6556         1.2869    1.1739       1.5896
Case III   No. of Iter.    15             41        28           52
           Time (s)        0.3120         1.0660    1.0479       1.2412
Case IV    No. of Iter.    24             43        29           54
           Time (s)        0.6264         1.1696    0.8184       1.4417
Table 3. Computational results for Example 3.

                           Algorithm 1    H-EGM     HS-ALG. I    HS-ALG. II
Case I     No. of Iter.    5              8         8            13
           Time (s)        0.3849         0.3887    0.7988       2.1976
Case II    No. of Iter.    6              10        11           16
           Time (s)        1.3104         1.8696    1.5608       4.9464
Case III   No. of Iter.    6              11        11           16
           Time (s)        1.1448         1.5351    1.6029       3.8841
Case IV    No. of Iter.    6              10        10           15
           Time (s)        1.1819         1.9065    2.0439       2.8720
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

