1. Introduction
In 1994, Blum and Oettli [1] revisited the Equilibrium Problem (EP), first introduced by Ky Fan, which has become a fundamental concept and serves as an important mathematical tool for solving many concrete problems. The EP generalizes many nonlinear problems, such as variational inequalities, minimization problems, fixed point problems, and saddle point problems, in a unified way; see, for instance, [1,2,3,4]. It is well known that several problems arising in many fields of pure and applied mathematics, such as economics, physics, optimization theory, engineering mechanics, management sciences, and network analysis, can be modeled as an EP; see, e.g., [5] for details.
Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let g: C × C → ℝ be a bifunction. The EP is defined in the following manner: find x* ∈ C such that
g(x*, y) ≥ 0 for all y ∈ C. (1)
We denote the set of solutions of problem (1) by EP(g, C). Because the EP and its applications are of great importance, it has provided a rich area of research for many mathematicians. Recently, many authors have proposed numerous algorithms for solving the EP (1); see, for example, [6,7,8,9]. Some of these algorithms involve proximal point methods [10,11], projection methods [12,13], extragradient methods with or without line searches [14,15,16], descent methods based on merit functions [17,18], and methods using the Bregman distance [19,20].
In 1976, Korpelevich [21] introduced the extragradient method for solving the variational inequality problem (which is really a special case of the EP) for L-Lipschitz continuous and monotone operators in Euclidean spaces. Korpelevich proved the convergence of the generated sequence under the assumptions of Lipschitz continuity and strong monotonicity. However, the method still requires the calculation of two projections onto the closed convex set C in each iteration. Korpelevich's extragradient strategy has been widely studied in the literature for solving increasingly general problems, including the problem of finding a common point of the solution set of a variational inequality and the set of fixed points of a nonexpansive mapping. This kind of problem arises in various theoretical and modeling contexts; see [22,23] and the references therein. Some years ago, Quoc et al. [15] presented a modified version of Korpelevich's method, in which they extended the method to solve EPs for pseudomonotone and Lipschitz-type continuous bifunctions. They replaced the two projections onto the feasible set C by the solution of two convex optimization programs at each iteration. In 2013, Pham Ngoc Anh [24] presented a hybrid extragradient iteration method in which the extragradient method was extended to fixed point and equilibrium problems for pseudomonotone and Lipschitz-type continuous bifunctions in the setting of real Hilbert spaces.
More recently, numerous authors have studied and improved Korpelevich's extragradient method for variational inequalities in various ways; see, for example, [25,26]. One such improvement is the subgradient extragradient method [25], which replaces the second projection in Korpelevich's extragradient method with a projection onto a simple half-space. It is worth noting that the projection onto a half-space can easily be calculated explicitly, unlike the projection onto the whole set C, which can be complicated when C is not simple. This observation has motivated several improvements of extragradient-like methods in the literature; see [27,28,29,30]. Recently, Dang Van Hieu [31] extended the subgradient extragradient method to equilibrium problems in real Hilbert spaces. He proved that the subgradient extragradient method converges strongly to a solution provided a stepsize condition involving the two Lipschitz-like constants of g is satisfied. It is important to note that these constants are very difficult to find; even when they can be estimated, the estimates are often too small, which deteriorates the rate of convergence of the algorithm. There has been an increasing effort to find iterative methods for solving the EP without prior knowledge of these constants; see, e.g., [32,33,34,35,36,37,38]. On the other hand, Eskandani et al. [39] introduced a hybrid extragradient method for solving the EP (1) in a real reflexive Banach space. They showed that the sequence produced by their algorithm converges strongly to a solution of (1).
Motivated by the above results, we introduce a Halpern-subgradient extragradient method for solving the pseudomonotone EP and finding a common fixed point of a countable family of quasi-Bregman nonexpansive mappings in real reflexive Banach spaces. The stepsize of our algorithm is determined by a self-adaptive technique, and we prove a strong convergence result without any prior estimate of the Lipschitz constants. We also provide an application of our result to variational inequality problems and give some numerical experiments to illustrate the behaviour of our algorithm. This improves the work of Eskandani et al. [39] and extends the results of [32,33,34,35,36,37] to reflexive Banach spaces while using Bregman distance techniques.
Throughout this paper, E denotes a real Banach space with dual space E*; ⟨·,·⟩ denotes the duality pairing between E and E*; ∀ means "for all"; min A and max A denote the minimum and the maximum of a set A; x_n → x denotes the strong convergence of a sequence {x_n} to a point x, while x_n ⇀ x denotes its weak convergence; ‖·‖ denotes the norm on E and ‖·‖_* the norm on E*; EP denotes the equilibrium problem and EP(g, C) the solution set of the equilibrium problem; F(T) is the set of fixed points of a mapping T; ∇f is the gradient of a function f; and ℝ is the real number line.
2. Preliminaries
In this section, we recall some definitions and basic facts and notions that we will need in the sequel.
Let E and C be as defined in the introduction, and denote the dual space of E by E*. The function f: E → (−∞, +∞] is always assumed to be admissible, i.e., proper, lower semicontinuous, and convex. Let dom f = {x ∈ E : f(x) < +∞} denote the domain of f, and let x ∈ int dom f. We define the subdifferential of f at x as the convex set
∂f(x) = {x* ∈ E* : f(y) ≥ f(x) + ⟨x*, y − x⟩ for all y ∈ E},
and the Fenchel conjugate of f is the function f*: E* → (−∞, +∞] defined by
f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ E}.
It is not hard to show that f* is indeed an admissible function.
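As a quick numerical illustration of the conjugate (an aside of ours, not part of the paper's development), the supremum can be approximated on a grid for the scalar function f(x) = ½x², whose conjugate is known to be f*(u) = ½u²; the grid bounds are arbitrary illustrative choices.

```python
import numpy as np

# Numerically evaluate the Fenchel conjugate f*(u) = sup_x { u*x - f(x) }
# for the illustrative choice f(x) = 0.5 * x**2, whose conjugate is 0.5 * u**2.
f = lambda x: 0.5 * x**2
xs = np.linspace(-5.0, 5.0, 100001)   # fine grid standing in for the sup over x

def conj_numeric(u):
    return np.max(u * xs - f(xs))

for u in (1.3, -0.7, 2.0):
    assert abs(conj_numeric(u) - 0.5 * u**2) < 1e-6
```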
For any convex mapping f: E → (−∞, +∞], we denote by f°(x, y) the right-hand derivative of f at x ∈ int dom f in the direction y, that is,
f°(x, y) = lim_{t→0⁺} [f(x + ty) − f(x)]/t. (4)
If the limit in (4) exists for each y, then the function f is said to be Gâteaux differentiable at x. In this case, the gradient of f at x is the linear function ∇f(x), which is defined by ⟨∇f(x), y⟩ = f°(x, y) for all y ∈ E. The function f is said to be Gâteaux differentiable if it is Gâteaux differentiable at each x ∈ int dom f. When the limit in (4) is attained uniformly for any y ∈ E with ‖y‖ = 1, we say that f is Fréchet differentiable at x. Throughout this paper, f is an admissible function; under this condition, it is known that f is continuous on int dom f.
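For intuition, the one-sided limit in (4) can be checked numerically against the pairing ⟨∇f(x), y⟩; the smooth convex function and the points below are purely illustrative choices of ours.

```python
import numpy as np

# Directional derivative check: for a Gateaux-differentiable f, the quotient
# (f(x + t*y) - f(x)) / t approaches <grad f(x), y> as t -> 0+.
f = lambda x: np.sum(x**4) / 4.0      # illustrative smooth convex function
grad_f = lambda x: x**3

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 1.0, -1.0])
t = 1e-6
fd = (f(x + t * y) - f(x)) / t        # finite-difference approximation of (4)
assert abs(fd - grad_f(x) @ y) < 1e-3
```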
The function f is said to be Legendre if it satisfies the following two conditions:
- L1.
int dom f ≠ ∅, f is Gâteaux differentiable on int dom f, and the subdifferential ∂f is single-valued on its domain; and,
- L2.
int dom f* ≠ ∅, f* is Gâteaux differentiable on int dom f*, and ∂f* is single-valued on its domain.
It is well known that, in a reflexive Banach space E, ∇f* = (∇f)⁻¹ (see [40], p. 83). Putting conditions (L1) and (L2) together, we get
ran ∇f = dom ∇f* = int dom f* and ran ∇f* = dom ∇f = int dom f.
It also follows that f is Legendre if and only if f* is Legendre [41] (Corollary 5.5, p. 634), and that the functions f and f* are Gâteaux differentiable and strictly convex in the interior of their respective domains.
In 1967, Bregman [42] introduced the concept of the Bregman distance; furthermore, he found a rich and effective technique for using the Bregman distance in the design and analysis of feasibility and optimization algorithms. From now on, we assume that f is also Legendre. The Bregman distance is the bifunction D_f: dom f × int dom f → [0, +∞) defined by
D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩.
The Bregman distance does not fulfill the well-known properties of a metric: it is not symmetric in general, and the triangle inequality does not hold. However, it generalizes the law of cosines through what is, in this case, known as the three point identity: for any x ∈ dom f and y, z ∈ int dom f,
D_f(x, z) = D_f(x, y) + D_f(y, z) + ⟨∇f(y) − ∇f(z), x − y⟩. (5)
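Both the lack of symmetry and the three point identity are easy to verify numerically. The sketch below uses the negative entropy f(x) = Σᵢ xᵢ log xᵢ, a standard Legendre function on the positive orthant, as an illustrative choice of ours.

```python
import numpy as np

def breg(f, grad_f, x, y):
    # Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>
    return f(x) - f(y) - grad_f(y) @ (x - y)

# negative entropy, an illustrative Legendre function on the positive orthant
f = lambda x: np.sum(x * np.log(x))
gf = lambda x: 1.0 + np.log(x)

rng = np.random.default_rng(0)
x, y, z = rng.uniform(0.5, 2.0, (3, 4))

assert breg(f, gf, x, y) >= 0                          # non-negative, like a distance
assert abs(breg(f, gf, x, y) - breg(f, gf, y, x)) > 0  # but not symmetric in general

# three point identity:
# D_f(x, z) = D_f(x, y) + D_f(y, z) + <grad f(y) - grad f(z), x - y>
lhs = breg(f, gf, x, z)
rhs = breg(f, gf, x, y) + breg(f, gf, y, z) + (gf(y) - gf(z)) @ (x - y)
assert abs(lhs - rhs) < 1e-12
```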
Following [43,44], the modulus of total convexity at x ∈ int dom f is the function ν_f(x, ·): [0, +∞) → [0, +∞] given by
ν_f(x, t) = inf{D_f(y, x) : y ∈ dom f, ‖y − x‖ = t}.
The function f is termed totally convex at x if ν_f(x, t) is positive for any t > 0. Additionally, f is termed totally convex when it is totally convex at every point x ∈ int dom f. We remark in passing that f is totally convex on bounded subsets if and only if f is uniformly convex on bounded subsets (see [43]). We recall that f is termed sequentially consistent [45] if, for any two sequences {x_n} and {y_n} in E such that the first is bounded,
lim_{n→∞} D_f(y_n, x_n) = 0 implies lim_{n→∞} ‖y_n − x_n‖ = 0.
Lemma 1 ([46]).
If f: E → ℝ is uniformly Fréchet differentiable and bounded on bounded subsets of E, then ∇f is uniformly continuous on bounded subsets of E from the strong topology of E to the strong topology of E*.
Lemma 2 ([47]).
Let f: E → ℝ be a Gâteaux differentiable and totally convex function. If x₀ ∈ E and the sequence {D_f(x_n, x₀)} is bounded, then the sequence {x_n} is also bounded.
The Bregman projection [42] with respect to f of x ∈ int dom f onto a nonempty, closed, and convex set C ⊂ int dom f is defined as the necessarily unique vector proj_C^f(x) ∈ C satisfying
D_f(proj_C^f(x), x) = inf{D_f(y, x) : y ∈ C}.
Similar to the metric projection in Hilbert spaces, the Bregman projection with respect to totally convex and Gâteaux differentiable functions has a variational characterization [46] (Corollary 4.4, p. 23).
Suppose that f is Gâteaux differentiable and totally convex on int dom f. Let x ∈ int dom f and let C ⊂ int dom f be a nonempty, closed, and convex set. If x̄ ∈ C, then the following conditions are equivalent:
- M1.
The vector x̄ is the Bregman projection of x onto C with respect to f.
- M2.
The vector x̄ is the unique solution of the variational inequality:
⟨∇f(x) − ∇f(x̄), z − x̄⟩ ≤ 0 for all z ∈ C.
- M3.
The vector x̄ is the unique solution of the inequality:
D_f(z, x̄) + D_f(x̄, x) ≤ D_f(z, x) for all z ∈ C.
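As a concrete illustration of (M2) (an example of ours, not the paper's), take the negative entropy f(x) = Σᵢ xᵢ log xᵢ; for x > 0, its Bregman projection onto the probability simplex is the simple normalization x̄ = x/Σᵢ xᵢ, a classical fact, and (M2) can be checked directly at random points of the simplex.

```python
import numpy as np

# For the negative entropy f(x) = sum x_i log x_i, the Bregman projection of
# x > 0 onto the probability simplex is x / sum(x).  We verify the variational
# characterization (M2): <grad f(x) - grad f(xbar), z - xbar> <= 0 for z in C.
gf = lambda x: 1.0 + np.log(x)         # gradient of the negative entropy

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 3.0, 4)
xbar = x / x.sum()                     # claimed Bregman projection onto the simplex

for _ in range(500):
    z = rng.dirichlet(np.ones(4))      # random point of the simplex
    assert (gf(x) - gf(xbar)) @ (z - xbar) <= 1e-9
```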
Definition 1. Let T: C → C be a mapping. A point x ∈ C is called a fixed point of T if Tx = x. The set of fixed points of T is denoted by F(T). Additionally, a point p ∈ C is said to be an asymptotic fixed point of T if C contains a sequence {x_n} which converges weakly to p and lim_{n→∞} ‖x_n − Tx_n‖ = 0. The set of asymptotic fixed points of T is denoted by F̂(T).
Definition 2 ([48]).
Let C be a nonempty, closed, and convex subset of E. A mapping T: C → C is called
- i.
Bregman firmly nonexpansive (BFNE for short) if ⟨∇f(Tx) − ∇f(Ty), Tx − Ty⟩ ≤ ⟨∇f(x) − ∇f(y), Tx − Ty⟩ for all x, y ∈ C;
- ii.
Bregman strongly nonexpansive (BSNE) with respect to a nonempty F̂(T) if D_f(p, Tx) ≤ D_f(p, x) for all p ∈ F̂(T) and x ∈ C, and if, whenever {x_n} ⊂ C is bounded, p ∈ F̂(T), and lim_{n→∞}(D_f(p, x_n) − D_f(p, Tx_n)) = 0, it follows that lim_{n→∞} D_f(Tx_n, x_n) = 0; and
- iii.
Quasi-Bregman nonexpansive (QBNE) if F(T) ≠ ∅ and D_f(p, Tx) ≤ D_f(p, x) for all p ∈ F(T) and x ∈ C.
It was remarked in [19] that, in the case where F(T) = F̂(T), the following inclusions hold: every BFNE mapping is BSNE, and every BSNE mapping is QBNE.
Let B and S denote the closed unit ball and the unit sphere of a Banach space E, respectively, and let rB = {z ∈ E : ‖z‖ ≤ r} for all r > 0. Then, the function f: E → ℝ is said to be uniformly convex on bounded subsets (see [49]) if ρ_r(t) > 0 for all r, t > 0, where ρ_r: [0, +∞) → [0, +∞] is defined by
ρ_r(t) = inf { [αf(x) + (1 − α)f(y) − f(αx + (1 − α)y)] / [α(1 − α)] : x, y ∈ rB, ‖x − y‖ = t, α ∈ (0, 1) }
for all t ≥ 0. The function ρ_r is called the gauge of uniform convexity of f. It is known that ρ_r is a nondecreasing function. If f is uniformly convex on bounded subsets, then the following lemma is known.
Lemma 3 ([50]).
Let E be a Banach space, r > 0 a constant, and f: E → ℝ a uniformly convex function on bounded subsets of E. Then
f(Σ_{k=0}^n α_k x_k) ≤ Σ_{k=0}^n α_k f(x_k) − α_i α_j ρ_r(‖x_i − x_j‖)
for all i, j ∈ {0, 1, …, n}, x_k ∈ rB, and α_k ∈ (0, 1) with Σ_{k=0}^n α_k = 1, where ρ_r is the gauge of uniform convexity of f.
Lemma 4 ([47]).
Suppose that f: E → ℝ is a Legendre function. The function f is totally convex on bounded subsets if and only if f is uniformly convex on bounded subsets.
For each z ∈ C, the subdifferential of the convex function g(z, ·) at u is denoted by ∂₂g(z, u), i.e.,
∂₂g(z, u) = {w ∈ E* : g(z, v) ≥ g(z, u) + ⟨w, v − u⟩ for all v ∈ C}.
Lemma 5 ([51]).
Let C be a nonempty convex subset of E and f a convex and subdifferentiable function on C. Then, f attains its minimum at x ∈ C if and only if 0 ∈ ∂f(x) + N_C(x), where N_C(x) is the normal cone of C at x, that is,
N_C(x) = {x* ∈ E* : ⟨x*, x − z⟩ ≥ 0 for all z ∈ C}.
Throughout this paper, we assume that the following assumptions hold on g:
- A1.
g is pseudomonotone, i.e., for all x, y ∈ C, g(x, y) ≥ 0 implies g(y, x) ≤ 0;
- A2.
g satisfies the Bregman–Lipschitz-type condition, i.e., there exist two positive constants c₁, c₂ such that
g(x, y) + g(y, z) ≥ g(x, z) − c₁D_f(y, x) − c₂D_f(z, y) for all x, y, z ∈ C;
- A3.
g(x, x) = 0 for all x ∈ C;
- A4.
g(·, y) is continuous on C for every y ∈ C; and
- A5.
g(x, ·) is convex, lower semicontinuous, and subdifferentiable on C for every fixed x ∈ C.
Lemma 6 ([52]).
Let E be a reflexive Banach space, let f: E → ℝ be a strongly coercive Bregman function, and let V_f: E × E* → [0, +∞) be defined by
V_f(x, x*) = f(x) − ⟨x*, x⟩ + f*(x*).
Then, the following assertions hold:
- i.
D_f(x, ∇f*(x*)) = V_f(x, x*) for all x ∈ E and x* ∈ E*; and
- ii.
V_f(x, x*) + ⟨∇f*(x*) − x, y*⟩ ≤ V_f(x, x* + y*) for all x ∈ E and x*, y* ∈ E*.
In addition, if f: E → (−∞, +∞] is a proper lower semicontinuous function, then f*: E* → (−∞, +∞] is proper, weak* lower semicontinuous, and convex. Hence, V_f is convex in the second variable. Thus, for all z ∈ E, we have
D_f(z, ∇f*(Σ_{i=1}^N t_i ∇f(x_i))) ≤ Σ_{i=1}^N t_i D_f(z, x_i),
where {x_i}_{i=1}^N ⊂ E and {t_i}_{i=1}^N ⊂ (0, 1) with Σ_{i=1}^N t_i = 1.
Lemma 7 ([53]).
Let {a_n} be a sequence of non-negative real numbers satisfying the following identity:
a_{n+1} ≤ (1 − α_n)a_n + α_n σ_n, n ≥ 0,
where {α_n} ⊂ (0, 1) and {σ_n} ⊂ ℝ are such that Σ_{n=0}^∞ α_n = ∞ and either lim sup_{n→∞} σ_n ≤ 0 or Σ_{n=0}^∞ |α_n σ_n| < ∞. Then, lim_{n→∞} a_n = 0.
Lemma 8 ([54]).
Let {a_n} be a sequence of real numbers such that there exists a subsequence {a_{n_j}} of {a_n} with a_{n_j} < a_{n_j + 1} for all j ∈ ℕ. Consider the sequence of integers {τ(n)}_{n ≥ n₀} defined by
τ(n) = max{k ≤ n : a_k < a_{k+1}}.
Then, {τ(n)} is a non-decreasing sequence verifying lim_{n→∞} τ(n) = ∞ and, for all n ≥ n₀, the following estimates hold: a_{τ(n)} ≤ a_{τ(n)+1} and a_n ≤ a_{τ(n)+1}.
3. Main Results
In this section, we present our algorithm and establish its convergence analysis.
Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let g: C × C → ℝ be a bifunction satisfying (A1)–(A5). Let {T_i} be a countable family of quasi-Bregman nonexpansive mappings of C into itself that are demiclosed at zero. Let f: E → ℝ be uniformly Fréchet differentiable, coercive, Legendre, totally convex, and bounded on bounded subsets of E. Suppose that the solution set is nonempty. We assume that the control sequences satisfy the following conditions.
- C1.
and
- C2.
and
Now, suppose that the sequence is generated by the following algorithm.
Remark 1. Note that the algorithm terminates when the iterates coincide; in that case, we are at a common solution of the EP and a common fixed point of the mappings. Moreover, the following points present some of the advantages of Algorithm 1.
- (i).
Eskandani et al. [39] introduced a hybrid extragradient method whose convergence depends on the Lipschitz constants, which are very difficult to estimate. In contrast, our Algorithm 1 does not depend on the Lipschitz constants, and its second subproblem can easily be solved over a half-space.
- (ii).
Hieu and Strodiot [55] proposed an extragradient method with a line search technique (Algorithm 4.1) in 2-uniformly convex Banach spaces. It is known that such line search methods are not always efficient because they consist of an inner loop, which may consume extra computation time. In Algorithm 1, the stepsize is selected self-adaptively and does not involve any inner loop.
- (iii).
Our algorithm also extends the subgradient extragradient methods of [32,33,34,35,36,37] to reflexive Banach spaces using the Bregman distance.
Algorithm 1: Halpern-Subgradient Extragradient Method (H-SEM)
Initialization: Choose , and Set
- Step 1:
If set and go to Step 3. Else, do Step 2. - Step 2:
Compute
where
and - Step 3:
- Step 4:
Calculate and as follows
and
Set and go to Step 1.
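To convey the structure of the method, the following is a simplified Euclidean sketch, not a transcription of Algorithm 1: we take f(x) = ½‖x‖² (so Bregman projections become metric projections), the VI bifunction g(x, y) = ⟨Ax, y − x⟩, identity fixed-point mappings, the Halpern weight αₙ = 1/(n + 1), and a common self-adaptive stepsize rule. All names and parameter choices below are our illustrative assumptions.

```python
import numpy as np

def proj_box(x, lo, hi):
    # metric projection onto the box C = [lo, hi]^d
    return np.clip(x, lo, hi)

def proj_halfspace(p, a, b):
    # explicit metric projection of p onto the half-space {z : <a, z> <= b}
    viol = a @ p - b
    return p if viol <= 0 else p - (viol / (a @ a)) * a

def halpern_subgrad_extragrad(A, u, x0, lo, hi, lam0=0.5, mu=0.5, iters=2000):
    x, lam = x0.astype(float), lam0
    for n in range(1, iters + 1):
        Ax = A(x)
        y = proj_box(x - lam * Ax, lo, hi)            # first projection, onto C
        a = x - lam * Ax - y                          # outward normal of T_n at y
        z = proj_halfspace(x - lam * A(y), a, a @ y)  # cheap projection onto T_n
        # self-adaptive stepsize: no prior knowledge of the Lipschitz constant
        dA = np.linalg.norm(Ax - A(y))
        if dA > 1e-12:
            lam = min(lam, mu * np.linalg.norm(x - y) / dA)
        alpha = 1.0 / (n + 1)                         # Halpern anchor weight
        x = alpha * u + (1 - alpha) * z
    return x

# toy monotone VI: A(x) = x - p, whose unique solution on the box is p itself
p = np.array([0.5, -0.3, 0.2])
A = lambda x: x - p
sol = halpern_subgrad_extragrad(A, u=np.zeros(3), x0=np.array([1.5, 1.0, -1.0]),
                                lo=-2.0, hi=2.0)
assert np.linalg.norm(sol - p) < 1e-2
```

Note how the second projection is onto the half-space T_n and is available in closed form; this is the computational point of the subgradient extragradient idea.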
We now give the convergence analysis of Algorithm 1. We begin by proving the following necessary results.
Lemma 9. The sequence generated by our algorithm is bounded.
Proof. We deduce from the definition of
that
. This implies that
is monotonically decreasing. It follows from (8) that
Hence, the sequence
is bounded by
. This also implies that there exists
□
Lemma 10. For all , the following inequality holds: Proof. Because
, then from Algorithm 1, we have
hence
Additionally, since
, then
Combining (11) and (12), we get
Additionally, since
it follows from Lemma 5 that
This implies that there exists
and
, such that
Because
then
for all
Hence,
Additionally,
then
From (14) and (15), we get
Now, let in (16); then, we have
Because
and
g is pseudomonotone, then
. This implies that
Adding (13) and (17), we have
By the Bregman three point identity (5), it follows that:
Additionally, from the definition of
, we have:
□
Lemma 11. The sequence that is generated by Algorithm 1 is bounded.
Proof. Let
Then,
and
for all
. Because
exists (see Lemma 9), then
and so there exists
, such that
Thus, from Lemma 10, we have
Hence, by (18) and (19), we have
Therefore, is bounded and, by Lemma 2, the sequence is also bounded. □
Lemma 12. Let and let be the gauge of uniform convexity of the conjugate function . Then, Proof. From our algorithm, we have:
It follows from Lemma 6 that:
By Lemma 3 and (18), we have that:
□
Next, we prove the strong convergence of the sequence that is generated by our algorithm.
Theorem 1. The sequence generated by Algorithm 1 converges strongly to z, where
Proof. Let and put . We divide the proof into two cases.
Case 1: suppose that there exists
such that
is monotonically decreasing for all
. This implies that
exists since
is bounded and, thus, we have
From Lemmas 10 and 11, we have the following:
Because
and
, it follows that
Note that . Thus, from (21), we obtain
Also, since
then
Moreover, since
f is Fréchet differentiable, then
is uniformly continuous on bounded subsets of
E. Hence, we have
Additionally,
is uniformly continuous on bounded subsets of
Hence,
Recall, from (20), we have
It follows from (C2) that
Because
is bounded, there exists a subsequence
of
, such that
It follows from the fact that
, then
. Additionally
it follows from Lemma 5 that
Hence, from (25), we get
which implies that
Additionally, since
, then
From (26) and (27), we have
Because
,
f is uniformly Fréchet differentiable. Afterwards,
is uniformly continuous on bounded subsets of
E. Hence,
Therefore, passing to the limit in (27) and using (28), we have
Furthermore, since
and
, then
. By the demiclosedness of
, we have
,
. Therefore, it follows that
Therefore, by (29) and (30), we have
Now, we prove that converges strongly to . From Lemma 6, we have
Let
It suffices to show that
It follows from (7) that
Because
then
Therefore, using Lemma 7, (32), and (33), we have
This implies that
Hence,
converges strongly to
.
Case 2: suppose that
is monotonically increasing. This means that
Then, by Lemma 8, there exists a sequence
, such that
and
, where
Following a similar analysis as above, we have
and
Additionally, we have
and
Because
, then
It follows from (34) that
Hence, converges strongly to . This completes the proof. □
The following can be obtained as consequences of our main Theorem.
Corollary 1. Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let g be a bifunction satisfying (A1)–(A5). For , where , let be a finite family of Bregman strongly nonexpansive mappings. Let f be a uniformly Fréchet differentiable, coercive, Legendre, totally convex function that is bounded on bounded subsets of E. Suppose that the solution set is nonempty. Then, the sequence generated by Algorithm 1 converges strongly to a point of the solution set.
Corollary 2. Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let g be a bifunction satisfying (A1)–(A5) and T a quasi-Bregman nonexpansive mapping that is demiclosed at zero. Let f be a uniformly Fréchet differentiable, coercive, Legendre, totally convex function that is bounded on bounded subsets of E. Suppose that the solution set is nonempty. Then, the sequence generated by Algorithm 1 converges strongly to a point of the solution set.
4. Application: Variational Inequality
In this section, we consider the classical variational inequality problem, which is a particular case of the equilibrium problem.
Let A: C → E* be a mapping. The variational inequality problem, denoted by VIP, is to find x* ∈ C such that
⟨Ax*, y − x*⟩ ≥ 0 for all y ∈ C.
We denote the solution set of the VIP by VI(A, C). Variational inequalities are important mathematical tools for solving many problems arising in the applied sciences, such as optimization, network equilibrium, mechanics, engineering, economics, etc. (see, for example, [28,29,30] and the references therein).
The following results are important in this section.
Lemma 13 ([48]).
Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let A be a mapping and f a Legendre, totally convex, and coercive function. Then, a point solves the VIP if, and only if, it solves the following minimization problem:
Lemma 14 ([39]).
Let C be a nonempty, closed, and convex subset of a reflexive Banach space E, A a mapping, and f a Legendre function. Then, for all
Now, by setting g(x, y) = ⟨Ax, y − x⟩ for all x, y ∈ C, it follows from Lemmas 13 and 14 that
We assume that A satisfies the following assumptions.
- (B1)
A is pseudomonotone, i.e., for x, y ∈ C with ⟨Ax, y − x⟩ ≥ 0, we have ⟨Ay, y − x⟩ ≥ 0;
- (B2)
A is L-Lipschitz continuous, i.e., there exists L > 0, such that ‖Ax − Ay‖ ≤ L‖x − y‖ for all x, y ∈ C; and
- (B3)
A is weakly sequentially continuous, i.e., for any sequence {x_n} ⊂ C such that x_n ⇀ x, we have Ax_n ⇀ Ax.
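For the EP bifunction induced by A, namely g(x, y) = ⟨Ax, y − x⟩, and the choice f(x) = ½‖x‖², the proximal subproblem argmin_{y∈C}{λg(x, y) + ½‖y − x‖²} reduces to the metric projection P_C(x − λAx). A small numeric sanity check of this reduction (the matrix, box, and point are arbitrary illustrative choices of ours):

```python
import numpy as np

# For g(x, y) = <Ax, y - x> and f = 0.5*||.||^2, the proximal subproblem
#   argmin_{y in C} { lam * g(x, y) + 0.5 * ||y - x||^2 }
# reduces to the metric projection P_C(x - lam * A(x)).
A = lambda x: np.array([[2.0, 1.0], [1.0, 3.0]]) @ x
lo, hi = np.zeros(2), np.ones(2)                 # C = [0, 1]^2
x, lam = np.array([0.8, 0.6]), 0.4

closed_form = np.clip(x - lam * A(x), lo, hi)    # projection formula

# brute-force the argmin over a fine grid of C to confirm the reduction
grid = np.linspace(0.0, 1.0, 201)
ys = np.stack(np.meshgrid(grid, grid), -1).reshape(-1, 2)
vals = lam * (ys - x) @ A(x) + 0.5 * np.sum((ys - x) ** 2, axis=1)
grid_min = ys[np.argmin(vals)]
assert np.allclose(grid_min, closed_form)
```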
Therefore, we can apply our result to solving the VIP as follows:
Theorem 2. Let E be a real reflexive Banach space and C a nonempty, closed, and convex subset of E. Let A be a mapping satisfying (B1)–(B3). For , where , let be a finite family of quasi-Bregman nonexpansive mappings, such that are demiclosed at zero. Let f be a uniformly Fréchet differentiable, coercive, Legendre, totally convex function that is bounded on bounded subsets of E. Suppose that the solution set is nonempty. Then, the sequence generated by the following algorithm converges strongly to a point of the solution set.
For , compute where and are sequences in , such that conditions (C1) and (C2) are satisfied.
5. Numerical Examples
In this section, we perform some numerical experiments to illustrate the performance of the proposed method and compare it with the method proposed by Eskandani et al. [39] (shortly, H-EGM), Algorithm 3.1 of Hieu and Strodiot [55] (shortly, HS-ALG I), and Algorithm 4.1 of Hieu and Strodiot [55] (shortly, HS-ALG II). All of the optimization subproblems are effectively solved by the quadprog function in MATLAB. The computations are carried out in MATLAB on a Lenovo X250 with an Intel(R) Core i7 vPro processor and 8.00 GB of RAM.
Example 1. First, we consider the generalized Nash equilibrium problem, described as follows:
Assume that there are m companies that produce a specific item. Let x denote the vector whose j-th entry represents the quantity of the item produced by company j. We assume that the price is a decreasing affine function of s, the total quantity produced, i.e., , where and . It follows that the profit generated by company j is given by , where is the tax and fee for producing . Assume that is the strategy set of company j. Then, the strategy set of the model is . Indeed, each company seeks to maximize its profit by choosing the corresponding production level, under the assumption that the production of the other companies is a parametric input. The renowned Nash equilibrium concept provides a commonly used approach to this model.
We recall that a point is called an equilibrium point of the model if , where the vector stands for the vector obtained from by substituting with . By taking , the problem of finding a Nash equilibrium point of the model can be formulated as follows: . Now, suppose that the tax-fee function is increasing and affine for each . This assumption means that both the tax and the fee for producing a unit increase as the production quantity gets larger. In this case, the bifunction g can be written in the following form: , where and are two matrices of order m, such that Q is symmetric positive semidefinite and is symmetric negative semidefinite. This shows that g is pseudomonotone. Moreover, it is easy to show that g satisfies the Lipschitz-type condition with .
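Assuming, as in the standard Nash–Cournot formulation, that the bifunction has the form g(x, y) = ⟨Px + Qy + q, y − x⟩ with Q symmetric positive semidefinite and Q − P negative semidefinite (so that P − Q is positive semidefinite), pseudomonotonicity can be spot-checked numerically; the matrices below are our illustrative constructions, not the paper's test data.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
B = rng.standard_normal((m, m))
Q = B @ B.T                                   # symmetric positive semidefinite
P = Q + np.diag(rng.uniform(0.1, 1.0, m))     # then P - Q is positive semidefinite
q = rng.uniform(-1, 1, m)

def g(x, y):
    # Nash-Cournot-type bifunction g(x, y) = <Px + Qy + q, y - x>
    return (P @ x + Q @ y + q) @ (y - x)

# here g(x,y) + g(y,x) = -(y-x)^T (P-Q) (y-x) <= 0, so g is (pseudo)monotone;
# spot-check the pseudomonotonicity implication g(x,y) >= 0  =>  g(y,x) <= 0
for _ in range(1000):
    x, y = rng.uniform(0, 5, (2, m))
    if g(x, y) >= 0:
        assert g(y, x) <= 1e-9
```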
We suppose that the set C has the form . The matrices are randomly generated, such that their required properties are satisfied, and the vector q is generated randomly with its entries in . The mapping is defined as the projection , and the initial vector is generated randomly for . For Algorithm 1, we choose and, for each , is defined by . For H-EGM, we take and choose the best stepsize for the algorithm. Additionally, for HS-ALG I, we take . Similarly, for HS-ALG II, we choose .
We use to illustrate the convergence of the algorithms, where . The numerical results are shown in Table 1 and Figure 1, from which we see that Algorithm 1 performs better than H-EGM, HS-ALG I, and HS-ALG II. This is due to the fact that Algorithm 1 uses a self-adaptive technique to select its stepsize, while H-EGM and HS-ALG I use the choice , which deteriorates the performance of these algorithms as the value of m increases. Furthermore, HS-ALG II uses a computationally expensive line search procedure to determine its stepsize at each iteration; this technique involves inner iterations and consumes additional computational time.
Example 2. Let be the linear space whose elements are all 2-summable sequences of scalars in , that is, , with inner product and norm defined by and , where . Let . Define the bifunction by . It is easy to show that g is a pseudomonotone bifunction that is not monotone and that g satisfies conditions (A1)–(A5) with Lipschitz-like constants . We define the mapping by . Then, is QBNE and ; thus, . We use parameters and a stopping criterion similar to those of Example 1 for the algorithms, with the following initial value: . From Figure 2 and Table 2, we see that Algorithm 1 performs better than H-EGM, HS-ALG I, and HS-ALG II. Note that the Lipschitz-like constant for the cost operator in this example is . Consequently, the prior estimate of the stepsize for H-EGM and HS-ALG I can easily be obtained and is fixed for every iteration. Moreover, HS-ALG II uses a line search method to determine an appropriate stepsize for each iteration, whereas Algorithm 1 updates its stepsize at every iteration using a computationally inexpensive method.
Example 3. In this example, we take with the inner product and norm for all . The set C is defined by , and the function is given by , where for all . We define the mapping by . It is not difficult to show that T is QBNE and . We take for all the algorithms.
For Algorithm 1, we take and . For H-EGM and HS-ALG I, we take and . Additionally, for HS-ALG II, we take . We test the algorithms for the following initial values: . We use as the stopping criterion for the numerical computation and plot the graphs of against the number of iterations in each case. Table 3 and Figure 3 present the numerical results, from which we see that Algorithm 1 also performs better than H-EGM, HS-ALG I, and HS-ALG II. The reason for this advantage is similar to that in Example 2.