1. Introduction
Let Σ be a nonempty, closed, and convex subset of a real Hilbert space, with inner product ⟨·,·⟩ and norm ‖·‖. Furthermore, ℝ and ℕ symbolize the set of real numbers and the set of natural numbers, respectively, and 0 denotes the zero element of the space. Throughout, we consider a bifunction f that is real-valued on Σ × Σ and satisfies f(x, x) = 0 for all x ∈ Σ, and we denote the solution set of the associated equilibrium problem by EP(f, Σ). The equilibrium problem [1,2] for f on Σ is to find a point x* ∈ Σ such that f(x*, y) ≥ 0 for all y ∈ Σ.
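Displayed with the label used later in the paper (and writing f for the bifunction and Σ for the feasible set; this notation is ours and may differ from the original symbols), the problem reads:

```latex
\text{Find } x^{*} \in \Sigma \ \text{ such that } \ f(x^{*}, y) \ge 0 \quad \text{for all } y \in \Sigma. \tag{EP}
```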
This framework is an appropriate mathematical setting that incorporates a variety of problems, including vector and scalar minimization problems, saddle point problems, variational inequality problems, complementarity problems, Nash equilibrium problems in non-cooperative games, and inverse optimization problems [1,3,4]. The problem is closely connected to the Ky Fan inequality, on the grounds of his prior contributions to the field [2]. It is also important to consider an approximate solution when the problem has no exact solution or when one is difficult to compute. Several methodologies have been proposed and tested to tackle various types of equilibrium problems (1), and many successful algorithmic techniques, as well as theoretical results, have already been developed for problem (1) in both finite- and infinite-dimensional spaces.
The regularization technique is one of the most important tools for dealing with ill-posed problems in various subfields of pure and applied mathematics. For monotone equilibrium problems, the regularization approach converts the original problem into a family of strongly monotone equilibrium subproblems. Each subproblem is therefore strongly monotone, computationally tractable, and has a unique solution. These subproblems can often be solved more efficiently than the original problem, and the regularized solutions converge to a solution of the original problem as the regularization parameters approach a suitable limit. The two most prevalent regularization methods are the proximal point method and Tikhonov's regularization; both were recently extended to equilibrium problems [5,6,7,8,9,10,11,12,13]. Techniques for non-monotone equilibrium problems can be found in [14,15,16,17,18,19,20,21,22,23,24,25,26].
The proximal method [27] is an innovative approach to equilibrium problems founded on auxiliary minimization problems. Together with Korpelevich's technique [28] for the saddle point problem, this procedure has become known as the two-step extragradient method in [29]. Tran et al. [29] constructed an iterative sequence by solving two strongly convex minimization subproblems per iteration, with a step size governed by the Lipschitz-type constants of the bifunction.
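A standard statement of this two-step scheme from the literature (in our notation, with step size λ and Lipschitz-type constants c₁, c₂ of the bifunction f; the original display appears in [29]) is:

```latex
\begin{cases}
y_{n} = \arg\min_{y \in \Sigma}\left\{ \lambda f(x_{n}, y) + \tfrac{1}{2}\|x_{n} - y\|^{2} \right\},\\[4pt]
x_{n+1} = \arg\min_{y \in \Sigma}\left\{ \lambda f(y_{n}, y) + \tfrac{1}{2}\|x_{n} - y\|^{2} \right\},
\end{cases}
\qquad 0 < \lambda < \min\left\{ \tfrac{1}{2c_{1}},\, \tfrac{1}{2c_{2}} \right\}.
```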
The iterative sequence created by this approach exhibits only weak convergence, and prior knowledge of the Lipschitz-type constants is necessary in order to use it. Lipschitz-type parameters are frequently unknown or difficult to calculate. To address this issue, Hieu et al. [30] introduced an adaptation of the approach in [31] for equilibrium problems, in which the starting point and step-size parameters are selected so that the method runs without prior knowledge of the Lipschitz-type constants.
To solve pseudomonotone equilibrium problems, the authors of [32] suggested a non-convex combination iterative technique. Its main contribution is a strongly convergent iterative sequence obtained without the need for hybrid projection or viscosity techniques. The algorithm is specified by a choice of control parameters together with a step-size condition.
The main objective of this study is to use well-known projection algorithms that are, in general, easier to apply due to their efficient and simple mathematical computations. Inspired by the works of [30,33], we design and adapt an explicit subgradient extragradient method to solve pseudomonotone equilibrium problems as well as particular classes of variational inequality and fixed-point problems. Our techniques are a variation on the approaches described in [32]. Strong convergence of the sequences generated by the two methods is established under specific, mild conditions. Some applications to variational inequality and fixed-point problems are given, and numerical experiments show that the proposed strategy is more effective than the existing one [32].
The rest of the article is organized as follows:
Section 2 includes basic definitions and lemmas.
Section 3 proposes new methods and their convergence analysis theorems.
Section 4 contains several applications of our findings to variational inequality and fixed-point problems.
Section 5 contains numerical tests to demonstrate the computational effectiveness of our proposed methods.
3. Main Results
We introduce a method and establish strong convergence results for it. The detailed algorithm is as follows:
The following lemma shows that the step-size sequence generated by the above formula is monotonically decreasing and bounded, as required for the convergence of the iterative sequence.
Lemma 6. The step-size sequence is monotonically decreasing, bounded below by a positive constant, and hence convergent.
Proof. It is straightforward that the sequence decreases monotonically. Moreover, it is bounded below by a positive constant; thus, there exists a real number to which it converges. □
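For reference, a representative self-adaptive rule of this type from the related literature (e.g., [30]), written in hypothetical notation (iterates x_n, y_n, z_n and a constant μ ∈ (0, 1); the exact formula in Algorithm 1 may differ), is:

```latex
\lambda_{n+1} =
\begin{cases}
\min\left\{\lambda_{n},\; \dfrac{\mu\left(\|x_{n}-y_{n}\|^{2}+\|z_{n}-y_{n}\|^{2}\right)}{2\left[f(x_{n},z_{n})-f(x_{n},y_{n})-f(y_{n},z_{n})\right]}\right\}, & \text{if } f(x_{n},z_{n})-f(x_{n},y_{n})-f(y_{n},z_{n})>0,\\[6pt]
\lambda_{n}, & \text{otherwise}.
\end{cases}
```

Under Lipschitz-type continuity of f with constants c₁ and c₂, the ratio above is bounded below by μ / (2 max{c₁, c₂}), which yields the positive lower bound of the kind asserted in Lemma 6.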
The following lemma can be used to verify the boundedness of an iterative sequence.
Lemma 7. Let the bifunction satisfy conditions (1)–(4). Then, for any feasible point, the stated estimate holds.
Proof. By the definition of the iterate and Lemma 1, we obtain the first inequality. From the definition of the subsequent iterate we obtain another, and using the value of the step size we can write a further estimate. Expressions (2)–(4) then imply the required result (see Lemma 3.3 in [42]). □
The strong convergence analysis for Algorithm 1 is presented in the following theorem.
Algorithm 1 Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
Step 0: Choose the initial point and control parameters satisfying the stated conditions.
Step 1: If the stopping criterion holds, stop; the current iterate solves the problem. Otherwise, go to the next step.
Step 2: Choose an element satisfying the stated condition and generate a half-space.
Step 3: Solve the minimization subproblem over the half-space.
Step 4: Revise the step size as stated, set n := n + 1, and move back to Step 1.
Theorem 1. Let a sequence be generated by Algorithm 1. Then, the sequence converges strongly to an element of the solution set.
Proof. Given the assumption on the parameters, there is a number with the stated property; as a result, there exists a finite index beyond which the required inequality holds. Using Lemma 3 (i), we derive, for any such index,
It is deduced that the sequence is bounded. By Lemma 3 (i), we have the next estimate, and by Lemma 3 (ii) together with (9), expression (10) implies (see Equation (3.6) in [32])
The remainder of the proof can be split into two cases:
Case 1: Suppose there exists an index beyond which the sequence of norms is monotone. Then its limit exists, and by relation (7) we have
The existence of this limit provides the next estimates, and accordingly the sequence is bounded. Hence, we may select a subsequence converging weakly to a certain point such that
From (13), the corresponding subsequence also converges weakly to the same point.
Due to expression (3), we obtain the next inequality. Letting the index tend to infinity entails the following relation. As a result, using (15) and Lemma 2 (ii), we derive
The desired result now follows from the assumption on the parameters together with (11), (13), (14), (18), and Lemma 4.
Case 2: Assume that there exists a subsequence along which the norms increase. Consequently, according to Lemma 5, there is a nondecreasing sequence of indices such that
By expression (7), we have the next relation; the above expressions then imply a further estimate. By arguments identical to those leading to expression (18), and from expression (11), we obtain
The given assumption implies the next relation, and expressions (19) and (25) imply a further one. Because of this, expressions (21) and (22) yield the final estimate. Consequently, the claim follows; this is the required result. □
We now present a modification of Algorithm 1 and prove a strong convergence theorem for it. For the purpose of simplicity, we adopt the conventional notation. The detailed algorithm is as follows:
Lemma 8. Let the bifunction satisfy conditions (1)–(4). Then, for any feasible point, the stated estimate holds.
The strong convergence analysis for Algorithm 2 is presented in the following theorem.
Algorithm 2 Modified Self-Adaptive Explicit Extragradient Method with Non-Convex Combination
Step 0: Choose the initial point and control parameters satisfying the stated conditions.
Step 1: If the stopping criterion holds, the current iterate is a solution of problem (EP). Otherwise, go to the next step.
Step 2: Choose an element satisfying the stated condition and generate a half-space.
Step 3: Solve the minimization subproblem over the half-space.
Step 4: Modify the step size as stated, set n := n + 1, and go back to Step 1.
Theorem 2. Let a sequence be generated by Algorithm 2 under conditions (1)–(4). Then, the sequence converges strongly to an element of the solution set.
Proof. It is given that there exists a fixed number with the stated property, and thus a further fixed number such that the corresponding inequality holds. Combining expressions (28) and (29), we obtain the next relation, which together with Lemma 3 provides (see Equation (3.17) in [32])
The rest of the proof is divided into two cases:
Case 1: Assume that there exists an integer beyond which the sequence of norms is monotone; thus, the limit exists. By expression (28), together with the assumptions on the control parameters, this yields
As a result, the sequence is bounded, and we may choose a subsequence that converges weakly to a certain point such that
As with expression (3), combined with (34), we have the next inequality. Letting the index tend to infinity indicates the following relation. Finally, by expression (35) and Lemma 2, we obtain
The needed result is obtained using Equation (31) and Lemma 4.
Case 2: Assume that there exists a subsequence along which the norms increase. Thus, by Lemma 5, there exists a nondecreasing sequence of indices with the stated properties. Using expression (31), we have
The remaining proof is analogous to Case 2 in Theorem 1. □
4. Applications
In this section, we apply our main results to solve fixed-point and variational inequality problems. An operator on Σ is said to be:
(i)
a κ-strict pseudocontraction [43] on Σ if it satisfies the defining inequality, which has an equivalent inner-product formulation;
(ii)
weakly sequentially continuous on Σ if it maps weakly convergent sequences to weakly convergent sequences.
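In standard notation, writing T for the operator and I for the identity (our symbols, which may differ from the original), these definitions read:

```latex
\text{(i)}\quad \|Tx - Ty\|^{2} \le \|x - y\|^{2} + \kappa\,\|(I - T)x - (I - T)y\|^{2}
\quad \forall x, y \in \Sigma,
```
```latex
\text{equivalently}\quad
\langle Tx - Ty,\, x - y\rangle \le \|x - y\|^{2} - \tfrac{1-\kappa}{2}\,\|(I - T)x - (I - T)y\|^{2};
```
```latex
\text{(ii)}\quad x_{n} \rightharpoonup x \ \Longrightarrow\ Tx_{n} \rightharpoonup Tx.
```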
Note: If we take the bifunction induced by the operator in the standard way, the equilibrium problem converts into the fixed-point problem, and the corresponding quantities in the algorithm simplify accordingly (for more information, see [32]).
The following fixed-point theorems are derived from the results in
Section 3.
Corollary 1. Suppose that Σ is a nonempty, closed, and convex subset of a Hilbert space, and let the operator be a weakly sequentially continuous κ-strict pseudocontraction with a nonempty fixed-point set. Choose the control parameters as stated, and create the sequence with the relevant step-size rule. Then, the sequence converges strongly to a fixed point of the operator.
Corollary 2. Suppose that Σ is a nonempty, closed, and convex subset of a Hilbert space, and let the operator be a weakly sequentially continuous κ-strict pseudocontraction with a nonempty fixed-point set. Choose the control parameters as stated, and create the sequence with the relevant step-size rule. Then, the sequence converges strongly to a fixed point of the operator.
The variational inequality problem is presented as follows:
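In our notation (G the operator and Σ the feasible set; the symbols may differ from the original), this problem reads:

```latex
\text{Find } x^{*} \in \Sigma \ \text{ such that } \ \langle G(x^{*}),\, y - x^{*} \rangle \ge 0 \quad \text{for all } y \in \Sigma.
```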
An operator G is said to be:
- (i)
L-Lipschitz continuous on Σ if ‖G(x) − G(y)‖ ≤ L‖x − y‖ for all x, y ∈ Σ;
- (ii)
pseudomonotone on Σ if ⟨G(x), y − x⟩ ≥ 0 implies ⟨G(y), y − x⟩ ≥ 0 for all x, y ∈ Σ.
Note: If the bifunction is chosen as ⟨G(x), y − x⟩ for all x, y ∈ Σ, the equilibrium problem converts into a variational inequality problem (for more information, see [44]). Substituting this bifunction into Algorithm 1, the minimization steps reduce to metric projections, so the method takes an explicit projection form.
Assumption 1. Assume that G fulfills the following conditions:
- (i)
G is pseudomonotone on Σ, and the solution set of the variational inequality is nonempty;
- (ii)
G is L-Lipschitz continuous on Σ with L > 0;
- (iii)
G satisfies the stated continuity condition along weakly convergent sequences.
Corollary 3. Let G be an operator satisfying Assumption 1. Choose the control parameters as stated, and generate the sequence with the relevant step-size rule. Then, the sequence converges strongly to a solution of the variational inequality problem.
Corollary 4. Let G be an operator satisfying Assumption 1. Choose the control parameters as stated, and generate the sequence with the relevant step-size rule. Then, the sequence converges strongly to a solution of the variational inequality problem.
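To make the projection template behind Corollaries 3 and 4 concrete, the following is a minimal fixed-step extragradient sketch for a variational inequality in ℝ². Everything in it is a hypothetical illustration, not the paper's algorithm: the operator G (a 90-degree rotation, which is monotone and 1-Lipschitz), the unit-ball feasible set, and the step size λ = 0.4 are our choices, and the sketch omits the paper's self-adaptive step size and non-convex combination. For this operator, the unique solution of the variational inequality over the unit ball is the origin.

```python
import math

def proj_unit_ball(x):
    """Euclidean projection onto the closed unit ball in R^2."""
    nrm = math.hypot(x[0], x[1])
    return x if nrm <= 1.0 else [xi / nrm for xi in x]

def G(x):
    """Hypothetical monotone, 1-Lipschitz operator: 90-degree rotation.
    Plain projected-gradient iteration fails for it, but extragradient converges."""
    return [x[1], -x[0]]

def extragradient(x, lam=0.4, iters=200):
    """Korpelevich-style extragradient: two projections per iteration."""
    for _ in range(iters):
        gx = G(x)
        # Prediction step: y_n = P_Sigma(x_n - lam * G(x_n))
        y = proj_unit_ball([x[i] - lam * gx[i] for i in range(2)])
        gy = G(y)
        # Correction step: x_{n+1} = P_Sigma(x_n - lam * G(y_n))
        x = proj_unit_ball([x[i] - lam * gy[i] for i in range(2)])
    return x

x_star = extragradient([1.0, 0.0])  # converges toward the solution (0, 0)
```

With λ < 1/L the iterates approach the solution; for this rotation operator the distance to the origin shrinks by a constant factor per step, which is why the fixed step size suffices here.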
5. Numerical Illustration
The computational results in this section show that our proposed algorithms are more efficient than Algorithms 3.1 and 3.2 in [32]. The MATLAB programs were executed in MATLAB R2018b (version 9.5) on a PC with an Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz and 4.00 GB of RAM. In all our algorithms, we used the built-in MATLAB fmincon function to solve the minimization subproblems. (i) The settings of the design variables for Algorithm 3.1 (Algo. 3.1) and Algorithm 3.2 (Algo. 3.2) in [32] take different values, which are given in each example. (ii) The settings of the design variables for Algorithm 1 (Algo. 1) and Algorithm 2 (Algo. 2) are as stated below.
Example 1. Let us consider a bifunction represented as stated, with the convex feasible set defined accordingly. The bifunction is Lipschitz-type continuous on this set and meets conditions (1)–(4). The obtained simulations are shown in Figure 1 and Figure 2 and in Table 1 and Table 2.

Example 2. Following [29], the bifunction may be written in the stated form, where A and B are the given matrices; the Lipschitz parameters are also taken from [29]. The feasible set Σ and its subset are given as stated.

Example 3. Consider the Hilbert space with the stated inner product, and suppose the feasible set is the unit ball. Define the operator G as stated; as illustrated in [45], G is monotone and L-Lipschitz continuous. Figure 5 and Figure 6 and Table 5 and Table 6 illustrate the numerical results.

Discussion of the Numerical Experiments: The following conclusions may be drawn from the experiments outlined above. (i) Examples 1–3 report data for several methods in both finite- and infinite-dimensional domains; the proposed algorithms outperformed the alternatives in terms of the number of iterations and elapsed time in practically all circumstances. (ii) In most cases, the scale of the problem and the relative standard deviation used impact the algorithms' effectiveness. (iii) An inappropriate initial variable step size generates a hump in the convergence graphs in all examples; it has no impact on the overall effectiveness of the algorithms. (iv) For large-dimensional problems, all approaches typically took longer and showed significant variation in execution time, whereas the number of iterations varied considerably less.