Article

Inertial Extra-Gradient Method for Solving a Family of Strongly Pseudomonotone Equilibrium Problems in Real Hilbert Spaces with Application in Variational Inequality Problem

1
KMUTTFixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
2
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
4
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
5
Department of Mathematics, College of Science and Arts, King Abdulaziz University, P.O. Box 344, Rabigh 21911, Saudi Arabia
6
Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
*
Authors to whom correspondence should be addressed.
Symmetry 2020, 12(4), 503; https://doi.org/10.3390/sym12040503
Submission received: 19 February 2020 / Revised: 4 March 2020 / Accepted: 16 March 2020 / Published: 1 April 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

In this paper, we propose a new method that incorporates an inertial step into the extragradient method for solving strongly pseudomonotone equilibrium problems. The method requires the bifunction to be strongly pseudomonotone and to satisfy a certain Lipschitz-type condition. A strong convergence result is provided under mild conditions, and the iterative sequence is constructed without prior knowledge of the Lipschitz-type constants of the cost bifunction. This is possible because the method operates with a stepsize sequence that is non-summable and converges slowly to zero. For numerical illustration, we analyze a well-known equilibrium model to support the established convergence result; the proposed method shows a consistent improvement over the performance of existing methods.

1. Introduction

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $E$ and let $f : E \times E \to \mathbb{R}$ be a bifunction such that $f(u, u) = 0$ for all $u \in C$. The equilibrium problem [1] for the bifunction $f$ on $C$ is defined as follows:
$$\text{Find } u^* \in C \text{ such that } f(u^*, v) \ge 0, \quad \text{for all } v \in C. \tag{1}$$
The equilibrium problem (EP for short) was originally introduced in this unifying form by Blum and Oettli [1] in 1994, who also provided a comprehensive study of its theoretical properties. This formulation offers a unified way to deal with a wide range of topics arising in the social sciences, economics, finance, image restoration, ecology, transport, networking, elasticity and optimization (for more details see [2,3,4]). The equilibrium problem covers several mathematical problems as special cases, namely minimization problems, fixed point problems, variational inequality problems (VIP for short), Nash equilibria of non-cooperative games, complementarity problems, vector minimization problems and saddle point problems (see, e.g., [1,5,6,7]). On the other hand, iterative methods are basic and powerful tools for computing numerical solutions of equilibrium problems. In this direction, two well-established approaches are used: the proximal point method [8] and the auxiliary problem principle [9]. The proximal point method was originally developed by Martinet [10] for monotone variational inequality problems, and Rockafellar [11] later extended this idea to monotone operators. Moudafi [8] proposed the proximal point method for monotone equilibrium problems, and Konnov [12] provided a different variant of the proximal point method under weaker assumptions for equilibrium problems. Several numerical methods based on these techniques have been developed to solve different classes of equilibrium problems in finite- and infinite-dimensional spaces (for more details see [12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]). More specifically, Hieu et al. [30] generated an iterative sequence $\{u_n\}$ recursively as
$$u_0 \in C, \quad v_n = \arg\min_{y \in C}\Big\{ \xi_n f(u_n, y) + \frac{1}{2}\|u_n - y\|^2 \Big\}, \quad u_{n+1} = \arg\min_{y \in C}\Big\{ \xi_n f(v_n, y) + \frac{1}{2}\|u_n - y\|^2 \Big\},$$
where $\{\xi_n\}$ is a sequence of positive real numbers satisfying the following conditions:
$$(T1):\ \lim_{n \to \infty} \xi_n = 0 \quad \text{and} \quad (T2):\ \sum_{n=0}^{\infty} \xi_n = +\infty.$$
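For instance, the choices $\xi_n = \frac{1}{n+1}$ and $\xi_n = \frac{1}{\log(n+3)}$ used in the numerical experiments of Section 5 satisfy both (T1) and (T2), whereas a summable sequence such as $\xi_n = \frac{1}{(n+1)^2}$ satisfies (T1) but violates (T2).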
In addition, inertial-type methods are based on the heavy-ball approach for second-order dynamical systems. Polyak [31] initially considered inertial extrapolation as an acceleration technique for smooth convex minimization problems. Inertial-type methods are two-step iterative schemes in which the next iterate is obtained from the previous two iterates (see [31,32]). An inertial extrapolation term is used to speed up the iterative sequence toward the desired solution. Numerical studies suggest that inertial effects often improve the performance of an algorithm in terms of the number of iterations and the execution time. These two advantages have increased researchers' interest in developing new inertial methods, and many such methods have already been established for different classes of variational inequality problems (for more details see [33,34,35,36,37]).
In this article, we focus on the second direction, which consists of projection methods; these are well recognized and easy to implement because of their convenient mathematical structure. Building on the work of Hieu et al. [30] and Vinh et al. [38], we introduce an inertial extragradient method for solving a specific class of equilibrium problems in which $f$ is a strongly pseudomonotone bifunction. Our method works without any knowledge of the Lipschitz-type and strong pseudomonotonicity constants of the bifunction. Its key feature is a stepsize sequence that converges slowly to zero and is non-summable. Due to this choice and the strong pseudomonotonicity of the bifunction, strong convergence of the method is obtained. In particular, such constants do not need to be known beforehand; that is, they are not input parameters of the method. Finally, the numerical experiments indicate that the proposed method is more effective than the family of existing methods [30,39,40].
The rest of this paper is organized as follows: Section 2 provides some preliminaries and basic results that are used throughout the paper. Section 3 presents the proposed method and the corresponding convergence result. Section 4 contains an application of our results to variational inequality problems. Section 5 reports the numerical experiments that illustrate the efficiency of the proposed algorithm.

2. Background

Now we provide important lemmas, definitions and other concepts that are used throughout the convergence analysis. As before, $C$ denotes a nonempty, closed and convex subset of a real Hilbert space $E$, and $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ denote the inner product and the norm on $E$, respectively. Let $G : E \to E$ be an operator and let $VI(G, C)$ denote the solution set of the variational inequality problem for $G$ over the set $C$. Moreover, $EP(f, C)$ stands for the solution set of the equilibrium problem over the set $C$, and $u^*$ denotes an arbitrary element of $EP(f, C)$ or $VI(G, C)$.
In addition, let $h : C \to \mathbb{R}$ be a convex function. The subdifferential of $h$ at $u \in C$ is defined by
$$\partial h(u) = \{ z \in E : h(v) - h(u) \ge \langle z, v - u \rangle, \ \forall v \in C \}.$$
The normal cone of $C$ at $u \in C$ is given by
$$N_C(u) = \{ z \in E : \langle z, v - u \rangle \le 0, \ \forall v \in C \}.$$
Definition 1
([41]). The metric projection $P_C(u)$ of $u \in E$ onto the closed, convex subset $C$ of $E$ is defined as
$$P_C(u) = \arg\min_{v \in C} \{ \|v - u\| \}.$$
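As a concrete illustration (not part of the original text), the following Python sketch evaluates $P_C$ in closed form for two simple feasible sets, a box and a Euclidean ball; the box is the kind of constraint set used later in the numerical experiments.

```python
import numpy as np

def project_box(u, lo, hi):
    # P_C(u) for the box C = {x : lo <= x_i <= hi} is a componentwise clip.
    return np.clip(u, lo, hi)

def project_ball(u, center, radius):
    # P_C(u) for the Euclidean ball C = {x : ||x - center|| <= radius}.
    d = u - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return u.copy()
    return center + radius * d / dist

u = np.array([7.0, -2.0, 3.5])
print(project_box(u, -5.0, 5.0))          # -> [ 5.  -2.   3.5]
print(project_ball(u, np.zeros(3), 1.0))  # point on the unit sphere in the direction of u
```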
Next, we recall various notions of monotonicity for bifunctions (see [1,42] for more details).
Definition 2.
The bifunction $f : E \times E \to \mathbb{R}$ on $C$ for $\gamma > 0$ is said to be:
(i).
Strongly monotone if $f(u, v) + f(v, u) \le -\gamma \|u - v\|^2, \ \forall u, v \in C$;
(ii).
Monotone if $f(u, v) + f(v, u) \le 0, \ \forall u, v \in C$;
(iii).
Strongly pseudomonotone if $f(u, v) \ge 0 \Longrightarrow f(v, u) \le -\gamma \|u - v\|^2, \ \forall u, v \in C$;
(iv).
Pseudomonotone if $f(u, v) \ge 0 \Longrightarrow f(v, u) \le 0, \ \forall u, v \in C$;
(v).
Satisfying the Lipschitz-type condition on $C$ if there are two positive real numbers $c_1, c_2$ such that
$$f(u, w) \le f(u, v) + f(v, w) + c_1 \|u - v\|^2 + c_2 \|v - w\|^2, \quad \forall u, v, w \in C.$$
Remark 1.
We obtain the following results from the above definitions.
$$\text{strongly monotone} \Longrightarrow \text{monotone} \Longrightarrow \text{pseudomonotone};$$
$$\text{strongly monotone} \Longrightarrow \text{strongly pseudomonotone} \Longrightarrow \text{pseudomonotone}.$$
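To make these notions concrete, here is a small numerical check of our own (not from the paper): for the bifunction $f(u, v) = \langle A u, v - u \rangle$ with $A$ symmetric positive definite, one has $f(u, v) + f(v, u) = -\langle A(u - v), u - v \rangle \le -\lambda_{\min}(A)\|u - v\|^2$, so $f$ is strongly monotone (and hence strongly pseudomonotone) with $\gamma = \lambda_{\min}(A)$. The sketch below verifies this on random points.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
gamma = np.linalg.eigvalsh(A).min()  # strong monotonicity modulus

def f(u, v):
    return A @ u @ (v - u)           # f(u, v) = <Au, v - u>

for _ in range(1000):
    u, v = rng.standard_normal(n), rng.standard_normal(n)
    assert f(u, v) + f(v, u) <= -gamma * np.linalg.norm(u - v) ** 2 + 1e-9
print("strong monotonicity with gamma =", gamma, "verified on 1000 random pairs")
```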
This section concludes with a few specific lemmas that are useful in studying the convergence analysis of our proposed method.
Lemma 1
([43]). Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $E$ and $h : C \to \mathbb{R}$ be a convex, subdifferentiable and lower semicontinuous function on $C$. Moreover, $u \in C$ is a minimizer of the function $h$ if and only if $0 \in \partial h(u) + N_C(u)$, where $\partial h(u)$ and $N_C(u)$ denote the subdifferential of $h$ at $u$ and the normal cone of $C$ at $u$, respectively.
Lemma 2
([44]). For all $a, b \in E$ and $\mu \in \mathbb{R}$, the following identity holds:
$$\|\mu a + (1 - \mu) b\|^2 = \mu \|a\|^2 + (1 - \mu)\|b\|^2 - \mu(1 - \mu)\|a - b\|^2.$$
Lemma 3
([45]). Suppose $\{a_n\}$ and $\{t_n\}$ are two sequences of nonnegative real numbers satisfying the inequality $a_{n+1} \le a_n + t_n$ for all $n \in \mathbb{N}$. If $\sum t_n < \infty$, then $\lim_{n \to \infty} a_n$ exists.
Lemma 4
([46]). Let $\{a_n\}$ and $\{b_n\}$ be two sequences of nonnegative real numbers. If $\sum_{n=1}^{\infty} a_n = \infty$ and $\sum_{n=1}^{\infty} a_n b_n < \infty$, then $\liminf_{n \to \infty} b_n = 0$.

3. Convergence Analysis for an Algorithm

We develop an algorithmic procedure that consists of two strongly convex minimization problems together with an inertial term used to improve the convergence speed of the iterative sequence; accordingly, it is classified as an inertial extragradient method for strongly pseudomonotone equilibrium problems. We make the following assumptions on the bifunction, which are required to achieve the strong convergence of the iterative sequence generated by Algorithm 1.
Assumption 1.
Let $f : E \times E \to \mathbb{R}$ be a bifunction such that:
f1.
$f(u, u) = 0$ for all $u \in C$ and $f$ is strongly pseudomonotone on $C$;
f2.
$f$ satisfies the Lipschitz-type condition with two positive constants $c_1$ and $c_2$;
f3.
$f(u, \cdot)$ is convex and subdifferentiable on $C$ for each fixed $u \in C$.
Algorithm 1 (Inertial extragradient algorithm for strongly pseudomonotone equilibrium problems).
  • Initialization: Choose $u_{-1}, u_0 \in C$, $\theta \in [0, 1)$ and a sequence $\{\epsilon_n\} \subset [0, +\infty)$ such that
    $$\sum_{n=0}^{+\infty} \epsilon_n < +\infty$$
    holds. In addition, let $\{\xi_n\}$ be a sequence of positive real numbers satisfying the following conditions:
    $$(T1):\ \lim_{n \to \infty} \xi_n = 0 \quad \text{and} \quad (T2):\ \sum_{n=0}^{\infty} \xi_n = +\infty.$$
  • Iterative steps: Choose $\vartheta_n$ such that $0 \le \vartheta_n \le \beta_n$, where
    $$\beta_n = \begin{cases} \min\Big\{\theta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \ne u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$
  • Step 1: Compute
    $$v_n = \arg\min_{y \in C}\Big\{ \xi_n f(w_n, y) + \frac{1}{2}\|w_n - y\|^2 \Big\},$$
    where $w_n = u_n + \vartheta_n(u_n - u_{n-1})$. If $w_n = v_n$, then STOP. Otherwise,
  • Step 2: Compute
    $$u_{n+1} = \arg\min_{y \in C}\Big\{ \xi_n f(v_n, y) + \frac{1}{2}\|w_n - y\|^2 \Big\}.$$
    Set $n := n + 1$ and go back to Iterative steps.
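To illustrate how Algorithm 1 can be realized in practice, here is a minimal Python sketch of our own (not code from the paper). It assumes the bifunction has the linear form $f(u, v) = \langle P u + Q v + q, v - u \rangle$ used in the experiments of Section 5, so that each arg min subproblem is a box-constrained convex quadratic program; we solve it with SciPy's L-BFGS-B, take $\vartheta_n = \beta_n$, and any other convex solver or constraint set could be substituted.

```python
import numpy as np
from scipy.optimize import minimize

def inertial_extragradient(P, Q, q, lo, hi, u_prev, u0, theta=0.5,
                           eps=lambda n: 1.0 / (n + 1) ** 2,
                           xi=lambda n: 1.0 / (n + 1),
                           max_iter=200, tol=1e-6):
    """Sketch of Algorithm 1 for f(u, v) = <P u + Q v + q, v - u> on the box C = [lo, hi]^n."""
    def argmin_step(x, base, xi_n):
        # min over y in C of  xi_n * f(base, y) + 0.5 * ||x - y||^2
        c = P @ base + q
        obj = lambda y: xi_n * ((c + Q @ y) @ (y - base)) + 0.5 * np.dot(x - y, x - y)
        grad = lambda y: xi_n * (c + Q @ (2.0 * y - base)) + (y - x)   # Q assumed symmetric
        bounds = [(lo, hi)] * len(x)
        return minimize(obj, x, jac=grad, bounds=bounds, method="L-BFGS-B").x

    u_nm1, u_n = u_prev, u0
    for n in range(max_iter):
        diff = np.linalg.norm(u_n - u_nm1)
        beta = min(theta, eps(n) / diff) if diff > 0 else theta   # beta_n; we take theta_n = beta_n
        w = u_n + beta * (u_n - u_nm1)                            # w_n = u_n + theta_n (u_n - u_{n-1})
        v = argmin_step(w, w, xi(n))                              # Step 1: f evaluated at w_n
        if np.linalg.norm(w - v) < tol:                           # stopping criterion w_n = v_n
            return v
        u_nm1, u_n = u_n, argmin_step(w, v, xi(n))                # Step 2: f evaluated at v_n
    return u_n
```

For a general bifunction, only the inner arg min routine would change; the outer inertial loop stays the same.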
Remark 2.
(i).
Notice that if $\theta = 0$, then the above method is equivalent to the standard extragradient method in, e.g., [30].
(ii).
Evidently, from the expressions (3) and (5) we have
$$\sum_{n=1}^{\infty} \vartheta_n \|u_n - u_{n-1}\| \le \sum_{n=1}^{\infty} \beta_n \|u_n - u_{n-1}\| < +\infty,$$
which implies that
$$\lim_{n \to \infty} \beta_n \|u_n - u_{n-1}\| = 0.$$
Next, we prove the validity of the stopping criterion in Algorithm 1.
Lemma 5.
If $v_n = w_n$ in Algorithm 1, then $w_n$ is a solution of problem (1).
Proof. 
By the definition of $v_n$ and Lemma 1, we can write
$$0 \in \partial_2 \Big\{ \xi_n f(w_n, y) + \frac{1}{2}\|w_n - y\|^2 \Big\}(v_n) + N_C(v_n).$$
Thus, there exist $\eta \in \partial_2 f(w_n, v_n)$ and $\bar{\eta} \in N_C(v_n)$ such that
$$\xi_n \eta + v_n - w_n + \bar{\eta} = 0.$$
Since $v_n = w_n$, this gives
$$\xi_n \langle \eta, y - v_n \rangle + \langle \bar{\eta}, y - v_n \rangle = 0, \quad \forall y \in C.$$
Since $\bar{\eta} \in N_C(v_n)$, we have $\langle \bar{\eta}, y - v_n \rangle \le 0$, which together with the above expression implies that
$$\xi_n \langle \eta, y - v_n \rangle \ge 0, \quad \forall y \in C.$$
Furthermore, by $\eta \in \partial_2 f(w_n, v_n)$ and the definition of the subdifferential, we have
$$f(w_n, y) - f(w_n, v_n) \ge \langle \eta, y - v_n \rangle, \quad \forall y \in C.$$
The expressions (8) and (9) together with $\xi_n \in (0, +\infty)$ imply that
$$f(w_n, y) - f(w_n, v_n) \ge 0, \quad \forall y \in C.$$
Since $v_n = w_n$ and $f(w_n, w_n) = 0$ by assumption (f1), this gives $f(w_n, y) \ge 0$ for all $y \in C$.
Lemma 6.
The iterates of Algorithm 1 satisfy the following important inequality:
$$\xi_n f(v_n, y) - \xi_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, y - u_{n+1} \rangle, \quad \forall y \in C.$$
Proof. 
By the definition of $u_{n+1}$, we have
$$0 \in \partial_2 \Big\{ \xi_n f(v_n, y) + \frac{1}{2}\|w_n - y\|^2 \Big\}(u_{n+1}) + N_C(u_{n+1}).$$
Thus, for some $\eta \in \partial_2 f(v_n, u_{n+1})$ there exists $\bar{\eta} \in N_C(u_{n+1})$ such that
$$\xi_n \eta + u_{n+1} - w_n + \bar{\eta} = 0.$$
The above expression can be written as
$$\langle w_n - u_{n+1}, y - u_{n+1} \rangle = \xi_n \langle \eta, y - u_{n+1} \rangle + \langle \bar{\eta}, y - u_{n+1} \rangle, \quad \forall y \in C.$$
Since $\bar{\eta} \in N_C(u_{n+1})$, we have $\langle \bar{\eta}, y - u_{n+1} \rangle \le 0$ for all $y \in C$.
Thus, we obtain
$$\langle w_n - u_{n+1}, y - u_{n+1} \rangle \le \xi_n \langle \eta, y - u_{n+1} \rangle, \quad \forall y \in C.$$
By $\eta \in \partial_2 f(v_n, u_{n+1})$ and the definition of the subdifferential, we obtain
$$f(v_n, y) - f(v_n, u_{n+1}) \ge \langle \eta, y - u_{n+1} \rangle, \quad \forall y \in C.$$
Combining the expressions (11) and (12), we obtain
$$\xi_n f(v_n, y) - \xi_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, y - u_{n+1} \rangle, \quad \forall y \in C.$$
Lemma 7.
The iterates of Algorithm 1 also satisfy the following inequality:
$$\xi_n f(w_n, y) - \xi_n f(w_n, v_n) \ge \langle w_n - v_n, y - v_n \rangle, \quad \forall y \in C.$$
Proof. 
The proof follows the same steps as the proof of Lemma 6. □
Next, we give a crucial inequality which is useful to prove the boundedness of the iterative sequence generated by Algorithm 1.
Lemma 8.
Suppose that assumptions (f1)–(f3) of Assumption 1 hold and that $EP(f, C) \ne \emptyset$. Then, for each $u^* \in EP(f, C)$, we have
$$\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - (1 - 2 c_1 \xi_n)\|w_n - v_n\|^2 - (1 - 2 c_2 \xi_n)\|v_n - u_{n+1}\|^2 - 2\gamma \xi_n \|v_n - u^*\|^2.$$
Proof. 
Substituting $y = u_{n+1}$ in Lemma 7, we obtain
$$\xi_n f(w_n, u_{n+1}) - \xi_n f(w_n, v_n) \ge \langle w_n - v_n, u_{n+1} - v_n \rangle.$$
Next, substituting $y = u^*$ in Lemma 6, we have
$$\xi_n f(v_n, u^*) - \xi_n f(v_n, u_{n+1}) \ge \langle w_n - u_{n+1}, u^* - u_{n+1} \rangle.$$
Since $f(u^*, v_n) \ge 0$, assumption (f1) gives $f(v_n, u^*) \le -\gamma \|v_n - u^*\|^2$, so that
$$\langle w_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \xi_n f(v_n, u_{n+1}) + \gamma \xi_n \|v_n - u^*\|^2.$$
By the Lipschitz-type continuity of the bifunction $f$, we have
$$f(w_n, u_{n+1}) \le f(w_n, v_n) + f(v_n, u_{n+1}) + c_1 \|w_n - v_n\|^2 + c_2 \|v_n - u_{n+1}\|^2.$$
From the expressions (16) and (17), we have
$$\langle w_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \xi_n \big\{ f(w_n, u_{n+1}) - f(w_n, v_n) \big\} - c_1 \xi_n \|w_n - v_n\|^2 - c_2 \xi_n \|v_n - u_{n+1}\|^2 + \gamma \xi_n \|v_n - u^*\|^2.$$
By the expressions (14) and (18), we get
$$\langle w_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \langle w_n - v_n, u_{n+1} - v_n \rangle - c_1 \xi_n \|w_n - v_n\|^2 - c_2 \xi_n \|v_n - u_{n+1}\|^2 + \gamma \xi_n \|v_n - u^*\|^2.$$
Furthermore, we have the following two identities:
$$2\langle v_n - w_n, v_n - u_{n+1} \rangle = \|w_n - v_n\|^2 + \|u_{n+1} - v_n\|^2 - \|w_n - u_{n+1}\|^2,$$
$$2\langle w_n - u_{n+1}, u_{n+1} - u^* \rangle = \|w_n - u^*\|^2 - \|u_{n+1} - w_n\|^2 - \|u_{n+1} - u^*\|^2.$$
Combining these two identities with the expression (19), we obtain
$$\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2 - (1 - 2 c_1 \xi_n)\|w_n - v_n\|^2 - (1 - 2 c_2 \xi_n)\|v_n - u_{n+1}\|^2 - 2\gamma \xi_n \|v_n - u^*\|^2.$$
Theorem 1.
Let $f : E \times E \to \mathbb{R}$ be a bifunction satisfying Assumption 1. Then the sequences $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ generated by Algorithm 1 converge strongly to the solution $u^* \in EP(f, C)$.
Proof. 
Since $\xi_n \to 0$, there exists $n_0 \in \mathbb{N}$ such that
$$\xi_n \le \min\Big\{ \frac{1}{2 c_1}, \frac{1}{2 c_2} \Big\} \quad \text{for all } n \ge n_0.$$
Using this condition in Lemma 8, we have
$$\|u_{n+1} - u^*\|^2 \le \|w_n - u^*\|^2, \quad \forall n \ge n_0.$$
Furthermore, for all $n \ge n_0$ the above expression can be written as
$$\|u_{n+1} - u^*\| \le \|u_n + \vartheta_n(u_n - u_{n-1}) - u^*\| \le \|u_n - u^*\| + \vartheta_n \|u_n - u_{n-1}\|.$$
Lemma 3 together with the expressions (6) and (22) implies that
$$\lim_{n \to \infty} \|u_n - u^*\| = l, \quad \text{for some finite } l \ge 0.$$
By the definition of $w_n$ in Algorithm 1 and Lemma 2, we have
$$\begin{aligned} \|w_n - u^*\|^2 &= \|u_n + \vartheta_n(u_n - u_{n-1}) - u^*\|^2 = \|(1 + \vartheta_n)(u_n - u^*) - \vartheta_n(u_{n-1} - u^*)\|^2 \\ &= (1 + \vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + \vartheta_n(1 + \vartheta_n)\|u_n - u_{n-1}\|^2 \\ &\le (1 + \vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + 2\vartheta_n\|u_n - u_{n-1}\|^2. \end{aligned}$$
The above expression together with (23) and (7) implies that
$$\lim_{n \to \infty} \|w_n - u^*\| = l.$$
From Lemma 8 and the expression (24), we have
$$\begin{aligned} \|u_{n+1} - u^*\|^2 \le{} & (1 + \vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + 2\vartheta_n\|u_n - u_{n-1}\|^2 \\ & - (1 - 2 c_1 \xi_n)\|w_n - v_n\|^2 - (1 - 2 c_2 \xi_n)\|v_n - u_{n+1}\|^2 - 2\gamma \xi_n \|v_n - u^*\|^2, \end{aligned}$$
which further implies that
$$\begin{aligned} (1 - 2 c_1 \xi_n)\|w_n - v_n\|^2 + (1 - 2 c_2 \xi_n)\|v_n - u_{n+1}\|^2 \le{} & \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2 \\ & + \vartheta_n\big[\|u_n - u^*\|^2 - \|u_{n-1} - u^*\|^2\big] + 2\vartheta_n\|u_n - u_{n-1}\|^2. \end{aligned}$$
Taking the limit as $n \to \infty$ in the expression (27), we get
$$\lim_{n \to \infty} \|w_n - v_n\| = \lim_{n \to \infty} \|v_n - u_{n+1}\| = 0.$$
Thus, the expressions (25) and (28) give
$$\lim_{n \to \infty} \|v_n - u^*\| = l.$$
The expressions (23), (25) and (29) imply that the sequences $\{u_n\}$, $\{w_n\}$ and $\{v_n\}$ are bounded and that, for each $u^* \in EP(f, C)$, the limits $\lim_{n \to \infty}\|u_n - u^*\|^2$, $\lim_{n \to \infty}\|v_n - u^*\|^2$ and $\lim_{n \to \infty}\|w_n - u^*\|^2$ exist. Next, we prove that the sequence $\{u_n\}$ converges strongly to $u^*$. It follows from the expression (26) that, for each $n \ge n_0$,
$$2\gamma \xi_n \|v_n - u^*\|^2 \le \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2 + \vartheta_n\big[\|u_n - u^*\|^2 - \|u_{n-1} - u^*\|^2\big] + 2\vartheta_n\|u_n - u_{n-1}\|^2.$$
Summing the expression (30) over $n = n_0, \ldots, k$ for some $k > n_0$ gives
$$\begin{aligned} \sum_{n=n_0}^{k} 2\gamma \xi_n \|v_n - u^*\|^2 &\le \|u_{n_0} - u^*\|^2 - \|u_{k+1} - u^*\|^2 + 2\theta \sum_{n=n_0}^{k} \|u_n - u_{n-1}\|^2 + \theta\big[\|u_k - u^*\|^2 - \|u_{n_0-1} - u^*\|^2\big] \\ &\le \|u_{n_0} - u^*\|^2 + \theta \|u_k - u^*\|^2 + 2\theta \sum_{n=n_0}^{k} \|u_n - u_{n-1}\|^2 \le M, \end{aligned}$$
for some $M \ge 0$, and letting $k \to \infty$ leads to
$$\sum_{n=1}^{\infty} 2\gamma \xi_n \|v_n - u^*\|^2 < +\infty.$$
It follows from Lemma 4 and the expression (32) that
$$\liminf_{n \to \infty} \|v_n - u^*\| = 0.$$
Thus, the expressions (29) and (33) give
$$\lim_{n \to \infty} \|v_n - u^*\| = 0.$$
From the expressions (7) and (28) and the triangle inequality, we have
$$0 \le \|u_n - v_n\| \le \|u_n - u_{n+1}\| + \|u_{n+1} - v_n\| \to 0 \quad \text{as } n \to \infty.$$
Finally, we get
$$\lim_{n \to \infty} \|u_n - u^*\| = \lim_{n \to \infty} \|w_n - u^*\| = 0.$$
This completes the proof. □
If we take $\theta = 0$ in Algorithm 1, we obtain the result that appeared in Hieu et al. [30].
Corollary 1.
Let $f : E \times E \to \mathbb{R}$ be a bifunction satisfying the assumptions (f1)–(f3). For $u^* \in EP(f, C)$, let the sequences $\{u_n\}$ and $\{v_n\}$ be generated as follows:
i.
Let $u_0 \in C$ and compute
$$v_n = \arg\min_{y \in C}\Big\{ \xi_n f(u_n, y) + \frac{1}{2}\|u_n - y\|^2 \Big\}, \quad u_{n+1} = \arg\min_{y \in C}\Big\{ \xi_n f(v_n, y) + \frac{1}{2}\|u_n - y\|^2 \Big\},$$
where $\{\xi_n\}$ is a sequence of positive real numbers satisfying the following conditions:
$$(T1):\ \lim_{n \to \infty} \xi_n = 0 \quad \text{and} \quad (T2):\ \sum_{n=0}^{\infty} \xi_n = +\infty.$$
Then the sequences $\{u_n\}$ and $\{v_n\}$ converge strongly to the solution $u^* \in EP(f, C)$.

4. Application to Variational Inequality Problems

Now we discuss the application of our results to variational inequality problems involving strongly pseudomonotone, Lipschitz continuous operators. An operator $G : E \to E$ is said to be:
  • strongly pseudomonotone on $C$ with modulus $\gamma > 0$ if
    $$\langle G(u), v - u \rangle \ge 0 \Longrightarrow \langle G(v), u - v \rangle \le -\gamma \|u - v\|^2, \quad \forall u, v \in C;$$
  • $L$-Lipschitz continuous on $C$ if $\|G(u) - G(v)\| \le L\|u - v\|, \ \forall u, v \in C$.
The variational inequality problem is to
$$\text{find } u^* \in C \text{ such that } \langle G(u^*), v - u^* \rangle \ge 0, \quad \forall v \in C.$$
Note:
Suppose that the bifunction is given by $f(u, v) := \langle G(u), v - u \rangle$ for all $u, v \in C$. Then the equilibrium problem turns into a variational inequality problem with Lipschitz-type constants $c_1 = c_2 = \frac{L}{2}$. From the definition of $v_n$ in Algorithm 1 and the above definition of $f$, it follows that
$$v_n = \arg\min_{y \in C}\Big\{ \xi_n f(w_n, y) + \frac{1}{2}\|w_n - y\|^2 \Big\} = \arg\min_{y \in C}\Big\{ \xi_n \langle G(w_n), y - w_n \rangle + \frac{1}{2}\|w_n - y\|^2 \Big\} = \arg\min_{y \in C}\Big\{ \frac{1}{2}\big\|y - \big(w_n - \xi_n G(w_n)\big)\big\|^2 - \frac{\xi_n^2}{2}\|G(w_n)\|^2 \Big\} = P_C\big(w_n - \xi_n G(w_n)\big),$$
and likewise $u_{n+1}$ in Algorithm 1 reduces to
$$u_{n+1} = P_C\big(w_n - \xi_n G(v_n)\big).$$
Assumption 2.
We assume that $G$ satisfies the following conditions:
G1.
$G$ is strongly pseudomonotone on $C$ and $VI(G, C) \ne \emptyset$;
G2.
$G$ is $L$-Lipschitz continuous on $C$ for some constant $L > 0$.
Thus, Algorithm 1 reduces to the following scheme for solving strongly pseudomonotone variational inequality problems.
Corollary 2.
Assume that $G : C \to E$ satisfies (G1)–(G2) of Assumption 2. Let $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ be the sequences generated as follows:
i.
Choose $u_{-1}, u_0 \in C$, $\theta \in [0, 1)$ and a sequence $\{\epsilon_n\} \subset [0, +\infty)$ such that
$$\sum_{n=0}^{+\infty} \epsilon_n < +\infty$$
holds. In addition, let $\{\xi_n\}$ be a sequence of positive real numbers satisfying the following conditions:
$$(T1):\ \lim_{n \to \infty} \xi_n = 0 \quad \text{and} \quad (T2):\ \sum_{n=0}^{\infty} \xi_n = +\infty.$$
ii.
Choose $\vartheta_n$ such that $0 \le \vartheta_n \le \beta_n$, where
$$\beta_n = \begin{cases} \min\Big\{\theta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \ne u_{n-1}, \\ \theta & \text{otherwise}. \end{cases}$$
iii.
Compute
$$w_n = u_n + \vartheta_n(u_n - u_{n-1}), \quad v_n = P_C\big(w_n - \xi_n G(w_n)\big), \quad u_{n+1} = P_C\big(w_n - \xi_n G(v_n)\big).$$
Then the sequences $\{w_n\}$, $\{u_n\}$ and $\{v_n\}$ converge strongly to $u^* \in VI(G, C)$.
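Because the subproblems in steps (i)–(iii) are plain projections, the scheme of Corollary 2 admits a very compact implementation. The following Python sketch is our own illustration, under the assumption that $C$ is a box so that $P_C$ is a componentwise clip and that $\vartheta_n = \beta_n$; any operator $G$ and projection routine satisfying the stated assumptions could be plugged in.

```python
import numpy as np

def inertial_eg_vi(G, project, u_prev, u0, theta=0.5,
                   eps=lambda n: 1.0 / (n + 1) ** 2,
                   xi=lambda n: 1.0 / (n + 1),
                   max_iter=1000, tol=1e-8):
    """Sketch of Corollary 2: inertial extragradient method for VI(G, C).
    `G` maps R^n -> R^n and `project` implements the metric projection P_C."""
    u_nm1, u_n = u_prev, u0
    for n in range(max_iter):
        diff = np.linalg.norm(u_n - u_nm1)
        beta = min(theta, eps(n) / diff) if diff > 0 else theta   # beta_n; we take theta_n = beta_n
        w = u_n + beta * (u_n - u_nm1)                            # inertial step
        v = project(w - xi(n) * G(w))                             # v_n = P_C(w_n - xi_n G(w_n))
        if np.linalg.norm(w - v) < tol:
            return v
        u_nm1, u_n = u_n, project(w - xi(n) * G(v))               # u_{n+1} = P_C(w_n - xi_n G(v_n))
    return u_n

# Usage example: G(u) = A u + b with A symmetric positive definite (strongly
# monotone, hence strongly pseudomonotone) and C = [-5, 5]^n.
rng = np.random.default_rng(1)
n = 10
M = rng.standard_normal((n, n))
A, b = M @ M.T + n * np.eye(n), rng.standard_normal(n)
sol = inertial_eg_vi(lambda u: A @ u + b,
                     lambda x: np.clip(x, -5.0, 5.0),
                     u_prev=np.ones(n), u0=np.ones(n))
print(sol)
```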
By setting $\theta = 0$ in Corollary 2, we get the following result.
Corollary 3.
Assume that $G : C \to E$ satisfies (G1)–(G2) of Assumption 2. Let $\{u_n\}$ and $\{v_n\}$ be the sequences generated as follows:
i.
Choose $u_0 \in C$ and compute
$$v_n = P_C\big(u_n - \xi_n G(u_n)\big), \quad u_{n+1} = P_C\big(u_n - \xi_n G(v_n)\big),$$
where $\{\xi_n\}$ is a sequence of positive real numbers satisfying the following conditions:
$$(T1):\ \lim_{n \to \infty} \xi_n = 0 \quad \text{and} \quad (T2):\ \sum_{n=0}^{\infty} \xi_n = +\infty.$$
Then the sequences $\{u_n\}$ and $\{v_n\}$ converge strongly to $u^* \in VI(G, C)$.

5. Computational Experiment

We present some numerical results to illustrate the efficiency of the proposed method. The MATLAB codes were run in MATLAB version 9.5 (R2018b) on a PC with an Intel(R) Core(TM) i5-6200 CPU @ 2.30 GHz 2.40 GHz and 8.00 GB RAM. In all examples, we use $u_{-1} = u_0 = v_0 = (1, 1, \ldots, 1, 1)^T$; the x-axis shows the number of iterations or the elapsed time (in seconds), while the y-axis shows the value of $D_n$. For each method, the corresponding stopping criterion is used, which ensures that the iterative sequence converges to an element of the solution set. Moreover, we use the following notation for the error terms and other quantities ($n$: dimension of the space; $N$: total number of samples; iter.: average number of iterations; time: average execution time in seconds).
(i).
For Hieu et al. [30] (Algo1 for short), we use
$$D_n = \|u_n - v_n\|^2.$$
(ii).
For Hieu et al. [39] (Algo2 for short), we use
$$D_n = \max\big\{ \|u_{n+1} - v_n\|^2, \|u_{n+1} - u_n\|^2 \big\}.$$
(iii).
For Hieu et al. [40] (Algo3 for short), we use $\theta_n = 0.50$ and
$$D_n = \max\big\{ \|u_{n+1} - v_n\|^2, \|u_{n+1} - w_n\|^2 \big\}.$$
(iv).
For Algorithm 1 (Algo4 for short), we use $\theta = 0.50$, $\epsilon_n = \frac{1}{n^2}$ and
$$D_n = \|w_n - v_n\|^2.$$

5.1. Example 1

Assume that there are $n$ firms producing the same product. Let $u$ denote a vector whose entry $u_i$ represents the quantity of the product produced by firm $i$. We choose the price $P$ as a decreasing affine function of $S = \sum_{i=1}^{n} u_i$, i.e., $P_i(S) = \phi_i - \psi_i S$, where $\phi_i > 0$, $\psi_i > 0$. The profit function of firm $i$ is given by $F_i(u) = P_i(S) u_i - t_i(u_i)$, where $t_i(u_i)$ is the tax and production cost for generating $u_i$. Assume that $C_i = [u_i^{\min}, u_i^{\max}]$ is the action set of firm $i$, so that the strategy set of the whole model is $C := C_1 \times C_2 \times \cdots \times C_n$. Each firm tries to maximize its profit by choosing its own production level under the assumption that the production of the other firms is a given input. The approach commonly used to handle this kind of model is based on the well-known Nash equilibrium concept. We recall that a point $u^* \in C = C_1 \times C_2 \times \cdots \times C_n$ is an equilibrium point of the model if
$$F_i(u^*) \ge F_i(u^*[u_i]), \quad \forall u_i \in C_i, \ \forall i = 1, 2, \ldots, n,$$
where the vector $u^*[u_i]$ is obtained from $u^*$ by replacing $u_i^*$ with $u_i$. Finally, we set $f(u, v) := \varphi(u, v) - \varphi(u, u)$ with $\varphi(u, v) := -\sum_{i=1}^{n} F_i(u[v_i])$, and the problem of finding a Nash equilibrium point of the model can be formulated as:
$$\text{Find } u^* \in C: \ f(u^*, v) \ge 0, \quad \forall v \in C.$$
In addition, we assume that both the tax and the unit production cost increase as the production quantity increases. Following [19,22], the bifunction $f$ can be written as
$$f(u, v) = \langle P u + Q v + q, v - u \rangle,$$
where $q \in \mathbb{R}^n$, $Q - P$ is symmetric negative definite and $Q$ is symmetric positive semidefinite of order $n$, with Lipschitz-type constants $c_1 = c_2 = \frac{1}{2}\|P - Q\|$ (for more details see [22]). Throughout the example in Section 5.1, both $P$ and $Q$ are randomly generated. (Two diagonal matrices $A_1$ and $A_2$ are chosen randomly with entries from $[0, 2]$ and $[-2, 0]$, respectively. Two random orthogonal matrices $B_1$ and $B_2$ (RandOrthMat(n)) are used to generate a positive semidefinite matrix $M_1 = B_1 A_1 B_1^T$ and a negative semidefinite matrix $M_2 = B_2 A_2 B_2^T$. Finally, set $Q = M_1 + M_1^T$, $S = M_2 + M_2^T$ and $P = Q - S$.) The entries of $q$ are randomly taken from $[-1, 1]$. The constraint set $C \subset \mathbb{R}^n$ is the closed convex set
$$C := \{ u \in \mathbb{R}^n : -5 \le u_i \le 5 \}.$$
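For reproducibility, the construction of $P$, $Q$ and $q$ described above can be mirrored in Python as follows (a sketch of our own; the paper's experiments use MATLAB and its RandOrthMat routine, for which we substitute a QR-based random orthogonal matrix).

```python
import numpy as np

def make_example_1(n, seed=0):
    rng = np.random.default_rng(seed)

    def rand_orth(k):
        # Random orthogonal matrix via QR, standing in for MATLAB's RandOrthMat.
        Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
        return Q

    A1 = np.diag(rng.uniform(0.0, 2.0, n))    # diagonal, entries in [0, 2]
    A2 = np.diag(rng.uniform(-2.0, 0.0, n))   # diagonal, entries in [-2, 0]
    B1, B2 = rand_orth(n), rand_orth(n)
    M1 = B1 @ A1 @ B1.T                       # positive semidefinite
    M2 = B2 @ A2 @ B2.T                       # negative semidefinite
    Q = M1 + M1.T
    S = M2 + M2.T
    P = Q - S                                 # then Q - P = S is negative semidefinite
    q = rng.uniform(-1.0, 1.0, n)
    return P, Q, q

P, Q, q = make_example_1(10)
c1 = c2 = 0.5 * np.linalg.norm(P - Q, 2)      # Lipschitz-type constants c1 = c2 = ||P - Q|| / 2
print(c1)
```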
The numerical results for the example in Section 5.1 are shown in Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 and Table 1.

5.2. Example 2

Consider the bifunction $f$ defined on the constraint set $C$ by
$$f(u, v) = \big\langle (B B^T + S + D) u, v - u \big\rangle,$$
where $D$ is an $n \times n$ diagonal matrix with nonnegative entries, $S$ is an $n \times n$ skew-symmetric matrix and $B$ is an $n \times n$ matrix. The constraint set $C \subset \mathbb{R}^n$ is taken as
$$C = \{ u \in \mathbb{R}^n : A u \le b \},$$
where $A$ is a $100 \times n$ matrix and $b$ is a nonnegative vector. The bifunction $f$ is strongly monotone with $\gamma = \min\{\operatorname{eig}(B B^T + S + D)\}$ and the Lipschitz-type constants are $c_1 = c_2 = \frac{1}{2}\max\{\operatorname{eig}(B B^T + S + D)\}$. In our experiments the random data are generated (in MATLAB notation) as $B = \operatorname{rand}(n)$, $C = \operatorname{rand}(n)$, $S = 0.5C - 0.5C^T$, $D = \operatorname{diag}(\operatorname{rand}(n, 1))$, and the numerical results for the example in Section 5.2 are shown in Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15, Figure 16, Figure 17, Figure 18 and Figure 19 and Table 2.
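The randomly generated data described in MATLAB notation above can likewise be sketched in Python (our own illustration; the variable `C_mat` plays the role of the MATLAB matrix C, which is distinct from the constraint set $C$, and the eigenvalue bounds are computed on the symmetric part of the operator since $S$ is skew-symmetric).

```python
import numpy as np

def make_example_2(n, m=100, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.random((n, n))                    # B = rand(n)
    C_mat = rng.random((n, n))                # C = rand(n)
    S = 0.5 * C_mat - 0.5 * C_mat.T           # skew-symmetric
    D = np.diag(rng.random(n))                # D = diag(rand(n,1)), nonnegative diagonal
    M = B @ B.T + S + D                       # operator in f(u, v) = <M u, v - u>
    A = rng.random((m, n))                    # constraint matrix for C = {u : A u <= b}
    b = rng.random(m)                         # nonnegative right-hand side
    return M, A, b

M, A, b = make_example_2(5)
sym = 0.5 * (M + M.T)                         # symmetric part of M (S drops out)
gamma = np.linalg.eigvalsh(sym).min()         # strong monotonicity modulus, as in the text
c1 = c2 = 0.5 * np.linalg.eigvalsh(sym).max() # Lipschitz-type constants, as in the text
print(gamma, c1)
```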
Remark 3.
From our numerical experiments, we make the following observations:
1.
There is no need for prior knowledge of the Lipschitz-type constants in order to run the algorithms in MATLAB.
2.
The convergence rate of the iterative sequence depends on the convergence rate of the stepsize sequence.
3.
The convergence rate of the iterative sequence also depends on the nature of the problem and the size of the problem.
4.
Due to the variable stepsize sequence, a stepsize value that is not well suited to the current iteration of the algorithm often causes oscillations (humps) in the behaviour of the iterative sequence.

6. Conclusions

In this article, we established a new method by combining an inertial term with the extragradient method for dealing with a family of strongly pseudomonotone equilibrium problems. The proposed method uses a sequence of diminishing and non-summable stepsizes and is carried out without prior knowledge of the Lipschitz-type constants and the strong pseudomonotonicity modulus of the bifunction. Two numerical experiments were reported to measure the computational efficiency of our method in comparison with other existing methods. The numerical experiments indicate that the method with an inertial scheme performs better than those without an inertial scheme.

Author Contributions

The authors contributed equally to writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was financially supported by King Mongkut’s University of Technology Thonburi through the ‘KMUTT 55th Anniversary Commemorative Fund’. Moreover, this project was supported by Theoretical and Computational Science (TaCS) Center under Computational and Applied Science for Smart research Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. In particular, Habib ur Rehman was financed by the Petchra Pra Jom Doctoral Scholarship Academic for Ph.D. Program at KMUTT [grant number 39/2560]. Furthermore, Wiyada Kumam was financially supported by the Rajamangala University of Technology Thanyaburi (RMUTTT) (Grant No. NSF62D0604).

Acknowledgments

The first author would like to thank the “Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi”. We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped to improve the quality of this work.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Dafermos, S. Traffic equilibrium and variational inequalities. Transp. Sci. 1980, 14, 42–54. [Google Scholar] [CrossRef] [Green Version]
  3. Ferris, M.C.; Pang, J.S. Engineering and economic applications of complementarity problems. Siam Rev. 1997, 39, 669–713. [Google Scholar] [CrossRef] [Green Version]
  4. Patriksson, M. The Traffic Assignment Problem: Models and Methods; Courier Dover Publications: Mineola, NY, USA, 2015. [Google Scholar]
  5. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin, Germany, 2007. [Google Scholar]
  6. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210. [Google Scholar]
  7. Giannessi, F.; Maugeri, A.; Pardalos, P.M. Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models; Springer Science & Business Media: Berlin, Germany, 2006; Volume 58. [Google Scholar]
  8. Moudafi, A. Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 1999, 15, 91–100. [Google Scholar]
  9. Mastroeni, G. On auxiliary principle for equilibrium problems. In Equilibrium Problems and Variational Models; Springer: Berlin, Germany, 2003; pp. 289–298. [Google Scholar]
  10. Martinet, B. Brève communication. Régularisation d’inéquations variationnelles par approximations successives. ESAIM: Mathematical Modelling and Numerical Analysis—Modélisation Mathématique et Analyse Numérique 1970, 4, 154–158. [Google Scholar] [CrossRef] [Green Version]
  11. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef] [Green Version]
  12. Konnov, I. Application of the proximal point method to nonmonotone equilibrium problems. J. Optim. Theory Appl. 2003, 119, 317–333. [Google Scholar] [CrossRef]
  13. Antipin, A.S. The convergence of proximal methods to fixed points of extremal mappings and estimates of their rate of convergence. Comput. Math. Math. Phys. 1995, 35, 539–552. [Google Scholar]
  14. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  15. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Prog. 1996, 78, 29–41. [Google Scholar] [CrossRef]
  16. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96. [Google Scholar] [CrossRef]
  17. Van Hieu, D. Halpern subgradient extragradient method extended to equilibrium problems. Revista de la Real Academia de Ciencias Exactas, Fisicas y Naturales—Serie A: Matematicas 2017, 111, 823–840. [Google Scholar] [CrossRef]
  18. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013. [Google Scholar]
  19. Hieu, D.V. Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 2016, 21, 478–501. [Google Scholar] [CrossRef]
  20. Iusem, A.N.; Sosa, W. Iterative algorithms for equilibrium problems. Optimization 2003, 52, 301–316. [Google Scholar] [CrossRef]
  21. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2012, 52, 139–159. [Google Scholar] [CrossRef]
  22. Quoc Tran, D.; Le Dung, M.; Nguyen, V.H. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  23. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107. [Google Scholar]
  24. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515. [Google Scholar] [CrossRef] [Green Version]
  25. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequalities Appl. 2019, 2019, 1–25. [Google Scholar] [CrossRef]
  26. Argyros, I.K.; Cho, Y.J.; Hilout, S. Numerical Methods for Equations and Its Applications; CRC Press: Boca Raton, FL, USA, 2012. [Google Scholar]
  27. Rehman, H.U.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39, 100. [Google Scholar]
  28. Rehman, H.U.; Kumam, P.; Cho, Y.J.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods. Softw. 2020, 1–32. [Google Scholar] [CrossRef]
  29. Rehman, H.U.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The Inertial Sub-Gradient Extra-Gradient Method for a Class of Pseudo-Monotone Equilibrium Problems. Symmetry 2020, 12, 463. [Google Scholar] [CrossRef] [Green Version]
  30. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 1–14. [Google Scholar] [CrossRef]
  31. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  32. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  33. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
  34. Thong, D.V.; Van Hieu, D. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610. [Google Scholar] [CrossRef]
  35. Dong, Q.; Cho, Y.; Zhong, L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704. [Google Scholar] [CrossRef]
  36. Yang, J. Self-adaptive inertial subgradient extragradient algorithm for solving pseudomonotone variational inequalities. Appl. Anal. 2019, 1–12. [Google Scholar] [CrossRef]
  37. Thong, D.V.; Van Hieu, D.; Rassias, T.M. Self adaptive inertial subgradient extragradient algorithms for solving pseudomonotone variational inequality problems. Optim. Lett. 2020, 14, 115–144. [Google Scholar] [CrossRef]
  38. Vinh, N.T.; Muu, L.D. Inertial Extragradient Algorithms for Solving Equilibrium Problems. Acta Math. Vietnam. 2019, 44, 639–663. [Google Scholar] [CrossRef]
  39. Van Hieu, D. Convergence analysis of a new algorithm for strongly pseudomontone equilibrium problems. Numer. Algorithms 2018, 77, 983–1001. [Google Scholar] [CrossRef]
  40. Hieu, D.V.; Cho, Y.J.; bin Xiao, Y. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029. [Google Scholar] [CrossRef]
  41. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; CRC Press: Boca Raton, FL, USA, 1984. [Google Scholar]
  42. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  43. Tiel, J.V. Convex Analysis; John Wiley: Hoboken, NJ, USA, 1984. [Google Scholar]
  44. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011; Volume 408. [Google Scholar]
  45. Tan, K.K.; Xu, H.K. Approximating fixed points of non-expansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301. [Google Scholar] [CrossRef] [Green Version]
  46. Ofoedu, E. Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space. J. Math. Anal. Appl. 2006, 321, 722–728. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Example in Section 5.1 for $n = 10$ and $\xi_n = \frac{1}{n+1}$.
Figure 2. Example in Section 5.1 for $n = 10$ and $\xi_n = \frac{\log(n+3)}{n+1}$.
Figure 3. Example in Section 5.1 for $n = 10$ and $\xi_n = \frac{1}{\log(n+3)}$.
Figure 4. Example in Section 5.1 for $n = 50$ and $\xi_n = \frac{1}{n+1}$.
Figure 5. Example in Section 5.1 for $n = 50$ and $\xi_n = \frac{\log(n+3)}{n+1}$.
Figure 6. Example in Section 5.1 for $n = 50$ and $\xi_n = \frac{1}{\log(n+3)}$.
Figure 7. Example in Section 5.1 for $n = 100$ and $\xi_n = \frac{1}{n+1}$.
Figure 8. Example in Section 5.1 for $n = 100$ and $\xi_n = \frac{\log(n+3)}{n+1}$.
Figure 9. Example in Section 5.1 for $n = 100$ and $\xi_n = \frac{1}{\log(n+3)}$.
Figure 10. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{(n+1)\log(n+3)}$.
Figure 11. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{(n+1)\log(n+3)}$.
Figure 12. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{n+1}$.
Figure 13. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{n+1}$.
Figure 14. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{\log(n+3)}{n+1}$.
Figure 15. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{\log(n+3)}{n+1}$.
Figure 16. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{n+1}$.
Figure 17. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{n+1}$.
Figure 18. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{\log(n+3)}$.
Figure 19. Example in Section 5.2 for $n = 5$ and $\xi_n = \frac{1}{\log(n+3)}$.
Table 1. The numerical findings for Figures 1–9.

| n | N | $\xi_n$ | Algo1 [30] iter. | Algo1 [30] time | Algo2 [39] iter. | Algo2 [39] time | Algo3 [40] iter. | Algo3 [40] time | Algo4 iter. | Algo4 time |
|---|---|---------|---|---|---|---|---|---|---|---|
| 10 | 10 | $\frac{1}{n+1}$ | 83 | 0.8633 | 56 | 0.4295 | 35 | 0.2929 | 19 | 0.1319 |
| 10 | 10 | $\frac{\log(n+3)}{n+1}$ | 52 | 0.4297 | 64 | 0.4862 | 40 | 0.3040 | 23 | 0.1896 |
| 10 | 10 | $\frac{1}{\log(n+3)}$ | 94 | 0.8761 | 400 | 5.0501 | 305 | 3.4549 | 82 | 0.6732 |
| 50 | 10 | $\frac{1}{n+1}$ | 136 | 1.2545 | 107 | 0.9765 | 69 | 0.7521 | 54 | 0.4691 |
| 50 | 10 | $\frac{\log(n+3)}{n+1}$ | 86 | 0.6913 | 80 | 0.7453 | 55 | 0.4792 | 38 | 0.3128 |
| 50 | 10 | $\frac{1}{\log(n+3)}$ | 100 | 0.8427 | 205 | 2.2437 | 175 | 1.7925 | 86 | 0.7685 |
| 100 | 10 | $\frac{1}{n+1}$ | 222 | 3.0913 | 150 | 1.8105 | 105 | 1.1990 | 76 | 0.8656 |
| 100 | 10 | $\frac{\log(n+3)}{n+1}$ | 100 | 1.1624 | 92 | 1.0639 | 69 | 0.7964 | 36 | 0.4207 |
| 100 | 10 | $\frac{1}{\log(n+3)}$ | 113 | 1.3110 | 211 | 2.7524 | 188 | 2.4022 | 98 | 1.1311 |
Table 2. The experimental results for Figures 10–19.

| n | N | $\xi_n$ | Algo1 [30] iter. | Algo1 [30] time | Algo2 [39] iter. | Algo2 [39] time | Algo3 [40] iter. | Algo3 [40] time | Algo4 iter. | Algo4 time |
|---|---|---------|---|---|---|---|---|---|---|---|
| 5 | 10 | $\frac{1}{(n+1)\log(n+3)}$ | 212 | 2.5360 | 225 | 2.6580 | 179 | 3.3746 | 122 | 1.3161 |
| 5 | 10 | $\frac{1}{n+1}$ | 200 | 2.1717 | 254 | 3.4295 | 164 | 1.8637 | 137 | 1.7299 |
| 5 | 10 | $\frac{\log(n+3)}{n+1}$ | 181 | 2.6688 | 194 | 2.3646 | 158 | 1.8703 | 106 | 1.1469 |
| 5 | 10 | $\frac{1}{n+1}$ | 89 | 0.9550 | 132 | 1.5186 | 72 | 0.7889 | 52 | 0.5644 |
| 5 | 10 | $\frac{1}{\log(n+3)}$ | 137 | 1.5127 | 152 | 1.8906 | 89 | 0.9427 | 80 | 0.8514 |
