Article

Convergence Analysis of Self-Adaptive Inertial Extra-Gradient Method for Solving a Family of Pseudomonotone Equilibrium Problems with Application

1
Mathematics English Program, Faculty of Education, Valaya Alongkorn Rajabhat University under the Royal Patronage, Pathumthani 13180, Thailand
2
KMUTTFixed Point Research Laboratory, KMUTT-Fixed Point Theory and Applications Research Group, SCL 802 Fixed Point Laboratory, Department of Mathematics, Faculty of Science, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
3
Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4
Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
5
Program in Applied Statistics, Department of Mathematics and Computer Science, Faculty of Science and Technology, Rajamangala University of Technology Thanyaburi, Thanyaburi, Pathumthani 12110, Thailand
*
Authors to whom correspondence should be addressed.
Symmetry 2020, 12(8), 1332; https://doi.org/10.3390/sym12081332
Submission received: 24 June 2020 / Revised: 4 August 2020 / Accepted: 6 August 2020 / Published: 10 August 2020
(This article belongs to the Special Issue Symmetry in Optimization and Control with Real World Applications)

Abstract

In this article, we propose a new modified extragradient-like method for solving pseudomonotone equilibrium problems in a real Hilbert space under a Lipschitz-type condition on the bifunction. The method uses a variable stepsize that is updated at each iteration on the basis of previous iterates. Its main advantage is that it operates without prior knowledge of the Lipschitz-type constants and without any line-search procedure. Weak convergence of the method is established under mild conditions on the bifunction. As applications, fixed-point problems involving strict pseudocontractions and pseudomonotone variational inequalities are considered. Numerical results are reported to illustrate the behavior of the proposed method.

1. Introduction

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$, and let $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real numbers and natural numbers, respectively. Assume that $f : H \times H \to \mathbb{R}$ is a bifunction and that $EP(f, C)$ denotes the solution set of the equilibrium problem over the set $C$. We recall the following monotonicity notions for a bifunction (see [1,2] for more details). A bifunction $f : H \times H \to \mathbb{R}$ is said to be, for $\gamma > 0$:
(1) $\gamma$-strongly monotone on $C$ if
$$f(z_1, z_2) + f(z_2, z_1) \le -\gamma \|z_1 - z_2\|^2, \quad \forall z_1, z_2 \in C;$$
(2) monotone on $C$ if
$$f(z_1, z_2) + f(z_2, z_1) \le 0, \quad \forall z_1, z_2 \in C;$$
(3) $\gamma$-strongly pseudomonotone on $C$ if
$$f(z_1, z_2) \ge 0 \;\Longrightarrow\; f(z_2, z_1) \le -\gamma \|z_1 - z_2\|^2, \quad \forall z_1, z_2 \in C;$$
(4) pseudomonotone on $C$ if
$$f(z_1, z_2) \ge 0 \;\Longrightarrow\; f(z_2, z_1) \le 0, \quad \forall z_1, z_2 \in C.$$
It is clear from the definitions mentioned above that the following implications hold:
$$(1) \Longrightarrow (2) \Longrightarrow (4) \quad \text{and} \quad (1) \Longrightarrow (3) \Longrightarrow (4).$$
In general, the converse implications do not hold. A bifunction $f : H \times H \to \mathbb{R}$ is said to be Lipschitz-type continuous on $C$ if there exist two positive constants $c_1, c_2$ such that
$$f(z_1, z_3) \le f(z_1, z_2) + f(z_2, z_3) + c_1 \|z_1 - z_2\|^2 + c_2 \|z_2 - z_3\|^2, \quad \forall z_1, z_2, z_3 \in C.$$
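For a concrete illustration (our own example, not taken from the cited references), consider the bifunction $f(z_1, z_2) = \langle A z_1, z_2 - z_1 \rangle$ generated by a bounded linear operator $A$ on $H$. A direct computation gives
$$f(z_1, z_3) - f(z_1, z_2) - f(z_2, z_3) = \langle A(z_1 - z_2), z_3 - z_2 \rangle \le \|A\|\,\|z_1 - z_2\|\,\|z_3 - z_2\| \le \tfrac{\|A\|}{2}\|z_1 - z_2\|^2 + \tfrac{\|A\|}{2}\|z_2 - z_3\|^2,$$
so this $f$ is Lipschitz-type continuous with $c_1 = c_2 = \|A\|/2$.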
Let $C$ be a nonempty, closed and convex subset of $H$ and let $f : H \times H \to \mathbb{R}$ be a bifunction with $f(z_1, z_1) = 0$ for all $z_1 \in C$. The equilibrium problem [1,3] for $f$ on the set $C$ is to
$$\text{find } u^* \in C \text{ such that } f(u^*, z_1) \ge 0, \quad \forall z_1 \in C. \qquad (1)$$
The equilibrium problem (1) includes many mathematical problems as particular cases, e.g., variational inequality problems (VIP), optimization problems, fixed point problems, complementarity problems, the Nash equilibrium of non-cooperative games, saddle point problems and vector optimization problems (for details see [1,4,5]). The equilibrium problem is also known as the famous Ky Fan inequality [3]. The particular format of the equilibrium problem (1) was initiated by Muu and Oettli [6] in 1992, and further investigation of its theoretical properties was provided by Blum and Oettli [1]. The construction of new iterative schemes and the modification of existing methods, as well as the study of their convergence analysis, constitute an important research direction in equilibrium problem theory. Several methods have been developed over the past few years to approximate the solution of an equilibrium problem in finite- and infinite-dimensional real Hilbert spaces, i.e., extragradient methods [7,8,9,10,11,12,13,14,15,16], subgradient methods [17,18,19,20,21,22], inertial methods [23,24,25] and methods for particular classes of equilibrium problems [26,27,28,29,30,31,32,33,34,35].
In particular, a proximal method [36] was used to solve equilibrium problems by solving auxiliary minimization problems. This approach is also known as the two-step extragradient-like method [7], owing to Korpelevich's early extragradient method [37] for saddle point problems. More precisely, Tran et al. [7] introduced a method in which the iterative sequence $\{u_n\}$ is generated as follows:
$$u_0 \in C, \qquad v_n = \arg\min_{v \in C}\Big\{ \lambda f(u_n, v) + \tfrac{1}{2}\|u_n - v\|^2 \Big\}, \qquad u_{n+1} = \arg\min_{v \in C}\Big\{ \lambda f(v_n, v) + \tfrac{1}{2}\|u_n - v\|^2 \Big\},$$
where $0 < \lambda < \min\big\{ \frac{1}{2c_1}, \frac{1}{2c_2} \big\}$. This method generates a weakly convergent iterative sequence, but running it requires prior information on the Lipschitz-type constants. These constants are usually unknown or hard to compute. To overcome this drawback, Hieu et al. [14] introduced an extension of the method in [38] for solving the equilibrium problem as follows: let $[t]_+ := \max\{t, 0\}$ and choose $u_0 \in C$, $\mu \in (0, 1)$ and $\lambda_0 > 0$ such that
$$v_n = \arg\min_{v \in C}\Big\{ \lambda_n f(u_n, v) + \tfrac{1}{2}\|u_n - v\|^2 \Big\}, \qquad u_{n+1} = \arg\min_{v \in C}\Big\{ \lambda_n f(v_n, v) + \tfrac{1}{2}\|u_n - v\|^2 \Big\},$$
where the stepsize sequence $\{\lambda_n\}$ is updated in the following way:
$$\lambda_{n+1} = \min\left\{ \lambda_n, \; \frac{\mu\big(\|u_n - v_n\|^2 + \|u_{n+1} - v_n\|^2\big)}{2\big[ f(u_n, u_{n+1}) - f(u_n, v_n) - f(v_n, u_{n+1}) \big]_+} \right\}.$$
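The following short Python sketch (our own illustration; the function and variable names are ours, not from [14]) shows how such an adaptive stepsize update can be coded. It keeps the previous stepsize whenever the bracketed term is not positive, which matches the convention used in Step 3 of Algorithm 1 below.

```python
import numpy as np

def update_stepsize(lmbda, mu, f, u_n, v_n, u_next):
    """One adaptive stepsize update: keep lambda_n when the bracket is <= 0,
    otherwise take min(lambda_n, mu*(||u_n-v_n||^2+||u_{n+1}-v_n||^2)/(2*bracket))."""
    bracket = f(u_n, u_next) - f(u_n, v_n) - f(v_n, u_next)
    if bracket <= 0.0:
        return lmbda
    num = mu * (np.linalg.norm(u_n - v_n) ** 2 + np.linalg.norm(u_next - v_n) ** 2)
    return min(lmbda, num / (2.0 * bracket))
```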
Recently, Vinh and Muu [39] proposed an inertial iterative algorithm for solving pseudomonotone equilibrium problems. Their main contribution is the inclusion of an inertial term, which is used to improve the convergence rate of the iterative sequence. The iterative sequence $\{u_n\}$ is generated in the following manner (a small code sketch of the inertial-parameter rule in step (ii) is given after the list):
(i) Choose $u_{-1}, u_0 \in C$, $\vartheta \in [0, 1)$, $0 < \lambda < \min\big\{ \frac{1}{2c_1}, \frac{1}{2c_2} \big\}$, and a sequence $\{\epsilon_n\} \subset [0, +\infty)$ satisfying
$$\sum_{n=0}^{+\infty} \epsilon_n < +\infty.$$
(ii) Choose $\vartheta_n$ such that $0 \le \vartheta_n \le \bar{\vartheta}_n$, where
$$\bar{\vartheta}_n = \begin{cases} \min\Big\{ \vartheta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \ne u_{n-1}, \\ \vartheta & \text{otherwise}. \end{cases}$$
(iii) Determine
$$\eta_n = u_n + \vartheta_n(u_n - u_{n-1}), \qquad v_n = \arg\min_{v \in C}\Big\{ \lambda f(\eta_n, v) + \tfrac{1}{2}\|\eta_n - v\|^2 \Big\}, \qquad u_{n+1} = \arg\min_{v \in C}\Big\{ \lambda f(v_n, v) + \tfrac{1}{2}\|\eta_n - v\|^2 \Big\}.$$
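A minimal Python sketch of the inertial-parameter rule in step (ii) (our own illustration; names are ours):

```python
import numpy as np

def inertial_upper_bound(theta, eps_n, u_n, u_prev):
    """Upper bound for the inertial parameter at iteration n:
    min(theta, eps_n/||u_n - u_{n-1}||) when the iterates differ, theta otherwise.
    Any theta_n in [0, inertial_upper_bound(...)] is admissible."""
    diff = np.linalg.norm(u_n - u_prev)
    if diff == 0.0:
        return theta
    return min(theta, eps_n / diff)
```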
This article focuses on projection methods, which are well known and easy to implement due to their efficient and straightforward computations. Motivated by the works [14,40], we formulate an inertial explicit subgradient extragradient algorithm to solve the pseudomonotone equilibrium problem. The proposed algorithm can be seen as a modification of the methods appearing in [7,14,39]. Under certain mild conditions, a weak convergence result is proven for the iterative sequence generated by the algorithm. Moreover, experimental studies show that the proposed method tends to be more efficient than the existing method [39].
The remainder of this paper is arranged as follows: Section 2 contains some definitions and basic results used in the paper. Section 3 presents our main algorithm and proves its convergence. Section 4 and Section 5 present applications of our results. Section 6 reports numerical results that demonstrate the computational effectiveness of the proposed algorithm.

2. Background

Let $h : C \to \mathbb{R}$ be a convex function on a nonempty, closed and convex subset $C$ of a real Hilbert space $H$. The subdifferential of $h$ at $z_1 \in C$ is defined as
$$\partial h(z_1) = \{ z_3 \in H : h(z_2) - h(z_1) \ge \langle z_3, z_2 - z_1 \rangle, \ \forall z_2 \in C \}.$$
Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The normal cone of $C$ at $z_1 \in C$ is defined by
$$N_C(z_1) = \{ z_3 \in H : \langle z_3, z_2 - z_1 \rangle \le 0, \ \forall z_2 \in C \}.$$
The metric projection $P_C(z_1)$ of $z_1 \in H$ onto the closed and convex subset $C$ of $H$ is defined by
$$P_C(z_1) = \arg\min\{ \|z_2 - z_1\| : z_2 \in C \}.$$
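For sets with simple structure, the metric projection has a closed form. The following Python sketch (our own illustration, not part of the original text) implements two cases that appear later: projection onto a box and projection onto a half-space $\{z : \langle a, z \rangle \le b\}$; the latter is exactly the projection needed when Step 2 of Algorithm 1 reduces to a projection onto $H_n$, as derived in Section 5.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box {z : lo <= z_i <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Projection onto the half-space {z : <a, z> <= b}.
    If x already satisfies the constraint it is returned unchanged;
    otherwise x is moved along a onto the boundary hyperplane."""
    viol = np.dot(a, x) - b
    if viol <= 0.0:
        return x
    return x - (viol / np.dot(a, a)) * a
```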
Lemma 1
([41]). Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$ and let $P_C : H \to C$ be the metric projection from $H$ onto $C$.
(i) For $z_1 \in C$ and $z_2 \in H$, we have
$$\|z_1 - P_C(z_2)\|^2 + \|P_C(z_2) - z_2\|^2 \le \|z_1 - z_2\|^2.$$
(ii) $z_3 = P_C(z_1)$ if and only if
$$\langle z_1 - z_3, z_2 - z_3 \rangle \le 0, \quad \forall z_2 \in C.$$
(iii) For $z_2 \in C$ and $z_1 \in H$,
$$\|z_1 - P_C(z_1)\| \le \|z_1 - z_2\|.$$
Lemma 2
([42]). Let $h : C \to \mathbb{R}$ be a convex, subdifferentiable and lower semicontinuous function on $C$, where $C$ is a nonempty, closed and convex subset of a real Hilbert space $H$. Then, $z_1 \in C$ is a minimizer of $h$ if and only if $0 \in \partial h(z_1) + N_C(z_1)$, where $\partial h(z_1)$ and $N_C(z_1)$ denote the subdifferential of $h$ at $z_1 \in C$ and the normal cone of $C$ at $z_1$, respectively.
Lemma 3
([43]). Let $\{u_n\}$ be a sequence in $H$ and $C \subseteq H$ such that the following conditions hold:
(i) for each $u \in C$, the limit $\lim_{n\to\infty}\|u_n - u\|$ exists;
(ii) every sequential weak cluster point of $\{u_n\}$ belongs to $C$.
Then, the sequence $\{u_n\}$ converges weakly to some element of $C$.
Lemma 4.
([44]). Let $\{q_n\}$ and $\{p_n\}$ be sequences of non-negative real numbers satisfying $q_{n+1} \le q_n + p_n$ for each $n \in \mathbb{N}$. If $\sum p_n < \infty$, then $\lim_{n\to\infty} q_n$ exists.
Assume that the bifunction $f$ satisfies the following conditions:
(f1) $f(z_2, z_2) = 0$ for all $z_2 \in C$, and $f$ is pseudomonotone on $C$;
(f2) $f$ satisfies the Lipschitz-type condition on $H$ with constants $c_1 > 0$ and $c_2 > 0$;
(f3) $\limsup_{n\to\infty} f(z_n, v) \le f(z^*, v)$ for every $v \in C$ and every $\{z_n\} \subset C$ satisfying $z_n \rightharpoonup z^*$;
(f4) $f(z_1, \cdot)$ is convex and subdifferentiable on $H$ for each $z_1 \in H$.

3. Convergence Analysis for an Algorithm

We propose a method based on two strongly convex minimization problems, combined with an inertial term and an explicit stepsize formula; the inertial term is used to improve the convergence rate of the iterative sequence, and the stepsize formula makes the method independent of the Lipschitz-type constants. The details are given in Algorithm 1 below.
Algorithm 1 (Inertial method for pseudomonotone equilibrium problems)
  • Initialization: Choose $u_{-1}, u_0 \in C$, $\mu \in (0, 1)$, $\lambda_0 > 0$, $\vartheta \in [0, 1)$ and a sequence $\{\epsilon_n\} \subset [0, +\infty)$ satisfying
    $$\sum_{n=0}^{+\infty} \epsilon_n < +\infty.$$
  • Iterative steps: Choose $\vartheta_n$ satisfying $0 \le \vartheta_n \le \bar{\vartheta}_n$, where
    $$\bar{\vartheta}_n = \begin{cases} \min\Big\{ \vartheta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|} \Big\} & \text{if } u_n \ne u_{n-1}, \\ \vartheta & \text{otherwise}. \end{cases}$$
  • Step 1: Determine
    $$v_n = \arg\min_{y \in C}\Big\{ \lambda_n f(\eta_n, y) + \tfrac{1}{2}\|\eta_n - y\|^2 \Big\},$$
    where $\eta_n = u_n + \vartheta_n(u_n - u_{n-1})$. If $\eta_n = v_n$, STOP. Otherwise, go to the next step.
  • Step 2: Determine the half-space
    $$H_n = \{ z \in H : \langle \eta_n - \lambda_n \omega_n - v_n, z - v_n \rangle \le 0 \},$$
    where $\omega_n \in \partial_2 f(\eta_n, v_n)$, and evaluate
    $$u_{n+1} = \arg\min_{y \in H_n}\Big\{ \lambda_n f(v_n, y) + \tfrac{1}{2}\|\eta_n - y\|^2 \Big\}.$$
  • Step 3: Set $d_1 = f(\eta_n, u_{n+1}) - f(\eta_n, v_n) - f(v_n, u_{n+1})$ and evaluate
    $$\lambda_{n+1} = \begin{cases} \min\Big\{ \lambda_n, \dfrac{\mu\|\eta_n - v_n\|^2 + \mu\|u_{n+1} - v_n\|^2}{2 d_1} \Big\} & \text{if } d_1 > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
    Set $n := n + 1$ and go back to Iterative steps.
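The following Python sketch illustrates Algorithm 1 for finite-dimensional problems. It is our own illustration rather than the code used for the experiments (the paper's experiments used MATLAB and fmincon); the two convex subproblems are solved numerically with scipy.optimize.minimize, and the bifunction f, a subgradient oracle grad2_f for $f(x, \cdot)$, and a description of $C$ as SciPy constraints must be supplied by the user (all names are ours).

```python
import numpy as np
from scipy.optimize import minimize

def eiegm(f, grad2_f, u_prev, u0, C_constraints, lam0=0.5, mu=0.33,
          theta=0.5, max_iter=100, tol=1e-6):
    """Sketch of Algorithm 1 (EiEGM) in R^d.

    f(x, y)       : bifunction value
    grad2_f(x, y) : gradient of f(x, .) at y (an element of the subdifferential)
    C_constraints : list of SciPy constraint dicts describing the set C
    """
    u_nm1, u_n, lam = np.asarray(u_prev, float), np.asarray(u0, float), lam0
    eps = [1.0 / (k + 1) ** 2 for k in range(max_iter)]   # a summable sequence eps_n
    for n in range(max_iter):
        # inertial parameter: theta_n <= min(theta, eps_n/||u_n - u_{n-1}||)
        d = np.linalg.norm(u_n - u_nm1)
        theta_n = theta if d == 0 else min(theta, eps[n] / d)
        eta = u_n + theta_n * (u_n - u_nm1)
        # Step 1: v_n = argmin_{y in C} lam*f(eta, y) + 0.5*||eta - y||^2
        obj1 = lambda y: lam * f(eta, y) + 0.5 * np.dot(eta - y, eta - y)
        v = minimize(obj1, eta, constraints=C_constraints).x
        if np.linalg.norm(eta - v) < tol:
            return v
        # Step 2: minimize over the half-space H_n = {z : <eta - lam*w - v, z - v> <= 0}
        w = grad2_f(eta, v)
        a = eta - lam * w - v
        halfspace = [{"type": "ineq", "fun": lambda z, a=a, v=v: -np.dot(a, z - v)}]
        obj2 = lambda y: lam * f(v, y) + 0.5 * np.dot(eta - y, eta - y)
        u_next = minimize(obj2, v, constraints=halfspace).x
        # Step 3: adaptive stepsize update
        d1 = f(eta, u_next) - f(eta, v) - f(v, u_next)
        if d1 > 0:
            lam = min(lam, (mu * np.linalg.norm(eta - v) ** 2
                            + mu * np.linalg.norm(u_next - v) ** 2) / (2 * d1))
        u_nm1, u_n = u_n, u_next
    return u_n
```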
Lemma 5.
The sequence $\{\lambda_n\}$ is monotonically decreasing, bounded below by $\min\Big\{ \dfrac{\mu}{2\max\{c_1, c_2\}}, \lambda_0 \Big\}$, and converges to some $\lambda > 0$.
Proof. 
From the definition of $\{\lambda_n\}$, the sequence is monotone and non-increasing. Since $f$ satisfies the Lipschitz-type condition with constants $c_1$ and $c_2$, if $f(\eta_n, u_{n+1}) - f(\eta_n, v_n) - f(v_n, u_{n+1}) > 0$, then
$$\frac{\mu\big(\|\eta_n - v_n\|^2 + \|u_{n+1} - v_n\|^2\big)}{2\big[ f(\eta_n, u_{n+1}) - f(\eta_n, v_n) - f(v_n, u_{n+1}) \big]} \ge \frac{\mu\big(\|\eta_n - v_n\|^2 + \|u_{n+1} - v_n\|^2\big)}{2\big[ c_1\|\eta_n - v_n\|^2 + c_2\|u_{n+1} - v_n\|^2 \big]} \ge \frac{\mu}{2\max\{c_1, c_2\}}.$$
The above implies that the sequence $\{\lambda_n\}$ is bounded below by $\min\Big\{ \dfrac{\mu}{2\max\{c_1, c_2\}}, \lambda_0 \Big\}$. Hence, there exists a real number $\lambda > 0$ such that $\lim_{n\to\infty} \lambda_n = \lambda$.  □
Remark 1.
Due to the summability of $\sum_{n=0}^{+\infty} \epsilon_n$, Expression (5) implies that
$$\sum_{n=1}^{\infty} \vartheta_n \|u_n - u_{n-1}\| \le \sum_{n=1}^{\infty} \bar{\vartheta}_n \|u_n - u_{n-1}\| \le \sum_{n=1}^{\infty} \epsilon_n < \infty, \qquad (7)$$
which implies that
$$\lim_{n\to\infty} \vartheta_n \|u_n - u_{n-1}\| = 0. \qquad (8)$$
Lemma 6.
Assume that a bifunction f : H × H R satisfies the conditions (f1)(f4). For each u * E P ( f , C ) , we have
u n + 1 u * 2 η n u * 2 1 μ λ n λ n + 1 η n v n 2 1 μ λ n λ n + 1 u n + 1 v n 2 .
Proof. 
From the definition of $u_{n+1}$ and Lemma 2, we have
$$0 \in \partial_2\Big\{ \lambda_n f(v_n, y) + \tfrac{1}{2}\|\eta_n - y\|^2 \Big\}(u_{n+1}) + N_{H_n}(u_{n+1}).$$
Thus, for some $\omega \in \partial_2 f(v_n, u_{n+1})$ there exists $\bar{\omega} \in N_{H_n}(u_{n+1})$ such that
$$\lambda_n \omega + u_{n+1} - \eta_n + \bar{\omega} = 0.$$
The above equality implies that
$$\langle \eta_n - u_{n+1}, y - u_{n+1} \rangle = \lambda_n \langle \omega, y - u_{n+1} \rangle + \langle \bar{\omega}, y - u_{n+1} \rangle, \quad \forall y \in H_n.$$
Since $\bar{\omega} \in N_{H_n}(u_{n+1})$, it follows that $\langle \bar{\omega}, y - u_{n+1} \rangle \le 0$ for all $y \in H_n$. Thus,
$$\langle \eta_n - u_{n+1}, y - u_{n+1} \rangle \le \lambda_n \langle \omega, y - u_{n+1} \rangle, \quad \forall y \in H_n. \qquad (9)$$
Further, since $\omega \in \partial_2 f(v_n, u_{n+1})$, the definition of the subdifferential gives
$$f(v_n, y) - f(v_n, u_{n+1}) \ge \langle \omega, y - u_{n+1} \rangle, \quad \forall y \in H. \qquad (10)$$
Combining Expressions (9) and (10), we obtain
$$\lambda_n f(v_n, y) - \lambda_n f(v_n, u_{n+1}) \ge \langle \eta_n - u_{n+1}, y - u_{n+1} \rangle, \quad \forall y \in H_n. \qquad (11)$$
From the definition of $H_n$, we can write
$$\lambda_n \langle \omega_n, u_{n+1} - v_n \rangle \ge \langle \eta_n - v_n, u_{n+1} - v_n \rangle. \qquad (12)$$
Since $\omega_n \in \partial_2 f(\eta_n, v_n)$, we have
$$f(\eta_n, y) - f(\eta_n, v_n) \ge \langle \omega_n, y - v_n \rangle, \quad \forall y \in H.$$
Substituting $y = u_{n+1}$ in the above expression yields
$$f(\eta_n, u_{n+1}) - f(\eta_n, v_n) \ge \langle \omega_n, u_{n+1} - v_n \rangle. \qquad (13)$$
Combining Expressions (12) and (13), we obtain
$$\lambda_n \big[ f(\eta_n, u_{n+1}) - f(\eta_n, v_n) \big] \ge \langle \eta_n - v_n, u_{n+1} - v_n \rangle. \qquad (14)$$
Substituting $y = u^*$ in Expression (11), we have
$$\lambda_n f(v_n, u^*) - \lambda_n f(v_n, u_{n+1}) \ge \langle \eta_n - u_{n+1}, u^* - u_{n+1} \rangle. \qquad (15)$$
Since $u^* \in EP(f, C)$, we have $f(u^*, v_n) \ge 0$, and the pseudomonotonicity of $f$ gives $f(v_n, u^*) \le 0$. Hence, it follows from Expression (15) that
$$\langle \eta_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \lambda_n f(v_n, u_{n+1}). \qquad (16)$$
From the definition of $\lambda_{n+1}$, we obtain
$$f(\eta_n, u_{n+1}) - f(\eta_n, v_n) - f(v_n, u_{n+1}) \le \frac{\mu\|\eta_n - v_n\|^2 + \mu\|u_{n+1} - v_n\|^2}{2\lambda_{n+1}}. \qquad (17)$$
From Expressions (16) and (17), we have
$$\langle \eta_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \lambda_n \big\{ f(\eta_n, u_{n+1}) - f(\eta_n, v_n) \big\} - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|\eta_n - v_n\|^2 - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|u_{n+1} - v_n\|^2. \qquad (18)$$
Combining Expressions (14) and (18), we obtain
$$\langle \eta_n - u_{n+1}, u_{n+1} - u^* \rangle \ge \langle \eta_n - v_n, u_{n+1} - v_n \rangle - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|\eta_n - v_n\|^2 - \frac{\mu\lambda_n}{2\lambda_{n+1}}\|u_{n+1} - v_n\|^2. \qquad (19)$$
We have the following identities:
$$2\langle \eta_n - u_{n+1}, u_{n+1} - u^* \rangle = \|\eta_n - u^*\|^2 - \|u_{n+1} - \eta_n\|^2 - \|u_{n+1} - u^*\|^2, \qquad (20)$$
$$2\langle v_n - \eta_n, v_n - u_{n+1} \rangle = \|\eta_n - v_n\|^2 + \|u_{n+1} - v_n\|^2 - \|\eta_n - u_{n+1}\|^2. \qquad (21)$$
Combining the relations (19)–(21), we get
$$\|u_{n+1} - u^*\|^2 \le \|\eta_n - u^*\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|\eta_n - v_n\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_{n+1} - v_n\|^2.$$
 □
Theorem 1.
Assume that a bifunction f : H × H R satisfies the conditions (f1)(f4) and u * belongs to solution set E P ( f , C ) . Then, the sequences { η n } , { u n } and { v n } generated by Algorithm 1 converge weakly to the u * solution of the problem (1). In addition, lim n P E P ( f , C ) ( u n ) = u * .
Proof. 
Since $\lambda_n \to \lambda$, there exists a fixed number $\epsilon \in (0, 1 - \mu)$ such that
$$\lim_{n\to\infty}\Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big) = 1 - \mu > \epsilon > 0.$$
Thus, there is a finite number $n_1 \in \mathbb{N}$ such that
$$1 - \frac{\mu\lambda_n}{\lambda_{n+1}} > \epsilon > 0, \quad \forall n \ge n_1.$$
By Lemma 6, we obtain
$$\|u_{n+1} - u^*\|^2 \le \|\eta_n - u^*\|^2, \quad \forall n \ge n_1. \qquad (23)$$
From the definition of $\eta_n$ in Algorithm 1, we have
$$\|\eta_n - u^*\|^2 = \|u_n + \vartheta_n(u_n - u_{n-1}) - u^*\|^2 = \|(1+\vartheta_n)(u_n - u^*) - \vartheta_n(u_{n-1} - u^*)\|^2 = (1+\vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + \vartheta_n(1+\vartheta_n)\|u_n - u_{n-1}\|^2 \qquad (24)$$
$$\le (1+\vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + 2\vartheta\|u_n - u_{n-1}\|^2. \qquad (25)$$
Thus, Expression (23) can be written as
$$\|u_{n+1} - u^*\|^2 \le (1+\vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + 2\vartheta\|u_n - u_{n-1}\|^2, \quad \forall n \ge n_1.$$
From the definition of $\eta_n$, we also have
$$\|\eta_n - u^*\| = \|u_n + \vartheta_n(u_n - u_{n-1}) - u^*\| \le \|u_n - u^*\| + \vartheta_n\|u_n - u_{n-1}\|. \qquad (27)$$
Combining relations (23) and (27), we obtain
$$\|u_{n+1} - u^*\| \le \|u_n - u^*\| + \vartheta_n\|u_n - u_{n-1}\|, \quad \forall n \ge n_1. \qquad (28)$$
By using Lemma 4 with (7) and (28), we have
$$\lim_{n\to\infty}\|u_n - u^*\| = l, \quad \text{for some finite } l \ge 0. \qquad (29)$$
From Equality (8), we have
$$\lim_{n\to\infty}\|u_n - u_{n-1}\| = 0. \qquad (30)$$
By letting $n \to \infty$ in Expression (24), we obtain
$$\lim_{n\to\infty}\|\eta_n - u^*\| = l. \qquad (31)$$
From Lemma 6 and Expression (25), we have
$$\|u_{n+1} - u^*\|^2 \le (1+\vartheta_n)\|u_n - u^*\|^2 - \vartheta_n\|u_{n-1} - u^*\|^2 + 2\vartheta\|u_n - u_{n-1}\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|\eta_n - v_n\|^2 - \Big(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\Big)\|u_{n+1} - v_n\|^2,$$
which further implies that (for $n \ge n_1$)
$$\epsilon\|\eta_n - v_n\|^2 + \epsilon\|v_n - u_{n+1}\|^2 \le \|u_n - u^*\|^2 - \|u_{n+1} - u^*\|^2 + \vartheta_n\big(\|u_n - u^*\|^2 - \|u_{n-1} - u^*\|^2\big) + 2\vartheta\|u_n - u_{n-1}\|^2. \qquad (33)$$
By letting $n \to \infty$ in (33), we obtain
$$\lim_{n\to\infty}\|\eta_n - v_n\| = \lim_{n\to\infty}\|v_n - u_{n+1}\| = 0. \qquad (34)$$
By the triangle inequality and Expression (34), we obtain
$$\lim_{n\to\infty}\|\eta_n - u_{n+1}\| \le \lim_{n\to\infty}\|\eta_n - v_n\| + \lim_{n\to\infty}\|u_{n+1} - v_n\| = 0. \qquad (35)$$
From Expressions (31) and (34), we also obtain
$$\lim_{n\to\infty}\|v_n - u^*\| = l. \qquad (36)$$
It follows from Expressions (29), (31), and (36) that the sequences $\{\eta_n\}$, $\{u_n\}$, and $\{v_n\}$ are bounded. In order to apply Lemma 3, it remains to prove that every sequential weak cluster point of the sequence $\{u_n\}$ belongs to the solution set $EP(f, C)$. Let $z$ be any weak cluster point of $\{u_n\}$, i.e., there exists a subsequence $\{u_{n_k}\}$ of $\{u_n\}$ such that $u_{n_k} \rightharpoonup z$. Since $u_n - v_n \to 0$, the subsequence $\{v_{n_k}\}$ also converges weakly to $z$, and so $z \in C$. It remains to prove that $z \in EP(f, C)$. By Expression (11), the definition of $\lambda_{n+1}$, and (14), we have
$$\lambda_{n_k} f(v_{n_k}, y) \ge \lambda_{n_k} f(v_{n_k}, u_{n_k+1}) + \langle \eta_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle$$
$$\ge \lambda_{n_k}\big[ f(\eta_{n_k}, u_{n_k+1}) - f(\eta_{n_k}, v_{n_k}) \big] - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k+1}}\|\eta_{n_k} - v_{n_k}\|^2 - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k+1}}\|v_{n_k} - u_{n_k+1}\|^2 + \langle \eta_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle$$
$$\ge \langle \eta_{n_k} - v_{n_k}, u_{n_k+1} - v_{n_k} \rangle - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k+1}}\|\eta_{n_k} - v_{n_k}\|^2 - \frac{\mu\lambda_{n_k}}{2\lambda_{n_k+1}}\|v_{n_k} - u_{n_k+1}\|^2 + \langle \eta_{n_k} - u_{n_k+1}, y - u_{n_k+1} \rangle,$$
where $y \in H_n$. It follows from (30), (34), (35), and the boundedness of $\{u_n\}$ that the right-hand side tends to zero as $k \to \infty$. Due to $\lambda_{n_k} > 0$, condition (f3), and $v_{n_k} \rightharpoonup z$, we have
$$0 \le \limsup_{k\to\infty} f(v_{n_k}, y) \le f(z, y), \quad \forall y \in H_n.$$
Since $C \subset H_n$, it follows that $f(z, y) \ge 0$ for all $y \in C$. This implies that $z \in EP(f, C)$. Finally, by Lemma 3, the sequences $\{\eta_n\}$, $\{u_n\}$, and $\{v_n\}$ converge weakly to $u^*$ as $n \to \infty$.
It remains to prove that $\lim_{n\to\infty} P_{EP(f,C)}(u_n) = u^*$. Set $q_n := P_{EP(f,C)}(u_n)$ for $n \in \mathbb{N}$. For any $u^* \in EP(f, C)$, we have
$$\|q_n\| \le \|q_n - u_n\| + \|u_n\| \le \|u^* - u_n\| + \|u_n\|.$$
The above expression implies that the sequence $\{q_n\}$ is bounded. Next, we prove that $\{q_n\}$ is a Cauchy sequence. By Lemma 1(iii) and (27), we have
$$\|u_{n+1} - q_{n+1}\| \le \|u_{n+1} - q_n\| \le \|u_n - q_n\| + \vartheta_n\|u_n - u_{n-1}\|, \quad \forall n \ge n_1.$$
Lemma 4 then guarantees the existence of $\lim_{n\to\infty}\|u_n - q_n\|$. From Expression (27), for all $m > n \ge n_1$, we have
$$\|q_n - u_m\| \le \|q_n - u_{m-1}\| + \vartheta_{m-1}\|u_{m-1} - u_{m-2}\| \le \cdots \le \|q_n - u_n\| + \sum_{k=n}^{m-1}\vartheta_k\|u_k - u_{k-1}\|. \qquad (40)$$
Note that $q_m, q_n \in EP(f, C)$ for $m > n \ge n_1$. By using Lemma 1(i) and Expression (40), we have
$$\|q_n - q_m\|^2 \le \|q_n - u_m\|^2 - \|q_m - u_m\|^2 \le \|q_n - u_n\|^2 + \Big(\sum_{k=n}^{m-1}\vartheta_k\|u_k - u_{k-1}\|\Big)^2 + 2\|q_n - u_n\|\sum_{k=n}^{m-1}\vartheta_k\|u_k - u_{k-1}\| - \|q_m - u_m\|^2.$$
The existence of $\lim_{n\to\infty}\|u_n - q_n\|$ and the summability of the series $\sum_n \vartheta_n\|u_n - u_{n-1}\|$ imply that $\|q_n - q_m\| \to 0$ as $m > n \to \infty$. As a result, $\{q_n\}$ is a Cauchy sequence, and due to the closedness of the solution set $EP(f, C)$ the sequence $\{q_n\}$ converges strongly to some $q^* \in EP(f, C)$. Next, we show that $q^* = u^*$. By Lemma 1(ii) and $u^*, q^* \in EP(f, C)$, we can write
$$\langle u_n - q_n, u^* - q_n \rangle \le 0.$$
Since $q_n \to q^*$ and $u_n \rightharpoonup u^*$, letting $n \to \infty$ we obtain
$$\langle u^* - q^*, u^* - q^* \rangle \le 0,$$
which gives $u^* = q^* = \lim_{n\to\infty} P_{EP(f,C)}(u_n)$.  □

4. Applications to Solve Fixed Point Problems

Now, consider the application of our results from Section 3 to fixed-point problems involving κ-strict pseudocontractions. A mapping $T : C \to C$ is said to be
(i) a κ-strict pseudocontraction [45] on $C$ if
$$\|Tz_1 - Tz_2\|^2 \le \|z_1 - z_2\|^2 + \kappa\|(z_1 - Tz_1) - (z_2 - Tz_2)\|^2, \quad \forall z_1, z_2 \in C,$$
which is equivalent to
$$\langle Tz_1 - Tz_2, z_1 - z_2 \rangle \le \|z_1 - z_2\|^2 - \frac{1-\kappa}{2}\|(z_1 - Tz_1) - (z_2 - Tz_2)\|^2, \quad \forall z_1, z_2 \in C;$$
(ii) sequentially weakly continuous on $C$ if $T(u_n) \rightharpoonup T(p)$ for every sequence $\{u_n\} \subset C$ with $u_n \rightharpoonup p$.
The fixed point problem for a mapping $T : C \to C$ is formulated in the following way:
$$\text{Find } u^* \in C \text{ such that } T(u^*) = u^*.$$
Note: define the bifunction $f(x, y) = \langle x - Tx, y - x \rangle$ for all $x, y \in C$. Then the equilibrium problem (1) converts into the fixed point problem, with $2c_1 = 2c_2 = \frac{3 - 2\kappa}{1 - \kappa}$. From the value of $v_n$ in Algorithm 1, we have
$$v_n = \arg\min_{y\in C}\Big\{\lambda_n f(\eta_n, y) + \tfrac{1}{2}\|\eta_n - y\|^2\Big\} = \arg\min_{y\in C}\Big\{\lambda_n\langle \eta_n - T(\eta_n), y - \eta_n\rangle + \tfrac{1}{2}\|\eta_n - y\|^2\Big\}$$
$$= \arg\min_{y\in C}\Big\{\lambda_n\langle \eta_n - T(\eta_n), y - \eta_n\rangle + \tfrac{1}{2}\|\eta_n - y\|^2 + \tfrac{\lambda_n^2}{2}\|\eta_n - T(\eta_n)\|^2 - \tfrac{\lambda_n^2}{2}\|\eta_n - T(\eta_n)\|^2\Big\}$$
$$= \arg\min_{y\in C}\Big\{\tfrac{1}{2}\big\|y - \eta_n + \lambda_n(\eta_n - T(\eta_n))\big\|^2\Big\} = P_C\big(\eta_n - \lambda_n(\eta_n - T(\eta_n))\big) = P_C\big((1-\lambda_n)\eta_n + \lambda_n T(\eta_n)\big).$$
Since $\omega_n \in \partial_2 f(\eta_n, v_n)$, it follows from the definition of the subdifferential that
$$\langle \omega_n, y - v_n \rangle \ge f(\eta_n, y) - f(\eta_n, v_n) = \langle \eta_n - T(\eta_n), y - \eta_n \rangle - \langle \eta_n - T(\eta_n), v_n - \eta_n \rangle = \langle \eta_n - T(\eta_n), y - v_n \rangle, \quad \forall y \in H,$$
and consequently $\langle \eta_n - T(\eta_n) - \omega_n, y - v_n \rangle \le 0$. This implies that
$$\langle (1-\lambda_n)\eta_n + \lambda_n T(\eta_n) - v_n, y - v_n \rangle \ge \langle (1-\lambda_n)\eta_n + \lambda_n T(\eta_n) - v_n, y - v_n \rangle + \lambda_n\langle \eta_n - T(\eta_n) - \omega_n, y - v_n \rangle = \langle \eta_n - \lambda_n\omega_n - v_n, y - v_n \rangle.$$
Similarly to Expression (45), we obtain
$$u_{n+1} = P_{H_n}\big(\eta_n - \lambda_n(v_n - T(v_n))\big).$$
As a consequence of the results in Section 3, we have the following fixed point theorem:
Corollary 1.
Let $C$ be a nonempty, closed and convex subset of a Hilbert space $H$ and let $T : C \to C$ be a κ-strict pseudocontraction that is sequentially weakly continuous with $Fix(T) \ne \emptyset$. Generate the sequences $\{\eta_n\}$, $\{u_n\}$ and $\{v_n\}$ as follows (a code sketch is given after the corollary):
(i) Fix $u_{-1}, u_0 \in C$, $\lambda_0 > 0$, $\mu \in (0,1)$ and $\vartheta \in [0,1)$, with a sequence $\{\epsilon_n\} \subset [0, +\infty)$ such that
$$\sum_{n=0}^{+\infty} \epsilon_n < +\infty.$$
(ii) Choose $\vartheta_n$ such that $0 \le \vartheta_n \le \bar{\vartheta}_n$, where
$$\bar{\vartheta}_n = \begin{cases} \min\Big\{\vartheta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \ne u_{n-1}, \\ \vartheta & \text{otherwise}. \end{cases}$$
(iii) Evaluate
$$\eta_n = u_n + \vartheta_n(u_n - u_{n-1}), \qquad v_n = P_C\big(\eta_n - \lambda_n(\eta_n - T(\eta_n))\big), \qquad u_{n+1} = P_{H_n}\big(\eta_n - \lambda_n(v_n - T(v_n))\big),$$
where $H_n = \{ z \in H : \langle (1-\lambda_n)\eta_n + \lambda_n T(\eta_n) - v_n, z - v_n \rangle \le 0 \}$.
(iv) Set $d_2 = \langle (\eta_n - v_n) - (T(\eta_n) - T(v_n)), u_{n+1} - v_n \rangle$ and revise the stepsize $\lambda_{n+1}$ in the following way:
$$\lambda_{n+1} = \begin{cases} \min\Big\{ \lambda_n, \dfrac{\mu\|\eta_n - v_n\|^2 + \mu\|u_{n+1} - v_n\|^2}{2 d_2} \Big\} & \text{if } d_2 > 0, \\ \lambda_n & \text{otherwise}. \end{cases}$$
Then the sequences $\{\eta_n\}$, $\{u_n\}$ and $\{v_n\}$ converge weakly to some $u^* \in Fix(T)$.
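A minimal, self-contained Python sketch of Corollary 1 in $\mathbb{R}^d$ (our own illustration, with hypothetical names), using the closed-form projections onto a box $C$ and onto the half-space $H_n$; any nonexpansive mapping is a 0-strict pseudocontraction, so $T(x) = \tfrac{1}{2}x$ serves as a simple test case.

```python
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, v):
    """Projection onto H = {z : <a, z - v> <= 0}."""
    s = np.dot(a, x - v)
    return x if s <= 0 else x - (s / np.dot(a, a)) * a

def fixed_point_eiegm(T, u_prev, u0, lam=0.5, mu=0.33, theta=0.5, n_iter=200):
    """Sketch of Corollary 1 for a strict pseudocontraction T on the box C = [-1, 1]^d."""
    u_nm1, u_n = np.asarray(u_prev, float), np.asarray(u0, float)
    for n in range(n_iter):
        eps_n = 1.0 / (n + 1) ** 2                      # a summable sequence eps_n
        d = np.linalg.norm(u_n - u_nm1)
        theta_n = theta if d == 0 else min(theta, eps_n / d)
        eta = u_n + theta_n * (u_n - u_nm1)
        v = proj_box((1 - lam) * eta + lam * T(eta))
        a = (1 - lam) * eta + lam * T(eta) - v          # normal of the half-space H_n
        u_next = proj_halfspace(eta - lam * (v - T(v)), a, v)
        d2 = np.dot((eta - v) - (T(eta) - T(v)), u_next - v)
        if d2 > 0:
            lam = min(lam, (mu * np.linalg.norm(eta - v) ** 2
                            + mu * np.linalg.norm(u_next - v) ** 2) / (2 * d2))
        u_nm1, u_n = u_n, u_next
    return u_n

# Example: T(x) = 0.5*x is nonexpansive (a 0-strict pseudocontraction) with Fix(T) = {0}.
# print(fixed_point_eiegm(lambda x: 0.5 * x, np.ones(5), np.ones(5)))
```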

5. Application to Solve Variational Inequality Problems

Now, consider the application of our results from Section 3 to variational inequality problems involving a pseudomonotone and Lipschitz continuous operator. An operator $K : H \to H$ is said to be
(i) L-Lipschitz continuous on $C$ if
$$\|K(z_1) - K(z_2)\| \le L\|z_1 - z_2\|, \quad \forall z_1, z_2 \in C;$$
(ii) pseudomonotone on $C$ if
$$\langle K(z_1), z_2 - z_1 \rangle \ge 0 \;\Longrightarrow\; \langle K(z_2), z_1 - z_2 \rangle \le 0, \quad \forall z_1, z_2 \in C.$$
The variational inequality problem for an operator $K : H \to H$ is formulated in the following way:
$$\text{Find } u^* \in C \text{ such that } \langle K(u^*), y - u^* \rangle \ge 0, \quad \forall y \in C.$$
Note: define the bifunction $f(x, y) := \langle K(x), y - x \rangle$ for all $x, y \in C$. Then the equilibrium problem (1) translates into a variational inequality problem with $L = 2c_1 = 2c_2$. From the value of $v_n$, we have
$$v_n = \arg\min_{y\in C}\Big\{\lambda_n f(\eta_n, y) + \tfrac{1}{2}\|\eta_n - y\|^2\Big\} = \arg\min_{y\in C}\Big\{\lambda_n\langle K(\eta_n), y - \eta_n\rangle + \tfrac{1}{2}\|\eta_n - y\|^2 + \tfrac{\lambda_n^2}{2}\|K(\eta_n)\|^2 - \tfrac{\lambda_n^2}{2}\|K(\eta_n)\|^2\Big\}$$
$$= \arg\min_{y\in C}\Big\{\tfrac{1}{2}\big\|y - (\eta_n - \lambda_n K(\eta_n))\big\|^2\Big\} = P_C[\eta_n - \lambda_n K(\eta_n)].$$
Since $\omega_n \in \partial_2 f(\eta_n, v_n)$, it follows from the definition of the subdifferential that
$$\langle \omega_n, y - v_n \rangle \ge f(\eta_n, y) - f(\eta_n, v_n) = \langle K(\eta_n), y - \eta_n \rangle - \langle K(\eta_n), v_n - \eta_n \rangle = \langle K(\eta_n), y - v_n \rangle, \quad \forall y \in H,$$
and consequently $\langle K(\eta_n) - \omega_n, y - v_n \rangle \le 0$. This implies that
$$\langle \eta_n - \lambda_n K(\eta_n) - v_n, y - v_n \rangle \ge \langle \eta_n - \lambda_n K(\eta_n) - v_n, y - v_n \rangle + \lambda_n\langle K(\eta_n) - \omega_n, y - v_n \rangle = \langle \eta_n - \lambda_n\omega_n - v_n, y - v_n \rangle.$$
In a similar way to Expression (52), we have
$$u_{n+1} = P_{H_n}[\eta_n - \lambda_n K(v_n)].$$
Suppose that $K$ satisfies the following conditions:
(K1) $K$ is pseudomonotone on $C$ and $VI(K, C) \ne \emptyset$;
(K2) $K$ is L-Lipschitz continuous on $C$ for some $L > 0$;
(K3) $\limsup_{n\to\infty}\langle K(u_n), y - u_n \rangle \le \langle K(p), y - p \rangle$ for every $y \in C$ and every $\{u_n\} \subset C$ satisfying $u_n \rightharpoonup p$.
Corollary 2.
Assume that the operator $K : C \to H$ satisfies conditions (K1)–(K3), and generate the sequences $\{\eta_n\}$, $\{u_n\}$ and $\{v_n\}$ as follows (a code sketch is given after the corollary):
(i) Choose $u_{-1}, u_0 \in C$, $\lambda_0 > 0$, $\mu \in (0,1)$ and $\vartheta \in [0,1)$, with $\{\epsilon_n\} \subset [0, +\infty)$ such that
$$\sum_{n=0}^{+\infty}\epsilon_n < +\infty.$$
(ii) Choose $\vartheta_n$ satisfying $0 \le \vartheta_n \le \bar{\vartheta}_n$, where
$$\bar{\vartheta}_n = \begin{cases}\min\Big\{\vartheta, \dfrac{\epsilon_n}{\|u_n - u_{n-1}\|}\Big\} & \text{if } u_n \ne u_{n-1}, \\ \vartheta & \text{otherwise}.\end{cases}$$
(iii) Set $\eta_n = u_n + \vartheta_n(u_n - u_{n-1})$ and compute
$$v_n = P_C[\eta_n - \lambda_n K(\eta_n)], \qquad u_{n+1} = P_{H_n}[\eta_n - \lambda_n K(v_n)],$$
where $H_n = \{ z \in H : \langle \eta_n - \lambda_n K(\eta_n) - v_n, z - v_n \rangle \le 0 \}$.
(iv) Set $d_3 = \langle K(\eta_n) - K(v_n), u_{n+1} - v_n \rangle$ and revise the stepsize $\lambda_{n+1}$ in the following way:
$$\lambda_{n+1} = \begin{cases}\min\Big\{\lambda_n, \dfrac{\mu\|\eta_n - v_n\|^2 + \mu\|u_{n+1} - v_n\|^2}{2 d_3}\Big\} & \text{if } d_3 > 0, \\ \lambda_n & \text{otherwise}.\end{cases}$$
Then the sequences $\{\eta_n\}$, $\{u_n\}$ and $\{v_n\}$ converge weakly to some $u^* \in VI(K, C)$.
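Similarly, a short Python sketch of Corollary 2 (again our own illustration, with hypothetical names): since $C$ and $H_n$ admit closed-form projections, no inner minimization is needed. The affine operator $K(x) = Mx + q$ with $M$ positive semidefinite used in the example is monotone, hence pseudomonotone.

```python
import numpy as np

def vi_eiegm(K, proj_C, u_prev, u0, lam=0.4, mu=0.33, theta=0.5, n_iter=300):
    """Sketch of Corollary 2: self-adaptive inertial subgradient extragradient
    iteration for VI(K, C); proj_C is the metric projection onto C."""
    u_nm1, u_n = np.asarray(u_prev, float), np.asarray(u0, float)
    for n in range(n_iter):
        eps_n = 1.0 / (n + 1) ** 2
        d = np.linalg.norm(u_n - u_nm1)
        theta_n = theta if d == 0 else min(theta, eps_n / d)
        eta = u_n + theta_n * (u_n - u_nm1)
        v = proj_C(eta - lam * K(eta))
        a = eta - lam * K(eta) - v                      # normal of the half-space H_n
        x = eta - lam * K(v)
        s = np.dot(a, x - v)                            # closed-form projection onto H_n
        u_next = x if s <= 0 else x - (s / np.dot(a, a)) * a
        d3 = np.dot(K(eta) - K(v), u_next - v)
        if d3 > 0:
            lam = min(lam, (mu * np.linalg.norm(eta - v) ** 2
                            + mu * np.linalg.norm(u_next - v) ** 2) / (2 * d3))
        u_nm1, u_n = u_n, u_next
    return u_n

# Example: monotone affine operator on the box [-5, 5]^5 (illustrative data).
M = np.diag([1.0, 2.0, 3.0, 4.0, 5.0]); q = np.ones(5)
sol = vi_eiegm(lambda x: M @ x + q, lambda x: np.clip(x, -5, 5),
               np.ones(5), np.ones(5))
```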

6. Numerical Experiments

The computational results presented in this section illustrate the effectiveness of our proposed Algorithm 1 (EiEGM) compared to Algorithm 1 (iEGM) in [39]. The MATLAB programs were run on a PC (Intel(R) Core(TM) i3-4010U CPU @ 1.70 GHz, 4.00 GB RAM) in MATLAB version 9.5 (R2018b). We used the built-in MATLAB function fmincon to solve the minimization subproblems.
Example 1.
Let $f : C \times C \to \mathbb{R}$ be defined by
$$f(u, v) = \sum_{i=2}^{5}(v_i - u_i)\,\|u\|, \quad u, v \in \mathbb{R}^5,$$
where C = ( u 1 , , u 5 ) : u 1 1 , u i 1 , i = 2 , , 5 . The bifunction f is Lipschitz-type continuous operator with constants c 1 = c 2 = 2 , and it satisfies conditions (f1)–(f4). To evaluate the best possible value of the control parameters, two tests were performed taking into consideration the variation of the control parameters λ , λ 0 and inertial factor ϑ . The numerical results are shown in the Table 1 and Table 2 by choosing u 1 = u 0 = ( 2 , 3 , 2 , 5 , 5 ) , μ = 0 . 33 and D n = η n v n ϵ = 10 4 .
Example 2.
Consider the Nash–Cournot equilibrium model that appeared in the paper [7]. The bifunction $f$ is defined by
$$f(u, v) = \langle Au + Bv + c, v - u \rangle,$$
where $c \in \mathbb{R}^5$ and the matrices $A$, $B$ are
$$A = \begin{pmatrix} 3.1 & 2 & 0 & 0 & 0 \\ 2 & 3.6 & 0 & 0 & 0 \\ 0 & 0 & 3.5 & 2 & 0 \\ 0 & 0 & 2 & 3.3 & 0 \\ 0 & 0 & 0 & 0 & 3 \end{pmatrix}, \quad B = \begin{pmatrix} 1.6 & 1 & 0 & 0 & 0 \\ 1 & 1.6 & 0 & 0 & 0 \\ 0 & 0 & 1.5 & 1 & 0 \\ 0 & 0 & 1 & 1.5 & 0 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 2 \\ 1 \\ 2 \\ 1 \end{pmatrix},$$
while the Lipschitz-type constants are $c_1 = c_2 = \tfrac{1}{2}\|A - B\|$ (see [7,46,47] for more details). The set $C \subset \mathbb{R}^5$ is $C := \{ u \in \mathbb{R}^5 : -5 \le u_i \le 5 \}$. Figure 1 and Figure 2 and Table 3 report the numerical results obtained by choosing $u_{-1} = u_0 = (1, \ldots, 1)$, $\mu = 0.33$ and $\epsilon = 10^{-6}$.
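As a rough illustration (our own code, not from the paper), the data of Example 2 can be set up in Python and the stated Lipschitz-type constant $c_1 = c_2 = \tfrac{1}{2}\|A - B\|$ evaluated numerically; the bifunction and its partial gradient can then be passed, for instance, to the eiegm sketch given after Algorithm 1 (the names f, grad2_f, C_constraints and eiegm are ours).

```python
import numpy as np

A = np.array([[3.1, 2, 0, 0, 0],
              [2, 3.6, 0, 0, 0],
              [0, 0, 3.5, 2, 0],
              [0, 0, 2, 3.3, 0],
              [0, 0, 0, 0, 3.0]])
B = np.array([[1.6, 1, 0, 0, 0],
              [1, 1.6, 0, 0, 0],
              [0, 0, 1.5, 1, 0],
              [0, 0, 1, 1.5, 0],
              [0, 0, 0, 0, 2.0]])
c = np.array([1.0, 2.0, 1.0, 2.0, 1.0])   # entries of c as printed in the text above

def f(u, v):
    """Nash-Cournot bifunction f(u, v) = <Au + Bv + c, v - u>."""
    return (A @ u + B @ v + c) @ (v - u)

def grad2_f(u, v):
    """Gradient of f(u, .) at v (B is symmetric here)."""
    return B.T @ (v - u) + A @ u + B @ v + c

# Lipschitz-type constants c1 = c2 = ||A - B|| / 2 (spectral norm)
print(0.5 * np.linalg.norm(A - B, 2))

# The box C = {u : -5 <= u_i <= 5} as SciPy-style inequality constraints
C_constraints = [{"type": "ineq", "fun": lambda u: 5.0 - np.abs(u)}]
```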
Example 3.
Let $f(\breve{p}, \breve{q}) = \langle F(\breve{p}), \breve{q} - \breve{p} \rangle$ and $F(\breve{p}) = G(\breve{p}) + H(\breve{p})$, where
$$G(\breve{p}) = \big(g_1(\breve{p}), g_2(\breve{p}), \ldots, g_n(\breve{p})\big), \qquad H(\breve{p}) = E\breve{p} + c, \qquad c = (1, 1, \ldots, 1),$$
and
$$g_i(\breve{p}) = \breve{p}_{i-1}^2 + \breve{p}_i^2 + \breve{p}_{i-1}\breve{p}_i + \breve{p}_i\breve{p}_{i+1}, \quad i = 1, 2, \ldots, n, \quad \breve{p}_0 = \breve{p}_{n+1} = 0.$$
The entries of a square matrix E are taken in the following way:
e i , j = 4 j = i 1 i j = 1 2 i j = 1 0 o t h e r w i s e ,
where $C = \{(u_1, \ldots, u_n) \in \mathbb{R}^n : u_i \ge 1, \ i = 2, \ldots, n\}$. To determine suitable values of the control parameters, experiments were carried out varying the control parameter $\lambda_0$ and the inertial factor $\vartheta$. Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 and Table 4 and Table 5 report the numerical results obtained by choosing $u_{-1} = u_0 = (1, \ldots, 1)$, $\mu = 0.33$ and $\epsilon = 10^{-6}$.
Example 4.
Suppose that $H = L^2([0,1])$ is a Hilbert space with inner product $\langle u, v \rangle = \int_0^1 u(t)v(t)\,dt$ for all $u, v \in H$ and induced norm
$$\|u\| = \sqrt{\int_0^1 |u(t)|^2\,dt}.$$
Assume that $C := \{ u \in L^2([0,1]) : \|u\| \le 1 \}$ is the unit ball. Let $K : C \to H$ be defined by
$$K(u)(t) = \int_0^1 \big(u(t) - H(t,s)f(u(s))\big)\,ds + g(t),$$
where
$$H(t,s) = \frac{2ts\,e^{t+s}}{e\sqrt{e^2 - 1}}, \qquad f(u) = \cos u, \qquad g(t) = \frac{2t\,e^{t}}{e\sqrt{e^2 - 1}}.$$
It is shown in [48] that $K$ is monotone and Lipschitz continuous with Lipschitz constant $L = 2$. Figure 9 and Figure 10 and Table 6 show the numerical results for different choices of $u_0$ and $\epsilon = 10^{-6}$.
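Since the feasible set in Example 4 is the unit ball of $L^2([0,1])$, its metric projection has the closed form $P_C(u) = u$ if $\|u\| \le 1$ and $P_C(u) = u/\|u\|$ otherwise. A small Python helper for a discretized version (our own illustration; the use of quadrature weights is our assumption about how one might discretize the interval):

```python
import numpy as np

def project_unit_ball(u, weights=None):
    """Projection onto the unit ball {u : ||u|| <= 1}.
    With `weights` (e.g. quadrature weights of a grid on [0, 1]) the norm is the
    discretized L^2 norm; otherwise the Euclidean norm is used."""
    w = np.ones_like(u) if weights is None else weights
    norm = np.sqrt(np.sum(w * u ** 2))
    return u if norm <= 1.0 else u / norm
```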
Discussion on the Numerical Experiments: We have the following findings about the above-mentioned experiments:
(1)
The proposed Algorithm 1 uses a variable stepsize that is updated at each iteration based on some of the previous iterates. Its key advantage is that it works without prior knowledge of the Lipschitz-type constants $c_1$ and $c_2$, unlike Algorithm 1 in [39], whose stepsize depends on these constants.
(2)
Four examples were used to compare our proposed method with Algorithm 1 in [39]. In particular, the Lipschitz-type constants are not available in Example 3. Because the stepsize of Algorithm 1 in [39] depends on these constants, i.e., $0 < \lambda < \min\big\{\frac{1}{2c_1}, \frac{1}{2c_2}\big\}$, that algorithm cannot be run on Example 3, whereas the proposed Algorithm 1 can.
(3)
It is noted that the selection of the value of $\vartheta$ is important; in our experiments, values of $\vartheta \in (0.3, 0.6)$ performed better than most other choices.
(4)
The choice of the value of $\lambda_0$ is also critical, and the proposed algorithm performs better when $\lambda_0$ is closer to 1.
(5)
It can also be observed that the efficiency of an algorithm significantly depends on the nature of the problem and on the tolerance. More time and a larger number of iterations are needed for large-scale problems. In such cases, a suitable stepsize value improves the efficiency of the algorithm and its convergence rate.
(6)
Figure 9 and Figure 10 and Table 6 suggest that the choice of initial points and the complexity of the bifunction affect the performance of the algorithms in terms of the number of iterations and the elapsed time.

Author Contributions

Conceptualization, T.B., P.K. and H.u.R.; Writing-Original Draft Preparation, T.B., N.P. and H.u.R.; Writing-Review & Editing, T.B., N.P., H.u.R., P.K. and W.K.; Methodology, N.P. and H.u.R.; Visualization, T.B. and N.P.; Software, H.u.R.; Funding Acquisition, P.K. and W.K.; Supervision, P.K. and W.K.; Project Administration, P.K. and W.K.; Resources, P.K. and W.K. All authors have read and agreed to the published version of this manuscript.

Funding

This research work was financially supported by King Mongkut’s University of Technology Thonburi through the "KMUTT 55th Anniversary Commemorative Fund". Moreover, this project was supported by the Theoretical and Computational Science (TaCS) Center under the Computational and Applied Science for Smart research Innovation research Cluster (CLASSIC), Faculty of Science, KMUTT. In particular, Nuttapol Pakkaranang was supported by the Thailand Research Fund (TRF) through the Royal Golden Jubilee Ph.D. (RGJ-PHD) Program [Grant No. PHD/0205/2561]. Habib ur Rehman was supported by the Petchra Pra Jom Doctoral Academic Scholarship for a Ph.D. Program at KMUTT [grant number 39/2560]. Wiyada Kumam was financially supported by the Rajamangala University of Technology Thanyaburi (RMUTTT) [grant No. NSF62D0604].

Acknowledgments

We are very grateful to the editor and the anonymous referees for their valuable and useful comments, which helped in improving the quality of this work. The second author would like to thank the “Thailand Research Fund (TRF) through the Royal Golden Jubilee Ph.D. (RGJ-PHD) Program (Grant No. PHD/0205/2561)”. The third author would like to thank the “Petchra Pra Jom Klao PhD Research Scholarship from the King Mongkut’s University of Technology Thonburi”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Blum, E. From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  2. Bianchi, M.; Schaible, S. Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 1996, 90, 31–43. [Google Scholar] [CrossRef]
  3. Fan, K. A Minimax Inequality and Applications, Inequalities III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972. [Google Scholar]
  4. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Science & Business Media: Berlin, Germany, 2007. [Google Scholar]
  5. Konnov, I. Equilibrium Models and Variational Inequalities; Elsevier: Amsterdam, The Netherlands, 2007; Volume 210. [Google Scholar]
  6. Muu, L.D.; Oettli, W. Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. Theory Methods Appl. 1992, 18, 1159–1166. [Google Scholar] [CrossRef]
  7. Quoc, T.D.; Muu, L.D.; Hien, N.V. Extragradient algorithms extended to equilibrium problems. Optimization 2008, 57, 749–776. [Google Scholar] [CrossRef]
  8. Ur Rehman, H.; Kumam, P.; Cho, Y.J.; Yordsorn, P. Weak convergence of explicit extragradient algorithms for solving equilibirum problems. J. Inequalities Appl. 2019, 2019. [Google Scholar] [CrossRef]
  9. Quoc, T.D.; Anh, P.N.; Muu, L.D. Dual extragradient algorithms extended to equilibrium problems. J. Glob. Optim. 2011, 52, 139–159. [Google Scholar] [CrossRef]
  10. Lyashko, S.I.; Semenov, V.V. A new two-step proximal algorithm of solving the problem of equilibrium programming. In Optimization and Its Applications in Control and Data Sciences; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 315–325. [Google Scholar] [CrossRef]
  11. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515. [Google Scholar] [CrossRef] [Green Version]
  12. Ur Rehman, H.; Kumam, P.; Je Cho, Y.; Suleiman, Y.I.; Kumam, W. Modified Popov’s explicit iterative algorithms for solving pseudomonotone equilibrium problems. Optim. Methods Softw. 2020, 1–32. [Google Scholar] [CrossRef]
  13. Anh, P.N.; Hai, T.N.; Tuan, P.M. On ergodic algorithms for equilibrium problems. J. Glob. Optim. 2015, 64, 179–195. [Google Scholar] [CrossRef]
  14. Hieu, D.V.; Quy, P.K.; Vy, L.V. Explicit iterative algorithms for solving equilibrium problems. Calcolo 2019, 56. [Google Scholar] [CrossRef]
  15. Hieu, D.V. New extragradient method for a class of equilibrium problems in Hilbert spaces. Appl. Anal. 2017, 97, 811–824. [Google Scholar] [CrossRef]
  16. Ur Rehman, H.; Kumam, P.; Abubakar, A.B.; Cho, Y.J. The extragradient algorithm with inertial effects extended to equilibrium problems. Comput. Appl. Math. 2020, 39. [Google Scholar] [CrossRef]
  17. Santos, P.; Scheimberg, S. An inexact subgradient algorithm for equilibrium problems. Comput. Appl. Math. 2011, 30, 91–107. [Google Scholar]
  18. Ur Rehman, H.; Kumam, P.; Kumam, W.; Shutaywi, M.; Jirakitpuwapat, W. The inertial sub-gradient extra-gradient method for a class of pseudo-monotone equilibrium problems. Symmetry 2020, 12, 463. [Google Scholar] [CrossRef] [Green Version]
  19. Hieu, D.V. Halpern subgradient extragradient method extended to equilibrium problems. Rev. Real Acad. Cienc. Exactas Físicas y Nat. Ser. A Matemáticas 2016, 111, 823–840. [Google Scholar] [CrossRef]
  20. Anh, P.N.; An, L.T.H. The subgradient extragradient method extended to equilibrium problems. Optimization 2012, 64, 225–248. [Google Scholar] [CrossRef]
  21. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Deebani, W.; Kumam, W. Inertial extra-gradient method for solving a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces with application in variational inequality problem. Symmetry 2020, 12, 503. [Google Scholar] [CrossRef] [Green Version]
  22. Muu, L.D.; Quoc, T.D. Regularization algorithms for solving monotone Ky Fan inequalities with application to a Nash-Cournot equilibrium model. J. Optim. Theory Appl. 2009, 142, 185–204. [Google Scholar] [CrossRef]
  23. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Alreshidi, N.A.; Kumam, W.; Jirakitpuwapat, W. A self-adaptive extra-gradient methods for a family of pseudomonotone equilibrium programming with application in different classes of variational inequality problems. Symmetry 2020, 12, 523. [Google Scholar] [CrossRef] [Green Version]
  24. Ur Rehman, H.; Kumam, P.; Argyros, I.K.; Shutaywi, M.; Shah, Z. Optimization based methods for solving the equilibrium problems with applications in variational inequality problems and solution of Nash equilibrium models. Mathematics 2020, 8, 822. [Google Scholar] [CrossRef]
  25. Ur Rehman, H.; Kumam, P.; Shutaywi, M.; Alreshidi, N.A.; Kumam, W. Inertial optimization based two-step methods for solving equilibrium problems with applications in variational inequality problems and growth control equilibrium models. Energies 2020, 13, 3292. [Google Scholar] [CrossRef]
  26. Gibali, A.; Hieu, D.V. A new inertial double-projection method for solving variational inequalities. J. Fixed Point Theory Appl. 2019, 21. [Google Scholar] [CrossRef]
  27. Thong, D.V.; Hieu, D.V. Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 2017, 79, 597–610. [Google Scholar] [CrossRef]
  28. Thong, D.V.; Hieu, D.V. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98. [Google Scholar] [CrossRef]
  29. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2012, 61, 1119–1132. [Google Scholar] [CrossRef]
  30. Yordsorn, P.; Kumam, P.; Rehman, H.U. Modified two-step extragradient method for solving the pseudomonotone equilibrium programming in a real Hilbert space. Carpathian J. Math. 2020, 36, 313–330. [Google Scholar]
  31. Hammad, H.A.; ur Rehman, H.; la Sen, M.D. Advanced algorithms and common solutions to variational inequalities. Symmetry 2020, 12, 1198. [Google Scholar] [CrossRef]
  32. Gibali, A. A new non-Lipschitzian projection method for solving variational inequalities in Euclidean spaces. J. Nonlinear Anal. Optim. Theory Appl. 2015, 6, 41–51. [Google Scholar]
  33. Dong, Q.L.; Jiang, D.; Gibali, A. A modified subgradient extragradient method for solving the variational inequality problem. Numer. Algorithms 2018, 79, 927–940. [Google Scholar] [CrossRef] [Green Version]
  34. Abubakar, J.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. Inertial iterative schemes with variable step sizes for variational inequality problem involving pseudomonotone operator. Mathematics 2020, 8, 609. [Google Scholar] [CrossRef]
  35. Abubakar, J.; Sombut, K.; ur Rehman, H.; Ibrahim, A.H. An accelerated subgradient extragradient algorithm for strongly pseudomonotone variational inequality problems. Thai J. Math. 2019, 18. [Google Scholar]
  36. Flåm, S.D.; Antipin, A.S. Equilibrium programming using proximal-like algorithms. Math. Program. 1996, 78, 29–41. [Google Scholar] [CrossRef]
  37. Korpelevich, G. The extragradient method for finding saddle points and other problems. Matecon 1976, 12, 747–756. [Google Scholar]
  38. Yang, J.; Liu, H.; Liu, Z. Modified subgradient extragradient algorithms for solving monotone variational inequalities. Optimization 2018, 67, 2247–2258. [Google Scholar] [CrossRef]
  39. Vinh, N.T.; Muu, L.D. Inertial extragradient algorithms for solving equilibrium problems. Acta Math. Vietnam. 2019, 44, 639–663. [Google Scholar] [CrossRef]
  40. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2010, 148, 318–335. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Kreyszig, E. Introductory Functional Analysis with Applications, 1st ed.; Wiley: New York, NY, USA, 1989. [Google Scholar]
  42. Tiel, J.V. Convex Analysis: An Introductory Text, 1st ed.; Wiley: New York, NY, USA, 1984. [Google Scholar]
  43. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–598. [Google Scholar] [CrossRef] [Green Version]
  44. Tan, K.; Xu, H. Approximating Fixed Points of Nonexpansive Mappings by the Ishikawa Iteration Process. J. Math. Anal. Appl. 1993, 178, 301–308. [Google Scholar] [CrossRef] [Green Version]
  45. Browder, F.; Petryshyn, W. Construction of fixed points of nonlinear mappings in Hilbert space. J. Math. Anal. Appl. 1967, 20, 197–228. [Google Scholar] [CrossRef] [Green Version]
  46. Ur Rehman, H.; Pakkaranang, N.; Hussain, A.; Wairojjana, N. A modified extra-gradient method for a family of strongly pseudomonotone equilibrium problems in real Hilbert spaces. J. Math. Comput. Sci. 2020, 22, 38–48. [Google Scholar] [CrossRef]
  47. Yordsorn, P.; Kumam, P.; ur Rehman, H.; Ibrahim, A.H. A weak convergence self-adaptive method for solving pseudomonotone equilibrium problems in a real Hilbert space. Mathematics 2020, 8, 1165. [Google Scholar] [CrossRef]
  48. Van Hieu, D.; Anh, P.K.; Muu, L.D. Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 2017, 66, 75–96. [Google Scholar] [CrossRef]
Figure 1. Example 2: numerical behavior of Algorithm 3.1 in [39] for different values of $\lambda$.
Figure 2. Example 2: numerical behavior of Algorithm 1 for different values of $\lambda_0$.
Figure 3. Numerical behavior of Algorithm 1 in $\mathbb{R}^{50}$ for different values of $\lambda_0$.
Figure 4. Numerical behavior of Algorithm 1 in $\mathbb{R}^{50}$ for different values of $\lambda_0$.
Figure 5. Numerical behavior of Algorithm 1 in $\mathbb{R}^{200}$ for different values of $\lambda_0$.
Figure 6. Numerical behavior of Algorithm 1 in $\mathbb{R}^{200}$ for different values of $\lambda_0$.
Figure 7. Numerical behavior of Algorithm 1 in $\mathbb{R}^{50}$ for different values of $\vartheta$.
Figure 8. Numerical behavior of Algorithm 1 in $\mathbb{R}^{50}$ for different values of $\vartheta$.
Figure 9. Comparison of Algorithm 1 with Algorithm 1 in [39] for $u_{-1} = u_0 = 1$.
Figure 10. Comparison of Algorithm 1 with Algorithm 1 in [39] for $u_{-1} = u_0 = t$.
Table 1. Example 1: numerical comparison of Algorithm 1 with Algorithm 1 in [39].

ϑ | λ | λ0 | iEGM Iter. | EiEGM Iter. | iEGM Time (s) | EiEGM Time (s)
0.45 | 0.22 | 1.00 | 12 | 7 | 0.8675 | 0.5324
0.45 | 0.16 | 0.80 | 13 | 7 | 0.8815 | 0.5423
0.45 | 0.10 | 0.60 | 17 | 7 | 1.0915 | 0.5212
0.45 | 0.05 | 0.40 | 21 | 8 | 1.4119 | 0.5567
0.45 | 0.01 | 0.20 | 25 | 9 | 1.7229 | 0.5881
Table 2. Example 1: numerical comparison of Algorithm 1 with Algorithm 1 in [39].

ϑ | λ | λ0 | iEGM Iter. | EiEGM Iter. | iEGM Time (s) | EiEGM Time (s)
0.95 | 0.20 | 0.50 | 19 | 7 | 1.1482 | 0.4911
0.75 | 0.20 | 0.50 | 14 | 7 | 0.9676 | 0.5026
0.55 | 0.20 | 0.50 | 13 | 7 | 0.9654 | 0.4991
0.35 | 0.20 | 0.50 | 12 | 8 | 0.9123 | 0.5092
0.15 | 0.20 | 0.50 | 17 | 9 | 1.0715 | 0.5098
Table 3. Figure 1 and Figure 2: numerical comparison of Algorithm 1 with Algorithm 1 in [39].

ϑ | λ | λ0 | iEGM Iter. | EiEGM Iter. | iEGM Time (s) | EiEGM Time (s)
0.50 | 0.05 | 0.15 | 98 | 64 | 2.2174 | 1.6342
0.50 | 0.10 | 0.35 | 64 | 50 | 1.6815 | 1.3452
0.50 | 0.15 | 0.55 | 54 | 46 | 1.5712 | 1.2011
0.50 | 0.20 | 0.75 | 50 | 42 | 1.5196 | 1.0845
0.50 | 0.25 | 0.95 | 45 | 38 | 1.3859 | 1.0023
Table 4. Numerical results for Algorithm 1 (EiEGM) in $\mathbb{R}^{50}$ for different values of $\lambda_0$ and $\vartheta$.

λ0 | Iterations | CPU Time (s) || ϑ | Iterations | Execution Time (s)
1.00 | 64 | 0.9276 || 0.90 | 116 | 1.8066
0.80 | 74 | 1.0975 || 0.70 | 80 | 1.2464
0.60 | 84 | 1.2849 || 0.50 | 73 | 1.1262
0.40 | 91 | 1.3740 || 0.30 | 88 | 1.3788
0.20 | 97 | 1.6298 || 0.10 | 102 | 1.6159
Table 5. Numerical results for Algorithm 1 (EiEGM) in $\mathbb{R}^{200}$ for different values of $\lambda_0$ and $\vartheta$.

λ0 | Iterations | CPU Time (s) || ϑ | Iterations | Execution Time (s)
1.00 | 67 | 13.3967 || 0.90 | 105 | 20.6972
0.80 | 78 | 15.3566 || 0.70 | 80 | 15.5770
0.60 | 88 | 17.3471 || 0.50 | 80 | 15.4838
0.40 | 96 | 18.8894 || 0.30 | 94 | 19.4532
0.20 | 102 | 19.9705 || 0.10 | 108 | 21.9745
Table 6. Example 4: numerical comparison of Algorithm 1 with Algorithm 1 in [39].

u_{-1} = u_0 | ϑ | λ | λ0 | iEGM Iter. | EiEGM Iter. | iEGM Time (s) | EiEGM Time (s)
1 | 0.50 | 0.20 | 0.50 | 31 | 25 | 0.0158 | 0.0158
t | 0.50 | 0.20 | 0.50 | 31 | 27 | 0.0158 | 0.0158
2t² | 0.50 | 0.20 | 0.50 | 33 | 30 | 0.0158 | 0.0158
sin(t) | 0.50 | 0.20 | 0.50 | 37 | 30 | 0.0158 | 0.0158
exp(t) | 0.50 | 0.20 | 0.50 | 42 | 32 | 0.0158 | 0.0158
