Article

Projected-Reflected Subgradient-Extragradient Method and Its Real-World Applications

1 Department of Mathematics, ORT Braude College, Karmiel 2161002, Israel
2 The Center for Mathematics and Scientific Computation, University of Haifa, Mt. Carmel, Haifa 3498838, Israel
3 Department of Mathematics and Physical Sciences, California University of Pennsylvania, California, PA 15419, USA
4 Department of Mathematics, Prairie View A&M University, Prairie View, TX 77446, USA
5 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2021, 13(3), 489; https://doi.org/10.3390/sym13030489
Submission received: 5 February 2021 / Revised: 10 March 2021 / Accepted: 11 March 2021 / Published: 16 March 2021
(This article belongs to the Special Issue Advances in Nonlinear, Discrete, Continuous and Hamiltonian Systems)

Abstract

Our main focus in this work is the classical variational inequality problem with Lipschitz continuous and pseudo-monotone mapping in real Hilbert spaces. An adaptive reflected subgradient-extragradient method is presented along with its weak convergence analysis. The novelty of the proposed method lies in the fact that only one projection onto the feasible set in each iteration is required, and there is no need to know/approximate the Lipschitz constant of the cost function a priori. To illustrate and emphasize the potential applicability of the new scheme, several numerical experiments and comparisons in tomography reconstruction, Nash–Cournot oligopolistic equilibrium, and more are presented.

1. Introduction

In this paper, we focus on the classical variational inequality (VI) problem, as can be found in Fichera [1,2], Stampacchia [3], and Kinderlehrer and Stampacchia [4], defined in a real Hilbert space H. Given a nonempty, closed, and convex set $C \subseteq H$ and a continuous mapping $A : H \to H$, the variational inequality problem consists of finding a point $x^* \in C$ such that:
$$\langle A x^*, x - x^* \rangle \ge 0 \quad \forall x \in C. \qquad (1)$$
$VI(C,A)$ is used to denote the solution set of VI (1) for simplicity. A wide range of mathematical and applied sciences rely heavily on variational inequalities in both theory and algorithms. Due to the importance of the variational inequality problem and its many applications in different fields, several notable researchers have extensively studied this class of problems in the literature, and many more new ideas are emerging in connection with these problems. In the finite-dimensional setting, the current state-of-the-art results can be found in [5,6,7] and the substantial references therein.
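A standard special case helps fix ideas (it is not pursued further in this paper): when the cost mapping is the gradient of a convex, differentiable function, (1) is exactly the first-order optimality condition of a constrained minimization problem.

```latex
% Special case of VI (1): constrained minimization.
\text{If } A = \nabla f \text{ with } f \text{ convex and differentiable, then }
x^* \text{ solves } VI(C,\nabla f)
\;\Longleftrightarrow\;
\langle \nabla f(x^*),\, x - x^* \rangle \ge 0 \;\; \forall x \in C
\;\Longleftrightarrow\;
x^* \in \operatorname*{arg\,min}_{x \in C} f(x).
```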
Many algorithms (iterative methods) for solving the variational inequality (1) have been developed and well studied; see [5,6,7,8,9,10,11,12,13,14,15] and the references therein. One of the famous methods is the so-called extragradient method (EGM), which was developed by Korpelevich [16] (also by Antipin [17] independently) in the finite-dimensional Euclidean space for a monotone and Lipschitz continuous operator. The extragradient method has been modified in different ways and later was extended to infinite-dimensional spaces. Many of these extensions were well studied in [18,19,20,21,22,23] and the references therein.
One feature that renders the Korpelevich algorithm less attractive is that two projections onto the feasible set are required in every iteration, so a minimum distance problem must be solved twice per iteration. Therefore, the efficiency of the Korpelevich algorithm is affected, which also limits its applicability.
A remedy to this drawback was presented by Censor et al. [18,19,20], who introduced the subgradient-extragradient method (SEGM). Given $x_1 \in H$,
$$y_n = P_C(x_n - \lambda A(x_n)), \qquad T_n := \{w \in H : \langle x_n - \lambda A(x_n) - y_n,\, w - y_n\rangle \le 0\}, \qquad x_{n+1} = P_{T_n}(x_n - \lambda A(y_n)). \qquad (2)$$
In this algorithm, A is an L-Lipschitz-continuous and monotone mapping and $0 < \lambda < \frac{1}{L}$. One of the novelties of the proposed SEGM (2) is the replacement of the second projection onto the feasible set with a projection onto a half-space. Recently, weak and strong convergence results for SEGM (2) have been obtained in the literature; see [24,25] and the references therein.
Thong and Hieu [26] came up with the following inertial subgradient-extragradient method. Given $x_0, x_1 \in H$,
$$w_n = x_n + \theta_n(x_n - x_{n-1}), \qquad y_n = P_C(w_n - \lambda A(w_n)), \qquad T_n := \{w \in H : \langle w_n - \lambda A(w_n) - y_n,\, w - y_n\rangle \le 0\}, \qquad x_{n+1} = P_{T_n}(w_n - \lambda A(y_n)). \qquad (3)$$
The authors proved the weak convergence of the sequence $\{x_n\}$ generated by (3) to a solution of the variational inequality (1) for the case where A is a monotone and L-Lipschitz-continuous mapping. For some:
$$0 < \delta < \frac{1}{2} - 2\theta - \frac{1}{2}\theta^2,$$
the parameter λ is chosen to satisfy:
$$0 < \lambda L \le \frac{\frac{1}{2} - 2\theta - \frac{1}{2}\theta^2 - \delta}{\frac{1}{2} - \theta + \frac{1}{2}\theta^2}.$$
The sequence $\{\theta_n\}$ is non-decreasing with $0 \le \theta_n \le \theta < \sqrt{5} - 2$.
Malitsky [21] introduced the following projected reflected gradient method, which solves VI (1) when A is Lipschitz continuous and monotone: choose $x_1, x_0 \in C$ and iterate:
$$w_n = 2x_n - x_{n-1}, \qquad x_{n+1} = P_C(x_n - \lambda A w_n), \qquad (4)$$
where $\lambda \in \left(0, \frac{\sqrt{2}-1}{L}\right)$; weak convergence in real Hilbert spaces was obtained.
Recently, Boţ et al. [27] introduced Tseng's forward-backward-forward algorithm with relaxation parameters, stated in Algorithm 1, to solve VI (1).
In [28], the adaptive golden ratio method, stated in Algorithm 2, was proposed for solving VI (1).
Algorithm 1: Tseng’s forward-backward-forward algorithm with relaxation parameters.
Initialization: Choose $\rho_n \in (0,1]$ and parameters $\lambda_0 > 0$ and $0 < \mu < 1$. Let $x_1 \in H$ be arbitrary.
Iterative steps: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows:
Step 1. Compute:
$$y_n = P_C(x_n - \lambda_n A x_n).$$
If $x_n = y_n$ or $A y_n = 0$, then stop, and $y_n$ is a solution of $VI(C,A)$. Otherwise:
Step 2. Compute:
$$x_{n+1} = (1 - \rho_n)x_n + \rho_n\big(y_n + \lambda_n(A x_n - A y_n)\big).$$
Update:
$$\lambda_{n+1} = \begin{cases}\min\left\{\dfrac{\mu\|x_n - y_n\|}{\|A x_n - A y_n\|},\ \lambda_n\right\} & \text{if } A x_n - A y_n \ne 0,\\ \lambda_n & \text{otherwise}.\end{cases}$$
Set $n := n + 1$, and go to Step 1.
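To make Algorithm 1 concrete, the following Python sketch implements the iteration for a generic operator A and projector proj_C; the function name, default parameter values, and stopping tolerance are illustrative assumptions and not part of [27].

```python
import numpy as np

def tseng_fbf_relaxed(A, proj_C, x0, lam0=1.0, mu=0.5, rho=0.5,
                      max_iter=1000, tol=1e-8):
    # Sketch of Algorithm 1: Tseng's FBF with relaxation and adaptive steps.
    # A: operator H -> H, proj_C: projection onto the feasible set C.
    x, lam = x0.astype(float), lam0
    for _ in range(max_iter):
        Ax = A(x)
        y = proj_C(x - lam * Ax)                # Step 1: forward-backward step
        r = np.linalg.norm(x - y)
        Ay = A(y)
        if r <= tol or np.linalg.norm(Ay) <= tol:
            return y                            # x_n = y_n or A y_n = 0: stop
        # Step 2: relaxed forward correction
        x = (1 - rho) * x + rho * (y + lam * (Ax - Ay))
        # adaptive step-size update
        d = np.linalg.norm(Ax - Ay)
        if d > 0:
            lam = min(mu * r / d, lam)
    return x
```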
Algorithm 2: Adaptive golden ratio method.
Initialization: Choose $x_0, x_1 \in H$, $\lambda_0 > 0$, $\phi \in \left(0, \frac{\sqrt{5}+1}{2}\right]$, and $\bar{\lambda} > 0$. Set $\bar{x}_0 = x_1$, $\theta_0 = 1$, and $\rho = \frac{1}{\phi} + \frac{1}{\phi^2}$.
Iterative steps: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows:
Step 1. Compute:
$$\lambda_n = \min\left\{\rho\lambda_{n-1},\ \frac{\phi\theta_{n-1}}{4\lambda_{n-1}}\,\frac{\|x_n - x_{n-1}\|^2}{\|A x_n - A x_{n-1}\|^2},\ \bar{\lambda}\right\}.$$
Step 2. Compute:
$$\bar{x}_n = \frac{(\phi - 1)x_n + \bar{x}_{n-1}}{\phi}$$
and:
$$x_{n+1} = P_C(\bar{x}_n - \lambda_n A x_n).$$
Update:
$$\theta_n = \frac{\lambda_n}{\lambda_{n-1}}\,\phi.$$
Set $n := n + 1$, and go to Step 1.
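A compact Python sketch of Algorithm 2 is given below for reference; again, the function name, default parameters, and stopping test are illustrative assumptions rather than part of [28].

```python
import numpy as np

def adaptive_golden_ratio(A, proj_C, x0, x1, lam0=1.0, lam_bar=1.0,
                          phi=1.5, max_iter=1000, tol=1e-8):
    # Sketch of Algorithm 2 (adaptive golden ratio method).
    rho = 1.0 / phi + 1.0 / phi**2
    x_prev, x, x_bar = x0.astype(float), x1.astype(float), x1.astype(float)
    lam_prev, theta = lam0, 1.0
    A_prev = A(x_prev)
    for _ in range(max_iter):
        Ax = A(x)
        # Step 1: local step-size rule
        dA = np.linalg.norm(Ax - A_prev)
        if dA > 0:
            lam = min(rho * lam_prev,
                      phi * theta / (4.0 * lam_prev)
                      * np.linalg.norm(x - x_prev)**2 / dA**2,
                      lam_bar)
        else:
            lam = min(rho * lam_prev, lam_bar)
        # Step 2: convex combination and projection
        x_bar = ((phi - 1.0) * x + x_bar) / phi
        x_new = proj_C(x_bar - lam * Ax)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        theta = phi * lam / lam_prev                      # update theta_n
        x_prev, A_prev, x, lam_prev = x, Ax, x_new, lam
    return x
```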
Motivated by the recent works in [18,19,20,21,26,27,28], our aim in this paper is to introduce a reflected subgradient-extragradient method for solving variational inequalities and to obtain weak convergence in the case where the cost operator is Lipschitz continuous and pseudo-monotone in real Hilbert spaces, pseudo-monotonicity being understood in the sense of Karamardian [29]. Our method uses self-adaptive step sizes, and the convergence of the proposed algorithm is proven without any prior knowledge of the Lipschitz constant of the cost operator.
The outline of the paper is as follows. We start with recalling some basic definitions and results in Section 2. Our algorithm and weak convergence analysis are presented in Section 3. In Section 4, we give some numerical experiments to demonstrate the performances of our method compared with other related algorithms.

2. Preliminaries

In this section, we provide necessary definitions and results needed in the sequel.
Definition 1.
An operator $T : H \to H$ is said to be L-Lipschitz continuous with $L > 0$ if the following inequality is satisfied:
$$\|Tx - Ty\| \le L\|x - y\| \quad \forall x, y \in H.$$
Definition 2.
An operator $T : H \to H$ is said to be monotone if the following inequality is satisfied:
$$\langle Tx - Ty, x - y\rangle \ge 0 \quad \forall x, y \in H.$$
Definition 3.
An operator $T : H \to H$ is said to be pseudo-monotone if, for all $x, y \in H$, the first inequality below implies the second:
$$\langle Tx, y - x\rangle \ge 0 \ \Longrightarrow\ \langle Ty, y - x\rangle \ge 0.$$
Definition 4.
An operator $T : H \to H$ is said to be sequentially weakly continuous if, for each sequence $\{x_n\}$ that converges weakly to x, the sequence $\{Tx_n\}$ converges weakly to $Tx$.
Recall that for any given point $x \in H$, $P_C(x)$ denotes the unique nearest point to x in C; that is,
$$\|x - P_C(x)\| \le \|x - y\| \quad \forall y \in C.$$
The operator $P_C$ is known as the metric projection of H onto C; it is nonexpansive.
Lemma 1
([30]). Given $x \in H$ and $z \in C$, with C a nonempty, closed, and convex subset of a real Hilbert space H, then:
$$z = P_C(x) \iff \langle x - z, z - y\rangle \ge 0 \quad \forall y \in C.$$
Lemma 2
([30,31]). Given $x \in H$, where H is a real Hilbert space, and letting C be a closed and convex subset of H, the following inequalities hold:
1. $\|P_C(x) - P_C(y)\|^2 \le \langle P_C(x) - P_C(y), x - y\rangle \quad \forall y \in H$;
2. $\|P_C(x) - y\|^2 \le \|x - y\|^2 - \|x - P_C(x)\|^2 \quad \forall y \in C$.
Lemma 3
([32]). Given $x \in H$ and $v \in H$ with $v \ne 0$, let $T = \{z \in H : \langle v, z - x\rangle \le 0\}$. Then, for all $u \in H$, the projection $P_T(u)$ is given by:
$$P_T(u) = u - \max\left\{0, \frac{\langle v, u - x\rangle}{\|v\|^2}\right\}v.$$
In particular, if $u \notin T$, then:
$$P_T(u) = u - \frac{\langle v, u - x\rangle}{\|v\|^2}\,v.$$
The explicit formula provided in Lemma 3 is very important in computing the projection of any point onto a half-space.
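Since Lemma 3 is what makes the half-space projection cheap in practice, a minimal NumPy helper, an illustrative sketch with hypothetical names, reads as follows.

```python
import numpy as np

def project_halfspace(u, v, x):
    # Projection of u onto T = {z : <v, z - x> <= 0}, with v != 0 (Lemma 3).
    coef = max(0.0, float(np.dot(v, u - x))) / float(np.dot(v, v))
    return u - coef * v

# Example: project (2, 0) onto {z in R^2 : z_1 <= 1}  ->  (1, 0)
# project_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```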
Lemma 4
([33], Lemma 2.1). Let $A : C \to H$ be continuous and pseudo-monotone, where C is a nonempty, closed, and convex subset of a real Hilbert space H. Then, $x^*$ is a solution of $VI(C,A)$ if and only if:
$$\langle Ax, x - x^*\rangle \ge 0 \quad \forall x \in C.$$
Lemma 5
([34]). Let { x n } be a sequence in H and C a nonempty subset of H with the following conditions satisfied:
(i) 
every sequential weak cluster point of { x n } is in C;
(ii) 
$\lim_{n\to\infty}\|x_n - x\|$ exists for every $x \in C$.
Then, the sequence { x n } converges weakly to a point in C.
The following lemmas were given in [35].
Lemma 6.
Let h be a real-valued function on a real Hilbert space H, and define $K := \{x \in H : h(x) \le 0\}$. If h is Lipschitz continuous on H with modulus $\theta > 0$ and K is nonempty, then:
$$\mathrm{dist}(x, K) \ge \frac{1}{\theta}\max\{0, h(x)\} \quad \forall x \in H,$$
where $\mathrm{dist}(x, K)$ denotes the distance from x to K.
Lemma 7.
Let H be a real Hilbert space. The following statements are satisfied.
(i) 
For all $x, y \in H$, $\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$;
(ii) 
For all $x, y \in H$, $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$;
(iii) 
For all $x, y \in H$, $\|x + y\|^2 = 2\|x\|^2 + 2\|y\|^2 - \|x - y\|^2$.
Lemma 8
(Maingé [36]). Let $\{\delta_n\}$, $\{\varphi_n\}$, and $\{\theta_n\}$ be sequences in $[0, +\infty)$ satisfying the following:
$$\varphi_{n+1} \le \varphi_n + \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n \quad \forall n \ge 1, \qquad \sum_{n=1}^{+\infty}\delta_n < +\infty,$$
and there exists a real number θ with $0 \le \theta_n \le \theta < 1$ for all $n \in \mathbb{N}$. Then, the following hold:
(i) 
$\sum_{n=1}^{+\infty}[\varphi_n - \varphi_{n-1}]_+ < +\infty$, where $[t]_+ := \max\{t, 0\}$;
(ii) 
there exists $\varphi^* \in [0, +\infty)$ such that $\lim_{n\to\infty}\varphi_n = \varphi^*$.
In the work that follows, $x_n \to x$ as $n \to \infty$ denotes the strong convergence of $\{x_n\}_{n=1}^{\infty}$ to a point x, and $x_n \rightharpoonup x$ as $n \to \infty$ denotes the weak convergence of $\{x_n\}_{n=1}^{\infty}$ to a point x.

3. Main Results

We first provide the following conditions upon which the convergence analysis of our method is based and then present our method in Algorithm 3.
Condition 1.
The feasible set C is a nonempty, closed, and convex subset of H.
Condition 2.
The operator $A : H \to H$ associated with VI (1) is pseudo-monotone, sequentially weakly continuous, and Lipschitz continuous on a real Hilbert space H.
Condition 3.
The solution set of VI (1) is nonempty, that is, $VI(C,A) \ne \emptyset$.
In addition, we make the following parameter choices: $0 < \alpha \le \alpha_n \le \alpha_{n+1} < \frac{1}{2+\delta} =: \epsilon$, with $\delta > 0$.
Remark 1.
We point out here that the proposed Algorithm 3 differs from the method (4) in that the projection step in Algorithm 3 is $P_C(w_n - \lambda_n A w_n)$, while the projection step in (4) is $P_C(x_n - \lambda A w_n)$. Furthermore, A is assumed to be pseudo-monotone in our Algorithm 3, while A is assumed to be monotone in (4).
Algorithm 3: Adaptive projected reflected subgradient extragradient method.
Initialization: Given $\lambda_0 > 0$ and $\mu \in (0,1)$, let $x_0, x_1 \in H$ be arbitrary.
Iterative steps: Given the current iterate $x_n$, calculate $x_{n+1}$ as follows:
Step 1. Compute:
$$w_n = 2x_n - x_{n-1}, \qquad y_n = P_C(w_n - \lambda_n A w_n).$$
If $x_n = w_n = y_n = x_{n+1}$, then stop. Otherwise:
Step 2. Compute:
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n P_{T_n}(w_n),$$
where:
$$T_n := \{x \in H : h_n(x) \le 0\}$$
and
$$h_n(x) = \langle w_n - y_n - \lambda_n(A w_n - A y_n),\, x - y_n\rangle. \qquad (8)$$
Update:
$$\lambda_{n+1} = \begin{cases}\min\left\{\dfrac{\mu\|w_n - y_n\|}{\|A w_n - A y_n\|},\ \lambda_n\right\} & \text{if } A w_n - A y_n \ne 0,\\ \lambda_n & \text{otherwise}.\end{cases} \qquad (9)$$
Set n : = n + 1 , and go to Step 1.
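For readers who wish to experiment, the following Python sketch implements one possible realization of Algorithm 3, using the explicit half-space projection of Lemma 3 and the step-size rule (9). The constant choice of $\alpha_n$, the stopping test, and all defaults are illustrative assumptions, not prescriptions from the analysis.

```python
import numpy as np

def reflected_subgrad_extragrad(A, proj_C, x0, x1, lam0=1.0, mu=0.5,
                                alpha=0.3, max_iter=5000, tol=1e-10):
    # Sketch of Algorithm 3. A: cost operator, proj_C: projection onto C.
    # alpha plays the role of alpha_n (kept constant here for simplicity).
    x_prev, x, lam = x0.astype(float), x1.astype(float), lam0
    for _ in range(max_iter):
        w = 2.0 * x - x_prev                    # reflection step
        Aw = A(w)
        y = proj_C(w - lam * Aw)                # single projection onto C
        Ay = A(y)
        v = w - y - lam * (Aw - Ay)             # normal vector of the half-space T_n
        hw = float(np.dot(v, w - y))            # h_n(w_n)
        if np.dot(v, v) > 0:
            u = w - max(0.0, hw) / np.dot(v, v) * v   # u_n = P_{T_n}(w_n), Lemma 3
        else:
            u = w
        x_new = (1 - alpha) * x + alpha * u
        # adaptive step-size update (9)
        d = np.linalg.norm(Aw - Ay)
        if d > 0:
            lam = min(mu * np.linalg.norm(w - y) / d, lam)
        if np.linalg.norm(x_new - x) <= tol and np.linalg.norm(w - y) <= tol:
            return x_new
        x_prev, x = x, x_new
    return x
```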
The first step towards the convergence proof of Algorithm 3 is to show that the sequence $\{\lambda_n\}$ generated by (9) is well defined. This is done using arguments similar to those in [25].
Lemma 9.
The sequence { λ n } generated by (9) is a nonincreasing sequence and:
$$\lim_{n\to\infty}\lambda_n = \lambda \ge \min\left\{\lambda_0, \frac{\mu}{L}\right\}.$$
Proof. 
Clearly, by (9), $\{\lambda_n\}$ is nonincreasing, since $\lambda_{n+1} \le \lambda_n$ for all $n \in \mathbb{N}$. Next, using the fact that A is L-Lipschitz continuous, we have:
$$\|A w_n - A y_n\| \le L\|w_n - y_n\|.$$
Therefore, we obtain:
$$\frac{\mu\|w_n - y_n\|}{\|A w_n - A y_n\|} \ge \frac{\mu}{L} \quad \text{if } A w_n \ne A y_n,$$
which, together with (9), implies that:
$$\lambda_n \ge \min\left\{\lambda_0, \frac{\mu}{L}\right\}.$$
Therefore, the sequence $\{\lambda_n\}$ is nonincreasing and bounded from below, so $\lim_{n\to\infty}\lambda_n$ exists. □
Lemma 10.
Assume that Conditions 1–3 hold. Let $x^*$ be a solution of Problem (1), and let the function $h_n$ be defined by (8). Then, $h_n(x^*) \le 0$, and there exists $n_0 \in \mathbb{N}$ such that:
$$h_n(w_n) \ge \frac{1-\mu}{2}\|w_n - y_n\|^2 \quad \forall n \ge n_0.$$
In particular, if $w_n \ne y_n$, then $h_n(w_n) > 0$.
Proof. 
Using Lemma 4 and the fact that $x^*$ denotes a solution of Problem (1), we obtain the following:
$$\langle A y_n, x^* - y_n\rangle \le 0. \qquad (10)$$
It follows from (10), $y_n = P_C(w_n - \lambda_n A w_n)$, and Lemma 1 that:
$$h_n(x^*) = \langle w_n - y_n - \lambda_n(A w_n - A y_n),\, x^* - y_n\rangle = \langle w_n - \lambda_n A w_n - y_n,\, x^* - y_n\rangle + \lambda_n\langle A y_n, x^* - y_n\rangle \le 0.$$
Hence, the proof of the first claim of Lemma 10 is complete. Next, we proceed to the proof of the second claim. Clearly, from the definition of $\{\lambda_n\}$, the following inequality is true:
$$\|A w_n - A y_n\| \le \frac{\mu}{\lambda_{n+1}}\|w_n - y_n\| \quad \forall n. \qquad (11)$$
In fact, Inequality (11) is trivially satisfied if $A w_n = A y_n$. Otherwise, it follows from (9) that:
$$\lambda_{n+1} = \min\left\{\frac{\mu\|w_n - y_n\|}{\|A w_n - A y_n\|}, \lambda_n\right\} \le \frac{\mu\|w_n - y_n\|}{\|A w_n - A y_n\|}.$$
Thus,
$$\|A w_n - A y_n\| \le \frac{\mu}{\lambda_{n+1}}\|w_n - y_n\|.$$
Hence, we can conclude that Inequality (11) holds both when $A w_n = A y_n$ and when $A w_n \ne A y_n$.
Using (11), we obtain:
$$h_n(w_n) = \langle w_n - y_n - \lambda_n(A w_n - A y_n),\, w_n - y_n\rangle = \|w_n - y_n\|^2 - \lambda_n\langle A w_n - A y_n, w_n - y_n\rangle \ge \|w_n - y_n\|^2 - \lambda_n\|A w_n - A y_n\|\,\|w_n - y_n\| \ge \|w_n - y_n\|^2 - \frac{\mu\lambda_n}{\lambda_{n+1}}\|w_n - y_n\|^2 = \left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right)\|w_n - y_n\|^2.$$
Since $\lim_{n\to\infty}\left(1 - \frac{\mu\lambda_n}{\lambda_{n+1}}\right) = 1 - \mu > \frac{1-\mu}{2} > 0$, there exists $n_0 \in \mathbb{N}$ such that $1 - \frac{\mu\lambda_n}{\lambda_{n+1}} > \frac{1-\mu}{2}$ for all $n \ge n_0$. Therefore,
$$h_n(w_n) \ge \frac{1-\mu}{2}\|w_n - y_n\|^2 \quad \forall n \ge n_0. \qquad \square$$
Remark 2.
Lemma 10 implies that $w_n \notin T_n$ whenever $w_n \ne y_n$ and $n \ge n_0$. Based on Lemma 3, the projection $P_{T_n}(w_n)$ used in Step 2 can be written in the explicit form:
$$P_{T_n}(w_n) = w_n - \frac{\langle w_n - y_n - \lambda_n(A w_n - A y_n),\, w_n - y_n\rangle}{\|w_n - y_n - \lambda_n(A w_n - A y_n)\|^2}\,\big(w_n - y_n - \lambda_n(A w_n - A y_n)\big) \quad \forall n \ge n_0.$$
We present the following result using arguments similar to those of ([14], Theorem 3.1).
Lemma 11.
Let $\{w_n\}$ be a sequence generated by Algorithm 3, and assume that Conditions 1–3 hold. If there exists a subsequence $\{w_{n_k}\}$ of $\{w_n\}$ such that $\{w_{n_k}\}$ converges weakly to $z \in H$ and $\lim_{k\to\infty}\|w_{n_k} - y_{n_k}\| = 0$, then $z \in VI(C,A)$.
Proof. 
From $w_{n_k} \rightharpoonup z$, $\lim_{k\to\infty}\|w_{n_k} - y_{n_k}\| = 0$, and $\{y_n\} \subset C$, we have $z \in C$. Furthermore, we have:
$$y_{n_k} = P_C(w_{n_k} - \lambda_{n_k} A w_{n_k}).$$
Thus,
$$\langle w_{n_k} - \lambda_{n_k} A w_{n_k} - y_{n_k},\, x - y_{n_k}\rangle \le 0 \quad \text{for all } x \in C.$$
Equivalently, we have:
$$\frac{1}{\lambda_{n_k}}\langle w_{n_k} - y_{n_k}, x - y_{n_k}\rangle \le \langle A w_{n_k}, x - y_{n_k}\rangle \quad \text{for all } x \in C.$$
From this, we obtain:
$$\frac{1}{\lambda_{n_k}}\langle w_{n_k} - y_{n_k}, x - y_{n_k}\rangle + \langle A w_{n_k}, y_{n_k} - w_{n_k}\rangle \le \langle A w_{n_k}, x - w_{n_k}\rangle \quad \forall x \in C. \qquad (12)$$
Since $\{w_{n_k}\}$ is a bounded sequence and A is Lipschitz continuous on H, the sequence $\{A w_{n_k}\}$ is bounded, and $\lambda_n \ge \min\{\lambda_0, \frac{\mu}{L}\}$. Taking $k \to \infty$ in (12), since $\|w_{n_k} - y_{n_k}\| \to 0$, we get:
$$\liminf_{k\to\infty}\langle A w_{n_k}, x - w_{n_k}\rangle \ge 0. \qquad (13)$$
On the other hand, we have:
$$\langle A y_{n_k}, x - y_{n_k}\rangle = \langle A y_{n_k} - A w_{n_k}, x - w_{n_k}\rangle + \langle A w_{n_k}, x - w_{n_k}\rangle + \langle A y_{n_k}, w_{n_k} - y_{n_k}\rangle. \qquad (14)$$
Since $\lim_{k\to\infty}\|w_{n_k} - y_{n_k}\| = 0$ and A is Lipschitz continuous on H, we get:
$$\lim_{k\to\infty}\|A w_{n_k} - A y_{n_k}\| = 0,$$
which, together with (13) and (14), implies that:
$$\liminf_{k\to\infty}\langle A y_{n_k}, x - y_{n_k}\rangle \ge 0. \qquad (15)$$
Next, we show that z V I ( C , A ) .
To this end, a decreasing sequence $\{\epsilon_k\}$ of positive numbers tending to zero is chosen. For each k, we denote by $N_k$ the smallest positive integer satisfying the inequality:
$$\langle A y_{n_j}, x - y_{n_j}\rangle + \epsilon_k \ge 0 \quad \text{for all } j \ge N_k.$$
It should be noted that the existence of $N_k$ is guaranteed by (15). Clearly, the sequence $\{N_k\}$ is increasing, since $\{\epsilon_k\}$ is decreasing. Furthermore, for each k, since $\{y_{N_k}\} \subset C$, we can suppose $A y_{N_k} \ne 0$ (otherwise, $y_{N_k}$ is a solution). We get:
$$\langle A y_{N_k}, v_{N_k}\rangle = 1 \quad \text{for each } k, \qquad (16)$$
where:
$$v_{N_k} = \frac{A y_{N_k}}{\|A y_{N_k}\|^2}.$$
We can infer from (16) that, for each k:
$$\langle A y_{N_k}, x + \epsilon_k v_{N_k} - y_{N_k}\rangle \ge 0.$$
Using the pseudo-monotonicity of the operator A on H, we obtain:
$$\langle A(x + \epsilon_k v_{N_k}), x + \epsilon_k v_{N_k} - y_{N_k}\rangle \ge 0.$$
Hence, we have:
$$\langle A x, x - y_{N_k}\rangle \ge \langle A x - A(x + \epsilon_k v_{N_k}),\, x + \epsilon_k v_{N_k} - y_{N_k}\rangle - \epsilon_k\langle A x, v_{N_k}\rangle. \qquad (17)$$
Next, we show that:
$$\lim_{k\to\infty}\epsilon_k\|v_{N_k}\| = 0.$$
Using the fact that $w_{n_k} \rightharpoonup z$ and $\lim_{k\to\infty}\|w_{n_k} - y_{n_k}\| = 0$, we get $y_{n_k} \rightharpoonup z$ as $k \to \infty$. Furthermore, since A is sequentially weakly continuous on C, $\{A y_{n_k}\}$ converges weakly to $A z$. We can suppose $A z \ne 0$, since otherwise, z is a solution. Using the fact that the norm mapping is sequentially weakly lower semicontinuous, we obtain:
$$0 < \|A z\| \le \liminf_{k\to\infty}\|A y_{n_k}\|.$$
Since $\{y_{N_k}\} \subset \{y_{n_k}\}$ and $\epsilon_k \to 0$ as $k \to \infty$, we get:
$$0 \le \limsup_{k\to\infty}\epsilon_k\|v_{N_k}\| = \limsup_{k\to\infty}\frac{\epsilon_k}{\|A y_{N_k}\|} \le \frac{\limsup_{k\to\infty}\epsilon_k}{\liminf_{k\to\infty}\|A y_{n_k}\|} = 0.$$
This in fact means $\lim_{k\to\infty}\epsilon_k\|v_{N_k}\| = 0$.
Next, letting $k \to \infty$, the right-hand side of (17) tends to zero, since A is Lipschitz continuous, $\{y_{N_k}\}$ and $\{v_{N_k}\}$ are bounded, and:
$$\lim_{k\to\infty}\epsilon_k\|v_{N_k}\| = 0.$$
Hence, we obtain:
$$\liminf_{k\to\infty}\langle A x, x - y_{N_k}\rangle \ge 0.$$
Therefore, for all $x \in C$, we get:
$$\langle A x, x - z\rangle = \lim_{k\to\infty}\langle A x, x - y_{N_k}\rangle = \liminf_{k\to\infty}\langle A x, x - y_{N_k}\rangle \ge 0.$$
Finally, using Lemma 4, we have z V I ( C , A ) , which completes the proof. □
Remark 3.
Imposing the sequential weak continuity on A is not necessary when A is a monotone function; see [24].
Theorem 4.
Any sequence $\{x_n\}$ generated by Algorithm 3 converges weakly to an element of $VI(C,A)$, provided Conditions 1–3 are satisfied.
Proof. 
Claim 1. $\{x_n\}$ is a bounded sequence. Define $u_n := P_{T_n}(w_n)$, and let $p \in VI(C,A)$. Then, we have:
$$\|u_n - p\|^2 = \|P_{T_n}w_n - p\|^2 \le \|w_n - p\|^2 - \|u_n - w_n\|^2.$$
Furthermore,
$$\|u_n - p\|^2 = \|P_{T_n}w_n - p\|^2 \le \|w_n - p\|^2 - \|P_{T_n}w_n - w_n\|^2 = \|w_n - p\|^2 - \mathrm{dist}^2(w_n, T_n). \qquad (19)$$
This implies that:
$$\|x_{n+1} - p\|^2 = \|(1-\alpha_n)(x_n - p) + \alpha_n(u_n - p)\|^2 = (1-\alpha_n)\|x_n - p\|^2 + \alpha_n\|u_n - p\|^2 - \alpha_n(1-\alpha_n)\|x_n - u_n\|^2, \qquad (20)$$
which in turn implies that:
$$\|x_{n+1} - p\|^2 \le (1-\alpha_n)\|x_n - p\|^2 + \alpha_n\|w_n - p\|^2 - \alpha_n(1-\alpha_n)\|x_n - u_n\|^2. \qquad (21)$$
Note that:
$$x_{n+1} = (1-\alpha_n)x_n + \alpha_n u_n,$$
and this implies:
$$u_n - x_n = \frac{1}{\alpha_n}(x_{n+1} - x_n). \qquad (22)$$
Using (22) in (21), we obtain:
$$\|x_{n+1} - p\|^2 \le (1-\alpha_n)\|x_n - p\|^2 + \alpha_n\|w_n - p\|^2 - \frac{1-\alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2. \qquad (23)$$
Furthermore, by Lemma 7 (iii),
$$\|w_n - p\|^2 = \|2x_n - x_{n-1} - p\|^2 = \|(x_n - p) + (x_n - x_{n-1})\|^2 = 2\|x_n - p\|^2 - \|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2. \qquad (24)$$
Using (24) in (23):
$$\begin{aligned}\|x_{n+1} - p\|^2 &\le (1-\alpha_n)\|x_n - p\|^2 + 2\alpha_n\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 - \frac{1-\alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2\\ &= (1+\alpha_n)\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 - \frac{1-\alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2.\end{aligned} \qquad (25)$$
Define:
$$\Gamma_n := \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2, \quad n \ge 1.$$
Since $\alpha_n \le \alpha_{n+1}$, we have:
$$\begin{aligned}\Gamma_{n+1} - \Gamma_n &= \|x_{n+1} - p\|^2 - (1+\alpha_{n+1})\|x_n - p\|^2 + \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 - 2\alpha_n\|x_n - x_{n-1}\|^2\\ &\le \|x_{n+1} - p\|^2 - (1+\alpha_n)\|x_n - p\|^2 + \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 - 2\alpha_n\|x_n - x_{n-1}\|^2.\end{aligned} \qquad (26)$$
Now, using (25) in (26), one gets:
$$\Gamma_{n+1} - \Gamma_n \le -\frac{1-\alpha_n}{\alpha_n}\|x_{n+1} - x_n\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 = -\left(\frac{1-\alpha_n}{\alpha_n} - 2\alpha_{n+1}\right)\|x_{n+1} - x_n\|^2. \qquad (27)$$
Observe that:
$$\frac{1-\alpha_n}{\alpha_n} - 2\alpha_{n+1} = \frac{1}{\alpha_n} - 1 - 2\alpha_{n+1} \ge 2 + \delta - 1 - \frac{2}{2+\delta} = \delta + \frac{\delta}{2+\delta} \ge \delta. \qquad (28)$$
Using (28) in (27), we get:
$$\Gamma_{n+1} - \Gamma_n \le -\delta\|x_{n+1} - x_n\|^2. \qquad (29)$$
Hence, $\{\Gamma_n\}$ is non-increasing. In a similar way, we obtain:
$$\Gamma_n = \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\alpha_n\|x_n - x_{n-1}\|^2 \ge \|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2. \qquad (30)$$
Note that:
$$\alpha_n < \frac{1}{2+\delta} = \epsilon < 1.$$
From (30), we have:
$$\|x_n - p\|^2 \le \alpha_n\|x_{n-1} - p\|^2 + \Gamma_n \le \epsilon\|x_{n-1} - p\|^2 + \Gamma_1 \le \cdots \le \epsilon^n\|x_0 - p\|^2 + (1 + \cdots + \epsilon^{n-1})\Gamma_1 \le \epsilon^n\|x_0 - p\|^2 + \frac{\Gamma_1}{1-\epsilon}. \qquad (31)$$
Consequently,
$$\Gamma_{n+1} = \|x_{n+1} - p\|^2 - \alpha_{n+1}\|x_n - p\|^2 + 2\alpha_{n+1}\|x_{n+1} - x_n\|^2 \ge -\alpha_{n+1}\|x_n - p\|^2,$$
and this means from (31) that:
$$-\Gamma_{n+1} \le \alpha_{n+1}\|x_n - p\|^2 \le \epsilon\|x_n - p\|^2 \le \epsilon^{n+1}\|x_0 - p\|^2 + \frac{\epsilon\,\Gamma_1}{1-\epsilon}. \qquad (32)$$
By (29) and (32), we get:
$$\delta\sum_{n=1}^{k}\|x_{n+1} - x_n\|^2 \le \Gamma_1 - \Gamma_{k+1} \le \epsilon^{k+1}\|x_0 - p\|^2 + \frac{\Gamma_1}{1-\epsilon}. \qquad (33)$$
This then implies that:
$$\sum_{n=1}^{\infty}\|x_{n+1} - x_n\|^2 \le \frac{\Gamma_1}{\delta(1-\epsilon)} < +\infty. \qquad (34)$$
Hence, $\lim_{n\to\infty}\|x_{n+1} - x_n\| = 0$. We also have from Algorithm 3 that:
$$\|w_n - x_n\| = \|x_n - x_{n-1}\| \to 0, \quad n \to \infty. \qquad (35)$$
From (25), we obtain:
$$\|x_{n+1} - p\|^2 \le (1+\alpha_n)\|x_n - p\|^2 - \alpha_n\|x_{n-1} - p\|^2 + 2\|x_n - x_{n-1}\|^2. \qquad (36)$$
Using Lemma 8 in (36) (noting (34)), we get:
$$\lim_{n\to\infty}\|x_n - p\|^2 = l < \infty.$$
This implies that $\lim_{n\to\infty}\|x_n - p\|$ exists. Therefore, the sequence $\{x_n\}$ is bounded, and so are $\{w_n\}$ and $\{y_n\}$.
Claim 2. There exists $M > 1$ such that:
$$\left[\frac{1}{M}\,\frac{1-\mu}{2}\|w_n - y_n\|^2\right]^2 \le \|w_n - p\|^2 - \|u_n - p\|^2 \quad \forall n \ge n_0.$$
Since $\{x_n\}$, $\{y_n\}$, and $\{w_n\}$ are bounded and A is Lipschitz continuous on H, the sequences $\{A w_n\}$ and $\{A y_n\}$ are bounded. Hence, there exists $M > 1$ such that:
$$\|w_n - y_n - \lambda_n(A w_n - A y_n)\| \le M \quad \text{for all } n.$$
Therefore, for all $u, v \in H$, we obtain:
$$h_n(u) - h_n(v) = \langle w_n - y_n - \lambda_n(A w_n - A y_n),\, u - v\rangle \le \|w_n - y_n - \lambda_n(A w_n - A y_n)\|\,\|u - v\| \le M\|u - v\|.$$
Then, we have that $h_n(\cdot)$ is M-Lipschitz continuous on H. From Lemma 6, we get:
$$\mathrm{dist}(w_n, T_n) \ge \frac{1}{M}h_n(w_n),$$
from which, together with Lemma 10, we get:
$$\mathrm{dist}(w_n, T_n) \ge \frac{1}{M}\,\frac{1-\mu}{2}\|w_n - y_n\|^2 \quad \forall n \ge n_0. \qquad (38)$$
Combining (19) and (38), we obtain:
$$\|u_n - p\|^2 \le \|w_n - p\|^2 - \left[\frac{1}{M}\,\frac{1-\mu}{2}\|w_n - y_n\|^2\right]^2 \quad \forall n \ge n_0. \qquad (39)$$
This completes the proof of Claim 2.
Claim 3. The sequence $\{x_n\}$ converges weakly to an element of $VI(C,A)$. Indeed, since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup z \in H$. Since $\{w_n\}$ and $\{u_n\}$ are bounded, there exists $M^* > 0$ such that, for all $n \ge n_0$, we have from (39):
$$\left[\frac{1}{M}\,\frac{1-\mu}{2}\|w_n - y_n\|^2\right]^2 \le \|w_n - p\|^2 - \|u_n - p\|^2 = \big(\|w_n - p\| - \|u_n - p\|\big)\big(\|w_n - p\| + \|u_n - p\|\big) \le \|w_n - u_n\|\big(\|w_n - p\| + \|u_n - p\|\big) \le M^*\|w_n - u_n\|. \qquad (40)$$
Moreover, $u_n - x_n = \frac{1}{\alpha_n}(x_{n+1} - x_n) \to 0$ (recall that $\alpha_n \ge \alpha > 0$), which, together with (35), gives $\|w_n - u_n\| \le \|w_n - x_n\| + \|x_n - u_n\| \to 0$. From (40), we therefore have:
$$\lim_{n\to\infty}\|w_n - y_n\| = 0. \qquad (41)$$
Consequently,
$$\|x_n - y_n\| \le \|w_n - y_n\| + \|w_n - x_n\| \to 0, \quad n \to \infty.$$
Furthermore, since $\|w_n - x_n\| \to 0$, the subsequence $\{w_{n_k}\}$ of $\{w_n\}$ also converges weakly to z. This implies, from Lemma 11 and (41), that $z \in VI(C,A)$. Therefore, we proved that if $p \in VI(C,A)$, then $\lim_{n\to\infty}\|x_n - p\|$ exists, and each sequential weak cluster point of the sequence $\{x_n\}$ is in $VI(C,A)$. By Lemma 5, the sequence $\{x_n\}$ converges weakly to an element of $VI(C,A)$. □

4. Numerical Illustrations

In this section, we present numerical experiments with our proposed Algorithm 3 on several examples, some of which are real-life applications. For a broader overview of the efficiency and accuracy of the proposed algorithm, we investigate and compare its performance with Algorithm 1 proposed by Boţ et al. in [27] (Bot Alg.), Algorithm 2 proposed by Malitsky in [28] (Malitsky Alg.), the algorithm proposed by Shehu and Iyiola in [37] (Shehu Alg.), the subgradient-extragradient method (SEM) (2), and the extragradient method (EGM) of [16].
Example 1
(Tomography reconstruction model). In this example, we consider the linear inverse problem:
$$Bx = \hat{b}, \qquad (42)$$
where $x \in \mathbb{R}^k$ is the unknown image, $B \in \mathbb{R}^{m\times k}$ is the projection matrix, and $\hat{b} \in \mathbb{R}^m$ is the given sinogram (set of projections). The aim is then to recover a slice image of an object from a sinogram. To be realistic, we consider a noisy right-hand side $b = \hat{b} + \epsilon$, where $\epsilon \in \mathbb{R}^m$. Problem (42) can be presented as a convex feasibility problem (CFP) with the sets (hyperplanes) $C_i = \{x : \langle a_i, x\rangle = b_i\}$. Since, in practice, the projection matrix B is often rank-deficient, $b \notin \mathrm{range}(B)$; thus, we may assume that the CFP has no solution (it is inconsistent), and we therefore consider the least squares model $\min_x \sum_{i=1}^m \mathrm{dist}(x, C_i)^2$.
Recall that the projection onto the hyperplane $C_i$ has the closed formula $P_{C_i}x = x - \frac{\langle a_i, x\rangle - b_i}{\|a_i\|^2}a_i$. Therefore, the evaluation of $Tx$ reduces to a matrix-vector multiplication, and this can be realized very efficiently, where $T := \frac{1}{m}(P_{C_1} + \cdots + P_{C_m})$ and $A := I - T$. Note that our approach only exploits feasibility constraints, which is definitely not a state-of-the-art model for tomography reconstruction. More involved methods would solve this problem with the use of some regularization techniques, but we keep this simple model for illustration purposes only.
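In code, the operator $A = I - T$ can be applied without ever forming the individual projections; the sketch below (illustrative, assuming dense NumPy arrays B and b) uses one product with B and one with B^T per evaluation.

```python
import numpy as np

def make_operator(B, b):
    # A = I - T with T = (1/m) * sum_i P_{C_i}, where C_i = {x : <a_i, x> = b_i}.
    m = B.shape[0]
    row_norms_sq = np.sum(B**2, axis=1)            # ||a_i||^2 for each row a_i of B
    def A_op(x):
        residual = (B @ x - b) / row_norms_sq      # (<a_i, x> - b_i) / ||a_i||^2
        Tx = x - (B.T @ residual) / m              # average of the hyperplane projections
        return x - Tx                              # A x = x - T x
    return A_op

# usage sketch: A_op = make_operator(B, b); value = A_op(x)
```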
As a particular problem, we wish to reconstruct the $128 \times 128$ Shepp–Logan phantom image (thus, $x \in \mathbb{R}^k$ with $k = 128^2$) from far fewer measurements, $m = 2^7$. We generate the matrix $B \in \mathbb{R}^{m\times k}$ randomly and define $b = Bx + \epsilon$, where $\epsilon \in \mathbb{R}^m$ is a random vector whose entries are drawn from $N(0,1)$.
Using this example, we compare our proposed Algorithm 3, Algorithm 1 proposed by Boţ et al. in [27] (Bot Alg.), and Algorithm 2 proposed by Malitsky in [28] (Malitsky Alg.) using the residual $e_n^2 := \|x_{n+1} - x_n\|^2 \le \epsilon$, where $\epsilon = 10^{-4}$, as our stopping criterion. For our proposed algorithm, the starting point $x_0$ is chosen randomly with $x_1 = (0,\ldots,0)$ and $\alpha_n = 0.002$; for Boţ et al., the starting point is $x_0 = (0,\ldots,0)$ and $\rho_n = 0.2$; while for Malitsky, the starting point is $x_0 = \bar{x}_0 = (0,\ldots,0)$ with $x_1$ chosen randomly, $\theta_0 = 1$, and $\bar{\lambda} = 1$. All results are reported in Table 1 and Table 2 and Figures 1–6.
Example 2
(Equilibrium-optimization model). Here, we consider the Nash–Cournot oligopolistic equilibrium model in electricity markets. Given m companies, where the i-th company possesses $I_i$ generating units, denote by x the power vector; that is, each of its entries $x_j$ corresponds to the power generated by unit j. Assume that the price $p_i$ is an affine decreasing function of $s := \sum_{j=1}^{N}x_j$, where N is the number of all generating units; therefore, $p_i(s) := \alpha - \beta_i s$. We can now present the profit of company i as $f_i(x) := p_i(s)\sum_{j\in I_i}x_j - \sum_{j\in I_i}c_j(x_j)$, where $c_j(x_j)$ is the cost of generating $x_j$ by generating unit j. Denote by $K_i$ the strategy set of the i-th company. Clearly, $\sum_{j\in I_i}x_j \in K_i$ for each i, and the overall strategy set is $C := K_1 \times K_2 \times \cdots \times K_m$.
The Nash equilibrium concept with regards to the above data is that each company wishes to maximize its profit by choosing the corresponding production level under the presumption that the production of the other companies is a parametric input.
Recall that $x^* \in C = K_1\times K_2\times\cdots\times K_m$ is an equilibrium point if:
$$f_i(x^*) \ge f_i(x^*[x_i]) \quad \forall x_i \in K_i,\ i = 1, 2, \ldots, m,$$
where the vector $x^*[x_i]$ stands for the vector obtained from $x^*$ by replacing $x_i^*$ with $x_i$. Define:
$$f(x,y) := \psi(x,y) - \psi(x,x)$$
with:
$$\psi(x,y) := -\sum_{i=1}^{m}f_i(x[y_i]).$$
Therefore, finding a Nash equilibrium point is formulated as finding:
$$x^* \in C : f(x^*, x) \ge 0 \quad \forall x \in C. \qquad (43)$$
Suppose that, for every j, the production cost $c_j$ and the environmental fee g are increasing convex functions. This convexity assumption implies that (43) is equivalent to (see [38]):
$$x \in C : \langle Bx - a + \nabla\varphi(x),\, y - x\rangle \ge 0 \quad \forall y \in C,$$
where:
$$a := (\alpha, \alpha, \ldots, \alpha)^T, \qquad B_1 = \begin{pmatrix}\beta_1 & 0 & \cdots & 0\\ 0 & \beta_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \beta_m\end{pmatrix}, \qquad B = \begin{pmatrix}0 & \beta_1 & \cdots & \beta_1\\ \beta_2 & 0 & \cdots & \beta_2\\ \vdots & & \ddots & \vdots\\ \beta_m & \beta_m & \cdots & 0\end{pmatrix}, \qquad \varphi(x) := x^TB_1x + \sum_{j=1}^{N}c_j(x_j).$$
Note that $c_j$ is differentiable and convex for every j.
Our proposed scheme is tested with the following cost function:
$$c_j(x_j) = \frac{1}{2}x_j^TDx_j + d^Tx_j.$$
The parameters $\beta_j$ for $j = 1,\ldots,m$, the matrix D, and the vector d were generated randomly in the intervals $(0,1]$, $[1,40]$, and $[1,40]$, respectively.
The numerical experiments involve initial points $x_0$ and $x_1$ generated randomly in $[1,40]$ and $m = 10$. The stopping rule of the algorithms is chosen as $e_n^2 := \|x_{n+1} - x_n\|^2 \le \epsilon$, where $\epsilon = 10^{-4}$. We assume that each company has the same lower production bound 1 and upper production bound 40, that is,
$$K_i := \{x_i : 1 \le x_i \le 40\}, \quad i = 1, \ldots, 10.$$
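Since C is a product of intervals, its projection is a componentwise clip; a one-line NumPy sketch, assuming the box bounds of this example, is:

```python
import numpy as np

def proj_C(x, lower=1.0, upper=40.0):
    # Projection onto C = K_1 x ... x K_m with K_i = {x_i : 1 <= x_i <= 40}.
    return np.clip(x, lower, upper)
```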
We compare our proposed Algorithm 3 with Algorithm 1 proposed by Boţ et al. in [27] (Bot Alg.), Algorithm 2 proposed by Malitsky in [28] (Malitsky Alg.), and Algorithm 3.2 proposed by Shehu and Iyiola in [37] (Shehu Alg.). For our proposed algorithm, we choose $\alpha_n = 0.49$; for Boţ et al., $\rho_n = 0.02$; for Malitsky, the starting point $x_0 = \bar{x}_0$, $\theta_0 = 1$, and $\bar{\lambda} = 1$; while for Shehu and Iyiola, $\rho = 1$ and $\sigma = 0.5$. All results are reported in Table 3 and Table 4 and Figures 7–12.
Example 3.
This example is taken from [39]. First, we randomly generate matrices B, S, and D in $\mathbb{R}^{m\times m}$, where S is skew-symmetric and D is a positive definite diagonal matrix. Then, we define the operator A by $A(x) := Mx + q$ with $M = BB^T + S + D$. The skew-symmetry of S implies that M is not symmetric, so the operator does not arise from an optimization problem, and the positive definiteness of D implies the uniqueness of the solution of the corresponding variational inequality problem.
We choose here $q = 0$. We then choose a random matrix $\tilde{B} \in \mathbb{R}^{k\times m}$ and a vector $b \in \mathbb{R}^k$ with nonnegative entries and define the VI feasible set C by $\tilde{B}x \le b$. Clearly, the origin is in C, and it is the unique solution of the corresponding variational inequality. Projections onto C are computed via the MATLAB routine fmincon and are thus costly. We test the algorithms' performance (number of iterations and CPU time in seconds) for different dimensions m and numbers of inequality constraints k.
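A possible way to generate such a test instance in Python is sketched below; the distributions used for the random entries are illustrative assumptions (the paper only states that the data are generated randomly), and the constraint matrix is called B_tilde to match the notation above.

```python
import numpy as np

def make_example3(m, k, rng=np.random.default_rng(0)):
    # Random instance of Example 3 (illustrative sketch).
    B = rng.standard_normal((m, m))
    S = rng.standard_normal((m, m))
    S = 0.5 * (S - S.T)                    # skew-symmetric part
    D = np.diag(rng.uniform(1.0, 2.0, m))  # positive definite diagonal matrix
    M = B @ B.T + S + D
    A_op = lambda x: M @ x                 # A(x) = M x + q with q = 0
    B_tilde = rng.standard_normal((k, m))  # constraint matrix (B_tilde x <= b)
    b = rng.uniform(0.0, 1.0, k)           # nonnegative right-hand side, so 0 is feasible
    return A_op, M, B_tilde, b
```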
For this example, the stopping criterion is chosen as $e_n^2 := \|x_n\|^2 \le \epsilon$, where $\epsilon = 0.002$. We experiment with different values of k (30 and 50) and m (10, 20, 30, and 40). We randomly generate the vector b and the matrices $\tilde{B}$, B, S, and D, and we choose $\lambda_0$ and $\mu$ appropriately in Algorithm 3. In (2), $L = \|M\|$ is used. Algorithm 3 proposed in this paper is compared with the subgradient-extragradient method (SEM) (2). For our proposed algorithm, we choose $\mu = 0.999$, $\lambda_0 = 0.5$, and $\alpha_n = 0.499$, while for SEM, $\lambda = \frac{0.125}{4L}$. All results are reported in Table 5 and Figures 13–22.
Example 4.
Consider VI(1) with:
$$A(x) = \begin{pmatrix}0.5x_1x_2^2 - 2x_2 - 10^7\\ -4x_1 + 0.1x_2^2 - 10^7\end{pmatrix}$$
and:
$$C := \{x \in \mathbb{R}^2 : (x_1 - 2)^2 + (x_2 - 2)^2 \le 1\}.$$
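A small Python sketch of this operator and of the closed-form projection onto the ball C is given below; it is only an illustration of the test problem (with the operator as reconstructed above), not the authors' implementation.

```python
import numpy as np

def A(x):
    # Operator of Example 4 (as written above).
    return np.array([0.5 * x[0] * x[1]**2 - 2.0 * x[1] - 1e7,
                     -4.0 * x[0] + 0.1 * x[1]**2 - 1e7])

def proj_C(x):
    # Projection onto the ball C = {x in R^2 : ||x - (2, 2)|| <= 1}.
    center, radius = np.array([2.0, 2.0]), 1.0
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + radius * d / nd
```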
Then, A is not monotone on C, but it is pseudo-monotone. Furthermore, VI (1) has the unique solution $x^* = (2.707, 2.707)^T$. A comparison of our method with the extragradient method [16] is made. We denote the step-size parameter in EGM [16] by $\lambda_n^*$ to distinguish it from $\lambda_n$ in our proposed Algorithm 3. We terminate the iterations if:
$$e_n^2 := \|x_n - x^*\|^2 \le \varepsilon$$
with $\varepsilon = 10^{-3}$. Here, our proposed Algorithm 3 is compared with the extragradient method (EGM) of [16]. For our proposed algorithm, we choose $x_0 = (1, 2)^T$ and $\alpha_n = 0.499$, while for EGM, $\lambda_n^* = 0.00000001$. All results are reported in Tables 6–8 and Figures 23–32.
Example 5.
Consider $H := L^2([0,1])$ and $C := \{x \in H : \|x\| \le 2\}$. Define $A : L^2([0,1]) \to L^2([0,1])$ by:
$$A(u)(t) := e^{-\|u\|^2}\int_0^t u(s)\,ds, \quad u \in L^2([0,1]),\ t \in [0,1].$$
It can be shown that A is pseudo-monotone but not monotone on H, Lipschitz continuous with constant $L = \frac{2}{e} + \frac{1}{2\pi}$, and sequentially weakly continuous on H (see Example 2.1 of [27]).
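Numerically, the operator and the feasible set can be handled on a uniform grid of [0,1]; the following sketch uses a simple left Riemann sum for the integral and is an illustrative discretization, not the one used for the reported results.

```python
import numpy as np

def make_example5(n_grid=200):
    # Discretization of A(u)(t) = exp(-||u||^2) * int_0^t u(s) ds on [0, 1].
    t = np.linspace(0.0, 1.0, n_grid)
    dt = t[1] - t[0]
    def A_op(u):
        norm_sq = np.sum(u**2) * dt                                   # ||u||^2 in L^2([0,1])
        integral = np.concatenate(([0.0], np.cumsum(u[:-1]) * dt))    # int_0^t u(s) ds
        return np.exp(-norm_sq) * integral
    def proj_C(u):
        # projection onto C = {u : ||u|| <= 2} in L^2([0,1])
        norm = np.sqrt(np.sum(u**2) * dt)
        return u if norm <= 2.0 else 2.0 * u / norm
    return t, A_op, proj_C
```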
Our proposed Algorithm 3 is compared with Algorithm 1 proposed by Boţ et al. in [27] (Bot Alg.), Algorithm 2 proposed by Malitsky in [28] (Malitsky Alg.), and Algorithm 3.2 proposed by Shehu and Iyiola in [37] (Shehu Alg.). For our proposed algorithm, we choose $x_0 = \frac{1}{9}e^t\sin(t)$ and $\alpha_n = 0.49$; for Boţ et al., $\rho_n = 0.02$; for Malitsky, $\theta_0 = 1$ and $\phi = 1.1$; while for Shehu and Iyiola, $\rho = 1$, $\sigma = 0.005$, and $\gamma = 0.9$. All algorithms are terminated using the stopping criterion $e_n^2 := \|w_n - y_n\|^2 \le \varepsilon$ with $\varepsilon = 10^{-4}$. All results are reported in Table 9 and Table 10 and Figures 33–36.

5. Discussion

In this paper, we established the weak convergence of a reflected subgradient-extragradient method for variational inequalities in real Hilbert spaces. We provided an extensive numerical illustration and comparison with related works for several applications, such as tomography reconstruction and Nash–Cournot oligopolistic equilibrium models. Our result is one of the few results on the subgradient-extragradient method with a reflected step in the literature. Our next project is to extend our results to bilevel variational inequalities.

Author Contributions

Writing—review and editing, A.G., O.S.I., L.A. and Y.S. All authors contributed equally to this work which included mathematical theory and analysis and code implementation. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The study does not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
VI  variational inequality problem
EGM  extragradient method
SEGM  subgradient-extragradient method

References

  1. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142. [Google Scholar]
  2. Fichera, G. Problemi elastostatici con vincoli unilaterali: Il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei Mem. Cl. Sci. Fis. Mat. Nat. Sez. I VIII Ser. 1964, 7, 91–140. [Google Scholar]
  3. Stampacchia, G. Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. 1964, 258, 4413–4416. [Google Scholar]
  4. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications; Academic: New York, NY, USA, 1980. [Google Scholar]
  5. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems. Volume I; Springer Series in Operations Research; Springer: New York, NY, USA, 2003. [Google Scholar]
  6. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequalities and Complementarity Problems. Volume II; Springer Series in Operations Research; Springer: New York, NY, USA, 2003. [Google Scholar]
  7. Konnov, I.V. Combined Relaxation Methods for Variational Inequalities; Springer: Berlin/Heidelberg, Germany, 2001. [Google Scholar]
  8. Iusem, A.N. An iterative algorithm for the variational inequality problem. Comput. Appl. Math. 1994, 13, 103–114. [Google Scholar]
  9. Iusem, A.N.; Svaiter, B.F. A variant of Korpelevich’s method for variational inequalities with a new search strategy. Optimization 1997, 42, 309–321. [Google Scholar] [CrossRef]
  10. Kanzow, C.; Shehu, Y. Strong convergence of a double projection-type method for monotone variational inequalities in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 51. [Google Scholar] [CrossRef]
  11. Khobotov, E.N. Modifications of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 1987, 27, 120–127. [Google Scholar] [CrossRef]
  12. Maingé, P.E. A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 2008, 47, 1499–1515. [Google Scholar] [CrossRef]
  13. Marcotte, P. Application of Khobotov’s algorithm to variational inequalities and network equilibrium problems. Inf. Syst. Oper. Res. 1991, 29, 258–270. [Google Scholar] [CrossRef]
  14. Vuong, P.T. On the weak convergence of the extragradient method for solving pseudo-monotone variational inequalities. J. Optim. Theory Appl. 2018, 176, 399–409. [Google Scholar] [CrossRef] [Green Version]
  15. Shehu, Y.; Iyiola, O.S. Weak convergence for variational inequalities with inertial-type method. Appl. Anal. 2020, 2020, 1–25. [Google Scholar] [CrossRef]
  16. Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. Matematicheskie Metody 1976, 12, 747–756. [Google Scholar]
  17. Antipin, A.S. On a method for convex programs using a symmetrical modification of the Lagrange function. Ekon. Matematicheskie Metody 1976, 12, 1164–1173. [Google Scholar]
  18. Censor, Y.; Gibali, A.; Reich, S. The subgradient-extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar] [CrossRef] [Green Version]
  19. Censor, Y.; Gibali, A.; Reich, S. Strong convergence of subgradient-extragradient methods for the variational inequality problem in Hilbert space. Optim. Methods Softw. 2011, 26, 827–845. [Google Scholar] [CrossRef]
  20. Censor, Y.; Gibali, A.; Reich, S. Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 2011, 61, 1119–1132. [Google Scholar] [CrossRef]
  21. Malitsky, Y.V. Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 2015, 25, 502–520. [Google Scholar] [CrossRef] [Green Version]
  22. Malitsky, Y.V.; Semenov, V.V. A hybrid method without extrapolation step for solving variational inequality problems. J. Glob. Optim. 2015, 61, 193–202. [Google Scholar] [CrossRef] [Green Version]
  23. Solodov, M.V.; Svaiter, B.F. A new projection method for variational inequality problems. SIAM J. Control Optim. 1999, 37, 765–776. [Google Scholar] [CrossRef]
  24. Denisov, S.V.; Semenov, V.V.; Chabak, L.M. Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern. Syst. Anal. 2015, 51, 757–765. [Google Scholar] [CrossRef]
  25. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 2019, 80, 741–752. [Google Scholar] [CrossRef]
  26. Thong, D.V.; Hieu, D.V. Modified subgradient-extragradient method for variational inequality problems. Numer. Algorithms 2018, 79, 597–610. [Google Scholar] [CrossRef]
  27. Boţ, R.I.; Csetnek, E.R.; Vuong, P.T. The forward-backward-forward method from discrete and continuous perspective for pseudomonotone variational inequalities in Hilbert spaces. Eur. J. Oper. Res. 2020, 287, 49–60. [Google Scholar] [CrossRef]
  28. Malitsky, Y.V. Golden ratio algorithms for variational inequalities. Math. Program. 2020, 184, 383–410. [Google Scholar] [CrossRef] [Green Version]
  29. Karamardian, S. Complementarity problems over cones with monotone and pseudo-monotone maps. J. Optim. Theory Appl. 1976, 18, 445–454. [Google Scholar] [CrossRef]
  30. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
  31. Kopecká, E.; Reich, S. A note on alternating projections in Hilbert spaces. J. Fixed Point Theory Appl. 2012, 12, 41–47. [Google Scholar] [CrossRef] [Green Version]
  32. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics; Springer: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  33. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295. [Google Scholar] [CrossRef]
  34. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef] [Green Version]
  35. He, Y.R. A new double projection algorithm for variational inequalities. J. Comput. Appl. Math. 2006, 185, 166–173. [Google Scholar] [CrossRef] [Green Version]
  36. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef] [Green Version]
  37. Shehu, Y.; Iyiola, O.S. Iterative algorithms for solving fixed point problems and variational inequalities with uniformly continuous monotone operators. Numer. Algorithms 2018, 79, 529–553. [Google Scholar] [CrossRef]
  38. Yen, L.H.; Muu, L.D.; Huyen, N.T.T. An algorithm for a class of split feasibility problems: Application to a model in electricity production. Math. Methods Oper. Res. 2016, 84, 549–565. [Google Scholar] [CrossRef]
  39. Harker, P.T.; Pang, J.-S. A damped-Newton method for the linear complementarity problem. In Computational Solution of Nonlinear Systems of Equations, Lectures in Applied Mathematics; Allgower, G., Georg, K., Eds.; AMS: Providence, RI, USA, 1990; Volume 26, pp. 265–284. [Google Scholar]
Figure 1. Example 1: $\mu = \phi = 0.9$ and $\lambda_0 = 0.1$. Alg., Algorithm.
Figure 2. Example 1: $\mu = \phi = 0.9$ and $\lambda_0 = 1$.
Figure 3. Example 1: $\mu = \phi = 0.9$ and $\lambda_0 = 5$.
Figure 4. Example 1: $\mu = \phi = 0.9$ and $\lambda_0 = 10$.
Figure 5. Example 1: $\mu = \phi = 0.9$.
Figure 6. Example 1: $\lambda_0 = 1$.
Figure 7. Example 2: $\mu = \phi = \gamma = 0.1$ and $\lambda_0 = 1$.
Figure 8. Example 2: $\mu = \phi = \gamma = 0.3$ and $\lambda_0 = 1$.
Figure 9. Example 2: $\mu = \phi = \gamma = 0.7$ and $\lambda_0 = 1$.
Figure 10. Example 2: $\mu = \phi = \gamma = 0.999$ and $\lambda_0 = 1$.
Figure 11. Example 2: $\lambda = 1$.
Figure 12. Example 2: $\mu = 0.999$.
Figure 13. Example 3: $k = 30$ and $m = 10$.
Figure 14. Example 3: $k = 30$ and $m = 20$.
Figure 15. Example 3: $k = 30$ and $m = 30$.
Figure 16. Example 3: $k = 30$ and $m = 40$.
Figure 17. Example 3: $k = 50$ and $m = 10$.
Figure 18. Example 3: $k = 50$ and $m = 20$.
Figure 19. Example 3: $k = 50$ and $m = 30$.
Figure 20. Example 3: $k = 50$ and $m = 40$.
Figure 21. Example 3: $k = 30$.
Figure 22. Example 3: $k = 50$.
Figure 23. Example 4: $k = 50$ and $m = 10$.
Figure 24. Example 4: $k = 50$ and $m = 20$.
Figure 25. Example 4: $k = 50$ and $m = 30$.
Figure 26. Example 4: $k = 50$ and $m = 40$.
Figure 27. Example 4: $\lambda_0 = 1$ and $\mu = 0.1$.
Figure 28. Example 4: $\lambda_0 = 5$ and $\mu = 0.999$.
Figure 29. Example 4: $\lambda_0 = 5$ and $x_1 = (2, 1)^T$.
Figure 30. Example 4: $\mu = 0.999$ and $x_1 = (2, 1)^T$.
Figure 31. Example 5: $\lambda_0 = 1$, $\mu = 0.9$, and $x_1 = \frac{1}{12}(t^2 - 2t + 1)$.
Figure 32. Example 5: $\lambda_0 = 1$, $\mu = 0.9$, and $x_1 = \frac{1}{9}e^t\sin(t)$.
Figure 33. Example 5: $\lambda_0 = 1$, $\mu = 0.9$, and $x_1 = \frac{1}{21}t^2\cos(t)$.
Figure 34. Example 5: $\lambda_0 = 1$, $\mu = 0.9$, and $x_1 = \frac{1}{7}(3t - 2)e^t$.
Figure 35. Example 5: $\mu = 0.9$.
Figure 36. Example 5: $\lambda_0 = 2$.
Table 1. Example 1 comparison: proposed Algorithm 3, Bot Algorithm 1, and Malitsky Algorithm 2 with $\mu = \phi = 0.9$ (No. of Iter. / CPU Time).

| $\lambda_0$ | Proposed Algorithm 3 | Bot Algorithm 1 | Malitsky Algorithm 2 |
| 0.1 | 2 / 5.8318 × 10^−3 | 98 / 0.2590 | 71 / 0.1533 |
| 1 | 2 / 4.3773 × 10^−3 | 61 / 0.1643 | 39 / 0.0826 |
| 5 | 12 / 4.0209 × 10^−2 | 38 / 0.1012 | 22 / 0.0491 |
| 10 | 6 / 1.8319 × 10^−2 | 207 / 0.5764 | 72 / 0.1359 |
Table 2. Example 1: proposed Algorithm 3 with $\lambda_0 = 1$ for different $\mu$ values.

| | $\mu = 0.1$ | $\mu = 0.3$ | $\mu = 0.7$ | $\mu = 0.9$ |
| No. of Iter. | 6 | 6 | 3 | 2 |
| CPU Time | 2.6506 × 10^−2 | 2.7113 × 10^−2 | 7.1002 × 10^−3 | 4.7982 × 10^−3 |
Table 3. Example 2 comparison: proposed Algorithm 3, Bot Algorithm 1, Malitsky Algorithm 2, and Shehu Alg. [37] with $\lambda_0 = 1$ and $\mu = \phi = \gamma$ (No. of Iter. / CPU Time).

| $\mu$ | Proposed Algorithm 3 | Bot Algorithm 1 | Malitsky Algorithm 2 | Shehu Alg. [37] |
| 0.1 | 22 / 8.7749 × 10^−3 | 582 / 0.1372 | 33 / 8.4833 × 10^−3 | 15,680 / 8.7073 |
| 0.3 | 24 / 8.8605 × 10^−3 | 594 / 0.1440 | 47 / 9.3949 × 10^−3 | 13,047 / 7.2304 |
| 0.7 | 36 / 1.8575 × 10^−2 | 619 / 0.1914 | 81 / 1.9522 × 10^−2 | 14,736 / 9.8807 |
| 0.999 | 47 / 3.9262 × 10^−2 | 581 / 0.3016 | 1809 / 0.7438 | 7048 / 5.4446 |
Table 4. Example 2: proposed Algorithm 3 with $\mu = 0.999$ for different $\lambda_0$ values.

| | $\lambda_0 = 0.1$ | $\lambda_0 = 1$ | $\lambda_0 = 5$ | $\lambda_0 = 10$ |
| No. of Iter. | 45 | 52 | 44 | 50 |
| CPU Time | 2.6624 × 10^−2 | 2.1446 × 10^−2 | 1.7123 × 10^−2 | 1.9696 × 10^−2 |
Table 5. Comparison of proposed Algorithm 3 and the subgradient-extragradient method (SEM) (2) for Example 3 (No. of Iter. / CPU Time).

k = 30:
| | m = 10 | m = 20 | m = 30 | m = 40 |
| Proposed Algorithm 3 | 157 / 2.7327 | 162 / 3.9759 | 144 / 4.4950 | 128 / 4.8193 |
| SEM (2) | 3785 / 64.8752 | 13,980 / 243.9019 | 18,994 / 345.3686 | 30,777 / 567.8440 |

k = 50:
| | m = 10 | m = 20 | m = 30 | m = 40 |
| Proposed Algorithm 3 | 185 / 4.0691 | 173 / 4.3798 | 128 / 4.8817 | 130 / 6.2658 |
| SEM (2) | 4176 / 77.6893 | 8645 / 150.4267 | 21,262 / 381.0991 | 30,956 / 561.4559 |
Table 6. Comparison of proposed Algorithm 3 and the extragradient method (EGM) [16] for Example 4 with $\lambda_0 = 1$ and $\mu = 0.1$ (No. of Iter. / CPU Time).

| $x_1$ | Proposed Algorithm 3 | EGM [16] |
| $(2, 1)^T$ | 13 / 5.9390 × 10^−4 | 62 / 1.8851 × 10^−3 |
| $(1, 2)^T$ | 33 / 5.2230 × 10^−4 | 62 / 1.9666 × 10^−3 |
| $(1.5, 1.5)^T$ | 12 / 4.7790 × 10^−4 | 14 / 3.955 × 10^−4 |
| $(1.25, 1.75)^T$ | 13 / 6.7980 × 10^−4 | 58 / 1.9138 × 10^−3 |
Table 7. Proposed Algorithm 3 for Example 4 with $\lambda_0 = 5$ and $\mu = 0.999$.

| $x_1$ | No. of Iter. | CPU Time |
| $(2, 1)^T$ | 18 | 7.5980 × 10^−4 |
| $(1, 2)^T$ | 15 | 6.0910 × 10^−4 |
| $(1.5, 1.5)^T$ | 13 | 5.5840 × 10^−4 |
| $(1.25, 1.75)^T$ | 16 | 6.9110 × 10^−4 |
Table 8. Example 4: proposed Algorithm 3 with $x_1 = (2, 1)^T$ for different $\mu$ and $\lambda_0$ values.

$\lambda_0 = 5$:
| | $\mu = 0.1$ | $\mu = 0.3$ | $\mu = 0.7$ | $\mu = 0.999$ |
| No. of Iter. | 14 | 14 | 16 | 18 |
| CPU Time | 6.8600 × 10^−4 | 7.1170 × 10^−4 | 8.1080 × 10^−4 | 8.6850 × 10^−4 |

$\mu = 0.999$:
| | $\lambda_0 = 0.1$ | $\lambda_0 = 1$ | $\lambda_0 = 5$ | $\lambda_0 = 10$ |
| No. of Iter. | 12 | 18 | 18 | 18 |
| CPU Time | 5.6360 × 10^−4 | 9.3920 × 10^−4 | 9.0230 × 10^−4 | 8.9310 × 10^−4 |
Table 9. Example 5 comparison: proposed Algorithm 3, Bot Algorithm 1, Malitsky Algorithm 2, and Shehu Alg. [37] with $\lambda_0 = 1$ and $\mu = 0.9$ (No. of Iter. / CPU Time).

| $x_1$ | Proposed Algorithm 3 | Bot Algorithm 1 | Malitsky Algorithm 2 | Shehu Alg. [37] |
| $\frac{1}{12}(t^2 - 2t + 1)$ | 23 / 5.1433 × 10^−3 | 2159 / 0.2334 | 1371 / 3.7625 × 10^−2 | 37,135 / 10.8377 |
| $\frac{1}{9}e^t\sin(t)$ | 18 / 3.2111 × 10^−3 | 1681 / 0.1808 | 4374 / 3.9159 × 10^−2 | 70,741 / 32.5843 |
| $\frac{1}{21}t^2\cos(t)$ | 14 / 2.4545 × 10^−3 | 4344 / 0.4857 | 3373 / 3.8413 × 10^−2 | 17,741 / 5.6272 |
| $\frac{1}{7}(3t - 2)e^t$ | 43 / 8.7538 × 10^−3 | 2774 / 0.2951 | 5351 / 3.7544 × 10^−2 | 28,758 / 7.3424 |
Table 10. Example 5: proposed Algorithm 3 with $x_1 = \frac{1}{12}(t^2 - 2t + 1)$ for different $\mu$ and $\lambda_0$ values.

$\mu = 0.9$:
| | $\lambda_0 = 0.1$ | $\lambda_0 = 1$ | $\lambda_0 = 2$ | $\lambda_0 = 3$ |
| No. of Iter. | 167 | 23 | 11 | 8 |
| CPU Time | 3.4828 × 10^−2 | 4.2089 × 10^−3 | 2.2288 × 10^−3 | 1.3899 × 10^−3 |

$\lambda_0 = 2$:
| | $\mu = 0.1$ | $\mu = 0.3$ | $\mu = 0.7$ | $\mu = 0.999$ |
| No. of Iter. | 11 | 11 | 11 | 11 |
| CPU Time | 2.0615 × 10^−3 | 2.1237 × 10^−3 | 1.9886 × 10^−3 | 2.0384 × 10^−3 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
