Article

Multi-Step Inertial Regularized Methods for Hierarchical Variational Inequality Problems Involving Generalized Lipschitzian Mappings

College of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(17), 2103; https://doi.org/10.3390/math9172103
Submission received: 23 July 2021 / Revised: 26 August 2021 / Accepted: 27 August 2021 / Published: 31 August 2021

Abstract

In this paper, we construct two multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized Lipschitzian and hemicontinuous mappings in Hilbert spaces. Then we present two strong convergence theorems and some numerical experiments to show the effectiveness and feasibility of our new iterative methods.

1. Introduction

Throughout this paper, we let $H$ be a real Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$, let $C$ be a nonempty closed convex subset of $H$, and let $A: H \to H$ be a mapping. Recall that the variational inequality problem (VIP) is to find a point $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{1}$$
We denote the solution set of VIP (1) by $\mathrm{VI}(C, A)$.
The variational inequality problem is one of the most important problems in nonlinear analysis and has been studied by many scholars.
In 2000, Tseng [1] introduced the following method:
$$x_1 \in H, \quad y_n = P_C(x_n - \delta A x_n), \quad x_{n+1} = y_n - \delta (A y_n - A x_n), \tag{2}$$
where $A$ is monotone and $L$-Lipschitzian (see Definition 1 in Section 2), $\mathrm{VI}(C, A) \neq \emptyset$, and $\delta \in \left(0, \frac{1}{L}\right)$. This algorithm has a weak convergence result.
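To make the iteration concrete, here is a minimal numerical sketch of Tseng's method (2) in Python. The operator, the box constraint (chosen so that $P_C$ has a closed form), the step size, and the stopping rule are illustrative assumptions, not choices made in the original works.

```python
import numpy as np

def tseng(A, proj_C, x1, delta, tol=1e-8, max_iter=10_000):
    """Tseng's method (2): y_n = P_C(x_n - delta * A(x_n)),
    x_{n+1} = y_n - delta * (A(y_n) - A(x_n))."""
    x = x1
    for _ in range(max_iter):
        Ax = A(x)
        y = proj_C(x - delta * Ax)
        x_next = y - delta * (A(y) - Ax)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Illustrative data: A(x) = x + sin(x) is monotone and 2-Lipschitzian,
# C = [-2, 5]^2, and delta = 0.4 < 1/L.
A = lambda x: x + np.sin(x)
proj_C = lambda x: np.clip(x, -2.0, 5.0)
print(tseng(A, proj_C, x1=np.array([1.0, 1.0]), delta=0.4))
```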
In 2011, Censor et al. [2] proposed the subgradient extragradient method for solving VIP (1), as follows:
$$x_1 \in H, \quad y_n = P_C(x_n - \delta A x_n), \quad G_n = \{ w \in H : \langle x_n - \delta A x_n - y_n, w - y_n \rangle \le 0 \}, \quad x_{n+1} = P_{G_n}(x_n - \delta A y_n). \tag{3}$$
The conditions on $A$ and $\delta$ are the same as those in Tseng's method. Then the sequence $\{x_n\}$ converges weakly to some point $z \in \mathrm{VI}(C, A)$ under some conditions.
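A practical point about (3): the second projection is onto a half-space, for which the projection has a closed form, so each iteration needs only one projection onto $C$. A small sketch of this closed form (the variable names are ours, for illustration):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto G = {w : <a, w> <= b}: return x if feasible,
    otherwise x - ((<a, x> - b) / ||a||^2) * a."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

# For G_n = {w : <x_n - delta*A(x_n) - y_n, w - y_n> <= 0}, take
# a = x_n - delta*A(x_n) - y_n and b = <a, y_n>.
```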
In recent years, some scholars have paid attention to the following hierarchical variational inequality problem (HVIP):
$$\text{Find } \hat{x} \in \mathrm{VI}(C, A) \text{ such that } \langle F\hat{x}, x - \hat{x} \rangle \ge 0, \quad \forall x \in \mathrm{VI}(C, A), \tag{4}$$
where $F: H \to H$ is a strongly monotone and Lipschitzian mapping.
In 2020, Hieu et al. [3,4] proposed the regularized subgradient extragradient method (Algorithm 1, RSEGM) and the regularized Tseng's extragradient method (Algorithm 2, RTEGM), stated below, for solving HVIP (4). Both methods have strong convergence results.
On the other hand, in order to accelerate convergence, inertial methods have been studied extensively by many scholars [5,6,7,8,9,10,11,12]. One of the important results is the inertial Mann algorithm introduced by Maingé [5] in 2007:
$$w_n = x_n + \alpha_n (x_n - x_{n-1}), \quad x_{n+1} = (1 - \beta_n) w_n + \beta_n T w_n. \tag{5}$$
Under some conditions, the sequence $\{x_n\}$ converges weakly to a fixed point of $T$.
In 2019, Dong et al. [7] proposed the multi-step inertial Krasnosel'skiĭ–Mann algorithm for finding a fixed point of a nonexpansive mapping $T$, as follows:
$$y_n = x_n + \sum_{k \in S_n} \alpha_{k,n} (x_{n-k} - x_{n-k-1}), \quad w_n = x_n + \sum_{k \in S_n} \beta_{k,n} (x_{n-k} - x_{n-k-1}), \quad x_{n+1} = (1 - \lambda_n) y_n + \lambda_n T w_n. \tag{6}$$
This algorithm has a weak convergence result under certain conditions.
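As a sketch of how the multi-step inertial term in (6) can be evaluated, the following Python helper computes $x_n + \sum_k \alpha_{k,n}(x_{n-k} - x_{n-k-1})$ from a short history of iterates; taking $S_n = \{0, 1, \ldots\}$ and this particular weight handling are our illustrative assumptions.

```python
import numpy as np

def inertial_extrapolation(history, weights):
    """history = [x_n, x_{n-1}, ..., x_{n-M}] (most recent first);
    returns x_n + sum_k weights[k] * (x_{n-k} - x_{n-k-1})."""
    out = history[0].copy()
    for k, w in enumerate(weights):
        if k + 1 >= len(history):   # not enough history for this term
            break
        out += w * (history[k] - history[k + 1])
    return out
```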
In this paper, motivated by the results of [3,4,7], we construct a multi-step inertial regularized subgradient extragradient method and a multi-step inertial regularized Tseng's extragradient method for solving HVIP (4) in a Hilbert space when $F$ is a generalized Lipschitzian and hemicontinuous mapping (see Definitions 2 and 3 in Section 2). Then we present two strong convergence theorems and give some numerical experiments to show the effectiveness and feasibility of our new iterative methods.
Algorithm 1 Regularized subgradient extragradient method (RSEGM)
  • Initialization: Given $\lambda_1 > 0$, $\mu \in (0, 1)$. Choose $x_0, x_1 \in H$ arbitrarily and a sequence $\{\beta_n\} \subset (0, +\infty)$ such that
    $$\lim_{n \to +\infty} \beta_n = 0, \quad \sum_{n=1}^{+\infty} \beta_n = +\infty, \quad \text{and} \quad \lim_{n \to +\infty} \frac{\beta_{n+1} - \beta_n}{\beta_n^2} = 0.$$
  • Iterative step: Calculate $x_{n+1}$ for $n \ge 1$ as follows:
  • Step 1. Compute
    $$y_n = P_C[x_n - \lambda_n (A x_n + \beta_n F x_n)].$$
  • Step 2. Compute
    $$x_{n+1} = P_{G_n}[x_n - \lambda_n (A y_n + \beta_n F x_n)],$$
    and
    $$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n, \dfrac{\mu \|x_n - y_n\|}{\|A x_n - A y_n\|} \right\}, & \text{if } A x_n \neq A y_n, \\ \lambda_n, & \text{otherwise}, \end{cases}$$
    where $G_n = \{ z \in H : \langle x_n - \lambda_n (A x_n + \beta_n F x_n) - y_n, z - y_n \rangle \le 0 \}$.
  • Set $n := n + 1$ and go to Step 1.
Algorithm 2 Regularized Tseng's extragradient method (RTEGM)
  • Initialization: Given $\lambda_1 > 0$, $\mu \in (0, 1)$. Choose $x_0, x_1 \in H$ arbitrarily and a sequence $\{\beta_n\} \subset (0, +\infty)$ such that
    $$\lim_{n \to +\infty} \beta_n = 0, \quad \sum_{n=1}^{+\infty} \beta_n = +\infty, \quad \text{and} \quad \lim_{n \to +\infty} \frac{\beta_{n+1} - \beta_n}{\beta_n^2} = 0.$$
  • Iterative step: Calculate $x_{n+1}$ for $n \ge 1$ as follows:
  • Step 1. Compute
    $$y_n = P_C[x_n - \lambda_n (A x_n + \beta_n F x_n)].$$
  • Step 2. Compute
    $$x_{n+1} = y_n - \lambda_n (A y_n - A x_n),$$
    and
    $$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n, \dfrac{\mu \|x_n - y_n\|}{\|A x_n - A y_n\|} \right\}, & \text{if } A x_n \neq A y_n, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
  • Set $n := n + 1$ and go to Step 1.

2. Preliminaries

In this section, we present some necessary definitions and lemmas which are needed for our main results.
Definition 1
([13]). Let $A: H \to H$ be a mapping.
(i) 
$A$ is $\zeta$-strongly monotone ($\zeta > 0$) if
$$\langle Ax - Ay, x - y \rangle \ge \zeta \|x - y\|^2, \quad \forall x, y \in H.$$
(ii) 
$A$ is monotone if
$$\langle Ax - Ay, x - y \rangle \ge 0, \quad \forall x, y \in H.$$
(iii) 
$A$ is $L$-Lipschitzian ($L > 0$) if
$$\|Ax - Ay\| \le L \|x - y\|, \quad \forall x, y \in H.$$
Let $\{x_n\} \subset H$ be a sequence. We use $x_n \to z$ and $x_n \rightharpoonup z$ to indicate that $\{x_n\}$ converges strongly and weakly to $z$, respectively.
Definition 2
([14]). Let $A: H \to H$ be a mapping. $A$ is said to be hemicontinuous if, for all $x, y \in H$, $\delta_n \to 0$ implies $A(x + \delta_n y) \rightharpoonup Ax$ as $n \to +\infty$.
It is obvious that a continuous mapping must be hemicontinuous, but the converse is not true.
Lemma 1
([14]). If the mapping $A: H \to H$ in VIP (1) is hemicontinuous and strongly monotone, then VIP (1) has a unique solution.
Lemma 2
([15]). Let $A: C \to H$ be a monotone and hemicontinuous mapping. Then $\tilde{x} \in \mathrm{VI}(C, A)$ if and only if $\langle Ax, x - \tilde{x} \rangle \ge 0$, $\forall x \in C$.
Definition 3
([14]). Let $A: H \to H$ be a mapping. $A$ is said to be $L$-generalized Lipschitzian ($L > 0$) if
$$\|Ax - Ay\| \le L (\|x - y\| + 1), \quad \forall x, y \in H.$$
Remark 1.
We can easily see that a Lipschitzian mapping must be a generalized Lipschitzian mapping, but the converse is not true. A generalized Lipschitzian mapping may not even be hemicontinuous. For example, consider the sign function $f: \mathbb{R} \to \mathbb{R}$, defined by
$$f(x) = \begin{cases} -1, & x < 0, \\ 0, & x = 0, \\ 1, & x > 0. \end{cases}$$
Then $f$ is generalized Lipschitzian but not hemicontinuous [14]. Moreover, a continuous generalized Lipschitzian mapping may not be Lipschitzian. For example, let $g: \mathbb{R} \to \mathbb{R}$ be defined by
$$g(x) = \begin{cases} x - 1, & x < -1, \\ x - 1 + \sqrt{x + 1}, & -1 \le x < 0, \\ x + 1 - \sqrt{1 - x}, & 0 \le x \le 1, \\ x + 1, & x > 1. \end{cases}$$
We can see that $g$ is continuous and generalized Lipschitzian but not Lipschitzian (its slope is unbounded near $x = -1$ and $x = 1$). In the past, many scholars studied Lipschitzian mappings, but in this paper we pay attention to generalized Lipschitzian mappings. Therefore, the research in this paper is meaningful.
Recall the metric projection operator $P_C$ from $H$ onto $C$, defined by
$$P_C x = \arg\min_{y \in C} \|x - y\|, \quad x \in H.$$
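For the feasible sets used in the experiments of Section 5 (intervals and boxes), this argmin has a componentwise closed form, namely clipping; a minimal sketch, assuming the box $-2 \le x^{(i)} \le 5$ from Example 2:

```python
import numpy as np

def proj_box(x, lo=-2.0, hi=5.0):
    """Metric projection onto C = [lo, hi]^s: clipping each coordinate
    minimizes ||x - y|| over the box."""
    return np.clip(x, lo, hi)
```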
Lemma 3
([16,17]). Given $x \in H$ and $q \in C$, we have:
(i) 
$q = P_C x$ if and only if
$$\langle x - q, q - y \rangle \ge 0, \quad \forall y \in C;$$
(ii) 
$P_C$ is firmly nonexpansive, i.e.,
$$\langle P_C y - P_C z, y - z \rangle \ge \|P_C y - P_C z\|^2, \quad \forall y, z \in H;$$
(iii) 
$\|x - P_C x\|^2 \le \|x - y\|^2 - \|y - P_C x\|^2, \quad \forall y \in C.$
The following lemma is important for proving strong convergence.
Lemma 4
([18,19]). Let $\{\zeta_n\}$ be a sequence of nonnegative real numbers satisfying
$$\zeta_{n+1} \le (1 - \gamma_n) \zeta_n + \gamma_n \xi_n + \sigma_n, \quad n \in \mathbb{N},$$
where $\{\gamma_n\}$, $\{\xi_n\}$ and $\{\sigma_n\}$ satisfy the following conditions:
(i) 
$\{\gamma_n\} \subset (0, 1)$ with $\sum_{n=1}^{+\infty} \gamma_n = +\infty$;
(ii) 
$\limsup_{n \to +\infty} \xi_n \le 0$;
(iii) 
$\sigma_n \ge 0$ with $\sum_{n=1}^{+\infty} \sigma_n < +\infty$.
Then $\lim_{n \to +\infty} \zeta_n = 0$.
Now, we focus on HVIP (4) when $A$ and $F$ satisfy the following conditions:
(CD1) $A$ is monotone on $C$ and $L$-Lipschitzian on $H$.
(CD2) $F$ is $\eta$-strongly monotone, $K$-generalized Lipschitzian and hemicontinuous on $H$.
(CD3) $\mathrm{VI}(C, A)$ is nonempty.
From Lemma 1, we know that HVIP (4) has a unique solution if conditions (CD1)–(CD3) are satisfied. We denote this solution by $\hat{x}$. Consider the following variational inequality problem:
$$\text{Find } x^* \in C \text{ such that } \langle (A + \beta F) x^*, x - x^* \rangle \ge 0, \quad \forall x \in C, \tag{7}$$
where $\beta > 0$ and $A$ and $F$ satisfy conditions (CD1)–(CD3). It is obvious that $(A + \beta F): H \to H$ is strongly monotone and hemicontinuous, so VIP (7) has a unique solution according to Lemma 1. We denote this solution by $x_\beta$.
We have the following lemmas.
Lemma 5.
$\|x_\beta\| \le \frac{1}{\eta} \|F\hat{x}\| + \|\hat{x}\|$.
Proof. 
For each $u \in \mathrm{VI}(C, A)$, since $x_\beta$ is the solution of VIP (7), we have
$$\langle A x_\beta + \beta F x_\beta, u - x_\beta \rangle \ge 0$$
and
$$\langle A u, x_\beta - u \rangle \ge 0.$$
Adding the inequalities above, we obtain
$$\langle A x_\beta - A u + \beta F x_\beta, u - x_\beta \rangle \ge 0. \tag{8}$$
Since $A$ is monotone on $C$, we get
$$\langle A u - A x_\beta, u - x_\beta \rangle \ge 0. \tag{9}$$
Adding (8) and (9), we obtain
$$\beta \langle F x_\beta, u - x_\beta \rangle \ge 0,$$
which means
$$\langle F x_\beta, u - x_\beta \rangle \ge 0. \tag{10}$$
From the $\eta$-strong monotonicity of $F$, we get
$$\langle F u - F x_\beta, u - x_\beta \rangle \ge \eta \|x_\beta - u\|^2. \tag{11}$$
Adding (10) and (11), we obtain
$$\langle F u, u - x_\beta \rangle \ge \eta \|x_\beta - u\|^2. \tag{12}$$
Hence
$$\|x_\beta - u\|^2 \le \frac{1}{\eta} \langle F u, u - x_\beta \rangle \le \frac{1}{\eta} \|F u\| \|u - x_\beta\|,$$
which implies
$$\|x_\beta - u\| \le \frac{1}{\eta} \|F u\|.$$
So we obtain
$$\|x_\beta\| \le \|x_\beta - u\| + \|u\| \le \frac{1}{\eta} \|F u\| + \|u\|.$$
In particular, this is also true for $u = \hat{x}$.    □
Lemma 6.
For all $\alpha, \beta > 0$, $\|x_\alpha - x_\beta\| \le \frac{|\beta - \alpha|}{\alpha} M$, where $M$ is a positive constant. More precisely, $M = \frac{1}{\eta} \left[ \left(1 + \frac{K}{\eta}\right) \|F\hat{x}\| + 2K\|\hat{x}\| + K \right]$.
Proof. 
Since $x_\alpha \in \mathrm{VI}(C, A + \alpha F)$ and $x_\beta \in \mathrm{VI}(C, A + \beta F)$, we have
$$\langle A x_\alpha + \alpha F x_\alpha, x_\beta - x_\alpha \rangle \ge 0$$
and
$$\langle A x_\beta + \beta F x_\beta, x_\alpha - x_\beta \rangle \ge 0.$$
Adding the two inequalities above, we obtain
$$\langle A x_\alpha - A x_\beta, x_\beta - x_\alpha \rangle + \alpha \langle F x_\alpha, x_\beta - x_\alpha \rangle + \beta \langle F x_\beta, x_\alpha - x_\beta \rangle \ge 0,$$
which, by the monotonicity of $A$, implies that
$$\alpha \langle F x_\alpha, x_\beta - x_\alpha \rangle + \beta \langle F x_\beta, x_\alpha - x_\beta \rangle \ge 0.$$
Hence
$$(\beta - \alpha) \langle F x_\beta, x_\alpha - x_\beta \rangle \ge \alpha \langle F x_\beta - F x_\alpha, x_\beta - x_\alpha \rangle. \tag{15}$$
From the $\eta$-strong monotonicity of $F$, we obtain
$$\langle F x_\beta - F x_\alpha, x_\beta - x_\alpha \rangle \ge \eta \|x_\alpha - x_\beta\|^2.$$
Substituting the last relation into (15), we have
$$\|x_\alpha - x_\beta\|^2 \le \frac{1}{\eta} \langle F x_\beta - F x_\alpha, x_\beta - x_\alpha \rangle \le \frac{1}{\eta} \cdot \frac{\beta - \alpha}{\alpha} \langle F x_\beta, x_\alpha - x_\beta \rangle \le \frac{1}{\eta} \cdot \frac{|\beta - \alpha|}{\alpha} \|F x_\beta\| \|x_\alpha - x_\beta\|,$$
which means
$$\|x_\alpha - x_\beta\| \le \frac{1}{\eta} \cdot \frac{|\beta - \alpha|}{\alpha} \|F x_\beta\|. \tag{16}$$
Since $F$ is generalized Lipschitzian, by Lemma 5 we get
$$\|F x_\beta\| \le \|F x_\beta - F \hat{x}\| + \|F \hat{x}\| \le K (\|x_\beta - \hat{x}\| + 1) + \|F \hat{x}\| \le K \|x_\beta\| + K \|\hat{x}\| + K + \|F \hat{x}\| \le K \left( \frac{1}{\eta} \|F \hat{x}\| + \|\hat{x}\| \right) + K \|\hat{x}\| + K + \|F \hat{x}\| = \left( 1 + \frac{K}{\eta} \right) \|F \hat{x}\| + 2K \|\hat{x}\| + K. \tag{17}$$
Substituting (17) into (16), we obtain
$$\|x_\alpha - x_\beta\| \le \frac{1}{\eta} \cdot \frac{|\beta - \alpha|}{\alpha} \left[ \left( 1 + \frac{K}{\eta} \right) \|F \hat{x}\| + 2K \|\hat{x}\| + K \right] = \frac{|\beta - \alpha|}{\alpha} M,$$
where $M = \frac{1}{\eta} \left[ \left( 1 + \frac{K}{\eta} \right) \|F \hat{x}\| + 2K \|\hat{x}\| + K \right]$.    □
Lemma 7.
$\lim_{\beta \to 0^+} x_\beta = \hat{x}$.
Proof. 
From Lemma 5, we know that $\{x_\beta\}$ is bounded. So, by the reflexivity of $H$, there exists a sequence $\{\beta_m\} \subset (0, \infty)$ such that $\beta_m \to 0$ and $x_{\beta_m} \rightharpoonup \bar{x}$ as $m \to +\infty$. Since $x_\beta$ is the solution of VIP (7), we have
$$\langle A x_\beta + \beta F x_\beta, x - x_\beta \rangle \ge 0, \quad \forall x \in C,$$
which, by the monotonicity of $A$, implies that
$$\langle A x + \beta F x_\beta, x - x_\beta \rangle \ge 0, \quad \forall x \in C.$$
Replacing $\beta$ with $\beta_m$ in the last relation, we get
$$\langle A x + \beta_m F x_{\beta_m}, x - x_{\beta_m} \rangle \ge 0, \quad \forall x \in C. \tag{19}$$
Since $\{x_{\beta_m}\}$ is bounded and $F$ is generalized Lipschitzian, $\{F x_{\beta_m}\}$ is also bounded. Taking the limit in (19), we obtain
$$\langle A x, x - \bar{x} \rangle \ge 0, \quad \forall x \in C.$$
It follows from Lemma 2 that $\bar{x} \in \mathrm{VI}(C, A)$. From (12), we deduce
$$\langle F u, u - x_{\beta_m} \rangle \ge 0, \quad \forall u \in \mathrm{VI}(C, A). \tag{21}$$
Taking the limit in (21), we obtain
$$\langle F u, u - \bar{x} \rangle \ge 0, \quad \forall u \in \mathrm{VI}(C, A).$$
This means that $\bar{x}$ is a solution of HVIP (4). Since HVIP (4) has the unique solution $\hat{x}$, we conclude $\bar{x} = \hat{x}$. Thus, $x_\beta \rightharpoonup \hat{x}$ as $\beta \to 0^+$. Replacing $u$ with $\hat{x}$ in (12), we get
$$\|x_\beta - \hat{x}\|^2 \le \frac{1}{\eta} \langle F \hat{x}, \hat{x} - x_\beta \rangle.$$
It follows from the fact $x_\beta \rightharpoonup \hat{x}$ that $\lim_{\beta \to 0^+} x_\beta = \hat{x}$.    □

3. Multi-Step Inertial RSEGM

In this section, we propose a new multi-step inertial method for solving HVIP (4) based on Algorithm 1 (RSEGM). Under certain conditions, it has a strong convergence result.
We need the following lemma to analyze the convergence of { x n } generated by Algorithm 3.
Lemma 8
([20]). The sequence $\{\lambda_n\}$ generated by Algorithm 3 is non-increasing and
$$\lim_{n \to +\infty} \lambda_n = \lambda > 0.$$
More precisely, we have $\lambda \ge \min\left\{ \lambda_1, \frac{\mu}{L} \right\} > 0$.
Theorem 1.
Under conditions (CD1)–(CD3), the sequence $\{x_n\}$ generated by Algorithm 3 converges strongly to $\hat{x} \in \mathrm{VI}(C, A)$, where $\hat{x}$ is the unique solution of HVIP (4).
Algorithm 3 Multi-step inertial RSEGM (MIRSEGM)
  • Initialization: Given $\lambda_1 > 0$, $\mu \in (0, 1)$. Choose $x_0, x_1 \in H$ arbitrarily and a sequence $\{\beta_n\} \subset (0, +\infty)$ such that
    $$\lim_{n \to +\infty} \beta_n = 0, \quad \sum_{n=1}^{+\infty} \beta_n = +\infty, \quad \sum_{n=1}^{+\infty} \beta_n^2 < +\infty, \quad \text{and} \quad \lim_{n \to +\infty} \frac{\beta_{n+1} - \beta_n}{\beta_n^2} = 0.$$
    For each $i = 1, 2, \ldots, N$ (where $N$ is a chosen positive integer), choose a sequence $\{\sigma_{i,n}\} \subset (0, +\infty)$ satisfying
    $$\lim_{n \to +\infty} \frac{\sigma_{i,n}}{\beta_n} = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty} \sigma_{i,n} < +\infty.$$
  • Iterative step: Calculate $x_{n+1}$ for $n \ge 1$ as follows:
  • Step 1. Compute
    $$w_n = x_n + \sum_{i=1}^{\min\{N, n\}} \alpha_{i,n} (x_{n-i+1} - x_{n-i}),$$
    where $0 \le \alpha_{i,n} \le \alpha_i$ for some constant $\alpha_i \ge 0$ with
    $$\alpha_{i,n} = \begin{cases} \min\left\{ \alpha_i, \dfrac{\sigma_{i,n}}{\|x_{n-i+1} - x_{n-i}\|} \right\}, & \text{if } x_{n-i+1} \neq x_{n-i}, \\ \alpha_i, & \text{otherwise}. \end{cases}$$
  • Step 2. Compute
    $$y_n = P_C[w_n - \lambda_n (A w_n + \beta_n F w_n)].$$
  • Step 3. Compute
    $$x_{n+1} = P_{T_n}[w_n - \lambda_n (A y_n + \beta_n F w_n)],$$
    and
    $$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n, \dfrac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|} \right\}, & \text{if } A w_n \neq A y_n, \\ \lambda_n, & \text{otherwise}, \end{cases}$$
    where $T_n = \{ z \in H : \langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, z - y_n \rangle \le 0 \}$.
  • Set $n := n + 1$ and go to Step 1.
Proof. 
From Lemma 1, we know that for each $n \in \mathbb{N}$ there exists a unique element $x_{\beta_n} \in C$ such that
$$\langle (A + \beta_n F) x_{\beta_n}, x - x_{\beta_n} \rangle \ge 0, \quad \forall x \in C.$$
From Lemma 7, we know that $x_{\beta_n} \to \hat{x}$, so it suffices to show that $\|x_n - x_{\beta_n}\| \to 0$. According to Lemma 3, we have
$$\begin{aligned} \|x_{n+1} - x_{\beta_n}\|^2 &\le \|w_n - \lambda_n (A y_n + \beta_n F w_n) - x_{\beta_n}\|^2 - \|w_n - \lambda_n (A y_n + \beta_n F w_n) - x_{n+1}\|^2 \\ &= \|(w_n - x_{\beta_n}) - \lambda_n (A y_n + \beta_n F w_n)\|^2 - \|(w_n - x_{n+1}) - \lambda_n (A y_n + \beta_n F w_n)\|^2 \\ &= \|w_n - x_{\beta_n}\|^2 - \|x_{n+1} - w_n\|^2 + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - x_{n+1} \rangle \\ &= \|w_n - x_{\beta_n}\|^2 - \|x_{n+1} - w_n\|^2 + 2 \lambda_n \langle A w_n + \beta_n F w_n, y_n - x_{n+1} \rangle + 2 \lambda_n \langle A w_n - A y_n, x_{n+1} - y_n \rangle + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle \\ &= \|w_n - x_{\beta_n}\|^2 - \|x_{n+1} - w_n\|^2 + 2 \langle w_n - y_n, y_n - x_{n+1} \rangle + 2 \langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, x_{n+1} - y_n \rangle + 2 \lambda_n \langle A w_n - A y_n, x_{n+1} - y_n \rangle + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle. \end{aligned} \tag{24}$$
Since $x_{n+1} \in T_n$, it follows from the definition of $T_n$ that
$$\langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, x_{n+1} - y_n \rangle \le 0. \tag{25}$$
It is easy to see that
$$2 \langle w_n - y_n, y_n - x_{n+1} \rangle = \|x_{n+1} - w_n\|^2 - \|w_n - y_n\|^2 - \|x_{n+1} - y_n\|^2. \tag{26}$$
Substituting (25) and (26) into (24), we obtain
$$\|x_{n+1} - x_{\beta_n}\|^2 \le \|w_n - x_{\beta_n}\|^2 - \|w_n - y_n\|^2 - \|x_{n+1} - y_n\|^2 + 2 \lambda_n \langle A w_n - A y_n, x_{n+1} - y_n \rangle + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle. \tag{27}$$
From the computation of $\{\lambda_n\}$, we deduce
$$2 \lambda_n \langle A w_n - A y_n, x_{n+1} - y_n \rangle \le 2 \lambda_n \|A w_n - A y_n\| \|x_{n+1} - y_n\| \le \frac{2 \mu \lambda_n}{\lambda_{n+1}} \|w_n - y_n\| \|x_{n+1} - y_n\| \le \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \|w_n - y_n\|^2 + \|x_{n+1} - y_n\|^2. \tag{28}$$
Combining (27) and (28), we have
$$\|x_{n+1} - x_{\beta_n}\|^2 \le \|w_n - x_{\beta_n}\|^2 - \left( 1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \right) \|w_n - y_n\|^2 + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle. \tag{29}$$
According to the monotonicity of $A$, we get
$$2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle = 2 \lambda_n \langle A y_n - A x_{\beta_n}, x_{\beta_n} - y_n \rangle + 2 \lambda_n \langle A x_{\beta_n} + \beta_n F x_{\beta_n}, x_{\beta_n} - y_n \rangle + 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, x_{\beta_n} - y_n \rangle \le 2 \lambda_n \langle A x_{\beta_n} + \beta_n F x_{\beta_n}, x_{\beta_n} - y_n \rangle + 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, x_{\beta_n} - y_n \rangle. \tag{30}$$
Since $x_{\beta_n} \in \mathrm{VI}(C, A + \beta_n F)$ and $y_n \in C$, we have
$$\langle A x_{\beta_n} + \beta_n F x_{\beta_n}, x_{\beta_n} - y_n \rangle \le 0,$$
which, by relation (30), yields
$$2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle \le 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, x_{\beta_n} - y_n \rangle. \tag{31}$$
Since $F$ is $\eta$-strongly monotone, we find
$$2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, x_{\beta_n} - y_n \rangle = 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, x_{\beta_n} - w_n \rangle + 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, w_n - y_n \rangle \le -2 \lambda_n \beta_n \eta \|w_n - x_{\beta_n}\|^2 + 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, w_n - y_n \rangle. \tag{32}$$
Let $\epsilon_1$, $\epsilon_2$ and $\epsilon_3$ be three positive real numbers such that
$$2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3 > 0.$$
From Lemma 8, we know that $\frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \to \mu^2 \in (0, 1)$. Since $\beta_n \to 0$ and $\frac{\sigma_{i,n}}{\beta_n} \to 0$ for each $i = 1, 2, \ldots, N$, there exist $\epsilon_4 > 0$ and $n_0 \in \mathbb{N}$ such that
$$1 - \epsilon_4 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{\lambda_n \beta_n K}{\epsilon_1} > 0, \quad \forall n \ge n_0,$$
$$\sum_{i=1}^{N} \sigma_{i,n} \le \epsilon_3 \lambda_n \beta_n, \quad \forall n \ge n_0.$$
Since $F$ is $K$-generalized Lipschitzian, we deduce
$$\begin{aligned} 2 \lambda_n \beta_n \langle F w_n - F x_{\beta_n}, w_n - y_n \rangle &\le 2 \lambda_n \beta_n \|F w_n - F x_{\beta_n}\| \|w_n - y_n\| \le 2 \lambda_n \beta_n K (\|w_n - x_{\beta_n}\| + 1) \|w_n - y_n\| \\ &= 2 \lambda_n \beta_n K \|w_n - x_{\beta_n}\| \|w_n - y_n\| + 2 \lambda_n \beta_n K \|w_n - y_n\| \\ &\le \epsilon_1 \lambda_n \beta_n K \|w_n - x_{\beta_n}\|^2 + \frac{\lambda_n \beta_n K}{\epsilon_1} \|w_n - y_n\|^2 + \epsilon_4 \|w_n - y_n\|^2 + \frac{\lambda_n^2 \beta_n^2 K^2}{\epsilon_4} \\ &\le \epsilon_1 \lambda_n \beta_n K \|w_n - x_{\beta_n}\|^2 + \left( \frac{\lambda_n \beta_n K}{\epsilon_1} + \epsilon_4 \right) \|w_n - y_n\|^2 + \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4}. \end{aligned} \tag{33}$$
Combining (31)–(33), we obtain
$$2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle \le -(2\eta - K\epsilon_1) \lambda_n \beta_n \|w_n - x_{\beta_n}\|^2 + \left( \frac{\lambda_n \beta_n K}{\epsilon_1} + \epsilon_4 \right) \|w_n - y_n\|^2 + \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4}. \tag{34}$$
Substituting (34) into (29), for all $n \ge n_0$ we conclude
$$\|x_{n+1} - x_{\beta_n}\|^2 \le [1 - (2\eta - K\epsilon_1) \lambda_n \beta_n] \|w_n - x_{\beta_n}\|^2 - \left( 1 - \epsilon_4 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} - \frac{\lambda_n \beta_n K}{\epsilon_1} \right) \|w_n - y_n\|^2 + \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4} \le [1 - (2\eta - K\epsilon_1) \lambda_n \beta_n] \|w_n - x_{\beta_n}\|^2 + \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4}. \tag{35}$$
By Lemma 6, for all $n \ge n_0$ we have
$$\begin{aligned} \|x_{n+1} - x_{\beta_{n+1}}\|^2 &= \|x_{n+1} - x_{\beta_n}\|^2 + \|x_{\beta_{n+1}} - x_{\beta_n}\|^2 + 2 \langle x_{n+1} - x_{\beta_n}, x_{\beta_n} - x_{\beta_{n+1}} \rangle \\ &\le \|x_{n+1} - x_{\beta_n}\|^2 + \|x_{\beta_{n+1}} - x_{\beta_n}\|^2 + 2 \|x_{n+1} - x_{\beta_n}\| \|x_{\beta_{n+1}} - x_{\beta_n}\| \\ &\le \|x_{n+1} - x_{\beta_n}\|^2 + \|x_{\beta_{n+1}} - x_{\beta_n}\|^2 + \epsilon_2 \lambda_n \beta_n \|x_{n+1} - x_{\beta_n}\|^2 + \frac{1}{\epsilon_2 \lambda_n \beta_n} \|x_{\beta_{n+1}} - x_{\beta_n}\|^2 \\ &= (1 + \epsilon_2 \lambda_n \beta_n) \|x_{n+1} - x_{\beta_n}\|^2 + \frac{1 + \epsilon_2 \lambda_n \beta_n}{\epsilon_2 \lambda_n \beta_n} \|x_{\beta_{n+1}} - x_{\beta_n}\|^2 \\ &\le (1 + \epsilon_2 \lambda_n \beta_n) \|x_{n+1} - x_{\beta_n}\|^2 + \frac{1 + \epsilon_2 \lambda_n \beta_n}{\epsilon_2 \lambda_n \beta_n} \left( \frac{|\beta_{n+1} - \beta_n|}{\beta_n} \right)^2 M_1^2 \\ &= (1 + \epsilon_2 \lambda_n \beta_n) \|x_{n+1} - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3}, \end{aligned} \tag{36}$$
where $M_1 = \frac{1}{\eta} \left[ \left( 1 + \frac{K}{\eta} \right) \|F\hat{x}\| + 2K\|\hat{x}\| + K \right]$ is a positive constant. Since $\beta_n \to 0$, we know that $\{\beta_n\}$ is bounded. Hence $\frac{\lambda_1^2 K^2}{\epsilon_4} (1 + \epsilon_2 \lambda_1 \beta_n)$ is bounded. Substituting (35) into (36), for all $n \ge n_0$ we deduce
$$\begin{aligned} \|x_{n+1} - x_{\beta_{n+1}}\|^2 &\le (1 + \epsilon_2 \lambda_n \beta_n)[1 - (2\eta - K\epsilon_1) \lambda_n \beta_n] \|w_n - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3} + (1 + \epsilon_2 \lambda_n \beta_n) \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4} \\ &\le [1 - (2\eta - K\epsilon_1 - \epsilon_2) \lambda_n \beta_n - (2\eta - K\epsilon_1) \epsilon_2 \lambda_n^2 \beta_n^2] \|w_n - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3} + (1 + \epsilon_2 \lambda_1 \beta_n) \frac{\lambda_1^2 \beta_n^2 K^2}{\epsilon_4} \\ &\le [1 - (2\eta - K\epsilon_1 - \epsilon_2) \lambda_n \beta_n] \|w_n - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3} + M_2 \beta_n^2, \end{aligned} \tag{37}$$
where $M_2 = \sup_{n \in \mathbb{N}} \frac{\lambda_1^2 K^2}{\epsilon_4} (1 + \epsilon_2 \lambda_1 \beta_n)$ is a positive constant. Notice that, for all $n \ge n_0$,
$$\begin{aligned} \|w_n - x_{\beta_n}\|^2 &= \left\| (x_n - x_{\beta_n}) + \sum_{i=1}^{N} \alpha_{i,n} (x_{n-i+1} - x_{n-i}) \right\|^2 \le \left( \|x_n - x_{\beta_n}\| + \sum_{i=1}^{N} \alpha_{i,n} \|x_{n-i+1} - x_{n-i}\| \right)^2 \\ &= \|x_n - x_{\beta_n}\|^2 + \sum_{i=1}^{N} \alpha_{i,n}^2 \|x_{n-i+1} - x_{n-i}\|^2 + 2 \|x_n - x_{\beta_n}\| \sum_{i=1}^{N} \alpha_{i,n} \|x_{n-i+1} - x_{n-i}\| + 2 \sum_{1 \le i < j \le N} \alpha_{i,n} \alpha_{j,n} \|x_{n-i+1} - x_{n-i}\| \|x_{n-j+1} - x_{n-j}\| \\ &\le \|x_n - x_{\beta_n}\|^2 + \sum_{i=1}^{N} \sigma_{i,n}^2 + \|x_n - x_{\beta_n}\|^2 \sum_{i=1}^{N} \sigma_{i,n} + \sum_{i=1}^{N} \sigma_{i,n} + 2 \sum_{1 \le i < j \le N} \sigma_{i,n} \sigma_{j,n} \\ &= \left( 1 + \sum_{i=1}^{N} \sigma_{i,n} \right) \|x_n - x_{\beta_n}\|^2 + \bar{\sigma}_n \le (1 + \epsilon_3 \lambda_n \beta_n) \|x_n - x_{\beta_n}\|^2 + \bar{\sigma}_n, \end{aligned} \tag{38}$$
where $\bar{\sigma}_n = \sum_{i=1}^{N} \sigma_{i,n}^2 + \sum_{i=1}^{N} \sigma_{i,n} + 2 \sum_{1 \le i < j \le N} \sigma_{i,n} \sigma_{j,n}$. From the conditions on $\{\sigma_{i,n}\}$, we can easily see that $\sum_{n=1}^{+\infty} \bar{\sigma}_n < +\infty$. Substituting (38) into (37), for all $n \ge n_0$ we conclude
$$\begin{aligned} \|x_{n+1} - x_{\beta_{n+1}}\|^2 &\le [1 - (2\eta - K\epsilon_1 - \epsilon_2) \lambda_n \beta_n] (1 + \epsilon_3 \lambda_n \beta_n) \|x_n - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3} + M_2 \beta_n^2 + \bar{\sigma}_n \\ &= [1 - (2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \lambda_n \beta_n - (2\eta - K\epsilon_1 - \epsilon_2) \epsilon_3 \lambda_n^2 \beta_n^2] \|x_n - x_{\beta_n}\|^2 + \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{\epsilon_2 \lambda_n} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^3} + M_2 \beta_n^2 + \bar{\sigma}_n \\ &\le [1 - (2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \lambda_n \beta_n] \|x_n - x_{\beta_n}\|^2 + (2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \lambda_n \beta_n \cdot \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{(2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \epsilon_2 \lambda_n^2} \cdot \frac{(\beta_{n+1} - \beta_n)^2}{\beta_n^4} + M_2 \beta_n^2 + \bar{\sigma}_n \\ &\le (1 - \bar{\beta}_n) \|x_n - x_{\beta_n}\|^2 + \bar{\beta}_n M \left( \frac{\beta_{n+1} - \beta_n}{\beta_n^2} \right)^2 + M_2 \beta_n^2 + \bar{\sigma}_n \\ &= (1 - \bar{\beta}_n) \|x_n - x_{\beta_n}\|^2 + \bar{\beta}_n \delta_n + M_2 \beta_n^2 + \bar{\sigma}_n, \end{aligned}$$
where $\bar{\beta}_n = (2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \lambda_n \beta_n$, $M = \sup_{n \in \mathbb{N}} \frac{M_1^2 (1 + \epsilon_2 \lambda_n \beta_n)}{(2\eta - K\epsilon_1 - \epsilon_2 - \epsilon_3) \epsilon_2 \lambda_n^2}$ is a positive constant, and $\delta_n = M \left( \frac{\beta_{n+1} - \beta_n}{\beta_n^2} \right)^2$. By the conditions on $\{\lambda_n\}$ and $\{\beta_n\}$, we know that $\sum_{n=1}^{+\infty} (M_2 \beta_n^2 + \bar{\sigma}_n) < +\infty$, $\bar{\beta}_n \to 0$, $\sum_{n=1}^{+\infty} \bar{\beta}_n = +\infty$ and $\delta_n \to 0$. It follows from Lemma 4 that $\|x_n - x_{\beta_n}\| \to 0$ as $n \to +\infty$.    □
Remark 2.
(i) 
In Algorithm 3, it is not necessary to know $\eta$ and $K$.
(ii) 
In Algorithm 3, $\{\beta_n\}$ can be taken as $\beta_n = n^{-p}$, where $\frac{1}{2} < p < 1$.
(iii) 
If $F$ is strongly monotone and Lipschitz continuous, then the condition $\sum_{n=1}^{+\infty} \beta_n^2 < +\infty$ can be removed.
(iv) 
Let $f: H \to H$ be a contractive mapping. It is obvious that $I - f$ is strongly monotone and Lipschitz continuous. If $F = I - f$ in Algorithm 3, then $\{x_n\}$ converges strongly to $\hat{x}$, where $\hat{x}$ is the unique fixed point of $P_{\mathrm{VI}(C,A)} f$. Furthermore, if $F = I$ in Algorithm 3, then $\{x_n\}$ converges strongly to the minimum-norm element of $\mathrm{VI}(C, A)$, i.e., to $P_{\mathrm{VI}(C,A)} 0$.

4. Multi-Step Inertial RTEGM

In this section, we propose a new method for solving HVIP (4) based on Algorithm 2 (RTEGM). Under certain conditions, it has a strong convergence result.
Theorem 2.
Under conditions (CD1)–(CD3), the sequence $\{x_n\}$ generated by Algorithm 4 converges strongly to $\hat{x} \in \mathrm{VI}(C, A)$, where $\hat{x}$ is the unique solution of HVIP (4).
Proof. 
From Lemma 1, we know that for each $n \in \mathbb{N}$ there exists a unique element $x_{\beta_n} \in C$ such that
$$\langle (A + \beta_n F) x_{\beta_n}, x - x_{\beta_n} \rangle \ge 0, \quad \forall x \in C.$$
By Lemma 7, it suffices to prove that $\|x_n - x_{\beta_n}\| \to 0$. From the expression of $x_{n+1}$, we have
$$\begin{aligned} \|x_{n+1} - x_{\beta_n}\|^2 &= \|(y_n - x_{\beta_n}) - \lambda_n (A y_n - A w_n)\|^2 \\ &= \|y_n - x_{\beta_n}\|^2 + \lambda_n^2 \|A y_n - A w_n\|^2 - 2 \lambda_n \langle A y_n - A w_n, y_n - x_{\beta_n} \rangle \\ &= \|y_n - x_{\beta_n}\|^2 + \lambda_n^2 \|A y_n - A w_n\|^2 + 2 \lambda_n \langle A w_n + \beta_n F w_n, y_n - x_{\beta_n} \rangle - 2 \lambda_n \langle A y_n + \beta_n F w_n, y_n - x_{\beta_n} \rangle \\ &= \|y_n - x_{\beta_n}\|^2 + \lambda_n^2 \|A y_n - A w_n\|^2 + 2 \langle w_n - y_n, y_n - x_{\beta_n} \rangle + 2 \langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, x_{\beta_n} - y_n \rangle - 2 \lambda_n \langle A y_n + \beta_n F w_n, y_n - x_{\beta_n} \rangle. \end{aligned} \tag{40}$$
By Lemma 3 and the expression of $y_n$, we get
$$\langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, p - y_n \rangle \le 0, \quad \forall p \in C.$$
In particular,
$$\langle w_n - \lambda_n (A w_n + \beta_n F w_n) - y_n, x_{\beta_n} - y_n \rangle \le 0, \tag{41}$$
which is due to the fact that $x_{\beta_n} \in C$. It is easy to see that
$$2 \langle w_n - y_n, y_n - x_{\beta_n} \rangle = \|w_n - x_{\beta_n}\|^2 - \|w_n - y_n\|^2 - \|y_n - x_{\beta_n}\|^2. \tag{42}$$
Substituting (41) and (42) into (40), we obtain
$$\|x_{n+1} - x_{\beta_n}\|^2 \le \|w_n - x_{\beta_n}\|^2 - \|w_n - y_n\|^2 + \lambda_n^2 \|A y_n - A w_n\|^2 + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle. \tag{43}$$
From the computation of $\{\lambda_n\}$, we deduce
$$\lambda_n^2 \|A y_n - A w_n\|^2 \le \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \|y_n - w_n\|^2.$$
Substituting the last inequality into (43), we have
$$\|x_{n+1} - x_{\beta_n}\|^2 \le \|w_n - x_{\beta_n}\|^2 - \left( 1 - \frac{\mu^2 \lambda_n^2}{\lambda_{n+1}^2} \right) \|w_n - y_n\|^2 + 2 \lambda_n \langle A y_n + \beta_n F w_n, x_{\beta_n} - y_n \rangle.$$
The rest of the proof is the same as that of Theorem 1.    □
Algorithm 4 Multi-step inertial RTEGM (MIRTEGM)
  • Initialization: Given $\lambda_1 > 0$, $\mu \in (0, 1)$. Choose $x_0, x_1 \in H$ arbitrarily and a sequence $\{\beta_n\} \subset (0, +\infty)$ such that
    $$\lim_{n \to +\infty} \beta_n = 0, \quad \sum_{n=1}^{+\infty} \beta_n = +\infty, \quad \sum_{n=1}^{+\infty} \beta_n^2 < +\infty, \quad \text{and} \quad \lim_{n \to +\infty} \frac{\beta_{n+1} - \beta_n}{\beta_n^2} = 0.$$
    For each $i = 1, 2, \ldots, N$ (where $N$ is a chosen positive integer), choose a sequence $\{\sigma_{i,n}\} \subset (0, +\infty)$ satisfying
    $$\lim_{n \to +\infty} \frac{\sigma_{i,n}}{\beta_n} = 0 \quad \text{and} \quad \sum_{n=1}^{+\infty} \sigma_{i,n} < +\infty.$$
  • Iterative step: Calculate $x_{n+1}$ for $n \ge 1$ as follows:
  • Step 1. Compute
    $$w_n = x_n + \sum_{i=1}^{\min\{N, n\}} \alpha_{i,n} (x_{n-i+1} - x_{n-i}),$$
    where $0 \le \alpha_{i,n} \le \alpha_i$ for some constant $\alpha_i \ge 0$ with
    $$\alpha_{i,n} = \begin{cases} \min\left\{ \alpha_i, \dfrac{\sigma_{i,n}}{\|x_{n-i+1} - x_{n-i}\|} \right\}, & \text{if } x_{n-i+1} \neq x_{n-i}, \\ \alpha_i, & \text{otherwise}. \end{cases}$$
  • Step 2. Compute
    $$y_n = P_C[w_n - \lambda_n (A w_n + \beta_n F w_n)].$$
  • Step 3. Compute
    $$x_{n+1} = y_n - \lambda_n (A y_n - A w_n),$$
    and
    $$\lambda_{n+1} = \begin{cases} \min\left\{ \lambda_n, \dfrac{\mu \|w_n - y_n\|}{\|A w_n - A y_n\|} \right\}, & \text{if } A w_n \neq A y_n, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
  • Set $n := n + 1$ and go to Step 1.
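Algorithm 4 differs from Algorithm 3 only in Step 3: the projection onto the half-space $T_n$ is replaced by Tseng's explicit correction. In the hypothetical mirsegm() sketch given after Remark 2, this amounts to replacing the Step 3 block as follows:

```python
# Step 3 of Algorithm 4 (MIRTEGM): Tseng's correction, no second projection.
x_next = y - lam * (Ay - Aw)   # x_{n+1} = y_n - lambda_n * (A y_n - A w_n)
```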

5. Numerical Experiments

In this section, we give two numerical examples to illustrate the effectiveness and feasibility of our algorithms and compare them with Algorithm 1 (RSEGM) and Algorithm 2 (RTEGM). We denote Algorithm 3 with $N = 1, 2, 3$ by IRSEGM, 2-MIRSEGM and 3-MIRSEGM, respectively, and Algorithm 4 with $N = 1, 2, 3$ by IRTEGM, 2-MIRTEGM and 3-MIRTEGM, respectively. All programs were written in Matlab 9.0 and run on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (1.19 GHz) and 16.0 GB of RAM.
Example 1.
Let $H = \mathbb{R}$ and $C = [-2, 5]$. Let $A$ be the function defined by
$$Ax := x + \sin x$$
for each $x \in \mathbb{R}$. It is easy to see that $A$ is monotone and Lipschitz continuous. Let $F = I$.
Choose $x_0 = 1$, $\alpha_i = 0.1$ and $\sigma_{i,n} = n^{-2}$ for IRSEGM, 2-MIRSEGM, 3-MIRSEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM. Choose $\mu = 0.6$ and $\beta_n = n^{-3/4}$ for each algorithm. It is obvious that $\mathrm{VI}(C, A) = \{0\}$, and hence $x^* = 0$ is the unique solution of HVIP (4). We use $\|x_n - x^*\| \le 10^{-6}$ as the stopping criterion. The numerical results are shown in Table 1 and Table 2. From these tables, we can easily see that the number of iterations of our algorithms is 10–40% smaller than that of RSEGM and RTEGM. Convergence of our algorithms is also much faster than that of RSEGM and RTEGM in terms of elapsed time.
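For reference, a driver for this example built on the hypothetical mirsegm() sketch from Section 3 might look as follows (the authors' actual experiments were written in Matlab):

```python
import numpy as np

# Example 1: H = R, C = [-2, 5], A(x) = x + sin(x), F = I.
A = lambda x: x + np.sin(x)
F = lambda x: x
proj_C = lambda x: np.clip(x, -2.0, 5.0)

x = mirsegm(A, F, proj_C,
            x0=np.array([1.0]), x1=np.array([1.0]),
            N=3, lam=0.5, mu=0.6, tol=1e-6)
print(x)  # should approach the unique solution x* = 0
```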
Example 2.
Let $H = \mathbb{R}^s$. We consider the HpHard problem [20,21]. Let $A: \mathbb{R}^s \to \mathbb{R}^s$ be the mapping defined by
$$Ax := Mx + q$$
for each $x \in \mathbb{R}^s$, where
$$M = B B^T + S + D,$$
$B$ is a matrix in $\mathbb{R}^{s \times s}$, $S$ is a skew-symmetric matrix in $\mathbb{R}^{s \times s}$, $D$ is a diagonal matrix in $\mathbb{R}^{s \times s}$ whose diagonal entries are positive, and $q \in \mathbb{R}^s$ is a vector. Thus, $M$ is positive definite. Let $C$ be the set defined by
$$C = \{ (x^{(1)}, x^{(2)}, \ldots, x^{(s)})^T \in \mathbb{R}^s : -2 \le x^{(i)} \le 5, \ i = 1, 2, \ldots, s \}.$$
It is clear that $A$ is monotone and Lipschitz continuous. Let $F = I$. It is obvious that $\mathrm{VI}(C, A) = \{(0, 0, \ldots, 0)^T\}$, and hence $x^* = (0, 0, \ldots, 0)^T$ is the unique solution of HVIP (4).
For the experiments, all the entries of $B$ and $S$ are generated randomly and uniformly in $(-2, 2)$, the diagonal entries of $D$ are generated randomly and uniformly in $(0, 2)$, and $q = (0, 0, \ldots, 0)^T$. We choose $x_0 = (1, 1, \ldots, 1)^T$, $\alpha_i = 0.1$ and $\sigma_{i,n} = n^{-2}$ for IRSEGM, 2-MIRSEGM, 3-MIRSEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM, and choose $x_1 = (1, 1, \ldots, 1)^T$, $\mu = 0.6$, $\lambda_1 = 0.01$ and $\beta_n = n^{-3/4}$ for each algorithm. The numerical results are shown in Figures 1–6. From these figures, we see that the algorithms we proposed have advantages over RSEGM and RTEGM.
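A sketch of this problem setup in Python, under the same illustrative assumptions as the earlier snippets:

```python
import numpy as np

def hphard_operator(s, seed=0):
    """Build A(x) = M x + q with M = B B^T + S + D positive definite:
    B, S entries uniform in (-2, 2), S skew-symmetric, D positive diagonal."""
    rng = np.random.default_rng(seed)
    B = rng.uniform(-2.0, 2.0, size=(s, s))
    T = rng.uniform(-2.0, 2.0, size=(s, s))
    S = (T - T.T) / 2                      # skew-symmetric, entries in (-2, 2)
    D = np.diag(rng.uniform(0.0, 2.0, size=s))
    M = B @ B.T + S + D
    q = np.zeros(s)
    return lambda x: M @ x + q

A = hphard_operator(20)                    # s = 20, as in Figure 1
proj_C = lambda x: np.clip(x, -2.0, 5.0)   # C = [-2, 5]^s
```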

6. Conclusions

In this paper, we constructed a multi-step inertial regularized subgradient extragradient method and a multi-step inertial regularized Tseng's extragradient method for solving HVIP (4) in a Hilbert space when $F$ is a generalized Lipschitzian and hemicontinuous mapping, building on the multi-step inertial technique and on Algorithm 1 (RSEGM) and Algorithm 2 (RTEGM). We presented two strong convergence theorems. Finally, we gave some numerical experiments to show the effectiveness and feasibility of our new iterative methods. From the numerical results, we can clearly see that our methods have advantages over Algorithms 1 and 2.
Our Algorithms 3 and 4 extend and improve Algorithms 1 and 2 in the following ways:
(i)
The inertial method is used in Algorithms 3 and 4.
(ii)
The Lipschitzian mapping F is generalized to a generalized Lipschitzian and hemicontinuous mapping.
In other words, if we let $\alpha_i = 0$ and let $F$ be a Lipschitzian mapping, then Algorithm 3 (respectively, Algorithm 4) reduces to Algorithm 1 (respectively, Algorithm 2).

Author Contributions

Conceptualization, J.-C.Y.; Data curation, B.J.; Formal analysis, Y.W.; Funding acquisition, Y.W. and J.-C.Y.; Investigation, B.J.; Methodology, B.J. and Y.W.; Resources, J.-C.Y.; Supervision, Y.W. and J.-C.Y.; Writing–original draft, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 11671365.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  2. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  3. Hieu, D.V.; Muu, L.D.; Duong, H.N.; Thai, B.H. Strong convergence of new iterative projection methods with regularization for solving monotone variational inequalities in Hilbert spaces. Math. Meth. Appl. Sci. 2020, 43, 9745–9765.
  4. Hieu, D.V.; Anh, P.K.; Muu, L.D. Strong convergence of subgradient extragradient method with regularization for solving variational inequalities. Optim. Eng. 2020.
  5. Maingé, P.E. Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set-Valued Anal. 2007, 15, 67–79.
  6. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704.
  7. Dong, Q.L.; Huang, J.Z.; Li, X.H.; Cho, Y.J.; Rassias, T.M. MiKM: Multi-step inertial Krasnosel'skiǐ–Mann algorithm and its applications. J. Glob. Optim. 2019, 73, 801–824.
  8. Pan, C.; Wang, Y. Convergence theorems for modified inertial viscosity splitting methods in Banach spaces. Mathematics 2019, 7, 379.
  9. Ceng, L.C.; Qin, X.; Shehu, Y.; Yao, J.C. Mildly inertial subgradient extragradient method for variational inequalities involving an asymptotically nonexpansive and finitely many nonexpansive mappings. Mathematics 2019, 7, 881.
  10. Tian, M.; Jiang, B.N. Inertial Haugazeau's hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization 2021, 70, 987–1007.
  11. Ceng, L.C.; Petruşel, A.; Qin, X.; Yao, J.C. Two inertial subgradient extragradient algorithms for variational inequalities with fixed-point constraints. Optimization 2021, 70, 1337–1358.
  12. Shehu, Y.; Liu, L.; Mu, X.; Dong, Q.L. Analysis of versions of relaxed inertial projection and contraction method. Appl. Numer. Math. 2021, 165, 1–21.
  13. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  14. Zhou, H.; Zhou, Y.; Feng, G. Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 2015, 68.
  15. Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412.
  16. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  17. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302.
  18. Liu, L.S. Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces. J. Math. Anal. Appl. 1995, 194, 114–125.
  19. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  20. Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algorithms 2019, 80, 741–752.
  21. Malitsky, Y.V. Projected reflected gradient methods for monotone variational inequalities. SIAM J. Optim. 2015, 25, 502–520.
Figure 1. Comparison of RSEGM, IRSEGM, 2-MIRSEGM and 3-MIRSEGM in Example 2 with s = 20.
Figure 2. Comparison of RTEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM in Example 2 with s = 20.
Figure 3. Comparison of RSEGM, IRSEGM, 2-MIRSEGM and 3-MIRSEGM in Example 2 with s = 30.
Figure 4. Comparison of RTEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM in Example 2 with s = 30.
Figure 5. Comparison of RSEGM, IRSEGM, 2-MIRSEGM and 3-MIRSEGM in Example 2 with s = 40.
Figure 6. Comparison of RTEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM in Example 2 with s = 40.
Table 1. Numerical results of RSEGM, IRSEGM, 2-MIRSEGM and 3-MIRSEGM for Example 1.

x_1 | λ_1  | RSEGM Iter. | RSEGM Time [s] | IRSEGM Iter. | IRSEGM Time [s] | 2-MIRSEGM Iter. | 2-MIRSEGM Time [s] | 3-MIRSEGM Iter. | 3-MIRSEGM Time [s]
1   | 0.50 | 49  | 0.6557 | 44  | 0.6287 | 34  | 0.5419 | 28 | 0.4817
1   | 0.10 | 76  | 0.9035 | 69  | 0.8155 | 57  | 0.7259 | 42 | 0.6009
1   | 0.05 | 143 | 1.3715 | 128 | 1.3037 | 111 | 1.1908 | 87 | 0.9745
2   | 0.50 | 51  | 0.6835 | 46  | 0.6415 | 35  | 0.5482 | 28 | 0.4879
2   | 0.10 | 80  | 0.9425 | 73  | 0.8266 | 60  | 0.7397 | 37 | 0.5490
2   | 0.05 | 151 | 1.4822 | 136 | 1.3876 | 119 | 1.2319 | 97 | 1.0492

Table 2. Numerical results of RTEGM, IRTEGM, 2-MIRTEGM and 3-MIRTEGM for Example 1.

x_1 | λ_1  | RTEGM Iter. | RTEGM Time [s] | IRTEGM Iter. | IRTEGM Time [s] | 2-MIRTEGM Iter. | 2-MIRTEGM Time [s] | 3-MIRTEGM Iter. | 3-MIRTEGM Time [s]
1   | 0.50 | 49  | 0.6543 | 44  | 0.6119 | 34  | 0.5208 | 34 | 0.5178
1   | 0.10 | 76  | 0.8435 | 69  | 0.7972 | 57  | 0.7043 | 38 | 0.5724
1   | 0.05 | 143 | 1.4172 | 123 | 1.2759 | 111 | 1.1249 | 88 | 0.9873
2   | 0.50 | 51  | 0.6771 | 46  | 0.6325 | 35  | 0.5372 | 34 | 0.5230
2   | 0.10 | 80  | 0.9263 | 73  | 0.8278 | 61  | 0.7532 | 43 | 0.6107
2   | 0.05 | 151 | 1.4739 | 137 | 1.3962 | 119 | 1.2103 | 98 | 1.0871