Article

A Relaxed Inertial Tseng’s Extragradient Method for Solving Split Variational Inequalities with Multiple Output Sets

by
Timilehin Opeyemi Alakoya
* and
Oluwatosin Temitope Mewomo
School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4041, South Africa
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(2), 386; https://doi.org/10.3390/math11020386
Submission received: 12 December 2022 / Revised: 27 December 2022 / Accepted: 5 January 2023 / Published: 11 January 2023
(This article belongs to the Special Issue Advances in Fixed Point Theory and Its Applications)

Abstract:
Recently, the split inverse problem has received significant research attention owing to its applications in diverse fields. In this paper, we study a new class of split inverse problems called the split variational inequality problem with multiple output sets. We propose a new Tseng extragradient method with self-adaptive step sizes for approximating the solution of the problem in the framework of Hilbert spaces when the cost operators are pseudomonotone and non-Lipschitz. We point out that although the cost operators are non-Lipschitz, our proposed method does not involve any linesearch procedure for its implementation. Instead, we employ a more efficient self-adaptive step size technique with known parameters. In addition, we employ the relaxation method and the inertial technique to improve the convergence properties of the algorithm. Moreover, under some mild conditions on the control parameters and without knowledge of the operators' norms, we prove that the sequence generated by our proposed method converges strongly to a minimum-norm solution of the problem. Finally, we apply our result to study certain classes of optimization problems, and we present several numerical experiments to demonstrate the applicability of our proposed method. Several existing results in this direction can be viewed as special cases of the results obtained in this study.

1. Introduction

Let $H$ be a real Hilbert space endowed with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty, closed and convex subset of $H$, and let $A : H \to H$ be an operator. Recall that the variational inequality problem (VIP) is formulated as finding an element $p \in C$ such that
$$\langle x - p, Ap \rangle \geq 0, \quad \forall x \in C. \quad (1)$$
The solution set of the VIP (1) is denoted by $VI(C, A)$. Fichera [1] and Stampacchia [2] independently introduced and initiated the study of variational inequality theory. The variational inequality model is known to provide a general and useful framework for solving several problems in engineering, optimal control, data science, mathematical programming, economics, etc. (see [3,4,5,6,7,8] and the references therein). In recent times, the VIP has received great research attention owing to its several applications in diverse fields, such as economics, operations research, optimization theory, structural analysis, science and engineering (see [9,10,11,12,13,14] and the references therein). Several methods have been proposed and analyzed for solving the VIP (see [15,16,17,18,19] and the references therein).
One of the well-known and highly efficient methods is the Tseng extragradient method [20] (also known as the forward-backward-forward algorithm). The method is a two-step projection iterative method that requires only a single computation of the projection onto the feasible set per iteration. Several authors have modified and improved the Tseng extragradient method to approximate the solution of the VIP (1) (for instance, see [19,21,22,23] and the references therein).
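To make this concrete, the following is a minimal sketch of the classical fixed-step Tseng (forward-backward-forward) iteration for the VIP (1) in $\mathbb{R}^m$. The toy operator, the box feasible set and all names are ours for illustration; the fixed step size $\lambda < 1/L$ is the textbook variant for $L$-Lipschitz operators, not the self-adaptive rule developed later in this paper.

```python
import numpy as np

def tseng_extragradient(A, proj_C, x0, lam=0.1, tol=1e-8, max_iter=10_000):
    """Fixed-step Tseng (forward-backward-forward) method for VI(C, A).

    A      : operator R^m -> R^m
    proj_C : metric projection onto the feasible set C
    lam    : step size; lam < 1/L with L the Lipschitz constant of A
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - lam * A(x))        # single projection per iteration
        x_new = y - lam * (A(y) - A(x))   # correction step needs no projection
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: A(x) = Mx with M positive semidefinite, C the unit box.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
A = lambda x: M @ x
proj_C = lambda x: np.clip(x, -1.0, 1.0)
print(tseng_extragradient(A, proj_C, [0.9, -0.5]))  # tends to the solution 0
```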
Another active area of research interest in recent years is the split inverse problem (SIP). The SIP finds applications in various fields, such as in medical image reconstruction, intensity-modulated radiation therapy, signal processing, phase retrieval, data compression, etc. (for instance, see [24,25,26,27]). The SIP model is presented as follows:
Find $\hat{x} \in H_1$ that solves IP$_1$ (2)
such that
$$\hat{y} := T\hat{x} \in H_2 \text{ solves IP}_2, \quad (3)$$
where $H_1$ and $H_2$ are real Hilbert spaces, IP$_1$ denotes an inverse problem formulated in $H_1$, IP$_2$ denotes an inverse problem formulated in $H_2$, and $T : H_1 \to H_2$ is a bounded linear operator.
The first instance of the SIP, called the split feasibility problem (SFP), was introduced in 1994 by Censor and Elfving [26] for modeling inverse problems that arise from medical image reconstruction. The SFP has numerous areas of applications, for instance, in signal processing, biomedical engineering, control theory, approximation theory, geophysics, communications, etc. [25,27,28]. The SFP is formulated as follows:
$$\text{Find } \hat{x} \in C \text{ such that } \hat{y} = T\hat{x} \in Q, \quad (4)$$
where $C$ and $Q$ are nonempty, closed and convex subsets of Hilbert spaces $H_1$ and $H_2$, respectively, and $T : H_1 \to H_2$ is a bounded linear operator.
A well-known method for solving the SFP is the CQ method proposed by Byrne [29]. The CQ method has been improved and extended by several researchers. Moreover, many authors have proposed and analyzed several other iterative methods for approximating the solution of the SFP (4) in both Hilbert and Banach spaces (for instance, see [25,27,28,30,31]).
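As a point of reference for the methods discussed below, here is a sketch of the CQ iteration $x_{n+1} = P_C(x_n - \gamma T^*(I - P_Q)Tx_n)$ in matrix form; the step size choice $\gamma = 1/\|T\|^2 \in (0, 2/\|T\|^2)$ and the toy sets are our illustrative assumptions.

```python
import numpy as np

def cq_method(proj_C, proj_Q, T, x0, gamma=None, tol=1e-8, max_iter=10_000):
    """Byrne's CQ method for the SFP: find x in C with Tx in Q."""
    if gamma is None:
        gamma = 1.0 / np.linalg.norm(T, 2) ** 2   # any gamma in (0, 2/||T||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Tx = T @ x
        # gradient-projection step for 0.5*||(I - P_Q)Tx||^2 over C
        x_new = proj_C(x - gamma * T.T @ (Tx - proj_Q(Tx)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: C = closed unit ball in R^3, Q = nonnegative orthant in R^2.
T = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])
proj_C = lambda x: x / max(1.0, np.linalg.norm(x))
proj_Q = lambda y: np.maximum(y, 0.0)
print(cq_method(proj_C, proj_Q, T, [1.0, -2.0, 0.5]))
```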
Censor et al. [32] introduced an important generalization of the SFP called the split variational inequality problem (SVIP). The SVIP is defined as follows:
Find $\hat{x} \in C$ that solves $\langle A_1\hat{x}, x - \hat{x}\rangle \geq 0, \ \forall x \in C$ (5)
such that
$$\hat{y} = T\hat{x} \in H_2 \text{ solves } \langle A_2\hat{y}, y - \hat{y}\rangle \geq 0, \ \forall y \in Q, \quad (6)$$
where $A_1 : H_1 \to H_1$ and $A_2 : H_2 \to H_2$ are single-valued operators. Many authors have proposed and analyzed several iterative techniques for solving the SVIP (e.g., see [33,34,35,36]).
Very recently, Reich and Tuyen [37] introduced and studied a new split inverse problem called the split feasibility problem with multiple output sets (SFPMOS) in the framework of Hilbert spaces. Let $C$ and $Q_i$ be nonempty, closed and convex subsets of Hilbert spaces $H$ and $H_i$, $i = 1, 2, \ldots, N$, respectively. Let $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, be bounded linear operators. The SFPMOS is formulated as follows: find an element $u \in H$ such that
$$u \in \Gamma := C \cap \left(\bigcap_{i=1}^{N} T_i^{-1}(Q_i)\right). \quad (7)$$
Reich and Tuyen [38] proposed and analyzed two iterative methods for solving the SFPMOS (7) in the framework of Hilbert spaces. The proposed algorithms are presented as follows:
$$x_{n+1} = P_C\left(x_n - \gamma_n\sum_{i=1}^{N}T_i^*(I - P_{Q_i})T_i x_n\right), \quad (8)$$
and
$$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)P_C\left(x_n - \gamma_n\sum_{i=1}^{N}T_i^*(I - P_{Q_i})T_i x_n\right), \quad (9)$$
where $f : C \to C$ is a strict contraction, $\{\gamma_n\} \subset (0, +\infty)$ and $\{\alpha_n\} \subset (0, 1)$. The authors obtained weak and strong convergence results for Algorithm (8) and Algorithm (9), respectively.
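For intuition, one step of scheme (8) can be sketched as follows in finite dimensions; the helper name and the way $\gamma$ is passed in are ours, and $\gamma_n$ must be selected according to the conditions imposed in [38].

```python
import numpy as np

def sfpmos_step(x, proj_C, proj_Qs, Ts, gamma):
    """One iteration of (8): x+ = P_C(x - gamma * sum_i Ti^*(I - P_Qi)Ti x)."""
    g = sum(T.T @ (T @ x - proj_Q(T @ x)) for T, proj_Q in zip(Ts, proj_Qs))
    return proj_C(x - gamma * g)
```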
Motivated by the importance and several applications of split inverse problems, in this paper, we examine a new class of split inverse problems called the split variational inequality problem with multiple output sets. Let $H, H_i$, $i = 1, 2, \ldots, N$, be real Hilbert spaces and let $C, C_i$ be nonempty, closed and convex subsets of $H$ and $H_i$, $i = 1, 2, \ldots, N$, respectively. Let $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, be bounded linear operators and let $A : H \to H$, $A_i : H_i \to H_i$, $i = 1, 2, \ldots, N$, be mappings. The split variational inequality problem with multiple output sets (SVIPMOS) is formulated as finding a point $x^* \in C$ such that
$$x^* \in \Omega := VI(C, A) \cap \left(\bigcap_{i=1}^{N}T_i^{-1}VI(C_i, A_i)\right). \quad (10)$$
Observe that the SVIPMOS (10) is a more general problem than the SFPMOS (7).
In recent times, developing algorithms with high rates of convergence for solving optimization problems has become of great interest to researchers. Two important techniques are generally employed to improve the rate of convergence of iterative methods: the inertial technique and the relaxation technique. The inertial technique, first introduced by Polyak [39], originates from an implicit time discretization method (the heavy ball method) of second-order dynamical systems. The main feature of an inertial algorithm is that it uses the previous two iterates to generate the next iterate. We note that this small change can significantly improve the speed of convergence of an iterative method (for instance, see [21,23,40,41,42,43,44,45]). The relaxation method is another well-known technique employed to improve the rate of convergence of iterative methods (see, e.g., [46,47,48]). The influence of these two techniques on the convergence properties of iterative methods was investigated in [46].
In this study, we introduce and analyze the convergence of a relaxed inertial Tseng extragradient method for solving the SVIPMOS (10) in the framework of Hilbert spaces when the cost operators are pseudomonotone and non-Lipschitz. Our proposed algorithm has the following key features:
  • The proposed method does not require the Lipschitz continuity condition often imposed on the cost operator in the literature when solving variational inequality problems. In addition, although the cost operators are non-Lipschitz, the design of our algorithm does not involve any linesearch procedure, which could be time-consuming and expensive to implement.
  • Our proposed method does not require knowledge of the operators’ norm for its implementation. Rather, we employ a very efficient self-adaptive step size technique with known parameters. Moreover, some of the control parameters are relaxed to enlarge the range of values of the step sizes of the algorithm.
  • Our algorithm combines the relaxation method and the inertial techniques to improve its convergence properties.
  • The sequence generated by our proposed method converges strongly to a minimum-norm solution to the SVIPMOS (10). Finding the minimum-norm solution to a problem is very important and useful in several practical problems.
Finally, we apply our result to study certain classes of optimization problems, and we carry out several numerical experiments to illustrate the applicability of our proposed method.
This paper is organized as follows: In Section 2, we present some definitions and lemmas needed to analyze the convergence of the proposed algorithm, while in Section 3, we present the proposed method. In Section 4, we discuss the convergence of the proposed method, and in Section 5, we apply our result to study certain classes of optimization problems. In Section 6, we present several numerical experiments with graphical illustrations. Finally, in Section 7, we give a concluding remark.

2. Preliminaries

Definition 1
([21,22]). An operator $A : H \to H$ is said to be
(i) 
α-strongly monotone, if there exists $\alpha > 0$ such that
$$\langle x - y, Ax - Ay\rangle \geq \alpha\|x - y\|^2, \quad \forall x, y \in H;$$
(ii) 
monotone, if
$$\langle x - y, Ax - Ay\rangle \geq 0, \quad \forall x, y \in H;$$
(iii) 
pseudomonotone, if
$$\langle Ay, x - y\rangle \geq 0 \implies \langle Ax, x - y\rangle \geq 0, \quad \forall x, y \in H;$$
(iv) 
L-Lipschitz continuous, if there exists a constant $L > 0$ such that
$$\|Ax - Ay\| \leq L\|x - y\|, \quad \forall x, y \in H;$$
(v) 
uniformly continuous, if for every $\epsilon > 0$, there exists $\delta = \delta(\epsilon) > 0$ such that
$$\|Ax - Ay\| < \epsilon \quad \text{whenever} \quad \|x - y\| < \delta, \quad \forall x, y \in H;$$
(vi) 
sequentially weakly continuous, if for each sequence $\{x_n\}$, $x_n \rightharpoonup x \in H$ implies $Ax_n \rightharpoonup Ax \in H$.
Remark 1.
It is known that the following implications hold: $(i) \Rightarrow (ii) \Rightarrow (iii)$, but the converses are not generally true. We also note that uniform continuity is a weaker notion than Lipschitz continuity.
It is well known that if $D$ is a convex subset of $H$, then $A : D \to H$ is uniformly continuous if and only if, for every $\epsilon > 0$, there exists a constant $K < +\infty$ such that
$$\|Ax - Ay\| \leq K\|x - y\| + \epsilon, \quad \forall x, y \in D. \quad (11)$$
Lemma 1
([49]). Suppose $\{a_n\}$ is a sequence of nonnegative real numbers, $\{\alpha_n\}$ is a sequence in $(0, 1)$ with $\sum_{n=1}^{\infty}\alpha_n = +\infty$, and $\{b_n\}$ is a sequence of real numbers. Assume that
$$a_{n+1} \leq (1 - \alpha_n)a_n + \alpha_n b_n \quad \text{for all } n \geq 1.$$
If $\limsup_{k\to\infty} b_{n_k} \leq 0$ for every subsequence $\{a_{n_k}\}$ of $\{a_n\}$ satisfying $\liminf_{k\to\infty}(a_{n_k+1} - a_{n_k}) \geq 0$, then $\lim_{n\to\infty}a_n = 0$.
Lemma 2
([50]). Suppose $\{\lambda_n\}$ and $\{\phi_n\}$ are two nonnegative real sequences such that
$$\lambda_{n+1} \leq \lambda_n + \phi_n, \quad \forall n \geq 1.$$
If $\sum_{n=1}^{\infty}\phi_n < +\infty$, then $\lim_{n\to\infty}\lambda_n$ exists.
Lemma 3
([51]). Let $H$ be a real Hilbert space. Then, the following results hold for all $x, y \in H$ and $\delta \in (0, 1)$:
(i) 
$\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y\rangle$;
(ii) 
$\|x + y\|^2 = \|x\|^2 + 2\langle x, y\rangle + \|y\|^2$;
(iii) 
$\|\delta x + (1 - \delta)y\|^2 = \delta\|x\|^2 + (1 - \delta)\|y\|^2 - \delta(1 - \delta)\|x - y\|^2$.
Lemma 4
([52]). Consider the VIP (1) with $C$ being a nonempty, closed, convex subset of a real Hilbert space $H$ and $A : C \to H$ being pseudomonotone and continuous. Then, $p$ is a solution of the VIP (1) if and only if
$$\langle Ax, x - p\rangle \geq 0, \quad \forall x \in C.$$

3. Main Results

In this section, we present our proposed iterative method for solving the SVIPMOS (10). We establish our convergence result for the proposed method under the following conditions:
Let $C, C_i$ be nonempty, closed and convex subsets of real Hilbert spaces $H, H_i$, $i = 1, 2, \ldots, N$, respectively, and let $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, be bounded linear operators with adjoints $T_i^*$. Let $A : H \to H$, $A_i : H_i \to H_i$, $i = 1, 2, \ldots, N$, be uniformly continuous pseudomonotone operators satisfying the following property:
$$\text{whenever } \{T_i x_n\} \subset C_i \text{ and } T_i x_n \rightharpoonup T_i z, \text{ then } \|A_i T_i z\| \leq \liminf_{n\to\infty}\|A_i T_i x_n\|, \quad i = 0, 1, 2, \ldots, N, \quad (12)$$
where $C_0 = C$, $A_0 = A$ and $T_0 = I_H$. Moreover, we assume that the solution set $\Omega \neq \emptyset$ and that the control parameters satisfy the following conditions:
Assumption B:
(A1)
$\{\alpha_n\} \subset (0, 1)$, $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = +\infty$, $\lim_{n\to\infty}\frac{\epsilon_n}{\alpha_n} = 0$, $\{\xi_n\} \subset [a, b] \subset (0, 1)$, $\theta > 0$;
(A2)
$0 < c_i < \overline{c}_i < 1$, $0 < \phi_i < \overline{\phi}_i < 1$, $\lim_{n\to\infty}c_{n,i} = \lim_{n\to\infty}\phi_{n,i} = 0$, $\lambda_{1,i} > 0$, $i = 0, 1, 2, \ldots, N$;
(A3)
$\{\rho_{n,i}\} \subset \mathbb{R}_+$, $\sum_{n=1}^{\infty}\rho_{n,i} < +\infty$, $0 < a_i \leq \delta_{n,i} \leq b_i < 1$, $\sum_{i=0}^{N}\delta_{n,i} = 1$ for each $n \geq 1$.
Now, Algorithm 1 is presented as follows:
Algorithm 1. A Relaxed Inertial Tseng’s Extragradient Method for Solving SVIPMOS (10).
Step 0.
Select initial points $x_0, x_1 \in H$. Let $C_0 = C$, $T_0 = I_H$, $A_0 = A$ and set $n = 1$.
Step 1.
Given the $(n-1)$th and $n$th iterates, choose $\theta_n$ such that $0 \leq \theta_n \leq \hat{\theta}_n$, where
$$\hat{\theta}_n = \begin{cases}\min\left\{\theta, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1},\\ \theta, & \text{otherwise.}\end{cases} \quad (13)$$
Step 2.
Compute
$$w_n = (1 - \alpha_n)(x_n + \theta_n(x_n - x_{n-1})).$$
Step 3.
Compute
$$y_{n,i} = P_{C_i}(T_i w_n - \lambda_{n,i}A_i T_i w_n).$$
Step 4.
Compute
$$u_{n,i} = y_{n,i} - \lambda_{n,i}(A_i y_{n,i} - A_i T_i w_n),$$
$$\lambda_{n+1,i} = \begin{cases}\min\left\{\dfrac{(c_{n,i} + c_i)\|T_i w_n - y_{n,i}\|}{\|A_i T_i w_n - A_i y_{n,i}\|}, \lambda_{n,i} + \rho_{n,i}\right\}, & \text{if } A_i T_i w_n - A_i y_{n,i} \neq 0,\\ \lambda_{n,i} + \rho_{n,i}, & \text{otherwise.}\end{cases}$$
Step 5.
Compute
$$v_n = \sum_{i=0}^{N}\delta_{n,i}\left(w_n + \eta_{n,i}T_i^*(u_{n,i} - T_i w_n)\right),$$
where
$$\eta_{n,i} = \begin{cases}\dfrac{(\phi_{n,i} + \phi_i)\|T_i w_n - u_{n,i}\|^2}{\|T_i^*(T_i w_n - u_{n,i})\|^2}, & \text{if } T_i^*(T_i w_n - u_{n,i}) \neq 0,\\ 0, & \text{otherwise.}\end{cases} \quad (14)$$
Step 6.
Compute
$$x_{n+1} = \xi_n w_n + (1 - \xi_n)v_n.$$
Set $n := n + 1$ and return to Step 1.
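For readers who wish to experiment with Algorithm 1, the following is a minimal finite-dimensional sketch ($H = \mathbb{R}^m$, $H_i = \mathbb{R}^{m_i}$) written under our own naming conventions: all function and parameter names are illustrative, $\theta_n$ is taken equal to $\hat{\theta}_n$, and no stopping criterion is encoded (the loop simply runs a fixed number of iterations).

```python
import numpy as np

def algorithm1(x0, x1, A_ops, T_ops, proj_Cs, alpha, eps, xi, delta,
               c, c_n, phi, phi_n, rho, lam1, theta=1.5, max_iter=500):
    """Finite-dimensional sketch of Algorithm 1 (names and defaults are ours).

    A_ops[i]   : cost operator A_i acting on H_i (A_ops[0] = A)
    T_ops[i]   : matrix of the bounded linear operator T_i (T_ops[0] = I)
    proj_Cs[i] : metric projection onto C_i
    alpha, eps, xi : callables n -> alpha_n, epsilon_n, xi_n
    delta[i]   : weights delta_{n,i} (taken constant in n here, summing to 1)
    c, phi     : constants c_i, phi_i in (0, 1)
    c_n, phi_n : callables (n, i) -> c_{n,i}, phi_{n,i}, tending to 0
    rho        : callable (n, i) -> rho_{n,i}, summable in n
    lam1[i]    : initial step sizes lambda_{1,i} > 0
    """
    lam = np.array(lam1, dtype=float)
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        # Step 1: inertial parameter (we take theta_n = theta_hat_n)
        dx = np.linalg.norm(x - x_prev)
        theta_n = min(theta, eps(n) / dx) if dx > 0 else theta
        # Step 2: relaxed inertial point
        w = (1.0 - alpha(n)) * (x + theta_n * (x - x_prev))
        v = np.zeros_like(w)
        lam_next = lam.copy()
        for i in range(len(A_ops)):
            Tw = T_ops[i] @ w
            # Step 3: projection step in H_i
            y = proj_Cs[i](Tw - lam[i] * A_ops[i](Tw))
            # Step 4: Tseng correction and self-adaptive step size update
            u = y - lam[i] * (A_ops[i](y) - A_ops[i](Tw))
            dA = np.linalg.norm(A_ops[i](Tw) - A_ops[i](y))
            lam_next[i] = (min((c_n(n, i) + c[i]) * np.linalg.norm(Tw - y) / dA,
                               lam[i] + rho(n, i))
                           if dA > 0 else lam[i] + rho(n, i))
            # Step 5 (summand): relaxed step back into H via T_i^*
            Ts_diff = T_ops[i].T @ (Tw - u)       # T_i^*(T_i w_n - u_{n,i})
            denom = np.linalg.norm(Ts_diff) ** 2
            eta = ((phi_n(n, i) + phi[i]) * np.linalg.norm(Tw - u) ** 2 / denom
                   if denom > 0 else 0.0)
            v += delta[i] * (w - eta * Ts_diff)   # = w + eta*T_i^*(u - T_i w)
        lam = lam_next
        # Step 6: convex combination of w_n and v_n
        x_prev, x = x, xi(n) * w + (1.0 - xi(n)) * v
    return x
```

The snippets accompanying Section 6 below indicate how the experiment data can be wired into this sketch.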
Remark 2.
Observe that by condition (A1) together with (13), we have
$$\lim_{n\to\infty}\theta_n\|x_n - x_{n-1}\| = 0 \quad \text{and} \quad \lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0.$$
Indeed, (13) gives $\theta_n\|x_n - x_{n-1}\| \leq \epsilon_n$ for all $n$, while $\epsilon_n \to 0$ and $\frac{\epsilon_n}{\alpha_n} \to 0$ by (A1).
Remark 3.
We also note that while the cost operators A i , i = 0 , 1 , 2 , , N are non-Lipschitz, our method does not require any linesearch procedure, which could be computationally very expensive to implement. Rather, we employ self-adaptive step size techniques that only require simple computations of known parameters per iteration. Moreover, some of the parameters are relaxed to accommodate larger intervals for the step sizes.
Remark 4.
We remark that condition (12) is a weaker assumption than the sequential weak continuity condition. We present the following example, which satisfies condition (12) but is not sequentially weakly continuous.
Let $A : \ell_2(\mathbb{R}) \to \ell_2(\mathbb{R})$ be the operator defined by
$$Ax = \|x\|x, \quad \forall x \in \ell_2(\mathbb{R}).$$
Suppose $\{z_n\} \subset \ell_2(\mathbb{R})$ is such that $z_n \rightharpoonup z$. Then, by the weak lower semicontinuity of the norm, we obtain
$$\|z\| \leq \liminf_{n\to+\infty}\|z_n\|.$$
Thus, we have
$$\|Az\| = \|z\|^2 \leq \left(\liminf_{n\to+\infty}\|z_n\|\right)^2 \leq \liminf_{n\to+\infty}\|z_n\|^2 = \liminf_{n\to+\infty}\|Az_n\|.$$
Therefore, $A$ satisfies condition (12).
On the other hand, to establish that $A$ is not sequentially weakly continuous, choose $z_n = e_n + e_1$, where $\{e_n\}$ is the standard basis of $\ell_2(\mathbb{R})$, that is, $e_n = (0, 0, \ldots, 1, \ldots)$ with 1 in the $n$-th position. It is clear that $z_n \rightharpoonup e_1$ and, for $n \geq 2$, $Az_n = A(e_n + e_1) = \|e_n + e_1\|(e_n + e_1) = \sqrt{2}(e_n + e_1) \rightharpoonup \sqrt{2}e_1$, but $Ae_1 = \|e_1\|e_1 = e_1 \neq \sqrt{2}e_1$. Consequently, $A$ is not sequentially weakly continuous. Therefore, condition (12) is strictly weaker than the sequential weak continuity condition.
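A quick numerical illustration of this remark, using our own finite truncation of $\ell_2$ and an illustrative test vector: $\langle Az_n, y\rangle$ approaches $\sqrt{2}\langle e_1, y\rangle$ rather than $\langle Ae_1, y\rangle = \langle e_1, y\rangle$.

```python
import numpy as np

# Finite truncation of the Remark 4 example: A x = ||x|| x on l2.
A = lambda x: np.linalg.norm(x) * x

dim = 100_000
y = 1.0 / np.arange(1, dim + 1)     # a fixed test vector in l2 (our choice)
e1 = np.zeros(dim); e1[0] = 1.0

for n in (10, 1_000, 99_999):
    zn = e1.copy(); zn[n] += 1.0    # z_n = e_1 + e_n, so z_n -> e_1 weakly
    # <A z_n, y> tends to sqrt(2)*y[0] ~ 1.4142, while <A e_1, y> = y[0] = 1:
    print(n, A(zn) @ y, A(e1) @ y)
```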

4. Convergence Analysis

First, we prove some lemmas needed for our strong convergence theorem.
Lemma 5.
Let $\{\lambda_{n,i}\}$ be the sequence generated by Algorithm 1 such that Assumption B holds. Then, $\{\lambda_{n,i}\}$ is well defined for each $i = 0, 1, 2, \ldots, N$, and $\lim_{n\to\infty}\lambda_{n,i} = \lambda_i \in \left[\min\left\{\frac{c_i}{M_i}, \lambda_{1,i}\right\}, \lambda_{1,i} + \Phi_i\right]$, where $\Phi_i = \sum_{n=1}^{\infty}\rho_{n,i}$ and $M_i > 0$ is the constant defined in the proof.
Proof. 
Observe that since $A_i$ is uniformly continuous for each $i = 0, 1, 2, \ldots, N$, it follows from (11) that for any given $\epsilon_i > 0$, there exists $K_i < +\infty$ such that $\|A_i T_i w_n - A_i y_{n,i}\| \leq K_i\|T_i w_n - y_{n,i}\| + \epsilon_i$. Taking $\epsilon_i = \zeta_i\|T_i w_n - y_{n,i}\|$ for some $\zeta_i \in (0, 1)$ and setting $M_i = K_i + \zeta_i$, for the case $A_i T_i w_n - A_i y_{n,i} \neq 0$ for all $n \geq 1$, we obtain
$$\frac{(c_{n,i} + c_i)\|T_i w_n - y_{n,i}\|}{\|A_i T_i w_n - A_i y_{n,i}\|} \geq \frac{(c_{n,i} + c_i)\|T_i w_n - y_{n,i}\|}{K_i\|T_i w_n - y_{n,i}\| + \epsilon_i} = \frac{(c_{n,i} + c_i)\|T_i w_n - y_{n,i}\|}{(K_i + \zeta_i)\|T_i w_n - y_{n,i}\|} = \frac{c_{n,i} + c_i}{M_i} \geq \frac{c_i}{M_i}.$$
Therefore, by the definition of $\lambda_{n+1,i}$, the sequence $\{\lambda_{n,i}\}$ is bounded below by $\min\left\{\frac{c_i}{M_i}, \lambda_{1,i}\right\}$ and bounded above by $\lambda_{1,i} + \Phi_i$. By Lemma 2, the limit $\lim_{n\to\infty}\lambda_{n,i}$ exists, and we denote $\lambda_i = \lim_{n\to\infty}\lambda_{n,i}$. Clearly, $\lambda_i \in \left[\min\left\{\frac{c_i}{M_i}, \lambda_{1,i}\right\}, \lambda_{1,i} + \Phi_i\right]$ for each $i = 0, 1, 2, \ldots, N$. □
Lemma 6.
If $T_i^*(T_i w_n - u_{n,i}) \neq 0$, then the sequence $\{\eta_{n,i}\}$ defined by (14) has a positive lower bound for each $i = 0, 1, 2, \ldots, N$.
Proof. 
If $T_i^*(T_i w_n - u_{n,i}) \neq 0$, then for each $i = 0, 1, 2, \ldots, N$,
$$\eta_{n,i} = \frac{(\phi_{n,i} + \phi_i)\|T_i w_n - u_{n,i}\|^2}{\|T_i^*(T_i w_n - u_{n,i})\|^2}.$$
Since $T_i$ is a bounded linear operator, we have $\|T_i^*(T_i w_n - u_{n,i})\|^2 \leq \|T_i\|^2\|T_i w_n - u_{n,i}\|^2$, and hence
$$\eta_{n,i} = \frac{(\phi_{n,i} + \phi_i)\|T_i w_n - u_{n,i}\|^2}{\|T_i^*(T_i w_n - u_{n,i})\|^2} \geq \frac{(\phi_{n,i} + \phi_i)\|T_i w_n - u_{n,i}\|^2}{\|T_i\|^2\|T_i w_n - u_{n,i}\|^2} \geq \frac{\phi_i}{\|T_i\|^2},$$
which implies that $\frac{\phi_i}{\|T_i\|^2}$ is a lower bound of $\{\eta_{n,i}\}$ for each $i = 0, 1, 2, \ldots, N$. □
Lemma 7.
Suppose Assumption B of Algorithm 1 holds. Then, there exists a positive integer N such that
ϕ i + ϕ n , i ( 0 , 1 ) , a n d λ n , i ( c n , i + c i ) λ n + 1 , i ( 0 , 1 ) , n N .
Proof. 
Since $0 < \phi_i < \overline{\phi}_i < 1$ and $\lim_{n\to\infty}\phi_{n,i} = 0$ for each $i = 0, 1, 2, \ldots, N$, there exists a positive integer $N_{1,i}$ such that
$$0 < \phi_i + \phi_{n,i} \leq \overline{\phi}_i < 1, \quad \forall n \geq N_{1,i}.$$
Similarly, since $0 < c_i < \overline{c}_i < 1$, $\lim_{n\to\infty}c_{n,i} = 0$ and $\lim_{n\to\infty}\lambda_{n,i} = \lambda_i$ for each $i = 0, 1, 2, \ldots, N$, we have
$$\lim_{n\to\infty}\left(1 - \frac{\lambda_{n,i}(c_{n,i} + c_i)}{\lambda_{n+1,i}}\right) = 1 - c_i > 1 - \overline{c}_i > 0.$$
Thus, for each $i = 0, 1, 2, \ldots, N$, there exists a positive integer $N_{2,i}$ such that
$$1 - \frac{\lambda_{n,i}(c_{n,i} + c_i)}{\lambda_{n+1,i}} > 0, \quad \forall n \geq N_{2,i}.$$
Now, setting $N = \max\{N_{1,i}, N_{2,i} : i = 0, 1, 2, \ldots, N\}$, we have the required result. □
Lemma 8.
Let { x n } be a sequence generated by Algorithm 1 under Assumption B. Then the following inequality holds for all p Ω :
$$\|u_{n,i} - T_i p\|^2 \leq \|T_i w_n - T_i p\|^2 - \left(1 - \frac{\lambda_{n,i}^2}{\lambda_{n+1,i}^2}(c_{n,i} + c_i)^2\right)\|T_i w_n - y_{n,i}\|^2.$$
Proof. 
From the definition of $\lambda_{n+1,i}$, we have
$$\|A_i T_i w_n - A_i y_{n,i}\| \leq \frac{(c_{n,i} + c_i)}{\lambda_{n+1,i}}\|T_i w_n - y_{n,i}\|, \quad \forall n \geq 1, \ i = 0, 1, \ldots, N. \quad (15)$$
Observe that (15) holds both when $A_i T_i w_n - A_i y_{n,i} = 0$ and when $A_i T_i w_n - A_i y_{n,i} \neq 0$. Let $p \in \Omega$. Then, it follows that $T_i p \in VI(C_i, A_i)$, $i = 0, 1, 2, \ldots, N$. Using the definition of $u_{n,i}$ and applying Lemma 3, we have
$$\begin{aligned}
\|u_{n,i} - T_i p\|^2 &= \|y_{n,i} - \lambda_{n,i}(A_i y_{n,i} - A_i T_i w_n) - T_i p\|^2\\
&= \|y_{n,i} - T_i p\|^2 + \lambda_{n,i}^2\|A_i y_{n,i} - A_i T_i w_n\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i} - A_i T_i w_n\rangle\\
&= \|T_i w_n - T_i p\|^2 + \|y_{n,i} - T_i w_n\|^2 + 2\langle y_{n,i} - T_i w_n, T_i w_n - T_i p\rangle\\
&\quad + \lambda_{n,i}^2\|A_i y_{n,i} - A_i T_i w_n\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i} - A_i T_i w_n\rangle\\
&= \|T_i w_n - T_i p\|^2 + \|y_{n,i} - T_i w_n\|^2 - 2\langle y_{n,i} - T_i w_n, y_{n,i} - T_i w_n\rangle + 2\langle y_{n,i} - T_i w_n, y_{n,i} - T_i p\rangle\\
&\quad + \lambda_{n,i}^2\|A_i y_{n,i} - A_i T_i w_n\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i} - A_i T_i w_n\rangle\\
&= \|T_i w_n - T_i p\|^2 - \|y_{n,i} - T_i w_n\|^2 + 2\langle y_{n,i} - T_i w_n, y_{n,i} - T_i p\rangle\\
&\quad + \lambda_{n,i}^2\|A_i y_{n,i} - A_i T_i w_n\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i} - A_i T_i w_n\rangle. \quad (16)
\end{aligned}$$
Since $y_{n,i} = P_{C_i}(T_i w_n - \lambda_{n,i}A_i T_i w_n)$ and $T_i p \in VI(C_i, A_i) \subset C_i$, $i = 0, 1, 2, \ldots, N$, by the characteristic property of the projection map we have
$$\langle y_{n,i} - T_i w_n + \lambda_{n,i}A_i T_i w_n, y_{n,i} - T_i p\rangle \leq 0,$$
which is equivalent to
$$\langle y_{n,i} - T_i w_n, y_{n,i} - T_i p\rangle \leq -\lambda_{n,i}\langle A_i T_i w_n, y_{n,i} - T_i p\rangle. \quad (17)$$
Furthermore, since $y_{n,i} \in C_i$, $i = 0, 1, 2, \ldots, N$, we have
$$\langle A_i T_i p, y_{n,i} - T_i p\rangle \geq 0.$$
By the pseudomonotonicity of $A_i$, it follows that $\langle A_i y_{n,i}, y_{n,i} - T_i p\rangle \geq 0$. Since $\lambda_{n,i} > 0$, $i = 0, 1, 2, \ldots, N$, we obtain
$$\lambda_{n,i}\langle A_i y_{n,i}, y_{n,i} - T_i p\rangle \geq 0. \quad (18)$$
Next, by applying (15), (17) and (18) in (16), we obtain
$$\begin{aligned}
\|u_{n,i} - T_i p\|^2 &\leq \|T_i w_n - T_i p\|^2 - \|y_{n,i} - T_i w_n\|^2 - 2\lambda_{n,i}\langle A_i T_i w_n, y_{n,i} - T_i p\rangle\\
&\quad + \frac{(c_{n,i} + c_i)^2\lambda_{n,i}^2}{\lambda_{n+1,i}^2}\|T_i w_n - y_{n,i}\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i} - A_i T_i w_n\rangle\\
&= \|T_i w_n - T_i p\|^2 - \left(1 - \frac{\lambda_{n,i}^2}{\lambda_{n+1,i}^2}(c_{n,i} + c_i)^2\right)\|T_i w_n - y_{n,i}\|^2 - 2\lambda_{n,i}\langle y_{n,i} - T_i p, A_i y_{n,i}\rangle\\
&\leq \|T_i w_n - T_i p\|^2 - \left(1 - \frac{\lambda_{n,i}^2}{\lambda_{n+1,i}^2}(c_{n,i} + c_i)^2\right)\|T_i w_n - y_{n,i}\|^2, \quad (19)
\end{aligned}$$
which is the required inequality. □
Lemma 9.
Suppose $\{x_n\}$ is a sequence generated by Algorithm 1 such that Assumption B holds. Then, $\{x_n\}$ is bounded.
Proof. 
Let $p \in \Omega$. By the definition of $w_n$ and applying the triangle inequality, we have
$$\begin{aligned}
\|w_n - p\| &= \|(1 - \alpha_n)(x_n + \theta_n(x_n - x_{n-1})) - p\|\\
&= \|(1 - \alpha_n)(x_n - p) + (1 - \alpha_n)\theta_n(x_n - x_{n-1}) - \alpha_n p\|\\
&\leq (1 - \alpha_n)\|x_n - p\| + (1 - \alpha_n)\theta_n\|x_n - x_{n-1}\| + \alpha_n\|p\|\\
&= (1 - \alpha_n)\|x_n - p\| + \alpha_n\left[(1 - \alpha_n)\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \|p\|\right].
\end{aligned}$$
By Remark 2, we obtain
$$\lim_{n\to\infty}\left[(1 - \alpha_n)\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \|p\|\right] = \|p\|.$$
Thus, there exists $M_1 > 0$ such that $(1 - \alpha_n)\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \|p\| \leq M_1$ for all $n \in \mathbb{N}$. It follows that
$$\|w_n - p\| \leq (1 - \alpha_n)\|x_n - p\| + \alpha_n M_1. \quad (20)$$
By Lemma 7, there exists a positive integer $N$ such that $1 - \frac{\lambda_{n,i}^2}{\lambda_{n+1,i}^2}(c_{n,i} + c_i)^2 > 0$ for all $n \geq N$, $i = 0, 1, 2, \ldots, N$. Consequently, it follows from (19) that for all $n \geq N$ and $i = 0, 1, 2, \ldots, N$,
$$\|u_{n,i} - T_i p\|^2 \leq \|T_i w_n - T_i p\|^2. \quad (21)$$
Next, since the function $\|\cdot\|^2$ is convex, we have
$$\|v_n - p\|^2 = \left\|\sum_{i=0}^{N}\delta_{n,i}\left(w_n + \eta_{n,i}T_i^*(u_{n,i} - T_i w_n)\right) - p\right\|^2 \leq \sum_{i=0}^{N}\delta_{n,i}\left\|w_n + \eta_{n,i}T_i^*(u_{n,i} - T_i w_n) - p\right\|^2. \quad (22)$$
By Lemma 7, there exists a positive integer $N$ such that $0 < \phi_{n,i} + \phi_i < 1$, $i = 0, 1, 2, \ldots, N$, for all $n \geq N$. From (22) and by applying Lemma 3 and (21), we obtain
$$\begin{aligned}
\left\|w_n + \eta_{n,i}T_i^*(u_{n,i} - T_i w_n) - p\right\|^2 &= \|w_n - p\|^2 + \eta_{n,i}^2\|T_i^*(u_{n,i} - T_i w_n)\|^2 + 2\eta_{n,i}\langle w_n - p, T_i^*(u_{n,i} - T_i w_n)\rangle\\
&= \|w_n - p\|^2 + \eta_{n,i}^2\|T_i^*(u_{n,i} - T_i w_n)\|^2 + 2\eta_{n,i}\langle T_i w_n - T_i p, u_{n,i} - T_i w_n\rangle\\
&= \|w_n - p\|^2 + \eta_{n,i}^2\|T_i^*(u_{n,i} - T_i w_n)\|^2\\
&\quad + \eta_{n,i}\left[\|u_{n,i} - T_i p\|^2 - \|T_i w_n - T_i p\|^2 - \|u_{n,i} - T_i w_n\|^2\right]\\
&\leq \|w_n - p\|^2 + \eta_{n,i}^2\|T_i^*(u_{n,i} - T_i w_n)\|^2 - \eta_{n,i}\|u_{n,i} - T_i w_n\|^2\\
&= \|w_n - p\|^2 - \eta_{n,i}\left[\|u_{n,i} - T_i w_n\|^2 - \eta_{n,i}\|T_i^*(u_{n,i} - T_i w_n)\|^2\right]. \quad (23)
\end{aligned}$$
If $T_i^*(u_{n,i} - T_i w_n) \neq 0$, then by the definition of $\eta_{n,i}$, we have
$$\|u_{n,i} - T_i w_n\|^2 - \eta_{n,i}\|T_i^*(u_{n,i} - T_i w_n)\|^2 = \left[1 - (\phi_{n,i} + \phi_i)\right]\|T_i w_n - u_{n,i}\|^2 \geq 0. \quad (24)$$
Now, applying (24) in (23) and substituting into (22), we have
$$\|v_n - p\|^2 \leq \|w_n - p\|^2 - \sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}\left[1 - (\phi_{n,i} + \phi_i)\right]\|T_i w_n - u_{n,i}\|^2 \leq \|w_n - p\|^2. \quad (25)$$
Observe that if $T_i^*(u_{n,i} - T_i w_n) = 0$, then $\eta_{n,i} = 0$ and (25) still holds by (23).
Next, using the definition of $x_{n+1}$ and applying (20) and (25), we have for all $n \geq N$,
$$\begin{aligned}
\|x_{n+1} - p\| &= \|\xi_n w_n + (1 - \xi_n)v_n - p\| \leq \xi_n\|w_n - p\| + (1 - \xi_n)\|v_n - p\|\\
&\leq \xi_n\|w_n - p\| + (1 - \xi_n)\|w_n - p\| = \|w_n - p\|\\
&\leq (1 - \alpha_n)\|x_n - p\| + \alpha_n M_1 \leq \max\{\|x_n - p\|, M_1\} \leq \cdots \leq \max\{\|x_N - p\|, M_1\},
\end{aligned}$$
which implies that $\{x_n\}$ is bounded. Hence, $\{w_n\}$, $\{y_{n,i}\}$, $\{u_{n,i}\}$ and $\{v_n\}$ are all bounded. □
Lemma 10.
Let $\{w_n\}$ and $\{v_n\}$ be two sequences generated by Algorithm 1 with subsequences $\{w_{n_k}\}$ and $\{v_{n_k}\}$, respectively, such that $\lim_{k\to\infty}\|w_{n_k} - v_{n_k}\| = 0$. If $w_{n_k} \rightharpoonup z \in H$, then $z \in \Omega$.
Proof. 
From (25), we have
$$\|v_{n_k} - p\|^2 \leq \|w_{n_k} - p\|^2 - \sum_{i=0}^{N}\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2. \quad (26)$$
From the last inequality, we obtain
$$\sum_{i=0}^{N}\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2 \leq \|w_{n_k} - p\|^2 - \|v_{n_k} - p\|^2 \leq \|w_{n_k} - v_{n_k}\|^2 + 2\|w_{n_k} - v_{n_k}\|\|v_{n_k} - p\|. \quad (27)$$
Since by the hypothesis of the lemma $\lim_{k\to\infty}\|w_{n_k} - v_{n_k}\| = 0$, it follows from (27) that
$$\sum_{i=0}^{N}\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2 \to 0, \quad k \to \infty,$$
which implies that
$$\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2 \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N.$$
By the definition of $\eta_{n,i}$, we have
$$\delta_{n_k,i}(\phi_{n_k,i} + \phi_i)\left[1 - (\phi_{n_k,i} + \phi_i)\right]\frac{\|T_i w_{n_k} - u_{n_k,i}\|^4}{\|T_i^*(T_i w_{n_k} - u_{n_k,i})\|^2} \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N.$$
From this, we obtain
$$\frac{\|T_i w_{n_k} - u_{n_k,i}\|^2}{\|T_i^*(T_i w_{n_k} - u_{n_k,i})\|} \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N.$$
Since $\{T_i^*(T_i w_{n_k} - u_{n_k,i})\}$ is bounded, it follows that
$$\|T_i w_{n_k} - u_{n_k,i}\| \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N. \quad (28)$$
Hence, we have
$$\|T_i^*(T_i w_{n_k} - u_{n_k,i})\| \leq \|T_i^*\|\|T_i w_{n_k} - u_{n_k,i}\| = \|T_i\|\|T_i w_{n_k} - u_{n_k,i}\| \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N. \quad (29)$$
From (19), we obtain
$$\left(1 - \frac{\lambda_{n_k,i}^2}{\lambda_{n_k+1,i}^2}(c_{n_k,i} + c_i)^2\right)\|T_i w_{n_k} - y_{n_k,i}\|^2 \leq \|T_i w_{n_k} - T_i p\|^2 - \|u_{n_k,i} - T_i p\|^2 \leq \|T_i w_{n_k} - u_{n_k,i}\|\left(\|T_i w_{n_k} - T_i p\| + \|u_{n_k,i} - T_i p\|\right). \quad (30)$$
By applying (28), it follows from (30) that
$$\left(1 - \frac{\lambda_{n_k,i}^2}{\lambda_{n_k+1,i}^2}(c_{n_k,i} + c_i)^2\right)\|T_i w_{n_k} - y_{n_k,i}\|^2 \to 0, \quad k \to \infty, \ i = 0, 1, \ldots, N.$$
Consequently, we have
$$\|T_i w_{n_k} - y_{n_k,i}\| \to 0, \quad k \to \infty, \ i = 0, 1, \ldots, N. \quad (31)$$
Since $y_{n,i} = P_{C_i}(T_i w_n - \lambda_{n,i}A_i T_i w_n)$, by the characteristic property of the projection map we obtain
$$\langle T_i w_{n_k} - \lambda_{n_k,i}A_i T_i w_{n_k} - y_{n_k,i}, T_i x - y_{n_k,i}\rangle \leq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N,$$
which implies that
$$\frac{1}{\lambda_{n_k,i}}\langle T_i w_{n_k} - y_{n_k,i}, T_i x - y_{n_k,i}\rangle \leq \langle A_i T_i w_{n_k}, T_i x - y_{n_k,i}\rangle, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N.$$
From the last inequality, it follows that
$$\frac{1}{\lambda_{n_k,i}}\langle T_i w_{n_k} - y_{n_k,i}, T_i x - y_{n_k,i}\rangle + \langle A_i T_i w_{n_k}, y_{n_k,i} - T_i w_{n_k}\rangle \leq \langle A_i T_i w_{n_k}, T_i x - T_i w_{n_k}\rangle, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N. \quad (32)$$
By applying (31) and the fact that $\lim_{k\to\infty}\lambda_{n_k,i} = \lambda_i > 0$, from (32) we obtain
$$\liminf_{k\to\infty}\langle A_i T_i w_{n_k}, T_i x - T_i w_{n_k}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N. \quad (33)$$
Observe that
$$\langle A_i y_{n_k,i}, T_i x - y_{n_k,i}\rangle = \langle A_i y_{n_k,i} - A_i T_i w_{n_k}, T_i x - T_i w_{n_k}\rangle + \langle A_i T_i w_{n_k}, T_i x - T_i w_{n_k}\rangle + \langle A_i y_{n_k,i}, T_i w_{n_k} - y_{n_k,i}\rangle. \quad (34)$$
By the uniform continuity of $A_i$, from (31) we obtain
$$\|A_i T_i w_{n_k} - A_i y_{n_k,i}\| \to 0, \quad k \to \infty, \ i = 0, 1, 2, \ldots, N. \quad (35)$$
Using (31) and (35), it follows from (33) and (34) that
$$\liminf_{k\to\infty}\langle A_i y_{n_k,i}, T_i x - y_{n_k,i}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N. \quad (36)$$
Next, let $\{\vartheta_{k,i}\}$ be a decreasing sequence of positive numbers such that $\vartheta_{k,i} \to 0$ as $k \to \infty$, $i = 0, 1, 2, \ldots, N$. For each $k$, let $N_k$ denote the smallest positive integer such that
$$\langle A_i y_{n_j,i}, T_i x - y_{n_j,i}\rangle + \vartheta_{k,i} \geq 0, \quad \forall j \geq N_k, \ T_i x \in C_i, \ i = 0, 1, 2, \ldots, N, \quad (37)$$
where the existence of $N_k$ follows from (36). Since $\{\vartheta_{k,i}\}$ is decreasing, $\{N_k\}$ is increasing. Moreover, since $\{y_{N_k,i}\} \subset C_i$ for each $k$, we can suppose $A_i y_{N_k,i} \neq 0$ (otherwise, $y_{N_k,i} \in VI(C_i, A_i)$, $i = 0, 1, 2, \ldots, N$) and let
$$z_{N_k,i} = \frac{A_i y_{N_k,i}}{\|A_i y_{N_k,i}\|^2}.$$
Then, $\langle A_i y_{N_k,i}, z_{N_k,i}\rangle = 1$ for each $k$, $i = 0, 1, 2, \ldots, N$. From (37), we have
$$\langle A_i y_{N_k,i}, T_i x + \vartheta_{k,i}z_{N_k,i} - y_{N_k,i}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N.$$
It follows from the pseudomonotonicity of $A_i$ that
$$\langle A_i(T_i x + \vartheta_{k,i}z_{N_k,i}), T_i x + \vartheta_{k,i}z_{N_k,i} - y_{N_k,i}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N,$$
which is equivalent to
$$\langle A_i T_i x, T_i x - y_{N_k,i}\rangle \geq \langle A_i T_i x - A_i(T_i x + \vartheta_{k,i}z_{N_k,i}), T_i x + \vartheta_{k,i}z_{N_k,i} - y_{N_k,i}\rangle - \vartheta_{k,i}\langle A_i T_i x, z_{N_k,i}\rangle, \quad \forall\, T_i x \in C_i, \ i = 0, 1, \ldots, N. \quad (38)$$
In order to complete the proof, we need to establish that $\lim_{k\to\infty}\vartheta_{k,i}\|z_{N_k,i}\| = 0$. Since $w_{n_k} \rightharpoonup z$ and $T_i$ is a bounded linear operator for each $i = 0, 1, 2, \ldots, N$, we have $T_i w_{n_k} \rightharpoonup T_i z$, $i = 0, 1, 2, \ldots, N$. Thus, from (31), we obtain $y_{n_k,i} \rightharpoonup T_i z$, $i = 0, 1, 2, \ldots, N$. Since $\{y_{n_k,i}\} \subset C_i$, $i = 0, 1, 2, \ldots, N$, we have $T_i z \in C_i$. If $A_i T_i z = 0$, $i = 0, 1, 2, \ldots, N$, then $T_i z \in VI(C_i, A_i)$, $i = 0, 1, 2, \ldots, N$, which implies that $z \in \Omega$. On the contrary, we suppose $A_i T_i z \neq 0$, $i = 0, 1, 2, \ldots, N$. Since $A_i$ satisfies condition (12), we have for all $i = 0, 1, 2, \ldots, N$,
$$0 < \|A_i T_i z\| \leq \liminf_{k\to\infty}\|A_i y_{n_k,i}\|.$$
Applying the facts that $\{y_{N_k,i}\} \subset \{y_{n_k,i}\}$ and $\vartheta_{k,i} \to 0$ as $k \to \infty$, $i = 0, 1, 2, \ldots, N$, we have
$$0 \leq \limsup_{k\to\infty}\vartheta_{k,i}\|z_{N_k,i}\| = \limsup_{k\to\infty}\frac{\vartheta_{k,i}}{\|A_i y_{N_k,i}\|} \leq \frac{\limsup_{k\to\infty}\vartheta_{k,i}}{\liminf_{k\to\infty}\|A_i y_{n_k,i}\|} = 0,$$
which implies that $\lim_{k\to\infty}\vartheta_{k,i}\|z_{N_k,i}\| = 0$. Applying the facts that $A_i$ is continuous, $\{y_{N_k,i}\}$ and $\{z_{N_k,i}\}$ are bounded and $\lim_{k\to\infty}\vartheta_{k,i}z_{N_k,i} = 0$, from (38) we get
$$\liminf_{k\to\infty}\langle A_i T_i x, T_i x - y_{N_k,i}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N.$$
From the last inequality, we have
$$\langle A_i T_i x, T_i x - T_i z\rangle = \lim_{k\to\infty}\langle A_i T_i x, T_i x - y_{N_k,i}\rangle = \liminf_{k\to\infty}\langle A_i T_i x, T_i x - y_{N_k,i}\rangle \geq 0, \quad \forall\, T_i x \in C_i, \ i = 0, 1, 2, \ldots, N.$$
By Lemma 4, we obtain
$$T_i z \in VI(C_i, A_i), \quad i = 0, 1, 2, \ldots, N,$$
which implies that
$$z \in T_i^{-1}VI(C_i, A_i), \quad i = 0, 1, 2, \ldots, N.$$
Consequently, we have $z \in \bigcap_{i=0}^{N}T_i^{-1}VI(C_i, A_i)$, which implies that $z \in \Omega$ as desired. □
Lemma 11.
Suppose { x n } is a sequence generated by Algorithm 1 under Assumption B. Then, the following inequality holds for all p Ω :
$$\|x_{n+1} - p\|^2 \leq (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n d_n - (1 - \xi_n)\sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}\left[1 - (\phi_{n,i} + \phi_i)\right]\|T_i w_n - u_{n,i}\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2,$$
where $d_n = 2\|x_n - p\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \theta_n\|x_n - x_{n-1}\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + 2\|p\|\|w_n - x_{n+1}\| + 2\langle p, p - x_{n+1}\rangle$.
Proof. 
Let $p \in \Omega$. By applying Lemma 3 together with the definition of $w_n$, we obtain
$$\begin{aligned}
\|w_n - p\|^2 &= \|(1 - \alpha_n)(x_n - p) + (1 - \alpha_n)\theta_n(x_n - x_{n-1}) - \alpha_n p\|^2\\
&\leq \|(1 - \alpha_n)(x_n - p) + (1 - \alpha_n)\theta_n(x_n - x_{n-1})\|^2 + 2\alpha_n\langle -p, w_n - p\rangle\\
&\leq (1 - \alpha_n)^2\|x_n - p\|^2 + 2(1 - \alpha_n)^2\theta_n\|x_n - p\|\|x_n - x_{n-1}\| + (1 - \alpha_n)^2\theta_n^2\|x_n - x_{n-1}\|^2\\
&\quad + 2\alpha_n\langle -p, w_n - x_{n+1}\rangle + 2\alpha_n\langle -p, x_{n+1} - p\rangle\\
&\leq (1 - \alpha_n)\|x_n - p\|^2 + 2\theta_n\|x_n - p\|\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2\\
&\quad + 2\alpha_n\|p\|\|w_n - x_{n+1}\| + 2\alpha_n\langle p, p - x_{n+1}\rangle. \quad (39)
\end{aligned}$$
Now, using the definition of $x_{n+1}$, (25), (39) and applying Lemma 3, we obtain
$$\begin{aligned}
\|x_{n+1} - p\|^2 &= \|\xi_n w_n + (1 - \xi_n)v_n - p\|^2\\
&= \xi_n\|w_n - p\|^2 + (1 - \xi_n)\|v_n - p\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2\\
&\leq \xi_n\|w_n - p\|^2 + (1 - \xi_n)\left[\|w_n - p\|^2 - \sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}[1 - (\phi_{n,i} + \phi_i)]\|T_i w_n - u_{n,i}\|^2\right] - \xi_n(1 - \xi_n)\|w_n - v_n\|^2\\
&= \|w_n - p\|^2 - (1 - \xi_n)\sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}[1 - (\phi_{n,i} + \phi_i)]\|T_i w_n - u_{n,i}\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2\\
&\leq (1 - \alpha_n)\|x_n - p\|^2 + 2\theta_n\|x_n - p\|\|x_n - x_{n-1}\| + \theta_n^2\|x_n - x_{n-1}\|^2 + 2\alpha_n\|p\|\|w_n - x_{n+1}\| + 2\alpha_n\langle p, p - x_{n+1}\rangle\\
&\quad - (1 - \xi_n)\sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}[1 - (\phi_{n,i} + \phi_i)]\|T_i w_n - u_{n,i}\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2\\
&= (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n\Big[2\|x_n - p\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \theta_n\|x_n - x_{n-1}\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\|\\
&\quad + 2\|p\|\|w_n - x_{n+1}\| + 2\langle p, p - x_{n+1}\rangle\Big] - (1 - \xi_n)\sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}[1 - (\phi_{n,i} + \phi_i)]\|T_i w_n - u_{n,i}\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2\\
&= (1 - \alpha_n)\|x_n - p\|^2 + \alpha_n d_n - (1 - \xi_n)\sum_{i=0}^{N}\delta_{n,i}\eta_{n,i}[1 - (\phi_{n,i} + \phi_i)]\|T_i w_n - u_{n,i}\|^2 - \xi_n(1 - \xi_n)\|w_n - v_n\|^2,
\end{aligned}$$
where $d_n = 2\|x_n - p\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \theta_n\|x_n - x_{n-1}\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + 2\|p\|\|w_n - x_{n+1}\| + 2\langle p, p - x_{n+1}\rangle$. This is the required inequality. □
Theorem 1.
Let $\{x_n\}$ be a sequence generated by Algorithm 1 under Assumption B. Then, $\{x_n\}$ converges strongly to $\hat{x} \in \Omega$, where $\|\hat{x}\| = \min\{\|p\| : p \in \Omega\}$.
Proof. 
Let $\hat{x} = P_\Omega(0)$; that is, $\|\hat{x}\| = \min\{\|p\| : p \in \Omega\}$. Then, from Lemma 11, we obtain
$$\|x_{n+1} - \hat{x}\|^2 \leq (1 - \alpha_n)\|x_n - \hat{x}\|^2 + \alpha_n\hat{d}_n, \quad (40)$$
where $\hat{d}_n = 2\|x_n - \hat{x}\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + \theta_n\|x_n - x_{n-1}\|\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| + 2\|\hat{x}\|\|w_n - x_{n+1}\| + 2\langle\hat{x}, \hat{x} - x_{n+1}\rangle$.
Next, we claim that the sequence $\{\|x_n - \hat{x}\|\}$ converges to zero. To do this, in view of Lemma 1 it suffices to show that $\limsup_{k\to\infty}\hat{d}_{n_k} \leq 0$ for every subsequence $\{\|x_{n_k} - \hat{x}\|\}$ of $\{\|x_n - \hat{x}\|\}$ satisfying
$$\liminf_{k\to\infty}\left(\|x_{n_k+1} - \hat{x}\| - \|x_{n_k} - \hat{x}\|\right) \geq 0. \quad (41)$$
Suppose that $\{\|x_{n_k} - \hat{x}\|\}$ is a subsequence of $\{\|x_n - \hat{x}\|\}$ such that (41) holds. Again, from Lemma 11, we obtain
$$(1 - \xi_{n_k})\sum_{i=0}^{N}\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2 + \xi_{n_k}(1 - \xi_{n_k})\|w_{n_k} - v_{n_k}\|^2 \leq (1 - \alpha_{n_k})\|x_{n_k} - \hat{x}\|^2 - \|x_{n_k+1} - \hat{x}\|^2 + \alpha_{n_k}\hat{d}_{n_k}.$$
By (41), Remark 2 and the fact that $\lim_{k\to\infty}\alpha_{n_k} = 0$, we obtain
$$(1 - \xi_{n_k})\sum_{i=0}^{N}\delta_{n_k,i}\eta_{n_k,i}\left[1 - (\phi_{n_k,i} + \phi_i)\right]\|T_i w_{n_k} - u_{n_k,i}\|^2 + \xi_{n_k}(1 - \xi_{n_k})\|w_{n_k} - v_{n_k}\|^2 \to 0, \quad k \to \infty.$$
Consequently, we obtain
$$\lim_{k\to\infty}\|w_{n_k} - v_{n_k}\| = 0; \qquad \lim_{k\to\infty}\|T_i w_{n_k} - u_{n_k,i}\| = 0, \quad i = 0, 1, 2, \ldots, N. \quad (42)$$
From the definition of $w_n$ and by Remark 2, we have
$$\|w_{n_k} - x_{n_k}\| = \|(1 - \alpha_{n_k})(x_{n_k} + \theta_{n_k}(x_{n_k} - x_{n_k-1})) - x_{n_k}\| \leq \alpha_{n_k}\|x_{n_k}\| + (1 - \alpha_{n_k})\theta_{n_k}\|x_{n_k} - x_{n_k-1}\| \to 0, \quad k \to \infty. \quad (43)$$
Using (42) and (43), we obtain
$$\|v_{n_k} - x_{n_k}\| \to 0, \quad k \to \infty. \quad (44)$$
From the definition of $x_{n+1}$ and by applying (43) and (44), we obtain
$$\|x_{n_k+1} - x_{n_k}\| = \|\xi_{n_k}w_{n_k} + (1 - \xi_{n_k})v_{n_k} - x_{n_k}\| \leq \xi_{n_k}\|w_{n_k} - x_{n_k}\| + (1 - \xi_{n_k})\|v_{n_k} - x_{n_k}\| \to 0, \quad k \to \infty. \quad (45)$$
Next, by combining (43) and (45), we obtain
$$\|w_{n_k} - x_{n_k+1}\| \to 0, \quad k \to \infty. \quad (46)$$
Since $\{x_n\}$ is bounded, the set of weak cluster points $\omega_w(x_n) \neq \emptyset$. We choose an element $x^* \in \omega_w(x_n)$ arbitrarily. Then, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup x^*$. From (43), it follows that $w_{n_k} \rightharpoonup x^*$. Now, by invoking Lemma 10 and applying (42), we obtain $x^* \in \Omega$. Since $x^* \in \omega_w(x_n)$ was selected arbitrarily, it follows that $\omega_w(x_n) \subset \Omega$.
Next, by the boundedness of $\{x_{n_k}\}$, there exists a subsequence $\{x_{n_{k_j}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_j}} \rightharpoonup q$ and
$$\limsup_{k\to\infty}\langle\hat{x}, \hat{x} - x_{n_k}\rangle = \lim_{j\to\infty}\langle\hat{x}, \hat{x} - x_{n_{k_j}}\rangle.$$
Since $\hat{x} = P_\Omega(0)$ and $q \in \omega_w(x_n) \subset \Omega$, it follows from the characteristic property of the metric projection map that
$$\limsup_{k\to\infty}\langle\hat{x}, \hat{x} - x_{n_k}\rangle = \lim_{j\to\infty}\langle\hat{x}, \hat{x} - x_{n_{k_j}}\rangle = \langle\hat{x}, \hat{x} - q\rangle \leq 0. \quad (47)$$
Thus, from (45) and (47), we obtain
$$\limsup_{k\to\infty}\langle\hat{x}, \hat{x} - x_{n_k+1}\rangle \leq 0. \quad (48)$$
Next, by Remark 2, (46) and (48), we have $\limsup_{k\to\infty}\hat{d}_{n_k} \leq 0$. Therefore, by invoking Lemma 1, it follows from (40) that $\{\|x_n - \hat{x}\|\}$ converges to zero as required. □

5. Applications

In this section, we apply our result to study related optimization problems.

5.1. Generalized Split Variational Inequality Problem

First, we apply our result to study and approximate the solution of the generalized split variational inequality problem (see [37]). Let $D_i$ be nonempty, closed and convex subsets of real Hilbert spaces $H_i$, $i = 1, 2, \ldots, N$, and let $S_i : H_i \to H_{i+1}$, $i = 1, 2, \ldots, N-1$, be bounded linear operators such that $S_i \neq 0$. Let $B_i : H_i \to H_i$, $i = 1, 2, \ldots, N$, be single-valued operators. The generalized split variational inequality problem (GSVIP) is formulated as finding a point $x^* \in D_1$ such that
$$x^* \in \Gamma := VI(D_1, B_1) \cap S_1^{-1}(VI(D_2, B_2)) \cap \cdots \cap S_1^{-1}(S_2^{-1}(\cdots S_{N-1}^{-1}(VI(D_N, B_N))\cdots)); \quad (49)$$
that is, $x^* \in D_1$ such that
$$x^* \in VI(D_1, B_1), \quad S_1 x^* \in VI(D_2, B_2), \quad \ldots, \quad S_{N-1}(S_{N-2}\cdots S_1 x^*) \in VI(D_N, B_N).$$
We note that by setting $C = D_1$, $C_i = D_{i+1}$, $A = B_1$, $A_i = B_{i+1}$, $1 \leq i \leq N - 1$, $T_1 = S_1$, $T_2 = S_2 S_1$, $\ldots$, $T_{N-1} = S_{N-1}S_{N-2}\cdots S_1$, the SVIPMOS (10) becomes the GSVIP (49). Consequently, we obtain the following strong convergence theorem for finding the solution of the GSVIP (49) in Hilbert spaces when the cost operators are pseudomonotone and uniformly continuous.
Theorem 2.
Let $D_i$ be nonempty, closed and convex subsets of real Hilbert spaces $H_i$, $i = 1, 2, \ldots, N$, and suppose $S_i : H_i \to H_{i+1}$, $i = 1, 2, \ldots, N-1$, are bounded linear operators with adjoints $S_i^*$ such that $S_i \neq 0$. Let $B_i : H_i \to H_i$, $i = 1, 2, \ldots, N$, be uniformly continuous pseudomonotone operators that satisfy condition (12), and suppose Assumption B of Theorem 1 holds and the solution set $\Gamma \neq \emptyset$. Then, the sequence $\{x_n\}$ generated by the following Algorithm 2 converges in norm to $\hat{x} \in \Gamma$, where $\|\hat{x}\| = \min\{\|p\| : p \in \Gamma\}$.
Algorithm 2. A Relaxed Inertial Tseng’s Extragradient Method for Solving GSVIP (49).
Step 0.
Select initial points $x_0, x_1 \in H_1$. Let $S_0 = I_{H_1}$, $\hat{S}_{i-1} = S_{i-1}S_{i-2}\cdots S_0$, $\hat{S}_{i-1}^* = S_0^*S_1^*\cdots S_{i-1}^*$, $i = 1, 2, \ldots, N$, and set $n = 1$.
Step 1.
Given the $(n-1)$th and $n$th iterates, choose $\theta_n$ such that $0 \leq \theta_n \leq \hat{\theta}_n$, where
$$\hat{\theta}_n = \begin{cases}\min\left\{\theta, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1},\\ \theta, & \text{otherwise.}\end{cases}$$
Step 2.
Compute
$$w_n = (1 - \alpha_n)(x_n + \theta_n(x_n - x_{n-1})).$$
Step 3.
Compute
$$y_{n,i} = P_{D_i}(\hat{S}_{i-1}w_n - \lambda_{n,i}B_i\hat{S}_{i-1}w_n).$$
Step 4.
Compute
$$u_{n,i} = y_{n,i} - \lambda_{n,i}(B_i y_{n,i} - B_i\hat{S}_{i-1}w_n),$$
$$\lambda_{n+1,i} = \begin{cases}\min\left\{\dfrac{(c_{n,i} + c_i)\|\hat{S}_{i-1}w_n - y_{n,i}\|}{\|B_i\hat{S}_{i-1}w_n - B_i y_{n,i}\|}, \lambda_{n,i} + \rho_{n,i}\right\}, & \text{if } B_i\hat{S}_{i-1}w_n - B_i y_{n,i} \neq 0,\\ \lambda_{n,i} + \rho_{n,i}, & \text{otherwise.}\end{cases}$$
Step 5.
Compute
$$v_n = \sum_{i=1}^{N}\delta_{n,i}\left(w_n + \eta_{n,i}\hat{S}_{i-1}^*(u_{n,i} - \hat{S}_{i-1}w_n)\right),$$
where
$$\eta_{n,i} = \begin{cases}\dfrac{(\phi_{n,i} + \phi_i)\|\hat{S}_{i-1}w_n - u_{n,i}\|^2}{\|\hat{S}_{i-1}^*(\hat{S}_{i-1}w_n - u_{n,i})\|^2}, & \text{if } \hat{S}_{i-1}^*(\hat{S}_{i-1}w_n - u_{n,i}) \neq 0,\\ 0, & \text{otherwise.}\end{cases}$$
Step 6.
Compute
$$x_{n+1} = \xi_n w_n + (1 - \xi_n)v_n.$$
Set $n := n + 1$ and return to Step 1.

5.2. Split Convex Minimization Problem with Multiple Output Sets

Let $C$ be a nonempty, closed and convex subset of a real Hilbert space $H$. The convex minimization problem is defined as finding a point $x^* \in C$ such that
$$g(x^*) = \min_{x \in C} g(x), \quad (50)$$
where $g$ is a real-valued convex function. The solution set of Problem (50) is denoted by $\arg\min g$.
Let $C, C_i$ be nonempty, closed and convex subsets of real Hilbert spaces $H, H_i$, $i = 1, 2, \ldots, N$, respectively, and let $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, be bounded linear operators with adjoints $T_i^*$. Let $g : H \to \mathbb{R}$, $g_i : H_i \to \mathbb{R}$ be convex and differentiable functions. In this subsection, we apply our result to find the solution of the following split convex minimization problem with multiple output sets (SCMPMOS): Find $x^* \in C$ such that
$$x^* \in \Psi := \arg\min g \cap \left(\bigcap_{i=1}^{N}T_i^{-1}\arg\min g_i\right). \quad (51)$$
The following lemma is required to establish our next result.
Lemma 12
([53]). Suppose $C$ is a nonempty, closed and convex subset of a real Banach space $E$, and let $g$ be a convex function from $E$ into $\mathbb{R}$. If $g$ is Fréchet differentiable, then $x$ is a solution of Problem (50) if and only if $x \in VI(C, \nabla g)$, where $\nabla g$ is the gradient of $g$.
Applying Theorem 1 and Lemma 12, we obtain the following strong convergence theorem for finding the solution of the SCMPMOS (51) in the framework of Hilbert spaces.
Theorem 3.
Let $C, C_i$ be nonempty, closed and convex subsets of real Hilbert spaces $H, H_i$, $i = 1, 2, \ldots, N$, respectively, and suppose $T_i : H \to H_i$, $i = 1, 2, \ldots, N$, are bounded linear operators with adjoints $T_i^*$. Let $g : H \to \mathbb{R}$, $g_i : H_i \to \mathbb{R}$ be Fréchet differentiable convex functions such that $\nabla g, \nabla g_i$ are uniformly continuous. Suppose that Assumption B of Theorem 1 holds and the solution set $\Psi \neq \emptyset$. Then, the sequence $\{x_n\}$ generated by the following Algorithm 3 converges strongly to $\hat{x} \in \Psi$, where $\|\hat{x}\| = \min\{\|p\| : p \in \Psi\}$.
Algorithm 3. A Relaxed Inertial Tseng’s Extragradient Method for Solving SCMPMOS (51).
Step 0.
Select initial points $x_0, x_1 \in H$. Let $C_0 = C$, $T_0 = I_H$, $g_0 = g$ and set $n = 1$.
Step 1.
Given the $(n-1)$th and $n$th iterates, choose $\theta_n$ such that $0 \leq \theta_n \leq \hat{\theta}_n$, where
$$\hat{\theta}_n = \begin{cases}\min\left\{\theta, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1},\\ \theta, & \text{otherwise.}\end{cases}$$
Step 2.
Compute
$$w_n = (1 - \alpha_n)(x_n + \theta_n(x_n - x_{n-1})).$$
Step 3.
Compute
$$y_{n,i} = P_{C_i}(T_i w_n - \lambda_{n,i}\nabla g_i(T_i w_n)).$$
Step 4.
Compute
$$u_{n,i} = y_{n,i} - \lambda_{n,i}(\nabla g_i(y_{n,i}) - \nabla g_i(T_i w_n)),$$
$$\lambda_{n+1,i} = \begin{cases}\min\left\{\dfrac{(c_{n,i} + c_i)\|T_i w_n - y_{n,i}\|}{\|\nabla g_i(T_i w_n) - \nabla g_i(y_{n,i})\|}, \lambda_{n,i} + \rho_{n,i}\right\}, & \text{if } \nabla g_i(T_i w_n) - \nabla g_i(y_{n,i}) \neq 0,\\ \lambda_{n,i} + \rho_{n,i}, & \text{otherwise.}\end{cases}$$
Step 5.
Compute
$$v_n = \sum_{i=0}^{N}\delta_{n,i}\left(w_n + \eta_{n,i}T_i^*(u_{n,i} - T_i w_n)\right),$$
where
$$\eta_{n,i} = \begin{cases}\dfrac{(\phi_{n,i} + \phi_i)\|T_i w_n - u_{n,i}\|^2}{\|T_i^*(T_i w_n - u_{n,i})\|^2}, & \text{if } T_i^*(T_i w_n - u_{n,i}) \neq 0,\\ 0, & \text{otherwise.}\end{cases}$$
Step 6.
Compute
$$x_{n+1} = \xi_n w_n + (1 - \xi_n)v_n.$$
Set $n := n + 1$ and return to Step 1.
Proof. 
We know that since $g_i$, $i = 0, 1, 2, \ldots, N$, are convex, the gradients $\nabla g_i$ are monotone [53] and, hence, pseudomonotone. Therefore, the required result follows by applying Lemma 12 and taking $A_i = \nabla g_i$ in Theorem 1. □
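As a small illustration of this reduction, with our own hypothetical data: for $g_i(x) = \frac{1}{2}\|x - a_i\|^2$, the gradient is $\nabla g_i(x) = x - a_i$, which is monotone (hence pseudomonotone) and uniformly continuous, so it can be passed directly as a cost operator to a sketch of Algorithm 1 such as the one given in Section 3.

```python
import numpy as np

# Illustrative reduction of Theorem 3: take A_i = grad g_i with
# g_i(x) = 0.5*||x - a_i||^2, so grad g_i(x) = x - a_i. The anchor
# points a_i below are hypothetical.
anchors = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([1.0, 1.0])]
grad_g = [lambda x, a=a: x - a for a in anchors]   # A_i := grad g_i
# grad_g can now play the role of A_ops in the Algorithm 1 sketch.
```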

6. Numerical Experiments

Here, we carry out some numerical experiments to demonstrate the applicability of our proposed method (Algorithm 1). For simplicity, in all the experiments, we consider the case where $N = 5$. All numerical computations were carried out using MATLAB R2021b.
In all the computations, we choose $\alpha_n = \frac{1}{3n+2}$, $\epsilon_n = \frac{5}{(3n+2)^3}$, $\xi_n = \frac{n+1}{2n+1}$, $\theta = 1.50$, $\lambda_{1,i} = i + 1.25$, $c_i = 0.10$, $\phi_i = 0.20$, $\rho_{n,i} = \frac{50}{n^2}$, $\delta_{n,i} = \frac{1}{6}$.
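In the notation of our Algorithm 1 sketch from Section 3, these choices can be encoded as follows; the callable forms, and the illustrative $\phi_{n,i}$, are our own additions (the text above does not specify $\phi_{n,i}$).

```python
# Parameter choices of Section 6, in the callable form used by our sketch.
alpha = lambda n: 1.0 / (3 * n + 2)                 # alpha_n
eps   = lambda n: 5.0 / (3 * n + 2) ** 3            # epsilon_n
xi    = lambda n: (n + 1) / (2 * n + 1)             # xi_n
theta = 1.50
lam1  = [i + 1.25 for i in range(6)]                # lambda_{1,i}, i = 0..5
c     = [0.10] * 6                                  # c_i
phi   = [0.20] * 6                                  # phi_i
rho   = lambda n, i: 50.0 / n ** 2                  # rho_{n,i} (summable in n)
delta = [1.0 / 6.0] * 6                             # delta_{n,i}, sums to 1
c_n   = lambda n, i: 20.0 / n ** 0.1                # one tested choice of c_{n,i}
phi_n = lambda n, i: 1.0 / n                        # illustrative phi_{n,i} -> 0
```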
Now, we consider the following numerical examples both in finite and infinite dimensional Hilbert spaces for the proposed algorithm.
Example 1.
For each $i = 0, 1, \ldots, 5$, we define the feasible set $C_i = \mathbb{R}^m$, $T_i x = \frac{3x}{i+3}$ and $A_i(x) = Mx$, where $M$ is an $m \times m$ matrix whose entries are given by
$$a_{j,k} = \begin{cases} -1, & \text{if } k = m + 1 - j \text{ and } k > j,\\ 1, & \text{if } k = m + 1 - j \text{ and } k \leq j,\\ 0, & \text{otherwise.}\end{cases}$$
We note that M is a Hankel-type matrix with a nonzero reverse diagonal.
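The matrix $M$ can be generated as follows; the placement of the $-1$ and $+1$ entries on the reverse diagonal follows the case split in the definition above, as we read it.

```python
import numpy as np

def hankel_type_M(m):
    """Example 1 matrix: nonzero entries only on the reverse diagonal."""
    M = np.zeros((m, m))
    for j in range(1, m + 1):       # 1-based indices as in the text
        k = m + 1 - j               # reverse-diagonal column index
        M[j - 1, k - 1] = -1.0 if k > j else 1.0
    return M

print(hankel_type_M(4))             # 4x4 instance for inspection
```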
Example 2.
Let $H_i = \mathbb{R}^2$ and $C_i = [-2 - i, 2 + i]^2$, $i = 0, 1, \ldots, 5$. We define $T_i x = \frac{2x}{i+2}$, and the cost operator $A_i : \mathbb{R}^2 \to \mathbb{R}^2$ is defined by
$$A_i(x, y) = (i + 1)\left(xe^y, y\right), \quad i = 0, 1, \ldots, 5.$$
Finally, we consider the last example in infinite dimensional Hilbert spaces.
Example 3.
Let $H_i = \ell_2 := \{x = (x_1, x_2, \ldots, x_j, \ldots) : \sum_{j=1}^{\infty}|x_j|^2 < +\infty\}$, $i = 0, 1, \ldots, 5$. Let $r_i, R_i \in \mathbb{R}_+$ be such that $\frac{R_i}{k_i + 1} < \frac{r_i}{k_i} < r_i < R_i$ for some $k_i > 1$. The feasible sets are defined as follows for each $i = 0, 1, \ldots, 5$:
$$C_i = \{x \in H_i : \|x\| \leq r_i\}.$$
The cost operators $A_i : H_i \to H_i$ are defined by
$$A_i(x) = (R_i - \|x\|)x.$$
Then, the operators $A_i$ are pseudomonotone and uniformly continuous. We choose $R_i = 1.4 + i$, $r_i = 0.8 + i$, $k_i = 1.2 + i$, and we define $T_i x = \frac{4x}{i+4}$.
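A finite truncation of this example is easy to code; the factory below is our own packaging, with the projection onto the ball $C_i$ written explicitly.

```python
import numpy as np

def make_example3(i):
    """Truncated Example 3 data for index i (packaging is ours)."""
    R_i, r_i = 1.4 + i, 0.8 + i
    A_i = lambda x: (R_i - np.linalg.norm(x)) * x      # pseudomonotone cost
    def proj_Ci(x):
        nx = np.linalg.norm(x)
        return x if nx <= r_i else (r_i / nx) * x      # projection onto ball
    T_i = lambda x: 4.0 * x / (i + 4)                  # T_i x = 4x/(i+4)
    return A_i, proj_Ci, T_i
```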
We test Examples 1–3 under the following experiments:
Experiment 1.
In this experiment, we check the behavior of our method by fixing the other parameters and varying $c_{n,i}$ in Example 1. We do this to check the effect of this parameter and the sensitivity of our method to it.
We consider $c_{n,i} \in \left\{0, \frac{20}{n^{0.1}}, \frac{40}{n^{0.01}}, \frac{60}{n^{0.001}}, \frac{80}{n^{0.0001}}\right\}$ with $m = 20$, $m = 40$, $m = 60$ and $m = 80$.
Using $\|x_{n+1} - x_n\| < 10^{-3}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations for each $m$. The numerical results are reported in Figures 1–4 and Table 1.
Experiment 2.
In this experiment, we check the behavior of our method by fixing the other parameters and varying c n , i in Example 2. We do this to check the effects of this parameter and the sensitivity of our method to it.
We consider $c_{n,i} \in \left\{0, \frac{20}{n^{0.1}}, \frac{40}{n^{0.01}}, \frac{60}{n^{0.001}}, \frac{80}{n^{0.0001}}\right\}$ with the following two cases of initial values $x_0$ and $x_1$:
Case I:
$x_0 = (2, 1)$; $x_1 = (0, 3)$;
Case II:
$x_0 = (3, 2)$; $x_1 = (1, 1)$.
Using $\|x_{n+1} - x_n\| < 10^{-3}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. The numerical results are reported in Figures 5 and 6 and Table 2.
Finally, we test Example 3 under the following experiment:
Experiment 3.
In this experiment, we check the behavior of our method by fixing the other parameters and varying $c_{n,i}$ in Example 3. We do this to check the effect of this parameter and the sensitivity of our method to it.
We consider $c_{n,i} \in \left\{0, \frac{20}{n^{0.1}}, \frac{40}{n^{0.01}}, \frac{60}{n^{0.001}}, \frac{80}{n^{0.0001}}\right\}$ with the following two cases of initial values $x_0$ and $x_1$:
Case I:
$x_0 = \left(\frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots\right)$; $x_1 = \left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \ldots\right)$;
Case II:
$x_0 = \left(\frac{3}{10}, \frac{3}{100}, \frac{3}{1000}, \ldots\right)$; $x_1 = \left(\frac{1}{3}, \frac{1}{9}, \frac{1}{27}, \ldots\right)$.
Using $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. The numerical results are reported in Figures 7 and 8 and Table 3.
Remark 5.
By using different initial values, different cases of $m$, and varying the key parameter in Experiments 1–3, we obtained the numerical results displayed in Tables 1–3 and Figures 1–8. In Figures 1–4, we considered different cases of $m$ with varying values of the key parameter $c_{n,i}$ for Experiment 1 in $\mathbb{R}^m$. As observed from the figures, these varying choices do not have a significant effect on the behavior of the algorithm. Similarly, Figures 5 and 6 show that the behavior of our algorithm is consistent under varying starting points and different values of the key parameter $c_{n,i}$ for Experiment 2 in $\mathbb{R}^2$. Likewise, Figures 7 and 8 reveal that the behavior of the algorithm is not affected by varying starting points and values of $c_{n,i}$ for Experiment 3 in $\ell_2$. From these results, we conclude that our method is well behaved, since the choices of the key parameter and of the initial points do not significantly affect the number of iterations or the CPU time in these experiments.

7. Conclusions

In this article, we studied a new class of split inverse problems called the split variational inequality problem with multiple output sets. We introduced a relaxed inertial Tseng extragradient method with self-adaptive step sizes for finding the solution of the problem when the cost operators are pseudomonotone and non-Lipschitz in the framework of Hilbert spaces. Moreover, we proved a strong convergence theorem for the proposed method under some mild conditions. Finally, we applied our result to study and approximate the solutions of certain classes of optimization problems, and we presented several numerical experiments to demonstrate the applicability of our proposed algorithm. The results of this study open up several opportunities for future research. As part of our future research, we would like to extend the results in this paper to more general spaces, such as reflexive Banach spaces. Furthermore, we would consider extending the results to larger classes of operators, such as the classes of quasimonotone and non-monotone operators. Moreover, in our future research, we would be interested in investigating the stochastic variants of the results in this study.

Author Contributions

Conceptualization, T.O.A.; Methodology, T.O.A.; Validation, O.T.M.; Formal analysis, T.O.A.; Investigation, T.O.A.; Resources, O.T.M.; Writing—original draft, T.O.A.; Writing—review & editing, O.T.M.; Visualization, O.T.M.; Supervision, O.T.M.; Project administration, O.T.M.; Funding acquisition, O.T.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the first author is wholly supported by the University of KwaZulu-Natal, Durban, South Africa, Postdoctoral Fellowship. He is grateful for the funding and financial support. The second author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived at are those of the authors and are not necessarily to be attributed to the NRF.

Acknowledgments

The authors thank the Reviewers and the Editor for the time spent in carefully going through the manuscript, and pointing out typos and areas of corrections, including constructive comments and suggestions, which have all helped to improve on the quality of the manuscript.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Fichera, G. Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei VIII. Ser. Rend. Cl. Sci. Fis. Mat. Nat. 1963, 34, 138–142.
  2. Stampacchia, G. Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416.
  3. Alakoya, T.O.; Uzor, V.A.; Mewomo, O.T. A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 2022, 42, 1–33.
  4. Ansari, Q.H.; Islam, M.; Yao, J.C. Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 2020, 99, 340–358.
  5. Cubiotti, P.; Yao, J.C. On the Cauchy problem for a class of differential inclusions with applications. Appl. Anal. 2020, 99, 2543–2554.
  6. Eskandari, Z.; Avazzadeh, Z.; Ghaziani, K.R.; Li, B. Dynamics and bifurcations of a discrete-time Lotka–Volterra model using nonstandard finite difference discretization method. Math. Meth. Appl. Sci. 2022, 1–16.
  7. Li, B.; Liang, H.; He, Q. Multiple and generic bifurcation analysis of a discrete Hindmarsh–Rose model. Chaos Solitons Fractals 2021, 146, 110856.
  8. Vuong, P.T.; Shehu, Y. Convergence of an extragradient-type method for variational inequality with applications to optimal control problems. Numer. Algorithms 2019, 81, 269–291.
  9. Aubin, J.; Ekeland, I. Applied Nonlinear Analysis; Wiley: New York, NY, USA, 1984.
  10. Baiocchi, C.; Capelo, A. Variational and Quasivariational Inequalities; Applications to Free Boundary Problems; Wiley: New York, NY, USA, 1984.
  11. Gibali, A.; Reich, S.; Zalas, R. Outer approximation methods for solving variational inequalities in Hilbert space. Optimization 2017, 66, 417–437.
  12. Kinderlehrer, D.; Stampacchia, G. An Introduction to Variational Inequalities and Their Applications. In Classics in Applied Mathematics; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  13. Li, B.; Liang, H.; Shi, L.; He, Q. Complex dynamics of Kopel model with nonsymmetric response between oligopolists. Chaos Solitons Fractals 2022, 156, 111860.
  14. Ogwo, G.N.; Alakoya, T.O.; Mewomo, O.T. Iterative algorithm with self-adaptive step size for approximating the common solution of variational inequality and fixed point problems. Optimization 2021.
  15. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2021, 70, 545–574.
  16. Ceng, L.C.; Coroian, I.; Qin, X.; Yao, J.C. A general viscosity implicit iterative algorithm for split variational inclusions with hierarchical variational inequality constraints. Fixed Point Theory 2019, 20, 469–482.
  17. Hai, T.N. Continuous-time ergodic algorithm for solving monotone variational inequalities. J. Nonlinear Var. Anal. 2021, 5, 391–401.
  18. Khan, S.H.; Alakoya, T.O.; Mewomo, O.T. Relaxed projection methods with self-adaptive step size for solving variational inequality and fixed point problems for an infinite family of multivalued relatively nonexpansive mappings in Banach spaces. Math. Comput. Appl. 2020, 25, 54.
  19. Mewomo, O.T.; Alakoya, T.O.; Yao, J.-C.; Akinyemi, L. Strong convergent inertial Tseng's extragradient method for solving non-Lipschitz quasimonotone variational inequalities in Banach spaces. J. Nonlinear Var. Anal. 2023, 7, 145–172.
  20. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
  21. Alakoya, T.O.; Mewomo, O.T.; Shehu, Y. Strong convergence results for quasimonotone variational inequalities. Math. Methods Oper. Res. 2022, 2022, 47.
  22. Godwin, E.C.; Alakoya, T.O.; Mewomo, O.T.; Yao, J.-C. Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. 2022.
  23. Uzor, V.A.; Alakoya, T.O.; Mewomo, O.T. Strong convergence of a self-adaptive inertial Tseng's extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 2022, 20, 234–257.
  24. Alakoya, T.O.; Uzor, V.A.; Mewomo, O.T.; Yao, J.-C. On a system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022, 2022, 47.
  25. Censor, Y.; Borteld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  26. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  27. López, G.; Martín-Márquez, V.; Xu, H.K. Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems; Medical Physics Publishing: Madison, WI, USA, 2010; pp. 243–279.
  28. Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
  29. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  30. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  31. Godwin, E.C.; Izuchukwu, C.; Mewomo, O.T. Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. 2022.
  32. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  33. He, H.; Ling, C.; Xu, H.K. A relaxed projection method for split variational inequalities. J. Optim. Theory Appl. 2015, 166, 213–233.
  34. Kim, J.K.; Salahuddin, S.; Lim, W.H. General nonconvex split variational inequality problems. Korean J. Math. 2017, 25, 469–481.
  35. Ogwo, G.N.; Izuchukwu, C.; Mewomo, O.T. Inertial methods for finding minimum-norm solutions of the split variational inequality problem beyond monotonicity. Numer. Algorithms 2022, 88, 1419–1456.
  36. Tian, M.; Jiang, B.-N. Weak convergence theorem for a class of split variational inequality problems and applications in Hilbert space. J. Inequal. Appl. 2017, 2017, 123.
  37. Reich, S.; Tuyen, T.M. Iterative methods for solving the generalized split common null point problem in Hilbert spaces. Optimization 2020, 69, 1013–1038.
  38. Reich, S.; Tuyen, T.M. The split feasibility problem with multiple output sets in Hilbert spaces. Optim. Lett. 2020, 14, 2335–2353.
  39. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
  40. Chang, S.-S.; Yao, J.-C.; Wang, L.; Liu, M.; Zhao, L. On the inertial forward-backward splitting technique for solving a system of inclusion problems in Hilbert spaces. Optimization 2021, 70, 2511–2525.
  41. Ogwo, G.N.; Alakoya, T.O.; Mewomo, O.T. Inertial iterative method with self-adaptive step size for finite family of split monotone variational inclusion and fixed point problems in Banach spaces. Demonstr. Math. 2022, 55, 193–216.
  42. Uzor, V.A.; Alakoya, T.O.; Mewomo, O.T. On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. 2022.
  43. Wang, Z.-B.; Long, X.; Lei, Z.-Y.; Chen, Z.-Y. New self-adaptive methods with double inertial steps for solving splitting monotone variational inclusion problems with applications. Commun. Nonlinear Sci. Numer. Simul. 2022, 114, 106656.
  44. Yao, Y.; Iyiola, O.S.; Shehu, Y. Subgradient extragradient method with double inertial steps for variational inequalities. J. Sci. Comput. 2022, 90, 71.
  45. Godwin, E.C.; Alakoya, T.O.; Mewomo, O.T.; Yao, J.-C. Approximation of solutions of split minimization problem with multiple output sets and common fixed point problem in real Banach spaces. J. Nonlinear Var. Anal. 2022, 6, 333–358.
  46. Iutzeler, F.; Hendrickx, J.M. A generic online acceleration scheme for optimization algorithms via relaxation and inertia. Optim. Methods Softw. 2019, 34, 383–405.
  47. Alvarez, F. Weak convergence of a relaxed-inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782.
  48. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward-backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598.
  49. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750.
  50. Tan, K.K.; Xu, H.K. Approximating fixed points of nonexpansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301–308.
  51. Chuang, C.S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013, 350.
  52. Cottle, R.W.; Yao, J.C. Pseudo-monotone complementarity problems in Hilbert space. J. Optim. Theory Appl. 1992, 75, 281–295.
  53. Tian, M.; Jiang, B.-N. Inertial Haugazeau's hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization 2021, 70, 987–1007.
Figure 1. Experiment 1: m = 20.
Figure 2. Experiment 1: m = 40.
Figure 3. Experiment 1: m = 60.
Figure 4. Experiment 1: m = 80.
Figure 5. Experiment 2: Case I.
Figure 6. Experiment 2: Case II.
Figure 7. Experiment 3: Case I.
Figure 8. Experiment 3: Case II.
Table 1. Numerical results for Experiment 1.

                              m = 20           m = 40           m = 60           m = 80
Proposed Algorithm 1          Iter.  CPU Time  Iter.  CPU Time  Iter.  CPU Time  Iter.  CPU Time
c_{n,i} = 0                   128    0.0889    156    0.1235    174    0.2028    189    0.2412
c_{n,i} = 20/n^{0.1}          128    0.0652    156    0.1241    174    0.2664    189    0.2930
c_{n,i} = 40/n^{0.01}         128    0.0719    156    0.1495    174    0.3013    189    0.3220
c_{n,i} = 60/n^{0.001}        128    0.0695    156    0.1549    174    0.2959    189    0.3342
c_{n,i} = 80/n^{0.0001}       128    0.0701    156    0.1678    174    0.2877    189    0.3129
Table 2. Numerical results for Experiment 2.

                              Case I            Case II
Proposed Algorithm 1          Iter.  CPU Time   Iter.  CPU Time
c_{n,i} = 0                   248    0.0916     248    4.0980
c_{n,i} = 20/n^{0.1}          248    0.0778     248    0.0816
c_{n,i} = 40/n^{0.01}         248    0.0852     248    0.0818
c_{n,i} = 60/n^{0.001}        248    0.0875     248    0.0753
c_{n,i} = 80/n^{0.0001}       248    0.0817     248    0.0811
Table 3. Numerical results for Experiment 3.

                              Case I            Case II
Proposed Algorithm 1          Iter.  CPU Time   Iter.  CPU Time
c_{n,i} = 0                   128    0.0682     128    0.0620
c_{n,i} = 20/n^{0.1}          128    0.0434     128    0.0422
c_{n,i} = 40/n^{0.01}         128    0.0446     128    0.0474
c_{n,i} = 60/n^{0.001}        128    0.0423     128    0.0414
c_{n,i} = 80/n^{0.0001}       128    0.0416     128    0.0424