Article

Inertial Viscosity Approximation Methods for General Split Variational Inclusion and Fixed Point Problems in Hilbert Spaces

1 Department of Basic Teaching, Zhejiang University of Water Resources and Electric Power, Hangzhou 310018, China
2 Key Laboratory of Rare Earth Optoelectronic Materials and Devices of Zhejiang Province, Institute of Optoelectronic Materials and Devices, China Jiliang University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Symmetry 2023, 15(8), 1502; https://doi.org/10.3390/sym15081502
Submission received: 4 July 2023 / Revised: 25 July 2023 / Accepted: 25 July 2023 / Published: 28 July 2023
(This article belongs to the Special Issue Symmetry in Fixed Point Theory and Applications)

Abstract

The purpose of this paper is to find a common element of the fixed point set of a nonexpansive mapping and the solution set of the general split variational inclusion problem in symmetric Hilbert spaces by using an inertial viscosity iterative method. Some strong convergence theorems for the proposed algorithm are established. As applications, we use our results to study the split feasibility problem and the split minimization problem. Finally, numerical experiments are presented to illustrate the feasibility and effectiveness of our theoretical findings, and our results extend and improve many recent ones.

1. Introduction

Hilbert space theory and nonlinear fixed point problems form an important field in mathematics and optimization, and symmetry is closely related to the fixed point problem. Every Hilbert space is reflexive, and a reflexive Hilbert space is called a symmetric space. Let H be a real symmetric Hilbert space and S : H → H be a mapping. The set of fixed points of S is denoted by Fix(S). Recall that S is a contraction if there exists a constant ρ ∈ (0, 1) such that ‖Sx − Sy‖ ≤ ρ‖x − y‖ for all x, y ∈ H.
Let T : H → 2^H be a set-valued mapping. Then, T is said to be monotone if ⟨x − y, u − v⟩ ≥ 0 for all x, y ∈ H with u ∈ Tx and v ∈ Ty. A monotone mapping T is maximal monotone if its graph G(T) = {(x, y) : y ∈ Tx} is not properly contained in the graph of any other monotone mapping. The resolvent operator J_β^T of the mapping T is defined by J_β^T = (I + βT)^(−1) for each β > 0.
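To make the resolvent concrete, consider a minimal one-dimensional sketch (our illustration, not from the paper): for the maximal monotone operator T = ∂|·| on H = ℝ, the resolvent J_β^T = (I + βT)^(−1) has the closed form J_β^T(x) = sign(x)·max(|x| − β, 0), the soft-thresholding map.

```python
import math

def resolvent_abs(x: float, beta: float) -> float:
    """Resolvent (I + beta*T)^(-1) of T = subdifferential of |.| on R,
    i.e. soft thresholding with parameter beta > 0."""
    return math.copysign(max(abs(x) - beta, 0.0), x)

print(resolvent_abs(3.0, 1.0))   # 2.0
print(resolvent_abs(-2.5, 1.0))  # -1.5
print(resolvent_abs(0.4, 1.0))   # 0.0 (the threshold region collapses to 0)
```

The fixed points of this resolvent are exactly the zeros of T, in line with Lemma 5(ii) below.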
It is worth noting that the split variational inclusion problem serves as a model in image reconstruction, radiation therapy and sensor networks [1,2,3]. Many other problems arise as special cases of the split variational inclusion problem, such as the split feasibility problem, the variational inclusion problem, the fixed point problem, the split equilibrium problem and the split minimization problem; see [4,5,6,7,8,9,10] and the references therein. Let H₁ and H₂ be two real Hilbert spaces, and let B₁ : H₁ → 2^(H₁) and B₂ : H₂ → 2^(H₂) be maximal monotone mappings. The split variational inclusion problem is to find a point x ∈ H₁ such that
0 ∈ B₁x and 0 ∈ B₂Ax,
where A : H₁ → H₂ is a bounded linear operator. Several iterative algorithms for finding solutions of the split variational inclusion problem have been studied by many authors [11,12,13]. In particular, in 2012, Byrne et al. [14] introduced the following iteration process for given x₀ ∈ H₁ and λ > 0:
x_{n+1} = J_λ^{B₁}[x_n + ϵA*(J_λ^{B₂} − I)Ax_n],
where ϵ ∈ (0, 2/‖A*A‖). They established the weak and strong convergence of this algorithm for solving the split variational inclusion problem. In 2014, Kazmi and Rizvi [13] proved a strong convergence result of the following algorithm to a common solution of the split variational inclusion problem and the fixed point problem of a nonexpansive mapping in Hilbert spaces:
u_n = J_λ^{B₁}[x_n + ϵA*(J_λ^{B₂} − I)Ax_n],
x_{n+1} = α_n f(x_n) + (1 − α_n)Su_n.
On the other hand, many authors are increasingly interested in using inertial techniques to build efficient iterative algorithms, because the inertial term can speed up convergence; see [15,16,17,18,19,20,21] and the references therein. In 2001, Alvarez and Attouch [22] introduced the following inertial proximal point method to solve the variational inclusion problem:
x_{n+1} = J_{λ_n}^{B}[x_n + θ_n(x_n − x_{n−1})],
where {θ_n} ⊂ [0, 1) and {λ_n} ⊂ (0, ∞). They proved that the sequence converges weakly to a zero of the maximal monotone operator B. Subsequently, in 2017, Chuang et al. [23] extended this method to a hybrid inertial proximal algorithm in Hilbert spaces and proved that their iterative sequence converges weakly to a solution of the split variational inclusion problem. In 2018, Cholamjiak et al. [20] obtained strong convergence results by combining the inertial technique with the Halpern iteration method. Moreover, in 2020, Pham et al. [24] proposed an algorithm combining the Mann method and the inertial method for solving the split variational inclusion problem in real Hilbert spaces:
x₀, x₁ ∈ H₁,
w_n = x_n + α_n(x_n − x_{n−1}),
y_n = J_β^{B₁}(I − λ_n A*(I − J_β^{B₂})A)w_n,
x_{n+1} = (1 − θ_n − β_n)x_n + θ_n y_n.
They proved that the sequence { x n } converges strongly to a solution of the split variational inclusion problem with two set-valued maximal monotone mappings.
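The acceleration effect of the inertial term can be seen on a one-dimensional toy problem (our own construction, not taken from [22] or [24]): for the maximal monotone operator B(x) = x on ℝ, the resolvent is J_λ(x) = x/(1 + λ), and we compare the plain proximal point method (θ = 0) with a constant inertial weight θ = 0.3.

```python
# Toy comparison: proximal point vs. inertial proximal point for B(x) = x,
# whose resolvent is J_lam(x) = x / (1 + lam). Both converge to the zero
# of B; the inertial extrapolation typically needs fewer iterations.

def iters_to_tol(theta: float, lam: float = 1.0, tol: float = 1e-8) -> int:
    x_prev, x = 1.0, 1.0
    for n in range(1, 10_000):
        y = x + theta * (x - x_prev)       # inertial extrapolation step
        x_prev, x = x, y / (1.0 + lam)     # resolvent (proximal) step
        if abs(x) < tol:
            return n
    return 10_000

plain = iters_to_tol(theta=0.0)
inertial = iters_to_tol(theta=0.3)
print(plain, inertial)  # the inertial variant reaches the tolerance sooner
```

Here the plain iterates decay like (1/2)^n, while the inertial recurrence has spectral radius √0.15 ≈ 0.39 < 0.5, which is the source of the speedup.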
Motivated and inspired by the above work, we consider the following general split variational inclusion problem of finding a point x ∈ H₁ such that
x ∈ ∩_{i=1}^N B_i^{−1}(0) and Ax ∈ ∩_{i=1}^N K_i^{−1}(0),
where B_i : H₁ → 2^(H₁) and K_i : H₂ → 2^(H₂), i = 1, 2, …, N, are two families of maximal monotone mappings. The solution set of the general split variational inclusion problem is denoted by Γ. We present the following inertial viscosity iterative algorithm for the general split variational inclusion problem and the fixed point problem of a nonexpansive mapping:
z_n = x_n + θ_n(x_n − x_{n−1}),
w_n = Σ_{i=1}^N γ_{i,n} J_{β_{i,n}}^{B_i}[z_n − λ_{i,n} A*(I − J_{β_{i,n}}^{K_i})Az_n],
x_{n+1} = α_n f(x_n) + (1 − α_n)[δ_n w_n + (1 − δ_n)Sw_n]. (1)
Then, a strong convergence theorem for this algorithm is proved. We apply the iterative scheme to study the split feasibility problem and the split minimization problem. Finally, we give numerical experiments to illustrate the feasibility and effectiveness of our main theorem. Our results extend and improve many recent ones [12,13,14,20,23,24].
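One pass of scheme (1) can be written schematically as follows. This is our own sketch, not the authors' published code; the resolvents, the mappings S and f, and the toy one-dimensional instance at the bottom (B(x) = K(y) = x, A = 2I, whose unique solution is x = 0) are illustrative choices.

```python
import numpy as np

def inertial_viscosity_step(x, x_prev, n, A, J_B, J_K, S, f,
                            gamma, lam, theta_bar=0.5):
    """One pass z_n -> w_n -> x_{n+1} of scheme (1); parameter sequences
    (alpha_n = 1/(3n), delta_n = 1/4, eps_n = 1/n^2) are sample choices."""
    diff = np.linalg.norm(x - x_prev)
    theta = theta_bar if diff == 0 else min(theta_bar, 1.0 / (n**2 * diff))
    z = x + theta * (x - x_prev)
    w = sum(g * JB(z - lam * A.T @ (A @ z - JK(A @ z)))
            for g, JB, JK in zip(gamma, J_B, J_K))
    alpha, delta = 1.0 / (3.0 * n), 0.25
    return alpha * f(x) + (1 - alpha) * (delta * w + (1 - delta) * S(w))

# Toy instance on R: B(x) = K(y) = x, so J_1(v) = v / 2; A = 2*I, S = I.
A = np.array([[2.0]])
J = lambda v: v / 2.0                        # (I + B)^{-1} with beta = 1
lam = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)
x_prev = np.array([10.0]); x = x_prev.copy()
for n in range(1, 500):
    x_prev, x = x, inertial_viscosity_step(
        x, x_prev, n, A, [J], [J], S=lambda v: v, f=lambda v: v / 5.0,
        gamma=[1.0], lam=lam)
print(float(x[0]))  # close to 0, the unique solution
```

Each step contracts toward the solution while the inertial term θ_n(x_n − x_{n−1}) is damped by the rule in condition (i) of Theorem 1 below.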

2. Preliminaries

Let C be a nonempty closed convex subset of a real symmetric Hilbert space H with inner product ⟨·, ·⟩ and norm ‖·‖. We write x_n ⇀ x and x_n → x for the weak and strong convergence of the sequence {x_n} to a point x, respectively. A mapping P_C : H → C is called the metric projection if ‖x − P_C x‖ ≤ ‖x − y‖ for all y ∈ C. It is known that P_C is nonexpansive and
⟨x − P_C x, y − P_C x⟩ ≤ 0, ∀y ∈ C.
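As a quick sanity check (our example), the metric projection onto the closed unit ball C = {x : ‖x‖ ≤ 1} in ℝ² is P_C(x) = x/max(1, ‖x‖), and it satisfies the characterization ⟨x − P_C x, y − P_C x⟩ ≤ 0 at sampled points of C:

```python
import numpy as np

def proj_ball(x):
    """Metric projection onto the closed unit ball in R^2."""
    return x / max(1.0, np.linalg.norm(x))

x = np.array([3.0, -4.0])            # outside the ball, ||x|| = 5
p = proj_ball(x)                     # = (0.6, -0.8)

rng = np.random.default_rng(0)
for _ in range(100):
    y = proj_ball(rng.uniform(-1, 1, 2))   # sample points of C
    # characterization of the metric projection (up to roundoff)
    assert np.dot(x - p, y - p) <= 1e-12
print(p)
```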
The following lemmas and concepts will be needed to prove our main results.
Definition 1. 
Suppose S : H → H is a mapping. Then, S is called nonexpansive if
‖Sx − Sy‖ ≤ ‖x − y‖, ∀x, y ∈ H,
and S is said to be firmly nonexpansive if
‖Sx − Sy‖² ≤ ⟨Sx − Sy, x − y⟩, ∀x, y ∈ H.
Lemma 1 
([11]). Suppose H is a real Hilbert space. Then, for all x, y ∈ H and κ ∈ ℝ, the following statements hold:
(i)
‖x + y‖² ≤ ‖x‖² + 2⟨x + y, y⟩;
(ii)
‖κx + (1 − κ)y‖² = κ‖x‖² + (1 − κ)‖y‖² − κ(1 − κ)‖x − y‖².
Lemma 2 
([17]). Assume {a_n} is a sequence of nonnegative real numbers satisfying
a_{n+1} ≤ (1 − α_n)a_n + α_n b_n, n ≥ 0,
where {α_n} ⊂ (0, 1) and {b_n} ⊂ ℝ are such that:
(i)
lim_{n→∞} α_n = 0 and Σ_{n=1}^∞ α_n = ∞;
(ii)
either lim sup_{n→∞} b_n ≤ 0 or Σ_{n=0}^∞ |α_n b_n| < ∞.
Then lim_{n→∞} a_n = 0.
Lemma 3 
([25]). Suppose S : C → H is a nonexpansive mapping and {x_n} is a sequence in C. If x_n ⇀ u and lim_{n→∞} ‖x_n − Sx_n‖ = 0, then Su = u.
Lemma 4 
([26]). Suppose that {a_n} is a sequence of nonnegative real numbers for which there exists a subsequence {a_{n_j}} of {a_n} with a_{n_j} < a_{n_j+1} for all j ∈ ℕ. Then, there exists a nondecreasing sequence {l_j} ⊂ ℕ such that l_j → ∞ and, for all j ∈ ℕ,
a_{l_j} ≤ a_{l_j+1} and a_j ≤ a_{l_j+1}.
In fact, l_j = max{k ≤ j : a_k < a_{k+1}}.
Lemma 5 
([27]). Let B : H → 2^H be a set-valued maximal monotone mapping and β > 0. Then, the following relations hold:
(i)
For each β > 0, the resolvent mapping J_β^B is a single-valued and firmly nonexpansive mapping;
(ii)
D(J_β^B) = H and F(J_β^B) = B^{−1}(0) = {x ∈ D(B) : 0 ∈ Bx};
(iii)
I − J_β^B is a firmly nonexpansive mapping;
(iv)
If B^{−1}(0) ≠ ∅, then ‖J_β^B x − z‖² ≤ ‖x − z‖² − ‖J_β^B x − x‖² for all x ∈ H and z ∈ B^{−1}(0);
(v)
If B^{−1}(0) ≠ ∅, then ⟨x − J_β^B x, J_β^B x − p⟩ ≥ 0 for all x ∈ H and p ∈ B^{−1}(0).
Lemma 6. 
Assume that H₁ and H₂ are two real Hilbert spaces. Let A : H₁ → H₂ be a linear and bounded operator with adjoint A*. Let B_i : H₁ → 2^(H₁) and K_i : H₂ → 2^(H₂), i = 1, 2, …, N, be two families of maximal monotone mappings, and let J_{β_i}^{B_i} and J_{β_i}^{K_i} be the resolvent mappings of B_i and K_i, respectively. Suppose that the solution set Γ is nonempty and β_i > 0, λ_i > 0. Then, for any z ∈ H₁, z is a solution of the general split variational inclusion problem if and only if J_{β_i}^{B_i}[z − λ_i A*(I − J_{β_i}^{K_i})Az] = z.
Proof. 
⇒ Let z ∈ Γ; then z ∈ ∩_{i=1}^N B_i^{−1}(0) and Az ∈ ∩_{i=1}^N K_i^{−1}(0). From Lemma 5(ii), we have that
J_{β_i}^{B_i}[z − λ_i A*(I − J_{β_i}^{K_i})Az] = J_{β_i}^{B_i}[z − λ_i A*(Az − J_{β_i}^{K_i}Az)] = J_{β_i}^{B_i}z = z.
⇐ Let J_{β_i}^{B_i}[z − λ_i A*(I − J_{β_i}^{K_i})Az] = z and p ∈ Γ. From Lemma 5(v), for any p ∈ B_i^{−1}(0), we get
⟨(z − λ_i A*(I − J_{β_i}^{K_i})Az) − z, z − p⟩ ≥ 0,
which implies that ⟨A*(I − J_{β_i}^{K_i})Az, z − p⟩ ≤ 0, and hence ⟨Az − J_{β_i}^{K_i}Az, Az − Ap⟩ ≤ 0. For any w ∈ K_i^{−1}(0), we also use Lemma 5(v) to obtain
⟨Az − J_{β_i}^{K_i}Az, w − J_{β_i}^{K_i}Az⟩ ≤ 0.
Thus, we observe that
⟨Az − J_{β_i}^{K_i}Az, w − J_{β_i}^{K_i}Az + Az − Ap⟩ ≤ 0,
which means that for any p ∈ B_i^{−1}(0) and w ∈ K_i^{−1}(0), we have
‖Az − J_{β_i}^{K_i}Az‖² ≤ ⟨Az − J_{β_i}^{K_i}Az, Ap − w⟩.
Since p ∈ Γ, we have p ∈ ∩_{i=1}^N B_i^{−1}(0) and Ap ∈ ∩_{i=1}^N K_i^{−1}(0); taking w = Ap yields Az = J_{β_i}^{K_i}Az, so Az ∈ F(J_{β_i}^{K_i}) = K_i^{−1}(0) and therefore Az ∈ ∩_{i=1}^N K_i^{−1}(0). It follows that
z = J_{β_i}^{B_i}[z − λ_i A*(I − J_{β_i}^{K_i})Az] = J_{β_i}^{B_i}z,
which implies z ∈ F(J_{β_i}^{B_i}) = B_i^{−1}(0), so z ∈ ∩_{i=1}^N B_i^{−1}(0). Therefore z ∈ Γ. □

3. Main Results

Theorem 1. 
Let H₁ and H₂ be two real Hilbert spaces. Let A : H₁ → H₂ be a bounded linear operator and A* be the adjoint operator of A. Suppose that B_i : H₁ → 2^(H₁) and K_i : H₂ → 2^(H₂), i = 1, 2, …, N, are two families of maximal monotone mappings. Let f : H₁ → H₁ be a contraction with coefficient ρ ∈ (0, 1) and S : H₁ → H₁ be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. Suppose {α_n}, {δ_n}, {γ_{i,n}} ⊂ (0, 1). Assume that {β_{i,n}} and {λ_{i,n}} are sequences of positive real numbers and x₀, x₁ ∈ H₁. If the sequence {x_n} defined by (1) satisfies the following conditions:
(i) 
The parameter θ_n is chosen as
θ_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ_n = θ otherwise,
where θ > 0 and {ε_n} is a positive real sequence such that ε_n = o(α_n);
(ii) 
lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, δ_n ∈ [a, b] ⊂ (0, 1);
(iii) 
Σ_{i=1}^N γ_{i,n} = 1, γ_{i,n} ∈ [c, d] ⊂ (0, 1), λ_{i,n} ∈ (0, 2/‖A‖²),
then {x_n} converges strongly to an element ω ∈ Fix(S) ∩ Γ, where ω = P_{Fix(S)∩Γ} f(ω).
Proof. 
Let ω ∈ Fix(S) ∩ Γ; then ω = J_{β_{i,n}}^{B_i}ω, Aω = J_{β_{i,n}}^{K_i}Aω and Sω = ω. By the convexity of ‖·‖², we obtain
‖w_n − ω‖² = ‖Σ_{i=1}^N γ_{i,n} J_{β_{i,n}}^{B_i}[z_n − λ_{i,n}A*(I − J_{β_{i,n}}^{K_i})Az_n] − ω‖² ≤ Σ_{i=1}^N γ_{i,n}‖J_{β_{i,n}}^{B_i}[z_n − λ_{i,n}A*(I − J_{β_{i,n}}^{K_i})Az_n] − ω‖².
It follows from Lemma 5 that J β i , n B i [ I λ i , n A * ( I J β i , n K i ) A ] are nonexpansive. Then, we get
‖J_{β_{i,n}}^{B_i}[z_n − λ_{i,n}A*(I − J_{β_{i,n}}^{K_i})Az_n] − ω‖²
≤ ‖z_n − λ_{i,n}A*(I − J_{β_{i,n}}^{K_i})Az_n − ω‖²
= ‖z_n − ω‖² + λ_{i,n}²‖A*(I − J_{β_{i,n}}^{K_i})Az_n‖² + 2λ_{i,n}⟨z_n − ω, A*(J_{β_{i,n}}^{K_i} − I)Az_n⟩
= ‖z_n − ω‖² + λ_{i,n}²⟨(J_{β_{i,n}}^{K_i} − I)Az_n, AA*(J_{β_{i,n}}^{K_i} − I)Az_n⟩ + 2λ_{i,n}⟨A(z_n − ω), (J_{β_{i,n}}^{K_i} − I)Az_n⟩
≤ ‖z_n − ω‖² + λ_{i,n}²‖A‖²‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² + 2λ_{i,n}⟨A(z_n − ω) + (J_{β_{i,n}}^{K_i} − I)Az_n − (J_{β_{i,n}}^{K_i} − I)Az_n, (J_{β_{i,n}}^{K_i} − I)Az_n⟩
= ‖z_n − ω‖² + λ_{i,n}²‖A‖²‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² − 2λ_{i,n}‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² + 2λ_{i,n}⟨J_{β_{i,n}}^{K_i}Az_n − Aω, (J_{β_{i,n}}^{K_i} − I)Az_n⟩
≤ ‖z_n − ω‖² − (2λ_{i,n} − λ_{i,n}²‖A‖²)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖²
= ‖z_n − ω‖² + λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖².
which means
‖w_n − ω‖² ≤ Σ_{i=1}^N γ_{i,n}‖J_{β_{i,n}}^{B_i}[z_n − λ_{i,n}A*(I − J_{β_{i,n}}^{K_i})Az_n] − ω‖² ≤ ‖z_n − ω‖² + Σ_{i=1}^N γ_{i,n}λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖².
From condition (i), we have
‖z_n − ω‖ = ‖x_n + θ_n(x_n − x_{n−1}) − ω‖ ≤ ‖x_n − ω‖ + θ_n‖x_n − x_{n−1}‖ = ‖x_n − ω‖ + α_n(θ_n/α_n)‖x_n − x_{n−1}‖ ≤ ‖x_n − ω‖ + α_n M₁,
where M₁ > 0 is a constant. Define y_n = δ_n w_n + (1 − δ_n)Sw_n; then we have
‖y_n − ω‖ ≤ δ_n‖w_n − ω‖ + (1 − δ_n)‖Sw_n − ω‖ ≤ δ_n‖w_n − ω‖ + (1 − δ_n)‖w_n − ω‖ = ‖w_n − ω‖.
We compute that:
x n + 1 ω =   α n f ( x n ) + ( 1 α n ) y n ω   α n f ( x n ) ω + ( 1 α n ) y n ω   α n ρ x n ω + ( 1 α n ) ( x n ω + α n M 1 )   [ 1 α n ( 1 ρ ) ] x n ω + α n M 1   max { x n ω , M 1 1 ρ } max { x 0 ω , M 1 1 ρ } .
which implies that {x_n} is bounded; hence, {z_n}, {w_n}, {y_n} and {Sw_n} are also bounded.
Since {x_n} is bounded and ‖z_n − ω‖ ≤ ‖x_n − ω‖ + α_n M₁, there exists a constant M₂ > 0 such that
‖z_n − ω‖² ≤ ‖x_n − ω‖² + α_n M₂.
Therefore, using (3), we observe that
‖w_n − ω‖² ≤ ‖z_n − ω‖² + Σ_{i=1}^N γ_{i,n}λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² ≤ ‖x_n − ω‖² + Σ_{i=1}^N γ_{i,n}λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² + α_n M₂.
It follows from (2) that
J β i , n B i [ z n λ i , n A * ( I J β i , n K i ) A z n ] ω 2   z n λ i , n A * ( I J β i , n K i ) A z n ω 2   w n ω , z n + λ i , n A * ( J β i , n K i I ) A z n ω =   1 2 z n + λ i , n A * ( J β i , n K i I ) A z n ω 2 + 1 2 w n ω 2   1 2 w n ω z n λ i , n A * ( J β i , n K i I ) A z n + ω 2 =   1 2 z n ω + λ i , n A * ( J β i , n K i I ) A z n 2 + 1 2 w n ω 2   1 2 w n z n λ i , n A * ( J β i , n K i I ) A z n 2 =   1 2 w n ω 2 + 1 2 z n ω 2 + 1 2 λ i , n 2 A * ( J β i , n K i I ) A z n 2 + z n ω , λ i , n A * ( J β i , n K i I ) A z n   1 2 w n z n 2 1 2 λ i , n 2 A * ( J β i , n K i I ) A z n 2 + w n z n , λ i , n A * ( J β i , n K i I ) A z n =   1 2 w n ω 2 + 1 2 z n ω 2 1 2 w n z n 2 + w n ω , λ i , n A * ( J β i , n K i I ) A z n   1 2 w n ω 2 + 1 2 z n ω 2 1 2 w n z n 2 + λ i , n w n ω A * ( J β i , n K i I ) A z n ,
which implies that
w n ω 2 Σ i = 1 N γ i , n J β i , n B i [ z n λ i , n A * ( I J β i , n K i ) A z n ] ω 2 1 2 w n ω 2 + 1 2 z n ω 2 1 2 w n z n 2 + Σ i = 1 N γ i , n λ i , n w n ω A * ( J β i , n K i I ) A z n ,
hence, by (5),
‖w_n − ω‖² ≤ ‖z_n − ω‖² − ‖w_n − z_n‖² + 2Σ_{i=1}^N γ_{i,n}λ_{i,n}‖w_n − ω‖‖A*(J_{β_{i,n}}^{K_i} − I)Az_n‖ ≤ ‖x_n − ω‖² − ‖w_n − z_n‖² + 2Σ_{i=1}^N γ_{i,n}λ_{i,n}‖w_n − ω‖‖A*(J_{β_{i,n}}^{K_i} − I)Az_n‖ + α_n M₂.
Furthermore,
y n ω 2 =   δ n w n + ( 1 δ n ) S w n ω 2 =   y n ω , δ n w n + ( 1 δ n ) S w n ω =   1 2 δ n w n + ( 1 δ n ) S w n ω 2 + 1 2 y n ω 2 1 2 y n ω δ n w n + ( 1 δ n ) S w n + ω 2 =   1 2 δ n ( w n ω ) + ( 1 δ n ) ( S w n ω ) 2 + 1 2 y n ω 2 1 2 δ n ( y n w n ) + ( 1 δ n ) ( y n S w n ) 2 =   1 2 δ n 2 w n ω 2 + 1 2 ( 1 δ n ) 2 S w n ω 2 + δ n ( 1 δ n ) w n ω , S w n ω +   1 2 y n ω 2 1 2 δ n 2 y n w n 2 1 2 ( 1 δ n ) 2 y n S w n 2 δ n ( 1 δ n ) y n w n , y n S w n   1 2 δ n 2 w n ω 2 + 1 2 ( 1 δ n ) 2 w n ω 2 + δ n ( 1 δ n ) w n ω 2 +   1 2 y n ω 2 1 2 δ n 2 y n w n 2 1 2 ( 1 δ n ) 2 y n S w n 2 =   1 2 y n ω 2 + 1 2 w n ω 2 1 2 δ n 2 y n w n 2 1 2 ( 1 δ n ) 2 y n S w n 2 ,
which indicates that
‖y_n − ω‖² ≤ ‖w_n − ω‖² − δ_n²‖y_n − w_n‖² − (1 − δ_n)²‖y_n − Sw_n‖².
Moreover, for some M 3 > 0 ,
‖z_n − ω‖² = ‖x_n + θ_n(x_n − x_{n−1}) − ω‖² = ‖x_n − ω‖² + θ_n²‖x_n − x_{n−1}‖² + 2θ_n⟨x_n − ω, x_n − x_{n−1}⟩ ≤ ‖x_n − ω‖² + θ_n²‖x_n − x_{n−1}‖² + 2θ_n‖x_n − ω‖‖x_n − x_{n−1}‖ = ‖x_n − ω‖² + θ_n‖x_n − x_{n−1}‖(θ_n‖x_n − x_{n−1}‖ + 2‖x_n − ω‖) ≤ ‖x_n − ω‖² + θ_n‖x_n − x_{n−1}‖M₃.
Observe that
‖x_{n+1} − ω‖² = ‖α_n f(x_n) + (1 − α_n)y_n − ω‖² = ⟨α_n f(x_n) + (1 − α_n)y_n − ω, x_{n+1} − ω⟩ = (1 − α_n)⟨y_n − ω, x_{n+1} − ω⟩ + α_n⟨f(x_n) − f(ω), x_{n+1} − ω⟩ + α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ ((1 − α_n)/2)[‖y_n − ω‖² + ‖x_{n+1} − ω‖²] + (α_n/2)[ρ²‖x_n − ω‖² + ‖x_{n+1} − ω‖²] + α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1/2)‖x_{n+1} − ω‖² + ((1 − α_n)/2)‖y_n − ω‖² + (α_n/2)ρ²‖x_n − ω‖² + α_n⟨f(ω) − ω, x_{n+1} − ω⟩,
then from (3), (4) and (9) and condition (ii), we get
‖x_{n+1} − ω‖² ≤ (1 − α_n)‖y_n − ω‖² + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖x_n − ω‖² + θ_n‖x_n − x_{n−1}‖M₃) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ = [1 − α_n(1 − ρ²)]‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + α_n(1 − α_n)(θ_n/α_n)‖x_n − x_{n−1}‖M₃.
It follows from (4) and (6) that
‖x_{n+1} − ω‖² ≤ (1 − α_n)‖y_n − ω‖² + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖x_n − ω‖² + Σ_{i=1}^N γ_{i,n}λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² + α_n M₂) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ = [1 − α_n(1 − ρ²)]‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ + (1 − α_n)Σ_{i=1}^N γ_{i,n}λ_{i,n}(λ_{i,n}‖A‖² − 2)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖²,
and by (4) and (7), we have that
‖x_{n+1} − ω‖² ≤ (1 − α_n)‖y_n − ω‖² + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖x_n − ω‖² − ‖w_n − z_n‖² + 2Σ_{i=1}^N γ_{i,n}λ_{i,n}‖w_n − ω‖‖A*(J_{β_{i,n}}^{K_i} − I)Az_n‖ + α_n M₂) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ = [1 − α_n(1 − ρ²)]‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ − (1 − α_n)‖w_n − z_n‖² + 2(1 − α_n)Σ_{i=1}^N γ_{i,n}λ_{i,n}‖w_n − ω‖‖A*(J_{β_{i,n}}^{K_i} − I)Az_n‖.
It follows from (3) and (8) and condition (ii) that
‖x_{n+1} − ω‖² ≤ (1 − α_n)‖y_n − ω‖² + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖w_n − ω‖² − δ_n²‖y_n − w_n‖²) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖x_n − ω‖² + α_n M₂ − δ_n²‖y_n − w_n‖²) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ = [1 − α_n(1 − ρ²)]‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ − (1 − α_n)δ_n²‖y_n − w_n‖²,
and
‖x_{n+1} − ω‖² ≤ (1 − α_n)‖y_n − ω‖² + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖w_n − ω‖² − (1 − δ_n)²‖y_n − Sw_n‖²) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ ≤ (1 − α_n)(‖x_n − ω‖² + α_n M₂ − (1 − δ_n)²‖y_n − Sw_n‖²) + α_n ρ²‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ = [1 − α_n(1 − ρ²)]‖x_n − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ − (1 − α_n)(1 − δ_n)²‖y_n − Sw_n‖².
Next, we consider the convergence of the sequence {‖x_n − ω‖} in two cases.
Case 1: There exists n₀ ∈ ℕ such that ‖x_{n+1} − ω‖ ≤ ‖x_n − ω‖ for each n ≥ n₀. This implies that {‖x_n − ω‖} is convergent. Thus, from (11), (13) and (14), we have
(1 − α_n)Σ_{i=1}^N γ_{i,n}λ_{i,n}(2 − λ_{i,n}‖A‖²)‖(J_{β_{i,n}}^{K_i} − I)Az_n‖² ≤ [1 − α_n(1 − ρ²)]‖x_n − ω‖² − ‖x_{n+1} − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ → 0,
(1 − α_n)δ_n²‖y_n − w_n‖² ≤ [1 − α_n(1 − ρ²)]‖x_n − ω‖² − ‖x_{n+1} − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ → 0,
(1 − α_n)(1 − δ_n)²‖y_n − Sw_n‖² ≤ [1 − α_n(1 − ρ²)]‖x_n − ω‖² − ‖x_{n+1} − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ → 0.
Then, by the restriction conditions, we can get
‖(J_{β_{i,n}}^{K_i} − I)Az_n‖ → 0, ‖y_n − w_n‖ → 0, ‖y_n − Sw_n‖ → 0.
From (12), we have
(1 − α_n)‖w_n − z_n‖² ≤ [1 − α_n(1 − ρ²)]‖x_n − ω‖² − ‖x_{n+1} − ω‖² + 2α_n⟨f(ω) − ω, x_{n+1} − ω⟩ + (1 − α_n)α_n M₂ + 2(1 − α_n)Σ_{i=1}^N γ_{i,n}λ_{i,n}‖w_n − ω‖‖A*(J_{β_{i,n}}^{K_i} − I)Az_n‖ → 0.
Then, we obtain
‖w_n − z_n‖ → 0.
From the definition of z n , we obtain
‖x_n − z_n‖ = θ_n‖x_n − x_{n−1}‖ ≤ α_n M₁ → 0.
From (15) and (16), we get
‖x_n − w_n‖ ≤ ‖x_n − z_n‖ + ‖z_n − w_n‖ → 0,
‖x_n − y_n‖ ≤ ‖x_n − w_n‖ + ‖w_n − y_n‖ → 0,
‖Sw_n − w_n‖ ≤ ‖Sw_n − y_n‖ + ‖y_n − w_n‖ → 0.
From (ii) and (18), we have
‖x_{n+1} − x_n‖ ≤ ‖x_{n+1} − y_n‖ + ‖y_n − x_n‖ = ‖α_n f(x_n) + (1 − α_n)y_n − y_n‖ + ‖y_n − x_n‖ = α_n‖f(x_n) − y_n‖ + ‖y_n − x_n‖ → 0.
Suppose that {x_{n_j}} is a subsequence of {x_n} such that x_{n_j} ⇀ ω*. From (17) and (18), there exist subsequences of {z_n} and {w_n} satisfying z_{n_j} ⇀ ω* and w_{n_j} ⇀ ω*, respectively. Since A is a bounded linear operator, Az_{n_j} ⇀ Aω*. Moreover, we know that ‖(J_{β_{i,n}}^{K_i} − I)Az_n‖ → 0, which implies that Aω* = J_{β_{i,n}}^{K_i}Aω*; by Lemma 6, we get ω* ∈ Γ. From ‖Sw_n − w_n‖ → 0 and Lemma 3, we deduce ω* ∈ Fix(S). Hence ω* ∈ Fix(S) ∩ Γ. Then, it follows that
lim sup_{n→∞} ⟨f(ω) − ω, x_{n+1} − ω⟩ = lim sup_{j→∞} ⟨f(ω) − ω, x_{n_j+1} − ω⟩ = ⟨f(ω) − ω, ω* − ω⟩ ≤ 0.
Applying Lemma 2 to (10), we have x_n → ω = P_{Fix(S)∩Γ} f(ω).
Case 2: Suppose that the sequence {‖x_n − ω‖} is not monotonically decreasing. Then, there exists a subsequence {n_j} such that
‖x_{n_j} − ω‖² ≤ ‖x_{n_j+1} − ω‖², ∀j ∈ ℕ.
By Lemma 4, there exists a nondecreasing sequence {m_i} ⊂ ℕ such that m_i → ∞ and
‖x_{m_i} − ω‖² ≤ ‖x_{m_i+1} − ω‖², ‖x_i − ω‖² ≤ ‖x_{m_i+1} − ω‖².
Similar to the proof in Case 1, we have
‖x_{m_i+1} − ω‖² ≤ [1 − α_{m_i}(1 − ρ²)]‖x_{m_i} − ω‖² + 2α_{m_i}⟨f(ω) − ω, x_{m_i+1} − ω⟩ + α_{m_i}(1 − α_{m_i})(θ_{m_i}/α_{m_i})‖x_{m_i} − x_{m_i−1}‖M₃,
and
lim sup_{i→∞} ⟨f(ω) − ω, x_{m_i+1} − ω⟩ ≤ 0,
which implies
0 ≤ ‖x_{m_i+1} − ω‖² − ‖x_{m_i} − ω‖² ≤ [1 − α_{m_i}(1 − ρ²)]‖x_{m_i} − ω‖² + 2α_{m_i}⟨f(ω) − ω, x_{m_i+1} − ω⟩ + α_{m_i}(1 − α_{m_i})(θ_{m_i}/α_{m_i})‖x_{m_i} − x_{m_i−1}‖M₃ − ‖x_{m_i} − ω‖²,
then, using lim_{n→∞} (θ_n/α_n)‖x_n − x_{n−1}‖ = 0, we get
‖x_{m_i} − ω‖² ≤ (2/(1 − ρ²))⟨f(ω) − ω, x_{m_i+1} − ω⟩ + ((1 − α_{m_i})/(1 − ρ²))(θ_{m_i}/α_{m_i})‖x_{m_i} − x_{m_i−1}‖M₃ → 0.
By (20), we obtain ‖x_{m_i+1} − ω‖ → 0. It follows from ‖x_i − ω‖² ≤ ‖x_{m_i+1} − ω‖² for all i ∈ ℕ that ‖x_i − ω‖² → 0, and hence x_i → ω. Therefore, x_n → ω as n → ∞. This completes the proof. □
In Theorem 1, if we put f(x) ≡ u, then we have the following result.
Corollary 1. 
Let H₁ and H₂ be two real Hilbert spaces. Let A : H₁ → H₂ be a bounded linear operator and A* be the adjoint operator of A. Suppose that B_i : H₁ → 2^(H₁) and K_i : H₂ → 2^(H₂), i = 1, 2, …, N, are two families of maximal monotone mappings. Let u ∈ H₁ be fixed and S : H₁ → H₁ be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. For x₀, x₁ ∈ H₁, define the sequence {x_n} by
z_n = x_n + θ_n(x_n − x_{n−1}),
w_n = Σ_{i=1}^N γ_{i,n} J_{β_{i,n}}^{B_i}[z_n − λ_{i,n} A*(I − J_{β_{i,n}}^{K_i})Az_n],
x_{n+1} = α_n u + (1 − α_n)[δ_n w_n + (1 − δ_n)Sw_n],
where {α_n}, {δ_n}, {γ_{i,n}} ⊂ (0, 1) and {β_{i,n}}, {λ_{i,n}} are sequences of positive real numbers satisfying the following conditions:
(i) 
The parameter θ_n is chosen as
θ_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ_n = θ otherwise,
where θ > 0 and {ε_n} is a positive real sequence such that ε_n = o(α_n);
(ii) 
lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, δ_n ∈ [a, b] ⊂ (0, 1);
(iii) 
Σ_{i=1}^N γ_{i,n} = 1, γ_{i,n} ∈ [c, d] ⊂ (0, 1), λ_{i,n} ∈ (0, 2/‖A‖²).
Then {x_n} converges strongly to an element ω ∈ Fix(S) ∩ Γ, where ω = P_{Fix(S)∩Γ} u.
In Theorem 1, if we set B = B₁ = B₂ = ⋯ = B_N and K = K₁ = K₂ = ⋯ = K_N, then we obtain the following result.
Corollary 2. 
Let H₁ and H₂ be two real Hilbert spaces. Let A : H₁ → H₂ be a bounded linear operator and A* be the adjoint operator of A. Suppose that B : H₁ → 2^(H₁) and K : H₂ → 2^(H₂) are two maximal monotone mappings. Let f : H₁ → H₁ be a contraction with coefficient ρ ∈ (0, 1) and S : H₁ → H₁ be a nonexpansive mapping such that Fix(S) ∩ Γ ≠ ∅. For x₀, x₁ ∈ H₁, define the sequence {x_n} by
z_n = x_n + θ_n(x_n − x_{n−1}),
w_n = J_{β_n}^{B}[z_n − λ_n A*(I − J_{β_n}^{K})Az_n],
x_{n+1} = α_n f(x_n) + (1 − α_n)[δ_n w_n + (1 − δ_n)Sw_n],
where {α_n}, {δ_n} ⊂ (0, 1) and {β_n}, {λ_n} are sequences of positive real numbers satisfying the following conditions:
(i) 
The parameter θ_n is chosen as
θ_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ_n = θ otherwise,
where θ > 0 and {ε_n} is a positive real sequence such that ε_n = o(α_n);
(ii) 
lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, δ_n ∈ [a, b] ⊂ (0, 1);
(iii) 
λ_n ∈ (0, 2/‖A‖²).
Then, {x_n} converges strongly to an element ω ∈ Fix(S) ∩ Γ, where ω = P_{Fix(S)∩Γ} f(ω).
Let C ⊂ H₁ and Q ⊂ H₂ be two nonempty closed convex subsets. Recall that the split feasibility problem is to find
x ∈ C such that Ax ∈ Q.
The solution set of the split feasibility problem is denoted by Ω. In Corollary 2, if J_{β_n}^{B} = P_C and J_{β_n}^{K} = P_Q, we obtain the following result.
Corollary 3. 
Let C and Q be nonempty closed convex subsets of Hilbert spaces H₁ and H₂, respectively. Let A : H₁ → H₂ be a bounded linear operator and A* be the adjoint operator of A. Let f : H₁ → H₁ be a contraction with coefficient ρ ∈ (0, 1) and S : H₁ → H₁ be a nonexpansive mapping such that Fix(S) ∩ Ω ≠ ∅. For x₀, x₁ ∈ H₁, define the sequence {x_n} by
z_n = x_n + θ_n(x_n − x_{n−1}),
w_n = P_C[z_n − λ_n A*(I − P_Q)Az_n],
x_{n+1} = α_n f(x_n) + (1 − α_n)[δ_n w_n + (1 − δ_n)Sw_n],
where {α_n}, {δ_n} ⊂ (0, 1) and {λ_n} is a sequence of positive real numbers satisfying the following conditions:
(i) 
The parameter θ_n is chosen as
θ_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ_n = θ otherwise,
where θ > 0 and {ε_n} is a positive real sequence such that ε_n = o(α_n);
(ii) 
lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, δ_n ∈ [a, b] ⊂ (0, 1);
(iii) 
λ_n ∈ (0, 2/‖A‖²).
Then, {x_n} converges strongly to an element ω ∈ Fix(S) ∩ Ω, where ω = P_{Fix(S)∩Ω} f(ω).
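The scheme of Corollary 3 is easy to run on toy data (our choices, not from the paper): take C the closed unit ball in ℝ², Q the nonnegative orthant, A = I, S = I, and the constant contraction f ≡ u = (2, −1). Since Fix(S) ∩ Ω = C ∩ Q, the limit should be the projection P_{C∩Q}(u) = (1, 0).

```python
import numpy as np

P_C = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto the unit ball
P_Q = lambda y: np.maximum(y, 0.0)               # projection onto the orthant
A = np.eye(2)
u = np.array([2.0, -1.0])                        # f(x) = u, a constant map
lam = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)    # lambda_n in (0, 2/||A||^2)
delta = 0.5

x_prev = np.zeros(2); x = np.zeros(2)
for n in range(1, 20001):
    diff = np.linalg.norm(x - x_prev)
    theta = 0.5 if diff == 0 else min(0.5, 1.0 / (n**2 * diff))
    z = x + theta * (x - x_prev)
    w = P_C(z - lam * A.T @ (A @ z - P_Q(A @ z)))
    alpha = 1.0 / (3.0 * n)
    y = delta * w + (1 - delta) * w              # S = I, so this is just w
    x_prev, x = x, alpha * u + (1 - alpha) * y
print(x)  # approximately [1, 0]
```

The iterates first drift toward u and are then pulled onto C ∩ Q as α_n → 0, matching the limit ω = P_{Fix(S)∩Ω} f(ω) asserted by the corollary.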
Let φ : H₁ → ℝ ∪ {+∞} and g : H₂ → ℝ ∪ {+∞} be proper lower semicontinuous convex functions. The split minimization problem is to find
x* ∈ H₁ such that x* ∈ argmin_{x∈H₁} φ(x) and Ax* ∈ argmin_{y∈H₂} g(y).
The solution set of the split minimization problem is denoted by Υ. It is well known that the subdifferential ∂φ is maximal monotone and that its resolvent J_λ^{∂φ} = (I + λ∂φ)^{−1} is firmly nonexpansive. In Corollary 2, if J_{β_n}^{B} = J_{β_n}^{∂φ} and J_{β_n}^{K} = J_{β_n}^{∂g}, we obtain the following result.
Corollary 4. 
Let H₁ and H₂ be Hilbert spaces and φ : H₁ → ℝ ∪ {+∞}, g : H₂ → ℝ ∪ {+∞} be proper lower semicontinuous convex functions. Let A : H₁ → H₂ be a bounded linear operator and A* be the adjoint operator of A. Let f : H₁ → H₁ be a contraction with coefficient ρ ∈ (0, 1) and S : H₁ → H₁ be a nonexpansive mapping such that Fix(S) ∩ Υ ≠ ∅. For x₀, x₁ ∈ H₁, define the sequence {x_n} by
z_n = x_n + θ_n(x_n − x_{n−1}),
w_n = J_{β_n}^{∂φ}[z_n − λ_n A*(I − J_{β_n}^{∂g})Az_n],
x_{n+1} = α_n f(x_n) + (1 − α_n)[δ_n w_n + (1 − δ_n)Sw_n],
where {α_n}, {δ_n} ⊂ (0, 1) and {β_n}, {λ_n} are sequences of positive real numbers satisfying the following conditions:
(i) 
The parameter θ_n is chosen as
θ_n = min{θ, ε_n/‖x_n − x_{n−1}‖} if x_n ≠ x_{n−1}, and θ_n = θ otherwise,
where θ > 0 and {ε_n} is a positive real sequence such that ε_n = o(α_n);
(ii) 
lim_{n→∞} α_n = 0, Σ_{n=0}^∞ α_n = ∞, δ_n ∈ [a, b] ⊂ (0, 1);
(iii) 
λ_n ∈ (0, 2/‖A‖²).
Then, {x_n} converges strongly to an element ω ∈ Fix(S) ∩ Υ, where ω = P_{Fix(S)∩Υ} f(ω).

4. Numerical Examples

In this section, we present numerical experiments to illustrate the feasibility and effectiveness of the proposed algorithm and the main result. All codes were written in Python 3.7.
Example 1. 
Let H₁ = H₂ = ℝ², and let
A = [4 6; 3 2].
Let B₁, B₂, K₁, K₂ : ℝ² → ℝ² be defined by B₁ = diag(4, 2), B₂ = diag(6, 4), K₁ = diag(5, 3) and K₂ = diag(7, 5). We put β_{i,n} = 1/3; then the resolvent mappings associated with B₁, B₂, K₁ and K₂ can be computed explicitly. Let γ_{i,n} = 1/2, δ_n = 1/4, α_n = 1/(3n) and λ_{i,n} = 1/(2‖A‖²) for all n ∈ ℕ. We take
θ_n = min{0.5, 1/(n²‖x_n − x_{n−1}‖)} if x_n ≠ x_{n−1}, and θ_n = 0.5 otherwise.
Let S and f be defined by Sx = (1/6)x and f(x) = (1/5)x, so that ρ = 1/5. We start iteration (1) from the initial values x₀ = x₁ = (60, 60)ᵀ and x₀ = x₁ = (120, 120)ᵀ, respectively. The numerical results are shown in Table 1 and Figure 1. We also test the convergence behavior of the algorithm under different stopping conditions; the results are shown in Table 2.
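The data of Example 1 can be reproduced with a short script (a sketch we wrote for illustration; it is not the authors' published code). Since B_i and K_i are linear, their resolvents are the matrix inverses (I + βB_i)^{−1} and (I + βK_i)^{−1}, and the unique solution of the problem is the origin.

```python
import numpy as np

A = np.array([[4.0, 6.0], [3.0, 2.0]])
B = [np.diag([4.0, 2.0]), np.diag([6.0, 4.0])]   # B1, B2
K = [np.diag([5.0, 3.0]), np.diag([7.0, 5.0])]   # K1, K2
beta, I2 = 1.0 / 3.0, np.eye(2)

# Resolvents J_beta = (I + beta*M)^{-1}; diagonal, so a plain inverse suffices
JB = [np.linalg.inv(I2 + beta * M) for M in B]
JK = [np.linalg.inv(I2 + beta * M) for M in K]

lam = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)    # lambda_{i,n} = 1/(2||A||^2)
gamma, delta = 0.5, 0.25
S = lambda v: v / 6.0                            # nonexpansive, Fix(S) = {0}
f = lambda v: v / 5.0                            # contraction, rho = 1/5

x_prev = np.array([60.0, 60.0]); x = x_prev.copy()
for n in range(1, 2001):
    diff = np.linalg.norm(x - x_prev)
    theta = 0.5 if diff == 0 else min(0.5, 1.0 / (n**2 * diff))
    z = x + theta * (x - x_prev)
    w = sum(gamma * JB[i] @ (z - lam * A.T @ (A @ z - JK[i] @ (A @ z)))
            for i in range(2))
    alpha = 1.0 / (3.0 * n)
    x_prev, x = x, alpha * f(x) + (1 - alpha) * (delta * w + (1 - delta) * S(w))
print(np.linalg.norm(x))  # near 0, the unique solution
```

In our runs the iterates shrink toward the origin within a few dozen steps, consistent with the fast convergence reported in Table 1.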
Remark 1. 
The parameters selected satisfy conditions (i)–(iii) of Theorem 1. We chose the initial values at random to study the convergence of the algorithm, and the numerical results verify the effectiveness and feasibility of the proposed iterative algorithm. In addition, we can observe the convergence rate of the iterative algorithm. Notably, the method converges very quickly in terms of both the number of iterations and the execution time, and these results are not significantly affected by the choice of initial values.

5. Conclusions

In this paper, we have presented and analyzed an inertial viscosity iterative algorithm for general split variational inclusion problems and fixed point problems in Hilbert spaces. The strong convergence of the proposed algorithm is demonstrated, and numerical experiments are given to illustrate the efficiency of Theorem 1. We give an extension of the inertial viscosity approximation to common fixed point problems in Hilbert spaces, and we generalize the split variational inclusion problems of Cholamjiak et al. [20] and Chuang [23] to general split variational inclusion problems. In Corollary 2, if f = S = I, we recover the main result of Pham et al. [24]. In addition, our methods and results also extend and improve the corresponding recent results of [12,13,14,22] as special cases.

Author Contributions

Conceptualization, C.P.; methodology, C.P.; software, C.P. and K.W.; validation, C.P.; formal analysis, C.P.; investigation, C.P. and K.W.; resources, C.P. and K.W.; data curation, C.P. and K.W.; writing—original draft preparation, C.P.; writing—review and editing, C.P.; visualization, C.P. and K.W.; supervision, C.P.; project administration, C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Research funding (Grant no. 88106003214).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Numerical results for Example 1.
Table 1. The values of the error ‖x_{n+1} − x_n‖.

n     x_0 = x_1 = (60, 60)^T    x_0 = x_1 = (120, 120)^T
1     69.238                    138.476
2     12.854                    25.671
3     2.36772                   4.74734
4     0.44423                   0.89771
5     0.08335                   0.17194
27    2.47 × 10^−13             8.43 × 10^−13
28    9.84 × 10^−14             2.33 × 10^−15
29    5.49 × 10^−14             8.47 × 10^−14
30    6.81 × 10^−15             2.59 × 10^−14
Table 2. Number of iterations at termination and execution time under different stopping criteria ‖x_{n+1} − x_n‖ < ε_n.

        x_0 = x_1 = (60, 60)^T    x_0 = x_1 = (120, 120)^T
ε_n     Iter.    Time (ms)        Iter.    Time (ms)
10^−4   6        0.75             8        0.75
10^−5   9        0.90             10       1.11
10^−6   12       0.97             12       1.17
10^−7   14       1.03             14       1.31
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Pan, C.; Wang, K. Inertial Viscosity Approximation Methods for General Split Variational Inclusion and Fixed Point Problems in Hilbert Spaces. Symmetry 2023, 15, 1502. https://doi.org/10.3390/sym15081502


