Article

Convergence of Two Splitting Projection Algorithms in Hilbert Spaces

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematics, Hangzhou Normal University, Hangzhou 311121, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 922; https://doi.org/10.3390/math7100922
Submission received: 26 August 2019 / Accepted: 30 September 2019 / Published: 3 October 2019
(This article belongs to the Special Issue Variational Inequality)

Abstract: The aim of this paper is to study zero points of the sum of two maximally monotone mappings and fixed points of a non-expansive mapping. Two splitting projection algorithms are introduced and investigated for treating the zero-point and fixed-point problems. Possible computational errors are taken into account. Two convergence theorems are obtained, and applications in Hilbert spaces are also considered.

1. Introduction—Preliminaries

Let $H$ be an infinite-dimensional real Hilbert space whose inner product is denoted by $\langle x, y\rangle$ and whose induced norm is denoted by $\|x\| = \sqrt{\langle x, x\rangle}$ for $x, y \in H$. Let $C$ be a convex and closed set in $H$ and let $A : C \to H$ be a single-valued mapping. We recall the following definitions.
$A$ is said to be a monotone mapping iff
$$\langle x - x', Ax - Ax'\rangle \ge 0, \quad \forall x, x' \in C.$$
$A$ is said to be a strongly monotone mapping iff there exists a positive real constant $L$ such that
$$\langle x - x', Ax - Ax'\rangle \ge L\|x - x'\|^2, \quad \forall x, x' \in C.$$
$A$ is said to be an inverse-strongly monotone mapping iff there exists a positive real constant $L$ such that
$$\langle x - x', Ax - Ax'\rangle \ge L\|Ax - Ax'\|^2, \quad \forall x, x' \in C.$$
$A$ is said to be $L$-Lipschitz continuous iff there exists a positive real constant $L$ such that
$$\|Ax - Ax'\| \le L\|x - x'\|, \quad \forall x, x' \in C.$$
$A$ is said to be sequentially weakly continuous iff, whenever a sequence $\{x_n\}$ in $C$ converges weakly to $x$, the sequence $\{Ax_n\}$ converges weakly to $Ax$.
Consider the following monotone variational inequality, associated with the mapping $A$ and the set $C$: find $x \in C$ such that
$$\langle Ax, x^* - x\rangle \ge 0, \quad \forall x^* \in C. \quad (1)$$
From now on, we use $VI(C, A)$ to denote the solution set of (1). Recently, the spotlight has been shed on projection-based iterative methods, which are efficient for approximating solutions of variational inequality (1). With the aid of the mapping $\mathrm{Proj}_C^H(Id - rA)$, where $\mathrm{Proj}_C^H$ is the metric projection from $H$ onto $C$, $r$ is some positive real number, and $Id$ denotes the identity on $H$, one knows that $x$ is a solution to inequality (1) iff $x$ is a fixed point of $\mathrm{Proj}_C^H(Id - rA)$. When dealing with this mapping, one is required to compute metric projections at every iteration. In the case that $C$ is a linear variety, a closed ball, or a polytope, the computation of $\mathrm{Proj}_C^H$ is not hard to implement. If $C$ is a bounded set, then the existence of solutions of the variational inequality is guaranteed by Browder [1]. If $A$ is monotone and $L$-Lipschitz continuous, Korpelevich [2] introduced the following so-called extragradient method:
$$x_0 \in C, \quad y_n = \mathrm{Proj}_C^H(x_n - rAx_n), \quad x_{n+1} = \mathrm{Proj}_C^H(x_n - rAy_n), \quad n \ge 0,$$
where $C$ is assumed to be a convex and closed set in a finite-dimensional Euclidean space $\mathbb{R}^n$ and $r$ is a positive real number in $(0, 1/L)$. It was proved that the sequence $\{x_n\}$ converges to a point in $VI(C, A)$; for more material, see [2] and the references cited therein. We remark here that the extragradient method is an Ishikawa-like iterative method, which is efficient for solving fixed-point problems of pseudocontractive mappings whose complementary mappings are monotone, that is, $A$ is monotone if and only if $I - A$ is pseudocontractive.
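As a concrete illustration, the extragradient iteration can be sketched in a few lines of NumPy. The affine operator $Ax = Mx + q$, the box constraint set, and the step size below are illustrative choices, not taken from the paper; $M$ has a positive-definite symmetric part, so $A$ is monotone and $\|M\|_2$-Lipschitz.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient(A_op, proj, x0, r, iters=500):
    """Korpelevich's extragradient method for VI(C, A):
    y_n = Proj_C(x_n - r A x_n),  x_{n+1} = Proj_C(x_n - r A y_n)."""
    x = x0
    for _ in range(iters):
        y = proj(x - r * A_op(x))
        x = proj(x - r * A_op(y))
    return x

# Illustrative monotone affine operator A x = M x + q: the symmetric part
# of M is positive definite, and A is Lipschitz with constant ||M||_2.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
A_op = lambda v: M @ v + q
L = np.linalg.norm(M, 2)
x = extragradient(A_op, lambda v: proj_box(v, 0.0, 5.0),
                  x0=np.zeros(2), r=0.9 / L)
```

With these choices the unique solution of $VI(C, A)$ is the interior zero of $A$, namely $(0.2, 0.6)$, which the iterates approach linearly.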
Next, we turn our attention to set-valued mappings. Let $B : H \rightrightarrows H$ be a set-valued mapping. We use $\mathrm{Graph}(B) := \{(y, x) \in H \times H : x \in By\}$ to denote the graph of the mapping $B$ and $B^{-1}(0)$ to denote the zero set of $B$. One says that $B$ is monotone iff $\langle y - y', x - x'\rangle \ge 0$ for all $(y, x), (y', x') \in \mathrm{Graph}(B)$. One says that $B$ is maximal iff there exists no proper monotone extension of the graph of $B$ on $H \times H$, that is, the graph of $B$ is not a proper subset of the graph of any other monotone operator. For a maximally monotone operator $B$, one can define the single-valued resolvent mapping $J_r = (rB + Id)^{-1} : H \to \mathrm{Dom}(B)$, where $\mathrm{Dom}(B)$ stands for the domain of $B$, $Id$ stands for the identity mapping, and $r$ is a positive real number. In the case in which $B$ is the subdifferential of a proper, lower semicontinuous, and convex function, its resolvent mapping is called the proximity mapping. One knows that $B^{-1}(0) = \mathrm{Fix}(J_r)$, where $\mathrm{Fix}(J_r)$ stands for the fixed-point set of $J_r$, and that $J_r$ is firmly non-expansive, that is, $\langle J_rx - J_rx', x - x'\rangle \ge \|J_rx - J_rx'\|^2$.
The class of maximally monotone mappings is under the spotlight of researchers working in the fields of optimization and functional analysis. Let $f : H \to (-\infty, \infty]$ be a proper, convex, and lower semicontinuous function. One known example of a maximally monotone mapping is $\partial f$, the subdifferential of $f$, defined as follows:
$$\partial f(x) := \{x^* \in H : \langle x' - x, x^*\rangle + f(x) \le f(x'), \ \forall x' \in H\}, \quad x \in H.$$
Rockafellar [3] asserted that $\partial f$ is a maximally monotone operator. One can verify that $f(v) = \min_{x \in H} f(x)$ iff $0 \in \partial f(v)$. Next, we give one more example of a maximally monotone mapping: $N_C + M$, where $M$ is a continuous single-valued maximally monotone mapping and $N_C$ is the normal cone mapping:
$$N_C(x) := \{x^* \in H : \langle x^*, x' - x\rangle \le 0, \ \forall x' \in C\}$$
for $x \in C$, and $N_C(x)$ is empty otherwise. Then, $0 \in N_C(x) + Mx$ iff $x \in C$ is a solution to the following monotone variational inequality: $\langle x' - x, Mx\rangle \ge 0$ for all $x' \in C$.
One of the fundamental and efficient solution methods for investigating the inclusion problem $0 \in Tx$, where $T$ is a maximally monotone mapping, is the well-known proximal point algorithm (PPA), which was studied by Martinet [4,5] and Rockafellar [6,7]. The PPA has been extensively studied [8,9,10,11] and is known to yield, as special cases, decomposition methods such as the method of partial inverses [12], the forward-backward (FB) splitting method, and the alternating direction method of multipliers (ADMM) [13,14]. The following forward-backward splitting method
$$(Id - r_nA)x_n \in (Id + r_nB)x_{n+1}, \quad n = 0, 1, \ldots,$$
where $r_n > 0$, was proposed by Lions and Mercier [15] and Passty [16] for $T = A + B$, where $A$ and $B$ are two maximally monotone mappings. Furthermore, if $B = N_C$, then this method reduces to the gradient-projection iterative method [17]. Recently, a number of researchers working in the field of monotone operators have studied splitting algorithms; see [18,19,20,21,22] and the references therein.
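The forward-backward iteration can be sketched for the classical example $A = \nabla(\frac{1}{2}\|Dx - b\|^2)$ and $B = \partial(\lambda\|\cdot\|_1)$, whose resolvent is the soft-thresholding map; the data below are random illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Resolvent (Id + t B)^{-1} for B = the subdifferential of t ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(grad, prox, x0, r, iters=1000):
    """Forward-backward splitting: x_{n+1} = (Id + r B)^{-1}(Id - r A) x_n."""
    x = x0
    for _ in range(iters):
        x = prox(x - r * grad(x), r)
    return x

# Illustrative problem: minimize (1/2)||D x - b||^2 + lam ||x||_1, i.e.
# find a zero of A + B with A x = D^T (D x - b) and B = d(lam ||.||_1).
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
lam = 0.5
grad = lambda x: D.T @ (D @ x - b)
prox = lambda v, t: soft_threshold(v, lam * t)
r = 1.0 / np.linalg.norm(D, 2) ** 2     # step size at most 1/L, L = ||D||_2^2
x = forward_backward(grad, prox, np.zeros(5), r)
```

At a zero of $A + B$, the returned $x$ is a fixed point of the map $x \mapsto (Id + rB)^{-1}(Id - rA)x$.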
Let $S : C \to C$ be a single-valued mapping. In this paper, we use $\mathrm{Fix}(S)$ to stand for the fixed-point set of the mapping $S$. Recall that $S$ is said to be non-expansive if $\|Sx - Sx'\| \le \|x - x'\|$ for all $x, x' \in C$. If $C$ is a bounded set, then the set of fixed points of $S$ is non-empty; see [23]. In the real world, a number of problems and models have reformulations that require finding fixed points of non-expansive mappings (zeros of monotone mappings). One knows that Mann-like iterations are, in general, only weakly convergent for non-expansive mappings. Recently, a number of researchers have concentrated on various modified Mann-like iterations so that strong convergence theorems can be obtained without additional compactness restrictions on the mappings; see [24,25,26,27].
Most real mathematical models involve more than one constraint. For such models, one seeks solutions to a problem that are simultaneously solutions to two or more other problems (that is, desired solutions lie in the solution sets of other problems); see [28,29,30,31,32,33] and the references therein.
In this paper, based on Tseng's ideas, we are concerned with the problem of finding a common solution of fixed-point problems of a non-expansive mapping and zero-point problems of the sum of two monotone operators via two splitting algorithms, which take possible computational errors into account. Convergence theorems for the algorithms are obtained. Applications of the algorithms are also discussed.
In order to prove the main results of this paper, the following tools are essential.
An infinite-dimensional space $X$ is said to satisfy Opial's condition [34] if, for any $\{x_n\} \subset X$ with $x_n \rightharpoonup x$, the following inequality holds:
$$\liminf_{n \to \infty} \|x_n - y\| > \liminf_{n \to \infty} \|x_n - x\|$$
for every $y \in X$ with $y \ne x$. It is well known that the above inequality is equivalent to
$$\limsup_{n \to \infty} \|x_n - y\| > \limsup_{n \to \infty} \|x_n - x\|$$
for every $y \in X$ with $y \ne x$. It is well known that $\ell^p$, where $p > 1$, and $L^2$ satisfy Opial's condition.
The following lemma is trivial, so the proof is omitted.
Lemma 1.
Let $\{a_n\}$ be a nonnegative real sequence with $a_{n+1} \le a_n + b_n$, $\forall n \ge n_0$, where $\{b_n\}$ is a nonnegative real sequence with $\sum_{n=1}^{\infty} b_n < \infty$ and $n_0$ is some nonnegative integer. Then, the limit $\lim_{n \to \infty} a_n$ exists.
Lemma 2.
Reference [35]. Let $H$ be a Hilbert space and let $\{t_n\}$ be a real sequence with the restriction $0 < p \le t_n \le q < 1$ for all $n \ge 1$. Let $\{x_n\}$ and $\{y_n\}$ be two vector sequences in $H$ with $\lim_{n \to \infty} \|t_nx_n + (1 - t_n)y_n\| = r$, $\limsup_{n \to \infty} \|x_n\| \le r$, and $\limsup_{n \to \infty} \|y_n\| \le r$, where $r$ is some nonnegative real number. Then, $\lim_{n \to \infty} \|y_n - x_n\| = 0$.
Lemma 3.
Reference [34]. Let $C$ be a convex and closed set in an infinite-dimensional Hilbert space $H$ and let $S$ be a non-expansive mapping on $C$ with a non-empty fixed-point set. Let $\{x_n\}$ be a vector sequence in $C$. If $\{x_n\}$ converges weakly to $x$ and $\lim_{n \to \infty} \|(S - I)x_n\| = 0$, then $x \in \mathrm{Fix}(S)$.

2. Main Results

Theorem 1.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Let $B$ be a maximally monotone mapping on $H$. Assume that $\mathrm{Dom}(B)$ lies in $C$ and $\mathrm{Fix}(S) \cap (A + B)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[a, b]$ for some $a, b \in (0, 1)$, and let $\{r_n\}$ be a real number sequence in $[c, d]$ for some $c, d \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in C \text{ arbitrarily chosen}, \quad y_n = J_{r_n}(x_n - r_nAx_n - e_n), \quad x_{n+1} = \alpha_nx_n + (1 - \alpha_n)S\,\mathrm{Proj}_C^H(y_n - r_nAy_n + r_nAx_n + e_n), \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ with $\sum_{n=0}^{\infty}\|e_n\| < \infty$. Then, $\{x_n\}$ converges weakly to a point $\bar{x} \in \mathrm{Fix}(S) \cap (A + B)^{-1}(0)$.
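A minimal numerical sketch of the algorithm of Theorem 1, with the illustrative choices $B = \partial i_C$ for a box $C$ (so $J_{r_n} = \mathrm{Proj}_C$), an affine monotone $A$, $S = Id$, a constant step size, and geometrically decaying (hence summable) errors $e_n$; none of these instances come from the paper.

```python
import numpy as np

def tseng_splitting(A_op, J, proj_C, S, x0, r, alpha, err, iters):
    """Algorithm of Theorem 1 with constant step r and relaxation alpha:
    y_n = J_r(x_n - r A x_n - e_n),
    x_{n+1} = a x_n + (1 - a) S Proj_C(y_n - r A y_n + r A x_n + e_n)."""
    x = x0
    for n in range(iters):
        e = err(n)
        y = J(x - r * A_op(x) - e)
        x = alpha * x + (1 - alpha) * S(proj_C(y - r * A_op(y) + r * A_op(x) + e))
    return x

# Illustrative instance: A x = M x + q (monotone, Lipschitz), B = the
# subdifferential of the indicator of the box C = [0, 5]^2 (so J_r = Proj_C),
# and S = Id (so every point of C is a fixed point of S).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-1.0, -1.0])
A_op = lambda v: M @ v + q
L = np.linalg.norm(M, 2)                    # Lipschitz constant of A
proj_C = lambda v: np.clip(v, 0.0, 5.0)
x = tseng_splitting(A_op, J=proj_C, proj_C=proj_C, S=lambda v: v,
                    x0=np.zeros(2), r=0.9 / (np.sqrt(2) * L), alpha=0.5,
                    err=lambda n: 0.5 ** n * np.ones(2), iters=600)
```

Here the zero of $A$, the point $(0.2, 0.6)$, lies in the interior of the box, so it is the unique point of $(A + B)^{-1}(0)$ and the iterates converge to it.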
Proof. 
Set $z_n = y_n - r_nAy_n + r_nAx_n + e_n$. Fixing $x^* \in (A + B)^{-1}(0) \cap \mathrm{Fix}(S)$, we find that
$$\|x_n - x^*\|^2 = \|(x_n - y_n) + (y_n - z_n) + (z_n - x^*)\|^2 = \|x_n - y_n\|^2 - \|y_n - z_n\|^2 + \|z_n - x^*\|^2 + 2\langle x_n - z_n, y_n - x^*\rangle \ge \|x_n - y_n\|^2 - \|r_nAy_n - r_nAx_n - e_n\|^2 + \|z_n - x^*\|^2,$$
where the last inequality uses the fact that $\langle x_n - z_n, y_n - x^*\rangle \ge 0$, which follows from the monotonicity of $A$ and $B$.
Using the Lipschitz continuity of the mapping $A$, one finds that
$$\|z_n - x^*\|^2 \le \|x_n - x^*\|^2 - \|x_n - y_n\|^2 + \|r_nAy_n - r_nAx_n - e_n\|^2 \le \|x_n - x^*\|^2 - \|x_n - y_n\|^2 + 2r_n^2\|Ay_n - Ax_n\|^2 + 2\|e_n\|^2 \le \|x_n - x^*\|^2 - (1 - 2r_n^2L^2)\|x_n - y_n\|^2 + 2\|e_n\|^2.$$
It follows that
$$\|x_{n+1} - x^*\|^2 \le \alpha_n\|x_n - x^*\|^2 + (1 - \alpha_n)\|S\,\mathrm{Proj}_C^Hz_n - x^*\|^2 \le \alpha_n\|x_n - x^*\|^2 + (1 - \alpha_n)\|z_n - x^*\|^2 \le \|x_n - x^*\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - y_n\|^2 + 2\|e_n\|^2. \quad (2)$$
In light of Lemma 1, one finds that the limit $\lim_{n \to \infty}\|x_n - x^*\|$ exists; in particular, the vector sequence $\{x_n\}$ is bounded. By using (2), one gets that
$$(1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - y_n\|^2 \le \|x^* - x_n\|^2 - \|x^* - x_{n+1}\|^2 + 2\|e_n\|^2.$$
Thanks to the conditions on $\{\alpha_n\}$, $\{r_n\}$, and $\{e_n\}$, one obtains
$$\lim_{n \to \infty}\|x_n - y_n\| = 0. \quad (3)$$
Notice that the vector sequence $\{x_n\}$ is bounded. Hence, there is a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ converging weakly to some $\bar{x} \in C$. In light of (3), we find that the corresponding subsequence $\{y_{n_i}\}$ of $\{y_n\}$ also converges weakly to $\bar{x}$.
Now, one is in a position to claim that $\bar{x}$ lies in $(A + B)^{-1}(0)$. Notice that
$$\frac{x_n - y_n - e_n}{r_n} - Ax_n \in By_n.$$
Suppose $\mu \in B\nu$. By using the monotonicity of the mapping $B$, one reaches
$$\left\langle \frac{x_n - y_n - e_n}{r_n} - Ax_n - \mu, y_n - \nu\right\rangle \ge 0.$$
Taking into account the fact that $A$ is sequentially weakly continuous, one arrives at $\langle -A\bar{x} - \mu, \bar{x} - \nu\rangle \ge 0$. Since $B$ is maximally monotone, this guarantees $-A\bar{x} \in B\bar{x}$, that is, $\bar{x} \in (A + B)^{-1}(0)$.
On the other hand, we have that $\bar{x} \in \mathrm{Fix}(S)$. Indeed, set $\lim_{n \to \infty}\|x_n - x^*\| = d$. It follows from (2) that $\|S\,\mathrm{Proj}_C^Hz_n - x^*\| \le \|x_n - x^*\| + 2\|e_n\|$. This shows that $\limsup_{n \to \infty}\|S\,\mathrm{Proj}_C^Hz_n - x^*\| \le d$. It follows from Lemma 2 that
$$\lim_{n \to \infty}\|S\,\mathrm{Proj}_C^Hz_n - x_n\| = 0. \quad (4)$$
Since $A$ is $L$-Lipschitz continuous, we find that
$$\|Sy_n - y_n\| \le \|Sy_n - S\,\mathrm{Proj}_C^Hz_n\| + \|S\,\mathrm{Proj}_C^Hz_n - x_n\| + \|x_n - y_n\| \le \|y_n - \mathrm{Proj}_C^Hz_n\| + \|S\,\mathrm{Proj}_C^Hz_n - x_n\| + \|x_n - y_n\| \le (1 + r_nL)\|y_n - x_n\| + \|e_n\| + \|S\,\mathrm{Proj}_C^Hz_n - x_n\|.$$
Combining (3) with (4), we obtain that $\lim_{n \to \infty}\|Sy_n - y_n\| = 0$. Thanks to Lemma 3, one concludes that $\bar{x} \in \mathrm{Fix}(S)$.
Next, one claims that the whole sequence $\{x_n\}$ converges weakly to $\bar{x}$. If not, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ converging weakly to some $\hat{x} \in C$ with $\hat{x} \ne \bar{x}$. Similarly, one has $\hat{x} \in \mathrm{Fix}(S) \cap (A + B)^{-1}(0)$. Since $\lim_{n \to \infty}\|x_n - p\|$ exists for every $p \in \mathrm{Fix}(S) \cap (A + B)^{-1}(0)$, one may suppose that $\lim_{n \to \infty}\|x_n - \bar{x}\| = d$, where $d$ is a nonnegative number. By using Opial's condition, one arrives at
$$d = \liminf_{i \to \infty}\|x_{n_i} - \bar{x}\| < \liminf_{i \to \infty}\|x_{n_i} - \hat{x}\| = \liminf_{j \to \infty}\|x_{n_j} - \hat{x}\| < \liminf_{j \to \infty}\|x_{n_j} - \bar{x}\| = d.$$
One reaches a contradiction. Hence, $\bar{x} = \hat{x}$. □
The following result is not hard to derive from Theorem 1.
Corollary 1.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Let $B$ be a maximally monotone mapping on $H$. Suppose that $\mathrm{Dom}(B) \subset C$ and $(A + B)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[a, b]$ for some $a, b \in (0, 1)$, and let $\{r_n\}$ be a real number sequence in $[c, d]$ for some $c, d \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following process:
$$x_0 \in C \text{ arbitrarily chosen}, \quad y_n = J_{r_n}(x_n - r_nAx_n - e_n), \quad x_{n+1} = \alpha_nx_n + (1 - \alpha_n)\mathrm{Proj}_C^H(y_n + r_nAx_n - r_nAy_n + e_n), \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ such that $\sum_{n=0}^{\infty}\|e_n\| < \infty$. Then, $\{x_n\}$ converges weakly to a point $\bar{x} \in (A + B)^{-1}(0)$.
Next, one is ready to present the other convergence theorem.
Theorem 2.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Let $B$ be a maximally monotone mapping on $H$. Assume that $\mathrm{Dom}(B)$ lies in $C$ and $\mathrm{Fix}(S) \cap (A + B)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$, and let $\{r_n\}$ be a real number sequence in $[b, c]$ for some $b, c \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ arbitrarily chosen}, \quad C_0 = C,$$
$$z_n = J_{r_n}(x_n - r_nAx_n - e_n),$$
$$y_n = \alpha_nx_n + (1 - \alpha_n)S\,\mathrm{Proj}_C^H(z_n - r_nAz_n + r_nAx_n + e_n),$$
$$C_{n+1} = \{w \in C_n : \|y_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2\},$$
$$x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence with the restriction $\lim_{n \to \infty}\|e_n\| = 0$. Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{\mathrm{Fix}(S) \cap (A + B)^{-1}(0)}^Hx_0$.
Proof. 
First, we show that the set $C_n$ is closed and convex. It is clear that $C_n$ is closed, so we only show the convexity of $C_n$. By assumption, $C_0$ is convex. Suppose that $C_m$ is a convex set for some $m \ge 0$. Next, one claims that $C_{m+1}$ is also a convex set. Since
$$\|w - y_n\|^2 \le \|w - x_n\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2$$
is equivalent to
$$\|y_n\|^2 - \|x_n\|^2 + (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 - 2\|e_n\|^2 + 2\langle x_n - y_n, w\rangle \le 0,$$
which is affine in $w$, we easily find that $C_{m+1}$ is a convex set. This claims that the set $C_n$ is convex and closed. Next, we show that $\mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_n$. From the assumption, we see that $\mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_0$. Suppose that $\mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_m$ for some $m \ge 0$. Next, we show that $\mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_{m+1}$ for the same $m$. Set $v_n = z_n - r_nAz_n + r_nAx_n + e_n$. For any $w \in \mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_m$, we find that
$$\|x_n - w\|^2 = \|(x_n - z_n) + (z_n - v_n) + (v_n - w)\|^2 = \|x_n - z_n\|^2 - \|z_n - v_n\|^2 + \|v_n - w\|^2 + 2\langle x_n - v_n, z_n - w\rangle \ge \|x_n - z_n\|^2 - \|r_nAz_n - r_nAx_n - e_n\|^2 + \|v_n - w\|^2.$$
Thanks to the fact that $A$ is a Lipschitz continuous mapping, one asserts that
$$\|v_n - w\|^2 \le \|x_n - w\|^2 - \|z_n - x_n\|^2 + \|r_nAz_n - r_nAx_n - e_n\|^2 \le \|x_n - w\|^2 - \|z_n - x_n\|^2 + 2r_n^2\|Az_n - Ax_n\|^2 + 2\|e_n\|^2 \le \|x_n - w\|^2 - (1 - 2r_n^2L^2)\|z_n - x_n\|^2 + 2\|e_n\|^2.$$
It follows that
$$\|y_n - w\|^2 \le (1 - \alpha_n)\|S\,\mathrm{Proj}_C^Hv_n - w\|^2 + \alpha_n\|x_n - w\|^2 \le (1 - \alpha_n)\|v_n - w\|^2 + \alpha_n\|x_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2.$$
This implies that $w \in C_{n+1}$. This proves that $\mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_n$ for all $n$. Since $x_n = \mathrm{Proj}_{C_n}^Hx_0$ and $x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0 \in C_{n+1} \subset C_n$, we find that
$$0 \le \langle x_n - x_0, x_{n+1} - x_n\rangle \le -\|x_n - x_0\|^2 + \|x_n - x_0\|\|x_{n+1} - x_0\|.$$
This implies that $\|x_n - x_0\| \le \|x_{n+1} - x_0\|$. For any $w \in \mathrm{Fix}(S) \cap (A + B)^{-1}(0) \subset C_n$, we find from $x_n = \mathrm{Proj}_{C_n}^Hx_0$ that $\|x_0 - x_n\| \le \|x_0 - w\|$; in particular, $\|x_0 - x_n\| \le \|x_0 - \mathrm{Proj}_{\mathrm{Fix}(S) \cap (A + B)^{-1}(0)}^Hx_0\|$. This claims that the vector sequence $\{x_n\}$ is bounded and the limit $\lim_{n \to \infty}\|x_0 - x_n\|$ exists. Note that
$$\|x_n - x_{n+1}\|^2 = 2\langle x_n - x_0, x_0 - x_n + x_n - x_{n+1}\rangle + \|x_0 - x_{n+1}\|^2 + \|x_n - x_0\|^2 = \|x_0 - x_{n+1}\|^2 - \|x_n - x_0\|^2 + 2\langle x_0 - x_n, x_{n+1} - x_n\rangle \le \|x_0 - x_{n+1}\|^2 - \|x_n - x_0\|^2.$$
Letting $n \to \infty$, one obtains $\|x_n - x_{n+1}\| \to 0$. In light of $x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0 \in C_{n+1}$, we see that
$$\|y_n - x_{n+1}\|^2 \le \|x_n - x_{n+1}\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2.$$
It follows that $\lim_{n \to \infty}\|y_n - x_{n+1}\| = 0$, and hence $\lim_{n \to \infty}\|y_n - x_n\| = 0$. Using the restrictions imposed on the sequences $\{\alpha_n\}$, $\{r_n\}$, and $\{e_n\}$, we also find that $\lim_{n \to \infty}\|x_n - z_n\| = 0$. Since $\{x_n\}$ is a bounded sequence, there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ converging weakly to some $\bar{x} \in C$. One also obtains that the sequence $\{z_{n_i}\}$ converges weakly to $\bar{x}$. Note that
$$\frac{x_n - z_n - e_n}{r_n} - Ax_n \in Bz_n.$$
Next, we suppose that $\mu$ is a point in $B\nu$. The monotonicity of $B$ yields that
$$\left\langle \frac{x_n - z_n - e_n}{r_n} - Ax_n - \mu, z_n - \nu\right\rangle \ge 0.$$
Since $A$ is a sequentially weakly continuous mapping, we obtain that $\langle -A\bar{x} - \mu, \bar{x} - \nu\rangle \ge 0$. This yields that $-A\bar{x} \in B\bar{x}$. Hence, one obtains $\bar{x} \in (A + B)^{-1}(0)$.
One is now in a position to claim that $\bar{x} \in \mathrm{Fix}(S)$. From $\lim_{n \to \infty}\|y_n - x_n\| = 0$ and $\alpha_n \le a < 1$, it follows that $\lim_{n \to \infty}\|S\,\mathrm{Proj}_C^Hv_n - x_n\| = 0$. Since $A$ is Lipschitz continuous, one has
$$\|Sx_n - x_n\| \le \|Sx_n - Sz_n\| + \|Sz_n - S\,\mathrm{Proj}_C^Hv_n\| + \|S\,\mathrm{Proj}_C^Hv_n - x_n\| \le \|x_n - z_n\| + \|z_n - \mathrm{Proj}_C^Hv_n\| + \|S\,\mathrm{Proj}_C^Hv_n - x_n\| \le (1 + r_nL)\|z_n - x_n\| + \|e_n\| + \|S\,\mathrm{Proj}_C^Hv_n - x_n\|.$$
This proves that $\lim_{n \to \infty}\|Sx_n - x_n\| = 0$. In light of Lemma 3, one finds $\bar{x} \in \mathrm{Fix}(S)$. Put $\tilde{x} = \mathrm{Proj}_{\mathrm{Fix}(S) \cap (A + B)^{-1}(0)}^Hx_0$. Since $x_n = \mathrm{Proj}_{C_n}^Hx_0$ and $\tilde{x} \in C_n$, we find that $\|x_0 - x_n\| \le \|x_0 - \tilde{x}\|$. Note that
$$\|x_0 - \tilde{x}\| \le \|x_0 - \bar{x}\| \le \liminf_{i \to \infty}\|x_0 - x_{n_i}\| \le \limsup_{i \to \infty}\|x_0 - x_{n_i}\| \le \|x_0 - \tilde{x}\|.$$
It follows that
$$\|x_0 - \bar{x}\| = \lim_{i \to \infty}\|x_0 - x_{n_i}\| = \|x_0 - \tilde{x}\|,$$
from which one gets $x_{n_i} \to \bar{x} = \tilde{x}$ strongly. From the arbitrariness of $\{x_{n_i}\}$, one has $x_n \to \tilde{x}$. □
The following results can be derived immediately from Theorem 2.
Corollary 2.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Let $B$ be a maximally monotone mapping on $H$. Assume that $\mathrm{Dom}(B)$ lies in $C$ and $(A + B)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$, and let $\{r_n\}$ be a real number sequence in $[b, c]$ for some $b, c \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ chosen arbitrarily}, \quad C_0 = C,$$
$$z_n = J_{r_n}(x_n - r_nAx_n - e_n),$$
$$y_n = \alpha_nx_n + (1 - \alpha_n)\mathrm{Proj}_C^H(z_n - r_nAz_n + r_nAx_n + e_n),$$
$$C_{n+1} = \{w \in C_n : \|y_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2\},$$
$$x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0,$$
where $J_{r_n} = (I + r_nB)^{-1}$ and $\{e_n\}$ is an error sequence in $H$ such that $\lim_{n \to \infty}\|e_n\| = 0$. Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{(A + B)^{-1}(0)}^Hx_0$.
Corollary 3.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ chosen arbitrarily}, \quad C_0 = C, \quad y_n = \alpha_nx_n + (1 - \alpha_n)Sx_n, \quad C_{n+1} = \{w \in C_n : \|y_n - w\| \le \|x_n - w\|\}, \quad x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0.$$
Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{\mathrm{Fix}(S)}^Hx_0$.
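Corollary 3 can be simulated on $H = \mathbb{R}$, where each $C_{n+1}$ is the intersection of an interval with a halfline and therefore remains an interval, so the projection $\mathrm{Proj}_{C_{n+1}}x_0$ is a simple clip. The mapping $St = t/2$ and the interval $C = [-4, 4]$ are illustrative choices, not taken from the paper.

```python
def hybrid_projection_1d(S, x0, lo, hi, alpha=0.0, iters=80):
    """Corollary 3 on H = R with C = [lo, hi]:
    y_n = a x_n + (1 - a) S x_n,
    C_{n+1} = {w in C_n : |y_n - w| <= |x_n - w|},  x_{n+1} = Proj_{C_{n+1}}(x0).
    Each C_{n+1} is an interval, so the projection is a clip."""
    x = x0
    for _ in range(iters):
        y = alpha * x + (1 - alpha) * S(x)
        if x > y:                  # points at least as close to y: w <= (x + y)/2
            hi = min(hi, (x + y) / 2)
        elif x < y:                # points at least as close to y: w >= (x + y)/2
            lo = max(lo, (x + y) / 2)
        x = min(max(x0, lo), hi)   # Proj_{C_{n+1}}(x0)
    return x

x = hybrid_projection_1d(lambda t: t / 2, x0=3.0, lo=-4.0, hi=4.0)
```

Here $\mathrm{Fix}(S) = \{0\}$ and $\mathrm{Proj}_{\mathrm{Fix}(S)}x_0 = 0$, so the iterates shrink toward the origin.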

3. Applications

This section gives some results on solutions of variational inequalities, minimizers of convex functions, and solutions of equilibrium problems.
Let $H$ be a real Hilbert space and let $C$ be a convex and closed set in $H$. Let $i_C$ be the indicator function of $C$, defined by
$$i_C(x) = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$$
One knows that the indicator function $i_C$ is proper, convex, and lower semicontinuous, and its subdifferential $\partial i_C$ is maximally monotone. Define the resolvent mapping of the subdifferential operator $\partial i_C$ by $J_r := (I + r\partial i_C)^{-1}$. Letting $x = J_ry$, one finds
$$y \in r\partial i_Cx + x \iff y \in rN_Cx + x \iff \langle y - x, z - x\rangle \le 0, \ \forall z \in C \iff x = \mathrm{Proj}_C^Hy,$$
where $N_Cx := \{z \in H : \langle z, v - x\rangle \le 0, \ \forall v \in C\}$. If $B = \partial i_C$ in Theorems 1 and 2, then the following results can be derived immediately.
Theorem 3.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Assume that $\mathrm{Fix}(S) \cap VI(C, A)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[a, b]$ for some $a, b \in (0, 1)$, and let $\{r_n\}$ be a real number sequence in $[c, d]$ for some $c, d \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in C \text{ chosen arbitrarily}, \quad y_n = \mathrm{Proj}_C^H(x_n - r_nAx_n - e_n), \quad x_{n+1} = \alpha_nx_n + (1 - \alpha_n)S\,\mathrm{Proj}_C^H(y_n - r_nAy_n + r_nAx_n + e_n), \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ such that $\sum_{n=0}^{\infty}\|e_n\| < \infty$. Then, $\{x_n\}$ converges weakly to a point $\bar{x} \in \mathrm{Fix}(S) \cap VI(C, A)$.
Theorem 4.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $A : C \to H$ be a monotone mapping that is both $L$-Lipschitz continuous and sequentially weakly continuous. Assume that $\mathrm{Fix}(S) \cap VI(C, A)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$, and let $\{r_n\}$ be a real number sequence in $[c, d]$ for some $c, d \in (0, 1/(\sqrt{2}L))$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ chosen arbitrarily}, \quad C_0 = C,$$
$$z_n = \mathrm{Proj}_C^H(x_n - r_nAx_n - e_n),$$
$$y_n = \alpha_nx_n + (1 - \alpha_n)S\,\mathrm{Proj}_C^H(z_n - r_nAz_n + r_nAx_n + e_n),$$
$$C_{n+1} = \{w \in C_n : \|y_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)(1 - 2r_n^2L^2)\|x_n - z_n\|^2 + 2\|e_n\|^2\},$$
$$x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ such that $\lim_{n \to \infty}\|e_n\| = 0$. Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{\mathrm{Fix}(S) \cap VI(C, A)}^Hx_0$.
Next, we consider minimizers of a proper convex and lower semicontinuous function.
Let $f : H \to (-\infty, \infty]$ be a proper, lower semicontinuous, convex function. One can define the subdifferential mapping $\partial f$ by $\partial f(x) = \{x^* \in H : f(x) + \langle y - x, x^*\rangle \le f(y), \ \forall y \in H\}$, $x \in H$. Rockafellar [3] proved that subdifferential mappings are maximally monotone and that $0 \in \partial f(v)$ if and only if $f(v) = \min_{x \in H}f(x)$.
Theorem 5.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $f : H \to (-\infty, +\infty]$ be a proper, convex, lower semicontinuous function such that $(\partial f)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[a, b]$ for some $a, b \in (0, 1)$, and let $\{r_n\}$ be a real number sequence in $[c, d]$ for some positive real numbers $c \le d$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in C \text{ chosen arbitrarily}, \quad y_n = \arg\min_{z \in H}\left\{f(z) + \frac{\|z - x_n + e_n\|^2}{2r_n}\right\}, \quad x_{n+1} = \alpha_nx_n + (1 - \alpha_n)\mathrm{Proj}_C^H(y_n + e_n), \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ such that $\sum_{n=0}^{\infty}\|e_n\| < \infty$. Then, $\{x_n\}$ converges weakly to a point $\bar{x} \in (\partial f)^{-1}(0)$.
Proof. 
From the assumption that $f : H \to (-\infty, \infty]$ is proper, convex, and lower semicontinuous, one obtains that the subdifferential $\partial f$ is maximally monotone. Setting $A = 0$ and $y_n = J_{r_n}(x_n - e_n)$ in Theorem 1, one sees that
$$y_n = \arg\min_{z \in H}\left\{f(z) + \frac{\|z - x_n + e_n\|^2}{2r_n}\right\}$$
is equivalent to
$$0 \in \partial f(y_n) + \frac{1}{r_n}(y_n - x_n + e_n).$$
It follows that
$$x_n - e_n \in y_n + r_n\partial f(y_n),$$
that is, $y_n = J_{r_n}(x_n - e_n)$. By using Theorem 1, we draw the desired conclusion immediately. □
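The scheme of Theorem 5 is a proximal point iteration with errors. A minimal sketch with the illustrative choice $f(x) = \frac{1}{2}\|x - c\|^2$, so the proximity mapping $\arg\min_z\{f(z) + \|z - v\|^2/(2r)\}$ has the closed form $(v + rc)/(1 + r)$, and with $C = \mathbb{R}^2$, so the projection is the identity; neither choice comes from the paper.

```python
import numpy as np

# Illustrative objective f(x) = (1/2)||x - c||^2; its proximity mapping
# argmin_z { f(z) + ||z - v||^2 / (2 r) } is (v + r c) / (1 + r).
c = np.array([1.0, -2.0])
prox_f = lambda v, r: (v + r * c) / (1 + r)

x = np.zeros(2)
alpha, r = 0.5, 1.0
for n in range(200):
    e = 0.5 ** n * np.ones(2)     # summable error sequence e_n
    y = prox_f(x - e, r)          # y_n = argmin_z { f(z) + ||z - x_n + e_n||^2/(2 r_n) }
    x = alpha * x + (1 - alpha) * (y + e)   # Proj_C^H = Id since C = R^2
```

The iterates converge to $c$, the unique minimizer of $f$, i.e., the unique point of $(\partial f)^{-1}(0)$.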
Theorem 6.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $f : H \to (-\infty, +\infty]$ be a proper, convex, lower semicontinuous function such that $(\partial f)^{-1}(0)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$, and let $\{r_n\}$ be a real number sequence in $[b, c]$ for some positive real numbers $b \le c$. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ chosen arbitrarily}, \quad C_0 = C,$$
$$z_n = \arg\min_{z \in H}\left\{f(z) + \frac{\|z - x_n + e_n\|^2}{2r_n}\right\},$$
$$y_n = \alpha_nx_n + (1 - \alpha_n)\mathrm{Proj}_C^H(z_n + e_n),$$
$$C_{n+1} = \{w \in C_n : \|y_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)\|x_n - z_n\|^2 + 2\|e_n\|^2\},$$
$$x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0,$$
where $\{e_n\}$ is an error sequence in $H$ such that $\lim_{n \to \infty}\|e_n\| = 0$. Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{(\partial f)^{-1}(0)}^Hx_0$.
Proof. 
From the assumption that $f : H \to (-\infty, \infty]$ is proper, convex, and lower semicontinuous, one sees that the subdifferential $\partial f$ is maximally monotone. Setting $A = 0$ and $z_n = J_{r_n}(x_n - e_n)$ in Theorem 2, one sees that
$$z_n = \arg\min_{z \in H}\left\{f(z) + \frac{\|z - x_n + e_n\|^2}{2r_n}\right\}$$
is equivalent to
$$0 \in \partial f(z_n) + \frac{1}{r_n}(z_n - x_n + e_n).$$
It follows that
$$x_n - e_n \in z_n + r_n\partial f(z_n),$$
that is, $z_n = J_{r_n}(x_n - e_n)$. By using Theorem 2, we draw the desired conclusion immediately. □
Finally, we consider an equilibrium problem, also known as the Ky Fan inequality [36], in the sense of Blum and Oettli [37].
We employ $\mathbb{R}$ to denote the set of real numbers. Let $F$ be a bifunction mapping $C \times C$ to $\mathbb{R}$. The equilibrium problem consists of finding $\bar{x} \in C$ such that
$$F(\bar{x}, y) \ge 0, \quad \forall y \in C. \quad (5)$$
Hereafter, $EP(F)$ denotes the solution set of problem (5).
In order to study solutions of equilibrium problem (5), the following routine restrictions on F are needed:
(R1) for each $x \in C$, $y \mapsto F(x, y)$ is convex and lower semicontinuous;
(R2) for each $x, y, z \in C$, $\limsup_{t \downarrow 0} F(tz + (1 - t)x, y) \le F(x, y)$;
(R3) for each $x \in C$, $F(x, x) = 0$;
(R4) for each $x, y \in C$, $F(x, y) + F(y, x) \le 0$.
The following lemma is on a resolvent mapping associated with F, introduced in [38].
Lemma 4.
Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying restrictions (R1)–(R4). Let $r > 0$ and $x \in H$. Then, there exists a vector $z \in C$ such that $\langle y - z, z - x\rangle + rF(z, y) \ge 0$ for all $y \in C$. Define a mapping $T_r : H \to C$ by
$$T_rx = \{z \in C : \langle y - z, z - x\rangle + rF(z, y) \ge 0, \ \forall y \in C\} \quad (6)$$
for each $x \in H$ and each $r > 0$. Then, (1) $\mathrm{Fix}(T_r) = EP(F)$ is convex and closed; (2) $T_r$ is single-valued and firmly non-expansive.
Lemma 5.
Reference [39]. Let $F$ be a bifunction satisfying restrictions (R1)–(R4), and let $A^F$ be a mapping on $H$ defined by
$$A^Fx = \begin{cases} \emptyset, & x \notin C, \\ \{z \in H : F(x, y) + \langle x - y, z\rangle \ge 0, \ \forall y \in C\}, & x \in C. \end{cases} \quad (7)$$
Then, $A^F$ is a maximally monotone mapping such that $D(A^F) \subset C$, $EP(F) = (A^F)^{-1}(0)$, and $T_rx = (I + rA^F)^{-1}x$ for all $x \in H$ and $r > 0$, where $T_r$ is defined as in (6).
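For the illustrative bifunction $F(z, y) = \langle Mz, y - z\rangle$ with a monotone matrix $M$ and $C = \mathbb{R}^2$ (a choice not taken from the paper), the defining inequality of $T_r$ reduces to the linear equation $Mz + (z - x)/r = 0$, so the resolvent can be computed directly and the inequality of Lemma 4 verified numerically:

```python
import numpy as np

# Illustrative bifunction F(z, y) = <M z, y - z> on C = R^2; the symmetric
# part of M is positive definite, so F satisfies (R1)-(R4).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
F = lambda z, y: float(np.dot(M @ z, y - z))

def T_r(x, r):
    """Resolvent of Lemma 4: on C = R^2 the defining inequality
    F(z, y) + (1/r)<y - z, z - x> >= 0 for all y forces M z + (z - x)/r = 0."""
    return np.linalg.solve(np.eye(2) + r * M, x)

x, r = np.array([3.0, -1.0]), 0.7
z = T_r(x, r)

# Check the defining inequality of Lemma 4 at random test points y; on
# C = R^2 it holds with equality, so every residual should vanish.
rng = np.random.default_rng(2)
residuals = [F(z, y) + np.dot(y - z, z - x) / r
             for y in rng.standard_normal((200, 2))]
max_abs_residual = max(abs(v) for v in residuals)
```

This also illustrates the identity $T_rx = (I + rA^F)^{-1}x$ of Lemma 5, since here $A^F = M$.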
Thanks to Lemmas 4 and 5, one finds from Theorem 1 and Theorem 2 the following results on equilibrium problem (5) immediately.
Theorem 7.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying restrictions (R1)–(R4). Assume that $\mathrm{Fix}(S) \cap EP(F)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[a, b]$ for some $a, b \in (0, 1)$, and let $\{r_n\}$ be a real number sequence such that $r_n \ge c$, where $c$ is some positive real number. Let $\{x_n\}$ be a vector sequence generated by the following iterative process: $x_0 \in C$, $x_{n+1} = \alpha_nx_n + (1 - \alpha_n)S(I + r_nA^F)^{-1}x_n$, $n \ge 0$, where $A^F$ is defined by (7). Then, $\{x_n\}$ converges weakly to a point $\bar{x} \in \mathrm{Fix}(S) \cap EP(F)$.
Theorem 8.
Let $C$ be a convex and closed set in a Hilbert space $H$. Let $S$ be a non-expansive self-mapping on $C$ whose fixed-point set is non-empty. Let $F : C \times C \to \mathbb{R}$ be a bifunction satisfying restrictions (R1)–(R4). Assume that $\mathrm{Fix}(S) \cap EP(F)$ is not empty. Let $\{\alpha_n\}$ be a real number sequence in $[0, a]$ for some $a \in [0, 1)$, and let $\{r_n\}$ be a real number sequence such that $r_n \ge c$, where $c$ is some positive real number. Let $\{x_n\}$ be a vector sequence generated by the following iterative process:
$$x_0 \in H \text{ chosen arbitrarily}, \quad C_0 = C,$$
$$z_n = (I + r_nA^F)^{-1}x_n,$$
$$y_n = \alpha_nx_n + (1 - \alpha_n)Sz_n,$$
$$C_{n+1} = \{w \in C_n : \|y_n - w\|^2 \le \|x_n - w\|^2 - (1 - \alpha_n)\|x_n - z_n\|^2\},$$
$$x_{n+1} = \mathrm{Proj}_{C_{n+1}}^Hx_0, \quad n \ge 0,$$
where $A^F$ is defined by (7). Then, $\{x_n\}$ converges strongly to $\mathrm{Proj}_{\mathrm{Fix}(S) \cap EP(F)}^Hx_0$.

Author Contributions

The authors contributed equally to this work.

Funding

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia under grant no. KEP-2-130-39. The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Browder, F.E. Fixed-point theorems for noncompact mappings in Hilbert space. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276. [Google Scholar] [CrossRef]
  2. Korpelevich, G.M. An extragradient method for finding saddle points and for other problems. Èkonomika i Matematicheskie Metody 1976, 12, 747–756. [Google Scholar]
  3. Rockafellar, R.T. Convex Analysis; Princeton University Press: Princeton, NJ, USA, 1970. [Google Scholar]
  4. Martinet, B. Regularisation d’inequations variationnelles par approximations successives. Rev. Franc. Inform. Rech. Oper. 1970, 4, 154–159. [Google Scholar]
  5. Martinet, B. Determination approchée d’un point fixe d’une application pseudo-contractante. C. R. Acad. Sci. Paris Ser. A–B 1972, 274, 163–165. [Google Scholar]
  6. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef]
  7. Rockafellar, R.T. Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1976, 1, 97–116. [Google Scholar] [CrossRef]
  8. Bin Dehaish, B.A.; Qin, X.; Latif, A.; Bakodah, H.O. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336. [Google Scholar]
  9. Ansari, Q.H.; Babu, F.; Yao, J.C. Regularization of proximal point algorithms in Hadamard manifolds. J. Fixed Point Theory Appl. 2019, 21, 25. [Google Scholar] [CrossRef]
  10. Cho, S.Y.; Li, W.; Kang, S.M. Convergence analysis of an iterative algorithm for monotone operators. J. Inequal. Appl. 2013, 2013, 199. [Google Scholar] [CrossRef] [Green Version]
  11. Bin Dehaish, B.A.; Latif, A.; Bakodah, H.O.; Qin, X. A regularization projection algorithm for various problems with nonlinear mappings in Hilbert spaces. J. Inequal. Appl. 2015, 2015, 51. [Google Scholar] [CrossRef] [Green Version]
  12. Spingarn, J.E. Applications of the method of partial inverses to convex programming decomposition. Math. Program. 1985, 32, 199–223. [Google Scholar] [CrossRef]
  13. Eckstein, J.; Bertsekas, D.P. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  14. Eckstein, J.; Ferris, M.C. Operator-splitting methods for monotone affine variational inequalities, with a parallel application to optimal control. Informs J. Comput. 1998, 10, 218–235. [Google Scholar] [CrossRef]
  15. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
  16. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef] [Green Version]
  17. Sibony, M. Méthodes itératives pour les équations et inéquations aux dérivées partielles nonlinéares de type monotone. Calcolo 1970, 7, 65–183. [Google Scholar] [CrossRef]
  18. Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31. [Google Scholar]
  19. Tseng, P. A modified forward-backward splitting methods for maximal monotone mappings. SIAM. J Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
  20. Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618. [Google Scholar] [CrossRef]
  21. Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118. [Google Scholar] [CrossRef]
  22. Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
  23. Browder, F.E. Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 1965, 54, 1041–1044. [Google Scholar] [CrossRef] [PubMed]
  24. Cho, S.Y. Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 2016, 9, 1083–1092. [Google Scholar] [CrossRef] [Green Version]
  25. Cho, S.Y.; Kang, S.M. Approximation of fixed points of pseudocontraction semigroups based on a viscosity iterative process. Appl. Math. Lett. 2011, 24, 224–228. [Google Scholar] [CrossRef]
  26. Takahahsi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
  27. Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
  28. Liu, L.; Qin, X.; Agarwal, R.P. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019. [Google Scholar] [CrossRef]
  29. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302. [Google Scholar] [CrossRef]
  30. Zhao, J.; Jia, Y.; Zhang, H. General alternative regularization methods for split equality common fixed-point problem. Optimization 2018, 67, 619–635. [Google Scholar] [CrossRef]
  31. He, S.; Wu, T.; Gibali, A.; Dong, Q. Totally relaxed, self-adaptive algorithm for solving variational inequalities over the intersection of sub-level sets. Optimization 2018, 67, 1487–1504. [Google Scholar] [CrossRef]
  32. Husain, S.; Singh, N. A hybrid iterative algorithm for a split mixed equilibrium problem and a hierarchical fixed point problem. Appl. Set-Valued Anal. Optim. 2019, 1, 149–169. [Google Scholar]
  33. Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506. [Google Scholar]
  34. Opial, Z. Weak convergence of sequence of successive approximations for non-expansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  35. Schu, J. Weak and strong convergence of fixed points of asymptotically non-expansive mappings. Bull. Austral. Math. Soc. 1991, 43, 153–159. [Google Scholar] [CrossRef]
  36. Fan, K. A minimax inequality and applications. In Inequality III; Shisha, O., Ed.; Academic Press: New York, NY, USA, 1972; pp. 103–113. [Google Scholar]
  37. Blum, E.; Oettli, W. From optimization and variational inequalities to equilibriums problems. Math. Stud. 1994, 63, 123–145. [Google Scholar]
  38. Combettes, P.L.; Hirstoaga, S.A. Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6, 117–136. [Google Scholar]
  39. Takahashi, S.; Takahashi, W.; Toyoda, M. Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147, 27–41. [Google Scholar] [CrossRef]

Share and Cite

MDPI and ACS Style

Kutbi, M.A.; Latif, A.; Qin, X. Convergence of Two Splitting Projection Algorithms in Hilbert Spaces. Mathematics 2019, 7, 922. https://doi.org/10.3390/math7100922


