Article

An Iterative Approach to the Solutions of Proximal Split Feasibility Problems

1 The Key Laboratory of Intelligent Information and Data Processing of NingXia Province, North Minzu University, Yinchuan 750021, China
2 Health Big Data Research Institute of North Minzu University, Yinchuan 750021, China
3 School of Mathematical Sciences, Tianjin Polytechnic University, Tianjin 300387, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 145; https://doi.org/10.3390/math7020145
Submission received: 10 January 2019 / Revised: 25 January 2019 / Accepted: 28 January 2019 / Published: 3 February 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract: The proximal split feasibility problem is investigated in Hilbert spaces. An iterative procedure is introduced for finding solutions of this problem, and strong convergence of the proposed algorithm is proved.

1. Introduction

Recall that the split feasibility problem (SFP) seeks a point $u$ such that
$$u \in C \quad \text{and} \quad Au \in Q, \tag{1}$$
where $C$ and $Q$ are two closed convex subsets of two real Hilbert spaces $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator.
In 1994, Censor and Elfving [1] introduced the above mathematical model, which arises in medical image reconstruction and phase retrieval. It provides a useful tool for studying inverse problems in science and engineering. One effective approach to solving SFP (1) is algorithmic iteration, and the literature contains a number of effective iterative algorithms presented by various authors (see, for instance, [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]).
In this paper, our goal is to focus on a general case of the proximal split feasibility problem and to investigate its convergence analysis. To begin with, we recall several related concepts.
Let $\phi : H_2 \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex and lower semi-continuous function and let $\epsilon > 0$ be a constant. Recall that the Moreau [29]–Yosida [30] regularization of $\phi$ is defined by
$$\phi_\epsilon(x) = \min_{u \in H_2} \Big\{ \phi(u) + \frac{1}{2\epsilon}\|u - x\|^2 \Big\}.$$
Consequently, the proximity operator of $\phi$ is defined by
$$\mathrm{prox}_{\epsilon\phi}(x) = \operatorname*{arg\,min}_{u \in H_2} \Big\{ \phi(u) + \frac{1}{2\epsilon}\|u - x\|^2 \Big\}.$$
The subdifferential of $\phi$ at $x$, denoted by $\partial\phi(x)$, is defined as
$$\partial\phi(x) = \{ x^* \in H_2 : \phi(x) + \langle x^*, u - x \rangle \le \phi(u) \ \text{for all } u \in H_2 \}.$$
It is easy to verify that $0 \in \partial\phi(x) \iff x = \mathrm{prox}_{\epsilon\phi}(x)$; that is, the minimizers of $\phi$ are exactly the fixed points of its proximity operator.
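As a concrete illustration (our own example, not from the paper), take $H_2 = \mathbb{R}^n$ and $\phi(u) = \|u\|_1$; then $\mathrm{prox}_{\epsilon\phi}$ has the closed form of componentwise soft-thresholding, and its fixed point at the origin is exactly the minimizer of $\phi$:

```python
import numpy as np

def prox_l1(x, eps):
    """Proximity operator of phi = ||.||_1 with parameter eps:
    argmin_u ||u||_1 + ||u - x||^2 / (2 * eps),
    which is componentwise soft-thresholding at level eps."""
    return np.sign(x) * np.maximum(np.abs(x) - eps, 0.0)

p = prox_l1(np.array([3.0, -0.5, 1.2]), 1.0)   # -> [2.0, -0.0, 0.2]
# The origin minimizes ||.||_1 and is a fixed point of its prox:
q = prox_l1(np.zeros(3), 1.0)                  # -> [0.0, 0.0, 0.0]
```

The function name `prox_l1` is ours; any lower semi-continuous, proper and convex $\phi$ with a computable prox would serve equally well.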
Let $\varphi : H_1 \to \mathbb{R} \cup \{+\infty\}$ be a proper, convex and lower semi-continuous function. Recall that the proximal split feasibility problem seeks a point $x \in H_1$ that solves the minimization problem
$$\min_{x \in H_1} \{ \varphi(x) + \phi_\epsilon(Ax) \}. \tag{2}$$
In what follows, we use $\Gamma$ to denote the solution set of problem (2).
Problem (2) has been studied extensively in the literature; see, for instance, [31,32,33,34,35]. To solve problem (2), Moudafi and Thakur [36] presented the following split proximal algorithm.
  • Fix an initialization $u_0 \in H_1$.
  • Assume that $u_n \in H_1$ has been obtained. Calculate $\nu(u_n) = \sqrt{\|\nabla g(u_n)\|^2 + \|\nabla h(u_n)\|^2}$, where $g(u_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\epsilon\phi})Au_n\|^2$ and $h(u_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\mu_n\epsilon\varphi})u_n\|^2$.
  • If $\nu(u_n) = 0$, then the iterative procedure stops; otherwise, compute the next iterate
    $$u_{n+1} = \mathrm{prox}_{\mu_n\epsilon\varphi}\big(u_n - \mu_n A^*(I - \mathrm{prox}_{\epsilon\phi})Au_n\big), \quad n \ge 0,$$
    where $\mu_n = \tau_n \dfrac{g(u_n) + h(u_n)}{\nu^2(u_n)}$.
Remark 1.
Note that the stepsize sequence $\{\mu_n\}$ is implicit, because the terms $h(u_n)$ and $\nu(u_n)$ themselves depend on $\mu_n$. This makes the computation of $u_{n+1}$ complicated.
To overcome this difficulty, Shehu and Iyiola [37] suggested the following explicit algorithm to solve problem (2).
  • Fix $u \in H_1$ and $u_1 \in H_1$.
  • Set $n = 1$ and calculate
    $$\begin{aligned} y_n &= \zeta_n u + (1 - \zeta_n)u_n, \\ \nu(y_n) &= \big\|A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n + (I - \mathrm{prox}_{\epsilon\varphi})y_n\big\|, \\ z_n &= y_n - \tau_n \frac{h(y_n) + l(y_n)}{\nu^2(y_n)}\big(A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n + (I - \mathrm{prox}_{\epsilon\varphi})y_n\big), \\ u_{n+1} &= (1 - \vartheta_n)y_n + \vartheta_n z_n. \end{aligned}$$
  • If $A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n = 0 = (I - \mathrm{prox}_{\epsilon\varphi})y_n$ and $u_{n+1} = u_n$, then the iterative process stops; otherwise, continue to the next step.
  • Set $n \leftarrow n + 1$ and repeat steps 2–3.
Remark 2.
In Step 3, we note that $A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n = 0 = (I - \mathrm{prox}_{\epsilon\varphi})y_n$ implies $\nu(y_n) = 0$. In this case, the iterates $z_n$ and $u_{n+1}$ are undefined, since the stepsize involves division by $\nu^2(y_n)$.
In the present paper, our goal is to close this gap and to suggest a modified proximal split feasibility algorithm for solving the proximal split feasibility problem (2). We prove that the generated sequence converges strongly to a solution of problem (2).

2. Preliminaries

Let $H$ be a real Hilbert space. Use $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$ to denote its inner product and norm, respectively. Let $C$ be a nonempty closed convex subset of $H$. Recall that a mapping $S : C \to C$ is said to be firmly nonexpansive [38] if
$$\|Su - Sv\|^2 \le \langle Su - Sv, u - v \rangle$$
for all $u, v \in C$.
Note that the operators $I - \mathrm{prox}_{\epsilon\varphi}$ and $I - \mathrm{prox}_{\epsilon\phi}$ are firmly nonexpansive, namely,
$$\|(I - \mathrm{prox}_{\epsilon\varphi})u - (I - \mathrm{prox}_{\epsilon\varphi})v\|^2 \le \langle (I - \mathrm{prox}_{\epsilon\varphi})u - (I - \mathrm{prox}_{\epsilon\varphi})v, u - v \rangle \tag{3}$$
for all $u, v \in H_1$, and
$$\|(I - \mathrm{prox}_{\epsilon\phi})u - (I - \mathrm{prox}_{\epsilon\phi})v\|^2 \le \langle (I - \mathrm{prox}_{\epsilon\phi})u - (I - \mathrm{prox}_{\epsilon\phi})v, u - v \rangle \tag{4}$$
for all $u, v \in H_2$.
For each $u \in H$, there exists a unique point in $C$, denoted by $\mathrm{proj}_C(u)$, such that
$$\|u - \mathrm{proj}_C(u)\| \le \|u - \tilde{u}\|$$
for all $\tilde{u} \in C$.
It is known that $\mathrm{proj}_C$ is firmly nonexpansive and has the following characterization [39]:
$$\langle u - \mathrm{proj}_C(u), \tilde{u} - \mathrm{proj}_C(u) \rangle \le 0$$
for all $u \in H$ and $\tilde{u} \in C$.
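For example (an illustration of ours, not part of the paper), the projection onto the closed ball $C = \{x : \|x\| \le r\}$ has a simple closed form, and the variational characterization above can be checked numerically:

```python
import numpy as np

def proj_ball(u, r):
    """Metric projection of u onto the closed ball C = {x : ||x|| <= r}."""
    nrm = np.linalg.norm(u)
    return u if nrm <= r else (r / nrm) * u

u = np.array([3.0, 4.0])       # ||u|| = 5
p = proj_ball(u, 1.0)          # -> [0.6, 0.8]
# Characterization: <u - proj_C(u), c - proj_C(u)> <= 0 for every c in C.
for c in (np.zeros(2), np.array([1.0, 0.0]), np.array([-0.6, 0.8])):
    assert np.dot(u - p, c - p) <= 1e-12
```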
An operator $F$ is called strongly positive if there exists a constant $\delta > 0$ such that $\langle Fu, u \rangle \ge \delta\|u\|^2$ for all $u \in H$.
The following expressions will be used in the sequel.
  • $u_n \rightharpoonup u$ denotes the weak convergence of $\{u_n\}$ to $u$;
  • $u_n \to u$ denotes the strong convergence of $\{u_n\}$ to $u$;
  • $\mathrm{Fix}(S)$ denotes the set of fixed points of $S$.
Lemma 1.
[40] In a real Hilbert space $H$, the following identity holds:
$$\|\lambda x + (1 - \lambda)\tilde{x}\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|\tilde{x}\|^2 - \lambda(1 - \lambda)\|x - \tilde{x}\|^2$$
for all $\lambda \in [0, 1]$ and $x, \tilde{x} \in H$.
Lemma 2.
[41] Suppose that three real sequences $\{u_n\}$, $\{v_n\}$ and $\{\theta_n\}$ satisfy the following conditions:
(i) $u_n \ge 0$ for all $n \ge 0$;
(ii) there exists a constant $M$ such that $v_n \le M$ for all $n \ge 0$;
(iii) $\theta_n \in [0, 1]$ and $\sum_{n=0}^{\infty} \theta_n = \infty$;
(iv) $u_{n+1} \le (1 - \theta_n)u_n + \theta_n v_n$ for all $n \ge 0$.
Then $\limsup_{n \to \infty} u_n \le \limsup_{n \to \infty} v_n$.
Lemma 3.
[42] Suppose that $H$ is a real Hilbert space and $C \subset H$ is a nonempty closed convex set. If $T$ is a nonexpansive self-mapping of $C$, then the operator $I - T$ is demi-closed at 0; i.e., $x_n \rightharpoonup x \in C$ and $x_n - Tx_n \to 0$ imply $x = Tx$.
Lemma 4.
[43] Assume that three real sequences $\{\rho_n\}$, $\{\eta_n\}$ and $\{\zeta_n\}$ satisfy the following assumptions:
(i) $\rho_n \ge 0$ for all $n \ge 0$;
(ii) $\{\zeta_n\} \subset [0, 1]$ and $\sum_{n=1}^{\infty} \zeta_n = \infty$;
(iii) $\limsup_{n \to \infty} \eta_n \le 0$;
(iv) $\rho_{n+1} \le (1 - \zeta_n)\rho_n + \zeta_n \eta_n$ for all $n \ge 0$.
Then $\lim_{n \to \infty} \rho_n = 0$.

3. Main Results

Suppose that
(i) $H_1$ and $H_2$ are two real Hilbert spaces and $C \subset H_1$ and $Q \subset H_2$ are two closed convex sets;
(ii) $A : H_1 \to H_2$ is a bounded linear operator, and $\varphi : H_1 \to \mathbb{R} \cup \{+\infty\}$ and $\phi : H_2 \to \mathbb{R} \cup \{+\infty\}$ are two proper, convex and lower semi-continuous functions.
In what follows, assume $\Gamma \ne \emptyset$. The following lemma plays a key role in constructing our algorithm and proving our main result.
Lemma 5.
[34] $z \in \Gamma$ if and only if $A^*(I - \mathrm{prox}_{\epsilon\phi})Az + (I - \mathrm{prox}_{\epsilon\varphi})z = 0$.
Next, we suggest the following algorithm by applying Lemma 5.
Let $f : H_1 \to H_1$ be a $\mu$-contraction and let $F : H_1 \to H_1$ be a strongly positive bounded linear operator with coefficient $\delta > 0$. Let $\{\zeta_n\} \subset (0, 1)$, $\{\vartheta_n\} \subset (0, 1)$ and $\{\tau_n\} \subset (0, +\infty)$ be three real sequences, and let $\gamma$ be a constant such that $0 < \gamma < \delta/\mu$.
  • Choose an initial point $x_0 \in H_1$ and set $n = 0$.
  • Calculate $y_n$ and $\nu(y_n)$ via the iterative procedures
    $$y_n = \zeta_n \gamma f(x_n) + (I - \zeta_n F)x_n \tag{5}$$
    and
    $$\nu(y_n) = A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n + (I - \mathrm{prox}_{\epsilon\varphi})y_n. \tag{6}$$
  • If $\nu(y_n) = 0$, then the iterative process stops (in this case, $y_n$ is a solution of (2) by Lemma 5); otherwise, continue to the next step.
  • Compute
    $$x_{n+1} = (1 - \vartheta_n)y_n + \vartheta_n z_n, \tag{7}$$
    where
    $$z_n = y_n - \tau_n \frac{g(y_n) + h(y_n)}{\|\nu(y_n)\|^2}\,\nu(y_n), \tag{8}$$
    in which $g(y_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\epsilon\phi})Ay_n\|^2$ and $h(y_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\epsilon\varphi})y_n\|^2$.
  • Set $n \leftarrow n + 1$ and repeat steps 2–4.
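The steps above can be sketched numerically. The following toy instantiation is ours, not from the paper: $H_1 = H_2 = \mathbb{R}^3$, $A = I$, $\varphi(x) = \frac{1}{2}\|x\|^2$, $\phi(y) = \|y\|_1$, $\epsilon = 1$, $F = I$ (so $\delta = 1$), $f(x) = x/2$ (a $\mu = 1/2$ contraction) and $\gamma = 1 < \delta/\mu$. With these choices $\Gamma = \{0\}$, so the iterates should approach the origin.

```python
import numpy as np

A = np.eye(3)
eps = 1.0
prox_phi = lambda y: np.sign(y) * np.maximum(np.abs(y) - eps, 0.0)  # prox of eps*||.||_1
prox_varphi = lambda x: x / (1.0 + eps)                             # prox of (eps/2)*||.||^2

gamma, f = 1.0, (lambda u: 0.5 * u)     # f is a (mu = 1/2)-contraction; F = I gives delta = 1
x = np.array([3.0, -2.0, 5.0])
for n in range(1, 500):
    zeta, theta, tau = 1.0 / (n + 1), 0.5, 2.0             # chosen to satisfy (C1)-(C3)
    y = zeta * gamma * f(x) + (1.0 - zeta) * x             # step (5), with F = I
    nu = A.T @ (A @ y - prox_phi(A @ y)) + (y - prox_varphi(y))    # step (6)
    if np.linalg.norm(nu) < 1e-14:                         # y solves (2) by Lemma 5
        x = y
        break
    g = 0.5 * np.linalg.norm(A @ y - prox_phi(A @ y)) ** 2
    h = 0.5 * np.linalg.norm(y - prox_varphi(y)) ** 2
    z = y - tau * (g + h) / np.linalg.norm(nu) ** 2 * nu   # step (8)
    x = (1.0 - theta) * y + theta * z                      # step (7)

print(np.linalg.norm(x))   # prints a value near 0
```

Here $\zeta_n = 1/(n+1)$, $\vartheta_n = 1/2$ and $\tau_n = 2$ satisfy conditions (C1)–(C3) of Theorem 1 below; the choices of $A$, $\varphi$, $\phi$ and the starting point are purely illustrative.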
Assume that the above iterates (5)–(8) do not terminate; that is, the sequence $\{x_n\}$ generated by (7) is infinite. In this case, we establish the convergence of $\{x_n\}$.
Theorem 1.
Suppose that the control parameters $\{\zeta_n\}$, $\{\vartheta_n\}$ and $\{\tau_n\}$ satisfy the following restrictions:
(C1) $\lim_{n \to \infty} \zeta_n = 0$ and $\sum_{n=0}^{\infty} \zeta_n = \infty$;
(C2) $0 < \liminf_{n \to \infty} \vartheta_n \le \limsup_{n \to \infty} \vartheta_n < 1$;
(C3) $\liminf_{n \to \infty} \tau_n(4 - \tau_n) > 0$.
Then the sequence $\{x_n\}$ generated by (7) converges strongly to $z$, where $z = \mathrm{proj}_\Gamma(I - F + \gamma f)z$.
Proof. 
Firstly, it is easy to check that the operator $\mathrm{proj}_\Gamma(I - F + \gamma f)$ is a contraction under the restriction $0 < \gamma < \delta/\mu$. Denote its unique fixed point by $z$; that is, $z = \mathrm{proj}_\Gamma(I - F + \gamma f)z$. Next, we show the boundedness of the sequence $\{x_n\}$. Since $z \in \Gamma$, we have $(I - \mathrm{prox}_{\epsilon\varphi})z = 0$ and $(I - \mathrm{prox}_{\epsilon\phi})Az = 0$. In terms of the firm nonexpansivity of the operators $I - \mathrm{prox}_{\epsilon\phi}$ and $I - \mathrm{prox}_{\epsilon\varphi}$, from (3) and (4) we have
$$2h(y_n) = \|(I - \mathrm{prox}_{\epsilon\varphi})y_n\|^2 \le \langle (I - \mathrm{prox}_{\epsilon\varphi})y_n, y_n - z \rangle \tag{9}$$
and
$$2g(y_n) = \|(I - \mathrm{prox}_{\epsilon\phi})Ay_n\|^2 \le \langle (I - \mathrm{prox}_{\epsilon\phi})Ay_n, Ay_n - Az \rangle. \tag{10}$$
By (6), (9) and (10), we obtain
$$\begin{aligned} \langle \nu(y_n), y_n - z \rangle &= \langle A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n + (I - \mathrm{prox}_{\epsilon\varphi})y_n, y_n - z \rangle \\ &= \langle (I - \mathrm{prox}_{\epsilon\phi})Ay_n, Ay_n - Az \rangle + \langle (I - \mathrm{prox}_{\epsilon\varphi})y_n, y_n - z \rangle \\ &\ge 2g(y_n) + 2h(y_n). \end{aligned} \tag{11}$$
This together with (8) implies that
$$\begin{aligned} \|z_n - z\|^2 &= \Big\|y_n - \tau_n \frac{g(y_n) + h(y_n)}{\|\nu(y_n)\|^2}\nu(y_n) - z\Big\|^2 \\ &= \|y_n - z\|^2 - 2\tau_n \frac{g(y_n) + h(y_n)}{\|\nu(y_n)\|^2}\langle \nu(y_n), y_n - z \rangle + \tau_n^2 \frac{(g(y_n) + h(y_n))^2}{\|\nu(y_n)\|^2} \\ &\le \|y_n - z\|^2 - \tau_n(4 - \tau_n)\frac{(g(y_n) + h(y_n))^2}{\|\nu(y_n)\|^2}. \end{aligned} \tag{12}$$
By condition (C3), without loss of generality, we may assume $0 < a \le \tau_n \le b < 4$ for all $n \ge 0$. In the light of (7) and (12), we have
$$\begin{aligned} \|x_{n+1} - z\| &\le (1 - \vartheta_n)\|y_n - z\| + \vartheta_n\|z_n - z\| \le \|y_n - z\| \\ &= \|\zeta_n\gamma(f(x_n) - f(z)) + \zeta_n(\gamma f(z) - F(z)) + (I - \zeta_n F)(x_n - z)\| \\ &\le \zeta_n\gamma\mu\|x_n - z\| + \zeta_n\|\gamma f(z) - F(z)\| + (1 - \delta\zeta_n)\|x_n - z\| \\ &= [1 - (\delta - \gamma\mu)\zeta_n]\|x_n - z\| + \zeta_n\|\gamma f(z) - F(z)\| \\ &\le \max\Big\{\|x_n - z\|, \frac{\|\gamma f(z) - F(z)\|}{\delta - \gamma\mu}\Big\} \le \cdots \le \max\Big\{\|x_0 - z\|, \frac{\|\gamma f(z) - F(z)\|}{\delta - \gamma\mu}\Big\}. \end{aligned} \tag{13}$$
Hence, the sequence $\{x_n\}$ is bounded, and consequently so are $\{y_n\}$ and $\{z_n\}$.
By virtue of (5), we have
$$\begin{aligned} \|y_n - z\|^2 &= \|\zeta_n(\gamma f(x_n) - F(z)) + (I - \zeta_n F)(x_n - z)\|^2 \\ &\le \|I - \zeta_n F\|^2\|x_n - z\|^2 + \zeta_n^2\|\gamma f(x_n) - F(z)\|^2 + 2\zeta_n\langle \gamma f(x_n) - F(z), (I - \zeta_n F)(x_n - z) \rangle \\ &\le (1 - \delta\zeta_n)^2\|x_n - z\|^2 + \zeta_n^2\|\gamma f(x_n) - F(z)\|^2 + 2\gamma\zeta_n\langle f(x_n) - f(z), (I - \zeta_n F)(x_n - z) \rangle \\ &\quad + 2\zeta_n\langle \gamma f(z) - F(z), x_n - z \rangle - 2\zeta_n^2\langle \gamma f(z) - F(z), F(x_n) - F(z) \rangle \\ &\le (1 - \delta\zeta_n)^2\|x_n - z\|^2 + \zeta_n^2\|\gamma f(x_n) - F(z)\|^2 + 2\gamma\mu(1 - \delta\zeta_n)\zeta_n\|x_n - z\|^2 \\ &\quad + 2\delta\zeta_n^2\|\gamma f(z) - F(z)\|\|x_n - z\| + 2\zeta_n\langle \gamma f(z) - F(z), x_n - z \rangle \\ &\le [1 - 2(\delta - \gamma\mu)\zeta_n]\|x_n - z\|^2 + 2\zeta_n\langle \gamma f(z) - F(z), x_n - z \rangle + \zeta_n^2 M_1, \end{aligned} \tag{14}$$
where $M_1 \ge \sup_{n \ge 0}\big\{\|\gamma f(x_n) - F(z)\|^2 + 2\delta\|\gamma f(z) - F(z)\|\|x_n - z\| + \delta^2\|x_n - z\|^2\big\}$.
On the basis of (7) and Lemma 1, we get
$$\|x_{n+1} - z\|^2 = \|(1 - \vartheta_n)(y_n - z) + \vartheta_n(z_n - z)\|^2 = (1 - \vartheta_n)\|y_n - z\|^2 + \vartheta_n\|z_n - z\|^2 - (1 - \vartheta_n)\vartheta_n\|y_n - z_n\|^2. \tag{15}$$
From (12) and (15), we deduce
$$\|x_{n+1} - z\|^2 \le \|y_n - z\|^2 - (1 - \vartheta_n)\vartheta_n\|y_n - z_n\|^2. \tag{16}$$
By (7), we note that
$$y_n - z_n = \frac{1}{\vartheta_n}(y_n - x_{n+1}). \tag{17}$$
Thus, combining (14) and (16) with (17), we get
$$\begin{aligned} \|x_{n+1} - z\|^2 &\le \|y_n - z\|^2 - \frac{1 - \vartheta_n}{\vartheta_n}\|x_{n+1} - y_n\|^2 \\ &\le [1 - 2(\delta - \gamma\mu)\zeta_n]\|x_n - z\|^2 + 2\zeta_n\langle \gamma f(z) - F(z), x_n - z \rangle + \zeta_n^2 M_1 - \frac{1 - \vartheta_n}{\vartheta_n}\|x_{n+1} - y_n\|^2 \\ &= [1 - 2(\delta - \gamma\mu)\zeta_n]\|x_n - z\|^2 + \zeta_n\sigma_n, \end{aligned} \tag{18}$$
where
$$\sigma_n = 2\langle \gamma f(z) - F(z), x_n - z \rangle + M_1\zeta_n - \frac{1 - \vartheta_n}{\zeta_n\vartheta_n}\|x_{n+1} - y_n\|^2. \tag{19}$$
According to the boundedness of the sequence $\{x_n\}$, from (19) we obtain $\sigma_n \le M_2$ $(n \ge 0)$ for some constant $M_2$. Applying Lemma 2 to (18), we get $0 \le 2(\delta - \gamma\mu)\limsup_{n \to \infty}\|x_n - z\|^2 \le \limsup_{n \to \infty}\sigma_n \le M_2$. Therefore, $\limsup_{n \to \infty}\sigma_n$ exists and there exists a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ such that $x_{n_i} \rightharpoonup \tilde{x}$ and
$$\limsup_{n \to \infty}\sigma_n = \limsup_{n \to \infty}\Big(2\langle \gamma f(z) - F(z), x_n - z \rangle + M_1\zeta_n - \frac{1 - \vartheta_n}{\zeta_n\vartheta_n}\|x_{n+1} - y_n\|^2\Big) = \lim_{i \to \infty}\Big(2\langle \gamma f(z) - F(z), \tilde{x} - z \rangle - \frac{1 - \vartheta_{n_i}}{\zeta_{n_i}\vartheta_{n_i}}\|x_{n_i+1} - y_{n_i}\|^2\Big). \tag{20}$$
This indicates that $\lim_{i \to \infty}\frac{1 - \vartheta_{n_i}}{\zeta_{n_i}\vartheta_{n_i}}\|x_{n_i+1} - y_{n_i}\|^2$ exists, and by conditions (C1) and (C2), we deduce
$$\lim_{i \to \infty}\|x_{n_i+1} - y_{n_i}\| = 0. \tag{21}$$
This together with (17) implies that
$$\lim_{i \to \infty}\|y_{n_i} - z_{n_i}\| = 0. \tag{22}$$
Combining (5) with (21), we obtain
$$\lim_{i \to \infty}\|x_{n_i+1} - x_{n_i}\| = \lim_{i \to \infty}\|y_{n_i} - x_{n_i}\| = 0.$$
By (12), we have
$$0 \le \tau_{n_i}(4 - \tau_{n_i})\frac{(g(y_{n_i}) + h(y_{n_i}))^2}{\|\nu(y_{n_i})\|^2} \le \|y_{n_i} - z\|^2 - \|z_{n_i} - z\|^2 \le \|y_{n_i} - z_{n_i}\|\big(\|y_{n_i} - z\| + \|z_{n_i} - z\|\big) \to 0 \quad (\text{as } i \to \infty).$$
It follows that
$$\lim_{i \to \infty}\frac{g(y_{n_i}) + h(y_{n_i})}{\|\nu(y_{n_i})\|} = 0. \tag{23}$$
Noting that $\|\nu(y_{n_i})\|$ is bounded, from (23) we deduce $\lim_{i \to \infty}\big(g(y_{n_i}) + h(y_{n_i})\big) = 0$, and hence $\lim_{i \to \infty}g(y_{n_i}) = \lim_{i \to \infty}h(y_{n_i}) = 0$. That is,
$$\lim_{i \to \infty}\|(I - \mathrm{prox}_{\epsilon\varphi})y_{n_i}\| = \lim_{i \to \infty}\|(I - \mathrm{prox}_{\epsilon\phi})Ay_{n_i}\| = 0.$$
Since $y_{n_i} \rightharpoonup \tilde{x}$, this together with Lemma 3 implies that $\tilde{x} \in \mathrm{Fix}(\mathrm{prox}_{\epsilon\varphi})$ and $A\tilde{x} \in \mathrm{Fix}(\mathrm{prox}_{\epsilon\phi})$. Hence $\tilde{x} \in \Gamma$. Therefore,
$$\limsup_{n \to \infty}\langle \gamma f(z) - F(z), x_n - z \rangle = \langle \gamma f(z) - F(z), \tilde{x} - z \rangle \le 0. \tag{24}$$
By (18), we have
$$\|x_{n+1} - z\|^2 \le [1 - 2(\delta - \gamma\mu)\zeta_n]\|x_n - z\|^2 + 2\zeta_n\langle \gamma f(z) - F(z), x_n - z \rangle + \zeta_n^2 M_1.$$
According to Lemma 4 and (24), we deduce that $x_n \to z$. This completes the proof. □
In particular, taking $F = I$ and $f = 0$ in the above algorithm, we obtain the following scheme.
  • Choose an initial point $x_0 \in H_1$ and set $n = 0$.
  • Calculate $y_n$ and $\nu(y_n)$ via the iterative procedures
    $$y_n = (1 - \zeta_n)x_n \tag{25}$$
    and
    $$\nu(y_n) = A^*(I - \mathrm{prox}_{\epsilon\phi})Ay_n + (I - \mathrm{prox}_{\epsilon\varphi})y_n. \tag{26}$$
  • If $\nu(y_n) = 0$, then the iterative process stops (in this case, $y_n \in \Gamma$ by Lemma 5); otherwise, continue to the next step.
  • Compute
    $$x_{n+1} = (1 - \vartheta_n)y_n + \vartheta_n z_n, \tag{27}$$
    where
    $$z_n = y_n - \tau_n \frac{g(y_n) + h(y_n)}{\|\nu(y_n)\|^2}\,\nu(y_n), \tag{28}$$
    in which $g(y_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\epsilon\phi})Ay_n\|^2$ and $h(y_n) = \frac{1}{2}\|(I - \mathrm{prox}_{\epsilon\varphi})y_n\|^2$.
  • Set $n \leftarrow n + 1$ and repeat steps 2–4.
Assume that the above iterates (25)–(28) do not terminate; that is, the sequence $\{x_n\}$ generated by (27) is infinite.
Corollary 1.
Suppose that the control parameters $\{\zeta_n\}$, $\{\vartheta_n\}$ and $\{\tau_n\}$ satisfy the restrictions (C1)–(C3). Then the sequence $\{x_n\}$ generated by (27) converges strongly to $z = \mathrm{proj}_\Gamma(0)$, the minimum-norm element of $\Gamma$.

Author Contributions

All the authors have contributed equally to this paper. All the authors have read and approved the final manuscript.

Funding

This research was partially supported by grants NSFC61362033 and NZ17015 and by the Major Project of North Minzu University (ZDZX201805).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. Cho, S.-Y.; Qin, X.; Yao, J.-C.; Yao, Y.-H. Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 251–264. [Google Scholar]
  3. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  4. Xu, J.; Chi, E.-C.; Yang, M.; Lange, K. A majorization-minimization algorithm for split feasibility problems. Comput. Optim. Appl. 2018, 71, 795–828. [Google Scholar] [CrossRef]
  5. Wang, F.; Xu, H.-K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010, 102085. [Google Scholar] [CrossRef]
  6. Yao, Y.; Postolache, M.; Yao, J.-C. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61. [Google Scholar] [CrossRef]
  7. Yao, Y.; Postolache, M.; Liou, Y.-C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 2013, 201. [Google Scholar] [CrossRef] [Green Version]
  8. Yao, Y.; Liou, Y.-C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319. [Google Scholar] [CrossRef]
  9. Yao, Y.; Yao, J.-C.; Liou, Y.-C.; Postolache, M. Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpathian J. Math. 2018, 34, 459–466. [Google Scholar]
  10. Hieu, D.-V. Parallel extragradient-proximal methods for split equilibrium problems. Math. Model. Anal. 2016, 21, 478–501. [Google Scholar] [CrossRef]
  11. Yao, Y.; Wu, J.; Liou, Y.-C. Regularized methods for the split feasibility problem. Abstr. Appl. Anal. 2012, 2012, 140679. [Google Scholar] [CrossRef]
  12. Yao, Y.-H.; Liou, Y.-C.; Yao, J.-C. Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 2015, 127. [Google Scholar] [CrossRef]
  13. Petrot, N.; Suwannaprapa, M.; Dadashi, V. Convergence theorems for split feasibility problems on a finite sum of monotone operators and a family of nonexpansive mappings. J. Inequal. Appl. 2018, 2018, 205. [Google Scholar] [CrossRef]
  14. Moudafi, A.; Gibali, A. l1-l2 regularization of split feasibility problems. Numer. Algorithms 2018, 78, 739–757. [Google Scholar] [CrossRef]
  15. Gibali, A.; Liu, L.W.; Tang, Y.-C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
  16. Wang, F. Polyak’s gradient method for split feasibility problem constrained by level sets. Numer. Algorithms 2018, 77, 925–938. [Google Scholar] [CrossRef]
  17. He, S.; Tian, H.; Xu, H.-K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. 2018, 19, 1199–1215. [Google Scholar]
  18. Yao, Y.; Liou, Y.-C.; Yao, J.-C. Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 2017, 10, 843–854. [Google Scholar] [CrossRef] [Green Version]
  19. Duong, V.-T.; Dang, V.-H. An inertial method for solving split common fixed point problems. J. Fixed Point Theory Appl. 2017, 19, 3029–3051. [Google Scholar]
  20. Buong, N. Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces. Numer. Algorithms 2017, 76, 783–798. [Google Scholar] [CrossRef]
  21. Dadashi, V. Shrinking projection algorithms for the split common null point problem. Bull. Aust. Math. Soc. 2017, 96, 299–306. [Google Scholar] [CrossRef]
  22. Takahashi, W. The split common null point problem for generalized resolvents in two Banach spaces. Numer. Algorithms 2017, 75, 1065–1078. [CrossRef]
  23. Yao, Y.; Agarwal, R.-P.; Postolache, M.; Liou, Y.-C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 183. [Google Scholar] [CrossRef] [Green Version]
  24. Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Computat. Optim. Appl. 2017, 67, 175–199. [Google Scholar] [CrossRef]
  25. Dong, Q.-L.; Tang, Y.-C.; Cho, Y.-J.; Rassias, T.-M. Optimal choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
  26. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019, in press. [Google Scholar]
  27. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019. [Google Scholar] [CrossRef]
  28. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882. [Google Scholar]
  29. Moreau, J.-J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. 1965, 93, 273–299. [CrossRef]
  30. Yosida, K. Functional Analysis; Springer: Berlin, Germany, 1964. [Google Scholar]
  31. Shehu, Y.; Cai, G.; Iyiola, O.-S. Iterative approximation of solutions for proximal split feasibility problems. Fixed Point Theory Appl. 2015, 2015, 123. [Google Scholar] [CrossRef]
  32. Abbas, M.; AlShahrani, M.; Ansari, Q.-H.; Iyiola, O.-S.; Shehu, Y. Iterative methods for solving proximal split minimization problems. Numer. Algorithms 2018, 78, 193–215. [Google Scholar] [CrossRef]
  33. Witthayarat, U.; Cho, Y.-J.; Cholamjiak, P. On solving proximal split feasibility problems and applications. Ann. Funct. Anal. 2018, 9, 111–122. [Google Scholar] [CrossRef]
  34. Yao, Y.; Qin, X.; Yao, J.-C. Constructive approximation of solutions to proximal split feasibility problems. J. Nonlinear Convex Anal. 2019, in press. [Google Scholar]
  35. Yao, Y.; Postolache, M.; Qin, X.; Yao, J.-C. Iterative algorithms for the proximal split feasibility problem. UPB Sci. Ser. A Appl. Math. Phys. 2018, 80, 37–44. [Google Scholar]
  36. Moudafi, A.; Thakur, B.-S. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110. [Google Scholar] [CrossRef]
  37. Shehu, Y.; Iyiola, O.-S. Strong convergence result for proximal split feasibility problem in Hilbert spaces. Optimization 2017, 66, 2275–2290. [Google Scholar] [CrossRef]
  38. Yao, Y.-H.; Qin, X.; Yao, J.-C. Projection methods for firmly type nonexpansive operators. J. Nonlinear Convex Anal. 2018, 19, 407–415. [Google Scholar]
  39. Yao, Y.; Chen, R.; Xu, H.-K. Schemes for finding minimum-norm solutions of variational inequalities. Nonlinear Anal. 2010, 72, 3447–3456. [Google Scholar] [CrossRef]
  40. Yao, Y.; Shahzad, N. Strong convergence of a proximal point algorithm with general errors. Optim. Lett. 2012, 6, 621–628. [Google Scholar] [CrossRef]
  41. Maingé, P.-E.; Maruster, S. Convergence in norm of modified Krasnoselskii-Mann iterations for fixed points of demicontractive mappings. Set-Valued Anal. 2007, 15, 67–79.
  42. Goebel, K.; Kirk, W.-A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  43. Xu, H.-K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256. [Google Scholar] [CrossRef]

Share and Cite

Zhu, L.-J.; Yao, Y. An Iterative Approach to the Solutions of Proximal Split Feasibility Problems. Mathematics 2019, 7, 145. https://doi.org/10.3390/math7020145