Article

Gradient Methods with Selection Technique for the Multiple-Sets Split Equality Problem

Dianlu Tian, Lining Jiang and Luoyi Shi
1 School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
2 Department of Mathematics, Tianjin Polytechnic University, Tianjin 300387, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(10), 928; https://doi.org/10.3390/math7100928
Submission received: 17 August 2019 / Revised: 26 September 2019 / Accepted: 26 September 2019 / Published: 8 October 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract

The inverse problem is one of the four major problems in computational mathematics. There is an inverse problem in medical image reconstruction and radiotherapy that is called the multiple-sets split equality problem; it is a unified form of the split feasibility problem, the split equality problem, and the split common fixed point problem. In this paper, we present two iterative algorithms for solving it. The suggested algorithms are based on the gradient method with a selection technique, which requires only one projection to be computed in each iteration.

1. Introduction

The inverse problem is one of the four major problems in computational mathematics. Inverse problems have developed rapidly in recent decades; they arise in computer vision, machine learning, statistics, geography, medical imaging, remote sensing, ocean acoustics, tomography, aviation, physics, and other fields. An inverse problem in medical image reconstruction and radiotherapy can be expressed as a split feasibility problem [1,2,3,4,5,6,7,8,9], a split equality problem [10,11,12,13], or a split common fixed point problem [14,15,16,17,18,19].
In this paper, we focus on a unified form of the split feasibility problem, split equality problem, and split common fixed point problem that is called the multiple-sets split equality problem.
Let $H_1, H_2, H_3$ be three real Hilbert spaces, let $r, t$ be positive integers, and let $\{C_i\}_{i=1}^{r}$ and $\{Q_j\}_{j=1}^{t}$ be two families of closed and convex subsets of $H_1$ and $H_2$, respectively. Let $A : H_1 \to H_3$ and $B : H_2 \to H_3$ be two bounded linear operators. Then the multiple-sets split equality problem (MSSEP for short) can be formulated as:
finding $x \in \bigcap_{i=1}^{r} C_i$, $y \in \bigcap_{j=1}^{t} Q_j$ such that $Ax = By$. (1)
It reduces to the split equality problem if $r = t = 1$; moreover, it becomes the split feasibility problem if, in addition, $H_2 = H_3$ and $B$ is the identity operator on $H_2$. It also covers the split common fixed point problem if we replace $x \in C_i$ by $x = P_{C_i} x$ and $y \in Q_j$ by $y = P_{Q_j} y$, where $P_{C_i}, P_{Q_j}$ are the metric projections onto $C_i, Q_j$.
In the problem (1), without loss of generality, we may assume that $t \ge r$ and let $C_{r+1} = C_{r+2} = \dots = C_t = H_1$. Then the problem (1) can be described equivalently as:
finding $x \in \bigcap_{i=1}^{t} C_i$, $y \in \bigcap_{j=1}^{t} Q_j$ such that $Ax = By$. (2)
Let $S_i = C_i \times Q_i \subseteq H = H_1 \times H_2$ for $i \in \Lambda = \{1, 2, \dots, t\}$, where $H_1 \times H_2$ is the Cartesian product of $H_1$ and $H_2$; let $G = [A, -B] : H \to H_3$, so that $Gw = Ax - By$ for $w = (x, y)$, and let $G^*$ be the adjoint operator of $G$. Then the original problem (1) can be rewritten as:
finding $w = (x, y) \in \bigcap_{i=1}^{t} S_i$ such that $Gw = 0$. (3)
Assume the problem (3) is consistent and let $\Omega$ denote its solution set; that is, $\Omega$ is not empty. We consider the proximity function
$f(w) = \frac{1}{2} \sum_{i=1}^{t} \alpha_i \| w - P_{S_i} w \|^2 + \frac{1}{2} \| Gw \|^2,$
where $\alpha_i$, $i = 1, \dots, t$, are positive real numbers and $P_{S_i}$, $i = 1, \dots, t$, are the metric projections from $H$ onto $S_i$. Since $C_i$ and $Q_i$ are closed and convex, so are $S_i$, and hence the $P_{S_i}$ are well defined. Then the problem (3) can be transformed into the minimization problem
$\min_{w \in \bigcap_{i=1}^{t} S_i} f(w).$ (4)
Note that the proximity function $f(w)$ is convex and differentiable with gradient
$\nabla f(w) = \sum_{i=1}^{t} \alpha_i (I - P_{S_i}) w + G^* G w,$
where $I$ is the identity operator on $H$. The gradient $\nabla f(w)$ is $L$-Lipschitz continuous with Lipschitz constant [20]
$L = \sum_{i=1}^{t} \alpha_i + \| G \|^2.$
To solve the minimization problem (4), a classical method is the gradient algorithm, which generates the iterative scheme
$w_{n+1} = w_n - \gamma_n \nabla f(w_n),$ (5)
where $\gamma_n$ is the step size at step $n$.
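As an illustration (not part of the original paper), a minimal Python/NumPy sketch of the classical iteration (5) in the finite-dimensional case might look as follows; it assumes the sets $S_i$ are supplied as projection operators and uses the constant step size $\gamma = 1/L$. All names here are our own.

import numpy as np

def gradient_method(w0, projs, alphas, G, n_iter=500):
    # Classical gradient iteration (5) for min f(w); every step evaluates
    # all t projections. projs is a list of maps w -> P_{S_i} w.
    L = sum(alphas) + np.linalg.norm(G, 2) ** 2   # Lipschitz constant of grad f
    gamma = 1.0 / L                               # constant step size
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        grad = sum(a * (w - P(w)) for a, P in zip(alphas, projs)) + G.T @ (G @ w)
        w = w - gamma * grad
    return w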
Note that in iteration (5), we need to compute $t$ projections at each step. On the other hand, notice that $w^* \in \Omega$ if and only if $g(w^*) = 0$, where
$g(w) = \frac{1}{2} \| w - P_{S_{i(n)}} w \|^2 + \frac{1}{2} \| Gw \|^2,$
in which
$i(n) \in \arg\max_{1 \le i \le t} \| w - P_{S_i} w \|.$
Then we consider the iterative scheme
$w_{n+1} = w_n - \gamma_n \nabla g(w_n).$ (6)
In iteration (6), we only need to carry out one projection at each step. Motivated by this observation, we present Algorithms 1 and 2 in Section 3 to solve the problem (3).
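By contrast with the previous sketch, one step of the selection-based iteration (6) might be sketched as follows (again illustrative, under our own conventions): the rule for $i(n)$ picks the set farthest from the current point, and only that single projection enters the gradient.

import numpy as np

def selection_gradient_step(w, projs, G, gamma):
    # One step of iteration (6): the distances determine i(n), and only
    # the selected projection P_{S_{i(n)}} enters the gradient of g.
    z_all = [P(w) for P in projs]
    i_n = int(np.argmax([np.linalg.norm(w - z) for z in z_all]))  # selection rule i(n)
    z = z_all[i_n]
    grad_g = (w - z) + G.T @ (G @ w)   # grad g(w) = (I - P_{S_{i(n)}}) w + G* G w
    return w - gamma * grad_g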
The general structure of this paper is as follows. In the next section, we recall some preliminaries. In Section 3, we present the main algorithms and their convergence analysis. In Section 4, several numerical results are reported to confirm the effectiveness of the suggested algorithms.

2. Preliminaries

Throughout, $H$ is a real Hilbert space and $I$ denotes the identity operator on $H$. We write $x_n \to x^*$ and $x_n \rightharpoonup x^*$ for the strong and weak convergence, respectively, of a sequence $\{x_n\}$ to a point $x^*$, and $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$. $P_S$ is the metric projection from $H$ onto a closed and convex subset $S$.
Lemma 1
([21]). Let $S$ be a closed, convex, and nonempty subset of $H$. Then, for any $x, y \in H$ and $z \in S$,
(i) $\langle x - P_S x, z - P_S x \rangle \le 0$;
(ii) $\| P_S x - P_S y \|^2 \le \langle P_S x - P_S y, x - y \rangle$;
(iii) $\langle x - P_S x, x - z \rangle \ge \| x - P_S x \|^2$.
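For intuition, the following short Python check (ours, not from the paper) verifies the three inequalities of Lemma 1 numerically for the metric projection onto a Euclidean ball in $\mathbb{R}^2$; the function proj_ball and the sampled points are illustrative.

import numpy as np

rng = np.random.default_rng(0)
center, radius = np.array([1.0, 1.0]), 5.0

def proj_ball(x):
    # Metric projection onto S = {x : ||x - center|| <= radius}.
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

for _ in range(1000):
    x, y = rng.normal(size=2) * 10, rng.normal(size=2) * 10
    z = proj_ball(rng.normal(size=2) * 10)        # an arbitrary point of S
    px, py = proj_ball(x), proj_ball(y)
    assert np.dot(x - px, z - px) <= 1e-9                                 # (i)
    assert np.linalg.norm(px - py) ** 2 <= np.dot(px - py, x - y) + 1e-9  # (ii)
    assert np.dot(x - px, x - z) >= np.linalg.norm(x - px) ** 2 - 1e-9    # (iii)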
Lemma 2
([22]). Let $\{a_n\}$, $\{\alpha_n\}$, $\{u_n\}$ be sequences of non-negative real numbers with
$\{\alpha_n\} \subset [0, 1], \quad \sum_{n=1}^{\infty} \alpha_n = \infty, \quad \sum_{n=1}^{\infty} u_n < \infty.$
Let $\{t_n\}$ be a sequence of real numbers with $\limsup_{n\to\infty} t_n \le 0$ and
$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n t_n + u_n.$
Then $\lim_{n\to\infty} a_n = 0$.
Lemma 3
([23]). Let $S$ be a closed and convex subset of $H$, let $T : S \to S$ be non-expansive, and let $\{x_n\} \subset S$. If $x_n \rightharpoonup x$ and $\lim_{n\to\infty} \| x_n - T x_n \| = 0$, then $Tx = x$.
Lemma 4
([24]). Let $S$ be a closed, convex, and nonempty subset of $H$ and let $\{x_n\}$ be a sequence in $H$. If
(i) $\lim_{n\to\infty} \| x_n - x \|$ exists for each $x \in S$;
(ii) $\omega_w(x_n) \subseteq S$;
then $\{x_n\}$ converges weakly to a point in $S$.

3. Main Results

Assume that the problem (3) is consistent and let $\Omega$ denote its solution set; that is, $\Omega := \{ w \in H : w \in \bigcap_{i=1}^{t} S_i, \ Gw = 0 \}$ is not empty.
Algorithm 1: Gradient method 1
Take $w_0 \in H$ arbitrarily and compute
$z_n = P_{S_{i(n)}} w_n, \quad q_n = G^* G w_n,$
where $n \ge 0$ and
$i(n) \in \arg\max_{i \in \Lambda} \| w_n - P_{S_i} w_n \|, \quad \Lambda = \{1, 2, \dots, t\}.$
If
$w_n + q_n - z_n = 0,$ (8)
then stop; $w_n$ is a solution (based on Remark 1 below). Otherwise, calculate
$w_{n+1} = w_n - \tau_n (w_n + q_n - z_n),$
where
$\tau_n = \lambda_n \, \frac{\| w_n - z_n \|^2 + \| G w_n \|^2}{2 \, \| w_n + q_n - z_n \|^2},$
in which $\lambda_n \in (0, 4)$.
Remark 1.
$w_n$ is a solution of the problem (3) if and only if the equality (8) holds.
On the one hand, if $w_n + q_n - z_n = 0$, take $z \in \Omega$. We have
$0 = \langle w_n + q_n - z_n, w_n - z \rangle = \langle w_n + G^* G w_n - P_{S_{i(n)}} w_n, w_n - z \rangle = \langle w_n - P_{S_{i(n)}} w_n, w_n - z \rangle + \langle G w_n, G w_n - G z \rangle \ge \| w_n - P_{S_{i(n)}} w_n \|^2 + \| G w_n \|^2.$
The first equality follows from $w_n + q_n - z_n = 0$, the second from the definitions of $q_n$ and $z_n$, and the last inequality from Lemma 1 (iii) and $Gz = 0$. Then
$\| w_n - P_{S_{i(n)}} w_n \| = 0 \quad \text{and} \quad \| G w_n \| = 0,$
which, by the choice of $i(n)$, implies that
$\| w_n - P_{S_i} w_n \| = 0, \ i \in \Lambda, \quad \text{and} \quad G w_n = 0.$
Hence $w_n \in \bigcap_{i=1}^{t} S_i$ and $G w_n = 0$; namely, $w_n \in \Omega$.
Conversely, if $w_n$ is a solution of the problem (3), that is, $w_n \in \bigcap_{i=1}^{t} S_i$ and $G w_n = 0$, then $q_n = G^* G w_n = 0$ and $z_n = P_{S_{i(n)}} w_n = w_n$, so $w_n + q_n - z_n = 0$.
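For concreteness, a compact Python/NumPy sketch of Algorithm 1 in the finite-dimensional case might read as follows (an illustration under our own conventions, not the authors' Mathematica code): the sets $S_i$ are supplied as projection operators, the matrix G plays the role of $[A, -B]$, and the exact stopping test (8) is replaced by a small tolerance tol to account for floating-point arithmetic.

import numpy as np

def algorithm1(w0, projs, G, lam=0.6, n_iter=1000, tol=1e-10):
    # Algorithm 1: gradient method with selection technique for problem (3).
    w = np.asarray(w0, dtype=float)
    for _ in range(n_iter):
        z_all = [P(w) for P in projs]
        i_n = int(np.argmax([np.linalg.norm(w - z) for z in z_all]))  # rule for i(n)
        z = z_all[i_n]                    # z_n = P_{S_{i(n)}} w_n
        q = G.T @ (G @ w)                 # q_n = G* G w_n
        e = w + q - z
        ne2 = float(np.dot(e, e))
        if ne2 <= tol:                    # stopping rule (8): w solves (3)
            break
        num = np.linalg.norm(w - z) ** 2 + np.linalg.norm(G @ w) ** 2
        tau = lam * num / (2.0 * ne2)     # adaptive step size, lam in (0, 4)
        w = w - tau * e
    return w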
Next we discuss the convergence of the iterative sequence $\{w_n\}$ generated by Algorithm 1 when it does not terminate after finitely many steps.
Theorem 1.
If $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 4$, then, for an arbitrary initial point $w_0 \in H$, the sequence $\{w_n\}$ generated by Algorithm 1 converges weakly to a solution of the problem (3).
Proof. 
First, we show the boundedness of $\{w_n\}$. Take $z \in \Omega$. Based on the inequality established in Remark 1, we get
$\| w_{n+1} - z \|^2 = \| w_n - z - \tau_n (w_n + q_n - z_n) \|^2 = \| w_n - z \|^2 - 2 \tau_n \langle w_n + q_n - z_n, w_n - z \rangle + \tau_n^2 \| w_n + q_n - z_n \|^2 \le \| w_n - z \|^2 - \lambda_n \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} + \frac{\lambda_n^2}{4} \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} = \| w_n - z \|^2 - \lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2}.$ (10)
Since $\lambda_n \in (0, 4)$, the sequence $\{ \| w_n - z \| \}$ is non-increasing, so $\lim_{n\to\infty} \| w_n - z \|$ exists. Thus the sequence $\{w_n\}$ is bounded, and so are the sequences $\{G w_n\}$ and $\{P_{S_i} w_n\}$, $i \in \Lambda$.
Next we show that $\omega_w(w_n) \subseteq \Omega$. Since $\lim_{n\to\infty} \| w_n - z \|$ exists and
$\lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} \le \| w_n - z \|^2 - \| w_{n+1} - z \|^2,$
together with the boundedness of the sequence $\{ w_n + q_n - z_n \}$ and the condition on $\lambda_n$, it follows that
$\lim_{n\to\infty} \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} = 0,$
which implies that
$\lim_{n\to\infty} \| w_n - z_n \| = 0 \quad \text{and} \quad \lim_{n\to\infty} \| G w_n \| = 0.$
Hence, by the choice of $i(n)$,
$\lim_{n\to\infty} \| w_n - P_{S_i} w_n \| = 0, \ i \in \Lambda, \quad \text{and} \quad \lim_{n\to\infty} \| G w_n \| = 0.$
Since $\{w_n\}$ is bounded, let $w^*$ be a weak cluster point of $\{w_n\}$, with a subsequence $\{w_{n_k}\}$ converging weakly to it. Then
$\lim_{k\to\infty} \| w_{n_k} - P_{S_i} w_{n_k} \| = 0, \ i \in \Lambda, \quad \text{and} \quad \lim_{k\to\infty} \| G w_{n_k} \| = 0.$
By Lemma 3, applied to the non-expansive mappings $P_{S_i}$, together with the weak lower semicontinuity of $w \mapsto \| G w \|$, we get $w^* \in \Omega$; by the arbitrariness of $w^* \in \omega_w(w_n)$, we deduce that $\omega_w(w_n) \subseteq \Omega$. The conditions of Lemma 4 are thus satisfied, and the sequence $\{w_n\}$ generated by Algorithm 1 converges weakly to a solution of the problem (3). The proof is completed. □
Theorem 1 provides only weak convergence. Next, we show a strong convergence theorem for solving the problem (3). As before, we discuss the convergence of the iterative sequence $\{w_n\}$ generated by Algorithm 2 when it does not terminate after finitely many steps.
Algorithm 2: Gradient method 2
Take $u \in H$ and an initial point $w_0 \in H$. Compute
$z_n = P_{S_{i(n)}} w_n, \quad q_n = G^* G w_n,$
where
$i(n) \in \arg\max_{i \in \Lambda} \| w_n - P_{S_i} w_n \|, \quad \Lambda = \{1, 2, \dots, t\}.$
If
$w_n + q_n - z_n = 0,$
then stop; $w_n$ is a solution (by Remark 1). Otherwise, calculate
$w_{n+1} = \alpha_n u + (1 - \alpha_n) ( w_n - \tau_n (w_n + q_n - z_n) ),$
where $\alpha_n \in (0, 1)$, $n \ge 0$, and
$\tau_n = \lambda_n \, \frac{\| w_n - z_n \|^2 + \| G w_n \|^2}{2 \, \| w_n + q_n - z_n \|^2},$
in which $\lambda_n \in (0, 4)$.
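A corresponding sketch of Algorithm 2, under the same conventions as the sketch of Algorithm 1 (again ours, purely illustrative): the only change is the anchor step toward $u$, here with the sample choice $\alpha_n = 1/(n+1)$, which satisfies $\alpha_n \to 0$ and $\sum_n \alpha_n = \infty$ as required by Theorem 2 below.

import numpy as np

def algorithm2(w0, u, projs, G, lam=0.6, n_iter=1000, tol=1e-10):
    # Algorithm 2: as Algorithm 1, plus the anchor step
    # w_{n+1} = a_n u + (1 - a_n)(w_n - tau_n (w_n + q_n - z_n)).
    w = np.asarray(w0, dtype=float)
    u = np.asarray(u, dtype=float)
    for n in range(n_iter):
        z_all = [P(w) for P in projs]
        i_n = int(np.argmax([np.linalg.norm(w - z) for z in z_all]))
        z, q = z_all[i_n], G.T @ (G @ w)
        e = w + q - z
        ne2 = float(np.dot(e, e))
        if ne2 <= tol:                     # stopping rule: w solves (3)
            break
        tau = lam * (np.linalg.norm(w - z) ** 2 + np.linalg.norm(G @ w) ** 2) / (2.0 * ne2)
        alpha = 1.0 / (n + 1)              # a_n -> 0 and sum a_n = infinity
        w = alpha * u + (1.0 - alpha) * (w - tau * e)
    return w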
Theorem 2.
Suppose that $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$, and $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 4$. Then, for arbitrary $u \in H$ and initial point $w_0 \in H$, the sequence $\{w_n\}$ generated by Algorithm 2 converges strongly to $z = P_{\Omega} u$.
Proof. 
Let $u_n = w_n - \tau_n (w_n + q_n - z_n)$ for $n \ge 0$. From the estimate (10) in the proof of Theorem 1, we get
$\| u_n - z \|^2 \le \| w_n - z \|^2 - \lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2},$ (13)
and hence, since $\lambda_n \in (0, 4)$, $\| u_n - z \| \le \| w_n - z \|$. Thus
$\| w_{n+1} - z \| = \| \alpha_n u + (1 - \alpha_n) u_n - z \| \le \alpha_n \| u - z \| + (1 - \alpha_n) \| u_n - z \| \le \alpha_n \| u - z \| + (1 - \alpha_n) \| w_n - z \| \le \max \{ \| w_n - z \|, \| u - z \| \}.$
By induction, we derive
$\| w_{n+1} - z \| \le \max \{ \| w_0 - z \|, \| u - z \| \},$
which means that the sequence $\{w_n\}$ is bounded, and so are the sequences $\{G w_n\}$ and $\{P_{S_i} w_n\}$, $i \in \Lambda$. By a simple derivation,
$\| w_{n+1} - z \|^2 = \| \alpha_n (u - z) + (1 - \alpha_n)(u_n - z) \|^2 \le (1 - \alpha_n) \| u_n - z \|^2 + 2 \alpha_n \langle u - z, w_{n+1} - z \rangle.$
Then by (13),
$\| w_{n+1} - z \|^2 \le (1 - \alpha_n) \| w_n - z \|^2 + 2 \alpha_n \langle u - z, w_{n+1} - z \rangle - (1 - \alpha_n) \lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} = (1 - \alpha_n) \| w_n - z \|^2 + \alpha_n \Big[ 2 \langle u - z, w_{n+1} - z \rangle - \frac{1 - \alpha_n}{\alpha_n} \lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2} \Big].$ (14)
Let
$\theta_n = \| w_n - z \|^2,$
$\delta_n = 2 \langle u - z, w_{n+1} - z \rangle - \frac{1 - \alpha_n}{\alpha_n} \lambda_n \Big( 1 - \frac{\lambda_n}{4} \Big) \frac{( \| w_n - z_n \|^2 + \| G w_n \|^2 )^2}{\| w_n + q_n - z_n \|^2}.$
Then the inequality (14) reads
$\theta_{n+1} \le (1 - \alpha_n) \theta_n + \alpha_n \delta_n,$ (15)
and also
$0 \le \theta_{n+1} \le (1 - \alpha_n) \theta_n + \alpha_n \delta_n, \quad n \ge 0.$
It follows that
$\delta_n \le 2 \langle u - z, w_{n+1} - z \rangle \le 2 \| u - z \| \| w_{n+1} - z \|,$
so
$\limsup_{n\to\infty} \delta_n < \infty.$
Next, we show that $\limsup_{n\to\infty} \delta_n \ge -1$. Otherwise, if $\limsup_{n\to\infty} \delta_n < -1$, then there exists $m$ such that $\delta_n \le -1$ for all $n \ge m$. It follows that, for all $n \ge m$,
$\theta_{n+1} \le (1 - \alpha_n) \theta_n + \alpha_n \delta_n = \theta_n + \alpha_n ( \delta_n - \theta_n ) \le \theta_n - \alpha_n.$
Thus
$\theta_{n+1} \le \theta_m - \sum_{i=m}^{n} \alpha_i.$
Hence, letting $n \to \infty$ in the above inequality, we obtain
$0 \le \limsup_{n\to\infty} \theta_{n+1} \le \theta_m - \lim_{n\to\infty} \sum_{i=m}^{n} \alpha_i = -\infty,$
which is a contradiction. Therefore, $\limsup_{n\to\infty} \delta_n \ge -1$, and it is finite. Hence we can take a subsequence $\{n_k\}$ of $\{n\}$ such that
$\limsup_{n\to\infty} \delta_n = \lim_{k\to\infty} \delta_{n_k} = \lim_{k\to\infty} \Big[ 2 \langle u - z, w_{n_k+1} - z \rangle - \frac{1 - \alpha_{n_k}}{\alpha_{n_k}} \lambda_{n_k} \Big( 1 - \frac{\lambda_{n_k}}{4} \Big) \frac{( \| w_{n_k} - z_{n_k} \|^2 + \| G w_{n_k} \|^2 )^2}{\| w_{n_k} + q_{n_k} - z_{n_k} \|^2} \Big].$
Since the sequence $\{ w_{n_k+1} \}$ is bounded, it has a subsequence along which $\langle u - z, w_{n_k+1} - z \rangle$ converges; without loss of generality, we may assume that $\lim_{k\to\infty} \langle u - z, w_{n_k+1} - z \rangle$ itself exists. Consequently, the following limit also exists:
$\lim_{k\to\infty} \frac{1 - \alpha_{n_k}}{\alpha_{n_k}} \lambda_{n_k} \Big( 1 - \frac{\lambda_{n_k}}{4} \Big) \frac{( \| w_{n_k} - z_{n_k} \|^2 + \| G w_{n_k} \|^2 )^2}{\| w_{n_k} + q_{n_k} - z_{n_k} \|^2}.$
Together with the conditions on $\alpha_n$ and $\lambda_n$ (note that $(1 - \alpha_{n_k})/\alpha_{n_k} \to \infty$), this shows that
$\lim_{k\to\infty} \frac{( \| w_{n_k} - z_{n_k} \|^2 + \| G w_{n_k} \|^2 )^2}{\| w_{n_k} + q_{n_k} - z_{n_k} \|^2} = 0,$
which yields
$\lim_{k\to\infty} \| w_{n_k} - z_{n_k} \| = 0 \quad \text{and} \quad \lim_{k\to\infty} \| G w_{n_k} \| = 0.$
Following the proof of Theorem 1, we conclude that $\omega_w(w_{n_k}) \subseteq \Omega$. Since
$\| w_{n_k+1} - w_{n_k} \| = \| \alpha_{n_k} u + (1 - \alpha_{n_k}) u_{n_k} - w_{n_k} \| \le \alpha_{n_k} \| u - w_{n_k} \| + (1 - \alpha_{n_k}) \| u_{n_k} - w_{n_k} \| = \alpha_{n_k} \| u - w_{n_k} \| + (1 - \alpha_{n_k}) \tau_{n_k} \| w_{n_k} + q_{n_k} - z_{n_k} \| = \alpha_{n_k} \| u - w_{n_k} \| + (1 - \alpha_{n_k}) \lambda_{n_k} \frac{\| w_{n_k} - z_{n_k} \|^2 + \| G w_{n_k} \|^2}{2 \, \| w_{n_k} + q_{n_k} - z_{n_k} \|} \to 0,$
we may assume that $w_{n_k+1} \rightharpoonup w^* \in \Omega$. Then
$\limsup_{n\to\infty} \delta_n = \lim_{k\to\infty} \delta_{n_k} \le \lim_{k\to\infty} 2 \langle u - z, w_{n_k+1} - z \rangle = 2 \langle u - z, w^* - z \rangle \le 0,$
due to the fact that $z = P_{\Omega} u$ and Lemma 1 (i). Finally, applying Lemma 2 to (15), we conclude that $w_n \to z$. The proof is completed. □

4. Numerical Experiments

In this section, we provide several numerical results for the MSSEP (2) to confirm the effectiveness of the suggested Algorithm 1. The program was written in Wolfram Mathematica (version 9.0). All numerical results were obtained on a personal Lenovo computer with an Intel(R) Core(TM) i5-6600 CPU at 3.30 GHz and 8.00 GB of RAM.
We consider the MSSEP with $C_1 = \{ x \in \mathbb{R}^2 : \| x - (1, 1) \| \le 5 \}$, $C_2 = \{ x \in \mathbb{R}^2 : \| x - (-1, -1) \| \le 5 \}$, $C_3 = \{ x \in \mathbb{R}^2 : \| x - (0, 3) \| \le 5 \}$, $Q_1 = \{ y \in \mathbb{R}^3 : \| y - (1, 1, 1) \| \le 5 \}$, $Q_2 = \{ y \in \mathbb{R}^3 : \| y - (0, 0, 0) \| \le 5 \}$, $Q_3 = \{ y \in \mathbb{R}^3 : \| y - (1, 0, 0) \| \le 5 \}$,
$A = \begin{pmatrix} 1 & 2 \\ 0 & 3 \\ 5 & 2 \end{pmatrix}, \quad B = \begin{pmatrix} 2 & 0 & 1 \\ 3 & 2 & 3 \\ 1 & 0 & 0 \end{pmatrix},$
$\Lambda = \{1, 2, 3\}$, and $\lambda_n = 0.6$. We choose two initial values, $x_0 = (2, 2)$, $y_0 = (2, 2, 2)$ and $x_0 = (20, 20)$, $y_0 = (10, 10, 10)$, and plot the iteration step $n$ on the horizontal axis and $\| A x_n - B y_n \|$ on the vertical axis in Figure 1 and Figure 2 below. Algorithm 1 was used to solve this MSSEP.
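The experiment can be reproduced with the following self-contained Python script (an illustrative re-implementation of Algorithm 1, not the authors' Mathematica program); it assumes the constraint sets are the Euclidean balls listed above, builds $G = [A, -B]$, and records the residual $\| A x_n - B y_n \|$ that is plotted in Figures 1 and 2.

import numpy as np

A = np.array([[1.0, 2.0], [0.0, 3.0], [5.0, 2.0]])
B = np.array([[2.0, 0.0, 1.0], [3.0, 2.0, 3.0], [1.0, 0.0, 0.0]])
G = np.hstack([A, -B])                       # G = [A, -B], so G w = A x - B y

def ball(center, r=5.0):
    c = np.asarray(center, dtype=float)
    def P(v):                                # metric projection onto the ball
        d = v - c
        n = np.linalg.norm(d)
        return v if n <= r else c + r * d / n
    return P

C = [ball([1, 1]), ball([-1, -1]), ball([0, 3])]         # subsets of R^2
Q = [ball([1, 1, 1]), ball([0, 0, 0]), ball([1, 0, 0])]  # subsets of R^3

def proj_S(i):
    # P_{S_i} w = (P_{C_i} x, P_{Q_i} y) for w = (x, y)
    return lambda w: np.concatenate([C[i](w[:2]), Q[i](w[2:])])

projs, lam = [proj_S(i) for i in range(3)], 0.6
w = np.array([2.0, 2.0, 2.0, 2.0, 2.0])      # x_0 = (2, 2), y_0 = (2, 2, 2)
residuals = []
for n in range(200):
    z_all = [P(w) for P in projs]
    i_n = int(np.argmax([np.linalg.norm(w - z) for z in z_all]))
    z, q = z_all[i_n], G.T @ (G @ w)
    e = w + q - z
    ne2 = float(np.dot(e, e))
    if ne2 <= 1e-20:                         # stopping rule (8)
        break
    tau = lam * (np.linalg.norm(w - z) ** 2 + np.linalg.norm(G @ w) ** 2) / (2.0 * ne2)
    w = w - tau * e
    residuals.append(np.linalg.norm(A @ w[:2] - B @ w[2:]))  # ||A x_n - B y_n||
print(residuals[0], residuals[-1])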
Figure 1 and Figure 2 confirm the effectiveness of the proposed Algorithm 1 and also show an approximately linear downward trend of the residual after finitely many steps, which suggests that the convergence rate of the proposed Algorithm 1 is reasonably fast.

Author Contributions

The main idea of this paper was proposed by D.T.; L.J. and L.S. reviewed all the steps of the initial manuscript. All authors approved the final manuscript.

Funding

This research was supported by NSFC Grants No. 11301379 and No. 11226125.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Yao, Y.H.; Postolache, M.; Qin, X.L.; Yao, J.C. Iterative algorithms for the proximal split feasibility problem. UPB Sci. Bull. Ser. A Appl. Math. Phys. 2018, 80, 37–44.
3. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
4. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
5. Takahashi, W. The split feasibility problem in Banach spaces. J. Nonlinear Convex Anal. 2014, 15, 1349–1355.
6. Wang, F.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74, 4105–4111.
7. Xu, H.K.; Alghamdi, M.A.; Shahzad, N. An unconstrained optimization approach to the split feasibility problem. J. Nonlinear Convex Anal. 2017, 18, 1891–1899.
8. Ceng, L.C.; Wong, N.C.; Yao, J.C. Hybrid extragradient methods for finding minimum-norm solutions of split feasibility problems. J. Nonlinear Convex Anal. 2015, 16, 1965–1983.
9. Yao, Y.H.; Postolache, M.; Zhu, Z.C. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019.
10. Moudafi, A. Alternating CQ algorithm for convex feasibility and split fixed point problems. J. Nonlinear Convex Anal. 2013, 15, 809–818.
11. Shi, L.Y.; Chen, R.D.; Wu, Y.J. Strong convergence of iterative algorithms for the split equality problem. J. Inequal. Appl. 2014, 2014, 478.
12. Dong, Q.L.; He, S.N.; Zhao, J. Solving the split equality problem without prior knowledge of operator norms. Optimization 2015, 64, 1887–1906.
13. Tian, D.L.; Shi, L.Y.; Chen, R.D. Strong convergence theorems for split inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2017, 19, 1501–1514.
14. Cegielski, A. General method for solving the split common fixed point problem. J. Optim. Theory Appl. 2015, 165, 385–404.
15. Kraikaew, P.; Saejung, S. On split common fixed point problems. J. Math. Anal. Appl. 2014, 415, 513–524.
16. Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007.
17. Takahashi, W. The split common fixed point problem and strong convergence theorems by hybrid methods in two Banach spaces. J. Nonlinear Convex Anal. 2016, 17, 1051–1067.
18. Takahashi, W.; Wen, C.F.; Yao, J.C. An implicit algorithm for the split common fixed point problem in Hilbert spaces and applications. Appl. Anal. Optim. 2017, 1, 423–439.
19. Yao, Y.H.; Qin, X.L.; Yao, J.C. Self-adaptive step-sizes choice for split common fixed point problems. J. Nonlinear Convex Anal. 2018, 11, 1959–1969.
20. Xu, H.K. A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
21. Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Monographs and Textbooks in Pure and Applied Mathematics; Marcel Dekker: New York, NY, USA, 1984; pp. 1–170.
22. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. Theory Methods Appl. 2007, 67, 2350–2360.
23. Browder, F.E. Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276.
24. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 595–597.
Figure 1. $x_0 = (2, 2)$, $y_0 = (2, 2, 2)$.
Figure 2. $x_0 = (20, 20)$, $y_0 = (10, 10, 10)$.
