Article

Half-Space Relaxation Projection Method for Solving Multiple-Set Split Feasibility Problem

by Guash Haile Taddele 1,2, Poom Kumam 1,3,4,*, Anteneh Getachew Gebrie 2 and Kanokwan Sitthithakerngkiet 5
1 Department of Mathematics, King Mongkut’s University of Technology Thonburi, Bangkok 10140, Thailand
2 Department of Mathematics, College of Computational and Natural Science, Debre Berhan University, Debre Berhan P.O. Box 445, Ethiopia
3 Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Science Laboratory Building, King Mongkut’s University of Technology Thonburi (KMUTT), 126 Pracha-Uthit Road, Bang Mod, Thrung Khru, Bangkok 10140, Thailand
4 Department of Medical Research, China Medical University Hospital, China Medical University, Taichung 40402, Taiwan
5 Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2020, 25(3), 47; https://doi.org/10.3390/mca25030047
Submission received: 3 June 2020 / Revised: 19 July 2020 / Accepted: 21 July 2020 / Published: 24 July 2020

Abstract: In this paper, we study an iterative method for solving the multiple-set split feasibility problem: find a point in the intersection of a finite family of closed convex sets in one space such that its image under a linear transformation belongs to the intersection of another finite family of closed convex sets in the image space. We obtain a strongly convergent algorithm by relaxing the closed convex sets to half-spaces, using projections onto those half-spaces, and by introducing an extended form of the step-size selection used in a relaxed CQ algorithm for solving the split feasibility problem. We also give several numerical examples illustrating the efficiency and implementation of our algorithm in comparison with existing algorithms in the literature.

1. Introduction

1.1. Split Inverse Problem

The Split Inverse Problem (SIP) is an archetypal model presented in ([1], Section 2), and it is stated as follows:
$$\text{find } x \in X \text{ that solves IP1 such that } y = Ax \in Y \text{ solves IP2},$$
where $A$ is a bounded linear operator from a space $X$ to another space $Y$, and IP1 and IP2 are two inverse problems posed in $X$ and $Y$, respectively. Real-world inverse problems can be cast into this framework by making different choices of the spaces $X$ and $Y$ (including the case $X = Y$) and by choosing appropriate inverse problems for IP1 and IP2. For example, image restoration, computed tomography and intensity-modulated radiation therapy (IMRT) treatment planning give rise to problems that can be transformed into the SIP model; see [2,3,4,5]. The split feasibility problem [6] and the multiple-set split feasibility problem [7] are the first instances of the SIP, where the two problems IP1 and IP2 are of the Convex Feasibility Problem (CFP) type [8]. In the SIP framework, many authors have studied cases in which IP1 and IP2 are convex feasibility problems, minimization problems, equilibrium problems, fixed point problems, null point problems and so on; see, for example, [2,3,5,9,10,11,12,13,14,15,16,17,18,19,20].

1.2. Split Feasibility Problem and Multiple-Set Split Feasibility Problem

Let $H$ be a real Hilbert space and let $T : H \to H$ be an operator. We say that $T$ is $\rho$-strongly quasi-nonexpansive, where $\rho \geq 0$, if $\operatorname{Fix}T = \{x \in H : Tx = x\} \neq \emptyset$ and
$$\|Tx - p\|^2 \leq \|x - p\|^2 - \rho\|Tx - x\|^2, \quad \forall (x, p) \in H \times \operatorname{Fix}T. \tag{1}$$
If $\rho = 0$ in (1), then $T$ is called a quasi-nonexpansive operator. If $\rho > 0$ in (1), then we say that $T$ is strongly quasi-nonexpansive. Obviously, a nonexpansive operator having a fixed point is quasi-nonexpansive. If $T$ is quasi-nonexpansive, then $\operatorname{Fix}T$ is closed and convex. For $\nu \geq 0$, denote by $T_\nu := (1 - \nu)I + \nu T$ the $\nu$-relaxation of $T$, where $I$ is the identity operator and $\nu$ is called the relaxation parameter. If $T$ is quasi-nonexpansive, then usually one applies a relaxation parameter $\nu \in [0, 1]$.
Let $H_1$ and $H_2$ be real Hilbert spaces and let $A : H_1 \to H_2$ be a bounded linear operator. Given nonempty closed convex subsets $\{C_1, \dots, C_N\}$ and $\{Q_1, \dots, Q_M\}$ of $H_1$ and $H_2$, respectively, the Multiple-Set Split Feasibility Problem (MSSFP), which was introduced by Censor et al. [7], is formulated as finding a point
$$\bar{x} \in \bigcap_{i=1}^{N}C_i \ \text{ such that } \ A\bar{x} \in \bigcap_{j=1}^{M}Q_j. \tag{2}$$
Denote by $\Omega$ the set of solutions of (2). The MSSFP (2) with $N = M = 1$ is known as the Split Feasibility Problem (SFP), which is formulated as finding a point
$$\bar{x} \in C \ \text{ such that } \ A\bar{x} \in Q, \tag{3}$$
where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The SFP was first introduced in 1994 by Censor and Elfving [6] in finite-dimensional Hilbert spaces to model inverse problems that arise in phase retrieval and medical image reconstruction. The SFP plays an important role in the study of signal processing, image reconstruction, intensity-modulated radiation therapy, etc. [2,3,5,11]. Several iterative algorithms have been presented to solve the SFP and the MSSFP provided a solution exists; see, for example, [3,6,21,22,23,24,25,26,27,28,29,30,31,32]. The algorithm proposed by Censor and Elfving [6] for solving the SFP involves computing the inverse of $A$ at each iteration (assuming the inverse exists), a fact that makes the algorithm inapplicable in practice. Most methods employ the Landweber operators; see the definition and its properties in [33]. In general, all of these methods produce sequences that converge weakly to a solution. Byrne [3] proposed the following iteration for solving the SFP, called the CQ algorithm or the projected Landweber method:
$$x_{n+1} = P_C V_\nu(x_n), \tag{4}$$
where $x_1 \in H_1$ is arbitrary and $V_\nu$ is the $\nu$-relaxation of the Landweber operator $V$ (corresponding to $P_Q$), i.e.,
$$V = I - \frac{1}{\|A\|^2}A^*(I - P_Q)A,$$
where $\nu \in (0, 2)$, $A^*$ denotes the adjoint of $A$, and $\|A\|^2$ is the spectral norm of $A^*A$. It is well known that the CQ algorithm (4) does not necessarily converge strongly to a solution of the SFP in an infinite-dimensional Hilbert space. An important advantage of the CQ algorithm by Byrne [3,21] is that computation of the inverse of $A$ (matrix inversion) is not necessary. The Landweber operator is also used for a more general type of problem called the split common fixed point problem of quasi-nonexpansive operators (see, for example, [10,34,35]), but the implementation of the generated Landweber-type algorithm requires prior knowledge of the operator norm. However, the operator norm is a global invariant and is often difficult to estimate; see, for example, the theorem of Hendrickx and Olshevsky in [36]. To overcome this difficulty, Lopez et al. [2] introduced a new way of selecting the step sizes for solving the SFP (3) such that knowledge of the operator norm is not necessary. To be precise, Lopez et al. [2] proposed
$$x_{n+1} = P_C\big(I - \gamma_n A^*(I - P_Q)A\big)x_n,$$
where $\gamma_n = \dfrac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2}$, $n \geq 1$, $\rho_n \in (0, 4)$, $f(x_n) = \frac{1}{2}\|(I - P_Q)Ax_n\|^2$ and $\nabla f(x_n) = A^*(I - P_Q)Ax_n$. In addition, computing the projection onto a general closed convex set is not easy. In order to overcome this drawback, Yang [37] considered SFPs in which the involved sets $C$ and $Q$ are given as sub-level sets of convex functions, i.e.,
$$C = \{x \in H_1 : c(x) \leq 0\} \quad \text{and} \quad Q = \{y \in H_2 : q(y) \leq 0\}, \tag{5}$$
where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex and subdifferentiable functions on $H_1$ and $H_2$, respectively, and the subdifferential operators $\partial c$ and $\partial q$ are bounded (i.e., bounded on bounded sets). It is known that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and that its subdifferential operator is a bounded operator (see [38]). In this situation, the efficiency of the CQ method is severely affected because, in general, the computation of projections onto such subsets is still very difficult. Motivated by Fukushima’s relaxed projection method in [39], Yang [37] suggested calculating the projection onto a half-space containing the original subset instead of onto the set itself. More precisely, Yang introduced a relaxed CQ algorithm using a half-space relaxation projection method for solving the SFP. The algorithm proposed by Yang is given as follows:
$$x_{n+1} = P_{C_n}\big(x_n - \gamma\nabla f_n(x_n)\big),$$
where $\nabla f_n = A^*(I - P_{Q_n})A$ and, for each $n \in \mathbb{N}$, the set $C_n$ is given by
$$C_n = \{x \in H_1 : c(x_n) \leq \langle \xi_n, x_n - x\rangle\}, \tag{6}$$
where $\xi_n \in \partial c(x_n)$, and the set $Q_n$ is given by
$$Q_n = \{y \in H_2 : q(Ax_n) \leq \langle \varepsilon_n, Ax_n - y\rangle\}, \tag{7}$$
where $\varepsilon_n \in \partial q(Ax_n)$. Obviously, $C_n$ and $Q_n$ are half-spaces, and $C \subseteq C_n$ and $Q \subseteq Q_n$ for every $n \geq 1$. More importantly, since the projections onto $C_n$ and $Q_n$ have closed forms, the relaxed CQ algorithm is easily implemented. The specific form of the metric projections onto $C_n$ and $Q_n$ can be found in [38,40,41].
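For instance, the projection onto a half-space of the form (6) admits a simple closed form; the following MATLAB sketch (ours; the function name and argument layout are our choices, not from the paper) computes $P_{C_n}(x)$ from the current point $x_n$, the value $c(x_n)$ and a subgradient $\xi_n$:

```matlab
function p = project_halfspace(x, xn, cxn, xi)
% Sketch (ours): projection onto C_n = { z : c(x_n) <= <xi_n, x_n - z> }
%              = { z : <xi_n, z> <= <xi_n, x_n> - c(x_n) },
% using the closed form for a half-space { z : <a, z> <= b }:
%   P(x) = x - max(0, <a, x> - b) / ||a||^2 * a   (for a ~= 0).
a = xi;
b = dot(xi, xn) - cxn;
if ~any(a)
    p = x;   % degenerate subgradient xi_n = 0: return x unchanged
else
    p = x - (max(0, dot(a, x) - b) / norm(a)^2) * a;
end
end
```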
For solving the MSSFP (2), many methods have been developed; see, for example, [7,26,42,43,44,45,46,47,48,49] and the references therein. We aim to propose an efficient, strongly convergent algorithm that is easy to implement for solving the MSSFP. Motivated by Yang [37], we are interested in solving the MSSFP (2) in which the involved sets $C_i$ ($i \in \{1,\dots,N\}$) and $Q_j$ ($j \in \{1,\dots,M\}$) are given as sub-level sets of convex functions, i.e.,
$$C_i = \{x \in H_1 : c_i(x) \leq 0\} \quad \text{and} \quad Q_j = \{y \in H_2 : q_j(y) \leq 0\}, \tag{8}$$
where $c_i : H_1 \to \mathbb{R}$ and $q_j : H_2 \to \mathbb{R}$ are convex functions for all $i \in \{1,\dots,N\}$, $j \in \{1,\dots,M\}$. We assume that each $c_i$ and $q_j$ is subdifferentiable on $H_1$ and $H_2$, respectively, and that the subdifferential operators $\partial c_i$ and $\partial q_j$ are bounded (i.e., bounded on bounded sets). In what follows, we define $N + M$ half-spaces at the point $x_n$ by
$$C_{i,n} = \{x \in H_1 : c_i(x_n) \leq \langle \xi_{i,n}, x_n - x\rangle\}, \tag{9}$$
where $\xi_{i,n} \in \partial c_i(x_n)$, and
$$Q_{j,n} = \{y \in H_2 : q_j(Ax_n) \leq \langle \varepsilon_{j,n}, Ax_n - y\rangle\}, \tag{10}$$
where $\varepsilon_{j,n} \in \partial q_j(Ax_n)$.
This paper contributes an algorithm for the MSSFP in the direction of half-space relaxation (assuming that $C_i$ and $Q_j$ are given as sub-level sets of convex functions as in (8)) and parallel computation of the projections onto the half-spaces (9) and (10), without prior knowledge of the operator norm.
This paper is organized in the following way. In Section 2, we recall some basic and useful facts that will be used in the proofs of our results. In Section 3, we extend the step-size selection used in the relaxed CQ algorithms of Yang [37] and Lopez et al. [2] for the SFP to the MSSFP framework, and we analyze the strong convergence of our proposed algorithm. In Section 4, we give some numerical examples to discuss the performance of the proposed algorithm. Finally, we give some conclusions.

2. Preliminary

In this section, in order to prove our results, we recall some basic notions and useful facts in a real Hilbert space $H$. The symbols $\rightharpoonup$ and $\to$ denote weak and strong convergence, respectively.
Let $C$ be a nonempty closed convex subset of $H$. The metric projection onto $C$ is the mapping $P_C : H \to C$ defined by
$$P_C(x) = \arg\min\{\|y - x\| : y \in C\}, \quad x \in H.$$
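For example, when $C$ is a closed ball the metric projection has a simple closed form; a small MATLAB sketch (ours, for illustration only):

```matlab
function p = project_ball(x, c, r)
% Sketch (ours): metric projection onto the closed ball
% C = { y : ||y - c|| <= r }:
%   P_C(x) = x                         if ||x - c|| <= r,
%   P_C(x) = c + r*(x - c)/||x - c||   otherwise.
d = norm(x - c);
if d <= r
    p = x;
else
    p = c + (r / d) * (x - c);
end
end
```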
Lemma 1. 
[50] Let $C$ be a closed convex subset of $H$. Given $x \in H$ and a point $z \in C$, then $z = P_C(x)$ if and only if
$$\langle x - z, y - z\rangle \leq 0, \quad \forall y \in C.$$
The mapping $T : H \to H$ is firmly nonexpansive if
$$\|Tx - Ty\|^2 \leq \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2, \quad \forall x, y \in H,$$
which is equivalent to
$$\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y\rangle, \quad \forall x, y \in H.$$
If $T$ is firmly nonexpansive, then $I - T$ is also firmly nonexpansive. The metric projection $P_C$ onto a closed convex subset $C$ of $H$ is firmly nonexpansive.
Definition 1. 
The subdifferential of a convex function $f : H \to \mathbb{R}$ at $x \in H$, denoted by $\partial f(x)$, is defined by
$$\partial f(x) = \{\xi \in H : f(z) \geq f(x) + \langle \xi, z - x\rangle, \ \forall z \in H\}.$$
If $\partial f(x) \neq \emptyset$, $f$ is said to be subdifferentiable at $x$. If the function $f$ is continuously differentiable, then $\partial f(x) = \{\nabla f(x)\}$; that is, the subdifferential reduces to the gradient of $f$.
Definition 2. 
The function $f : H \to \mathbb{R}$ is called weakly lower semi-continuous at $x_0$ if $x_n \rightharpoonup x_0$ implies
$$\liminf_{n\to\infty}f(x_n) \geq f(x_0).$$
A function that is weakly lower semi-continuous at each point of H is called weakly lower semi-continuous on H.
Lemma 2. 
[3,51] Let $H_1$ and $H_2$ be real Hilbert spaces, and let $f : H_1 \to \mathbb{R}$ be given by $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$, where $Q$ is a closed convex subset of $H_2$ and $A : H_1 \to H_2$ is a bounded linear operator. Then
(i) the function $f$ is convex and weakly lower semi-continuous on $H_1$;
(ii) $\nabla f(x) = A^*(I - P_Q)Ax$ for $x \in H_1$;
(iii) $\nabla f$ is $\|A\|^2$-Lipschitz, i.e., $\|\nabla f(x) - \nabla f(y)\| \leq \|A\|^2\|x - y\|$, $\forall x, y \in H_1$.
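As a quick finite-dimensional sanity check of Lemma 2 (ii), the following MATLAB sketch (ours; the box set $Q$, the sizes and the step $h$ are our choices) compares $A^T(I - P_Q)Ax$ with a central finite-difference approximation of $\nabla f$:

```matlab
% Sketch (ours): verify grad f(x) = A'*(I - P_Q)*A*x numerically,
% with H1 = R^s, H2 = R^t and Q a box (componentwise clamp projection).
rng(1); s = 5; t = 4;
A  = randn(t, s);
lo = -ones(t, 1); hi = ones(t, 1);
PQ = @(y) min(max(y, lo), hi);          % projection onto the box Q
f  = @(x) 0.5 * norm(A*x - PQ(A*x))^2;
x  = randn(s, 1);
g  = A' * (A*x - PQ(A*x));              % Lemma 2 (ii)
h  = 1e-6; g_fd = zeros(s, 1);
for k = 1:s
    e = zeros(s, 1); e(k) = h;
    g_fd(k) = (f(x + e) - f(x - e)) / (2*h);   % central difference
end
disp(norm(g - g_fd));                   % small, up to O(h^2) error
```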
Lemma 3. 
[27,52] Let $C$ and $Q$ be closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $f : H_1 \to \mathbb{R}$ be given by $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$, where $A : H_1 \to H_2$ is a bounded linear operator. Then, for $\lambda > 0$ and $\bar{x} \in H_1$, the following statements are equivalent.
(i) The point $\bar{x}$ solves the SFP (3), i.e., $\bar{x} \in \{x \in C : Ax \in Q\}$;
(ii) the point $\bar{x}$ is a fixed point of the mapping $P_C(I - \lambda\nabla f)$, i.e.,
$$P_C(\bar{x} - \lambda\nabla f(\bar{x})) = \bar{x}.$$
Lemma 4. 
[53] Let $H$ be a real Hilbert space. Then, for all $x, y \in H$ and $\alpha \in [0, 1]$, we have
(i) $\|\alpha x + (1 - \alpha)y\|^2 = \alpha\|x\|^2 + (1 - \alpha)\|y\|^2 - \alpha(1 - \alpha)\|x - y\|^2$;
(ii) $\|x + y\|^2 = \|x\|^2 + \|y\|^2 + 2\langle x, y\rangle$;
(iii) $\|x + y\|^2 \leq \|x\|^2 + 2\langle y, x + y\rangle$.
Lemma 5. 
[54] Let $\{a_n\}$ be a sequence of nonnegative numbers such that
$$a_{n+1} \leq (1 - \alpha_n)a_n + \alpha_n\delta_n,$$
where $\{\delta_n\}$ is a sequence of real numbers bounded from above, $0 \leq \alpha_n \leq 1$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$. Then it holds that
$$\limsup_{n\to\infty}a_n \leq \limsup_{n\to\infty}\delta_n.$$

3. Half-Space Relaxation Projection Algorithm

In this section, we propose an iterative algorithm to solve the MSSFP (2). To make our algorithm more efficient and its implementation easier, we assume that the convex sets $C_i$ and $Q_j$ are given in the form (8), and we use projections onto the half-spaces $C_{i,n}$ and $Q_{j,n}$ defined in (9) and (10), respectively, instead of onto $C_i$ and $Q_j$, just as in the relaxed or inexact methods in [5,37,39,55]. Moreover, in order to remove the requirement of an estimate of the operator norm, and to solve the MSSFP when computing the operator norm is not easy, we now introduce a new way of selecting the step sizes for solving the MSSFP (2), given as follows for $x \in H_1$, where $C_{i,n}$ and $Q_{j,n}$ are the half-spaces defined in (9) and (10).
(i) For each $i \in \{1,\dots,N\}$ and $n \geq 1$, define
$$g_{i,n}(x) = \frac{1}{2}\|(I - P_{C_{i,n}})x\|^2 \quad \text{and so} \quad \nabla g_{i,n}(x) = (I - P_{C_{i,n}})x.$$
(ii) $g_n(x)$ and $\nabla g_n(x)$ are defined as $g_n(x) = g_{i_n^x,n}(x)$ and $\nabla g_n(x) = \nabla g_{i_n^x,n}(x)$, where $i_n^x \in \{1,\dots,N\}$ is such that, for each $n \geq 1$,
$$i_n^x \in \arg\max\{g_{i,n}(x) : i \in \{1,\dots,N\}\}.$$
(iii) For each $j \in \{1,\dots,M\}$ and $n \geq 1$, define
$$f_{j,n}(x) = \frac{1}{2}\|(I - P_{Q_{j,n}})Ax\|^2 \quad \text{and} \quad \nabla f_{j,n}(x) = A^*(I - P_{Q_{j,n}})Ax.$$
From Aubin [51], $g_{i,n}$ and $f_{j,n}$ are convex, weakly lower semi-continuous and differentiable for each $i \in \{1,\dots,N\}$ and $j \in \{1,\dots,M\}$. Now, using $g_{i,n}$, $\nabla g_{i,n}$, $g_n$, $\nabla g_n$, $f_{j,n}$ and $\nabla f_{j,n}$ given in (i)–(iii) above, and assuming that the solution set $\Omega$ of the MSSFP (2) is nonempty, we propose and analyze the strong convergence of our algorithm, called the Half-Space Relaxation Projection Algorithm.
Note that the iterative scheme in Algorithm 1 (HSRPA) is set up in such a way that $\nabla g_n(z_n)$ and $\nabla f_{j,n}(z_n)$, i.e., the projections $P_{C_{i,n}}$ and $P_{Q_{j,n}}$, are computed in a parallel setting under simple assumptions on the step sizes.
Algorithm 1: Half-Space Relaxation Projection Algorithm (HSRPA).
Initialization: Choose $u, x_1 \in H_1$. Let the positive real constants $\lambda_1$, $\lambda_2$ and $\delta_j$ ($j = 1,\dots,M$), and the real sequences $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\rho_n\}$, satisfy the following conditions:
  (C1) $\lambda_1, \lambda_2 \in (0, 1)$ and $\lambda_1 + \lambda_2 = 1$.
  (C2) $0 < \delta_j < 1$ for all $j \in \{1,\dots,M\}$, and $\sum_{j=1}^{M}\delta_j = 1$.
  (C3) $0 < \alpha_n < 1$, $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$.
  (C4) $0 < a \leq \beta_n \leq b < 1$ for all $n \in \mathbb{N}$.
  (C5) $0 < \lambda\rho_n < 4\bar{\lambda}$ and $\liminf_{n\to\infty}\rho_n(4\bar{\lambda} - \lambda\rho_n) > 0$, where $\lambda = \max\{\lambda_1, \lambda_2\}$ and $\bar{\lambda} = \min\{\lambda_1, \lambda_2\}$.
Iterative Step: Proceed with the following computations:
$$\begin{cases} z_n = (1 - \alpha_n)x_n + \alpha_n u, \\ y_n = z_n - \sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big), \\ x_{n+1} = (1 - \beta_n)z_n + \beta_n y_n, \end{cases}$$
where
$$\tau_{j,n} = \rho_n\frac{f_{j,n}(z_n) + g_n(z_n)}{d_j(z_n)},$$
for
$$d_j(z_n) = \begin{cases} 1, & \text{if } \|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2 = 0, \\ \|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2, & \text{otherwise.} \end{cases}$$
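For concreteness, one HSRPA iteration in $\mathbb{R}^s$ might be organized as in the following MATLAB sketch (ours; the function name, the struct layout and the projection handles proj_Cin and proj_Qjn are our naming assumptions; the handles are expected to return the projections onto the half-spaces (9) and (10) built at the current iterate, e.g., via the closed-form half-space projection given earlier):

```matlab
function x_next = hsrpa_step(x, u, A, proj_Cin, proj_Qjn, par)
% One HSRPA iteration (a sketch under our naming assumptions).
% proj_Cin(i, z): projection of z onto the half-space C_{i,n} of (9);
% proj_Qjn(j, w): projection of w onto the half-space Q_{j,n} of (10).
% par: struct with fields N, M, lambda1, lambda2, delta (M weights),
%      alpha_n, beta_n, rho_n satisfying (C1)-(C5).
z = (1 - par.alpha_n) * x + par.alpha_n * u;

% g_n(z) = max_i g_{i,n}(z), grad g_n(z) = (I - P_{C_{i*,n}}) z,
% where i* attains the maximum (item (ii) above).
gvals = zeros(par.N, 1);
G     = zeros(numel(z), par.N);
for i = 1:par.N
    G(:, i)  = z - proj_Cin(i, z);
    gvals(i) = 0.5 * norm(G(:, i))^2;
end
[g_val, istar] = max(gvals);
grad_g = G(:, istar);

% Loop over j (parallelizable): each term uses its own P_{Q_{j,n}}.
y  = z;
Az = A * z;
for j = 1:par.M
    r      = Az - proj_Qjn(j, Az);   % (I - P_{Q_{j,n}}) A z
    f_val  = 0.5 * norm(r)^2;
    grad_f = A' * r;                 % grad f_{j,n}(z) = A^T (I - P_{Q_{j,n}}) A z
    d = norm(grad_g)^2 + norm(grad_f)^2;
    if d == 0, d = 1; end            % d_j(z_n) as in Algorithm 1
    tau = par.rho_n * (f_val + g_val) / d;
    y = y - par.delta(j) * tau * (par.lambda1 * grad_g + par.lambda2 * grad_f);
end

x_next = (1 - par.beta_n) * z + par.beta_n * y;
end
```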
Lemma 6. 
If $\{j \in \{1,\dots,M\} : \|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2 \neq 0\} = \emptyset$ at some iterate $n$ of HSRPA, then $z_n$ is a solution of the MSSFP (2).
Proof. 
$\{j \in \{1,\dots,M\} : \|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2 \neq 0\} = \emptyset$ implies
$$\begin{aligned} &\|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2 = 0, \quad \forall j \in \{1,\dots,M\}, \\ \Rightarrow\ &\nabla g_n(z_n) = 0 = \nabla f_{j,n}(z_n), \quad \forall j \in \{1,\dots,M\}, \\ \Rightarrow\ &\nabla g_{i,n}(z_n) = 0 = \nabla f_{j,n}(z_n), \quad \forall i \in \{1,\dots,N\}, \ j \in \{1,\dots,M\}, \\ \Rightarrow\ &(I - P_{C_{i,n}})(z_n) = 0 = A^*(I - P_{Q_{j,n}})A(z_n), \quad \forall i \in \{1,\dots,N\}, \ j \in \{1,\dots,M\}. \end{aligned}$$
Thus, one gets $P_{C_{i,n}}(z_n - \lambda\nabla f_{j,n}(z_n)) = z_n$ for all $i \in \{1,\dots,N\}$ and $j \in \{1,\dots,M\}$. Since $(I - P_{C_{i,n}})z_n = 0$, we get $z_n \in C_{i,n}$ for all $i \in \{1,\dots,N\}$. Combined with the fixed point relation of Lemma 3 (ii), we also get that $Az_n \in Q_{j,n}$ for all $j \in \{1,\dots,M\}$. Following the representations of the sets $C_{i,n}$ and $Q_{j,n}$ in (9) and (10), we obtain that $c_i(z_n) \leq 0$ for all $i \in \{1,\dots,N\}$ and $q_j(Az_n) \leq 0$ for all $j \in \{1,\dots,M\}$, and this implies that $z_n \in C_i$ for all $i \in \{1,\dots,N\}$ and $Az_n \in Q_j$ for all $j \in \{1,\dots,M\}$, which completes the proof. ☐
By Lemma 6, we can conclude that HSRPA terminates at some iterate $n$ when $\{j \in \{1,\dots,M\} : \|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2 \neq 0\} = \emptyset$. Otherwise, if HSRPA does not stop, then we have the following strong convergence theorem for the approximation of a solution of the MSSFP (2).
Theorem 1. 
The sequence $\{x_n\}$ generated by HSRPA converges strongly to the solution point $\bar{x}$ of the MSSFP (2) ($x_n \to \bar{x} \in \Omega$), where $\bar{x} = P_\Omega u$.
Proof. 
Let $\bar{x} \in \Omega$. Since $I - P_{C_{i,n}}$ and $I - P_{Q_{j,n}}$ are firmly nonexpansive, and since $\bar{x}$ verifies (2), we have for all $x \in H_1$
$$\langle \nabla g_{i,n}(x), x - \bar{x}\rangle = \langle (I - P_{C_{i,n}})x, x - \bar{x}\rangle \geq \|(I - P_{C_{i,n}})x\|^2 = 2g_{i,n}(x), \tag{11}$$
and
$$\langle \nabla f_{j,n}(x), x - \bar{x}\rangle = \langle A^*(I - P_{Q_{j,n}})Ax, x - \bar{x}\rangle = \langle (I - P_{Q_{j,n}})Ax, Ax - A\bar{x}\rangle \geq \|(I - P_{Q_{j,n}})Ax\|^2 = 2f_{j,n}(x). \tag{12}$$
Using the definition of $y_n$ and Lemma 4 (ii), we have
$$\begin{aligned} \|y_n - \bar{x}\|^2 ={}& \Big\|z_n - \sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big) - \bar{x}\Big\|^2 \\ ={}& \|z_n - \bar{x}\|^2 + \Big\|\sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big)\Big\|^2 \\ &- 2\Big\langle\sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big), z_n - \bar{x}\Big\rangle. \end{aligned} \tag{13}$$
Using the convexity of $\|\cdot\|^2$, we have
$$\begin{aligned} \Big\|\sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big)\Big\|^2 &\leq \sum_{j=1}^{M}\delta_j(\tau_{j,n})^2\big\|\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big\|^2 \\ &\leq \sum_{j=1}^{M}\delta_j(\tau_{j,n})^2\big(\lambda_1\|\nabla g_n(z_n)\|^2 + \lambda_2\|\nabla f_{j,n}(z_n)\|^2\big) \\ &\leq \sum_{j=1}^{M}\lambda\delta_j\Big(\rho_n\frac{f_{j,n}(z_n) + g_n(z_n)}{d_j(z_n)}\Big)^2\big(\|\nabla g_n(z_n)\|^2 + \|\nabla f_{j,n}(z_n)\|^2\big) \\ &\leq \sum_{j=1}^{M}\lambda\delta_j\Big(\rho_n\frac{f_{j,n}(z_n) + g_n(z_n)}{d_j(z_n)}\Big)^2 d_j(z_n) \\ &= \lambda\rho_n^2\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n}(z_n) + g_n(z_n)\big)^2}{d_j(z_n)}. \end{aligned} \tag{14}$$
From (11) and (12), we have
$$\begin{aligned} \Big\langle\sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_{j,n}(z_n)\big), z_n - \bar{x}\Big\rangle &= \sum_{j=1}^{M}\delta_j\tau_{j,n}\big(\lambda_1\langle\nabla g_n(z_n), z_n - \bar{x}\rangle + \lambda_2\langle\nabla f_{j,n}(z_n), z_n - \bar{x}\rangle\big) \\ &\geq \sum_{j=1}^{M}\delta_j\tau_{j,n}\big(2\lambda_1 g_n(z_n) + 2\lambda_2 f_{j,n}(z_n)\big) \\ &= \sum_{j=1}^{M}\delta_j\rho_n\frac{f_{j,n}(z_n) + g_n(z_n)}{d_j(z_n)}\big(2\lambda_1 g_n(z_n) + 2\lambda_2 f_{j,n}(z_n)\big) \\ &\geq 2\bar{\lambda}\rho_n\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n}(z_n) + g_n(z_n)\big)^2}{d_j(z_n)}. \end{aligned} \tag{15}$$
In view of (13), (14) and (15), we have
$$\begin{aligned} \|y_n - \bar{x}\|^2 &\leq \|z_n - \bar{x}\|^2 + \lambda\rho_n^2\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n}(z_n) + g_n(z_n)\big)^2}{d_j(z_n)} - 4\bar{\lambda}\rho_n\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n}(z_n) + g_n(z_n)\big)^2}{d_j(z_n)} \\ &= \|z_n - \bar{x}\|^2 + \rho_n(\lambda\rho_n - 4\bar{\lambda})\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n}(z_n) + g_n(z_n)\big)^2}{d_j(z_n)}. \end{aligned} \tag{16}$$
From (16) and (C5), we have
$$\|y_n - \bar{x}\| \leq \|z_n - \bar{x}\|. \tag{17}$$
Using (17), Lemma 4 (i) and the definition of $x_{n+1}$, we get
$$\begin{aligned} \|x_{n+1} - \bar{x}\|^2 &= \|(1 - \beta_n)z_n + \beta_n y_n - \bar{x}\|^2 = \|(1 - \beta_n)(z_n - \bar{x}) + \beta_n(y_n - \bar{x})\|^2 \\ &= (1 - \beta_n)\|z_n - \bar{x}\|^2 + \beta_n\|y_n - \bar{x}\|^2 - \beta_n(1 - \beta_n)\|z_n - y_n\|^2 \\ &\leq \|z_n - \bar{x}\|^2 - \beta_n(1 - \beta_n)\|z_n - y_n\|^2. \end{aligned} \tag{18}$$
From (18) and the definition of $z_n$, we get
$$\|x_{n+1} - \bar{x}\| \leq \|z_n - \bar{x}\| = \|(1 - \alpha_n)(x_n - \bar{x}) + \alpha_n(u - \bar{x})\| \leq \max\{\|x_n - \bar{x}\|, \|u - \bar{x}\|\} \leq \cdots \leq \max\{\|x_1 - \bar{x}\|, \|u - \bar{x}\|\},$$
which shows that $\{x_n\}$ is bounded. Consequently, $\{z_n\}$, $\{Az_n\}$ and $\{y_n\}$ are all bounded.
Now,
$$\frac{1}{\beta_n}(x_{n+1} - z_n) = \frac{1}{\beta_n}\big((1 - \beta_n)z_n + \beta_n y_n - z_n\big) = y_n - z_n, \tag{19}$$
and
$$\|y_n - z_n\|^2 = \frac{1}{\beta_n^2}\|x_{n+1} - z_n\|^2 = \frac{\alpha_n}{\beta_n}\cdot\frac{\|x_{n+1} - z_n\|^2}{\alpha_n\beta_n}. \tag{20}$$
Using (18) and (19), we have
$$\|x_{n+1} - \bar{x}\|^2 \leq \|z_n - \bar{x}\|^2 - \frac{1 - \beta_n}{\beta_n}\|x_{n+1} - z_n\|^2. \tag{21}$$
From the definition of $z_n$, we have
$$\begin{aligned} \|z_n - \bar{x}\|^2 &= \|(1 - \alpha_n)x_n + \alpha_n u - \bar{x}\|^2 \\ &= (1 - \alpha_n)^2\|x_n - \bar{x}\|^2 + \alpha_n^2\|u - \bar{x}\|^2 + 2\alpha_n(1 - \alpha_n)\langle x_n - \bar{x}, u - \bar{x}\rangle \\ &\leq (1 - \alpha_n)\|x_n - \bar{x}\|^2 + \alpha_n^2\|u - \bar{x}\|^2 + 2\alpha_n(1 - \alpha_n)\langle x_n - \bar{x}, u - \bar{x}\rangle. \end{aligned} \tag{22}$$
Thus, (21) and (22) give
$$\|x_{n+1} - \bar{x}\|^2 \leq (1 - \alpha_n)\|x_n - \bar{x}\|^2 + \alpha_n^2\|u - \bar{x}\|^2 + 2\alpha_n(1 - \alpha_n)\langle x_n - \bar{x}, u - \bar{x}\rangle - \frac{1 - \beta_n}{\beta_n}\|x_{n+1} - z_n\|^2. \tag{23}$$
That is,
$$\|x_{n+1} - \bar{x}\|^2 \leq (1 - \alpha_n)\|x_n - \bar{x}\|^2 - \alpha_n\Gamma_n, \tag{24}$$
where
$$\Gamma_n = -\alpha_n\|u - \bar{x}\|^2 + 2(1 - \alpha_n)\langle \bar{x} - x_n, u - \bar{x}\rangle + \frac{1 - \beta_n}{\alpha_n\beta_n}\|x_{n+1} - z_n\|^2. \tag{25}$$
We know that $\{x_n\}$ is bounded, and so $\{\langle \bar{x} - x_n, u - \bar{x}\rangle\}$ is bounded below; hence $\{\Gamma_n\}$ is bounded below, i.e., $\{-\Gamma_n\}$ is bounded from above. Furthermore, using Lemma 5 (applied to (24)) and (C3), we have
$$\limsup_{n\to\infty}\|x_n - \bar{x}\|^2 \leq \limsup_{n\to\infty}(-\Gamma_n) = -\liminf_{n\to\infty}\Gamma_n.$$
Therefore, $\liminf_{n\to\infty}\Gamma_n$ is a finite real number, and by (C3) we have
$$\liminf_{n\to\infty}\Gamma_n = \liminf_{n\to\infty}\Big(2\langle \bar{x} - x_n, u - \bar{x}\rangle + \frac{1 - \beta_n}{\alpha_n\beta_n}\|x_{n+1} - z_n\|^2\Big).$$
Since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup p$ for some $p \in H_1$ and
$$\liminf_{n\to\infty}\Gamma_n = \liminf_{k\to\infty}\Big(2\langle \bar{x} - x_{n_k}, u - \bar{x}\rangle + \frac{1 - \beta_{n_k}}{\alpha_{n_k}\beta_{n_k}}\|x_{n_k+1} - z_{n_k}\|^2\Big). \tag{26}$$
Since $\{x_n\}$ is bounded and $\liminf_{n\to\infty}\Gamma_n$ is finite, we have that $\Big\{\frac{1 - \beta_{n_k}}{\alpha_{n_k}\beta_{n_k}}\|x_{n_k+1} - z_{n_k}\|^2\Big\}$ is bounded. Also, by (C4), we have $\frac{1 - \beta_n}{\alpha_n\beta_n} \geq \frac{1 - b}{\alpha_n\beta_n} > 0$, and so $\Big\{\frac{1}{\alpha_{n_k}\beta_{n_k}}\|x_{n_k+1} - z_{n_k}\|^2\Big\}$ is bounded.
Observe that, from (C3) and (C4), we have
$$0 < \frac{\alpha_{n_k}}{\beta_{n_k}} \leq \frac{\alpha_{n_k}}{a} \to 0, \quad k \to \infty.$$
Therefore, we obtain from (20) and $\frac{\alpha_{n_k}}{\beta_{n_k}} \to 0$ as $k \to \infty$ that
$$\|y_{n_k} - z_{n_k}\| \to 0, \quad k \to \infty. \tag{27}$$
From the definition of $x_{n+1}$, we have
$$\|x_{n_k+1} - z_{n_k}\| = \beta_{n_k}\|y_{n_k} - z_{n_k}\| \to 0, \quad k \to \infty, \tag{28}$$
and
$$\|z_{n_k} - x_{n_k}\| = \alpha_{n_k}\|u - x_{n_k}\| \to 0, \quad k \to \infty.$$
Hence,
$$\|x_{n_k+1} - x_{n_k}\| \leq \|x_{n_k+1} - z_{n_k}\| + \|z_{n_k} - x_{n_k}\| \to 0, \quad k \to \infty.$$
Now, using (16), we obtain
$$\begin{aligned} \rho_{n_k}\big(4\bar{\lambda} - \lambda\rho_{n_k}\big)\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n_k}(z_{n_k}) + g_{n_k}(z_{n_k})\big)^2}{d_j(z_{n_k})} &\leq \big(\|z_{n_k} - \bar{x}\| - \|y_{n_k} - \bar{x}\|\big)\big(\|z_{n_k} - \bar{x}\| + \|y_{n_k} - \bar{x}\|\big) \\ &\leq \|z_{n_k} - y_{n_k}\|\big(\|z_{n_k} - \bar{x}\| + \|y_{n_k} - \bar{x}\|\big). \end{aligned} \tag{29}$$
Therefore, (27), (29) and (C5) give
$$\rho_{n_k}\big(4\bar{\lambda} - \lambda\rho_{n_k}\big)\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n_k}(z_{n_k}) + g_{n_k}(z_{n_k})\big)^2}{d_j(z_{n_k})} \to 0, \quad k \to \infty. \tag{30}$$
Again using (C5) together with (30) yields
$$\sum_{j=1}^{M}\delta_j\frac{\big(f_{j,n_k}(z_{n_k}) + g_{n_k}(z_{n_k})\big)^2}{d_j(z_{n_k})} \to 0, \quad k \to \infty. \tag{31}$$
Hence, in view of (31) and the restriction condition (C2), we have
$$\frac{\big(f_{j,n_k}(z_{n_k}) + g_{n_k}(z_{n_k})\big)^2}{d_j(z_{n_k})} \to 0, \quad k \to \infty, \tag{32}$$
for all $j \in \{1,\dots,M\}$.
For each $i \in \{1,\dots,N\}$ and each $j \in \{1,\dots,M\}$, $\nabla f_{j,n}(\cdot)$ and $\nabla g_{i,n}(\cdot)$ are Lipschitz continuous with constants $\|A\|^2$ and 1, respectively. Since the sequence $\{z_n\}$ is bounded and
$$\|\nabla f_{j,n}(z_n)\| = \|\nabla f_{j,n}(z_n) - \nabla f_{j,n}(\bar{x})\| \leq \|A\|^2\|z_n - \bar{x}\|, \quad \forall j \in \{1,\dots,M\},$$
$$\|\nabla g_{i,n}(z_n)\| = \|\nabla g_{i,n}(z_n) - \nabla g_{i,n}(\bar{x})\| \leq \|z_n - \bar{x}\|, \quad \forall i \in \{1,\dots,N\},$$
the sequences $\{\|\nabla g_{i,n}(z_n)\|\}_{n=1}^{\infty}$ and $\{\|\nabla f_{j,n}(z_n)\|\}_{n=1}^{\infty}$ are bounded. Hence, $\{d_j(z_n)\}_{n=1}^{\infty}$ is bounded, and so $\{d_j(z_{n_k})\}_{k=1}^{\infty}$ is bounded. Consequently, from (32), we have
$$\lim_{k\to\infty}f_{j,n_k}(z_{n_k}) = \lim_{k\to\infty}g_{n_k}(z_{n_k}) = 0, \quad \forall j \in \{1,\dots,M\}. \tag{33}$$
From the definition of $g_{n_k}(z_{n_k})$, we have
$$g_{i,n_k}(z_{n_k}) \leq g_{n_k}(z_{n_k}), \quad \forall i \in \{1,\dots,N\}. \tag{34}$$
Therefore, (33) and (34) give
$$\lim_{k\to\infty}f_{j,n_k}(z_{n_k}) = \lim_{k\to\infty}g_{i,n_k}(z_{n_k}) = 0, \quad \forall i \in \{1,\dots,N\}, \ j \in \{1,\dots,M\}.$$
That is, for all $i \in \{1,\dots,N\}$, $j \in \{1,\dots,M\}$, we have
$$\lim_{k\to\infty}\|(I - P_{Q_{j,n_k}})Az_{n_k}\| = \lim_{k\to\infty}\|(I - P_{C_{i,n_k}})z_{n_k}\| = 0. \tag{35}$$
Since $\{z_n\}$ is bounded, by the boundedness assumption on the subdifferential operator $\partial q_j$, the sequence $\{\varepsilon_{j,n}\}_{n=1}^{\infty}$ is bounded. In view of this and (35), for all $j \in \{1,\dots,M\}$ we have
$$q_j(Az_{n_k}) \leq \langle \varepsilon_{j,n_k}, Az_{n_k} - P_{Q_{j,n_k}}(Az_{n_k})\rangle \leq \|\varepsilon_{j,n_k}\|\,\|(I - P_{Q_{j,n_k}})Az_{n_k}\| \to 0, \quad k \to \infty. \tag{36}$$
Similarly, from the boundedness of $\{\xi_{i,n}\}_{n=1}^{\infty}$ and (35), for all $i \in \{1,\dots,N\}$ we obtain
$$c_i(z_{n_k}) \leq \langle \xi_{i,n_k}, z_{n_k} - P_{C_{i,n_k}}(z_{n_k})\rangle \leq \|\xi_{i,n_k}\|\,\|(I - P_{C_{i,n_k}})z_{n_k}\| \to 0, \quad k \to \infty. \tag{37}$$
Since $x_{n_k} \rightharpoonup p$ and $\|z_{n_k} - x_{n_k}\| \to 0$, we have $z_{n_k} \rightharpoonup p$ and hence $Az_{n_k} \rightharpoonup Ap$.
The weak lower semi-continuity of $q_j(\cdot)$ and (36) imply that
$$q_j(Ap) \leq \liminf_{k\to\infty}q_j(Az_{n_k}) \leq \limsup_{k\to\infty}q_j(Az_{n_k}) \leq 0, \quad \forall j \in \{1,\dots,M\}.$$
That is, $Ap \in Q_j$ for all $j \in \{1,\dots,M\}$.
Likewise, the weak lower semi-continuity of $c_i(\cdot)$ and (37) imply that
$$c_i(p) \leq \liminf_{k\to\infty}c_i(z_{n_k}) \leq 0, \quad \forall i \in \{1,\dots,N\}.$$
That is, $p \in C_i$ for all $i \in \{1,\dots,N\}$. Hence, $p \in \Omega$.
Take $\bar{x} = P_\Omega u$. Then, we obtain from (26) and Lemma 1 that
$$\begin{aligned} \liminf_{n\to\infty}\Gamma_n &= \liminf_{k\to\infty}\Big(2\langle \bar{x} - x_{n_k}, u - \bar{x}\rangle + \frac{1 - \beta_{n_k}}{\alpha_{n_k}\beta_{n_k}}\|x_{n_k+1} - z_{n_k}\|^2\Big) \\ &\geq 2\liminf_{k\to\infty}\langle \bar{x} - x_{n_k}, u - \bar{x}\rangle = 2\langle \bar{x} - p, u - \bar{x}\rangle \geq 0. \end{aligned}$$
Then we have from (25) that
$$\limsup_{n\to\infty}\|x_n - \bar{x}\|^2 \leq \limsup_{n\to\infty}(-\Gamma_n) = -\liminf_{n\to\infty}\Gamma_n \leq 0.$$
Therefore, $\|x_n - \bar{x}\| \to 0$, and this implies that $\{x_n\}$ converges strongly to $\bar{x}$. This completes the proof. ☐
Remark 1. 
i.
When the point $u$ in HSRPA is taken to be $0$, from Theorem 1 we see that the limit point $\bar{x}$ of the sequence $\{x_n\}$ is the unique minimum-norm solution of the MSSFP, i.e., $\|\bar{x}\| = \min_{x \in \Omega}\|x\|$.
ii.
In the algorithm (HSRPA), the stepsize $\tau_{j,n}$ can also be replaced by
$$\tau^{*}_{j,n} = \rho_n\frac{f_{j,n}(z_n) + g_n(z_n)}{\big(d^{*}_j(z_n)\big)^2}, \tag{38}$$
where
$$d^{*}_j(z_n) = \begin{cases} 1, & \text{if } \max\{\|\nabla g_n(z_n)\|, \|\nabla f_{j,n}(z_n)\|\} = 0, \\ \max\{\|\nabla g_n(z_n)\|, \|\nabla f_{j,n}(z_n)\|\}, & \text{otherwise.} \end{cases}$$
The proof of the strong convergence of HSRPA using the stepsize $\tau^{*}_{j,n}$ defined in (38) is almost the same as the proof of Theorem 1. To be precise, only a slight rearrangement in (14) is required.
If $M = N = 1$, we obtain the following algorithm as a consequence of HSRPA concerning the SFP (3), assuming that $C$ and $Q$ are given as sub-level sets of convex functions (5), constructing the half-spaces (6) and (7), and defining $g_n(x) = \frac{1}{2}\|(I - P_{C_n})x\|^2$, $\nabla g_n(x) = (I - P_{C_n})x$, $f_n(x) = \frac{1}{2}\|(I - P_{Q_n})Ax\|^2$ and $\nabla f_n(x) = A^*(I - P_{Q_n})Ax$.
Corollary 1. 
Assume that $C \cap A^{-1}(Q) \neq \emptyset$. Then the sequence $\{x_n\}$ generated by Algorithm 2 converges strongly to the solution $\bar{x} = P_{C \cap A^{-1}(Q)}(u)$ of the SFP (3).
Algorithm 2: Algorithm for solving the SFP.
Initialization: Choose $u, x_1 \in H_1$. Let the positive real constants $\lambda_1$ and $\lambda_2$, and the real sequences $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\rho_n\}$, satisfy the following conditions:
  (A1) $\lambda_1, \lambda_2 \in (0, 1)$ and $\lambda_1 + \lambda_2 = 1$.
  (A2) $0 < \alpha_n < 1$, $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$.
  (A3) $0 < a \leq \beta_n \leq b < 1$ for all $n \in \mathbb{N}$.
  (A4) $0 < \lambda\rho_n < 4\bar{\lambda}$ and $\liminf_{n\to\infty}\rho_n(4\bar{\lambda} - \lambda\rho_n) > 0$, where $\lambda = \max\{\lambda_1, \lambda_2\}$ and $\bar{\lambda} = \min\{\lambda_1, \lambda_2\}$.
Iterative Step: Proceed with the following computations:
$$\begin{cases} z_n = (1 - \alpha_n)x_n + \alpha_n u, \\ y_n = z_n - \tau_n\big(\lambda_1\nabla g_n(z_n) + \lambda_2\nabla f_n(z_n)\big), \\ x_{n+1} = (1 - \beta_n)z_n + \beta_n y_n, \end{cases}$$
where
$$\tau_n = \rho_n\frac{f_n(z_n) + g_n(z_n)}{d(z_n)},$$
for
$$d(z_n) = \begin{cases} 1, & \text{if } \|\nabla g_n(z_n)\|^2 + \|\nabla f_n(z_n)\|^2 = 0, \\ \|\nabla g_n(z_n)\|^2 + \|\nabla f_n(z_n)\|^2, & \text{otherwise.} \end{cases}$$
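Since Algorithm 2 is the $N = M = 1$ case of HSRPA, it can be driven by the same one-iteration sketch given after Algorithm 1. A minimal usage sketch under our naming assumptions (x1, u, A, the projection handles proj_Cn and proj_Qn, maxit and tol are assumed given):

```matlab
% Minimal usage sketch (ours): Algorithm 2 as HSRPA with N = M = 1.
par = struct('N', 1, 'M', 1, 'lambda1', 0.5, 'lambda2', 0.5, ...
             'delta', 1, 'rho_n', 1, 'alpha_n', 0, 'beta_n', 0.5);
x = x1;
for n = 1:maxit
    par.alpha_n = 1 / (n + 1);           % satisfies (A2)
    par.beta_n  = (n + 2) / (2*n + 6);   % satisfies (A3)
    x_new = hsrpa_step(x, u, A, proj_Cn, proj_Qn, par);
    if norm(x_new - x) < tol, break; end
    x = x_new;
end
```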

4. Preliminary Numerical Results and Applications

In this section, we illustrate the numerical performance and applicability of HSRPA by solving some problems. In the first example, we investigate the numerical performance of HSRPA with regard to different choices of the control parameters $\alpha_n$, $\beta_n$ and $\rho_n$. In Example 2, we illustrate the numerical properties of HSRPA in comparison with three other strongly convergent algorithms, namely the gradient projection method (GPM) by Censor et al. ([7], Algorithm 1), the perturbed projection method (PPM) by Censor et al. ([43], Algorithm 5), and the self-adaptive projection method (SAPM) by Zhao and Yang ([46], Algorithm 3.2). As mentioned in Remark 1 (ii), the stepsize in HSRPA can be replaced by the stepsize (38); therefore, in Example 3, we analyze the effect of the two stepsizes in HSRPA for different choices of $\lambda_1$ and $\lambda_2$, and we also compare HSRPA with Zhao et al. ([48], Algorithm 2.1). A comparison of Algorithm 2 with the strong convergence result for the SFP proposed by Shehu et al. [56] is given in Example 4. Finally, in Section 4.1, we present a sparse signal recovery experiment to illustrate the efficiency of Algorithm 2 in comparison with the algorithms proposed by Lopez [2] and Yang [37]. The numerical computations were carried out on a standard TOSHIBA laptop with an Intel(R) Core(TM) i5-2450M CPU @ 2.50 GHz and 4 GB of memory. The programme is implemented in MATLAB R2020a.
Example 1. 
Consider the MSSFP (2) for $H_1 = \mathbb{R}^s$, $H_2 = \mathbb{R}^t$ and $A : \mathbb{R}^s \to \mathbb{R}^t$ given by $A(x) = G_{t\times s}(x)$, where $G_{t\times s}$ is a $t \times s$ matrix; the closed convex subsets $C_i$ ($i \in \{1,\dots,N\}$) of $\mathbb{R}^s$ are given by
$$C_i = \{x = (x_1,\dots,x_s)^T \in \mathbb{R}^s : c_i(x) \leq 0\},$$
where $c_i(x) = \|x - x_i^0\|^2 - r_i^2$ such that
$r_i = r$ for all $i \in \{1,\dots,N\}$, where $r$ is a positive real number,
$x_i^0 = (x_{1,i},\dots,x_{s,i})^T = (0,\dots,0,i-1)^T \in \mathbb{R}^s$ for each $i = 1,\dots,N$;
and the closed convex subsets $Q_j$ ($j \in \{1,\dots,M\}$) of $\mathbb{R}^t$ are given by
$$Q_j = \{y = (y_1, y_2,\dots,y_t)^T \in \mathbb{R}^t : q_j(y) \leq 0\},$$
where
$$q_j(y) = \sum_{k=1}^{t}a_{k,j}(y_k - y_{k,j}) - b_j,$$
such that, for each $j \in \{1,\dots,M\}$:
$a_{k,j} = 2j$, $k = 1,\dots,t$,
$b_j = j - 1$,
$y_{t,j} = r\theta - \frac{b_j}{a_{t,j}}$, and $y_{k,j} = 0$, $k = 1,\dots,t-1$, where $\theta$ is a nonzero real number.
Notice that $\bigcap_{i=1}^{N}C_i = \emptyset$ for $N > 2r + 1$, while $\bigcap_{i=1}^{N}C_i \neq \emptyset$ and contains infinitely many points for $0 < N < 2r + 1$, and $\bigcap_{i=1}^{N}C_i = \{(0,\dots,0,r)^T\}$ for $N = 2r + 1$, where $r$ is a natural number. Moreover, $\bigcap_{j=1}^{M}Q_j \neq \emptyset$, and $(0,\dots,0,r\theta)^T \in \bigcap_{j=1}^{M}Q_j$.
We consider $s = t$, $G_{t\times s} = \theta I_{s\times s}$, $N = 2r + 1$ and $M = 4$, where $r$ is a natural number and $I_{s\times s}$ is the $s \times s$ identity matrix. Thus, $A((0,\dots,0,r)^T) = (0,\dots,0,r\theta)^T$, and hence the solution set of the MSSFP is $\Omega = \{(0,\dots,0,r)^T\}$.
For each $i \in \{1,\dots,N\}$ and $j \in \{1,\dots,M\}$, the subdifferentials are given by
$$\partial c_i(z_n) = \begin{cases} \dfrac{z_n - x_i^0}{\|z_n - x_i^0\|}, & \text{if } z_n - x_i^0 \neq 0, \\ \{\varepsilon_i \in \mathbb{R}^s : \|\varepsilon_i\| \leq 1\}, & \text{otherwise,} \end{cases}$$
and $\partial q_j(Az_n) = \{(a_{1,j},\dots,a_{t,j})^T\}$.
Note that the projection
$$P_{C_{i,n}}(z_n) = \arg\min\{\|x - z_n\| : x \in C_{i,n}\},$$
where $C_{i,n} = \{x \in H_1 : c_i(z_n) \leq \langle \xi_{i,n}, z_n - x\rangle\}$, amounts to solving the following quadratic program with an inequality constraint:
$$\text{minimize } \ \tfrac{1}{2}x^T\bar{H}x + \bar{B}_n^T x + \bar{c} \quad \text{subject to } \ \bar{D}_{i,n}x \leq \bar{F}_i, \tag{39}$$
where $\bar{H} = 2I_{s\times s}$, $\bar{B}_n = -2z_n$, $\bar{c} = \|z_n\|^2$, $\bar{D}_{i,n} = \xi_{i,n}^T = [\xi_{i,n,1},\dots,\xi_{i,n,s}]$ and $\bar{F}_i = r_i^2 - \|z_n - x_i^0\|^2 + \langle \xi_{i,n}, z_n\rangle$. Moreover, the projection $P_{Q_{j,n}}(Az_n)$, where $Q_{j,n} = \{y \in H_2 : q_j(Az_n) \leq \langle \varepsilon_{j,n}, Az_n - y\rangle\}$, amounts to solving the quadratic program
$$\text{minimize } \ \tfrac{1}{2}w^T\hat{H}w + \hat{B}_n^T w + \hat{c} \quad \text{subject to } \ \hat{D}_{j,n}w \leq \hat{F}_j, \tag{40}$$
where $\hat{H} = 2I_{t\times t}$, $\hat{B}_n = -2Az_n$, $\hat{c} = \|Az_n\|^2$, $\hat{D}_{j,n} = \varepsilon_{j,n}^T$ and $\hat{F}_j = b_j + \langle \varepsilon_{j,n}, Az_n\rangle + \langle a_j, y_j^0 - Az_n\rangle$ for $a_j = (a_{1,j},\dots,a_{t,j})^T$. Problems (39) and (40) can be solved effectively by the appropriate solver in MATLAB. In the following experiments we took
$$\xi_{i,n} = \begin{cases} \dfrac{z_n - x_i^0}{\|z_n - x_i^0\|}, & \text{if } z_n - x_i^0 \neq 0, \\ 0, & \text{otherwise,} \end{cases}$$
and $\varepsilon_{j,n} = (a_{1,j},\dots,a_{t,j})^T$.
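For instance, (39) might be passed directly to MATLAB's quadprog from the Optimization Toolbox; a minimal sketch under our variable naming (zn, xi_in, xi0, ri and the dimension s are assumed given):

```matlab
% Sketch (ours): P_{C_{i,n}}(z_n) via quadprog, cf. the QP (39).
% quadprog solves  min 0.5*x'*H*x + f'*x  subject to  A*x <= b;
% the constant term ||z_n||^2 does not affect the minimizer.
Hbar = 2 * eye(s);
fbar = -2 * zn;
Dbar = xi_in';                 % row vector built from the subgradient at z_n
Fbar = ri^2 - norm(zn - xi0)^2 + dot(xi_in, zn);
opts = optimoptions('quadprog', 'Display', 'off');
p = quadprog(Hbar, fbar, Dbar, Fbar, [], [], [], [], [], opts);
```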
We study the numerical behavior of HSRPA for different parameters $\alpha_n$, $\beta_n$, $\rho_n$ and different dimensions $s = t$, where $r = 2$ (i.e., $N = 5$), $\lambda_1 = \lambda_2 = \frac{1}{2}$ and $\delta_j = \frac{j}{4}$ for all $j \in \{1, 2, 3, 4\}$ are fixed. Notice that, for the choice $\lambda_1 = \lambda_2 = \frac{1}{2}$, the parameter $\rho_n$ is chosen in such a way that $0 < \rho_n < 4$ and $\liminf_{n\to\infty}\rho_n(4 - \rho_n) > 0$. The numerical results are shown in Figure 1 and Figure 2.
In view of Figure 1 and Figure 2, we see that our algorithm works better for
(i) a sequence $\{\alpha_n\}$ whose terms are very close to zero;
(ii) a sequence $\{\beta_n\}$ whose terms are very close to 1;
(iii) a sequence $\{\rho_n\}$ whose terms are large but do not exceed 4.
Example 2. 
Comparing HSRPA with GPM, PPM and SAPM.
Consider the MSSFP (2) for $H_1 = \mathbb{R}^3$, $H_2 = \mathbb{R}^3$ and $A : \mathbb{R}^3 \to \mathbb{R}^3$ given by $A = \theta I_{3\times3}$; the closed convex subsets $C_i$ ($i \in \{1,\dots,N\}$) of $\mathbb{R}^3$ are given by
$$C_i = \{x \in \mathbb{R}^3 : c_i(x) \leq 0\},$$
where $\theta$ is a real constant, $I_{3\times3}$ is the $3 \times 3$ identity matrix and
$$c_i(x) = (-1)^i\big(\langle x, w_i^0\rangle - \gamma_i\big)$$
such that $w_i^0 = \big(i, i+1, \frac{i+2}{2}\big)^T$ and $\gamma_i = 5i + 3$; and the closed convex subsets $Q_j$ ($j \in \{1,\dots,M\}$) of $\mathbb{R}^3$ are given by
$$Q_j = \{y \in \mathbb{R}^3 : q_j(y) \leq 0\},$$
where $q_1(y) = \|y\|^2 - 12\theta^2$, $q_j(y) = (-1)^j\big(\langle y, z_j^0\rangle - 3\theta(j + j^2)\big)$ and $z_j^0 = (j, j^2, 2j^2)^T$ for each $j \in \{2,\dots,M\}$.
For each choice of $\theta$, the solution set of the MSSFP is $\{(3, 1, 2)^T\}$, i.e., $\bar{x} = (3, 1, 2)^T$.
We choose GPM, PPM and SAPM for comparison because they address the same problem and share some common features with our approach.
GPM: The proposed iterative algorithm for solving the MSSFP reduces to
$$x_{n+1} = x_n + \varrho\Big(\sum_{i=1}^{N}\sigma_i(P_{C_i} - I)x_n + \sum_{j=1}^{M}\nu_j A^*(P_{Q_j} - I)Ax_n\Big), \tag{41}$$
where $\varrho \in (0, \frac{2}{L})$, $L = \sum_{i=1}^{N}\sigma_i + \omega\sum_{j=1}^{M}\nu_j$ and $\sum_{i=1}^{N}\sigma_i + \sum_{j=1}^{M}\nu_j = 1$ for $\sigma_i > 0$ and $\nu_j > 0$, where $\omega$ is the spectral radius of $A^*A = A^TA$.
PPM: The proposed algorithm is obtained by replacing the projections onto the closed convex subsets $C_i$ and $Q_j$ in (41) by the projections onto the half-spaces $C_{i,n}$ and $Q_{j,n}$.
SAPM: The proposed iterative algorithm for solving the MSSFP reduces to
$$\begin{cases} y_n = x_n - \varrho_n p_n(x_n), \\ \varrho_n = \gamma\varsigma^{l_n}, \ \text{where } l_n = \min\big\{l \in \{0, 1, 2, \dots\} : \gamma\varsigma^{l}\,\|p_n(x_n) - p_n(y_n)\| \leq \mu\|x_n - y_n\|\big\}, \\ x_{n+1} = x_n - \varrho_n p_n(y_n), \end{cases}$$
where $p_n(x_n) = \sum_{i=1}^{N}\sigma_i(I - P_{C_{i,n}})x_n + \sum_{j=1}^{M}\nu_j A^*(I - P_{Q_{j,n}})Ax_n$, $\sum_{i=1}^{N}\sigma_i + \sum_{j=1}^{M}\nu_j = 1$ for $\sigma_i > 0$, $\nu_j > 0$, $\gamma > 0$, $\varsigma \in (0, 1)$ and $\mu \in (0, 1)$.
In our algorithm we took $\xi_{i,n} = (-1)^i w_i^0$ for each $i \in \{1,\dots,N\}$,
$$\varepsilon_{1,n} = \begin{cases} \dfrac{z_n - z_1^0}{\|z_n - z_1^0\|}, & \text{if } z_n - z_1^0 \neq 0, \\ 0, & \text{otherwise,} \end{cases}$$
and $\varepsilon_{j,n} = (-1)^j z_j^0$ for each $j \in \{2,\dots,M\}$.
For the purpose of comparison we took the following data:
HSRPA: $\lambda_1 = \lambda_2 = \frac{1}{2}$, $\delta_j = \frac{j}{1+\cdots+M}$ for $j \in \{1,\dots,M\}$, $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{n+2}{2n+6}$, $\rho_n = 1$.
GPM: $\varrho = \frac{1}{3}$, $\sigma_i = \frac{i}{2(1+\cdots+N)}$ for $i \in \{1,\dots,N\}$, $\nu_j = \frac{j}{2(1+\cdots+M)}$ for $j \in \{1,\dots,M\}$.
PPM: $\varrho = \frac{1}{3}$, $\sigma_i = \frac{i}{2(1+\cdots+N)}$ for $i \in \{1,\dots,N\}$, $\nu_j = \frac{j}{2(1+\cdots+M)}$ for $j \in \{1,\dots,M\}$.
SAPM: $\gamma = 1$, $\varsigma = \frac{1}{2} = \mu$, $\sigma_i = \frac{i}{2(1+\cdots+N)}$ for $i \in \{1,\dots,N\}$, $\nu_j = \frac{j}{2(1+\cdots+M)}$ for $j \in \{1,\dots,M\}$.
The numerical results are shown in Table 1 and Figure 3. To permit comparison of the algorithms in terms of the number of iterations (Iter(n)) and the execution time in seconds (CPUt(s)), we used the relative difference $\frac{\|x_n - x_{n+1}\|}{\|x_1 - u\|} < \epsilon$ as the stopping criterion in Table 1.
From the numerical results in Table 1 and Figure 3 of Example 2, we can see that our algorithm (HSRPA) performs better than GPM, PPM and SAPM. To be specific, HSRPA converges faster and requires fewer iterations than GPM, PPM and SAPM. In terms of CPU execution time, our algorithm performs comparably to GPM, PPM and SAPM.
Example 3. 
Consider the MSSFP with $H_1 = \mathbb{R}^s$, $H_2 = \mathbb{R}^t$, $C_i = \{x \in \mathbb{R}^s : -5i\,e_1 \leq x \leq (N - i)e_1\}$ ($i \in \{1,\dots,N\}$) and $Q_j = \{y \in \mathbb{R}^t : (j - M)e_1 \leq y \leq j\,e_1\}$ ($j \in \{1,\dots,M\}$), with different numbers of feasible sets $N$ and $M$ and dimensions $s$ and $t$. We randomly generated the operator $A = (a_{ij})_{t\times s}$ with $a_{ij} \in [0, 10]$. In this example, we examine the effect of different choices of $\lambda_1$ and $\lambda_2$ on the numerical results of HSRPA using both stepsizes $\tau_{j,n}$ and $\tau^{*}_{j,n}$ (the latter given in (38)), and we also compare HSRPA with the algorithm proposed by Zhao et al. ([48], Algorithm 2.1). For HSRPA we take $u = (1, 1, 2)^T$, $x_1 = \mathrm{rand}[-100, 100]$, $N = 3$, $M = 10$, $s = t = 3$, $\rho_n = 3$, $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{n+5}{2n+6}$ and $\delta_j = \frac{j}{1+\cdots+M}$, $j \in \{1,\dots,M\}$. For ([48], Algorithm 2.1), we take $x_0 = \mathrm{rand}[-100, 100]$, $t = 3$, $r = 10$, $N = M = 3$, $\omega_k = 1$, $\alpha_i = (\frac{1013}{3000})^i$, $i = 1, 2, 3$, and $\beta_j = (\frac{1013}{3000})^j$, $j = 1, 2, \dots, 10$. We use $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion. The results are presented in Table 2 below.
Interestingly, it can be observed from Table 2 that, for $\lambda_1 \leq \lambda_2$, HSRPA with the stepsize $\tau_{j,n}$ is faster, in terms of fewer iterations and CPU run time, than HSRPA with the stepsize $\tau^{*}_{j,n}$. On the contrary, HSRPA ($\tau^{*}_{j,n}$) performs better for $\lambda_1 > \lambda_2$, and HSRPA with either of the stepsizes converges faster and requires fewer iterations than the compared algorithm ([48], Algorithm 2.1).
Example 4. 
Consider the Hilbert space $H_1 = H_2 = L_2([0,1])$ with norm $\|x\| := \big(\int_0^1 |x(t)|^2\,dt\big)^{1/2}$ and inner product $\langle x, y\rangle = \int_0^1 x(t)y(t)\,dt$. The two nonempty, closed and convex sets are $C = \{x \in L_2([0,1]) : \langle x, 3t^2\rangle = 0\}$ and $Q = \{x \in L_2([0,1]) : \langle x, t^3\rangle \geq -1\}$, and the linear operator is given by $(Ax)(t) = x(t)$, i.e., $A = I$ is the identity operator (so $\|A\| = 1$). The orthogonal projections onto $C$ and $Q$ have explicit formulas; see, for example, [57]:
$$P_C(w) = \begin{cases} w - \dfrac{\langle w, 3t^2\rangle}{\|3t^2\|_{L_2}^2}\,3t^2, & \text{if } \langle w, 3t^2\rangle \neq 0, \\ w, & \text{if } \langle w, 3t^2\rangle = 0, \end{cases}$$
$$P_Q(w) = \begin{cases} w - \dfrac{\langle w, t^3\rangle + 1}{\|t^3\|_{L_2}^2}\,t^3, & \text{if } \langle w, t^3\rangle < -1, \\ w, & \text{if } \langle w, t^3\rangle \geq -1. \end{cases}$$
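For the experiment, these two projections can be discretized on a uniform grid; the following MATLAB sketch (ours; the grid size and the trapezoidal quadrature are our choices, not from the paper) implements $P_C$ and $P_Q$:

```matlab
% Sketch (ours): P_C and P_Q in L2([0,1]) on a uniform grid, with the
% inner product <x,y> approximated by trapezoidal quadrature.
t  = linspace(0, 1, 1001)';            % grid on [0, 1]
ip = @(x, y) trapz(t, x .* y);         % <x, y> = int_0^1 x(t) y(t) dt
a  = 3 * t.^2;                         % C = { x : <x, a> = 0 }
q  = t.^3;                             % Q = { x : <x, q> >= -1 }
PC = @(w) w - (ip(w, a) / ip(a, a)) * a;               % hyperplane projection
PQ = @(w) w - (min(0, ip(w, q) + 1) / ip(q, q)) * q;   % half-space projection

% Example: project w(t) = cos(pi*t) onto C and check <P_C(w), a> = 0.
w = cos(pi * t);
disp(ip(PC(w), a));                    % ~0 up to quadrature error
```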
We consider the following problem:
$$\text{find } x \in C \ \text{ such that } \ Ax \in Q. \tag{42}$$
It is clear that Problem (42) has a nonempty solution set $\Omega$ since $0 \in \Omega$. In this case, the iterative scheme in Algorithm 2 (with $u, x_1 \in C$, $\lambda_1 = \lambda_2 = \frac{1}{2}$, $\rho_n = \frac{7}{2}$, $\alpha_n = \frac{1}{n+1}$ and $\beta_n = \frac{n+2}{2n+6}$) becomes
$$\begin{cases} z_n = \Big(1 - \dfrac{1}{n+1}\Big)x_n + \dfrac{u}{n+1}, \\ y_n = z_n - \dfrac{\tau_n}{2}\big((I - P_{C_n})z_n + A^*(I - P_{Q_n})Az_n\big), \\ x_{n+1} = \Big(1 - \dfrac{n+2}{2n+6}\Big)z_n + \Big(\dfrac{n+2}{2n+6}\Big)y_n, \end{cases} \tag{43}$$
where
$$\tau_n = \frac{7\big(\|(I - P_{C_n})z_n\|^2 + \|(I - P_{Q_n})Az_n\|^2\big)}{4\,d(z_n)},$$
for
$$d(z_n) = \begin{cases} 1, & \text{if } \|(I - P_{C_n})z_n\|^2 + \|A^*(I - P_{Q_n})Az_n\|^2 = 0, \\ \|(I - P_{C_n})z_n\|^2 + \|A^*(I - P_{Q_n})Az_n\|^2, & \text{otherwise.} \end{cases}$$
In this example, we compare Algorithm 2 with the strong convergence result for the SFP proposed by Shehu et al. [56]. The iterative scheme (27) in [56], for $u, x_1 \in C$, with $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{n}{2(n+1)} = \gamma_n$ and $t_n = \frac{1}{\|A\|^2}$, reduces to the following form:
$$\begin{cases} y_n = x_n - \dfrac{1}{\|A\|^2}A^*\big(Ax_n - P_{Q_n}(Ax_n)\big), \\ x_{n+1} = P_C\Big(\dfrac{u}{n+1} + \dfrac{n\,x_n}{2(n+1)} + \dfrac{n\,y_n}{2(n+1)}\Big), \quad n \geq 1. \end{cases}$$
We see here that our iterative scheme can be implemented to solve problem (42) considered in this example. We use $\|x_{n+1} - x_n\| < 10^{-3}$ as the stopping criterion for both algorithms, and the outcome of the numerical experiment is reported in Figure 4. It can be observed from Figure 4 that, for different choices of $u$ and $x_1$, Algorithm 2 is faster, in terms of fewer iterations and CPU run time, than the algorithm proposed by Shehu et al. [56].

4.1. Application to Signal Recovery

In this part, we consider the problem of recovering a noisy sparse signal. Compressed sensing can be modeled as the linear system
$$b = Ax + \epsilon, \tag{44}$$
where $x \in \mathbb{R}^N$ is a vector with $L$ nonzero components to be recovered, $b \in \mathbb{R}^M$ is the measured data with noise $\epsilon$ (when $\epsilon = 0$, there is no noise in the observed data), and $A$ is an $M \times N$ bounded linear observation operator with $N > M$. The problem in Equation (44) can be treated as the LASSO problem, which has wide application in signal processing theory [58]:
$$\min_{x \in \mathbb{R}^N}\frac{1}{2}\|Ax - b\|_2^2 \quad \text{subject to} \quad \|x\|_1 \leq t, \tag{45}$$
where $t > 0$ is a given constant. It can be observed that (45) offers the potential of finding a sparse solution of the SFP (3) due to the $\ell_1$ constraint. Thus, we apply Algorithm 2 to solve problem (45).
Let $C := \{x : \|x\|_1 \leq t\}$ and $Q = \{b\}$; then the minimization problem (45) can be seen as an SFP (3). Denote the level set $C_n$ by
$$C_n = \{x \in \mathbb{R}^N : \omega(x_n) + \langle \varsigma_n, x - x_n\rangle \leq 0\}, \tag{46}$$
where $\varsigma_n \in \partial\omega(x_n)$ with the convex function $\omega(x) = \|x\|_1 - t$.
For the special case where $Q = Q_n = \{b\}$, Algorithm 2 converges to the solution of (45); moreover, since the projection onto the level set has an explicit formula, Algorithm 2 can be easily implemented. In the sequel, we present a sparse signal recovery experiment to illustrate the efficiency of Algorithm 2 in comparison with the algorithms proposed by Lopez [2] and Yang [37].
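For instance, with $\omega(x) = \|x\|_1 - t$ and the sign vector as a subgradient, the projection onto $C_n$ in (46) is again a closed-form half-space projection; a minimal MATLAB sketch (ours; the function name is our choice):

```matlab
function p = project_Cn_l1(x, xn, t)
% Sketch (ours): projection onto the half-space (46),
% C_n = { x : w(x_n) + <s_n, x - x_n> <= 0 },  w(x) = ||x||_1 - t,
% using the subgradient s_n = sign(x_n) of ||.||_1 at x_n.
sn   = sign(xn);
viol = (norm(xn, 1) - t) + dot(sn, x - xn);   % half-space violation at x
if viol <= 0 || ~any(sn)
    p = x;                          % x already lies in C_n
else
    p = x - (viol / norm(sn)^2) * sn;
end
end
```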
The vector $x$ is an $L$-sparse signal with $L$ nonzero elements generated from a uniform distribution on the interval $[-2, 2]$. The matrix $A$ is generated from a normal distribution with mean zero and variance one. The observation $b$ is corrupted by white Gaussian noise with signal-to-noise ratio $\mathrm{SNR} = 40$. The process is started with $t = L$, and $x_0$ and $u$ are randomly generated $N \times 1$ vectors. The goal is then to recover the $L$-sparse signal $x$ by solving (45). The restoration accuracy is measured by the mean squared error
$$\mathrm{MSE} = \frac{1}{N}\|x_n - x\|^2 < \epsilon,$$
where $x_n$ is an estimated signal of $x$, and $\epsilon > 0$ is a given small constant. We take $\epsilon = 10^{-6}$ in the stopping criterion, and we choose the parameters $\rho_n = 3.5$ and $\lambda_1 = \lambda_2 = 0.5$ for Algorithm 2. We also choose $\gamma = \frac{1}{\|A\|^2}$ and $\rho_n = 2$ for the algorithms by Yang [37] and Lopez [2], respectively.
It can be observed from Figure 5, Figure 6, Figure 7 and Figure 8 that the proposed algorithm recovers the signal with fewer iterations and a smaller MSE. Therefore, the quality of the signal recovered by the proposed algorithm is better than that of the compared algorithms.

5. Conclusions

In this paper, we presented a strongly convergent iterative algorithm for solving the MSSFP, with a way of selecting the stepsizes such that the implementation of the algorithm does not require any prior information about the operator norm. Preliminary numerical results are reported to illustrate the numerical behavior of our algorithm (HSRPA) and to compare it with well-known algorithms in the literature, including Censor et al. ([7], Algorithm 1), Censor et al. ([43], Algorithm 5), Zhao and Yang ([46], Algorithm 3.2) and Zhao et al. ([48], Algorithm 2.1). The numerical results show that our proposed algorithm is practical and promising for solving the MSSFP. Algorithm 2 is applied to signal recovery; the experimental results show that Algorithm 2 requires fewer iterations than the algorithms proposed by Yang [37], Lopez [2] and Shehu [56].

Author Contributions

Conceptualization, G.H.T. and K.S.; data curation, A.G.G.; formal analysis, G.H.T.; funding acquisition, P.K.; investigation, K.S.; methodology, K.S.; project administration, P.K.; resources, P.K.; software, A.G.G.; supervision, P.K.; validation, G.H.T. and P.K.; visualization, A.G.G.; writing—original draft, G.H.T.; writing—review and editing, A.G.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT. The first author was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi with Grant No. 37/2561.

Acknowledgments

The authors acknowledge the financial support provided by the Center of Excellence in Theoretical and Computational Science (TaCS-CoE), KMUTT. Guash Haile Taddele is supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi (Grant No.37/2561). Moreover, Kanokwan Sitthithakerngkiet was supported by the Faculty of Applied Science, King Mongkut’s University of Technology, North Bangkok. Contract no. 6342101.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  2. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
  3. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2003, 20, 103.
  4. Combettes, P. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Elsevier: Amsterdam, The Netherlands, 1996; Volume 95, pp. 155–270.
  5. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655.
  6. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  7. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071.
  8. Censor, Y.; Lent, A. Cyclic subgradient projections. Math. Program. 1982, 24, 233–235.
  9. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. arXiv 2011, arXiv:1108.5953.
  10. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  11. Dang, Y.; Gao, Y.; Li, L. Inertial projection algorithms for convex feasibility problem. J. Syst. Eng. Electron. 2012, 23, 734–740.
  12. He, Z. The split equilibrium problem and its convergence algorithms. J. Inequalities Appl. 2012, 2012, 162.
  13. Taiwo, A.; Jolaoso, L.O.; Mewomo, O.T. Viscosity approximation method for solving the multiple-set split equality common fixed-point problems for quasi-pseudocontractive mappings in Hilbert spaces. J. Ind. Manag. Optim. 2017, 13.
  14. Aremu, K.O.; Izuchukwu, C.; Ogwo, G.N.; Mewomo, O.T. Multi-step iterative algorithm for minimization and fixed point problems in p-uniformly convex metric spaces. J. Manag. Optim. 2017, 13.
  15. Oyewole, O.K.; Abass, H.A.; Mewomo, O.T. A strong convergence algorithm for a fixed point constrained split null point problem. Rend. Circ. Mat. Palermo Ser. 2 2020, 13, 1–20.
  16. Izuchukwu, C.; Ugwunnadi, G.C.; Mewomo, O.T.; Khan, A.R.; Abbas, M. Proximal-type algorithms for split minimization problem in P-uniformly convex metric spaces. Numer. Algorithms 2019, 82, 909–935.
  17. Jolaoso, L.O.; Alakoya, T.O.; Taiwo, A.; Mewomo, O.T. Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space. Optimization 2020, 1–20.
  18. He, S.; Wu, T.; Cho, Y.J.; Rassias, T.M. Optimal parameter selections for a general Halpern iteration. Numer. Algorithms 2019, 82, 1171–1188.
  19. Dadashi, V.; Postolache, M. Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99.
  20. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
  21. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
  22. Dang, Y.; Gao, Y. The strong convergence of a KM–CQ-like algorithm for a split feasibility problem. Inverse Probl. 2010, 27, 015007.
  23. Jung, J.S. Iterative algorithms based on the hybrid steepest descent method for the split feasibility problem. J. Nonlinear Sci. Appl. 2016, 9, 4214–4225.
  24. Wang, F.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2011, 74, 4105–4111.
  25. Xu, H.K. An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116, 659–678.
  26. Xu, H.K. A variable Krasnosel’skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021.
  27. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  28. Yu, X.; Shahzad, N.; Yao, Y. Implicit and explicit algorithms for solving the split feasibility problem. Optim. Lett. 2012, 6, 1447–1462.
  29. Shehu, Y.; Mewomo, O.T.; Ogbuisi, F.U. Further investigation into approximation of a common solution of fixed point problems and split feasibility problems. Acta Math. Sci. 2016, 36, 913–930.
  30. Shehu, Y.; Mewomo, O.T. Further investigation into split common fixed point problem for demicontractive operators. Acta Math. Sin. Engl. Ser. 2016, 32, 1357–1376.
  31. Mewomo, O.T.; Ogbuisi, F.U. Convergence analysis of an iterative method for solving multiple-set split feasibility problems in certain Banach spaces. Quaest. Math. 2018, 41, 129–148.
  32. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360.
  33. Cegielski, A. Landweber-type operator and its properties. Contemp. Math. 2016, 658, 139–148.
  34. Cegielski, A. General method for solving the split common fixed point problem. J. Optim. Theory Appl. 2015, 165, 385–404.
  35. Cegielski, A.; Reich, S.; Zalas, R. Weak, strong and linear convergence of the CQ-method via the regularity of Landweber operators. Optimization 2020, 69, 605–636.
  36. Hendrickx, J.M.; Olshevsky, A. Matrix p-norms are NP-hard to approximate if p ≠ 1,2,∞. SIAM J. Matrix Anal. Appl. 2010, 31, 2802–2812.
  37. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261.
  38. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  39. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
  40. Alakoya, T.O.; Jolaoso, L.O.; Mewomo, O.T. Modified inertial subgradient extragradient method with self adaptive stepsize for solving monotone variational inequality and fixed point problems. Optimization 2020, 1–30.
  41. Jolaoso, L.O.; Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. A unified algorithm for solving variational inequality and fixed point problems with application to the split equality problem. Comput. Appl. Math. 2020, 39, 38.
  42. Buong, N. Iterative algorithms for the multiple-sets split feasibility problem in Hilbert spaces. Numer. Algorithms 2017, 76, 783–798.
  43. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  44. Latif, A.; Vahidi, J.; Eslamian, M. Strong convergence for generalized multiple-set split feasibility problem. Filomat 2016, 30, 459–467.
  45. Masad, E.; Reich, S. A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 367.
  46. Zhao, J.; Yang, Q. Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl. 2011, 27, 035009.
  47. Zhao, J.; Yang, Q. Several acceleration schemes for solving the multiple-sets split feasibility problem. Linear Algebra Appl. 2012, 437, 1648–1657.
  48. Zhao, J.; Yang, Q. A simple projection method for solving the multiple-sets split feasibility problem. Inverse Probl. Sci. Eng. 2013, 21, 3537–3546.
  49. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019, 69, 269–281.
  50. Osilike, M.O.; Isiogugu, F.O. Weak and strong convergence theorems for nonspreading-type mappings in Hilbert spaces. Nonlinear Anal. Theory Methods Appl. 2011, 74, 1814–1822.
  51. Aubin, J.P. Optima and Equilibria: An Introduction to Nonlinear Analysis; Springer Science & Business Media: Berlin, Germany, 2013; Volume 140.
  52. Ceng, L.C.; Ansari, Q.H.; Yao, J.C. An extragradient method for solving split feasibility and fixed point problems. Comput. Math. Appl. 2012, 64, 633–642.
  53. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011; Volume 408.
  54. Maingé, P.E.; Măruşter, Ş. Convergence in norm of modified Krasnoselski–Mann iterations for fixed points of demicontractive mappings. Appl. Math. Comput. 2011, 217, 9864–9874.
  55. He, B. Inexact implicit methods for monotone general variational inequalities. Math. Program. 1999, 86, 199–217.
  56. Shehu, Y. Strong convergence result of split feasibility problems in Banach spaces. Filomat 2017, 31, 1559–1571.
  57. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2012; Volume 2057.
  58. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830.
Figure 1. For $t = s = 100$, $\rho_n = 2$, $\theta = 10$ and randomly generated starting points $x_1$ and $u$.
Figure 2. For $\alpha_n = \frac{5}{5n+1}$, $\beta_n = \frac{1}{2} + \frac{1}{n+4}$, $\theta = 150$ and randomly generated starting points $x_1$ and $u$.
Figure 3. For $N = 50$ and starting points $u = (1,\dots,1) \in \mathbb{R}^N$ (in HSRPA) and $x_1 = 2u$.
Figure 4. Comparison of Algorithm 2 and the algorithm by Shehu [56] for different choices of $u$ and $x_1$.
Figure 5. The original $L$-sparse signal versus the sparse signal recovered by Algorithm 2 and the algorithms by Lopez [2] and Yang [37] when $N = 512$, $M = 120$ and $L = 30$.
Figure 6. MSE versus the number of iterations: comparison of Algorithm 2 with the algorithms by Lopez [2] and Yang [37] when $N = 512$, $M = 120$ and $L = 30$.
Figure 7. The original $L$-sparse signal versus the sparse signal recovered by Algorithm 2 and the algorithms by Lopez [2] and Yang [37] when $N = 4096$, $M = 960$ and $L = 60$.
Figure 8. MSE versus the number of iterations: comparison of Algorithm 2 with the algorithms by Lopez [2] and Yang [37] when $N = 4096$, $M = 960$ and $L = 60$.
Table 1. For $\theta = 3$, $u = (4, 6, 9)$ and $x_1 = (4, 7, 19)$.

| Algorithm | | $\epsilon = 10^{-2}$, $N = 3$, $M = 2$ | $\epsilon = 10^{-3}$, $N = 4 = M$ |
|---|---|---|---|
| HSRPA | Iter(n) | 271 | 359 |
| | CPUt(s) | 2.478534 | 2.7094060 |
| | $\|x_n - \bar{x}\|$ | $1.29153127891261 \times 10^{-11}$ | $3.95548438843874 \times 10^{-5}$ |
| GPM | Iter(n) | 286 | 421 |
| | CPUt(s) | 2.350976 | 3.0921890 |
| | $\|x_n - \bar{x}\|$ | $1.32185889807060 \times 10^{-11}$ | $4.00584274497442 \times 10^{-5}$ |
| PPM | Iter(n) | 293 | 368 |
| | CPUt(s) | 2.462534 | 2.6903446 |
| | $\|x_n - \bar{x}\|$ | $1.34325049285599 \times 10^{-11}$ | $3.97217074806315 \times 10^{-5}$ |
| SAPM | Iter(n) | 277 | 397 |
| | CPUt(s) | 2.192005 | 2.6393334 |
| | $\|x_n - \bar{x}\|$ | $1.30707774298130 \times 10^{-11}$ | $4.0068355895327 \times 10^{-5}$ |
Table 2. Comparative results of the Half-Space Relaxation Projection Algorithm (HSRPA) (with the two stepsizes $\tau_{j,n}$ and $\tau^{*}_{j,n}$) and ([48], Algorithm 2.1) for different choices of $\lambda_1$ and $\lambda_2$. (Algorithm 2.1 does not involve $\lambda_1$, $\lambda_2$ and is reported once.)

| Choices of $\lambda_1$, $\lambda_2$ | HSRPA ($\tau_{j,n}$) Ite. | CPUt(s) | HSRPA ($\tau^{*}_{j,n}$) Ite. | CPUt(s) | Algorithm 2.1 Ite. | CPUt(s) |
|---|---|---|---|---|---|---|
| $\lambda_1 = 0.9$, $\lambda_2 = 0.1$ | 56 | 8.9971 | 51 | 7.3628 | 2352 | 8.7590 |
| $\lambda_1 = 0.8$, $\lambda_2 = 0.2$ | 55 | 5.8883 | 37 | 3.7528 | – | – |
| $\lambda_1 = 0.7$, $\lambda_2 = 0.3$ | 45 | 4.7402 | 39 | 3.9150 | – | – |
| $\lambda_1 = 0.6$, $\lambda_2 = 0.4$ | 42 | 4.6510 | 38 | 3.9080 | – | – |
| $\lambda_1 = 0.5$, $\lambda_2 = 0.5$ | 38 | 3.9295 | 45 | 4.5573 | – | – |
| $\lambda_1 = 0.4$, $\lambda_2 = 0.6$ | 36 | 3.1803 | 44 | 4.4400 | – | – |
| $\lambda_1 = 0.3$, $\lambda_2 = 0.7$ | 42 | 4.3067 | 50 | 5.0784 | – | – |
| $\lambda_1 = 0.2$, $\lambda_2 = 0.8$ | 29 | 3.4536 | 36 | 3.8640 | – | – |
| $\lambda_1 = 0.1$, $\lambda_2 = 0.9$ | 32 | 3.3220 | 38 | 4.1409 | – | – |
