Article

Modified Relaxed CQ Iterative Algorithms for the Split Feasibility Problem †

1 College of Air Traffic Management, Civil Aviation University of China, Tianjin 300300, China
2 College of Science, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
† This work was supported by the National Natural Science Foundation of China (No. 61571441) and the Scientific Research Project of Tianjin Municipal Education Commission (No. 2018KJ253).
Mathematics 2019, 7(2), 119; https://doi.org/10.3390/math7020119
Submission received: 20 November 2018 / Revised: 17 January 2019 / Accepted: 21 January 2019 / Published: 23 January 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract: The split feasibility problem models inverse problems arising from phase retrieval problems and intensity-modulated radiation therapy. For solving the split feasibility problem, Xu proposed a relaxed CQ algorithm that only involves projections onto half-spaces. In this paper, we use the dual variable to propose a new relaxed CQ iterative algorithm that generalizes Xu's relaxed CQ algorithm in real Hilbert spaces. By using projections onto half-spaces instead of projections onto general closed convex sets, the proposed algorithm is implementable. Moreover, we present a modified relaxed CQ algorithm based on the viscosity approximation method. Under suitable conditions, global weak and strong convergence of the proposed algorithms are proved. Some numerical experiments are also presented to illustrate the effectiveness of the proposed algorithms. Our results improve and extend the corresponding results of Xu and some others.

1. Introduction

Throughout this paper, we always assume that H is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Let I denote the identity operator on H. Let C and Q be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively.
The split feasibility problem (SFP) can be formulated mathematically as the problem of finding a point $u^* \in C$ with the property
$$A u^* \in Q, \qquad (1)$$
where $A : H_1 \to H_2$ is a bounded linear operator. The SFP in finite-dimensional Hilbert spaces was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrieval and medical image reconstruction [2], with particular progress in intensity-modulated radiation therapy [3,4]. It has been found that the SFP can also be used in air traffic flow management problems. Many researchers have studied the SFP and introduced various algorithms to solve it (see [5,6,7,8,9,10,11,12,13,14,15] and references therein).
The original algorithm introduced in [1] involves the computation of the inverse $A^{-1}$ (assuming that the inverse of A exists) and thus has not become popular. A more popular algorithm that solves the SFP (Equation (1)) is the following CQ algorithm of Byrne [2,16]:
$$u_{n+1} = P_C\big(u_n - \mu A^*(I-P_Q)Au_n\big), \qquad (2)$$
where $P_C$ and $P_Q$ are the (orthogonal) projections onto C and Q, respectively, $A^*$ is the adjoint of A, and $\mu \in (0, 2/\lambda)$ with $\lambda$ being the spectral radius of the operator $A^*A$. The CQ algorithm only involves the computations of the projections $P_C$ and $P_Q$ onto the sets C and Q, respectively, and is therefore implementable when $P_C$ and $P_Q$ have closed-form expressions (e.g., when C and Q are closed balls or half-spaces). It remains a challenge, however, to implement the CQ algorithm when $P_C$ and/or $P_Q$ fail to have closed-form expressions, even though weak convergence of the algorithm can be proved theoretically.
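When $P_C$ and $P_Q$ are available in closed form, the CQ iteration in Equation (2) is only a few lines of code. The following minimal sketch in Python/NumPy is our illustration (the paper's experiments used MATLAB); it takes projection oracles as inputs and uses closed balls for C and Q in the toy example, with an illustrative matrix A:

```python
import numpy as np

def project_ball(x, center, radius):
    """Orthogonal projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d

def cq_algorithm(A, proj_C, proj_Q, u0, mu=None, n_iter=500):
    """Byrne's CQ iteration: u_{n+1} = P_C(u_n - mu * A^T (I - P_Q) A u_n)."""
    if mu is None:
        # spectral radius of A^T A equals ||A||^2; any mu in (0, 2/||A||^2) works
        mu = 1.0 / np.linalg.norm(A, 2) ** 2
    u = u0.astype(float)
    for _ in range(n_iter):
        Au = A @ u
        residual = Au - proj_Q(Au)            # (I - P_Q) A u_n
        u = proj_C(u - mu * (A.T @ residual))  # projected gradient step
    return u

if __name__ == "__main__":
    # toy SFP: C and Q are unit balls; A is an arbitrary illustrative matrix
    A = np.array([[1.0, 0.5], [0.0, 1.0]])
    pC = lambda x: project_ball(x, np.zeros(2), 1.0)
    pQ = lambda y: project_ball(y, np.zeros(2), 1.0)
    u = cq_algorithm(A, pC, pQ, np.array([5.0, 4.0]))
    print(u, np.linalg.norm(A @ u - pQ(A @ u)))
```

Since the last operation per iteration is `proj_C`, the returned iterate always lies in C; the printed residual measures how far $Au_n$ is from Q.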
We assume that the SFP (Equation (1)) is consistent and use $\Phi$ to denote its solution set, i.e.,
$$\Phi = \{u \in C : Au \in Q\}.$$
Thus, the set $\Phi$ is closed, convex and nonempty.
The CQ algorithm is a gradient-projection method (GPM) in convex minimization (it is also a special case of the proximal forward–backward splitting method). We can reformulate the SFP (Equation (1)) as an optimization problem [17]. We may introduce the (convex) objective function
$$g(u) := \tfrac{1}{2}\|(I-P_Q)Au\|^2 \qquad (3)$$
and consider the convex minimization problem
$$\min_{u \in C} g(u). \qquad (4)$$
The objective function g is continuously differentiable with gradient given by
$$\nabla g(u) = A^*(I-P_Q)Au. \qquad (5)$$
Because $I-P_Q$ is (firmly) nonexpansive, $\nabla g$ is Lipschitz continuous with Lipschitz constant $L = \|A\|^2$. It is well known that the GPM for solving the minimization problem in Equation (4) generates the following iterative sequence $\{u_n\}$:
$$u_{n+1} = P_C\big(u_n - \mu \nabla g(u_n)\big),$$
where $\mu$ is chosen in the interval $(0, 2/L)$ with L being the Lipschitz constant of $\nabla g$. For the problem in Equation (4), the GPM with gradient $\nabla g$ given as in Equation (5) is exactly the CQ algorithm in Equation (2).
By Equation (4), the SFP (Equation (1)) can be written as the following convex separable minimization problem:
$$\min_{u \in H_1} \iota_C(u) + g(u),$$
where $g(u)$ is defined by Equation (3) and $\iota_C(u)$ is the indicator function of the set C defined by
$$\iota_C(u) = \begin{cases} 0, & u \in C, \\ +\infty, & u \notin C. \end{cases}$$
Chen et al. [18] designed and discussed an efficient algorithm for minimizing the sum of two proper lower semi-continuous convex functions, i.e.,
$$\min_{u \in \mathbb{R}^n} g_1(u) + g_2(u), \qquad (9)$$
where $g_1, g_2 \in \Gamma_0(\mathbb{R}^n)$ (the class of all proper lower semi-continuous convex functions from $\mathbb{R}^n$ to $(-\infty, +\infty]$) and $g_2$ is differentiable on $\mathbb{R}^n$ with $1/\beta$-Lipschitz continuous gradient for some $\beta \in (0, +\infty)$. For $g \in \Gamma_0(\mathbb{R}^n)$ and $\rho \in (0, +\infty)$, the proximal operator of g with order $\rho$, denoted by $\mathrm{prox}_{\rho g}$, is defined for each $x \in \mathbb{R}^n$ by
$$\mathrm{prox}_{\rho g}(x) = \arg\min_{y \in \mathbb{R}^n}\Big\{g(y) + \frac{1}{2\rho}\|x-y\|^2\Big\}.$$
To solve the convex separable problem in Equation (9), they obtained the following fixed point formulation: the point $u^*$ is a solution of Equation (9) if and only if there exists $e^* \in \mathbb{R}^n$ such that
$$\begin{cases} e^* = \big(I - \mathrm{prox}_{\frac{\mu}{\lambda}g_1}\big)\big(u^* - \mu\nabla g_2(u^*) + (1-\lambda)e^*\big), \\ u^* = u^* - \mu\nabla g_2(u^*) - \lambda e^*, \end{cases}$$
where $\lambda > 0$ and $\mu > 0$. They introduced the following Picard iterative sequence:
$$\begin{cases} e_{n+1} = \big(I - \mathrm{prox}_{\frac{\mu}{\lambda}g_1}\big)\big(u_n - \mu\nabla g_2(u_n) + (1-\lambda)e_n\big), \\ u_{n+1} = u_n - \mu\nabla g_2(u_n) - \lambda e_{n+1}. \end{cases}$$
It was shown in [18] that, under appropriate conditions, the sequence $\{u_n\}$ converges to a solution of the problem in Equation (9). Moreover, u is the primal variable and e is the dual variable of the primal–dual form (see [18]) related to Equation (9).
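For intuition, the proximal operator generalizes the metric projection: when $g = \iota_C$, $\mathrm{prox}_{\rho g}$ reduces to $P_C$, and for $g = \|\cdot\|_1$ it has the well-known closed form of componentwise soft-thresholding. A small numerical sketch (Python/NumPy, our illustration):

```python
import numpy as np

def prox_l1(x, rho):
    """prox_{rho * ||.||_1}(x): componentwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - rho, 0.0)

def prox_indicator_ball(x, radius=1.0):
    """prox of the indicator of a centered ball is simply the projection."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

x = np.array([3.0, -0.5, 0.2])
print(prox_l1(x, 1.0))             # shrinks large entries, zeros small ones
print(prox_indicator_ball(np.array([3.0, 4.0])))
```

This is why, in the algorithms below, every occurrence of $\mathrm{prox}$ of an indicator function can be replaced by an explicit projection.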
For solving the SFP (Equation (1)), we note that the CQ algorithm and many related iterative algorithms (see [19,20,21,22,23,24]) only involve the computations of the projections $P_C$ and $P_Q$ onto the sets C and Q, respectively, and are therefore implementable when $P_C$ and $P_Q$ have closed-form expressions. However, in some cases it is impossible, or requires too much work, to compute an orthogonal projection exactly; in such cases, the efficiency of projection-type methods is seriously affected. To overcome this difficulty, Fukushima [25] suggested a so-called relaxed projection method that calculates the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. Theoretical analysis and numerical experiments show the efficiency of his method.
Let C and Q be level sets of convex functions, i.e.,
$$C = \{u \in H_1 : c(u) \le 0\}, \qquad Q = \{v \in H_2 : q(v) \le 0\}, \qquad (12)$$
where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex and lower semi-continuous functions with the subdifferentials
$$\partial c(u) = \{z \in H_1 : c(x) \ge c(u) + \langle x-u, z\rangle,\ \forall x \in H_1\}$$
for all $u \in C$ and
$$\partial q(v) = \{w \in H_2 : q(y) \ge q(v) + \langle y-v, w\rangle,\ \forall y \in H_2\}$$
for all $v \in Q$. Set
$$C_n = \{u \in H_1 : c(u_n) + \langle \xi_n, u-u_n\rangle \le 0\}, \qquad (13)$$
where $\xi_n \in \partial c(u_n)$, and
$$Q_n = \{v \in H_2 : q(Au_n) + \langle \eta_n, v-Au_n\rangle \le 0\}, \qquad (14)$$
where $\eta_n \in \partial q(Au_n)$. Obviously, $C_n$ and $Q_n$ are half-spaces, and the projections onto the half-spaces $C_n$ and $Q_n$ have closed forms.
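The closed form behind this remark: for a half-space $H = \{u : \langle a, u\rangle \le b\}$ with $a \ne 0$, the projection is $P_H(x) = x - \frac{\max(0, \langle a, x\rangle - b)}{\|a\|^2}\,a$. Applied to $C_n$, take $a = \xi_n$ and $b = \langle \xi_n, u_n\rangle - c(u_n)$. A sketch (Python/NumPy; function names are ours):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Exact projection onto the half-space {u : <a, u> <= b}, a != 0."""
    viol = a @ x - b
    if viol <= 0:                      # x already inside the half-space
        return x.copy()
    return x - (viol / (a @ a)) * a    # move along a by the scaled violation

def project_Cn(x, u_n, c_val, xi):
    """Projection onto C_n = {u : c(u_n) + <xi, u - u_n> <= 0}.

    c_val = c(u_n) and xi is a subgradient of c at u_n.
    """
    return project_halfspace(x, xi, xi @ u_n - c_val)
```

The same two lines serve for $Q_n$ with $a = \eta_n$ and base point $Au_n$; no iterative subroutine is needed, which is exactly why the relaxed algorithms below are implementable.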
In the setting of finite-dimensional spaces, the relaxed projection method was followed by Yang [26], who introduced the following relaxed CQ algorithm for solving the SFP (Equation (1)) where the closed convex subsets C and Q are level sets of convex functions:
$$u_{n+1} = P_{C_n}\big(u_n - \mu A^T(I-P_{Q_n})Au_n\big), \quad n \ge 1,$$
where $\mu \in (0, 2/L)$ with L being the largest eigenvalue of the matrix $A^T A$, and $C_n$ and $Q_n$ are given in Equations (13) and (14), respectively. Due to the special form of $C_n$ and $Q_n$, the proposed algorithm can be easily implemented.
Recently, for the purpose of generality, the SFP (Equation (1)) has been studied in more general settings. For instance, Xu [27] considered the SFP (Equation (1)) where $H_1$ and $H_2$ are infinite-dimensional Hilbert spaces and proposed the following relaxed CQ algorithm where C and Q are given in Equation (12):
$$u_{n+1} = P_{C_n}\big(u_n - \mu A^*(I-P_{Q_n})Au_n\big), \quad n \ge 1, \qquad (16)$$
where $\mu \in (0, 2/\|A\|^2)$, and $C_n$ and $Q_n$ are given in Equations (13) and (14), respectively. Since the projections $P_{C_n}$ and $P_{Q_n}$ have closed-form expressions, the above relaxed CQ algorithm is implementable. In [27], weak convergence of the relaxed CQ algorithm was proved. He and Zhao [28] introduced a Halpern-type relaxed CQ algorithm for which strong convergence is guaranteed. Other relaxed algorithms have also been proposed to solve the SFP (Equation (1)) (see [29,30,31]).
Inspired and motivated by the works mentioned above, for solving the SFP (Equation (1)) in real Hilbert spaces, we use the dual variable to propose a new relaxed CQ iterative algorithm:
$$\begin{cases} t_n = u_n - \mu_n A^*(I-P_{Q_n})Au_n, \\ e_{n+1} = (I-P_{C_n})\big(t_n + (1-\lambda)e_n\big), \\ u_{n+1} = t_n - \lambda e_{n+1}, \end{cases} \qquad (17)$$
where $e_0$ and $u_0 \in H_1$ are arbitrarily chosen, $0 < \lambda \le 1$ and $0 < \mu_n < 2/\|A\|^2$. Taking $\lambda = 1$, the proposed algorithm in Equation (17) becomes the relaxed CQ algorithm in Equation (16) (Xu [27]). Moreover, we present a modified relaxed CQ algorithm based on the viscosity approximation method. The two proposed relaxed CQ iterative algorithms only involve orthogonal projections onto half-spaces, so the algorithms are implementable. Under suitable conditions, global weak and strong convergence of the proposed algorithms are proved. Some numerical experiments are also presented to illustrate the effectiveness of the proposed algorithms. Our results improve and extend the corresponding results of Xu and some others.
The rest of this paper is organized as follows. In the next section, some necessary concepts and important facts are collected. The weak convergence theorem of the proposed algorithm is established in Section 3. In Section 4, we modify the proposed algorithm by viscosity method so that it has strong convergence result. Finally, we give some numerical experiments to illustrate the efficiency of the proposed iterative methods.

2. Preliminaries

In this paper, we use → and ⇀ to denote strong convergence and weak convergence, respectively. We use $\omega_w(u_n) = \{u : u_{n_j} \rightharpoonup u \text{ for some subsequence } \{u_{n_j}\} \text{ of } \{u_n\}\}$ to stand for the weak $\omega$-limit set of $\{u_n\}$.
Definition 1.
A mapping $S : H \to H$ is said to be nonexpansive if
$$\|Su - Sv\| \le \|u - v\|$$
for all $u, v \in H$.
Definition 2.
A mapping $S : H \to H$ is said to be firmly nonexpansive if $2S - I$ is nonexpansive or, equivalently,
$$\langle u-v, Su-Sv\rangle \ge \|Su-Sv\|^2$$
for all $u, v \in H$.
Alternatively, a mapping $S : H \to H$ is firmly nonexpansive if and only if S can be expressed as
$$S = \tfrac{1}{2}(I + U),$$
where I denotes the identity mapping on H and $U : H \to H$ is a nonexpansive mapping.
Definition 3.
A mapping $h : H \to H$ is said to be a ρ-contraction if there exists a constant $\rho \in [0, 1)$ such that
$$\|h(u) - h(v)\| \le \rho\|u - v\|$$
for all $u, v \in H$.
Definition 4.
A mapping $h : C \to H$ is said to be η-strongly monotone if there exists a positive constant η such that
$$\langle h(u) - h(v), u - v\rangle \ge \eta\|u - v\|^2$$
for all $u, v \in C$.
It is obvious that, if h is a ρ-contraction, then $I - h$ is a $(1-\rho)$-strongly monotone mapping. Recall that the variational inequality problem [32] is to find a point $u^* \in C$ such that
$$\langle Fu^*, u - u^*\rangle \ge 0$$
for all $u \in C$, where C is a nonempty closed convex subset of H and $F : C \to H$ is a nonlinear operator. It is well known [33] that, if $F : C \to H$ is a Lipschitzian and strongly monotone operator, then the above variational inequality problem has a unique solution.
Definition 5.
A mapping $S : H \to H$ is said to be α-inverse strongly monotone (α-ism) if there exists a positive constant α such that
$$\langle u-v, Su-Sv\rangle \ge \alpha\|Su-Sv\|^2$$
for all $u, v \in H$.
Recall that the metric (nearest point) projection from H onto a nonempty closed convex subset C of H, denoted by $P_C$, is defined as follows: for each $u \in H$,
$$P_C(u) = \arg\min_{v \in C}\|u - v\|.$$
Then, $P_C$ is characterized by the inequality (for $u \in H$)
$$\langle u - P_C u, z - P_C u\rangle \le 0, \quad \forall z \in C.$$
It is well known that $P_C$ and $I - P_C$ are firmly nonexpansive and 1-ism.
Definition 6.
A function $h : H \to \mathbb{R}$ is said to be weakly lower semi-continuous (w-lsc) at u if $u_n \rightharpoonup u$ implies
$$h(u) \le \liminf_{n\to\infty} h(u_n).$$
Lemma 1.
[34] Let K be a nonempty closed convex subset of a real Hilbert space H. Let $\{u_n\}$ be a sequence which satisfies the following properties:
(a) every weak limit point of $\{u_n\}$ lies in K; and
(b) $\lim_{n\to\infty}\|u_n - u\|$ exists for every $u \in K$.
Then, $\{u_n\}$ converges weakly to a point in K.
Lemma 2.
[35] Assume that $\{s_n\}$ is a sequence of nonnegative real numbers such that
$$s_{n+1} \le (1-\lambda_n)s_n + \lambda_n\delta_n,$$
$$s_{n+1} \le s_n - \eta_n + \mu_n,$$
for each $n \ge 0$, where $\{\lambda_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers, and $\{\delta_n\}$ and $\{\mu_n\}$ are two sequences in $\mathbb{R}$ such that:
(a) $\sum_{n=1}^{\infty}\lambda_n = \infty$;
(b) $\lim_{n\to\infty}\mu_n = 0$; and
(c) $\lim_{l\to\infty}\eta_{n_l} = 0$ implies $\limsup_{l\to\infty}\delta_{n_l} \le 0$ for any subsequence $\{n_l\} \subset \{n\}$.
Then, $\lim_{n\to\infty}s_n = 0$.
Lemma 3.
[36] Let H be a real Hilbert space. Then, for all $t \in [0,1]$ and $u, v \in H$,
$$\|tu + (1-t)v\|^2 = t\|u\|^2 + (1-t)\|v\|^2 - t(1-t)\|u-v\|^2.$$

3. Weak Convergence Theorems

The CQ algorithm in Equation (2) involves two projections $P_C$ and $P_Q$ and hence might be hard to implement when one of them fails to have a closed-form expression. Now, we use the dual variable to propose a new relaxed CQ algorithm for solving the SFP (Equation (1)) where the closed convex subsets C and Q are level sets of convex functions. The algorithm needs only projections onto half-spaces and is therefore implementable in this case.
Let
$$C = \{u \in H_1 : c(u) \le 0\}, \qquad Q = \{v \in H_2 : q(v) \le 0\}, \qquad (18)$$
where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex and lower semi-continuous functions. We assume that c and q are subdifferentiable on $H_1$ and $H_2$, respectively. Namely, the subdifferentials
$$\partial c(u) = \{z \in H_1 : c(x) \ge c(u) + \langle x-u, z\rangle,\ \forall x \in H_1\}$$
for all $u \in C$ and
$$\partial q(v) = \{w \in H_2 : q(y) \ge q(v) + \langle y-v, w\rangle,\ \forall y \in H_2\}$$
for all $v \in Q$ are nonempty. We also assume that $\partial c$ and $\partial q$ are bounded operators (i.e., bounded on bounded sets). In this paper, we solve the SFP (Equation (1)) under the above assumptions. We note that every convex function defined on a finite-dimensional Hilbert space is subdifferentiable and its subdifferential operator is a bounded operator.
Set
$$C_n = \{u \in H_1 : c(u_n) + \langle \xi_n, u-u_n\rangle \le 0\}, \qquad (19)$$
where $\xi_n \in \partial c(u_n)$, and
$$Q_n = \{v \in H_2 : q(Au_n) + \langle \eta_n, v-Au_n\rangle \le 0\}, \qquad (20)$$
where $\eta_n \in \partial q(Au_n)$. Obviously, $C_n$ and $Q_n$ are half-spaces, and it is easy to verify that $C \subset C_n$ and $Q \subset Q_n$ for every $n \ge 0$ from the subdifferential inequality.
Algorithm 1.
Let $u_0, e_0 \in H_1$ be arbitrary. For $n \ge 0$, let
$$\begin{cases} t_n = u_n - \mu_n A^*(I-P_{Q_n})Au_n, \\ e_{n+1} = (I-P_{C_n})\big(t_n + (1-\lambda)e_n\big), \\ u_{n+1} = t_n - \lambda e_{n+1}, \end{cases}$$
where $0 < \lambda \le 1$ and $0 < \mu_n < 2/\|A\|^2$.
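Since $P_{C_n}$ and $P_{Q_n}$ are projections onto half-spaces, Algorithm 1 translates directly into code. The following NumPy sketch is our illustration (the paper's experiments used MATLAB); the constant step $\mu_n = 1/\|A\|^2$ and the oracle names are our choices:

```python
import numpy as np

def halfspace_proj(x, a, b):
    """Projection onto {u : <a, u> <= b} (a != 0)."""
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def algorithm1(A, c, grad_c, q, grad_q, u0, e0, lam=0.5, n_iter=1000):
    """Relaxed CQ iteration with dual variable e (Algorithm 1).

    c, q       : convex functions defining C = {c <= 0}, Q = {q <= 0}
    grad_c/q   : (sub)gradient oracles for c and q
    lam        : relaxation parameter in (0, 1]; lam = 1 recovers Xu's (16)
    """
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # mu_n in (0, 2/||A||^2)
    u, e = u0.astype(float), e0.astype(float)
    for _ in range(n_iter):
        # half-space Q_n = {v : q(Au_n) + <eta_n, v - Au_n> <= 0}
        Au = A @ u
        eta = grad_q(Au)
        PQn = halfspace_proj(Au, eta, eta @ Au - q(Au))
        t = u - mu * (A.T @ (Au - PQn))
        # half-space C_n = {w : c(u_n) + <xi_n, w - u_n> <= 0}
        xi = grad_c(u)
        y = t + (1 - lam) * e
        e = y - halfspace_proj(y, xi, xi @ u - c(u))   # e_{n+1} = (I - P_{C_n}) y
        u = t - lam * e
    return u, e
```

On a small instance such as $c(u) = u_1 + u_2^2$ and $q(v) = v_1 + v_2^2$, the returned u should be nearly feasible ($c(u) \lesssim 0$ and $q(Au) \lesssim 0$) and e should be close to 0, consistent with Theorem 1.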
Theorem 1.
Suppose $0 < \lambda \le 1$ and $0 < \liminf_{n\to\infty}\mu_n \le \limsup_{n\to\infty}\mu_n < 2/\|A\|^2$. Let $\{(e_n, u_n)\}$ be the sequence generated by Algorithm 1. Then, the sequence $\{u_n\}$ converges weakly to a point $u^* \in \Phi$, and the sequence $\{(e_n, u_n)\}$ converges weakly to the point $(0, u^*)$.
Proof. 
First, we show that $\lim_{n\to\infty}\|u_n - u\|$ exists for any $u \in \Phi$. Taking $u \in \Phi$, we have $u \in C \subset C_n$ and $Au \in Q \subset Q_n$ for all $n \in \mathbb{N}$. We know that $I - P_{C_n}$ and $I - P_{Q_n}$ are 1-ism for all $n \in \mathbb{N}$. Thus, from Algorithm 1, we have
$$\begin{aligned} \|e_{n+1}\|^2 &= \big\|(I-P_{C_n})\big(t_n+(1-\lambda)e_n\big)\big\|^2 = \big\|(I-P_{C_n})\big(t_n+(1-\lambda)e_n\big) - (I-P_{C_n})u\big\|^2 \\ &\le \langle e_{n+1}, t_n - u + (1-\lambda)e_n\rangle = \langle e_{n+1}, t_n - u\rangle + (1-\lambda)\langle e_n, e_{n+1}\rangle \end{aligned} \qquad (22)$$
and
$$\|u_{n+1}-u\|^2 = \|t_n - \lambda e_{n+1} - u\|^2 = \|t_n-u\|^2 - 2\lambda\langle t_n-u, e_{n+1}\rangle + \lambda^2\|e_{n+1}\|^2. \qquad (23)$$
Thus, from Equations (22) and (23), we have
$$\begin{aligned} \|u_{n+1}-u\|^2 + \lambda\|e_{n+1}\|^2 &= \|t_n-u\|^2 - 2\lambda\langle t_n-u, e_{n+1}\rangle + \lambda^2\|e_{n+1}\|^2 + \lambda\|e_{n+1}\|^2 \\ &\le \|t_n-u\|^2 + 2\lambda(1-\lambda)\langle e_n, e_{n+1}\rangle - \lambda(1-\lambda)\|e_{n+1}\|^2 \\ &= \|t_n-u\|^2 + \lambda(1-\lambda)\big(2\langle e_n, e_{n+1}\rangle - \|e_{n+1}\|^2\big). \end{aligned}$$
Since
$$2\langle e_n, e_{n+1}\rangle - \|e_{n+1}\|^2 = \|e_n\|^2 - \|e_{n+1}-e_n\|^2,$$
we obtain
$$\|u_{n+1}-u\|^2 + \lambda\|e_{n+1}\|^2 \le \|t_n-u\|^2 + \lambda(1-\lambda)\|e_n\|^2 - \lambda(1-\lambda)\|e_{n+1}-e_n\|^2. \qquad (25)$$
It follows from
$$\begin{aligned} \langle u_n-u, A^*(I-P_{Q_n})Au_n\rangle &= \langle Au_n-Au, (I-P_{Q_n})Au_n\rangle \\ &= \langle Au_n-Au, (I-P_{Q_n})Au_n - (I-P_{Q_n})Au\rangle \\ &\ge \|(I-P_{Q_n})Au_n - (I-P_{Q_n})Au\|^2 = \|(I-P_{Q_n})Au_n\|^2 \end{aligned}$$
that
$$\begin{aligned} \|t_n-u\|^2 &= \|u_n - \mu_n A^*(I-P_{Q_n})Au_n - u\|^2 \\ &= \|u_n-u\|^2 - 2\mu_n\langle u_n-u, A^*(I-P_{Q_n})Au_n\rangle + \mu_n^2\|A^*(I-P_{Q_n})Au_n\|^2 \\ &\le \|u_n-u\|^2 - 2\mu_n\|(I-P_{Q_n})Au_n\|^2 + \mu_n^2\|A\|^2\|(I-P_{Q_n})Au_n\|^2 \\ &= \|u_n-u\|^2 - \mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2. \end{aligned} \qquad (27)$$
By Equations (25) and (27), we obtain
$$\begin{aligned} \|u_{n+1}-u\|^2 + \lambda\|e_{n+1}\|^2 &\le \|u_n-u\|^2 - \mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 \\ &\quad + \lambda(1-\lambda)\|e_n\|^2 - \lambda(1-\lambda)\|e_{n+1}-e_n\|^2 \\ &= \|u_n-u\|^2 + \lambda\|e_n\|^2 - \lambda^2\|e_n\|^2 - \lambda(1-\lambda)\|e_{n+1}-e_n\|^2 \\ &\quad - \mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2. \end{aligned} \qquad (28)$$
Let $s_n = \|u_n-u\|^2 + \lambda\|e_n\|^2$; then the sequence $\{s_n\}$ is bounded below by 0. By the assumptions on $\{\mu_n\}$ and $\lambda$, from Equation (28) we get $s_{n+1} \le s_n$, which implies that the sequence $\{s_n\}$ is non-increasing and thus $\lim_{n\to\infty}s_n$ exists. It follows that $\{s_n\}$ is bounded and hence $\{u_n\}$ is bounded.
Moreover, from Equation (28), we also have
$$\lambda^2\|e_n\|^2 + \lambda(1-\lambda)\|e_{n+1}-e_n\|^2 + \mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 \le s_n - s_{n+1},$$
which implies that
$$\lim_{n\to\infty}\|(I-P_{Q_n})Au_n\| = 0 \qquad (29)$$
and
$$\lim_{n\to\infty}\|e_n\| = 0. \qquad (30)$$
Thus, $\lim_{n\to\infty}\|u_n-u\|^2 = \lim_{n\to\infty}\big(s_n - \lambda\|e_n\|^2\big) = \lim_{n\to\infty}s_n$ exists.
Next, we prove $\omega_w(u_n) \subset \Phi$. From Algorithm 1, we have
$$u_n - t_n = \mu_n A^*(I-P_{Q_n})Au_n$$
and
$$u_{n+1} - t_n = -\lambda e_{n+1}.$$
Combining Equations (29) and (30), we get
$$\lim_{n\to\infty}\|u_n-t_n\| = \lim_{n\to\infty}\|u_{n+1}-t_n\| = 0. \qquad (33)$$
It follows from Algorithm 1 that
$$\begin{aligned} u_{n+1} &= t_n - \lambda e_{n+1} = t_n - \lambda(I-P_{C_n})\big(t_n+(1-\lambda)e_n\big) \\ &= t_n - \lambda\big(t_n+(1-\lambda)e_n\big) + \lambda P_{C_n}\big(t_n+(1-\lambda)e_n\big) \\ &= (1-\lambda)t_n - \lambda(1-\lambda)e_n + \lambda P_{C_n}\big(t_n+(1-\lambda)e_n\big), \end{aligned}$$
which implies that
$$P_{C_n}\big(t_n+(1-\lambda)e_n\big) = \frac{1}{\lambda}u_{n+1} - \frac{1-\lambda}{\lambda}\big(t_n+(1-\lambda)e_n\big).$$
Let $z_{n+1} = \frac{1}{\lambda}u_{n+1} - \frac{1-\lambda}{\lambda}\big(t_n+(1-\lambda)e_n\big)$; then $z_{n+1} \in C_n$, i.e., $c(u_n) + \langle \xi_n, z_{n+1}-u_n\rangle \le 0$. Since $\partial c$ is bounded on bounded sets, there exists a constant $\xi > 0$ such that $\|\xi_n\| \le \xi$ for all $n \in \mathbb{N}$. It follows that
$$\begin{aligned} c(u_n) &\le \langle \xi_n, u_n - z_{n+1}\rangle = \Big\langle \xi_n,\ u_n - t_n - \frac{1}{\lambda}(u_{n+1}-t_n) + \frac{(1-\lambda)^2}{\lambda}e_n \Big\rangle \\ &\le \xi\Big(\frac{1}{\lambda}\|u_{n+1}-t_n\| + \|t_n-u_n\| + \frac{(1-\lambda)^2}{\lambda}\|e_n\|\Big). \end{aligned} \qquad (36)$$
By Equations (30), (33) and (36), we have
$$\limsup_{n\to\infty} c(u_n) \le 0. \qquad (37)$$
Assume that $\hat u \in \omega_w(u_n)$, i.e., there exists a subsequence $\{u_{n_j}\}$ of $\{u_n\}$ such that $u_{n_j} \rightharpoonup \hat u$ as $j \to \infty$. By the weak lower semicontinuity of c and Equation (37), we have
$$c(\hat u) \le \liminf_{j\to\infty} c(u_{n_j}) \le 0.$$
Therefore, $\hat u \in C$.
Now, we show that $A\hat u \in Q$. Since $\partial q$ is bounded on bounded sets and $\eta_n \in \partial q(Au_n)$, there exists a constant $\eta > 0$ such that $\|\eta_n\| \le \eta$ for all $n \in \mathbb{N}$. It follows from $P_{Q_n}Au_n \in Q_n$ that
$$q(Au_n) + \langle \eta_n, P_{Q_n}Au_n - Au_n\rangle \le 0,$$
which implies that
$$q(Au_n) \le \langle \eta_n, Au_n - P_{Q_n}Au_n\rangle \le \eta\|Au_n - P_{Q_n}Au_n\|.$$
It follows from Equation (29), the weak lower semicontinuity of q and the fact that $Au_{n_j} \rightharpoonup A\hat u$ that
$$q(A\hat u) \le \liminf_{j\to\infty} q(Au_{n_j}) \le 0.$$
Namely, $A\hat u \in Q$.
Thus, $\hat u \in \Phi$ and hence $\omega_w(u_n) \subset \Phi$. By Lemma 1, we have $u_n \rightharpoonup u^*$ and the sequence $\{(e_n, u_n)\}$ converges weakly to the point $(0, u^*)$, where $u^* \in \Phi$. This completes the proof. □
Remark 1.
When λ = 1 , Algorithm 1 becomes the relaxed CQ algorithm in Equation (16) proposed by Xu [27] for solving the SFP where the closed convex subsets C and Q are level sets of convex functions. Thus, Theorem 1 extends the related results of Xu [27] for solving the SFP (Equation (1)).

4. Strong Convergence Theorems

In this section, we modify the proposed Algorithm 1 so that it enjoys strong convergence. It is known that the viscosity approximation method is often used to approximate a fixed point of a nonexpansive mapping U in Hilbert spaces with strong convergence; it is defined as follows [37]:
$$u_{n+1} = \beta_n h(u_n) + (1-\beta_n)U(u_n)$$
for each $n \ge 1$, where $\{\beta_n\} \subset [0,1]$ and h is a contractive mapping. Now, we adapt the viscosity approximation method to obtain a strong convergence result for solving the SFP (Equation (1)) where the closed convex subsets C and Q are given in Equation (18).
Algorithm 2.
Let $h : H_1 \to H_1$ be a ρ-contraction mapping and let $C_n$ and $Q_n$ be given in Equations (19) and (20), respectively. Let $u_0, e_0 \in H_1$ be arbitrary. For $n \ge 0$, let
$$\begin{cases} t_n = u_n - \mu_n A^*(I-P_{Q_n})Au_n, \\ \bar e_n = (I-P_{C_n})\big(t_n + (1-\lambda)e_n\big), \\ \bar u_n = t_n - \lambda \bar e_n, \\ e_{n+1} = \beta_n h(e_n) + (1-\beta_n)\bar e_n, \\ u_{n+1} = \beta_n h(u_n) + (1-\beta_n)\bar u_n, \end{cases}$$
where $0 < \lambda \le 1$, $0 < \mu_n < 2/\|A\|^2$ and $\{\beta_n\} \subset [0,1]$.
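A sketch of Algorithm 2 under the same standing assumptions, in Python/NumPy. The concrete choices $\beta_n = 1/(n+2)$ and a user-supplied contraction h are ours for illustration; the paper only requires conditions (i)–(iii) of Theorem 2 below:

```python
import numpy as np

def halfspace_proj(x, a, b):
    """Projection onto {u : <a, u> <= b} (a != 0)."""
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def algorithm2(A, c, grad_c, q, grad_q, u0, e0, h, lam=0.5, n_iter=2000):
    """Viscosity-modified relaxed CQ iteration (Algorithm 2).

    h : rho-contraction on H1 (e.g. h(x) = 0.3 * x, rho = 0.3 < 1/sqrt(2)).
    """
    mu = 1.0 / np.linalg.norm(A, 2) ** 2       # mu_n in (0, 2/||A||^2)
    u, e = u0.astype(float), e0.astype(float)
    for n in range(n_iter):
        beta = 1.0 / (n + 2)                   # beta_n -> 0, sum beta_n = inf
        Au = A @ u
        eta = grad_q(Au)
        PQn = halfspace_proj(Au, eta, eta @ Au - q(Au))
        t = u - mu * (A.T @ (Au - PQn))
        xi = grad_c(u)
        y = t + (1 - lam) * e
        e_bar = y - halfspace_proj(y, xi, xi @ u - c(u))
        u_bar = t - lam * e_bar
        e = beta * h(e) + (1 - beta) * e_bar   # viscosity step on the dual
        u = beta * h(u) + (1 - beta) * u_bar   # viscosity step on the primal
    return u, e
```

With $h(x) = 0.3x$ and a problem for which $0 \in \Phi$ (such as the level-set example below), the variational inequality selects $u^* = 0$, so the iterates should approach the origin; convergence is slower than Algorithm 1 because of the vanishing $\beta_n$.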
Theorem 2.
Assume that $\{\beta_n\}$, $\{\mu_n\}$, $\lambda$ and $\rho$ satisfy the following assumptions:
(i) $\lim_{n\to\infty}\beta_n = 0$ and $\sum_{n=0}^{\infty}\beta_n = \infty$;
(ii) $0 < \liminf_{n\to\infty}\mu_n \le \limsup_{n\to\infty}\mu_n < 2/\|A\|^2$; and
(iii) $0 \le \rho < \frac{1}{\sqrt{2}}$ and $0 < \lambda < 1$.
Then, the sequence $\{(e_n, u_n)\}$ generated by Algorithm 2 converges strongly to $(0, u^*)$, where $u^* \in \Phi$ and $u^*$ solves the following variational inequality problem:
$$\langle (I-h)u^*, u-u^*\rangle \ge 0 \qquad (42)$$
for any $u \in \Phi$.
Proof. 
Let $u^* \in \Phi$ be the unique solution of the variational inequality problem in Equation (42). Then, $u^* \in C \subset C_n$ and $Au^* \in Q \subset Q_n$ for all $n \ge 0$. It follows from Equation (28) that
$$\|\bar u_n-u^*\|^2 + \lambda\|\bar e_n\|^2 \le \|u_n-u^*\|^2 + \lambda\|e_n\|^2 - \mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 - \lambda^2\|e_n\|^2 - \lambda(1-\lambda)\|\bar e_n-e_n\|^2. \qquad (43)$$
In particular, we have
$$\|\bar u_n-u^*\|^2 + \lambda\|\bar e_n\|^2 \le \|u_n-u^*\|^2 + \lambda\|e_n\|^2. \qquad (44)$$
From Algorithm 2, we get
$$\begin{aligned} \|u_{n+1}-u^*\|^2 + \lambda\|e_{n+1}\|^2 &= \|\beta_n h(u_n) + (1-\beta_n)\bar u_n - u^*\|^2 + \lambda\|\beta_n h(e_n) + (1-\beta_n)\bar e_n\|^2 \\ &\le \beta_n\|h(u_n)-u^*\|^2 + (1-\beta_n)\|\bar u_n-u^*\|^2 + \lambda\big[\beta_n\|h(e_n)\|^2 + (1-\beta_n)\|\bar e_n\|^2\big] \\ &\le \beta_n\big[2\|h(u_n)-h(u^*)\|^2 + 2\|h(u^*)-u^*\|^2\big] + (1-\beta_n)\|\bar u_n-u^*\|^2 \\ &\quad + \lambda\big[\beta_n\big(2\|h(e_n)-h(0)\|^2 + 2\|h(0)\|^2\big) + (1-\beta_n)\|\bar e_n\|^2\big] \\ &\le 2\beta_n\rho^2\|u_n-u^*\|^2 + 2\beta_n\|h(u^*)-u^*\|^2 + (1-\beta_n)\|\bar u_n-u^*\|^2 \\ &\quad + 2\lambda\beta_n\rho^2\|e_n\|^2 + 2\lambda\beta_n\|h(0)\|^2 + \lambda(1-\beta_n)\|\bar e_n\|^2 \\ &= (1-\beta_n)\big(\|\bar u_n-u^*\|^2 + \lambda\|\bar e_n\|^2\big) + 2\beta_n\rho^2\big(\|u_n-u^*\|^2 + \lambda\|e_n\|^2\big) \\ &\quad + 2\beta_n\big(\|h(u^*)-u^*\|^2 + \lambda\|h(0)\|^2\big) \\ &\le \big[1-\beta_n(1-2\rho^2)\big]\big(\|u_n-u^*\|^2 + \lambda\|e_n\|^2\big) \\ &\quad + \beta_n(1-2\rho^2)\frac{2}{1-2\rho^2}\big(\|h(u^*)-u^*\|^2 + \lambda\|h(0)\|^2\big). \end{aligned}$$
Setting $s_n = \|u_n-u^*\|^2 + \lambda\|e_n\|^2$, we have
$$s_{n+1} \le \big[1-\beta_n(1-2\rho^2)\big]s_n + \beta_n(1-2\rho^2)\frac{2}{1-2\rho^2}\big(\|h(u^*)-u^*\|^2 + \lambda\|h(0)\|^2\big).$$
It follows by induction that
$$s_n \le \max\Big\{s_0,\ \frac{2}{1-2\rho^2}\big(\|h(u^*)-u^*\|^2 + \lambda\|h(0)\|^2\big)\Big\}$$
for each $n \ge 0$, which implies that $\{u_n\}$ and $\{e_n\}$ are bounded. In addition, $\{\bar u_n\}$, $\{\bar e_n\}$, $\{h(u_n)\}$ and $\{h(e_n)\}$ are bounded. It follows from Algorithm 2 that
$$\begin{aligned} \|u_{n+1}-u^*\|^2 + \lambda\|e_{n+1}\|^2 &= \|\beta_n h(u_n) + (1-\beta_n)\bar u_n - u^*\|^2 + \lambda\|\beta_n h(e_n) + (1-\beta_n)\bar e_n\|^2 \\ &= \beta_n^2\|h(u_n)-u^*\|^2 + 2\beta_n(1-\beta_n)\langle h(u_n)-u^*, \bar u_n-u^*\rangle + (1-\beta_n)^2\|\bar u_n-u^*\|^2 \\ &\quad + \lambda\big[\beta_n^2\|h(e_n)\|^2 + 2\beta_n(1-\beta_n)\langle h(e_n), \bar e_n\rangle + (1-\beta_n)^2\|\bar e_n\|^2\big] \\ &= \beta_n^2\|h(u_n)-u^*\|^2 + (1-\beta_n)^2\|\bar u_n-u^*\|^2 + \lambda\beta_n^2\|h(e_n)\|^2 + \lambda(1-\beta_n)^2\|\bar e_n\|^2 \\ &\quad + 2\beta_n(1-\beta_n)\big[\langle h(u_n)-h(u^*), \bar u_n-u^*\rangle + \langle h(u^*)-u^*, \bar u_n-u^*\rangle\big] \\ &\quad + 2\lambda\beta_n(1-\beta_n)\big[\langle h(e_n)-h(0), \bar e_n\rangle + \langle h(0), \bar e_n\rangle\big] \\ &\le \beta_n^2\|h(u_n)-u^*\|^2 + (1-\beta_n)^2\|\bar u_n-u^*\|^2 + \lambda\beta_n^2\|h(e_n)\|^2 + \lambda(1-\beta_n)^2\|\bar e_n\|^2 \\ &\quad + \beta_n(1-\beta_n)\big(\|h(u_n)-h(u^*)\|^2 + \|\bar u_n-u^*\|^2\big) + 2\beta_n(1-\beta_n)\langle h(u^*)-u^*, \bar u_n-u^*\rangle \\ &\quad + \lambda\beta_n(1-\beta_n)\big(\|h(e_n)-h(0)\|^2 + \|\bar e_n\|^2\big) + 2\lambda\beta_n(1-\beta_n)\langle h(0), \bar e_n\rangle \\ &\le \beta_n^2\|h(u_n)-u^*\|^2 + (1-\beta_n)\|\bar u_n-u^*\|^2 + \lambda\beta_n^2\|h(e_n)\|^2 + \lambda(1-\beta_n)\|\bar e_n\|^2 \\ &\quad + \beta_n(1-\beta_n)\rho^2\|u_n-u^*\|^2 + 2\beta_n(1-\beta_n)\langle h(u^*)-u^*, \bar u_n-u^*\rangle \\ &\quad + \lambda\beta_n(1-\beta_n)\rho^2\|e_n\|^2 + 2\lambda\beta_n(1-\beta_n)\langle h(0), \bar e_n\rangle. \end{aligned} \qquad (47)$$
Thus, by Equations (44) and (47), we have
$$s_{n+1} \le \big[1-\beta_n\big(1-(1-\beta_n)\rho^2\big)\big]s_n + \beta_n\Big[\beta_n\big(\|h(u_n)-u^*\|^2 + \lambda\|h(e_n)\|^2\big) + 2(1-\beta_n)\big(\langle h(u^*)-u^*, \bar u_n-u^*\rangle + \lambda\langle h(0), \bar e_n\rangle\big)\Big] = (1-\lambda_n)s_n + \lambda_n\delta_n,$$
where
$$\lambda_n = \beta_n\big(1-(1-\beta_n)\rho^2\big)$$
and
$$\delta_n = \frac{2(1-\beta_n)\big(\langle h(u^*)-u^*, \bar u_n-u^*\rangle + \lambda\langle h(0), \bar e_n\rangle\big)}{1-(1-\beta_n)\rho^2} + \frac{\beta_n\big(\|h(u_n)-u^*\|^2 + \lambda\|h(e_n)\|^2\big)}{1-(1-\beta_n)\rho^2}.$$
On the other hand, from Equations (43) and (47), we have
$$\begin{aligned} s_{n+1} &\le \big[1-\beta_n\big(1-(1-\beta_n)\rho^2\big)\big]s_n + \beta_n^2\big(\|h(u_n)-u^*\|^2 + \lambda\|h(e_n)\|^2\big) \\ &\quad + 2\beta_n(1-\beta_n)\big(\langle h(u^*)-u^*, \bar u_n-u^*\rangle + \lambda\langle h(0), \bar e_n\rangle\big) \\ &\quad - (1-\beta_n)\Big[\mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 + \lambda^2\|e_n\|^2 + \lambda(1-\lambda)\|\bar e_n-e_n\|^2\Big] \\ &\le s_n + \beta_n^2\big(\|h(u_n)-u^*\|^2 + \lambda\|h(e_n)\|^2\big) + 2\beta_n(1-\beta_n)\big(\langle h(u^*)-u^*, \bar u_n-u^*\rangle + \lambda\langle h(0), \bar e_n\rangle\big) \\ &\quad - (1-\beta_n)\Big[\mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 + \lambda^2\|e_n\|^2 + \lambda(1-\lambda)\|\bar e_n-e_n\|^2\Big]. \end{aligned} \qquad (50)$$
Now, by setting
$$a_n = \beta_n^2\big(\|h(u_n)-u^*\|^2 + \lambda\|h(e_n)\|^2\big) + 2\beta_n(1-\beta_n)\big(\langle h(u^*)-u^*, \bar u_n-u^*\rangle + \lambda\langle h(0), \bar e_n\rangle\big)$$
and
$$\eta_n = (1-\beta_n)\Big[\mu_n\big(2-\mu_n\|A\|^2\big)\|(I-P_{Q_n})Au_n\|^2 + \lambda^2\|e_n\|^2 + \lambda(1-\lambda)\|\bar e_n-e_n\|^2\Big],$$
Equation (50) can be rewritten in the following form:
$$s_{n+1} \le s_n - \eta_n + a_n$$
for each $n \ge 0$. By the assumptions on $\{\beta_n\}$ and $\rho$, we have
$$\sum_{n=0}^{\infty}\lambda_n = \infty, \qquad \lim_{n\to\infty}a_n = 0.$$
To use Lemma 2, it suffices to verify that, for any subsequence $\{n_l\} \subset \{n\}$, $\lim_{l\to\infty}\eta_{n_l} = 0$ implies
$$\limsup_{l\to\infty}\delta_{n_l} \le 0. \qquad (52)$$
Since $\lim_{l\to\infty}\eta_{n_l} = 0$, from the assumptions on $\lambda$ and $\{\mu_n\}$, we obtain
$$\lim_{l\to\infty}\|(I-P_{Q_{n_l}})Au_{n_l}\| = \lim_{l\to\infty}\|e_{n_l}\| = \lim_{l\to\infty}\|\bar e_{n_l}\| = 0.$$
From
$$\begin{aligned} \|u_{n_l}-\bar u_{n_l}\| &= \big\|u_{n_l} - u_{n_l} + \mu_{n_l}A^*(I-P_{Q_{n_l}})Au_{n_l} + \lambda\bar e_{n_l}\big\| = \big\|\mu_{n_l}A^*(I-P_{Q_{n_l}})Au_{n_l} + \lambda\bar e_{n_l}\big\| \\ &\le \mu_{n_l}\|A\|\,\|(I-P_{Q_{n_l}})Au_{n_l}\| + \lambda\|\bar e_{n_l}\|, \end{aligned}$$
we obtain
$$\lim_{l\to\infty}\|u_{n_l}-\bar u_{n_l}\| = 0. \qquad (54)$$
In a similar way to the proof of Theorem 1, we can get $\omega_w(u_{n_l}) \subset \Phi$. Since
$$\lim_{l\to\infty}\big(1-(1-\beta_{n_l})\rho^2\big) = 1-\rho^2$$
and
$$\lim_{l\to\infty}\beta_{n_l}\big(\|h(u_{n_l})-u^*\|^2 + \lambda\|h(e_{n_l})\|^2\big) = 0,$$
to get Equation (52), we only need to verify
$$\limsup_{l\to\infty}\big(\langle h(u^*)-u^*, \bar u_{n_l}-u^*\rangle + \lambda\langle h(0), \bar e_{n_l}\rangle\big) \le 0.$$
From Equation (54) and the boundedness of $\{(e_{n_l}, u_{n_l})\}$, we can take a subsequence $\{(e_{n_{l_j}}, u_{n_{l_j}})\}$ of $\{(e_{n_l}, u_{n_l})\}$ such that $u_{n_{l_j}} \rightharpoonup \tilde u$ as $j \to \infty$ and
$$\begin{aligned} \limsup_{l\to\infty}\big(\langle h(u^*)-u^*, \bar u_{n_l}-u^*\rangle + \lambda\langle h(0), \bar e_{n_l}\rangle\big) &= \lim_{j\to\infty}\big(\langle h(u^*)-u^*, \bar u_{n_{l_j}}-u^*\rangle + \lambda\langle h(0), \bar e_{n_{l_j}}\rangle\big) \\ &= \lim_{j\to\infty}\langle h(u^*)-u^*, u_{n_{l_j}}-u^*\rangle = \langle h(u^*)-u^*, \tilde u-u^*\rangle. \end{aligned} \qquad (55)$$
Since $\omega_w(u_{n_l}) \subset \Phi$ and $u^*$ is a solution of the variational inequality problem in Equation (42), it follows from Equation (55) that
$$\limsup_{l\to\infty}\big(\langle h(u^*)-u^*, \bar u_{n_l}-u^*\rangle + \lambda\langle h(0), \bar e_{n_l}\rangle\big) \le 0.$$
Thus, it follows from Lemma 2 that
$$\lim_{n\to\infty}s_n = \lim_{n\to\infty}\big(\|u_n-u^*\|^2 + \lambda\|e_n\|^2\big) = 0,$$
which implies that $u_n \to u^*$, $e_n \to 0$ and $(e_n, u_n) \to (0, u^*)$, where $u^* \in \Phi$ and $u^*$ solves the variational inequality problem in Equation (42). This completes the proof. □

5. Numerical Results

In this section, we provide some numerical experiments to show the performance of the proposed relaxed CQ iterative Algorithm 1 for solving the SFP (Equation (1)) where the closed convex subsets C and Q are level sets of convex functions. All codes were written in MATLAB and run on a personal Lenovo computer with a Pentium(R) Dual-Core CPU @ 2.40 GHz and 2.00 GB of RAM.
Example 1.
We consider the SFP (Equation (1)) as follows: $H_1 = H_2 = \mathbb{R}^2$, the matrix $A = (a_{i,j})_{N\times N}$ with entries $a_{i,j} \in (0,1)$ generated randomly, and the nonempty closed convex sets $C = \{u \in \mathbb{R}^2 : c(u) \le 0\}$ and $Q = \{v \in \mathbb{R}^2 : q(v) \le 0\}$, where
$$c(u) = u_1 + u_2^2$$
and
$$q(v) = v_1 + v_2^2$$
for all $u = (u_1, u_2)^T \in \mathbb{R}^2$ and $v = (v_1, v_2)^T \in \mathbb{R}^2$.
Now, we compare the proposed relaxed CQ Algorithm 1 with the relaxed CQ algorithm in Equation (16) proposed by Xu [27] to solve Example 1. In the implementation, we took $\mu_n = \frac{1.55}{\|A\|^2}$ and $p(u_n) < \varepsilon = 10^{-4}$ as the stopping criterion, where
$$p(u_n) = \|u_n - P_{C_n}u_n\| + \|Au_n - P_{Q_n}Au_n\|.$$
We took different $u_0$ and $e_0$ as initial points. In Case 1, we took $u_0 = (5, 4)^T$ and $e_0 = (0, 1)^T$. In Case 2, we took $u_0 = (10, 4)^T$ and $e_0 = (2, 10)^T$.
We tried different values of $\lambda$ for solving this example. When the parameter $\lambda = 1$, Algorithm 1 becomes the relaxed CQ algorithm in Equation (16). We report the numerical results in Table 1 and Table 2. In the tables, "Iter." denotes the number of iterations at termination, and $c(u)$ and $q(Au)$ denote the values of $c(u)$ and $q(Au)$ at the terminal point, respectively.
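The experiment can be reproduced in outline as follows (Python/NumPy sketch; the paper used MATLAB, so the random matrix and the resulting iteration counts will differ from the tables). The seed and the tested λ values are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2
A = rng.uniform(0.0, 1.0, (N, N))             # entries in (0, 1), as in Example 1

c = lambda u: u[0] + u[1] ** 2                 # C = {c <= 0}
grad_c = lambda u: np.array([1.0, 2.0 * u[1]])
q = lambda v: v[0] + v[1] ** 2                 # Q = {q <= 0}
grad_q = lambda v: np.array([1.0, 2.0 * v[1]])

def hproj(x, a, b):
    v = a @ x - b
    return x if v <= 0 else x - (v / (a @ a)) * a

def run(lam, u0, e0, eps=1e-4, max_iter=100000):
    mu = 1.55 / np.linalg.norm(A, 2) ** 2      # mu_n = 1.55 / ||A||^2
    u, e = np.array(u0, float), np.array(e0, float)
    for k in range(max_iter):
        Au = A @ u
        eta = grad_q(Au)
        PQ = hproj(Au, eta, eta @ Au - q(Au))
        xi = grad_c(u)
        PC = hproj(u, xi, xi @ u - c(u))
        # stopping criterion p(u_n) = ||u - P_{C_n} u|| + ||Au - P_{Q_n} Au||
        if np.linalg.norm(u - PC) + np.linalg.norm(Au - PQ) < eps:
            return u, k
        t = u - mu * (A.T @ (Au - PQ))
        y = t + (1 - lam) * e
        e = y - hproj(y, xi, xi @ u - c(u))
        u = t - lam * e
    return u, max_iter

for lam in (0.4, 0.7, 1.0):                    # lam = 1 recovers Xu's iteration (16)
    u, iters = run(lam, [5.0, 4.0], [0.0, 1.0])
    print(f"lambda={lam}: {iters} iterations, c(u)={c(u):.2e}, q(Au)={q(A @ u):.2e}")
```

The printed iteration counts play the role of "Iter." in the tables, and the terminal values of c(u) and q(Au) should be close to (or below) zero.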
Example 2.
Let $H_1 = H_2 = \mathbb{R}^3$, $A = (a_{i,j})_{N\times N}$ with entries $a_{i,j} \in (0,1)$ generated randomly, and the nonempty closed convex sets $C = \{u \in \mathbb{R}^3 : c(u) \le 0\}$ and $Q = \{v \in \mathbb{R}^3 : q(v) \le 0\}$, where
$$c(u) = u_1 + u_2^2 + u_3^2$$
and
$$q(v) = v_1 + v_2^2 + v_3^2$$
for all $u = (u_1, u_2, u_3)^T \in \mathbb{R}^3$ and $v = (v_1, v_2, v_3)^T \in \mathbb{R}^3$.
Similar to Example 1, we compared the proposed relaxed CQ Algorithm 1 with the relaxed CQ algorithm in Equation (16) proposed by Xu [27] to solve this example. We took $\mu_n = \frac{1.55}{\|A\|^2}$ and the same stopping criterion as in Example 1. We took different $u_0$ and $e_0$ as initial points. The numerical results are given in Table 3 and Table 4.
We can see in Table 1, Table 2, Table 3 and Table 4 that Algorithm 1 is efficient and performs better than the relaxed CQ algorithm in Equation (16) when a suitable parameter $\lambda$ is chosen for solving Examples 1 and 2.

Author Contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Funding

This article was supported by the National Natural Science Foundation of China (No. 61571441) and the Scientific Research Project of Tianjin Municipal Education Commission (No. 2018KJ253).

Conflicts of Interest

The authors declare that they have no competing interests.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
4. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
5. Dong, Q.L.; Cho, Y.J.; Rassias, T.M. The projection and contraction methods for finding common solutions to variational inequality problems. Optim. Lett. 2018, 12, 1871–1896.
6. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360.
7. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102.
8. Giang, D.M.; Strodiot, J.J.; Nguyen, V.H. Strong convergence of an iterative method for solving the multiple-set split equality fixed point problem in a real Hilbert space. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2017, 111, 983–998.
9. He, S.; Tian, H.; Xu, H.K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. 2018, 19, 1199–1215.
10. Hieu, D.V.; Cho, Y.J.; Xiao, Y. Modified extragradient algorithms for solving equilibrium problems. Optimization 2018, 67, 2003–2029.
11. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665.
12. Witthayarat, U.; Cho, Y.J.; Cholamjiak, P. On solving proximal split feasibility problems and applications. Ann. Funct. Anal. 2018.
13. Yao, Y.; Yao, Z.; Abdou, A.A.; Cho, Y.J. Self-adaptive algorithms for proximal split feasibility problems and strong convergence analysis. Fixed Point Theory Appl. 2015, 2015, 205.
14. Zhao, J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 2015, 64, 2619–2630.
15. Zhao, J.; Hou, D. A self-adaptive iterative algorithm for the split common fixed point problems. Numer. Algorithms 2018.
16. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
17. Combettes, P.; Wajs, V. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
18. Chen, P.; Huang, J.; Zhang, X. A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration. Inverse Probl. 2013, 29, 025011.
19. Chang, S.S.; Kim, J.K.; Cho, Y.J.; Sim, J.Y. Weak and strong convergence theorems of solutions to split feasibility problem for nonspreading type mapping in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 11.
20. Masad, E.; Reich, S. A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 367–371.
  21. Qin, X.; Wang, L. A fixed point method for solving a split feasibility problem in Hilbert spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2017. [Google Scholar] [CrossRef]
  22. Qin, X.; Petrusel, A.; Yao, J.C. CQ iterative algorithms for fixed points of nonexpansive mappings and split feasibility problems in Hilbert spaces. J. Nonlinear Convex Anal. 2018, 19, 157–165. [Google Scholar]
  23. Xu, H.K. A variable Krasnosel'skiĭ–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034. [Google Scholar] [CrossRef]
  24. Zhao, J.; Zong, H. Iterative algorithms for solving the split feasibility problem in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 11. [Google Scholar] [CrossRef]
  25. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70. [Google Scholar] [CrossRef]
  26. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
  27. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
  28. He, S.; Zhao, Z. Strong convergence of a relaxed CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2013, 2013, 197. [Google Scholar] [CrossRef]
  29. Lopez, G.; Martin, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  30. Wang, Z.; Yang, Q.; Yang, Y. The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 2011, 217, 5347–5359. [Google Scholar] [CrossRef]
  31. Zhou, H.; Wang, P. Adaptively relaxed algorithms for solving the split feasibility problem with a new step size. J. Inequal. Appl. 2014, 2014, 448. [Google Scholar] [CrossRef]
  32. Stampacchia, G. Formes bilinéaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 1964, 258, 4413–4416. [Google Scholar]
  33. Lions, J.L.; Stampacchia, G. Variational inequalities. Commun. Pure Appl. Math. 1967, 20, 493–512. [Google Scholar] [CrossRef]
  34. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2011; p. 38. [Google Scholar]
  35. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315. [Google Scholar] [CrossRef]
  36. Matinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point processes. Nonlinear Anal. 2006, 64, 2400–2411. [Google Scholar] [CrossRef]
  37. Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef]
Table 1. Numerical results for solving Example 1 with different λ.

u0 = (5, 4)^T, e0 = (0, 1)^T

λ      Iter.   C(u)             Q(Au)
1      7       1.4363 × 10^−7   2.7797 × 10^−4
0.9    5       −0.1930          0.0082
0.8    6       −0.1940          0.1720
0.7    6       −0.3159          0.2697
0.6    6       −0.3916          0.3400
0.5    6       −0.4044          0.3793
0.4    6       −0.4158          0.3316
0.3    8       −0.1115          0.7977
0.2    7       −0.4482          0.1027
0.1    10      −0.3799          0.3628
Table 2. Numerical results for solving Example 1 with different λ.

u0 = (10, 4)^T, e0 = (2, 10)^T

λ      Iter.   C(u)             Q(Au)
1      9       9.4358 × 10^−7   −0.7504
0.9    7       −0.1351          0.7958
0.8    4       −0.2007          0.8981
0.7    8       −0.1166          0.8570
0.6    6       −0.3025          0.8072
0.5    7       −0.1696          0.4115
0.4    9       −0.0020          1.0547
0.3    11      −0.0030          1.0185
0.2    8       −0.4556          0.2518
0.1    11      −0.5349          0.4600
Table 3. Numerical results for solving Example 2 with different λ.

u0 = (5, 4, 3)^T, e0 = (0, 1, 1)^T

λ      Iter.   C(u)             Q(Au)
1      13      2.7532 × 10^−9   1.2270 × 10^−4
0.9    9       0.0094           6.6177 × 10^−5
0.8    7       0.0954           0.0055
0.7    7       −0.0228          0.0247
0.6    8       −0.4571          0.0030
0.5    8       −0.6276          0.0322
0.4    9       −0.8269          0.0114
0.3    10      −0.9781          0.0132
0.2    11      −1.1005          0.0017
0.1    16      −1.1647          0.0139
Table 4. Numerical results for solving Example 2 with different λ.

u0 = (2, 1, 15)^T, e0 = (3, 2, 1)^T

λ      Iter.   C(u)             Q(Au)
1      9       6.8996 × 10^−7   −1.0408
0.9    8       −0.0250          1.0249
0.8    7       −0.0369          −0.9827
0.7    7       −0.3059          −0.8919
0.6    9       −0.1699          −0.9188
0.5    10      −0.2647          −0.8608
0.4    9       −0.1480          −0.9394
0.3    10      −0.0890          −0.9616
0.2    10      −0.3267          −0.8054
0.1    12      −0.2652          −0.8550
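The tables above report the behavior of the relaxed CQ iteration for varying relaxation parameter λ. For readers who wish to reproduce this kind of experiment, the following is a minimal sketch of the standard relaxed CQ scheme (in the style of Yang [26] and Xu [27]), in which the projections onto C and Q are replaced by projections onto the half-spaces C_k and Q_k built from subgradients of the level-set functions c and q. The toy sets C, Q, the matrix A, and the starting point are hypothetical illustrations, not the paper's Example 1 or 2.

```python
import numpy as np

def relaxed_cq(A, c, grad_c, q, grad_q, x0, lam=1.0, tol=1e-6, max_iter=1000):
    """Relaxed CQ iteration: x_{k+1} = P_{C_k}(x_k - gamma A^T (I - P_{Q_k}) A x_k),
    with gamma = lam / ||A||^2, lam in (0, 2), and C_k, Q_k half-space relaxations."""
    gamma = lam / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = A @ x
        # Q_k = {v : q(y) + <grad q(y), v - y> <= 0}; project y = A x onto it
        gq = grad_q(y)
        p = y - (q(y) / (gq @ gq)) * gq if q(y) > 0 else y
        # Gradient step for (1/2) ||A x - P_{Q_k}(A x)||^2
        z = x - gamma * (A.T @ (y - p))
        # C_k = {v : c(x) + <grad c(x), v - x> <= 0}; project z onto it
        gc = grad_c(x)
        viol = c(x) + gc @ (z - x)
        x_new = z - (viol / (gc @ gc)) * gc if viol > 0 else z
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical example: C = {x : ||x||^2 - 1 <= 0},
# Q = {y : (y1 - 2)^2 + y2^2 - 1 <= 0}, so x* = (1, 0)^T is one solution.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
c = lambda x: x @ x - 1.0
grad_c = lambda x: 2.0 * x
q = lambda y: (y[0] - 2.0) ** 2 + y[1] ** 2 - 1.0
grad_q = lambda y: np.array([2.0 * (y[0] - 2.0), 2.0 * y[1]])
x = relaxed_cq(A, c, grad_c, q, grad_q, x0=np.array([5.0, 4.0]), lam=1.0)
```

As in the tables, one can then record the iteration count and the final constraint values c(x) and q(Ax) for each choice of λ; both should be (approximately) nonpositive at a solution of the split feasibility problem.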

Share and Cite

MDPI and ACS Style

Wang, X.; Zhao, J.; Hou, D. Modified Relaxed CQ Iterative Algorithms for the Split Feasibility Problem. Mathematics 2019, 7, 119. https://doi.org/10.3390/math7020119


