Article

The Combination Projection Method for Solving Convex Feasibility Problems

Tianjin Key Laboratory for Advanced Signal Processing and College of Science, Civil Aviation University of China, Tianjin 300300, China
*
Author to whom correspondence should be addressed.
Mathematics 2018, 6(11), 249; https://doi.org/10.3390/math6110249
Submission received: 17 October 2018 / Revised: 4 November 2018 / Accepted: 7 November 2018 / Published: 12 November 2018
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract

In this paper, we propose a new method, called the combination projection method (CPM), for solving the convex feasibility problem (CFP) of finding some $x^* \in C := \bigcap_{i=1}^{m} \{x \in \mathcal{H} \mid c_i(x) \le 0\}$, where $m$ is a positive integer, $\mathcal{H}$ is a real Hilbert space, and $\{c_i\}_{i=1}^{m}$ are convex functions defined on $\mathcal{H}$. The key idea of the CPM is that, given the current iterate $x^k$, it first constructs a new level set $H_k$ through a convex combination of some of the $\{c_i\}_{i=1}^{m}$ in an appropriate way, and then computes the next iterate $x^{k+1}$ using only the projection $P_{H_k}$. We also introduce the combination relaxation projection method (CRPM), which projects onto half-spaces, to make the CPM easily implementable. Simplicity and ease of implementation are two advantages of our methods, since only one projection is used in each iteration and the projections are easy to calculate. Weak convergence theorems are proved, and numerical results show the advantages of our methods.

1. Introduction

Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, respectively. Recall that the projection operator of a nonempty closed convex subset $D$ of $\mathcal{H}$, $P_D : \mathcal{H} \to D$, is defined by
$$P_D(x) := \arg\min_{y \in D} \|x - y\|^2, \quad x \in \mathcal{H}.$$
Projections are indispensable for solving convex feasibility problems (CFP) [1,2,3], split feasibility problems (SFP) [4,5], and variational inequality problems (VIP) [6,7].
If the set $D$ is simple, such as a hyperplane or a half-space, the projection onto $D$ can be calculated explicitly. In general, however, $D$ is more complex and $P_D$ has no closed-form formula, which makes the computation of $P_D$ rather difficult [8]. So, how to compute $P_D$ efficiently is a very important and interesting problem. Fukushima [9] suggested the half-space relaxation projection method, and this idea was followed by many authors to introduce relaxed projection algorithms for solving the SFP [10,11] and the VIP [12,13].
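For instance, the projections onto a half-space and onto a closed ball both admit well-known closed forms. The following is a minimal NumPy sketch of these two formulas (the function names are ours):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : <a, y> <= b}, where a != 0."""
    violation = np.dot(a, x) - b
    if violation <= 0:                    # x already lies in the half-space
        return x
    return x - (violation / np.dot(a, a)) * a

def project_ball(x, center, r):
    """Project x onto the closed ball {y : ||y - center|| <= r}."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= r:                         # x already lies in the ball
        return x
    return center + (r / dist) * d
```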
Let $m$ be a positive integer and $\{C_i\}_{i=1}^{m}$ be a finite family of nonempty closed convex subsets of $\mathcal{H}$ with a nonempty intersection. The convex feasibility problem [14] is to find
$$x^* \in C := \bigcap_{i=1}^{m} C_i, \qquad (1)$$
which is a very common problem in diverse areas of mathematics and the physical sciences [15]. In the last twenty years, there has been growing interest in the CFP since it was found to have various applications in imaging science [16,17], medical treatment [18], and statistics [19].
A great deal of literature on methods for solving the CFP has been published (e.g., [20,21,22,23]). The classical method traces back at least to the alternating projection method introduced by von Neumann [14] in the 1930s, which is called the successive orthogonal projection method (SOP) in Reference [24]. The SOP in Reference [14] solves the CFP with $C_1$ and $C_2$ being two closed subspaces of $\mathcal{H}$, and generates a sequence $\{x^k\}_{k=1}^{\infty}$ via the iterating process:
$$x^k = (P_{C_1} P_{C_2})^k x^0, \quad k \ge 0, \qquad (2)$$
where $x^0 \in \mathcal{H}$ is an arbitrary initial guess. Von Neumann [14] proved that the sequence $\{x^k\}_{k=1}^{\infty}$ converges strongly to $P_{C_1 \cap C_2} x^0$. In 1965, Bregman [2] extended von Neumann's results to the case where $C_1$ and $C_2$ are closed convex subsets and proved the weak convergence. Hundal [25] showed that for two closed convex subsets $C_1$ and $C_2$, the SOP does not always converge in norm by providing an explicit counterexample. Further results on the SOP were obtained by Gubin et al. [26] and Bruck et al. [27].
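For two sets, the iteration (2) is only a few lines of code. Here is a minimal sketch, assuming `project1` and `project2` compute $P_{C_1}$ and $P_{C_2}$ (e.g., the helpers above when the sets are half-spaces or balls):

```python
import numpy as np

def sop(project1, project2, x0, n_iter=100):
    """Von Neumann's alternating projections: x^k = (P_{C1} P_{C2})^k x^0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = project1(project2(x))
    return x
```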
The SOP is the most fundamental method to solve the CFP, and many existing algorithms [24,28] can be regarded as generalizations or variants of the SOP. Let $\{C_i\}_{i=1}^{m}$ be a finite family of level sets of convex functions $\{c_i\}_{i=1}^{m}$ (i.e., $C_i = \{x \mid c_i(x) \le 0\}$) such that $C := \bigcap_{i=1}^{m} C_i \neq \emptyset$. Adopting Fukushima's relaxation technique [9], He et al. [28] introduced a contraction-type sequential projection algorithm which generates the iterating process:
$$x^{k+1} = \lambda_k u + (1 - \lambda_k) P_{C_k^m} P_{C_k^{m-1}} \cdots P_{C_k^2} P_{C_k^1} x^k, \quad k \ge 0, \qquad (3)$$
where the sequence $\{\lambda_k\}_{k=0}^{\infty} \subset (0,1)$, $u \in \mathcal{H}$ is a given point, and $\{C_k^i\}_{i=1}^{m}$ is a finite family of half-spaces such that $C_k^i \supseteq C_i$ for $i = 1, 2, \ldots, m$ and all $k \ge 0$. They proved that the sequence $\{x^k\}_{k=1}^{\infty}$ converges strongly to $P_C u$ under certain conditions. Because the projection operators onto half-spaces have closed-form formulae, the algorithm (3) is easily implemented. However, one common feature of SOP-type algorithms is that they need to evaluate all the projections $\{P_{C_i}\}_{i=1}^{m}$ (or relaxed projections $\{P_{C_k^i}\}_{i=1}^{m}$) in each iteration (see, e.g., Reference [24]), which results in prohibitive computational cost for large-scale problems.
Therefore, to solve the CFP (1) efficiently, it is necessary to design methods which use fewer projections in each iteration. He et al. [29,30] proposed the selective projection method (SPM) for solving the CFP (1), where $C$ is the intersection of a finite family of level sets of convex functions. An advantage of the SPM is that only one (appropriately selected) projection needs to be computed in each iteration, while the weak convergence of the algorithm is still guaranteed. More precisely, the SPM consists of two steps in each iteration. Step one: once the $k$-th iterate $x^k$ is obtained, according to a certain criterion, one set $C_{i_k}$ (or $C_k^{i_k}$) is selected from the sets $\{C_i\}_{i=1}^{m}$ (or the relaxed sets $\{C_k^i\}_{i=1}^{m}$, where $C_k^i$ is some half-space containing $C_i$ for all $i = 1, 2, \ldots, m$ and $k \ge 0$). Step two: the new iterate $x^{k+1}$ is then updated via the process:
$$x^{k+1} = P_{C_{i_k}} x^k \quad (\text{or } P_{C_k^{i_k}} x^k). \qquad (4)$$
Because (4) only involves one projection, the SPM is simpler than the SOP-type algorithms.
The main purpose of this paper is to propose a new method, called the combination projection method (CPM), for solving the convex feasibility problem of finding some
$$x^* \in C := \bigcap_{i=1}^{m} \{x \in \mathcal{H} \mid c_i(x) \le 0\}, \qquad (5)$$
where $m$ is a positive integer and $\{c_i\}_{i=1}^{m}$ are convex functions defined on $\mathcal{H}$. The key idea of the CPM is that, given the current iterate $x^k$, it first constructs a new level set $H_k$ through a convex combination of some of the $\{c_i\}_{i=1}^{m}$, and then computes the next iterate $x^{k+1}$ using the projection $P_{H_k}$. Simplicity and ease of implementation are two advantages of our method, since only one projection is used in each iteration and the projections are easy to calculate. To make the CPM easily implementable, we also introduce the combination relaxation projection method (CRPM), which involves projections onto half-spaces. Weak convergence theorems are proved, and numerical results show the advantages of our methods. In fact, the methods in this paper can easily be extended to other nonlinear problems, for example, the SFP and the VIP.

2. Preliminaries

Let $\mathcal{H}$ be a real Hilbert space and $T : \mathcal{H} \to \mathcal{H}$ be a mapping. Recall that
  • $T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in \mathcal{H}$.
  • $T$ is firmly nonexpansive if $\|Tx - Ty\|^2 \le \|x - y\|^2 - \|(I - T)x - (I - T)y\|^2$ for all $x, y \in \mathcal{H}$.
  • $T : \mathcal{H} \to \mathcal{H}$ is an averaged mapping if there exist some $\alpha \in (0,1)$ and a nonexpansive mapping $V : \mathcal{H} \to \mathcal{H}$ such that $T = (1 - \alpha)I + \alpha V$; in this case, $T$ is also said to be $\alpha$-averaged.
  • $T$ is inverse strongly monotone (ISM) if there exists some $\nu > 0$ such that
    $$\langle Tx - Ty, x - y \rangle \ge \nu \|Tx - Ty\|^2, \quad x, y \in \mathcal{H}.$$
    In this case, we say that $T$ is $\nu$-ISM.
Lemma 1
([31]). For a mapping $T : \mathcal{H} \to \mathcal{H}$, the following are equivalent:
(i) $T$ is $\frac{1}{2}$-averaged;
(ii) $T$ is $1$-ISM;
(iii) $T$ is firmly nonexpansive;
(iv) $I - T$ is firmly nonexpansive.
Recall that the projection onto a closed convex subset $D$ of $\mathcal{H}$ is defined by
$$P_D(x) := \arg\min_{y \in D} \|x - y\|^2, \quad x \in \mathcal{H}.$$
It is well known that $P_D$ is characterized by the inequality
$$P_D(x) \in D, \quad \langle x - P_D(x), y - P_D(x) \rangle \le 0, \quad x \in \mathcal{H},\ y \in D.$$
Some useful properties of the projection operators are collected in the lemma below.
Lemma 2
([31]). For any nonempty closed convex subset $D$ of $\mathcal{H}$, the projection $P_D$ is both $\frac{1}{2}$-averaged and $1$-ISM. Equivalently, $P_D$ is firmly nonexpansive.
Lemma 3
([32]). Let $D$ be a nonempty closed convex subset of $\mathcal{H}$. Let $\{u^k\}_{k=0}^{\infty} \subset \mathcal{H}$ satisfy the properties:
(i) $\lim_{k \to \infty} \|u^k - u\|$ exists for each $u \in D$;
(ii) $\omega_w(u^k) \subseteq D$, where $\omega_w(u^k)$ denotes the set of weak cluster points of $\{u^k\}_{k=0}^{\infty}$.
Then, $\{u^k\}_{k=0}^{\infty}$ converges weakly to a point in $D$.
Lemma 4.
Let $\{c_i\}_{i=1}^{m}$ be a finite family of convex functions defined on $\mathcal{H}$ such that their level sets $C_i = \{x \in \mathcal{H} \mid c_i(x) \le 0\}$, $i = 1, 2, \ldots, m$, have a nonempty intersection. Let $H = \{x \in \mathcal{H} \mid \sum_{i=1}^{m} \beta_i c_i(x) \le 0\}$ with $\{\beta_i\}_{i=1}^{m} \subset (0,1)$ such that $\sum_{i=1}^{m} \beta_i = 1$. Then, the following properties are satisfied.
(i) If each $C_i$ is a half-space, i.e., $c_i(x) = \langle x, v_i \rangle - d_i$ with $d_i \in \mathbb{R}$ and $v_i \in \mathcal{H}$ such that $v_i \neq 0$, and if, in addition, the vectors $\{v_i\}_{i=1}^{m}$ are linearly independent, then $H$ is a half-space;
(ii) $H$ is a closed ball if each $C_i$ is a closed ball;
(iii) $H$ is a closed ball if each $C_i$ is a closed ball or a half-space, and at least one of them is a closed ball.
Proof. 
(i) Obviously, for any $\{\beta_i\}_{i=1}^{m} \subset (0,1)$ with $\sum_{i=1}^{m} \beta_i = 1$, we have
$$\sum_{i=1}^{m} \beta_i c_i(x) = \left\langle x, \sum_{i=1}^{m} \beta_i v_i \right\rangle - \sum_{i=1}^{m} \beta_i d_i. \qquad (6)$$
Since $\{v_i\}_{i=1}^{m}$ is linearly independent, we assert that $\sum_{i=1}^{m} \beta_i v_i \neq 0$, and hence, it is easy to see from (6) that $H$ is a half-space.
(ii) If $C_i = \{x \in \mathcal{H} \mid c_i(x) \le 0\}$ is a closed ball with center $x_i \in \mathcal{H}$ and radius $r_i$, then $c_i(x) = \|x - x_i\|^2 - r_i^2$, $i = 1, 2, \ldots, m$. For any $\{\beta_i\}_{i=1}^{m} \subset (0,1)$ with $\sum_{i=1}^{m} \beta_i = 1$, noting the identity
$$\left\| x - \sum_{i=1}^{m} \beta_i x_i \right\|^2 = \left\| \sum_{i=1}^{m} \beta_i (x - x_i) \right\|^2 = \sum_{i=1}^{m} \beta_i \|x - x_i\|^2 - \sum_{i<j} \beta_i \beta_j \|x_i - x_j\|^2, \qquad (7)$$
we directly deduce
$$\sum_{i=1}^{m} \beta_i c_i(x) = \sum_{i=1}^{m} \beta_i \|x - x_i\|^2 - \sum_{i=1}^{m} \beta_i r_i^2 = \left\| x - \sum_{i=1}^{m} \beta_i x_i \right\|^2 + \sum_{i<j} \beta_i \beta_j \|x_i - x_j\|^2 - \sum_{i=1}^{m} \beta_i r_i^2.$$
Consequently,
$$H = \left\{ x \in \mathcal{H} \;\middle|\; \left\| x - \sum_{i=1}^{m} \beta_i x_i \right\|^2 \le \sum_{i=1}^{m} \beta_i r_i^2 - \sum_{i<j} \beta_i \beta_j \|x_i - x_j\|^2 \right\}. \qquad (8)$$
Since $\bigcap_{i=1}^{m} C_i \neq \emptyset$, there exists some $z \in \mathcal{H}$ such that $c_i(z) \le 0$ for all $i = 1, 2, \ldots, m$, which implies that
$$\sum_{i=1}^{m} \beta_i r_i^2 - \sum_{i<j} \beta_i \beta_j \|x_i - x_j\|^2 \ge \left\| z - \sum_{i=1}^{m} \beta_i x_i \right\|^2 \ge 0,$$
that is, $H$ is a closed ball.
(iii) Assume that $C_1$ is a closed ball and $C_2$ is a half-space; then $c_1$ and $c_2$ have the forms $c_1(x) := \|x - x_0\|^2 - r^2$ and $c_2(x) := \langle x, v \rangle - d$, respectively, where $x_0, v \in \mathcal{H}$, $r \in \mathbb{R}_+$ and $d \in \mathbb{R}$. For any $\beta \in (0,1)$, computing $\beta c_1(x) + (1 - \beta) c_2(x)$ yields
$$H = \left\{ x \in \mathcal{H} \;\middle|\; \left\| x - \left( x_0 - \tfrac{1-\beta}{2\beta} v \right) \right\|^2 \le \left\| x_0 - \tfrac{1-\beta}{2\beta} v \right\|^2 - \|x_0\|^2 + r^2 + \tfrac{1-\beta}{\beta} d \right\}.$$
By the same argument as in (ii), we assert that $\left\| x_0 - \tfrac{1-\beta}{2\beta} v \right\|^2 - \|x_0\|^2 + r^2 + \tfrac{1-\beta}{\beta} d \ge 0$, and this means that $H$ is indeed a closed ball. Together with (i) and (ii), this shows that the conclusion is true in the general case. □
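As a quick sanity check of (ii), the following sketch (our own illustration, not from the paper) verifies numerically that the convex combination of two ball constraints equals the ball constraint with the center and radius given by (8):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two closed balls in R^2 with nonempty intersection (both contain the origin).
x1, r1 = np.array([0.5, 0.0]), 1.0
x2, r2 = np.array([-0.5, 0.0]), 1.0
b1, b2 = 0.3, 0.7                          # convex weights, b1 + b2 = 1

center = b1 * x1 + b2 * x2                 # center from (8)
rad2 = b1 * r1**2 + b2 * r2**2 - b1 * b2 * np.linalg.norm(x1 - x2)**2

x = rng.standard_normal(2)                 # arbitrary test point
lhs = (b1 * (np.linalg.norm(x - x1)**2 - r1**2)
       + b2 * (np.linalg.norm(x - x2)**2 - r2**2))
rhs = np.linalg.norm(x - center)**2 - rad2
print(np.isclose(lhs, rhs))                # True: the combination is again a ball constraint
```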
Suppose $f : \mathcal{H} \to (-\infty, +\infty]$ is a proper, lower-semicontinuous (lsc), convex function. Recall that an element $\xi \in \mathcal{H}$ is said to be a subgradient of $f$ at $x$ if
$$f(z) \ge f(x) + \langle \xi, z - x \rangle, \quad z \in \mathcal{H}. \qquad (9)$$
We denote by $\partial f(x)$ the set of all subgradients of $f$ at $x$. Recall that $f$ is said to be subdifferentiable at $x$ if $\partial f(x) \neq \emptyset$, and $f$ is said to be subdifferentiable (on $\mathcal{H}$) if it is subdifferentiable at every $x \in \mathcal{H}$. Recall also that the inequality (9) is called the subdifferential inequality of $f$ at $x$.

3. The CPM for Solving Convex Feasibility Problems

In this section, the combination projection method (CPM) is proposed for solving the convex feasibility problem (CFP):
$$\text{Find a point } x^* \text{ such that } x^* \in C = \bigcap_{i=1}^{m} C_i, \qquad (10)$$
where
$$C_i = \{x \in \mathcal{H} \mid c_i(x) \le 0\}, \quad i = 1, 2, \ldots, m,$$
with $c_i : \mathcal{H} \to \mathbb{R}$ a convex function for each $i = 1, 2, \ldots, m$. The algorithm proposed below for solving the CFP (10) is called the combination projection method (CPM) because the projection used to update the next iterate is onto the level set of a convex combination of some of the $\{c_i\}_{i=1}^{m}$, combined in an appropriate way. Throughout this section, we always assume that $C \neq \emptyset$ and, for convenience, we use $I$ to denote the index set $\{1, 2, \ldots, m\}$.
Remark 1.
Algorithm 1 suits the case where the projections $\{P_{H_k}\}_{k=0}^{\infty}$ have closed-form representations. For example, according to Lemma 4, if each of $\{C_i\}_{i=1}^{m}$ is a closed ball or a half-space, then $H_k$ is also a closed ball or a half-space for each $k \ge 0$, and hence $P_{H_k}$ has a closed-form representation for all $k \ge 0$. In this case, Algorithm 1 is easily implementable.
Algorithm 1: (The Combination Projection Method)
Step 1:
Choose $x^0 \in \mathcal{H}$ arbitrarily and set $k := 0$.
Step 2:
Given the current iterate $x^k$, check the index set $I_k := \{i \in I \mid c_i(x^k) > 0\}$. If $I_k = \emptyset$, i.e., $c_i(x^k) \le 0$ for all $i = 1, 2, \ldots, m$, then stop; $x^k$ is a solution of the CFP (10).
Otherwise, select $\{\beta_i^{(k)}\}_{i \in I_k} \subset (0,1)$ such that $\sum_{i \in I_k} \beta_i^{(k)} = 1$, and construct the level set
$$H_k := \left\{ x \in \mathcal{H} \;\middle|\; \sum_{i \in I_k} \beta_i^{(k)} c_i(x) \le 0 \right\}.$$
Step 3:
Compute the new iterate
$$x^{k+1} := P_{H_k}(x^k).$$
Set $k := k + 1$ and return to Step 2.
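To make the steps concrete, here is a minimal sketch of Algorithm 1 for the half-space case of Lemma 4(i), using the equal weights $\beta_i^{(k)} = 1/|I_k|$ (the CPM1 rule of Section 4). The function name and iteration cap are ours; by Lemma 4(i), $H_k$ is itself a half-space, so $P_{H_k}$ has the closed form used in the last line:

```python
import numpy as np

def cpm_halfspaces(V, d, x0, max_iter=1000):
    """CPM for C_i = {x : <v_i, x> - d_i <= 0}.

    V is an (m, n) array whose rows are the vectors v_i, and d is an
    (m,) array. Since each c_i is affine, the combined level set H_k is
    itself a half-space (Lemma 4(i)), so P_{H_k} is explicit.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        c = V @ x - d                              # c_i(x^k) for all i
        I_k = np.where(c > 0)[0]                   # violated constraints
        if I_k.size == 0:
            return x                               # x^k solves the CFP (10)
        beta = np.full(I_k.size, 1.0 / I_k.size)   # CPM1 weights
        v = beta @ V[I_k]                          # normal vector of H_k
        val = beta @ c[I_k]                        # sum_i beta_i c_i(x^k) > 0
        x = x - (val / np.dot(v, v)) * v           # x^{k+1} = P_{H_k}(x^k)
    return x
```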
Remark 2.
The simplicity and ease of implementation of Algorithm 1 can be illustrated through a simple example in $\mathbb{R}^3$. Compute the projection $P_C u$, where $u = (0, 0, 4) \in \mathbb{R}^3$ and $C \subset \mathbb{R}^3$ is given by
$$C = \bigcap_{i=1}^{4} C_i := \bigcap_{i=1}^{4} \{(x_1, x_2, x_3) \in \mathbb{R}^3 \mid c_i(x_1, x_2, x_3) \le 0\},$$
where
$$c_1(x_1, x_2, x_3) = (x_1 - 1)^2 + 2x_2^2 + 4x_3^2 - 5, \quad c_2(x_1, x_2, x_3) = 2x_1^2 + (2x_2 - 1)^2 + x_3^2 - 2,$$
$$c_3(x_1, x_2, x_3) = (x_1 + 1)^2 + x_2^2 + 3x_3^2 - 4, \quad c_4(x_1, x_2, x_3) = x_1^2 + (2x_2 + 1)^2 + 2x_3^2 - 3.$$
Selecting the initial guess $x^0 = u$ and using the CPM (Algorithm 1), only one iteration step is needed to obtain the exact solution of the problem. Indeed, since $c_i(0, 0, 4) > 0$ for each $i = 1, 2, 3, 4$, we have $I_0 = \{1, 2, 3, 4\}$. Taking the convex combination coefficients $\beta_i = \frac{1}{4}$, $i = 1, 2, 3, 4$, the CPM first generates the set $H_0 = \{(x_1, x_2, x_3) \mid \frac{5x_1^2}{4} + \frac{11x_2^2}{4} + \frac{5x_3^2}{2} - \frac{5}{2} \le 0\}$, i.e., the level set of the convex function $\sum_{i=1}^{4} \beta_i c_i(x_1, x_2, x_3) = \frac{5x_1^2}{4} + \frac{11x_2^2}{4} + \frac{5x_3^2}{2} - \frac{5}{2}$, and then updates the new iterate $x^1 = P_{H_0} x^0 = (0, 0, 1) = P_C u$. However, if we adopt the SOP to obtain $P_C u$, the iteration process is complicated. On the one hand, although there is an expression for the projection onto an ellipsoid [8], obtaining a constant in the expression requires solving an algebraic equation. On the other hand, actual calculation shows that after several iterations, we can only obtain an approximate solution of $P_C u$.
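The one-step computation in this remark can be checked numerically. The sketch below (our own verification, with SciPy as an assumed dependency) projects $u = (0, 0, 4)$ onto the ellipsoid $H_0$ by root-finding the Lagrange multiplier $\mu$ in $p = (I + \mu A)^{-1} u$, where $A = \mathrm{diag}(5/2, 11/2, 5)$ is the Hessian of the quadratic defining $H_0$:

```python
import numpy as np
from scipy.optimize import brentq

a = np.array([5 / 2, 11 / 2, 5.0])   # Hessian diagonal: q(x) = 0.5*x'Ax - 5/2
u = np.array([0.0, 0.0, 4.0])        # the point to project (u lies outside H_0)

def boundary_gap(mu):
    p = u / (1 + mu * a)             # candidate projection for multiplier mu
    return 0.5 * np.dot(a, p * p) - 5 / 2   # zero exactly on the boundary of H_0

mu = brentq(boundary_gap, 0.0, 100.0)       # solve for the multiplier
p = u / (1 + mu * a)
print(np.round(p, 6))                        # -> [0. 0. 1.], as claimed above
```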
We have the following convergence result for Algorithm 1.
Theorem 1.
Assume that, for each $i = 1, 2, \ldots, m$, $c_i : \mathcal{H} \to \mathbb{R}$ is a bounded uniformly continuous (i.e., uniformly continuous on each bounded subset of $\mathcal{H}$) convex function. If $\beta_* = \inf\{\beta_i^{(k)} \mid i \in I_k, k \ge 0\} > 0$, then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by Algorithm 1 converges weakly to a solution of the CFP (10).
Proof. 
Obviously, we may assume that $x^k \notin C$ for all $k \ge 0$ with no loss of generality. By the definition of $H_k$, it is easy to see that $C \subseteq H_k$ holds for all $k \ge 0$. For any $x^* \in C$, we have by Lemma 2 that
$$\|x^{k+1} - x^*\|^2 = \|P_{H_k} x^k - P_{H_k} x^*\|^2 \le \|x^k - x^*\|^2 - \|x^k - x^{k+1}\|^2. \qquad (11)$$
From (11), we assert that $\{\|x^k - x^*\|\}$ is nonincreasing; hence, $\{x^k\}_{k=0}^{\infty}$ is bounded and $\lim_{k \to \infty} \|x^k - x^*\|^2$ exists. Furthermore, we also get
$$\sum_{k=0}^{\infty} \|x^k - x^{k+1}\|^2 < +\infty.$$
In particular, $\|x^k - x^{k+1}\| \to 0$ as $k \to \infty$. By Lemma 3, all we need to prove is that $\omega_w(x^k) \subseteq C$. To see this, take $\hat{x} \in \omega_w(x^k)$ and let $\{x^{k_j}\}_{j=1}^{\infty}$ be a subsequence of $\{x^k\}_{k=0}^{\infty}$ converging weakly to $\hat{x}$. Noticing $x^{k+1} \in H_k$, we get
$$\sum_{i \in I_k} \beta_i^{(k)} c_i(x^{k+1}) \le 0. \qquad (12)$$
For each fixed $i \in I$ and any $j \ge 0$, if $i \notin I_{k_j}$, then
$$c_i(x^{k_j}) \le 0. \qquad (13)$$
If $i \in I_{k_j}$, by virtue of the definition of $I_{k_j}$ and (12), we get
$$\beta_i^{(k_j)} c_i(x^{k_j}) \le \sum_{l \in I_{k_j}} \beta_l^{(k_j)} c_l(x^{k_j}) \le \sum_{l \in I_{k_j}} \beta_l^{(k_j)} c_l(x^{k_j}) - \sum_{l \in I_{k_j}} \beta_l^{(k_j)} c_l(x^{k_j+1}) \le \sum_{l=1}^{m} |c_l(x^{k_j}) - c_l(x^{k_j+1})|. \qquad (14)$$
Moreover, noting $\beta_* = \inf\{\beta_i^{(k)} \mid i \in I_k, k \ge 0\} > 0$, the combination of (13) and (14) yields
$$c_i(x^{k_j}) \le \frac{1}{\beta_*} \sum_{l=1}^{m} |c_l(x^{k_j}) - c_l(x^{k_j+1})|, \qquad (15)$$
for each $i = 1, 2, \ldots, m$ and all $j \ge 0$. Since $x^{k_j} \rightharpoonup \hat{x}$, $\|x^{k_j} - x^{k_j+1}\| \to 0$, and the $\{c_l\}_{l=1}^{m}$ are weakly lower semicontinuous and bounded uniformly continuous, we obtain $c_i(\hat{x}) \le 0$ by taking the limit in (15) as $j \to \infty$. Hence $\hat{x} \in C$ and $\omega_w(x^k) \subseteq C$. This completes the proof. □
The second algorithm for solving the CFP (10) is named the combination relaxation projection method (CRPM); it works for the case where the projection operators $\{P_{H_k}\}_{k=0}^{\infty}$ do not have closed-form formulae. In this case, we assume that the convex functions $\{c_i\}_{i=1}^{m}$ are subdifferentiable on $\mathcal{H}$.
Algorithm 2: (The Combination Relaxation Projection Method)
Step 1:
Choose $x^0 \in \mathcal{H}$ arbitrarily and set $k := 0$.
Step 2:
Given the current iterate $x^k$, check the index set $I_k := \{i \in I \mid c_i(x^k) > 0\}$. If $I_k = \emptyset$, i.e., $c_i(x^k) \le 0$ for all $i = 1, 2, \ldots, m$, then stop; $x^k$ is a solution of the CFP (10).
Otherwise, select $\{\beta_i^{(k)}\}_{i \in I_k} \subset (0,1)$ such that $\sum_{i \in I_k} \beta_i^{(k)} = 1$, and construct the half-space
$$H_k^R := \left\{ x \in \mathcal{H} \;\middle|\; \sum_{i \in I_k} \beta_i^{(k)} c_i(x^k) + \left\langle \sum_{i \in I_k} \beta_i^{(k)} \xi_k^i, x - x^k \right\rangle \le 0 \right\}, \qquad (16)$$
where $\xi_k^i \in \partial c_i(x^k)$ for each $i \in I_k$.
Step 3:
Compute the new iterate
$$x^{k+1} := P_{H_k^R}(x^k).$$
Set $k := k + 1$ and return to Step 2.
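Since $H_k^R$ is a half-space, the projection in Step 3 is explicit: writing $s_k = \sum_{i \in I_k} \beta_i^{(k)} c_i(x^k)$ and $g_k = \sum_{i \in I_k} \beta_i^{(k)} \xi_k^i$, we have $x^{k+1} = x^k - (s_k / \|g_k\|^2)\, g_k$, because $s_k > 0$ whenever $I_k \neq \emptyset$. The following is a minimal sketch (the naming is ours, and equal CPM1-style weights are assumed for simplicity):

```python
import numpy as np

def crpm(c_funcs, subgrads, x0, max_iter=1000):
    """Sketch of Algorithm 2 (CRPM).

    c_funcs[i](x) evaluates c_i(x); subgrads[i](x) returns some xi in the
    subdifferential of c_i at x. Weights are beta_i = 1/|I_k|.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        c = np.array([f(x) for f in c_funcs])
        I_k = np.where(c > 0)[0]
        if I_k.size == 0:
            return x                                 # x solves the CFP (10)
        beta = 1.0 / I_k.size                        # CPM1-style weights
        g = beta * sum(subgrads[i](x) for i in I_k)  # sum_i beta_i xi_k^i
        s = beta * c[I_k].sum()                      # sum_i beta_i c_i(x^k) > 0
        x = x - (s / np.dot(g, g)) * g               # x^{k+1} = P_{H_k^R}(x^k)
    return x
```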
The convergence of Algorithm 2 is given as follows.
Theorem 2.
Assume that, for each $i = 1, 2, \ldots, m$, $c_i : \mathcal{H} \to \mathbb{R}$ is a weakly lower semicontinuous, subdifferentiable, convex function such that the subdifferential mapping $\partial c_i$ is bounded (i.e., bounded on bounded subsets of $\mathcal{H}$). If $\beta_* = \inf\{\beta_i^{(k)} \mid i \in I_k, k \ge 0\} > 0$, then the sequence $\{x^k\}_{k=0}^{\infty}$ generated by Algorithm 2 converges weakly to a solution of the CFP (10).
Proof. 
With no loss of generality, we assume $x^k \notin C$ for all $k \ge 0$. First of all, we show that $H_k^R$ is a half-space, i.e., $\sum_{i \in I_k} \beta_i^{(k)} \xi_k^i \neq 0$. Indeed, if this were not the case, it could be asserted from (16) that $c_i(x^k) \le 0$ holds for each $i \in I_k$, contradicting the definition of $I_k$. We next show $C \subseteq H_k^R$. Indeed, for each $x \in C$, we have from the subdifferential inequality (9) that
$$c_i(x^k) + \langle \xi_k^i, x - x^k \rangle \le c_i(x) \le 0, \quad i \in I_k, \qquad (17)$$
where $\xi_k^i \in \partial c_i(x^k)$. Multiplying (17) by $\beta_i^{(k)}$ and summing over $i \in I_k$, we have
$$\sum_{i \in I_k} \beta_i^{(k)} c_i(x^k) + \left\langle \sum_{i \in I_k} \beta_i^{(k)} \xi_k^i, x - x^k \right\rangle \le 0.$$
By the definition (16) of $H_k^R$, this shows that $x \in H_k^R$, and hence $C \subseteq H_k^R$. For any $x^* \in C$, noting $x^* \in C \subseteq H_k^R$, we have by Lemma 2 that
$$\|x^{k+1} - x^*\|^2 = \|P_{H_k^R} x^k - P_{H_k^R} x^*\|^2 \le \|x^k - x^*\|^2 - \|x^{k+1} - x^k\|^2.$$
This implies that $\{x^k\}_{k=0}^{\infty}$ is bounded, $\lim_{k \to \infty} \|x^k - x^*\|^2$ exists, and $\lim_{k \to \infty} \|x^{k+1} - x^k\| = 0$. Now we verify that $\omega_w(x^k) \subseteq C$. Since $\{x^k\}_{k=0}^{\infty}$ is bounded and $\partial c_i$ ($i = 1, 2, \ldots, m$) is a bounded operator, there exists a constant $M \ge 0$ such that $\|\xi_k^i\| \le M$ for all $k \ge 0$ and $i \in I_k$. By the definition of $H_k^R$ and the fact that $x^{k+1} \in H_k^R$, we get
$$\sum_{i \in I_k} \beta_i^{(k)} c_i(x^k) + \left\langle \sum_{i \in I_k} \beta_i^{(k)} \xi_k^i, x^{k+1} - x^k \right\rangle \le 0. \qquad (18)$$
For each $i \in I$ and $k \ge 0$, if $i \notin I_k$, then
$$c_i(x^k) \le 0, \qquad (19)$$
and if $i \in I_k$, it follows from the definition of $H_k^R$ and (18) that
$$\beta_i^{(k)} c_i(x^k) \le \sum_{l \in I_k} \beta_l^{(k)} c_l(x^k) \le \left| \left\langle \sum_{l \in I_k} \beta_l^{(k)} \xi_k^l, x^{k+1} - x^k \right\rangle \right| \le M \|x^{k+1} - x^k\| \to 0 \quad (k \to \infty). \qquad (20)$$
Hence, for each $i \in I$, the combination of (19) and (20) leads to
$$c_i(x^k) \le \frac{M}{\beta_*} \|x^{k+1} - x^k\| \to 0 \quad (k \to \infty). \qquad (21)$$
From (21), the containment $\omega_w(x^k) \subseteq C$ follows immediately by an argument similar to the final part of the proof of Theorem 1. □

4. Numerical Results

In this section, we compare the behavior of the CPM (Algorithm 1) and the SOP by solving two synthetic examples in the Euclidean space $\mathbb{R}^n$. All the codes were written in Matlab R2010a, and all the numerical experiments were conducted on an HP Pavilion notebook with an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 4 GB RAM, running the Windows 7 Home Premium operating system.
Example 1.
Consider the convex feasibility problem:
$$\text{Find a point } x^* \in C = \bigcap_{i=1}^{m} C_i := \bigcap_{i=1}^{m} \{x \in \mathbb{R}^n \mid \langle v_i, x \rangle - d_i \le 0\}, \qquad (22)$$
where $\{v_i\}_{i=1}^{m} \subset \mathbb{R}^n$ and $\{d_i\}_{i=1}^{m}$ are nonnegative real numbers. Take $n = 6$, $m = 8$,
$$v_1 = (5.5, 10, 1.5, 10, 80, 260.7), \quad v_2 = (14, 3, 13.6, 14.5, 7.1, 200.3),$$
$$v_3 = (13.7, 13, 10, 390, 10, 179.5), \quad v_4 = (16, 17, 10.5, 16.5, 17.3, 99.3),$$
$$v_5 = (16.5, 15.7, 19.3, 3, 19, 98.5), \quad v_6 = (28, 90.1, 14.9, 17, 19, 89.7),$$
$$v_7 = (26, 6, 22.5, 15, 17, 5.3), \quad v_8 = (29.9, 11, 13.5, 5.9, 12.5, 4.3),$$
$d_1 = 1$, $d_2 = 1$, $d_3 = 2$, $d_4 = 1$, $d_5 = 2$, $d_6 = 1.2$, $d_7 = 2$, $d_8 = 1$, and the initial point $x^0$ is randomly chosen in $(0, 10)^6$.
Obviously, $0 \in C$, i.e., Problem (22) is solvable. We use $x^k = (x_1^k, x_2^k, \ldots, x_n^k)$ to denote the $k$-th iterate and define
$$\mathrm{Err}_k := \max_{1 \le i \le m} c_i(x^k) \qquad (23)$$
to measure the error of the $k$-th iteration, which also serves to check whether or not the proposed algorithm converges to a solution. In fact, it is easy to see that if $\mathrm{Err}_k$ is less than or equal to zero, then $x^k$ is an exact solution of Problem (22) and the iteration can be terminated; if $\mathrm{Err}_k$ is greater than zero, then $x^k$ is just an approximate solution, and the smaller $\mathrm{Err}_k$ is, the closer $x^k$ is to a solution.
Let $|I_k|$ denote the number of elements of the set $I_k$. We give two ways to choose $\beta_i^{(k)}$, implemented in the sketch following this list.
(1) $\beta_i^{(k)} = 1/|I_k|$, $i \in I_k$. Denote the corresponding combination projection method by CPM1.
(2) $\beta_i^{(k)} = c_i(x^k) / \sum_{j \in I_k} c_j(x^k)$, $i \in I_k$. Denote the corresponding combination projection method by CPM2.
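In code, the two weight rules read as follows (a hedged sketch; `c_vals` is assumed to hold the positive values $c_i(x^k)$, $i \in I_k$):

```python
import numpy as np

def weights_cpm1(c_vals):
    """Equal weights: beta_i = 1/|I_k|."""
    return np.full(len(c_vals), 1.0 / len(c_vals))

def weights_cpm2(c_vals):
    """Violation-proportional weights: beta_i = c_i(x^k) / sum_j c_j(x^k)."""
    c = np.asarray(c_vals, dtype=float)
    return c / c.sum()
```

CPM2 puts more weight on the constraints that are violated most at $x^k$, which biases the combined level set toward the largest violations.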
Table 1 illustrates that the set $I_k$ generally differs from iteration to iteration. From Figure 1, we conclude that the behaviors of CPM1 and CPM2 depend on the initial point $x^0$. The errors of CPM1 and CPM2 oscillate, which may be because only partial information about the convex sets $\{C_i\}_{i=1}^{m}$ is used in each iteration. The SOP, in contrast, uses all the information about the convex sets $\{C_i\}_{i=1}^{m}$ in each iteration, since it involves all the projections $\{P_{C_i}\}_{i=1}^{m}$. From Figure 2, CPM1 behaves better than the SOP.
Example 2.
Consider the linear system of equations:
$$Ax = b, \qquad (24)$$
where $A$ is an $m \times n$ matrix with $m < n$, and $b$ is a vector in $\mathbb{R}^m$. If noise is taken into consideration, Problem (24) is stated as
$$\|Ax - b\|^2 \le \epsilon, \qquad (25)$$
where $\epsilon > 0$ measures the level of the errors.
Let
$$C_i = \{x \in \mathbb{R}^n \mid |\langle A(i,:), x \rangle - b(i)| - \epsilon_i \le 0\}, \qquad (26)$$
where $\epsilon_i \ge 0$. It is easy to show that Problem (25) is equivalent to the convex feasibility problem:
$$\text{Find a point } x^* \in C = \bigcap_{i=1}^{m} C_i.$$
The set $C$ is nonempty since the linear system has infinitely many solutions.
Set
$$\mathrm{Err}_k := \max_{1 \le i \le m} |\langle A(i,:), x^k \rangle - b(i)|. \qquad (27)$$
The initial point $x^0$ is randomly chosen in $(0, 10)^n$. We compared the CPM2 and the SOP for different $m$ and $n$. From Figure 3, the behavior of the CPM2 is better than that of the SOP. The error of the CPM2 oscillates more than in Example 1, but the oscillation seems to decrease as the iteration count grows. The SOP behaves well when $m$ and $n$ are small, while its error is very large for large $m$ and $n$. In Figure 4, we compare the CPU time of the CPM2 and the SOP, which illustrates that the CPU time of the CPM2 is less than that of the SOP; moreover, the gap widens as the iterations proceed.
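A minimal setup of Example 2 can reuse the `crpm` sketch from Section 3, since each $c_i(x) = |\langle A(i,:), x \rangle - b(i)| - \epsilon_i$ is convex with subgradient $\mathrm{sign}(\langle A(i,:), x \rangle - b(i))\, A(i,:)$ wherever $c_i(x) > 0$. The data below are random placeholders with a common tolerance $\epsilon_i = \epsilon$, not the authors' test instance:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, eps = 20, 40, 1e-6
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)       # consistent system, so C is nonempty

# c_i(x) = |<A(i,:), x> - b(i)| - eps and one of its subgradients.
c_funcs  = [lambda x, i=i: abs(A[i] @ x - b[i]) - eps for i in range(m)]
subgrads = [lambda x, i=i: np.sign(A[i] @ x - b[i]) * A[i] for i in range(m)]

x0 = rng.uniform(0, 10, n)           # initial point in (0, 10)^n
x = crpm(c_funcs, subgrads, x0)
print(np.max(np.abs(A @ x - b)))     # Err_k of (27); small at an approximate solution
```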

5. Conclusions

In this paper, we proposed the combination projection method (CPM) for solving the convex feasibility problem (CFP). The CPM is simple, easy to implement, and converges quickly in our experiments. How to further speed up the convergence of the CPM by selecting the convex combination coefficients $\{\beta_i^{(k)}\}_{i \in I_k}$ in Algorithms 1 and 2 is worthy of further study.

Author Contributions

S.H. and Q.-L.D. contributed equally to this work.

Funding

This work was supported by the Fundamental Research Fund for the Central Universities (3122017078).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426.
  2. Bregman, L.M. The method of successive projections for finding a common point of convex sets. Sov. Math. Dokl. 1965, 6, 688–692.
  3. Aleyner, A.; Reich, S. Block-iterative algorithms for solving convex feasibility problems in Hilbert and Banach spaces. J. Math. Anal. Appl. 2008, 343, 427–435.
  4. López, G.; Martín-Márquez, V.; Wang, F.H.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 374–389.
  5. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360.
  6. Facchinei, F.; Pang, J.-S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Series in Operations Research; Springer: New York, NY, USA, 2003; Volume II.
  7. Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226.
  8. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Paris, France, 2013.
  9. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
  10. He, S.; Zhao, Z.; Luo, B. A relaxed self-adaptive CQ algorithm for the multiple-sets split feasibility problem. Optimization 2015, 64, 1907–1918.
  11. He, H.; Xu, H.K. Splitting methods for split feasibility problems with application to Dantzig selectors. Inverse Probl. 2017, 33, 055003.
  12. Censor, Y.; Gibali, A.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335.
  13. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
  14. von Neumann, J. Functional Operators, Volume II: The Geometry of Orthogonal Spaces; Annals of Mathematics Studies; Reprint of 1933 lecture notes; Princeton University Press: Princeton, NJ, USA, 1950; Volume 22.
  15. Zaslavski, A.J. Approximate Solutions of Common Fixed-Point Problems; Springer: New York, NY, USA, 2018.
  16. Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Hawkes, P., Ed.; Academic Press: New York, NY, USA, 1996; Volume 95, pp. 155–270.
  17. Censor, Y.; Gibali, A.; Lenzen, F.; Schnörr, C. The implicit convex feasibility problem and its application to adaptive image denoising. J. Comput. Math. 2016, 34, 1–16.
  18. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  19. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  20. Zhao, X.; Ng, K.F.; Li, C.; Yao, J.-C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641.
  21. Burachik, R.S.; Martín-Márquez, V. An approach for the convex feasibility problem via monotropic programming. J. Math. Anal. Appl. 2017, 453, 746–760.
  22. Gibali, A.; Küfer, K.-H.; Reem, D.; Süss, P. A generalized projection-based scheme for solving convex constrained optimization problems. Comput. Optim. Appl. 2018, 70, 737–762.
  23. Aragón Artacho, F.J.; Censor, Y.; Gibali, A. The cyclic Douglas–Rachford algorithm with r-sets-Douglas–Rachford operators. Optim. Methods Softw. 2018.
  24. Censor, Y.; Zenios, S.A. Parallel Optimization: Theory, Algorithms, and Applications; Oxford University Press: New York, NY, USA, 1997.
  25. Hundal, H.S. An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57, 35–61.
  26. Gubin, L.G.; Polyak, B.T.; Raik, E.V. The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7, 1–24.
  27. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3, 459–470.
  28. He, S.; Zhao, Z.; Luo, B. A simple algorithm for computing projection onto intersection of finite level sets. J. Inequal. Appl. 2014, 2014, 307.
  29. He, S.; Tian, H. Selective projection methods for solving a class of variational inequalities. Numer. Algorithms 2018, 1–18.
  30. He, S.; Tian, H.; Xu, H.K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. 2018, 19, 1199–1215.
  31. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  32. López Acedo, G.; Xu, H.K. Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2007, 67, 2258–2271.
Figure 1. Comparison of the two choices of $\beta_i^{(k)}$ for different random choices of $x^0$ in Example 1.
Figure 2. Comparison of the CPM and the successive orthogonal projection method (SOP) for Example 1.
Figure 3. Comparison of the CPM1 and the SOP for different $m$ and $n$ in Example 2: (a) $(m, n) = (20, 40)$; (b) $(m, n) = (200, 400)$; (c) $(m, n) = (500, 1000)$; (d) $(m, n) = (1000, 2000)$.
Figure 4. Comparison of the CPU time of the CPM and the SOP for $(m, n) = (1000, 2000)$ in Example 2.
Table 1. Comparison of $I_k$ for the combination projection methods CPM1 and CPM2.

k | $I_k$ (CPM1)       | $I_k$ (CPM2)    | $|I_k|$ (CPM1) | $|I_k|$ (CPM2)
1 | {1, 8}             | {1, 8}          | 2 | 2
2 | {2, 4, 5, 6, 8}    | {2, 4, 5, 6, 8} | 5 | 5
3 | {1, 5, 8}          | {1, 8}          | 3 | 2
4 | {2, 4, 5, 6, 8}    | {2, 4, 5, 6, 8} | 5 | 5
5 | {1, 5, 6, 8}       | {1, 5, 8}       | 4 | 3
6 | {2, 4, 5, 7, 8}    | {2, 4, 5, 6, 8} | 5 | 5
7 | {1, 4, 5, 7, 8}    | {1, 5, 8}       | 5 | 3
8 | {2, 4, 5, 6, 7, 8} | {2, 4, 5, 6, 8} | 6 | 5
