1. Introduction
Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Recall that the projection operator of a nonempty closed convex subset $D$ of $\mathcal{H}$, $P_D : \mathcal{H} \to D$, is defined by
$$P_D(x) := \arg\min_{y \in D}\|x - y\|, \quad x \in \mathcal{H}.$$
Projections are indispensable for solving convex feasibility problems (CFP) [1,2,3], split feasibility problems (SFP) [4,5], and variational inequality problems (VIP) [6,7].
If the set $D$ is simple, such as a hyperplane or a half-space, the projection onto $D$ can be calculated explicitly. In general, however, $D$ is complex and $P_D$ has no closed-form formula, so the computation of $P_D$ is rather difficult [8]. How to compute $P_D$ efficiently is therefore an important and interesting problem. Fukushima [9] suggested the half-space relaxation projection method, and this idea was followed by many authors to introduce relaxed projection algorithms for solving the SFP [10,11] and the VIP [12,13].
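For instance, for a half-space $D = \{y \in \mathbb{R}^n : \langle a, y\rangle \leq b\}$, the projection has the well-known closed form $P_D(x) = x - \frac{\max\{0,\,\langle a, x\rangle - b\}}{\|a\|^2}\,a$. A minimal NumPy sketch (the data at the bottom are illustrative):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Project x onto the half-space {y : <a, y> <= b} via the closed-form formula."""
    violation = np.dot(a, x) - b
    if violation <= 0:            # x already lies in the half-space
        return x.copy()
    return x - (violation / np.dot(a, a)) * a

x = np.array([3.0, 4.0])
a = np.array([1.0, 0.0])
p = project_halfspace(x, a, 1.0)  # projects onto {y : y[0] <= 1}
```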
Let $m$ be a positive integer and $\{C_i\}_{i=1}^m$ be a finite family of nonempty closed convex subsets of $\mathcal{H}$ with a nonempty intersection. The convex feasibility problem [14] is to find a point
$$x^* \in C := \bigcap_{i=1}^m C_i, \tag{1}$$
which is a very common problem in diverse areas of mathematics and the physical sciences [15]. In the last twenty years, there has been growing interest in the CFP since it has found various applications in imaging science [16,17], medical treatment [18], and statistics [19].
A great deal of literature on methods for solving the CFP has been published (e.g., [20,21,22,23]). The classical method traces back at least to the alternating projection method introduced by von Neumann [14] in the 1930s, which is called the successive orthogonal projection method (SOP) in Reference [24]. The SOP in Reference [14] solves the CFP with $C_1$ and $C_2$ being two closed subspaces of $\mathcal{H}$, and generates a sequence $\{x_k\}$ via the iterative process
$$x_{k+1} = P_{C_1}P_{C_2}(x_k), \quad k \geq 0, \tag{2}$$
where $x_0 \in \mathcal{H}$ is an arbitrary initial guess. Von Neumann [14] proved that the sequence $\{x_k\}$ converges strongly to $P_{C_1 \cap C_2}(x_0)$. In 1965, Bregman [2] extended von Neumann's results to the case where $C_1$ and $C_2$ are closed convex subsets and proved weak convergence. Hundal [25] showed, by providing an explicit counterexample, that for two closed convex subsets $C_1$ and $C_2$ the SOP does not always converge in norm. Further results on the SOP were obtained by Gubin et al. [26] and Bruck et al. [27].
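To make the iteration (2) concrete, the following sketch alternates projections between two simple convex sets in $\mathbb{R}^2$; the sets, and hence the projectors, are illustrative choices rather than the sets used in the references:

```python
import numpy as np

def proj_C1(x):
    # projection onto C1 = {y in R^2 : y[1] = 0} (a line through the origin)
    return np.array([x[0], 0.0])

def proj_C2(x):
    # projection onto C2 = {y in R^2 : y[0] >= 1} (a half-plane)
    return np.array([max(x[0], 1.0), x[1]])

x = np.array([-2.0, 3.0])
for _ in range(50):              # SOP iteration: x_{k+1} = P_{C1} P_{C2}(x_k)
    x = proj_C1(proj_C2(x))
# x approaches a point of the intersection C1 ∩ C2
```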
The SOP is the most fundamental method for solving the CFP, and many existing algorithms [24,28] can be regarded as generalizations or variants of the SOP. Let $\{C_i\}_{i=1}^m$ be a finite family of level sets of convex functions $c_i : \mathcal{H} \to \mathbb{R}$ (i.e., $C_i = \{x \in \mathcal{H} : c_i(x) \leq 0\}$) such that $C = \bigcap_{i=1}^m C_i \neq \emptyset$. Adopting Fukushima's relaxation technique [9], He et al. [28] introduced a contraction-type sequential projection algorithm which generates the iterative process
$$x_{k+1} = \alpha_k u + (1 - \alpha_k)P_{C_m^k}P_{C_{m-1}^k}\cdots P_{C_1^k}(x_k), \quad k \geq 0, \tag{3}$$
where the sequence $\{\alpha_k\} \subset (0,1)$, $u \in \mathcal{H}$ is a given point, and $\{C_i^k\}_{i=1}^m$ is a finite family of half-spaces such that $C_i \subseteq C_i^k$ for $i = 1, \dots, m$ and all $k \geq 0$. They proved that the sequence $\{x_k\}$ converges strongly to $P_C(u)$ under certain conditions. Because the projection operators onto half-spaces have closed-form formulae, algorithm (3) seems easy to implement. However, one common feature of SOP-type algorithms is that they need to evaluate all the projections $P_{C_i}$ (or relaxed projections $P_{C_i^k}$) in each iteration (see, e.g., Reference [24]), which results in prohibitive computational cost for large-scale problems.
Therefore, to solve the CFP (1) efficiently, it is necessary to design methods which use fewer projections in each iteration. He et al. [29,30] proposed the selective projection method (SPM) for solving the CFP (1), where $C$ is the intersection of a finite family of level sets of convex functions. An advantage of the SPM is that only one (appropriately selected) projection needs to be computed in each iteration, while weak convergence of the algorithm is still guaranteed. More precisely, the SPM consists of two steps in each iteration. In step one, once the $k$-th iterate $x_k$ is obtained, according to a certain criterion, we select one set $C_{i_k}$ or $C_{i_k}^k$ from the sets $\{C_i\}_{i=1}^m$ or the relaxed sets $\{C_i^k\}_{i=1}^m$, respectively, where $C_i^k$ is some half-space containing $C_i$ for all $i$ and $k$. In step two, we then update the new iterate $x_{k+1}$ via the process
$$x_{k+1} = P_{C_{i_k}}(x_k) \quad \big(\text{or } x_{k+1} = P_{C_{i_k}^k}(x_k)\big), \quad k \geq 0. \tag{4}$$
Because (4) only involves one projection, the SPM is simpler than SOP-type algorithms.
The main purpose of this paper is to propose a new method, called the combination projection method (CPM), for solving the convex feasibility problem of finding a point
$$x^* \in C := \bigcap_{i=1}^m \{x \in \mathcal{H} : c_i(x) \leq 0\},$$
where $m$ is a positive integer and $c_i : \mathcal{H} \to \mathbb{R}$ ($i = 1, \dots, m$) are convex functions. The key idea of the CPM is that, for the current iterate $x_k$, it first constructs a new level set $H_k$ through a convex combination of some of the functions $c_i$, and then updates the new iterate via the projection $x_{k+1} = P_{H_k}(x_k)$. Simplicity and ease of implementation are two advantages of our method, since only one projection is used in each iteration and this projection is easy to calculate. To make the CPM easily implementable, we also introduce the combination relaxation projection method (CRPM), which only involves projections onto half-spaces. Weak convergence theorems are proved, and the numerical results show the advantages of our methods. In fact, the methods in this paper can easily be extended to solve other nonlinear problems, for example, the SFP and the VIP.
2. Preliminaries
Let $\mathcal{H}$ be a real Hilbert space and $T : \mathcal{H} \to \mathcal{H}$ be a mapping. Recall that:
- $T$ is nonexpansive if $\|Tx - Ty\| \leq \|x - y\|$ for all $x, y \in \mathcal{H}$;
- $T$ is firmly nonexpansive if $\|Tx - Ty\|^2 \leq \langle Tx - Ty, x - y\rangle$ for all $x, y \in \mathcal{H}$;
- $T$ is an averaged mapping if there exist some $\alpha \in (0, 1)$ and a nonexpansive mapping $S : \mathcal{H} \to \mathcal{H}$ such that $T = (1 - \alpha)I + \alpha S$; in this case, $T$ is also said to be $\alpha$-averaged;
- $T$ is inverse strongly monotone (ISM) if there exists some $\nu > 0$ such that $\langle Tx - Ty, x - y\rangle \geq \nu\|Tx - Ty\|^2$ for all $x, y \in \mathcal{H}$; in this case, we say that $T$ is $\nu$-ISM.
Lemma 1 ([31]). For a mapping $T : \mathcal{H} \to \mathcal{H}$, the following are equivalent:
- (i) $T$ is $\frac{1}{2}$-averaged;
- (ii) $T$ is 1-ISM;
- (iii) $T$ is firmly nonexpansive;
- (iv) $I - T$ is firmly nonexpansive.
Recall that the projection onto a closed convex subset $D$ of $\mathcal{H}$ is defined by $P_D(x) := \arg\min_{y \in D}\|x - y\|$, $x \in \mathcal{H}$. It is well known that $P_D(x)$ is characterized by the inequality
$$\langle x - P_D(x), z - P_D(x)\rangle \leq 0, \quad \forall z \in D.$$
Some useful properties of the projection operators are collected in the lemma below.
Lemma 2 ([31]). For any nonempty closed convex subset $D$ of $\mathcal{H}$, the projection $P_D$ is both $\frac{1}{2}$-averaged and 1-ISM. Equivalently, $P_D$ is firmly nonexpansive.

Lemma 3 ([32]). Let $D$ be a nonempty closed convex subset of $\mathcal{H}$. Let $\{x_k\} \subset \mathcal{H}$ satisfy the properties:
- (i) $\lim_{k\to\infty}\|x_k - z\|$ exists for each $z \in D$;
- (ii) every weak cluster point of $\{x_k\}$ lies in $D$, i.e., $\omega_w(x_k) \subseteq D$.

Then, $\{x_k\}$ converges weakly to a point in $D$.
Lemma 4. Let $\{c_i\}_{i=1}^m$ be a finite family of convex functions defined on $\mathcal{H}$ such that their level sets $C_i := \{x \in \mathcal{H} : c_i(x) \leq 0\}$, $i = 1, \dots, m$, have a nonempty intersection. Let $\{\omega_i\}_{i=1}^m \subset (0, 1)$ with $\sum_{i=1}^m \omega_i = 1$, and set $H := \{x \in \mathcal{H} : \sum_{i=1}^m \omega_i c_i(x) \leq 0\}$. Then, the following properties are satisfied:
- (i) If each $C_i$ is a half-space, i.e., $c_i(x) = \langle a_i, x\rangle - b_i$ with $a_i \in \mathcal{H}\setminus\{0\}$ and $b_i \in \mathbb{R}$ such that $C_i = \{x \in \mathcal{H} : \langle a_i, x\rangle \leq b_i\}$, and if, in addition, the vector group $\{a_i\}_{i=1}^m$ is linearly independent, then $H$ is a half-space;
- (ii) $H$ is a closed ball if each $C_i$ is a closed ball;
- (iii) $H$ is a closed ball if each $C_i$ is a closed ball or a half-space, and at least one of them is a closed ball.
Proof. (i) Obviously, for any $\{\omega_i\}_{i=1}^m \subset (0,1)$ with $\sum_{i=1}^m \omega_i = 1$, we have
$$\sum_{i=1}^m \omega_i c_i(x) = \Big\langle \sum_{i=1}^m \omega_i a_i, x\Big\rangle - \sum_{i=1}^m \omega_i b_i. \tag{6}$$
Since $\{a_i\}_{i=1}^m$ is a linearly independent group, we assert that $\sum_{i=1}^m \omega_i a_i \neq 0$, and hence, it is easy to see from (6) that $H$ is a half-space.

(ii) If $C_i$ is a closed ball with center $z_i$ and radius $r_i$, then $c_i(x) = \|x - z_i\|^2 - r_i^2$. For any $\{\omega_i\}_{i=1}^m \subset (0,1)$ with $\sum_{i=1}^m \omega_i = 1$, noting the identity
$$\sum_{i=1}^m \omega_i\|x - z_i\|^2 = \Big\|x - \sum_{i=1}^m \omega_i z_i\Big\|^2 + \sum_{i=1}^m \omega_i\|z_i\|^2 - \Big\|\sum_{i=1}^m \omega_i z_i\Big\|^2,$$
we obtain
$$\sum_{i=1}^m \omega_i c_i(x) = \Big\|x - \sum_{i=1}^m \omega_i z_i\Big\|^2 - \rho, \quad \rho := \sum_{i=1}^m \omega_i r_i^2 - \sum_{i=1}^m \omega_i\|z_i\|^2 + \Big\|\sum_{i=1}^m \omega_i z_i\Big\|^2. \tag{7}$$
Since $\bigcap_{i=1}^m C_i \neq \emptyset$, there exists some $x_0$ such that $c_i(x_0) \leq 0$ for all $i$; thus $\sum_{i=1}^m \omega_i c_i(x_0) \leq 0$, and this implies that $\rho \geq 0$; that is, $H$ is a closed ball.

(iii) It suffices to consider the case of one ball and one half-space. Assume that $C_1$ is a closed ball and $C_2$ is a half-space; then $c_1$ and $c_2$ have the forms $c_1(x) = \|x - z_1\|^2 - r_1^2$ and $c_2(x) = \langle a_2, x\rangle - b_2$, respectively, where $z_1 \in \mathcal{H}$, $r_1 > 0$, $a_2 \in \mathcal{H}\setminus\{0\}$, and $b_2 \in \mathbb{R}$. For any $\omega_1, \omega_2 \in (0,1)$ with $\omega_1 + \omega_2 = 1$, we have from a direct calculation that
$$\omega_1 c_1(x) + \omega_2 c_2(x) = \omega_1\Big\|x - \Big(z_1 - \frac{\omega_2}{2\omega_1}a_2\Big)\Big\|^2 - \rho, \quad \rho := \omega_1 r_1^2 + \omega_2 b_2 - \omega_2\langle a_2, z_1\rangle + \frac{\omega_2^2}{4\omega_1}\|a_2\|^2. \tag{8}$$
By using the same argument as in (ii), we assert that $\rho \geq 0$, and this means that $H$ is indeed a closed ball. This, together with (i) and (ii), indicates that the conclusion is true in the general case. □
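The completing-the-square computation in parts (ii) and (iii) can be checked numerically. The sketch below (with illustrative data) verifies the identity (8) for a ball constraint combined with a half-space constraint:

```python
import numpy as np

# ball constraint: c1(x) = ||x - z1||^2 - r1^2 <= 0
z1, r1 = np.array([0.0, 0.0]), 2.0
# half-space constraint: c2(x) = <a2, x> - b2 <= 0
a2, b2 = np.array([1.0, 1.0]), 3.0
w1, w2 = 0.5, 0.5                      # convex combination weights

def combined(x):
    return w1 * (np.dot(x - z1, x - z1) - r1**2) + w2 * (np.dot(a2, x) - b2)

# completing the square: w1*c1(x) + w2*c2(x) = w1*||x - center||^2 - rho
center = z1 - (w2 / (2.0 * w1)) * a2
rho = w1 * r1**2 + w2 * b2 - w2 * np.dot(a2, z1) + (w2**2 / (4.0 * w1)) * np.dot(a2, a2)

x = np.array([1.3, -0.7])              # any test point works; the identity is algebraic
lhs = combined(x)
rhs = w1 * np.dot(x - center, x - center) - rho
assert abs(lhs - rhs) < 1e-10          # combined set is the ball ||x - center||^2 <= rho/w1
```

Since the ball and the half-space above intersect, $\rho \geq 0$ as the lemma asserts, so the combined level set is a genuine closed ball.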
Suppose $f : \mathcal{H} \to (-\infty, +\infty]$ is a proper, lower-semicontinuous (lsc), convex function. Recall that an element $\xi \in \mathcal{H}$ is said to be a subgradient of $f$ at $x$ if
$$f(z) \geq f(x) + \langle \xi, z - x\rangle, \quad \forall z \in \mathcal{H}. \tag{9}$$
We denote by $\partial f(x)$ the set of all subgradients of $f$ at $x$. Recall that $f$ is said to be subdifferentiable at $x$ if $\partial f(x) \neq \emptyset$, and $f$ is said to be subdifferentiable (on $\mathcal{H}$) if it is subdifferentiable at every $x \in \mathcal{H}$. Recall also that the inequality (9) is called the subdifferential inequality of $f$ at $x$.
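As a concrete illustration, for the differentiable convex function $f(x) = \|x - z\|^2 - r^2$ (a ball constraint), the subdifferential reduces to the gradient, $\partial f(x) = \{2(x - z)\}$, and the subdifferential inequality (9) can be checked directly (the data below are illustrative):

```python
import numpy as np

z, r = np.array([1.0, 2.0]), 1.5

def f(x):
    return np.dot(x - z, x - z) - r**2

def subgrad(x):
    return 2.0 * (x - z)   # f is differentiable, so the subgradient is the gradient

x = np.array([3.0, -1.0])
xi = subgrad(x)
# subdifferential inequality (9): f(y) >= f(x) + <xi, y - x> for all y
for y in [np.array([0.0, 0.0]), np.array([5.0, 5.0]), z]:
    assert f(y) >= f(x) + np.dot(xi, y - x) - 1e-12
```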
3. The CPM for Solving Convex Feasibility Problems
In this section, the combination projection method (CPM) is proposed for solving the convex feasibility problem (CFP):
$$\text{find } x^* \in C := \bigcap_{i=1}^m \{x \in \mathcal{H} : c_i(x) \leq 0\}, \tag{10}$$
where $m$ is a positive integer and $c_i : \mathcal{H} \to \mathbb{R}$ is a convex function for each $i = 1, \dots, m$. The algorithm proposed below for solving the CFP (10) is called the combination projection method (CPM) for the reason that the projection used to update the next iterate is onto the level set of a convex combination of some of the functions $c_i$, chosen in an appropriate way. Throughout this section, we always assume that $C \neq \emptyset$ and use $I$ to represent the index set $\{1, 2, \dots, m\}$ for convenience.
Remark 1. Algorithm 1 suits the case where the projections $P_{H_k}$ have closed-form representations. For example, according to Lemma 4, if each of $\{C_i\}_{i=1}^m$ is a closed ball or a half-space, then $H_k$ is also a closed ball or a half-space for each $k$, and hence, $P_{H_k}$ has a closed-form representation for all $k$. In this case, Algorithm 1 is easily implementable.
Algorithm 1: (The Combination Projection Method)
- Step 1: Choose $x_0 \in \mathcal{H}$ arbitrarily and set $k = 0$.
- Step 2: Given the current iterate $x_k$, check the index set $I_k := \{i \in I : c_i(x_k) > 0\}$. If $I_k = \emptyset$, i.e., $c_i(x_k) \leq 0$ for all $i \in I$, then stop and $x_k$ is a solution of the CFP (10). Otherwise, select weights $\{\omega_i^{(k)}\}_{i \in I_k} \subset (0, 1]$ such that $\sum_{i \in I_k}\omega_i^{(k)} = 1$, and construct the level set
$$H_k := \Big\{x \in \mathcal{H} : \sum_{i \in I_k}\omega_i^{(k)} c_i(x) \leq 0\Big\}.$$
- Step 3: Set $x_{k+1} = P_{H_k}(x_k)$ and return to Step 2.
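To illustrate how Algorithm 1 might be realized when every $c_i$ is an affine (half-space) constraint $c_i(x) = \langle a_i, x\rangle - b_i$, recall that the combined level set $H_k$ is then again a half-space (Lemma 4(i)), so the projection in Step 3 is explicit. The following Python sketch uses equal weights on $I_k$; the problem data at the bottom are illustrative, not taken from the paper:

```python
import numpy as np

def cpm_halfspaces(A, b, x0, max_iter=1000):
    """Combination projection method for C = {x : A @ x <= b} (componentwise)."""
    x = x0.astype(float)
    for _ in range(max_iter):
        c = A @ x - b                          # c_i(x_k) for all i
        I_k = np.where(c > 0)[0]               # violated constraints
        if I_k.size == 0:                      # x_k solves the CFP
            return x
        w = np.full(I_k.size, 1.0 / I_k.size)  # equal combination weights
        a_comb = w @ A[I_k]                    # combined half-space H_k = {y : <a_comb, y> <= b_comb}
        b_comb = w @ b[I_k]
        # Step 3: project x_k onto H_k (closed-form half-space projection)
        x = x - ((a_comb @ x - b_comb) / (a_comb @ a_comb)) * a_comb
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x = cpm_halfspaces(A, b, np.array([3.0, 2.0]))
```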
Remark 2. The simplicity and ease of implementation of Algorithm 1 can be illustrated through a simple example: compute the projection $P_C(x_0)$, where $C = C_1 \cap C_2$ is the intersection of two ellipsoids, i.e., of the level sets $C_i = \{x : c_i(x) \leq 0\}$ of two convex quadratic functions $c_1$ and $c_2$. Selecting a suitable initial guess $x_0$ and using the CPM (Algorithm 1), only one iteration step is needed to get the exact solution of the problem. Indeed, since $c_i(x_0) > 0$ for each $i$, then $I_0 = \{1, 2\}$. Taking suitable convex combination coefficients $\omega_1^{(0)}$ and $\omega_2^{(0)}$, the CPM firstly generates a new set $H_0$, i.e., the level set of the convex function $\omega_1^{(0)}c_1 + \omega_2^{(0)}c_2$, and then updates the new iterate $x_1 = P_{H_0}(x_0)$. However, if we adopt the SOP to get $P_C(x_0)$, the iteration process is complicated. On the one hand, although there is an expression for the projection onto an ellipsoid [8], obtaining a constant in the expression requires solving an algebraic equation. On the other hand, actual calculation shows that after several iterations, the SOP yields only an approximate solution of $P_C(x_0)$. We have the following convergence result for Algorithm 1.
Theorem 1. Assume that for each $i \in I$, $c_i : \mathcal{H} \to \mathbb{R}$ is a bounded uniformly continuous (i.e., uniformly continuous on each bounded subset of $\mathcal{H}$) convex function. If $\varpi := \inf\{\omega_i^{(k)} : i \in I_k, k \geq 0\} > 0$, then the sequence $\{x_k\}$ generated by Algorithm 1 converges weakly to a solution of the CFP (10).

Proof. Obviously, we may assume that $I_k \neq \emptyset$ for all $k \geq 0$ with no loss of generality. By the definition of $H_k$, it is very easy to see that $C \subseteq H_k$ holds for all $k \geq 0$. For any $z \in C$, we have by Lemma 2 that
$$\|x_{k+1} - z\|^2 = \|P_{H_k}(x_k) - P_{H_k}(z)\|^2 \leq \|x_k - z\|^2 - \|x_{k+1} - x_k\|^2. \tag{11}$$
From (11), we assert that $\{\|x_k - z\|\}$ is nonincreasing; hence, $\{x_k\}$ is bounded and $\lim_{k\to\infty}\|x_k - z\|$ exists. Furthermore, we also get $\sum_{k=0}^{\infty}\|x_{k+1} - x_k\|^2 \leq \|x_0 - z\|^2$. Particularly, $\|x_{k+1} - x_k\| \to 0$ as $k \to \infty$. By Lemma 3, all we need to prove is that $\omega_w(x_k) \subseteq C$, where $\omega_w(x_k)$ denotes the set of weak cluster points of $\{x_k\}$. To see this, take $\hat{x} \in \omega_w(x_k)$ and let $\{x_{k_j}\}$ be a subsequence of $\{x_k\}$ weakly converging to $\hat{x}$. Noticing $x_{k+1} = P_{H_k}(x_k) \in H_k$, we get
$$\sum_{i \in I_k}\omega_i^{(k)} c_i(x_{k+1}) \leq 0. \tag{12}$$
For each fixed $i \in I$ and any $k \geq 0$, if $i \notin I_k$, then
$$c_i(x_k) \leq 0. \tag{13}$$
If $i \in I_k$, by virtue of the definition of $I_k$ and (12), we get
$$\varpi c_i(x_k) \leq \omega_i^{(k)} c_i(x_k) \leq \sum_{j \in I_k}\omega_j^{(k)} c_j(x_k) \leq \sum_{j \in I_k}\omega_j^{(k)}\big(c_j(x_k) - c_j(x_{k+1})\big) \leq \varepsilon_k, \tag{14}$$
where $\varepsilon_k := \max_{1 \leq j \leq m}|c_j(x_k) - c_j(x_{k+1})|$. Moreover, noting $\varpi > 0$, the combination of (13) and (14) yields
$$c_i(x_k) \leq \max\{0, \varepsilon_k/\varpi\} \tag{15}$$
for each $i \in I$ and all $k \geq 0$. Since $\{x_k\}$ is bounded, $\|x_{k+1} - x_k\| \to 0$, and the functions $c_i$ are bounded uniformly continuous, we have $\varepsilon_k \to 0$; since the $c_i$, being convex and continuous, are $w$-lsc, we can obtain $c_i(\hat{x}) \leq 0$ for each $i \in I$ by taking the limit in (15) as $j \to \infty$ along the subsequence $\{x_{k_j}\}$. Hence $\hat{x} \in C$ and $\omega_w(x_k) \subseteq C$. This completes the proof. □
The second algorithm for solving the CFP (10) is named the combination relaxation projection method (CRPM); it works for the case where the projection operators $P_{H_k}$ do not have closed-form formulae. In this case, we assume that the convex functions $c_i$ ($i \in I$) are subdifferentiable on $\mathcal{H}$.
The convergence of Algorithm 2 is given as follows.
Algorithm 2: (The Combination Relaxation Projection Method)
- Step 1: Choose $x_0 \in \mathcal{H}$ arbitrarily and set $k = 0$.
- Step 2: Given the current iterate $x_k$, check the index set $I_k := \{i \in I : c_i(x_k) > 0\}$. If $I_k = \emptyset$, i.e., $c_i(x_k) \leq 0$ for all $i \in I$, then stop and $x_k$ is a solution of the CFP (10). Otherwise, select weights $\{\omega_i^{(k)}\}_{i \in I_k} \subset (0, 1]$ such that $\sum_{i \in I_k}\omega_i^{(k)} = 1$, and construct a half-space by
$$H_k := \Big\{x \in \mathcal{H} : \sum_{i \in I_k}\omega_i^{(k)}\big(c_i(x_k) + \langle \xi_i^k, x - x_k\rangle\big) \leq 0\Big\}, \tag{16}$$
where $\xi_i^k \in \partial c_i(x_k)$ for each $i \in I_k$.
- Step 3: Set $x_{k+1} = P_{H_k}(x_k)$ and return to Step 2.
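For non-affine $c_i$, Step 2 of Algorithm 2 linearizes each violated constraint at $x_k$ with a subgradient, so that $H_k$ in (16) is a half-space with an explicit projection. Below is a sketch of one CRPM step for ball constraints $c_i(x) = \|x - z_i\|^2 - r_i^2$ (whose subgradient is the gradient $2(x - z_i)$), using equal weights; the data are illustrative:

```python
import numpy as np

def crpm_step(x, centers, radii):
    """One step of the combination relaxation projection method for ball constraints."""
    c = np.array([np.dot(x - z, x - z) - r**2 for z, r in zip(centers, radii)])
    I_k = np.where(c > 0)[0]                 # violated constraints
    if I_k.size == 0:
        return x, True                       # x already solves the CFP
    w = np.full(I_k.size, 1.0 / I_k.size)    # equal combination weights
    xi = np.array([2.0 * (x - centers[i]) for i in I_k])   # subgradients at x
    a = w @ xi                               # combined subgradient
    b_hs = np.dot(a, x) - w @ c[I_k]         # H_k = {y : <a, y> <= b_hs}
    # project x onto the half-space H_k (a != 0 under the assumptions of Theorem 2)
    x_new = x - ((np.dot(a, x) - b_hs) / np.dot(a, a)) * a
    return x_new, False

centers = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
radii = [1.0, 1.0]
x, done = crpm_step(np.array([3.0, 0.0]), centers, radii)
```

Iterating `crpm_step` until `done` is returned reproduces the full algorithm.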
Theorem 2. Assume that for each $i \in I$, $c_i : \mathcal{H} \to \mathbb{R}$ is a $w$-lsc, subdifferentiable, convex function such that the subdifferential mapping $\partial c_i$ is bounded (i.e., bounded on bounded subsets of $\mathcal{H}$). If $\varpi := \inf\{\omega_i^{(k)} : i \in I_k, k \geq 0\} > 0$, then the sequence $\{x_k\}$ generated by Algorithm 2 converges weakly to a solution of the CFP (10).

Proof. With no loss of generality, we assume $I_k \neq \emptyset$ for all $k \geq 0$. First of all, we show that $H_k$ is a half-space, i.e., $\sum_{i \in I_k}\omega_i^{(k)}\xi_i^k \neq 0$. Indeed, if otherwise, it can be asserted from (16) and the subdifferential inequality (9) that $\sum_{i \in I_k}\omega_i^{(k)} c_i(x_k) \leq \sum_{i \in I_k}\omega_i^{(k)} c_i(z) \leq 0$ holds for each $z \in C$, and this is contradictory to the definition of $I_k$, which forces $c_i(x_k) > 0$ for all $i \in I_k$. We next show $C \subseteq H_k$. Indeed, for each $z \in C$, we have from the subdifferential inequality (9) that
$$c_i(x_k) + \langle \xi_i^k, z - x_k\rangle \leq c_i(z) \leq 0,$$
where $\xi_i^k \in \partial c_i(x_k)$. Summing over $i \in I_k$ with the weights $\omega_i^{(k)}$, we have
$$\sum_{i \in I_k}\omega_i^{(k)}\big(c_i(x_k) + \langle \xi_i^k, z - x_k\rangle\big) \leq 0. \tag{17}$$
By the definition of $H_k$ (see (16)), we assert from (17) that $z \in H_k$, and hence $C \subseteq H_k$. For any $z \in C$, noting $x_{k+1} = P_{H_k}(x_k)$, we have by Lemma 2 that
$$\|x_{k+1} - z\|^2 \leq \|x_k - z\|^2 - \|x_{k+1} - x_k\|^2.$$
This implies that $\{x_k\}$ is bounded, $\lim_{k\to\infty}\|x_k - z\|$ exists, and $\|x_{k+1} - x_k\| \to 0$. Now we verify that $\omega_w(x_k) \subseteq C$. Since $\{x_k\}$ is bounded and each $\partial c_i$ is a bounded operator, there exists a constant $M > 0$ such that
$$\|\xi_i^k\| \leq M \tag{18}$$
for all $i \in I_k$ and $k \geq 0$. By the definition of $H_k$ and the fact that $x_{k+1} \in H_k$, we get
$$\sum_{i \in I_k}\omega_i^{(k)} c_i(x_k) \leq -\sum_{i \in I_k}\omega_i^{(k)}\langle \xi_i^k, x_{k+1} - x_k\rangle.$$
For each $i \in I$ and $k \geq 0$, if $i \notin I_k$, then
$$c_i(x_k) \leq 0, \tag{19}$$
and if $i \in I_k$, it follows from the definition of $I_k$ and (18) that
$$\varpi c_i(x_k) \leq \omega_i^{(k)} c_i(x_k) \leq \sum_{j \in I_k}\omega_j^{(k)} c_j(x_k) \leq -\sum_{j \in I_k}\omega_j^{(k)}\langle \xi_j^k, x_{k+1} - x_k\rangle \leq M\|x_{k+1} - x_k\|. \tag{20}$$
Hence, for each $i \in I$, the combination of (19) and (20) leads to
$$c_i(x_k) \leq \max\Big\{0, \frac{M}{\varpi}\|x_{k+1} - x_k\|\Big\}, \quad \forall k \geq 0. \tag{21}$$
From (21), the containment $\omega_w(x_k) \subseteq C$ follows immediately from an argument similar to the final part of the proof of Theorem 1. □
4. Numerical Results
In this section, we compare the behavior of the CPM (Algorithm 1) and the SOP by solving two synthetic examples in the Euclidean space $\mathbb{R}^n$. All the codes were written in Matlab R2010a, and all the numerical experiments were conducted on an HP Pavilion notebook with an Intel(R) Core(TM) i5-3230M CPU @ 2.60 GHz and 4 GB RAM running the Windows 7 Home Premium operating system.
Example 1. Consider the convex feasibility problem (22), whose constraint data are nonnegative real numbers; the initial point $x_0$ is randomly chosen. Obviously, $C \neq \emptyset$, i.e., problem (22) is solvable. We use $x_k$ to denote the $k$-th iterate and define
$$E(x_k) := \max_{1 \leq i \leq m} c_i(x_k)$$
to measure the error of the $k$-th iteration, which also serves to check whether or not the proposed algorithm converges to a solution. In fact, it is easy to see that if $E(x_k)$ is less than or equal to zero, then $x_k$ is an exact solution of Problem (22) and the iteration can be terminated; if $E(x_k)$ is greater than zero, then $x_k$ is just an approximate solution, and the smaller $E(x_k)$ is, the smaller the error of $x_k$ as a solution.
Let $|I_k|$ denote the number of elements of the set $I_k$. We give two ways to choose the weights $\{\omega_i^{(k)}\}_{i \in I_k}$:
- (1) equal weights, $\omega_i^{(k)} = 1/|I_k|$ for $i \in I_k$. Denote the corresponding combination projection method by CPM1.
- (2) weights proportional to the constraint violations, $\omega_i^{(k)} = c_i(x_k)/\sum_{j \in I_k} c_j(x_k)$ for $i \in I_k$. Denote the corresponding combination projection method by CPM2.
Table 1 illustrates that the set $I_k$ generally differs from iteration to iteration. From Figure 1, we conclude that the behaviors of the CPM1 and CPM2 depend on the initial point $x_0$. The errors for the CPM1 and CPM2 oscillate, which may be because only partial information about the convex sets $\{C_i\}_{i=1}^m$ is used in each iteration. In contrast, the SOP uses all the information about the convex sets $\{C_i\}_{i=1}^m$ in each iteration, since it involves all the projections $P_{C_i}$. From Figure 2, the CPM1 behaves better than the SOP.
Example 2. Consider the linear equation system
$$Ax = b, \tag{24}$$
where $A$ is an $m \times n$ matrix, $x \in \mathbb{R}^n$, and $b$ is a vector in $\mathbb{R}^m$. If noise is taken into consideration, Problem (24) is stated as
$$|\langle a_i, x\rangle - b_i| \leq \delta, \quad i = 1, \dots, m, \tag{25}$$
where $\delta > 0$ measures the level of errors. Let
$$c_i(x) := |\langle a_i, x\rangle - b_i| - \delta, \quad i = 1, \dots, m,$$
where $a_i$ denotes the $i$-th row of $A$ and $b_i$ the $i$-th component of $b$. It is easy to show that Problem (25) is equivalent to the convex feasibility problem: find $x^* \in C := \bigcap_{i=1}^m\{x \in \mathbb{R}^n : c_i(x) \leq 0\}$. The set $C$ is nonempty since the linear equation system (24) has infinitely many solutions.
The initial point $x_0$ is randomly chosen in $\mathbb{R}^n$. We compared the CPM2 and the SOP for different $m$ and $n$. From Figure 3, the behavior of the CPM2 is better than that of the SOP. The error of the CPM2 oscillates more strongly than in Example 1, but the oscillation seems to diminish as the number of iterations grows large. The SOP behaves well when $m$ and $n$ are small, while its error is very big for big $m$ and $n$. In Figure 4, we compare the CPU time of the CPM2 and the SOP, which illustrates that the CPU time of the CPM2 is less than that of the SOP. Furthermore, the gap between the CPU times of the SOP and the CPM2 widens as the number of iterations increases.
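If each noisy equation is modeled as $|\langle a_i, x\rangle - b_i| \leq \delta$, the corresponding set is a slab between two parallel hyperplanes, and the projection needed by the algorithms has a closed form. A NumPy sketch (the data and $\delta$ below are illustrative, not the paper's experimental settings):

```python
import numpy as np

def project_slab(x, a, b, delta):
    """Project x onto the slab {y : |<a, y> - b| <= delta}."""
    t = np.dot(a, x) - b
    if abs(t) <= delta:
        return x.copy()                  # x already lies in the slab
    excess = t - np.sign(t) * delta      # signed amount beyond the nearer face
    return x - (excess / np.dot(a, a)) * a

a = np.array([1.0, 1.0])
x = project_slab(np.array([3.0, 3.0]), a, b=1.0, delta=0.5)
# the projected point satisfies |<a, x> - b| = delta (it lands on the nearer face)
```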