1. Introduction
Throughout this paper, we denote two nonempty closed convex subsets of two real Hilbert spaces $H_1$ and $H_2$ by C and Q, respectively. We denote the orthogonal projection onto a set C by $P_C$ and let $A^*$ be the adjoint operator of A, where $A : H_1 \to H_2$ is a bounded linear operator.
Over the past decade, inverse problems have been widely studied since they stand at the core of image reconstruction and signal processing. The split feasibility problem (SFP) is one of the most popular inverse problems and has attracted the attention of many researchers. Censor and Elfving first considered the split feasibility problem (SFP) in 1994 [1]. The SFP can mathematically be expressed as follows: find an element x with:
$$x \in C \quad \text{such that} \quad Ax \in Q. \tag{1}$$
As mentioned above, the SFP (1) has received much attention from many researchers because it can be applied to various branches of science. Several practical algorithms for solving the SFP (1) have been presented in recent years; see [2,3,4,5,6,7]. It is important to note that the split feasibility problem (SFP) (1) is equivalent to the following minimization formulation:
$$\min_{x \in C} f(x) := \frac{1}{2}\|(I - P_Q)Ax\|^2. \tag{2}$$
In 2002, Byrne [2] introduced a practical method called the CQ algorithm for solving the SFP, which is defined as follows:
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big) \tag{3}$$
for all $n \in \mathbb{N}$, where $x_1 \in H_1$ is arbitrarily chosen. Byrne considered the step size $\gamma \in (0, 2/\|A\|^2)$. The advantage of the CQ algorithm is that there is no need to compute the inverse of a matrix because it only deals with orthogonal projections. However, the CQ algorithm still needs to compute the operator norm of A.
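To make the structure of (3) concrete, the following is a minimal MATLAB sketch, assuming for illustration that C is the nonnegative orthant of $\mathbb{R}^n$ and Q is the closed Euclidean unit ball of $\mathbb{R}^m$, so that both projections have closed forms; the operator A and the starting point are randomly generated, and these choices are ours, not taken from the paper.

n = 50; m = 30;
A     = randn(m, n);                    % bounded linear operator (a matrix here)
P_C   = @(x) max(x, 0);                 % projection onto C = nonnegative orthant
P_Q   = @(y) y / max(norm(y), 1);       % projection onto Q = closed unit ball
gamma = 1 / norm(A)^2;                  % step size in (0, 2/||A||^2)
x     = randn(n, 1);                    % arbitrarily chosen starting point
for k = 1:1000
    r = A*x - P_Q(A*x);                 % residual (I - P_Q)Ax
    x = P_C(x - gamma * (A' * r));      % CQ iteration (3)
    if norm(r) < 1e-8, break; end       % stop when Ax is (nearly) in Q
end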
A self-adaptive step size was then introduced by Yang [8] to avoid computing the operator norm of A. Yang designed the step size as follows:
$$\gamma_n = \frac{\rho_n}{\|\nabla f(x_n)\|}, \tag{4}$$
where $\{\rho_n\}$ is a positive parameter sequence that satisfies $\sum_{n=1}^{\infty}\rho_n = \infty$ and $\sum_{n=1}^{\infty}\rho_n^2 < \infty$. Moreover, there are two additional conditions for this self-adaptive step size: (1) Q must be a bounded subset; (2) A must be a full-column-rank matrix.
After that, López [9] modified the self-adaptive step size to remove the two additional conditions of Yang [8]. López then obtained a practical self-adaptive step size given by:
$$\gamma_n = \frac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2}, \tag{5}$$
where $\{\rho_n\}$ is a positive sequence bounded below by zero and bounded above by four, that is, $0 < \rho_n < 4$.
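As an illustration of how (5) removes the need for $\|A\|$, here is one MATLAB step of the relaxed scheme with this step size, under the standing identification $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$, so that $\nabla f(x) = A^*(I - P_Q)Ax$; the variables A, x, P_C, and P_Q are as in the CQ sketch above, and the choice $\rho_n \equiv 1$ is ours for simplicity.

r  = A*x - P_Q(A*x);             % residual (I - P_Q)Ax
fx = 0.5 * norm(r)^2;            % objective value f(x)
gx = A' * r;                     % gradient of f at x
if norm(gx) > 0                  % at a solution the gradient vanishes
    tau = 1 * fx / norm(gx)^2;   % step size (5) with rho_n = 1 in (0, 4)
    x   = P_C(x - tau * gx);     % projected gradient step
end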
The split minimization problem is presented below. Let f and g be two proper lower semicontinuous and convex functions on $H_1$ and $H_2$, respectively. Moudafi and Thakur [10] considered an interesting problem called the proximal split feasibility problem. This problem is defined as finding a minimizer x such that:
$$\min_{x \in H_1}\big\{ f(x) + g_\lambda(Ax) \big\}, \tag{6}$$
where $\lambda > 0$ and $g_\lambda$ is the following Moreau–Yosida approximate of g:
$$g_\lambda(y) = \min_{u \in H_2}\Big\{ g(u) + \frac{1}{2\lambda}\|u - y\|^2 \Big\}. \tag{7}$$
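For intuition about (7), the following MATLAB check compares a numerical evaluation of the Moreau–Yosida approximate of the scalar function $g(u) = |u|$ against its known closed form, the Huber function (a classical fact, not taken from this paper); the values of λ and y are arbitrary test choices.

lambda = 0.5;  y = 1.3;
u   = linspace(-3, 3, 1e6);                          % fine grid for the minimization
env = min(abs(u) + (u - y).^2 / (2*lambda));         % numerical g_lambda(y)
if abs(y) <= lambda
    huber = y^2 / (2*lambda);                        % quadratic zone of the Huber function
else
    huber = abs(y) - lambda/2;                       % linear zone of the Huber function
end
fprintf('numerical %.6f vs closed form %.6f\n', env, huber);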
It is fascinating to observe the following special case: the minimization problem (6) reduces to the SFP (1) when we set $f = \delta_C$ and $g = \delta_Q$, where $\delta_C$ and $\delta_Q$ are the indicator functions of the subsets C and Q, respectively. The reader can refer to [11] for details. By using the relation (7), we can then define the proximity operator of a function g of order $\lambda > 0$ in the following form:
$$\operatorname{prox}_{\lambda g}(y) = \operatorname*{argmin}_{u \in H_2}\Big\{ g(u) + \frac{1}{2\lambda}\|u - y\|^2 \Big\}. \tag{8}$$
Moreover, the subdifferential of a function f at the point x is given in the following form:
$$\partial f(x) = \big\{ u \in H_1 : f(y) \ge f(x) + \langle u, y - x \rangle \ \text{for all } y \in H_1 \big\}. \tag{9}$$
Recall the following notations:
$$\operatorname*{argmin} f := \big\{ \bar{x} \in H_1 : f(\bar{x}) \le f(x) \ \text{for all } x \in H_1 \big\}$$
and
$$\operatorname*{argmin} g := \big\{ \bar{y} \in H_2 : g(\bar{y}) \le g(y) \ \text{for all } y \in H_2 \big\}.$$
In the case where $\operatorname{argmin} f \cap A^{-1}(\operatorname{argmin} g) \neq \emptyset$, Moudafi and Thakur [10] also considered a generalization of the minimization problem (6), named the split minimization problem (SMP), which can be expressed as: find
$$x^* \in \operatorname*{argmin} f \quad \text{such that} \quad Ax^* \in \operatorname*{argmin} g. \tag{10}$$
Besides considering the SMP (10), they also introduced an algorithm to solve the SMP (10). It is defined as follows:
$$x_{n+1} = \operatorname{prox}_{\lambda \mu_n f}\big( x_n - \mu_n A^*(I - \operatorname{prox}_{\lambda g})Ax_n \big), \tag{11}$$
where $x_1$ is arbitrarily chosen and $\mu_n$ is a self-adaptive step size. In addition, Moudafi and Thakur [10] proved a weak convergence result under some suitable conditions imposed on the parameters.
Recently, Abbas [12] constructed and introduced two iterative algorithms, (12) and (13), to solve the split minimization problem (SMP) (10), where $x_1$ is arbitrarily chosen, the step size is self-adaptive, and the functions h and l are defined in Section 3. Abbas [12] proved that the sequences generated by the algorithms (12) and (13) converge strongly to a solution.
Furthermore, fixed point problems of nonexpansive mappings are still extensively studied by many researchers since they are at the core of several real-world problems, such as signal processing and image recovery. One of the most famous algorithms for solving the fixed point problem of a nonexpansive mapping is the following:
$$x_{n+1} = (1 - \alpha_n)x_n + \alpha_n S x_n, \tag{14}$$
where $S : C \to C$ is a nonexpansive mapping, the initial point $x_1$ is chosen in C, and $\{\alpha_n\} \subset (0, 1)$. The algorithm (14) is known as Mann's algorithm [13]. It is well known that a Mann-type algorithm gives, in general, only weak convergence, even in a Hilbert space. There are many works in this direction; the reader can refer to [14,15,16] for details.
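A minimal MATLAB sketch of Mann's algorithm (14) follows; the nonexpansive mapping S here is an orthogonal rotation composed with the projection onto the unit ball (an illustrative choice of ours, whose fixed point is typically only the origin), and $\alpha_n = 1/(n+1)$ is one common admissible relaxation sequence.

n = 10;
R = orth(randn(n));                     % random orthogonal matrix (norm-preserving)
S = @(x) R * (x / max(norm(x), 1));     % nonexpansive mapping on R^n
x = randn(n, 1);                        % initial point
for k = 1:500
    alpha = 1 / (k + 1);                % relaxation parameter in (0, 1)
    x = (1 - alpha) * x + alpha * S(x); % Mann step (14)
end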
Apart from studying all the above problems, speeding up the convergence rate of algorithms has often been studied by many authors. Polyak [17] introduced a helpful technique to accelerate the rate of convergence called the heavy ball method. After that, many researchers modified the heavy ball method for use with their algorithms. Nesterov [18] modified the heavy ball method to improve the rate of convergence of the algorithms. This algorithm is known as the modified heavy ball method:
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}), \\ x_{n+1} = y_n - \lambda_n \nabla F(y_n), \end{cases} \tag{15}$$
where $x_0, x_1$ are arbitrarily chosen, $\lambda_n > 0$, $\theta_n \in [0, 1)$ is an extrapolation factor, and the term $\theta_n(x_n - x_{n-1})$ is called the inertia. For more details, the reader is directed to [19,20,21].
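The following MATLAB sketch illustrates the modified heavy ball iteration (15) on the model objective $F(x) = \frac{1}{2}\|Ax - b\|^2$ (a stand-in example of ours) with a Nesterov-style extrapolation factor; all data are randomly generated.

n = 10; A = randn(20, n); b = randn(20, 1);
gradF  = @(x) A' * (A*x - b);            % gradient of F(x) = 0.5*||Ax - b||^2
L      = norm(A)^2;                      % Lipschitz constant of gradF
x_prev = zeros(n, 1); x = x_prev;        % x_0 and x_1 arbitrarily chosen
for k = 1:300
    theta  = (k - 1) / (k + 2);          % extrapolation factor in [0, 1)
    y      = x + theta * (x - x_prev);   % inertial term theta_n*(x_n - x_{n-1})
    x_prev = x;
    x      = y - (1/L) * gradF(y);       % gradient step from the extrapolated point
end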
Based on the above ideas, the aims of this work were: (1) to construct a new self-adaptive step size algorithm that combines the proximal algorithm and the modified Mann method with inertial extrapolation to solve the split minimization problem (SMP) (10) and the fixed point problem of a nonexpansive mapping; (2) to establish strong convergence results for the SMP and fixed point problems using the proposed algorithm; (3) to give numerical examples for our algorithm to present its consistency, accuracy, and performance compared to the existing algorithms in the literature.
2. Preliminaries
Some notations used throughout this paper are presented in this section. For a sequence $\{x_n\}$ and an element x in a Hilbert space, $x_n \to x$ and $x_n \rightharpoonup x$ denote the strong convergence and the weak convergence of $\{x_n\}$ to x, respectively.
Lemma 1. For every u and v in a real Hilbert space H and every $t \in [0, 1]$, the following identity holds:
$$\|tu + (1 - t)v\|^2 = t\|u\|^2 + (1 - t)\|v\|^2 - t(1 - t)\|u - v\|^2.$$
Proposition 1. Let $S : C \to C$ be a mapping, and let u and v be arbitrary elements in C. The mapping S is called:
- 1. nonexpansive if $\|Su - Sv\| \le \|u - v\|$;
- 2. ξ-inverse strongly monotone (ξ-ism) if $\langle u - v, Su - Sv \rangle \ge \xi\|Su - Sv\|^2$ for some constant $\xi > 0$;
- 3. firmly nonexpansive if $\|Su - Sv\|^2 \le \langle u - v, Su - Sv \rangle$;
- 4. monotone if $\langle Su - Sv, u - v \rangle \ge 0$.
It is well known that the metric projection $P_C$ of $H_1$ onto C, where $C \subseteq H_1$ is nonempty, closed, and convex, is a nonexpansive mapping, and it satisfies:
$$\|P_C x - P_C y\|^2 \le \langle x - y, P_C x - P_C y \rangle \quad \text{for all } x, y \in H_1.$$
Moreover, $P_C$ is characterized by the following properties:
$$\langle x - P_C x, y - P_C x \rangle \le 0 \tag{16}$$
and:
$$\|x - y\|^2 \ge \|x - P_C x\|^2 + \|y - P_C x\|^2 \tag{17}$$
for all $y \in C$ and $x \in H_1$. We denote by $\Gamma_0(H)$ the collection of proper convex lower semicontinuous functions on a real Hilbert space H.
Definition 1. Refs. [22,23]: Let $g \in \Gamma_0(H_2)$ and $y \in H_2$. Define the proximal operator of g by:
$$\operatorname{prox}_g(y) = \operatorname*{argmin}_{u \in H_2}\Big\{ g(u) + \frac{1}{2}\|u - y\|^2 \Big\}. \tag{18}$$
The proximal of g of order λ ($\lambda > 0$) is given by:
$$\operatorname{prox}_{\lambda g}(y) = \operatorname*{argmin}_{u \in H_2}\Big\{ \lambda g(u) + \frac{1}{2}\|u - y\|^2 \Big\}. \tag{19}$$
Below are some of the valuable properties of the proximal operators.
Property 1. Refs. [24,25]: Let $g \in \Gamma_0(H_2)$, $\lambda > 0$, and let Q be a nonempty closed convex subset of $H_2$.
- 1. If $g = \delta_Q$, where $\delta_Q$ is an indicator function of Q, then the proximal operators coincide with the metric projection, $\operatorname{prox}_{\lambda g}(y) = P_Q(y)$, for all $y \in H_2$;
- 2. $\operatorname{prox}_{\lambda g}$ is firmly nonexpansive;
- 3. $\operatorname{prox}_{\lambda g} = (I + \lambda \partial g)^{-1}$, the resolvent of the subdifferential of g;
- 4. $p = \operatorname{prox}_{\lambda g}(y)$ if and only if $y - p \in \lambda \partial g(p)$.
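A small MATLAB illustration of Property 1(1): for $g = \delta_Q$ with Q the Euclidean unit ball, $\operatorname{prox}_{\lambda g}$ coincides with $P_Q$ for every λ; by contrast, for $g(u) = \|u\|_1$, the proximal operator is entrywise soft thresholding (a classical closed form, quoted here for comparison, not taken from this paper). The test vector is our own choice.

P_Q     = @(y) y / max(norm(y), 1);                   % prox of the indicator of Q
prox_l1 = @(y, lam) sign(y) .* max(abs(y) - lam, 0);  % prox of lam*||.||_1
y = [2; -0.3; 1.1];
disp(P_Q(y)');              % projection onto the unit ball (same for any lambda)
disp(prox_l1(y, 0.5)');     % soft thresholding with lambda = 0.5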
Let $\lambda > 0$. In [26], it was shown that $\operatorname{prox}_{\lambda g}$ and $I - \operatorname{prox}_{\lambda g}$ are both firmly nonexpansive.
Lemma 2. Ref. [27]: Any Hilbert space H satisfies Opial's condition; that is, if a sequence $\{x_n\}$ in H satisfies $x_n \rightharpoonup x$, then the following inequality holds:
$$\liminf_{n \to \infty}\|x_n - x\| < \liminf_{n \to \infty}\|x_n - y\|$$
for every $y \in H$ with $y \neq x$.
Lemma 3. Ref. [28]: Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$a_{n+1} \le (1 - \sigma_n)a_n + \sigma_n \delta_n + \epsilon_n, \tag{20}$$
where the following three conditions hold:
- 1. $\{\sigma_n\} \subset [0, 1]$ and $\sum_{n=1}^{\infty}\sigma_n = \infty$;
- 2. $\limsup_{n \to \infty}\delta_n \le 0$;
- 3. $\epsilon_n \ge 0$ for all n and $\sum_{n=1}^{\infty}\epsilon_n < \infty$.
Then, $\lim_{n \to \infty} a_n = 0$.
Lemma 4. Ref. [29]: Let $\{a_n\}$ be a sequence of real numbers that does not decrease at infinity in the sense that there is a subsequence $\{a_{n_j}\}$ such that $a_{n_j} < a_{n_j + 1}$ for all $j \ge 0$. For an integer $n_0$ large enough, define the integer sequence $\{\tau(n)\}_{n \ge n_0}$ by:
$$\tau(n) = \max\{ k \le n : a_k < a_{k+1} \}.$$
Then, the sequence $\{\tau(n)\}_{n \ge n_0}$ is nondecreasing and verifies $\lim_{n \to \infty}\tau(n) = \infty$. Furthermore,
$$\max\{ a_{\tau(n)}, a_n \} \le a_{\tau(n)+1}$$
for all $n \ge n_0$.

3. Results
This section proposes an iterative algorithm generating a sequence that strongly converges to a common solution of the split minimization problem (10) and the fixed point problem of a nonexpansive mapping. We established the convergence theorem of the proposed algorithm under the following setting.

Let $S : H_1 \to H_1$ be a nonexpansive mapping. Denote the set of all solutions of the split minimization problem (10) by Ω and the set of all fixed points of the mapping S by F(S), and let $\Gamma := \Omega \cap F(S)$. For $\lambda > 0$, suppose that:
$$h(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda g})Ax\|^2 \quad \text{and} \quad l(x) = \frac{1}{2}\|(I - \operatorname{prox}_{\lambda f})x\|^2. \tag{21}$$
Then, we obtained the gradients of the functions h and l as follows:
$$\nabla h(x) = A^*(I - \operatorname{prox}_{\lambda g})Ax \quad \text{and} \quad \nabla l(x) = (I - \operatorname{prox}_{\lambda f})x.$$
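In MATLAB terms, and under the reconstruction of (21) above, these objects can be encoded as follows, where proxf and proxg are assumed function handles for $\operatorname{prox}_{\lambda f}$ and $\operatorname{prox}_{\lambda g}$ and A is a matrix representing the operator (all three are placeholders to be supplied by the user):

h     = @(x) 0.5 * norm(A*x - proxg(A*x))^2;   % h(x) = 0.5*||(I - prox_{lambda g})Ax||^2
l     = @(x) 0.5 * norm(x   - proxf(x))^2;     % l(x) = 0.5*||(I - prox_{lambda f})x||^2
gradh = @(x) A' * (A*x - proxg(A*x));          % grad h(x) = A*(I - prox_{lambda g})Ax
gradl = @(x) x - proxf(x);                     % grad l(x) = (I - prox_{lambda f})x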
Lemma 5. Let h and l be the two functions defined in (21). Then, the gradients ∇h and ∇l are Lipschitz continuous.

Proof. By the notation in (21), we find that inequality (22) holds. On the other hand, we have (23). By combining (22) with (23), we find that ∇h is $\frac{1}{\|A\|^2}$-inverse strongly monotone. Moreover, ∇h is then $\|A\|^2$-Lipschitz continuous. Similarly, one can prove that ∇l is also Lipschitz continuous. This completes the proof. □
A valuable assumption for analyzing our main theorem is given as follows.
Assumption 1. Suppose that the positive parameter sequences and the control sequences in the interval $(0, 1)$ used in Algorithm 1 satisfy the following assumptions:
- (A1)
;
- (A2)
and ;
- (A3)
with ;
- (A4)
and as .
Theorem 1. Let $H_1$ and $H_2$ be two real Hilbert spaces and S be a nonexpansive mapping on $H_1$. Assume that A is a bounded linear operator from $H_1$ to $H_2$ with its adjoint operator $A^*$ and that $f \in \Gamma_0(H_1)$ and $g \in \Gamma_0(H_2)$ are proper lower semicontinuous and convex functions. Assume that the SMP (10) is consistent (that is, $\Gamma \neq \emptyset$), and let $x_0, x_1$, and v be in $H_1$. Then, the sequence $\{x_n\}$ generated by Algorithm 1 strongly converges to $x^* = P_\Gamma v$.

Algorithm 1 A split minimization algorithm.
Initialization: Let $x_0, x_1 \in H_1$ and $v \in H_1$ be arbitrarily chosen. Choose positive sequences and sequences in $(0, 1)$ satisfying Assumption 1. Set $n = 1$.
Iterative step: Given the current iterates $x_{n-1}$ and $x_n$, calculate the next iteration as follows:

where:

Stopping criterion: If $x_{n+1} = x_n$, stop. Otherwise, put $n := n + 1$, and go to Iterative step.
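Since the displayed scheme of Algorithm 1 is not reproduced here, the following MATLAB fragment is only a schematic sketch of one pass, assuming the inertial / self-adaptive proximal / Halpern–Mann structure indicated by Theorem 1 and Remark 1 below; theta_k, rho_k, alpha_k, beta_k, v, and S are placeholders for the theorem's parameters and mapping, h, l, gradh, gradl, and proxf are the helper handles defined after (21), and the exact update coefficients should be taken from the paper's displayed scheme.

w   = x + theta_k * (x - x_prev);              % inertial extrapolation step
den = norm(gradh(w))^2 + norm(gradl(w))^2;     % self-adaptive denominator
mu  = rho_k * (h(w) + l(w)) / max(den, eps);   % self-adaptive step size (skips 0/0)
z   = proxf(w - mu * gradh(w));                % proximal split step
x_prev = x;
x   = alpha_k * v + (1 - alpha_k) * ((1 - beta_k) * w + beta_k * S(z));
                                               % Halpern-Mann combination toward v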
Proof. Assume that $x^* = P_\Gamma v$. By using the firm nonexpansiveness of the proximal operators (see [30,31] for details), we find that:

and:
Next, for a fixed $x^* \in \Gamma$, we obtain that:

and:

Since S is nonexpansive, we find that:
Thus, $\{x_n\}$ is bounded, and this implies that the other sequences generated by Algorithm 1 are also bounded.
Next, we observe that:
Next, we claim that the following two estimates hold. Consider:
and:
Moreover, consider:
We next show the convergence of the sequence by dividing the proof into two possible cases.
Case 1. Assume that the sequence is eventually non-increasing; that is, there exists $n_0$ such that it is non-increasing for each $n \ge n_0$. Then, the sequence converges, and so:
Using (36), we then obtain that:

and:
We then obtain, by using the boundedness of the sequences involved, that:

and:
Thus, we obtain by using (33) that:

Moreover, it is easy to see that:
By applying (36) and (39) in Formula (32), we find that:
By using the fact above, we find that:
Moreover, we observe that:
We then obtain by using (38), (41), and (44) that:
Thus, we obtain immediately that:
We next show that $\limsup_{n \to \infty}\langle v - x^*, x_n - x^* \rangle \le 0$, where $x^* = P_\Gamma v$. To prove this, we can choose a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ with:
$$\lim_{k \to \infty}\langle v - x^*, x_{n_k} - x^* \rangle = \limsup_{n \to \infty}\langle v - x^*, x_n - x^* \rangle.$$
Since $\{x_{n_k}\}$ is a bounded sequence, we can take a weakly convergent subsequence of $\{x_{n_k}\}$ (not relabeled) that converges to some $w \in H_1$; that is, $x_{n_k} \rightharpoonup w$.
We next show that $w \in \Gamma$ in two steps. First, we show that w is a fixed point of S. By contradiction, we assume that $Sw \neq w$. Since $x_{n_k} \rightharpoonup w$, by Opial's condition, we conclude that:

This is a contradiction. This implies $w \in F(S)$.
Second, we show that $w \in \Omega$. Since w is a weak limit point of $\{x_n\}$, there is a subsequence of $\{x_n\}$ that converges weakly to w. Since h is lower semicontinuous, we find that:

Then, $h(w) = 0$, and so, $Aw = \operatorname{prox}_{\lambda g}(Aw)$. This means that $Aw$ is a minimizer of the operator g.
Similarly, since l is lower semicontinuous, we find that:

Thus, w is in the fixed point set of the proximal operator $\operatorname{prox}_{\lambda f}$, that is, $w = \operatorname{prox}_{\lambda f}(w)$. This means that w is a minimizer of the operator f. This implies $w \in \Omega$. Therefore, we can conclude that $w \in \Omega \cap F(S) = \Gamma$.
According to the properties of metric projections, since $x^* = P_\Gamma v$ and $w \in \Gamma$, then $\langle v - x^*, w - x^* \rangle \le 0$. Consider:
In the final step, we show that the sequence $\{x_n\}$ converges strongly to $x^*$ as $n \to \infty$. We observe that:

and:

By combining (27) with (34) and (51), we find that:

Thus, we obtain by using Lemma 3, Assumption (A4), Inequality (50), and the boundedness of the sequences involved that $x_n \to x^*$.
Case 2. Assume that the sequence is increasing. By applying Assumptions (A1), (A2), and (A4) to (36), we find the corresponding limits as in Case 1, and thus, we obtain the analogous conclusion by using (33).
Suppose that the sequence does not decrease at infinity and, for each $n \ge n_0$ (where $n_0$ is large enough), define the integer mapping $\tau(n)$ as in Lemma 4:

Thus, $\tau(n) \to \infty$ when n tends to infinity, and for each $n \ge n_0$, the conclusions of Lemma 4 hold. We then obtain by using Inequality (36) that:
Since both of the preceding quantities vanish when n tends to infinity, we observe that:

and:

Moreover, we obtain that:
We now obtain by using Lemma 4 that:

as $n \to \infty$. This implies that the whole sequence converges, that is, $x_n \to x^*$. The proof is complete. □
Remark 1. - (a)
If we put , and for all in our proposed algorithm, we found that the algorithm (11) of Moudafi and Thakur was obtained. Moreover, we obtained a strong convergence theorem, while Moudafi and Thakur [10] only obtained a weak convergence theorem; - (b)
If we put , and for all in our proposed algorithm, we found that Algorithm (1.2) in [32] was obtained; - (c)
If we put , , and for all in our proposed algorithm, we found that the Mann iteration algorithm in [13] was obtained. Moreover, we obtained a strong convergence theorem, while Mann [13] only obtained a weak convergence theorem; - (d)
As a special choice, the extrapolation factor in our proposed algorithm can be chosen, for each integer n greater than or equal to three, using a positive sequence that vanishes as $n \to \infty$. This choice was recently derived in [33,34] as an inertial extrapolated step.
4. Applications and Numerical Results
This section provides numerical experiments to illustrate the performance of Algorithm 1 and to compare it with and without the inertial term. Moreover, we present an experiment comparing our scheme with the Abbas algorithms [12]. All code was written in MATLAB 2017b and run on a MacBook Pro 2012 with a 2.5 GHz Intel Core i5.

First, we illustrate the performance of our proposed algorithm by comparing it with and without the inertial term in the following experiment:
Example 1. In problem (10), assume that $f = \delta_C$ and $g = \delta_Q$, where δ is the indicator function. Then:

Thus, the problem (10) becomes the SFP (1). We next took the parameters as stated. Thus, by Algorithm 1, we obtained that:

We then provide a comparison of the convergence of Algorithm 1 with the inertial term:

and Algorithm 1 without the inertial term in terms of the number of iterations, with a fixed tolerance as the stopping criterion. The result of this experiment is reported in Figure 1.

Remark 2. By observing the result of Example 1, we found that our proposed algorithm with inertia was faster and more efficient than our proposed algorithm without inertia (i.e., with the extrapolation factor set to zero).
Second, we used the example in Abbas [12] to show the performance of our algorithm by comparing our proposed algorithm with Algorithms (12) and (13) in terms of CPU time in the following experiment:
Example 2. Let $H_1 = H_2 = \mathbb{R}^N$, and let $\|\cdot\|_2$ be the Euclidean norm on $\mathbb{R}^N$. The metric projection onto the Euclidean unit ball B is defined by the following:

Thus, the proximal operator (the block soft thresholding) [24] is given by:

For the data of this example, let:

and:

Then (see [35]):

and:

for all admissible arguments. Consider the split minimization problem (SMP) (10) posed as follows:

It is easy to check that Problem (64) admits a solution. We now took the stated data and parameters, and then, we obtained by Algorithm 1 that:

The iterative schemes (12) and (13) are:

and:

respectively, where the step size parameter was given in [12]. We now provide a comparison of the convergence of the iterative schemes (12) and (13) in Abbas's work [12] with our proposed algorithm in terms of CPU time, where the initial points were randomly generated vectors in $\mathbb{R}^N$. We tested this experiment with different choices of N. We used a fixed tolerance as the stopping criterion. The result of this experiment is reported in Table 1.

Remark 3. By observing the result of Example 2, we found that our proposed algorithm was more efficient than Abbas's Algorithms (12) and (13) regarding the CPU time.

Finally, we show the average error of our algorithm in the following experiment:
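The block soft-thresholding operator quoted in Example 2 (the proximal operator of $\lambda\|\cdot\|_2$; see [24]) has the following direct MATLAB form; the two test vectors are our own illustrative choices.

block_soft = @(x, lam) max(1 - lam / max(norm(x), lam), 0) * x;
disp(block_soft([3; 4], 1)');       % ||x|| = 5 > 1: the vector is scaled by 1 - 1/5 = 0.8
disp(block_soft([0.3; 0.4], 1)');   % ||x|| = 0.5 <= 1: the vector is mapped to zero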
Example 3. Let the data be defined as in Example 2. In this experiment, we took randomly generated initial points. Let $\{x_n\}$ be a sequence generated by Algorithm 1 with the parameters chosen as before. The mean error is given by:

We used a fixed tolerance as the stopping criterion of this experiment. We then observed that the sequence generated by Algorithm 1 converged to a solution if the mean error converged to zero. Figure 2 shows the average error of our method in three groups of 20 initial points.

Remark 4. By observing the result of Example 3, we found that the choice of the initial value did not affect the ability of our algorithm to achieve the solutions.