Article

A Self-Adaptive Algorithm for the Common Solution of the Split Minimization Problem and the Fixed Point Problem

by Nattakarn Kaewyong 1 and Kanokwan Sitthithakerngkiet 2,*
1 Department of Mathematics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
2 Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok (KMUTNB), Wongsawang, Bangsue, Bangkok 10800, Thailand
* Author to whom correspondence should be addressed.
Axioms 2021, 10(2), 109; https://doi.org/10.3390/axioms10020109
Submission received: 27 April 2021 / Revised: 26 May 2021 / Accepted: 28 May 2021 / Published: 30 May 2021
(This article belongs to the Collection Mathematical Analysis and Applications)

Abstract: In this paper, a new self-adaptive step size algorithm to approximate a common solution of the split minimization problem and the fixed point problem of a nonexpansive mapping is constructed; it combines the proximal algorithm and a modified Mann iterative method with inertial extrapolation. A strong convergence theorem is established in the framework of Hilbert spaces under suitable conditions. Our result improves related results in the literature. Moreover, numerical experiments are provided to show the algorithm's consistency, accuracy, and performance compared with existing algorithms in the literature.
MSC:
46N10; 47H10; 47J26; 65K10

1. Introduction

Throughout this paper, we denote two nonempty closed convex subsets of two real Hilbert spaces $H_1$ and $H_2$ by $C$ and $Q$, respectively. We denote the orthogonal projection onto the set $C$ by $P_C$, and we let $A^* : H_2 \to H_1$ be the adjoint operator of $A : H_1 \to H_2$, where $A$ is a bounded linear operator.
Over the past decade, inverse problems have been widely studied since they stand at the core of image reconstruction and signal processing. The split feasibility problem (SFP) is one of the most popular inverse problems and has attracted the attention of many researchers. Censor and Elfving first considered the split feasibility problem (SFP) in 1994 [1]. The SFP can be expressed mathematically as follows: find an element $x$ with:
$$x \in C \ \text{ such that } \ Ax \in Q. \qquad (1)$$
As mentioned above, the SFP (1) has received much attention because it can be applied in various branches of science. Several practical algorithms for solving the SFP (1) have been presented in recent years; see [2,3,4,5,6,7]. It is important to note that the SFP (1) is equivalent to the following minimization formulation:
$$\min_{x \in C}\ \frac{1}{2}\,\|Ax - P_Q Ax\|^2.$$
In 2002, Byrne [2] introduced a practical method called the CQ algorithm for solving the SFP, which is defined as follows:
$$x_{n+1} = P_C\big(x_n - \tau_n A^*(I - P_Q)Ax_n\big),$$
for all $n \ge 1$, where $x_1 \in H_1$ is arbitrarily chosen and the step size satisfies $\tau_n \in (0, 2/\|A\|^2)$. The advantage of the CQ algorithm is that there is no need to compute the inverse of a matrix, because it only involves orthogonal projections. However, the CQ algorithm still needs to compute the operator norm of $A$.
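For readers who prefer a computational view, the following is a minimal sketch of the CQ iteration in Python/NumPy for the illustrative case where both $C$ and $Q$ are Euclidean unit balls (so the projections have closed forms); the random matrix, the ball radii, the fixed step size, and the iteration count are assumptions made only for this demonstration and are not taken from the paper.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto the ball {x : ||x|| <= radius}
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

def cq_algorithm(A, x0, n_iter=200):
    # CQ iteration: x_{k+1} = P_C(x_k - tau * A^T (I - P_Q) A x_k)
    # with a fixed step tau inside (0, 2 / ||A||^2), as in Byrne's scheme.
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # spectral norm gives ||A||
    x = x0.copy()
    for _ in range(n_iter):
        Ax = A @ x
        grad = A.T @ (Ax - project_ball(Ax))   # A^T (I - P_Q) A x
        x = project_ball(x - tau * grad)       # projection onto C
    return x

# Usage: a small random instance
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x = cq_algorithm(A, rng.standard_normal(50))
print(np.linalg.norm(A @ x - project_ball(A @ x)))  # residual of the Q-constraint
```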
A self-adaptive step size was then introduced by Yang [8] to avoid computing an operator norm of A. Yang designed the step size as follows:
$$\tau_n = \frac{\rho_n}{\|A^*(I - P_Q)Ax_n\|},$$
where $\{\rho_n\}$ is a sequence of positive parameters satisfying $\sum_{n=0}^{\infty}\rho_n = \infty$ and $\sum_{n=0}^{\infty}\rho_n^2 < \infty$. Moreover, there are two additional conditions for this self-adaptive step size: (1) $Q$ must be a bounded subset; (2) $A$ must be a full-column-rank matrix.
After that, López et al. [9] modified the self-adaptive step size to remove the two additional conditions of Yang [8]. They obtained a practical self-adaptive step size given by:
$$\tau_n = \frac{\rho_n\,\|(I - P_Q)Ax_n\|^2}{\|A^*(I - P_Q)Ax_n\|^2},$$
where $\{\rho_n\}$ is a sequence of positive parameters with $0 < \rho_n < 4$.
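As a small illustration of how such a norm-free step size can be evaluated in practice, the sketch below computes a López-type step from the current residual and gradient; the function and variable names, and the exact form of the numerator, follow the reconstruction displayed above and should be read as assumptions rather than as code from the paper.

```python
import numpy as np

def self_adaptive_step(residual, grad, rho=2.0):
    # Lopez-type step: rho * ||(I - P_Q) A x_n||^2 / ||A^T (I - P_Q) A x_n||^2,
    # with rho in (0, 4); no operator norm of A is required.
    denom = np.linalg.norm(grad) ** 2
    return 0.0 if denom == 0.0 else rho * np.linalg.norm(residual) ** 2 / denom
```

Here `residual` stands for $(I - P_Q)Ax_n$ and `grad` for $A^*(I - P_Q)Ax_n$; returning zero when the gradient vanishes mirrors the convention used later in Algorithm 1.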
The split minimization problem is presented below. Let $f$ and $g$ be two proper lower semicontinuous and convex functions on $H_1$ and $H_2$, respectively. Moudafi and Thakur [10] considered the interesting problem called the proximal split feasibility problem, which consists in finding a minimizer $x$ of:
$$\min_{x \in H_1}\ f(x) + g_{\lambda}(Ax), \qquad (6)$$
where $\lambda > 0$ and $g_{\lambda}(Ax)$ is the following Moreau–Yosida approximate:
$$g_{\lambda}(Ax) = \min_{y \in H_2}\Big\{ g(y) + \frac{1}{2\lambda}\,\|y - Ax\|^2 \Big\}. \qquad (7)$$
It is fascinating to observe the case $C \cap A^{-1}(Q) \neq \emptyset$. The minimization problem (6) reduces to the SFP (1) when we set $f = \delta_C$ and $g = \delta_Q$, where $\delta_C$ and $\delta_Q$ are the indicator functions of the subsets $C$ and $Q$, respectively. The reader can refer to [11] for details. By using the relation (7), we can then define the proximity operator of a function $g$ of order $\lambda$ in the following form:
$$\operatorname{prox}_{\lambda g}(y) = \arg\min_{\tilde{y} \in H_2}\Big\{ g(\tilde{y}) + \frac{1}{2\lambda}\,\|\tilde{y} - y\|^2 \Big\}.$$
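Two standard instances may help make the proximity operator defined above concrete: for an indicator function the operator reduces to a metric projection, and for the $\ell_1$ norm it is componentwise soft thresholding. The sketch below is illustrative only; the $\ell_1$ example is a textbook case and is not an example taken from this paper.

```python
import numpy as np

def prox_l1(y, lam):
    # prox_{lam * ||.||_1}(y): soft thresholding, the minimizer over u of
    # ||u||_1 + (1 / (2 * lam)) * ||u - y||^2.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def prox_indicator_ball(y, lam=1.0, radius=1.0):
    # When g is the indicator of a closed convex set, prox_{lam * g} is the
    # metric projection onto that set for every lam > 0 (here, a Euclidean ball).
    nrm = np.linalg.norm(y)
    return y if nrm <= radius else radius * y / nrm

print(prox_l1(np.array([2.0, -0.3, 0.7]), lam=0.5))  # -> [1.5, -0.0, 0.2]
```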
Moreover, the subdifferential of the function $f$ at the point $x$ is given by:
$$\partial f(x) = \big\{ \hat{x} \in H_1 \ \big|\ f(x) + \langle \hat{x}, \bar{x} - x\rangle \le f(\bar{x}), \ \forall \bar{x} \in H_1 \big\}.$$
Recall the following notations:
$$\arg\min f = \{\hat{x} \in H_1 : f(\hat{x}) \le f(x), \ \forall x \in H_1\}$$
and
$$\arg\min g = \{\hat{y} \in H_2 : g(\hat{y}) \le g(y), \ \forall y \in H_2\}.$$
In the case where $(\arg\min f) \cap (A^{-1}\arg\min g) \neq \emptyset$, Moudafi and Thakur [10] also considered a generalization of the minimization problem (6), named the split minimization problem (SMP), which can be expressed as: find
$$x^* \in \arg\min f \ \text{ such that } \ Ax^* \in \arg\min g. \qquad (10)$$
Besides considering the SMP (10), they also introduced an algorithm to solve it, defined as follows:
$$x_{n+1} = \operatorname{prox}_{\lambda\tau_n f}\big( (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)\,x_n \big), \quad n \ge 1, \qquad (11)$$
where $x_1 \in H_1$ is arbitrarily chosen and $\tau_n$ is a self-adaptive step size. In addition, Moudafi and Thakur [10] proved a weak convergence result under some suitable conditions imposed on the parameters.
Recently, Abbas et al. [12] introduced two iterative algorithms to solve the split minimization problem (SMP) (10). These algorithms are defined as follows:
$$x_{n+1} = \operatorname{prox}_{\lambda\tau_n f}\big( (1 - \epsilon_n)x_n - \tau_n A^*(I - \operatorname{prox}_{\lambda g})Ax_n \big), \quad n \ge 1, \qquad (12)$$
and:
$$x_{n+1} = (1 - \epsilon_n)\operatorname{prox}_{\lambda\tau_n f}\big( (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)x_n \big), \quad n \ge 1, \qquad (13)$$
where $x_1$ is arbitrarily chosen, the step size is $\tau_n = \rho_n\,\dfrac{h(x_n) + l(x_n)}{\|\nabla h(x_n)\|^2 + \|\nabla l(x_n)\|^2}$ with $\rho_n \in (0, 4)$, and the functions $h$, $l$, $\nabla h$, and $\nabla l$ are defined in Section 3. Abbas et al. [12] proved that the sequences generated by the algorithms (12) and (13) converge strongly to a solution.
Furthermore, fixed point problems of nonexpansive mappings are still extensively studied since they are at the core of several real-world problems, such as signal processing and image recovery. One of the famous algorithms to solve the fixed point problem of a nonexpansive mapping is the following:
$$x_{n+1} = (1 - t_n)x_n + t_n S(x_n), \quad n \ge 1, \qquad (14)$$
where $S : C \to C$ is a nonexpansive mapping, the initial point $x_1$ is chosen in $C$, and $\{t_n\} \subset [0, 1]$. The algorithm (14) is known as Mann's algorithm [13]. It is well known that a Mann-type algorithm gives strong convergence provided the underlying space is smooth enough. There are many works in this direction; the reader can refer to [14,15,16] for details.
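The following is a small Python sketch of Mann's scheme (14); the particular nonexpansive mapping (a projection onto the unit ball) and the choice $t_n = 1/(n+1)$ are illustrative assumptions, not choices made in the works cited above.

```python
import numpy as np

def mann_iteration(S, x0, n_iter=500):
    # Mann's scheme x_{n+1} = (1 - t_n) x_n + t_n S(x_n) with t_n = 1/(n+1);
    # S is assumed to be nonexpansive (illustrative choice below).
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        t = 1.0 / (n + 1)
        x = (1 - t) * x + t * S(x)
    return x

# Illustrative nonexpansive map: projection onto the Euclidean unit ball
S = lambda x: x / max(1.0, np.linalg.norm(x))
print(mann_iteration(S, np.array([3.0, -4.0])))
```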
Apart from studying all the above problems, speeding up the convergence rate of algorithms has often been studied. Polyak [17] introduced a helpful technique to accelerate the rate of convergence called the heavy ball method. After that, many researchers modified the heavy ball method for use with their algorithms. Nesterov [18] modified the heavy ball method to improve the rate of convergence; this algorithm is known as the modified heavy ball method:
$$\begin{cases} w_n = z_n + \theta_n(z_n - z_{n-1}),\\ z_{n+1} = w_n - \tau_n \nabla f(w_n), \quad n \ge 2, \end{cases} \qquad (15)$$
where $z_1, z_2 \in H_1$ are arbitrarily chosen, $\tau_n > 0$, $0 \le \theta_n < 1$ is an extrapolation factor, and the term $\theta_n(z_n - z_{n-1})$ is called the inertial term. For more details, the reader is directed to [19,20,21].
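A compact sketch of the inertial scheme (15) on a simple quadratic may clarify how the extrapolation term enters the update; the objective, the step size, and the Nesterov-type choice of $\theta_n$ below are assumptions made for illustration only.

```python
import numpy as np

def inertial_gradient(grad, z1, z2, tau=0.1, n_iter=300):
    # Scheme (15): w_n = z_n + theta_n (z_n - z_{n-1}); z_{n+1} = w_n - tau * grad(w_n).
    # theta_n = (n - 1) / (n + 2) is one common Nesterov-type choice (an assumption here).
    z_prev, z = np.asarray(z1, float), np.asarray(z2, float)
    for n in range(2, n_iter + 2):
        theta = (n - 1) / (n + 2)
        w = z + theta * (z - z_prev)
        z_prev, z = z, w - tau * grad(w)
    return z

# Minimize f(z) = 0.5 * ||z - c||^2, whose gradient is z - c
c = np.array([1.0, -2.0, 3.0])
print(inertial_gradient(lambda z: z - c, np.zeros(3), np.zeros(3)))  # close to c
```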
Based on the above ideas, the aims of this work were: (1) to construct a new self-adaptive step size algorithm, combining the proximal algorithm and a modified Mann method with inertial extrapolation, to solve the split minimization problem (SMP) (10) and the fixed point problem of a nonexpansive mapping; (2) to establish strong convergence results for the SMP and fixed point problems using the proposed algorithm; (3) to give numerical examples for our algorithm to present its consistency, accuracy, and performance compared to the existing algorithms in the literature.

2. Preliminaries

Some notations used throughout this paper are presented in this section. For a sequence $\{x_n\}$ and an element $x$ in a Hilbert space, $x_n \to x$ and $x_n \rightharpoonup x$ denote strong convergence and weak convergence, respectively.
Lemma 1.
For all $v$ and $w$ in a real Hilbert space $H$, the following hold:
$$\|v - w\|^2 = \|v\|^2 - \|w\|^2 + 2\langle w - v, w\rangle,$$
$$\|v + w\|^2 \le \|v\|^2 + 2\langle w, w + v\rangle, \quad \text{and}$$
$$\|\kappa v + (1 - \kappa)w\|^2 = \kappa\|v\|^2 + (1 - \kappa)\|w\|^2 - \kappa(1 - \kappa)\|v - w\|^2,$$
where $\kappa \in [0, 1]$.
Proposition 1.
Let $S : C \to H_1$ be a mapping with $C \subseteq H_1$, and let $u$ and $v$ be elements of $C$. The mapping $S$ is called:
1. 
monotone if:
$$\langle u - v, Su - Sv\rangle \ge 0;$$
2. 
ξ-inverse strongly monotone (ξ-ism) if:
$$\langle u - v, Su - Sv\rangle \ge \xi\,\|Su - Sv\|^2,$$
for some constant $\xi > 0$;
3. 
nonexpansive if:
$$\|Su - Sv\| \le \|u - v\|;$$
4. 
firmly nonexpansive if:
$$\|Su - Sv\|^2 \le \langle Su - Sv, u - v\rangle.$$
It is well known that the metric projection $P_C$ of $H_1$ onto $C$ is a nonexpansive mapping, where $C \subseteq H_1$ is nonempty, closed, and convex, and it satisfies $\|P_C u - P_C v\|^2 \le \langle u - v, P_C u - P_C v\rangle$ for all $u, v \in H_1$. Moreover, $P_C u$ is characterized by the following properties:
$$\|u - P_C u\|^2 + \|v - P_C u\|^2 \le \|u - v\|^2,$$
and:
$$\|u - v\|^2 \ge \|P_C u - P_C v\|^2 + \|(u - v) - (P_C u - P_C v)\|^2,$$
for all $u \in H_1$ and $v \in C$. We denote by $\Gamma(H_2)$ the collection of proper convex lower semicontinuous functions on $H_2$.
Definition 1.
Refs. [22,23]: Let $g \in \Gamma(H_2)$ and $x \in H_2$. Define the proximal operator of $g$ by:
$$\operatorname{prox}_g(x) = \arg\min_{u \in H_2}\Big\{ g(u) + \frac{1}{2}\,\|u - x\|^2 \Big\}.$$
The proximal operator of $g$ of order $\lambda$ ($\lambda > 0$) is given by:
$$\operatorname{prox}_{\lambda g}(x) = \arg\min_{u \in H_2}\Big\{ g(u) + \frac{1}{2\lambda}\,\|u - x\|^2 \Big\}.$$
Below are some of the valuable properties of the proximal operators.
Property 1.
Refs. [24,25]: Let $g \in \Gamma(H_2)$, $\lambda \in (0, \infty)$, and $Q$ be a nonempty closed convex subset of $H_2$.
1. 
If $g = \delta_Q$, where $\delta_Q$ is the indicator function of $Q$, then $\operatorname{prox}_{\lambda g} = P_Q$ for all $\lambda > 0$;
2. 
$\operatorname{prox}_{\lambda g}$ is firmly nonexpansive;
3. 
$\operatorname{prox}_{\lambda g} = (I + \lambda\,\partial g)^{-1} = J_{\lambda}^{\partial g}$, the resolvent of the subdifferential $\partial g$ of $g$;
4. 
$x = \operatorname{prox}_g(x + y)$ if and only if $y \in \partial g(x)$.
Let $g \in \Gamma(H_2)$. In [26], it was shown that $\operatorname{Fix}(\operatorname{prox}_g) = \arg\min_{H_2} g$. Moreover, it was shown there that $\operatorname{prox}_g$ and $I - \operatorname{prox}_g$ are both firmly nonexpansive.
Lemma 2.
Ref. [27]: Every Hilbert space $H_1$ satisfies Opial's condition; that is, for any sequence $\{\upsilon_n\}$ in $H_1$ with $\upsilon_n \rightharpoonup v$, the inequality
$$\liminf_{n \to \infty}\|\upsilon_n - v\| < \liminf_{n \to \infty}\|\upsilon_n - z\|$$
holds for every $z \in H_1$ with $z \neq v$.
Lemma 3.
Ref. [28]: Let $\{z_n\}$ be a sequence of nonnegative real numbers satisfying the relation:
$$z_{n+1} \le (1 - \beta_n)z_n + \beta_n\gamma_n + \zeta_n, \quad n \ge 0,$$
where the following three conditions hold:
1. 
$\{\beta_n\} \subset [0, 1]$, $\sum \beta_n = \infty$;
2. 
$\limsup_{n \to \infty}\gamma_n \le 0$;
3. 
$\zeta_n \ge 0$, $\sum \zeta_n < \infty$.
Then, $\lim_{n \to \infty} z_n = 0$.
Lemma 4.
Ref. [29]: Let $\{\Lambda_n\} \subset \mathbb{R}$ be a sequence that does not decrease at infinity, in the sense that there is a subsequence $\{\Lambda_{n_j}\}$ of $\{\Lambda_n\}$ such that $\Lambda_{n_j} < \Lambda_{n_j + 1}$ for all $j \ge 0$. For integers $m \ge m_0$ (with $m_0$ large enough), define the integer sequence $\{\eta(m)\}_{m \ge m_0}$ by:
$$\eta(m) = \max\{k \le m \mid \Lambda_k \le \Lambda_{k+1}\}.$$
Then, the sequence $\{\eta(m)\}_{m \ge m_0}$ is nondecreasing and satisfies $\lim_{m \to \infty}\eta(m) = \infty$. Furthermore,
$$\max\{\Lambda_{\eta(m)}, \Lambda_m\} \le \Lambda_{\eta(m)+1},$$
for all $m \ge m_0$.

3. Results

This section proposes an iterative algorithm generating a sequence that strongly converges to a common solution of the split minimization problem (10) and the fixed point problem of a nonexpansive mapping. We establish the convergence theorem of the proposed algorithm under the following setting.
Let $S : H_1 \to H_1$ be a nonexpansive mapping. Denote the set of all solutions of the split minimization problem (10) by $\Gamma$ and the set of all fixed points of the mapping $S$ by $\operatorname{Fix}(S)$. Let $\Omega = \Gamma \cap \operatorname{Fix}(S)$, and suppose that:
$$l(u) = \frac{1}{2}\,\|(I - \operatorname{prox}_{\lambda\tau_n f})u\|^2, \qquad h(u) = \frac{1}{2}\,\|(I - \operatorname{prox}_{\lambda g})Au\|^2. \qquad (21)$$
Then, the gradients of the functions $h$ and $l$ are given by:
$$\nabla l(u) = (I - \operatorname{prox}_{\lambda\tau_n f})u, \qquad \nabla h(u) = A^*(I - \operatorname{prox}_{\lambda g})Au.$$
Lemma 5.
Let $h : H_1 \to \mathbb{R}$ and $l : H_1 \to \mathbb{R}$ be the two functions defined in (21). Then, the gradients $\nabla h$ and $\nabla l$ are Lipschitz continuous.
Proof. 
By the definition $\nabla h(u) := A^*(I - \operatorname{prox}_{\lambda g})Au$, we find that:
$$\begin{aligned}
\|\nabla h(u) - \nabla h(v)\|^2 &= \big\langle A^*\big((I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av\big),\, A^*\big((I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av\big)\big\rangle\\
&= \big\langle (I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av,\, AA^*\big((I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av\big)\big\rangle\\
&\le L\,\|(I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av\|^2, \qquad (22)
\end{aligned}$$
where $L = \|AA^*\|$. On the other hand, by the firm nonexpansiveness of $I - \operatorname{prox}_{\lambda g}$,
$$\begin{aligned}
\langle \nabla h(u) - \nabla h(v), u - v\rangle &= \big\langle A^*\big((Au - \operatorname{prox}_{\lambda g}Au) - (Av - \operatorname{prox}_{\lambda g}Av)\big),\, u - v\big\rangle\\
&= \big\langle (I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av,\, Au - Av\big\rangle\\
&\ge \|(I - \operatorname{prox}_{\lambda g})Au - (I - \operatorname{prox}_{\lambda g})Av\|^2. \qquad (23)
\end{aligned}$$
By combining (22) with (23), we find that:
$$\langle \nabla h(u) - \nabla h(v), u - v\rangle \ge \frac{1}{L}\,\|\nabla h(u) - \nabla h(v)\|^2.$$
Therefore, $\nabla h$ is $\frac{1}{L}$-inverse strongly monotone. Moreover:
$$\|\nabla h(u) - \nabla h(v)\| \le L\,\|u - v\|.$$
Similarly, one can prove that $\nabla l$ is also Lipschitz continuous. This completes the proof.    □
A valuable assumption for analyzing our main theorem is given as follows.
Assumption 1.
Suppose that $\{\rho_n\}$, $\{\theta_n\}$ are positive sequences and $\{\alpha_n\}$, $\{\delta_n\}$ are sequences in the interval $(0, 1)$ that satisfy the following assumptions:
(A1) 
$\inf_n \alpha_n(1 - \alpha_n) > 0$;
(A2) 
$\lim_{n \to \infty}\delta_n = 0$ and $\sum_{n=1}^{\infty}\delta_n = \infty$;
(A3) 
$\inf_n \rho_n(4 - \rho_n) > 0$ with $0 < \rho_n < 4$;
(A4) 
$\theta_n\|x_n - x_{n-1}\| \to 0$ and $\frac{\theta_n}{\delta_n}\|x_n - x_{n-1}\| \to 0$ as $n \to \infty$.
Theorem 1.
Let $H_1$ and $H_2$ be two real Hilbert spaces and $S$ be a nonexpansive mapping on $H_1$. Assume that $A$ is a bounded linear operator from $H_1$ to $H_2$ with adjoint operator $A^*$, and that $f : H_1 \to \mathbb{R} \cup \{+\infty\}$ and $g : H_2 \to \mathbb{R} \cup \{+\infty\}$ are proper lower semicontinuous convex functions. Assume that the SMP (10) is consistent (that is, $\Omega \neq \emptyset$), and let $x_1$ and $v$ be in $H_1$. Then, the sequence $\{x_n\}$ generated by Algorithm 1 strongly converges to $z \in \Omega$, where $z = P_{\Omega}(v)$.
Algorithm 1 A split minimization algorithm.
Initialization: Let $\lambda > 0$ and $x_0, x_1 \in H_1$ be arbitrarily chosen. Choose positive sequences $\{\rho_n\}$, $\{\delta_n\}$, and $\{\alpha_n\}$ satisfying Assumption 1. Set $n = 1$.
Iterative step: Given the current iterate $x_n$, calculate the next iterate as follows:
$$\begin{cases} u_n = x_n + \theta_n(x_n - x_{n-1}),\\ y_n = \operatorname{prox}_{\lambda\tau_n f}\big( (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)\,u_n \big),\\ x_{n+1} = \alpha_n x_n + (1 - \alpha_n)\,S\big[\delta_n v + (1 - \delta_n)y_n\big], \end{cases}$$
where:
$$\tau_n = \begin{cases} \dfrac{\rho_n\, h(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2}, & \text{if } \|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2 \neq 0,\\[2mm] 0, & \text{otherwise.} \end{cases}$$
Stopping criterion: If $x_{n+1} = y_n = u_n = x_n$, stop.
Otherwise, set $n = n + 1$, and go to the Iterative step.
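Before turning to the convergence proof, the following Python sketch shows one way Algorithm 1 could be implemented for the illustrative case $f = \delta_C$, $g = \delta_Q$ with $C$, $Q$ Euclidean unit balls and $S$ the identity, so that both proximal operators reduce to ball projections; the random problem data, the parameter values, and the particular inertial factor below are assumptions made only for this demonstration and are not the settings used in the paper's experiments.

```python
import numpy as np

def project_ball(x, r=1.0):
    # Euclidean projection onto {x : ||x|| <= r}; this is the prox of the ball's indicator.
    nrm = np.linalg.norm(x)
    return x if nrm <= r else r * x / nrm

def algorithm1(A, x0, x1, v, n_iter=500, rho=2.0, alpha=0.25, kappa=3.0):
    # Sketch of Algorithm 1 with prox_{lambda*tau*f} = P_C, prox_{lambda*g} = P_Q
    # (unit balls) and S = identity; all parameter values are illustrative assumptions.
    prox_f = prox_g = project_ball
    S = lambda x: x                                    # trivially nonexpansive
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        delta = 1.0 / (n + 1)                          # delta_n -> 0, sum delta_n = inf
        diff = np.linalg.norm(x - x_prev)
        # inertial factor chosen so that (theta_n / delta_n) * ||x_n - x_{n-1}|| -> 0  (A4)
        theta = 0.0 if diff == 0 else min((n - 1) / (n + kappa - 1), 1.0 / ((n + 1) ** 2 * diff))
        u = x + theta * (x - x_prev)
        Au = A @ u
        res_g = Au - prox_g(Au)                        # (I - prox_g) A u
        grad_h = A.T @ res_g                           # grad h(u) = A^T (I - prox_g) A u
        grad_l = u - prox_f(u)                         # grad l(u) = (I - prox_f) u
        h = 0.5 * np.linalg.norm(res_g) ** 2
        denom = np.linalg.norm(grad_h) ** 2 + np.linalg.norm(grad_l) ** 2
        tau = 0.0 if denom == 0 else rho * h / denom   # self-adaptive step size
        y = prox_f(u - tau * grad_h)
        x_prev, x = x, alpha * x + (1 - alpha) * S(delta * v + (1 - delta) * y)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
sol = algorithm1(A, rng.standard_normal(60), rng.standard_normal(60), v=np.zeros(60))
# residuals of the two constraints: x in C and Ax in Q
print(np.linalg.norm(sol - project_ball(sol)), np.linalg.norm(A @ sol - project_ball(A @ sol)))
```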
Proof. 
Assume that $z = P_{\Omega}(v) \in \Omega$. By using the firm nonexpansiveness of $I - \operatorname{prox}_{\lambda g}$ (see [30,31] for details) and the fact that $(I - \operatorname{prox}_{\lambda g})Az = 0$, we find that:
$$\langle \nabla h(u_n), u_n - z\rangle = \langle \nabla h(u_n) - \nabla h(z), u_n - z\rangle = \big\langle (I - \operatorname{prox}_{\lambda g})Au_n - (I - \operatorname{prox}_{\lambda g})Az,\, Au_n - Az\big\rangle \ge \|(I - \operatorname{prox}_{\lambda g})Au_n - (I - \operatorname{prox}_{\lambda g})Az\|^2 = 2h(u_n),$$
and:
$$\begin{aligned}
\|y_n - z\|^2 &= \big\|\operatorname{prox}_{\lambda\tau_n f}(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - \operatorname{prox}_{\lambda\tau_n f}(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)z\big\|^2\\
&\le \big\|(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)z\big\|^2\\
&\quad - \big\|(I - \operatorname{prox}_{\lambda\tau_n f})\big((I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)z\big)\big\|^2\\
&= \|u_n - z\|^2 - 2\tau_n\big\langle \nabla h(u_n) - \nabla h(z),\, u_n - z\big\rangle + \tau_n^2\,\|\nabla h(u_n) - \nabla h(z)\|^2\\
&\quad - \big\|(I - \operatorname{prox}_{\lambda\tau_n f})\big((I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)z\big)\big\|^2\\
&\le \|u_n - z\|^2 + \tau_n^2\,\|\nabla h(u_n)\|^2 - 4\tau_n h(u_n) - \big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2\\
&\le \|u_n - z\|^2 - \rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2} - \big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2. \qquad (27)
\end{aligned}$$
This implies:
$$\|y_n - z\| \le \|u_n - z\|.$$
Next, we set $w_n = \delta_n v + (1 - \delta_n)y_n$. For the fixed $v \in H_1$, we obtain that:
$$\|w_n - z\| = \|\delta_n v + (1 - \delta_n)y_n - z + \delta_n z - \delta_n z\| \le (1 - \delta_n)\|y_n - z\| + \delta_n\|v - z\| \le (1 - \delta_n)\|u_n - z\| + \delta_n\|v - z\| \le \max\{\|u_n - z\|, \|v - z\|\},$$
and
$$\|u_n - z\| \le \theta_n\|x_n - x_{n-1}\| + \|x_n - z\|.$$
Since $S$ is nonexpansive, we find that:
$$\begin{aligned}
\|x_{n+1} - z\| &= \|\alpha_n x_n + (1 - \alpha_n)Sw_n - z + \alpha_n z - \alpha_n z\| \le (1 - \alpha_n)\|w_n - z\| + \alpha_n\|x_n - z\|\\
&\le \max\{\|w_n - z\|, \|x_n - z\|\} \le \max\{\|x_n - z\|, \|v - z\|, \|x_n - z\| + \theta_n\|x_n - x_{n-1}\|\}\\
&\le \cdots \le \max\{\|x_0 - z\|, \|v - z\|, \|x_0 - z\| + \theta_1\|x_1 - x_0\|\}.
\end{aligned}$$
Thus, $\{x_n\}$ is bounded, and this implies that $\{w_n\}$, $\{y_n\}$, and $\{u_n\}$ are also bounded.
Next, we observe that:
$$\begin{aligned}
\|y_n - u_n\| &= \big\|\operatorname{prox}_{\lambda\tau_n f}(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - u_n\big\|\\
&\le \big\|\operatorname{prox}_{\lambda\tau_n f}(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\| + \big\|(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - u_n\big\|\\
&= \big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\| + \tau_n\|\nabla h(u_n)\|. \qquad (32)
\end{aligned}$$
Next, we claim that $\|x_{n+1} - x_n\| \to 0$ and $x_n \to z$. Consider:
$$\|x_{n+1} - x_n\| = \|\alpha_n x_n - x_n + (1 - \alpha_n)Sw_n\| = \|(1 - \alpha_n)(Sw_n - x_n)\| = (1 - \alpha_n)\|Sw_n - x_n\|, \qquad (33)$$
and:
$$\|u_n - z\|^2 = \|x_n + \theta_n(x_n - x_{n-1}) - z\|^2 \le 2\theta_n\langle x_n - x_{n-1}, u_n - z\rangle + \|x_n - z\|^2 \le 2\theta_n\|u_n - z\|\,\|x_n - x_{n-1}\| + \|x_n - z\|^2. \qquad (34)$$
Moreover, consider:
$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \|(1 - \alpha_n)(Sw_n - z) + \alpha_n(x_n - z)\|^2\\
&\le (1 - \alpha_n)\|Sw_n - z\|^2 + \alpha_n\|x_n - z\|^2 - (1 - \alpha_n)\alpha_n\|Sw_n - x_n\|^2\\
&\le (1 - \alpha_n)\|w_n - z\|^2 + \alpha_n\|x_n - z\|^2 - (1 - \alpha_n)\alpha_n\|Sw_n - x_n\|^2\\
&\le \alpha_n\|x_n - z\|^2 + (1 - \alpha_n)\big[(1 - \delta_n)\|y_n - z\|^2 + \delta_n\|v - z\|^2\big] - (1 - \alpha_n)\alpha_n\|Sw_n - x_n\|^2\\
&\le \alpha_n\|x_n - z\|^2 + \delta_n\|v - z\|^2 - \alpha_n(1 - \alpha_n)\|Sw_n - x_n\|^2\\
&\quad + (1 - \alpha_n)\Big[\|u_n - z\|^2 - \rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2} - \big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2\Big]\\
&\le \|x_n - z\|^2 + 2\theta_n\|u_n - z\|\,\|x_n - x_{n-1}\| + \delta_n\|v - z\|^2 - \alpha_n(1 - \alpha_n)\|Sw_n - x_n\|^2\\
&\quad - (1 - \alpha_n)\rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2} - (1 - \alpha_n)\big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2.
\end{aligned}$$
Therefore, we obtain:
$$(1 - \alpha_n)\rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2} + \alpha_n(1 - \alpha_n)\|x_n - Sw_n\|^2 + (1 - \alpha_n)\big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2 \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2 + 2\theta_n\|u_n - z\|\,\|x_n - x_{n-1}\| + \delta_n\|v - z\|^2. \qquad (36)$$
We next show that $\|x_n - z\| \to 0$ by considering two possible cases.
Case 1. Assume that $\{\|x_n - z\|^2\}$ is eventually non-increasing; that is, there exists $n_0 \in \mathbb{N}$ such that $\|x_{n+1} - z\|^2 \le \|x_n - z\|^2$ for each $n \ge n_0$. Then, the sequence $\{\|x_n - z\|\}$ converges, and so:
$$\lim_{n \to \infty}\big(\|x_{n+1} - z\|^2 - \|x_n - z\|^2\big) = 0.$$
Since $\lim_{n \to \infty}\delta_n = 0$ and Assumption (A4) holds, the right-hand side of (36) tends to zero, and we obtain:
$$\lim_{n \to \infty}\rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2} = 0,$$
$$\lim_{n \to \infty}\alpha_n(1 - \alpha_n)\|Sw_n - x_n\| = 0,$$
and:
$$\lim_{n \to \infty}\big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2 = 0.$$
We then obtain, by using $\inf_n \alpha_n(1 - \alpha_n) > 0$, $\inf_n \rho_n(4 - \rho_n) > 0$, and the boundedness of $\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2$, that:
$$\lim_{n \to \infty} h(u_n) = 0,$$
$$\lim_{n \to \infty}\|Sw_n - x_n\| = 0,$$
and:
$$\lim_{n \to \infty}\big\|(I - \operatorname{prox}_{\lambda\tau_n f})(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n\big\|^2 = 0.$$
Thus, we obtain by using (33) that:
$$\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0.$$
Moreover, since $\|u_n - x_n\| = \theta_n\|x_n - x_{n-1}\| \to 0$ by Assumption (A4), it is easy to see that:
$$\lim_{n \to \infty}\|u_n - x_n\| = 0.$$
By applying (36) and (39) in Formula (32), we find that:
$$\lim_{n \to \infty}\|y_n - u_n\| = 0.$$
We next observe that:
$$\|w_n - u_n\| = \|\delta_n v + (1 - \delta_n)y_n - u_n\| \le \delta_n\|v - y_n\| + \|y_n - u_n\|.$$
By using the fact that $\delta_n \to 0$, we find that:
$$\lim_{n \to \infty}\|w_n - u_n\| = 0.$$
Moreover, we observe that:
$$\|Sw_n - w_n\| \le \|Sw_n - x_n\| + \|u_n - x_n\| + \|w_n - u_n\|.$$
We then obtain by using (38), (41), and (44) that:
$$\lim_{n \to \infty}\|Sw_n - w_n\| = 0.$$
Next, we observe that:
$$\|y_n - \operatorname{prox}_{\lambda\tau_n f}y_n\| \le \big\|(I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n - y_n\big\| \le \|u_n - y_n\| + \tau_n\|\nabla h(u_n)\|.$$
Thus, we obtain immediately that:
$$\lim_{n \to \infty}\|y_n - \operatorname{prox}_{\lambda\tau_n f}y_n\| = 0.$$
We next show that $\limsup_{n \to \infty}\langle v - z, Sw_n - z\rangle \le 0$, where $z = P_{\Omega}(v)$. To prove this, we can choose a subsequence $\{w_{n_i}\}$ of $\{w_n\}$ with:
$$\limsup_{n \to \infty}\langle v - z, Sw_n - z\rangle = \lim_{i \to \infty}\langle v - z, Sw_{n_i} - z\rangle.$$
Since $\{w_{n_i}\}$ is a bounded sequence, we can pass to a further subsequence, again denoted $\{w_{n_i}\}$, that converges weakly to some $w \in H_1$, that is, $w_{n_i} \rightharpoonup w$. By using the fact that $\|Sw_n - w_n\| \to 0$, we find that $Sw_{n_i} \rightharpoonup w$.
We next show $w \in \Omega$ in two steps. First, we show that $w$ is a fixed point of $S$. By contradiction, assume that $w \notin \operatorname{Fix}(S)$. Since $w_{n_i} \rightharpoonup w$ and $Sw \neq w$, Opial's condition gives:
$$\liminf_{i \to \infty}\|w_{n_i} - w\| < \liminf_{i \to \infty}\|w_{n_i} - Sw\| \le \liminf_{i \to \infty}\big(\|w_{n_i} - Sw_{n_i}\| + \|Sw_{n_i} - Sw\|\big) \le \liminf_{i \to \infty}\|w_{n_i} - w\|.$$
This is a contradiction, which implies $w \in \operatorname{Fix}(S)$. Second, we show $w \in \Gamma$. Since $w$ is a weak limit point of $\{w_n\}$, there is a subsequence $\{w_{n_i}\} \subseteq \{w_n\}$ such that $w_{n_i} \rightharpoonup w$. Since $h$ is lower semicontinuous, we find that:
$$0 \le h(w) \le \liminf_{i \to \infty} h(w_{n_i}) = \lim_{n \to \infty} h(w_n) = 0.$$
This implies:
$$h(w) = \frac{1}{2}\,\|(I - \operatorname{prox}_{\lambda g})Aw\|^2 = 0.$$
Then, $Aw = \operatorname{prox}_{\lambda g}(Aw)$, and so, $0 \in \partial g(Aw)$. This means that $Aw$ is a minimizer of the function $g$.
Similarly, since $l$ is lower semicontinuous, we find that:
$$0 \le l(w) \le \liminf_{i \to \infty} l(w_{n_i}) = \lim_{n \to \infty} l(w_n) = 0.$$
This implies:
$$l(w) = \frac{1}{2}\,\|(I - \operatorname{prox}_{\lambda\tau_n f})w\|^2 = 0.$$
Thus, $w$ is a fixed point of the proximal operator $\operatorname{prox}_{\lambda\tau_n f}$, that is, $0 \in \partial f(w)$. This means that $w$ is a minimizer of the function $f$, and hence $w \in \Gamma$. Therefore, we can conclude that $w \in \Omega$.
According to the properties of metric projections, since $w \in \Omega$ and $z = P_{\Omega}(v)$, we have $\langle v - P_{\Omega}(v), w - P_{\Omega}(v)\rangle \le 0$. Consider,
$$\limsup_{n \to \infty}\langle z - v, z - x_{n+1}\rangle = \limsup_{n \to \infty}\langle z - v, z - Sw_n\rangle = \lim_{i \to \infty}\langle z - v, z - Sw_{n_i}\rangle = \langle z - v, z - w\rangle \le 0.$$
In the final step, we show that $\|x_n - z\| \to 0$ as $n \to \infty$. We observe that:
$$\|w_n - z\|^2 = \|\delta_n(z - v) + (1 - \delta_n)(z - y_n)\|^2 \le (1 - \delta_n)\|z - y_n\|^2 + 2\delta_n\langle z - v, z - x_{n+1}\rangle,$$
and:
$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \langle (1 - \alpha_n)(Sw_n - z) + \alpha_n(x_n - z),\, x_{n+1} - z\rangle\\
&= \alpha_n\langle z - x_n, z - x_{n+1}\rangle + (1 - \alpha_n)\langle z - Sw_n, z - x_{n+1}\rangle\\
&\le \frac{\alpha_n}{2}\|z - x_n\|^2 + \frac{\alpha_n}{2}\|z - x_{n+1}\|^2 + \frac{1 - \alpha_n}{2}\big[\|z - w_n\|^2 + \|z - x_{n+1}\|^2\big]\\
&= \frac{\alpha_n}{2}\|z - x_n\|^2 + \frac{1}{2}\|z - x_{n+1}\|^2 + \frac{1 - \alpha_n}{2}\|z - w_n\|^2.
\end{aligned}$$
This implies:
$$\|x_{n+1} - z\|^2 \le \alpha_n\|z - x_n\|^2 + (1 - \alpha_n)\|z - w_n\|^2.$$
By combining (27) with (34) and (51), we find that:
$$\begin{aligned}
\|x_{n+1} - z\|^2 &\le \alpha_n\|z - x_n\|^2 + (1 - \alpha_n)\big[(1 - \delta_n)\|z - y_n\|^2 + 2\delta_n\langle z - v, z - x_{n+1}\rangle\big]\\
&\le \alpha_n\|x_n - z\|^2 + (1 - \alpha_n)(1 - \delta_n)\Big[\|x_n - z\|^2 + 2\theta_n\|u_n - z\|\,\|x_n - x_{n-1}\| - \rho_n(4 - \rho_n)\,\frac{h^2(u_n)}{\|\nabla h(u_n)\|^2 + \|\nabla l(u_n)\|^2}\Big]\\
&\quad + 2(1 - \alpha_n)\delta_n\langle z - v, z - x_{n+1}\rangle\\
&\le \big[1 - (1 - \alpha_n)\delta_n\big]\|x_n - z\|^2 + (1 - \alpha_n)\delta_n\Big\{2\langle z - v, z - x_{n+1}\rangle + 2(1 - \delta_n)\,\frac{\theta_n}{\delta_n}\,\|x_n - x_{n-1}\|\,\|u_n - z\|\Big\}.
\end{aligned}$$
Thus, we obtain by using Lemma 3, Assumption (A4), Inequality (50), and the boundedness of $\{u_n\}$ that $x_n \to z = P_{\Omega}(v)$.
Case 2. Assume that $\{\|x_n - z\|^2\}$ does not decrease at infinity in the sense of Lemma 4. By applying Assumptions (A1), (A2), and (A4) to (36), we find that $\|Sw_n - x_n\| \to 0$. Thus, we obtain by using (33) that $\|x_{n+1} - x_n\| \to 0$.
Set $\Lambda_n = \|x_n - z\|^2$, and for each $n \ge n_0$ (where $n_0$ is large enough), define a mapping $\eta : \mathbb{N} \to \mathbb{N}$ as follows:
$$\eta(n) := \max\{k \in \mathbb{N} : \Lambda_k \le \Lambda_{k+1}, \ k \le n\}.$$
Thus, $\eta(n) \to \infty$ as $n$ tends to infinity, and for each $n \ge n_0$,
$$0 \le \Lambda_{\eta(n)} \le \Lambda_{\eta(n)+1}.$$
We then obtain by using Inequality (36), applied with $n$ replaced by $\eta(n)$ and $\Lambda_{\eta(n)} - \Lambda_{\eta(n)+1} \le 0$, that:
$$\begin{aligned}
(1 - \alpha_{\eta(n)})\rho_{\eta(n)}(4 - \rho_{\eta(n)})\,&\frac{h^2(u_{\eta(n)})}{\|\nabla h(u_{\eta(n)})\|^2 + \|\nabla l(u_{\eta(n)})\|^2} + \alpha_{\eta(n)}(1 - \alpha_{\eta(n)})\|Sw_{\eta(n)} - x_{\eta(n)}\|^2\\
&+ (1 - \alpha_{\eta(n)})\big\|(I - \operatorname{prox}_{\lambda\tau_{\eta(n)} f})(I - \tau_{\eta(n)} A^*(I - \operatorname{prox}_{\lambda g})A)u_{\eta(n)}\big\|^2\\
&\le \Lambda_{\eta(n)} - \Lambda_{\eta(n)+1} + \delta_{\eta(n)}\|v - z\|^2 + 2\theta_{\eta(n)}\|u_{\eta(n)} - z\|\,\|x_{\eta(n)} - x_{\eta(n)-1}\|\\
&\le \delta_{\eta(n)}\|v - z\|^2 + 2\theta_{\eta(n)}\|u_{\eta(n)} - z\|\,\|x_{\eta(n)} - x_{\eta(n)-1}\|.
\end{aligned}$$
Since $\delta_{\eta(n)} \to 0$ and $\theta_{\eta(n)}\|x_{\eta(n)} - x_{\eta(n)-1}\| \to 0$ as $n$ tends to infinity, we observe that:
$$\lim_{n \to \infty} h(u_{\eta(n)}) = 0,$$
$$\lim_{n \to \infty}\|Sw_{\eta(n)} - x_{\eta(n)}\| = 0,$$
$$\lim_{n \to \infty}\big\|(I - \operatorname{prox}_{\lambda\tau_{\eta(n)} f})(I - \tau_{\eta(n)} A^*(I - \operatorname{prox}_{\lambda g})A)u_{\eta(n)}\big\|^2 = 0,$$
and, arguing as in Case 1:
$$\limsup_{n \to \infty}\langle v - z, x_{\eta(n)+1} - z\rangle \le 0.$$
Moreover, we obtain that:
$$\Lambda_{\eta(n)+1} \le \big[1 - (1 - \alpha_{\eta(n)})\delta_{\eta(n)}\big]\Lambda_{\eta(n)} + (1 - \alpha_{\eta(n)})\delta_{\eta(n)}\Big\{2\langle z - v, z - x_{\eta(n)+1}\rangle + 2(1 - \delta_{\eta(n)})\,\frac{\theta_{\eta(n)}}{\delta_{\eta(n)}}\,\|u_{\eta(n)} - z\|\,\|x_{\eta(n)} - x_{\eta(n)-1}\|\Big\}.$$
This implies that:
$$\Lambda_{\eta(n)} \le 2\langle v - z, x_{\eta(n)+1} - z\rangle + 2(1 - \delta_{\eta(n)})\,\frac{\theta_{\eta(n)}}{\delta_{\eta(n)}}\,\|u_{\eta(n)} - z\|\,\|x_{\eta(n)} - x_{\eta(n)-1}\|.$$
Thus, we obtain:
$$\limsup_{n \to \infty}\Lambda_{\eta(n)} = \limsup_{n \to \infty}\|x_{\eta(n)} - z\|^2 = 0.$$
We now obtain by using Lemma 4 that:
$$0 \le \|x_n - z\|^2 = \Lambda_n \le \max\{\Lambda_{\eta(n)}, \Lambda_n\} \le \Lambda_{\eta(n)+1} \to 0,$$
as $n \to \infty$. This implies that $x_n \to z$ with $z = P_{\Omega}(v)$. The proof is complete. □
Remark 1. 
(a) 
If we put $\theta_n = 0$, $S \equiv I$, $\alpha_n = 0$, and $\delta_n = 0$ for all $n \ge 2$ in our proposed algorithm, then the algorithm (11) of Moudafi and Thakur is recovered. Moreover, we obtained a strong convergence theorem, while Moudafi and Thakur [10] only obtained a weak convergence theorem;
(b) 
If we put $A \equiv I$, $S \equiv I$, $f \equiv g \equiv 0$, and $\delta_n = 0$ for all $n \ge 2$ in our proposed algorithm, then Algorithm (1.2) in [32] is recovered;
(c) 
If we put $\theta_n = 0$, $A \equiv I$, $S \equiv I$, $f \equiv g \equiv 0$, and $\delta_n = 0$ for all $n \ge 2$ in our proposed algorithm, then the Mann iteration algorithm in [13] is recovered. Moreover, we obtained a strong convergence theorem, while Mann [13] only obtained a weak convergence theorem;
(d) 
As a particular choice, the extrapolation factor $\theta_n$ in our proposed algorithm can be chosen as $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min\Big\{\dfrac{n - 1}{n + \kappa - 1},\ \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|}\Big\}, & \text{if } x_n \neq x_{n-1},\\[2mm] \dfrac{n - 1}{n + \kappa - 1}, & \text{otherwise,} \end{cases}$$
for each integer $n \ge 3$ and a positive sequence $\{\epsilon_n\}$ with $\frac{\epsilon_n}{\delta_n} \to 0$ as $n \to \infty$. This choice was recently derived in [33,34] as an inertial extrapolation step.

4. Applications and Numerical Results

This section provides numerical experiments to illustrate the performance of Algorithm 1 and to compare it with and without the inertial term. Moreover, we present an experiment comparing our scheme with the Abbas et al. algorithms [12]. All code was written in MATLAB 2017b and run on a MacBook Pro (2012) with a 2.5 GHz Intel Core i5 processor.
First, we illustrate the performance of our proposed algorithm by comparing it with and without the inertial term in the following experiment:
Example 1.
Suppose $C = Q = \{x \in \mathbb{R}^{100} : \|x\|_2 \le 1\}$, and let $Ax = x$. In problem (10), assume that $f = \delta_C$ and $g = \delta_Q$, where $\delta$ is the indicator function. Then:
$$\operatorname{prox}_{\lambda\tau_n f}(x) = P_C(x) = P_Q(x) = \operatorname{prox}_{\lambda g}(x) = \begin{cases} \dfrac{x}{\|x\|_2}, & \text{if } \|x\|_2 > 1,\\[2mm] x, & \text{otherwise.} \end{cases}$$
Thus, problem (10) becomes the SFP (1). We next took the parameters $\rho_n = 2$, $\alpha_n = \frac{1}{4000}$, and $\delta_n = \frac{1}{n+1}$. Thus, by Algorithm 1, we obtained:
$$\begin{cases} u_n = x_n + \theta_n(x_n - x_{n-1}),\\ y_n = \operatorname{prox}_{\lambda\tau_n f}\big( (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n \big),\\ x_{n+1} = \dfrac{1}{4000}\,x_n + \Big(1 - \dfrac{1}{4000}\Big)\Big[\dfrac{1}{n+1}\,v + \Big(1 - \dfrac{1}{n+1}\Big)y_n\Big]. \end{cases}$$
We then provide a comparison of the convergence of Algorithm 1 with:
$$\theta_n = \begin{cases} 0.5, & \text{if } x_n - x_{n-1} = 0,\\[1mm] \min\Big\{0.5,\ \dfrac{1}{(n+1)\|x_n - x_{n-1}\|^2}\Big\}, & \text{if } x_n - x_{n-1} \neq 0, \end{cases}$$
and Algorithm 1 with $\theta_n = 0$, in terms of the number of iterations, with the stopping criterion $\|A^*(I - P_Q)Ax_n\|_2^2 + \|(I - P_C)x_n\|_2^2 < 10^{-2}$. The result of this experiment is reported in Figure 1.
Remark 2. 
By observing the result of Example 1, we found that our proposed algorithm with inertia was faster and more efficient than our proposed algorithm without inertia ( θ n = 0 ).
Second, we used the example in Abbas et al. [12] to show the performance of our algorithm by comparing our proposed algorithm with Algorithms (12) and (13) in terms of CPU time, as in the following experiment:
Example 2.
Let $H_1 = H_2 = \mathbb{R}^N$ and $g = \|\cdot\|_2$ be the Euclidean norm on $\mathbb{R}^N$. The metric projection onto the Euclidean unit ball $B$ is defined by:
$$P_B(x) = \begin{cases} \dfrac{x}{\|x\|_2}, & \text{if } \|x\|_2 > 1,\\[2mm] x, & \text{otherwise.} \end{cases}$$
Thus, the proximal operator (the block soft thresholding) [24] is given by:
$$\operatorname{prox}_g(x) = \begin{cases} x - \dfrac{x}{\|x\|_2}, & \text{if } \|x\|_2 \ge 1,\\[2mm] 0, & \text{otherwise.} \end{cases}$$
For $i = 1, 2, \ldots, N$ and $x_i \in \mathbb{R}$, let
$$h_i(x_i) := \max\{|x_i| - 1, 0\},$$
and:
$$f(x) := \sum_{i=1}^{N} h_i(x_i).$$
Then (see [35]),
$$\operatorname{prox}_{h_i}(x_i) = \begin{cases} x_i, & \text{if } |x_i| < 1,\\ \operatorname{sign}(x_i), & \text{if } 1 \le |x_i| \le 2,\\ x_i - \operatorname{sign}(x_i), & \text{if } |x_i| > 2, \end{cases}$$
and:
$$\operatorname{prox}_f(x) = \big(\operatorname{prox}_{h_1}(x_1), \operatorname{prox}_{h_2}(x_2), \ldots, \operatorname{prox}_{h_N}(x_N)\big),$$
for all $x \in \mathbb{R}^N$. Assume that $Ax = x$, and let us consider the split minimization problem (SMP) (10) as follows: find
$$z \in \arg\min f \ \text{ such that } \ Az \in \arg\min g. \qquad (64)$$
It is easy to check that $x^* = (0, 0, \ldots, 0)$ belongs to the set of solutions of Problem (64). We now took $\epsilon_n = \frac{1}{n+1}$ and:
$$\theta_n = \begin{cases} \min\Big\{\dfrac{1}{(n+1)\|x_n - x_{n-1}\|_2^2},\ 0.5\Big\}, & \text{if } x_n \neq x_{n-1},\\[2mm] 0.5, & \text{otherwise,} \end{cases}$$
for all $n \ge 1$. We next took $S \equiv I$; then, we obtained by Algorithm 1 that:
$$\begin{cases} u_n = x_n + \theta_n(x_n - x_{n-1}),\\ x_{n+1} = \operatorname{prox}_{\lambda\tau_n f}\big( (I - \tau_n A^*(I - \operatorname{prox}_{\lambda g})A)u_n \big). \end{cases}$$
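A brief sketch of the two proximal mappings used in this example is given below; note that the $|x_i| > 2$ branch follows the reconstruction $x_i - \operatorname{sign}(x_i)$ given above, which should be treated as an assumption about the original formula.

```python
import numpy as np

def prox_g(x):
    # Block soft thresholding: proximal mapping of the Euclidean norm g = ||.||_2.
    nrm = np.linalg.norm(x)
    return np.zeros_like(x) if nrm < 1.0 else x - x / nrm

def prox_f(x):
    # Componentwise proximal mapping of h_i(t) = max(|t| - 1, 0).
    out = np.copy(x)
    mid = (np.abs(x) >= 1) & (np.abs(x) <= 2)
    big = np.abs(x) > 2
    out[mid] = np.sign(x[mid])
    out[big] = x[big] - np.sign(x[big])   # reconstructed |t| > 2 branch
    return out

print(prox_f(np.array([-3.0, -1.5, 0.3, 1.2, 2.7])))  # -> [-2., -1., 0.3, 1., 1.7]
```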
The iterative schemes (12) and (13) take the forms:
$$x_{n+1} = \operatorname{prox}_{\lambda\gamma_n f}\Big(\Big(1 - \frac{1}{n+1}\Big)x_n - \gamma_n(I - \operatorname{prox}_{\lambda g})x_n\Big)$$
and:
$$x_{n+1} = \Big(1 - \frac{1}{n+1}\Big)\operatorname{prox}_{\lambda\gamma_n f}\big( (I - \gamma_n(I - \operatorname{prox}_{\lambda g}))x_n \big),$$
respectively, where $\gamma_n$ is as given in [12].
We now provide a comparison of the convergence of the iterative schemes (12) and (13) in Abbas et al.'s work [12] with our proposed algorithm with $S \equiv I$, in terms of CPU time, where the initial points $x_1, x_2$ were randomly generated vectors in $\mathbb{R}^N$. We tested this experiment with different choices of $N$: $N = 100, 500, 1000, 2000$.
We used $\frac{\|x_{n+1} - x_n\|}{\|x_2 - x_1\|} < 10^{-2}$ as the stopping criterion. The result of this experiment is reported in Table 1.
Remark 3. 
By observing the result of Example 2, we found that our proposed algorithm was more efficient than Abbas et al.'s Algorithms (12) and (13) in terms of CPU time.
Finally, we show the average error of our algorithm as the following experiment:
Example 3.
Let $H_1$, $H_2$, $g$, $\operatorname{prox}_g$, $f$, and $\operatorname{prox}_f$ be defined as in Example 2. In this experiment, we took $x_1 := x_1(i) = (x_1^1(i), x_1^2(i), \ldots, x_1^{10}(i))$, where $i = 1, 2, \ldots, 20$. Let $\{x_n(i)\}$ be the sequence generated by Algorithm 1 with the parameters $\rho_n = 2$, $v = 0.0025$, $\alpha_n = 0.00025$, and $\delta_n = \frac{1}{n+1}$. The mean error is given by:
$$\operatorname{Error}(x_n) := \frac{1}{20}\sum_{i=1}^{20}\|x_{n+1}(i) - x_n(i)\|.$$
We used $\operatorname{Error}(x_n) < 10^{-2}$ as the stopping criterion of this experiment. We then observed that the sequence $\{x_n\}$ generated by Algorithm 1 converged to a solution, with $\operatorname{Error}(x_n)$ converging to zero. Figure 2 shows the average error of our method for three groups of 20 initial points.
Remark 4. 
By observing the result of Example 3, we found that the choice of the initial value did not affect the ability of our algorithm to achieve the solutions.

5. Conclusions

This paper discussed split minimization problems and fixed point problems of a nonexpansive mapping in the framework of Hilbert spaces. We introduced a new iterative scheme that combines the proximal algorithm and a modified Mann method with inertial extrapolation and a self-adaptive step size. The main advantage of the proposed algorithm is that there is no need to compute the operator norm of $A$. Moreover, we illustrated the performance of our proposed algorithm by comparing it with other existing methods in terms of CPU time. The obtained results improve and extend various existing results in the literature.

Author Contributions

The authors contributed equally in writing this article. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by King Mongkut's University of Technology North Bangkok, Contract No. KMUTNB-PHD-62-03.

Acknowledgments

The authors would like to thank the Department of Mathematics, Faculty of Applied Science, King Mongkut's University of Technology North Bangkok.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441.
3. Yao, Y.; Liou, Y.C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
4. Wang, F. Polyak's gradient method for split feasibility problem constrained by level sets. Numer. Algorithms 2018, 77, 925–938.
5. Dong, Q.; Tang, Y.; Cho, Y.; Rassias, T.M. "Optimal" choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360.
6. Dadashi, V.; Postolache, M. Forward–backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019, 9, 89–99.
7. Paunovic, L. Teorija Apstraktnih Metričkih Prostora–Neki Novi Rezultati; University of Priština: Leposavić, Serbia, 2017.
8. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179.
9. López, G.; Martín Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
10. Moudafi, A.; Thakur, B. Solving proximal split feasibility problems without prior knowledge of operator norms. Optim. Lett. 2014, 8, 2099–2110.
11. Moudafi, A.; Xu, H.K. A DC regularization of split minimization problems. Appl. Anal. Optim. 2018, 2, 285–297.
12. Abbas, M.; AlShahrani, M.; Ansari, Q.; Iyiola, O.S.; Shehu, Y. Iterative methods for solving proximal split minimization problems. Numer. Algorithms 2018, 78, 193–215.
13. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
14. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021.
15. Podilchuk, C.I.; Mammone, R.J. Image recovery by convex projections using a least-squares constraint. JOSA A 1990, 7, 517–521.
16. Youla, D. Mathematical theory of image restoration by the method of convex projections. In Image Recovery: Theory and Application; Academic Press: Orlando, FL, USA, 1987; pp. 29–77.
17. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
18. Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer Science & Business Media: Berlin, Germany, 2003; Volume 87.
19. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394.
20. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615.
21. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
22. Moreau, J.J. Propriétés des applications "prox". Comptes Rendus Hebd. Séances Acad. Sci. 1963, 256, 1069–1071.
23. Moreau, J.J. Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. Fr. 1965, 93, 273–299.
24. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
25. Micchelli, C.A.; Shen, L.; Xu, Y. Proximity algorithms for image models: Denoising. Inverse Probl. 2011, 27, 045009.
26. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011; Volume 408.
27. Opial, Z. Weak convergence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 531–537.
28. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
29. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
30. Chaux, C.; Combettes, P.L.; Pesquet, J.C.; Wajs, V.R. A variational formulation for frame-based inverse problems. Inverse Probl. 2007, 23, 1495.
31. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer Science & Business Media: Berlin, Germany, 2009; Volume 317.
32. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
33. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
34. Attouch, H.; Peypouquet, J. The rate of convergence of Nesterov's accelerated forward-backward method is actually faster than 1/k². SIAM J. Optim. 2016, 26, 1824–1834.
35. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin, Germany, 2011; pp. 185–212.
Figure 1. Comparing Algorithm 1 with Algorithm 1 defined without the inertial term.
Figure 2. Computation results for Example 3.
Table 1. Computation results for Example 2.
N                                        100        500        1000       2000
Algorithm (12)         CPU (seconds)     0.602975   0.556099   0.603340   0.674949
Algorithm (13)         CPU (seconds)     0.222086   0.267980   0.262785   0.270231
Our proposed algorithm CPU (seconds)     0.120814   0.228486   0.238006   0.247131