Article

Inertial Krasnoselskii–Mann Method in Banach Spaces

Yekini Shehu 1,†, Aviv Gibali 2,3,*,†
1 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 Department of Mathematics, ORT Braude College, 2161002 Karmiel, Israel
3 The Center for Mathematics and Scientific Computation, University of Haifa, Mt. Carmel, 3498838 Haifa, Israel
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2020, 8(4), 638; https://doi.org/10.3390/math8040638
Submission received: 28 February 2020 / Revised: 18 April 2020 / Accepted: 20 April 2020 / Published: 21 April 2020
(This article belongs to the Special Issue Applications of Inequalities and Functional Analysis)

Abstract: In this paper, we give a general inertial Krasnoselskii–Mann algorithm for solving inclusion problems in Banach spaces. First, we establish weak convergence in real uniformly convex and q-uniformly smooth Banach spaces for finding fixed points of nonexpansive mappings. Then, strong convergence is obtained for the inertial generalized forward-backward splitting method for the inclusion problem. Our results extend many recent and related results obtained in real Hilbert spaces.

1. Introduction

Let $X$ be a real Banach space, and let $A : X \to X$ be a single-valued operator and $B : X \to 2^X$ a set-valued operator. We consider the following inclusion problem:
$$ \text{find } \hat{x} \in X \text{ such that } 0 \in A\hat{x} + B\hat{x}. \tag{1} $$
Such inclusion problems are quite general, since they include as special cases various problems such as non-smooth convex optimization problems, variational inequalities and convex–concave saddle-point problems, to name a few (see, e.g., [1,2,3,4,5]).
A known and popular method for solving problem (1) is the forward-backward splitting method [6,7], which is defined in the following manner: $x_1 \in X$ and
$$ x_{n+1} = J_r^B(x_n - rAx_n), \quad n \geq 1, \tag{2} $$
where $J_r^B := (I + rB)^{-1}$, $r > 0$, is called the "resolvent of $B$", provided that $B$ is maximally monotone and $A$ is co-coercive (or satisfies equivalent assumptions). The forward-backward splitting method (2) includes the proximal point algorithm (see, e.g., [8,9,10,11,12]) and the gradient method (see, for example, [2,13]). It has been shown that, in general, (2) only converges weakly to a zero of (1) (see, for example, [3,6,14,15]).
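To fix ideas, the following minimal Python sketch (ours, not part of the original paper) implements iteration (2) for user-supplied callables; in the toy instance below, $B = \partial\|\cdot\|_1$, whose resolvent is soft-thresholding.

```python
import numpy as np

def forward_backward(resolvent_B, A, x0, r=0.5, n_iter=200):
    # Iterate x_{n+1} = J_r^B(x_n - r*A(x_n)), i.e., the scheme (2);
    # resolvent_B(z, r) should evaluate (I + r*B)^{-1}(z).
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = resolvent_B(x - r * A(x), r)  # forward step, then backward step
    return x

# Toy instance: minimize (1/2)||x - b||^2 + ||x||_1, so A(x) = x - b
# (co-coercive) and J_r^B is soft-thresholding.
b = np.array([3.0, -0.2, 0.7])
A = lambda x: x - b
soft = lambda z, r: np.sign(z) * np.maximum(np.abs(z) - r, 0.0)
print(forward_backward(soft, A, np.zeros(3), r=0.9))  # approx. [2., 0., 0.]
```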
The following method was introduced in [16] (see also [14]) for finding a zero of (1) when $A = 0$ and $B$ is a maximal monotone operator: $x_0, x_1 \in H$ and
$$ \begin{aligned} y_n &= x_n + \theta_n(x_n - x_{n-1}), \\ x_{n+1} &= J_{r_n}^B(y_n), \quad n \geq 1. \end{aligned} \tag{3} $$
Alvarez and Attouch [16] established the weak convergence of (3) under appropriate conditions on $\{\theta_n\}$ and $\{r_n\}$. Several other modifications of (2) with an inertial extrapolation step have been considered in Hilbert spaces by many authors; see, for example, [17,18,19,20,21].
Based on the above-mentioned results [19,22,23,24,25,26], our main contribution in this paper is the following. We extend the results of [17] concerning the inertial Krasnoselskii–Mann iteration for fixed points of nonexpansive mappings to uniformly convex and q-uniformly smooth Banach spaces. We also extend the forward-backward splitting method with an inertial extrapolation step for solving (1) from Hilbert spaces to Banach spaces. While the mentioned results establish only weak convergence, we also provide a strong convergence analysis in Banach spaces.
The outline of the paper is as follows. We first recall some basic definitions and results in Section 2. Our algorithms are presented and analysed in Section 3. In Section 4, infinite-dimensional numerical examples are presented, and final remarks and conclusions are given in Section 5.

2. Preliminaries

Let $X$ be a real Banach space. The modulus of convexity of $X$ is the function $\delta : (0,2] \to [0,1]$ defined by
$$ \delta(\varepsilon) = \inf\left\{ 1 - \frac{\|x+y\|}{2} : x, y \in X,\ \|x\| = \|y\| = 1,\ \|x - y\| \geq \varepsilon \right\}. \tag{4} $$
$X$ is said to be uniformly convex if $\delta(\varepsilon) > 0$ for all $\varepsilon \in (0,2]$.
The modulus of smoothness of $X$ is the function $\rho : \mathbb{R}^+ \to \mathbb{R}^+$ defined by
$$ \rho(t) = \sup\left\{ \frac{\|x + ty\| + \|x - ty\|}{2} - 1 : x, y \in X,\ \|x\| = \|y\| = 1 \right\}. \tag{5} $$
We say that $X$ is uniformly smooth if $\lim_{t \to 0} \rho(t)/t = 0$, and that $X$ is q-uniformly smooth, with $1 < q \leq 2$, if there exists a constant $k_q > 0$ such that $\rho(t) \leq k_q t^q$ for $t > 0$. If $X$ is q-uniformly smooth, then it is uniformly smooth (see, e.g., [27]). Let $X^*$ denote the dual space of $X$. The generalized duality mapping $J_q$ ($q > 1$) of $X$ is defined by $J_q(x) := \{ j_q(x) \in X^* : \langle x, j_q(x) \rangle = \|x\|^q,\ \|j_q(x)\| = \|x\|^{q-1} \}$, $x \in X$, where $\langle \cdot, \cdot \rangle$ denotes the duality pairing between $X$ and $X^*$. In particular, we call $J_2 := J$ the normalized duality mapping on $X$. Furthermore (see, e.g., [28] (p. 1128)),
$$ J_q(x) = \|x\|^{q-2} J(x), \quad x \neq 0. \tag{6} $$
It is well known (see, for example, [27]) that $X$ is uniformly smooth if and only if the duality mapping $J_q$ is single-valued and norm-to-norm uniformly continuous on bounded subsets of $X$.
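For orientation, we recall a standard family of examples (from, e.g., [27,28]; this summary is ours and not part of the original text): for $1 < p < \infty$, the Lebesgue space $L^p$ is both uniformly convex and uniformly smooth, and
$$ L^p \text{ is } \begin{cases} p\text{-uniformly smooth}, & 1 < p \leq 2, \\ 2\text{-uniformly smooth}, & p \geq 2. \end{cases} $$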
Let $B : X \to 2^X$. We denote the domain of $B$ by $D(B) = \{ x \in X : Bx \neq \emptyset \}$ and its range by $R(B) = \bigcup\{ Bz : z \in D(B) \}$. We say that $B$ is accretive if, for each $x, y \in D(B)$, there exists $j(x - y) \in J(x - y)$ such that (see, for example, [25])
$$ \langle u - v, j(x - y) \rangle \geq 0, \quad \forall u \in Bx,\ v \in By. \tag{7} $$
$B$ is said to be m-accretive if $R(I + rB) = X$ for all $r > 0$. Given $\alpha > 0$ and $q \in (1, \infty)$, we say that a single-valued accretive operator $A$ is $\alpha$-inverse strongly accretive ($\alpha$-isa, for short) of order $q$ if, for each $x, y \in D(A)$, there exists $j_q(x - y) \in J_q(x - y)$ such that
$$ \langle Ax - Ay, j_q(x - y) \rangle \geq \alpha \|Ax - Ay\|^q. \tag{8} $$
We say that $A$ is $\alpha$-strongly accretive of order $q$ if, for each $x, y \in D(A)$, there exists $j_q(x - y) \in J_q(x - y)$ such that
$$ \langle Ax - Ay, j_q(x - y) \rangle \geq \alpha \|x - y\|^q. \tag{9} $$
Let $C \subseteq X$ and let $T : C \to C$ be a nonlinear mapping. The set of fixed points of $T$ is defined by $Fix(T) := \{ x \in C : x = Tx \}$.
For the rest of this paper, we shall adopt the following notation:
$$ T_r^{A,B} := J_r^B(I - rA) = (I + rB)^{-1}(I - rA), \quad r > 0. \tag{10} $$
Lemma 1
([29] p. 33). Let $q > 1$ and let $X$ be a real normed space with generalized duality mapping $J_q$. Then, for any $x, y \in X$, we have
$$ \|x + y\|^q \leq \|x\|^q + q\langle y, j_q(x + y) \rangle $$
for all $j_q(x + y) \in J_q(x + y)$.
Lemma 2
([28] Cor. 1). Let $1 < q \leq 2$ and let $X$ be a smooth Banach space. Then the following statements are equivalent:
(i) $X$ is q-uniformly smooth.
(ii) There is a constant $k_q > 0$ such that, for all $x, y \in X$,
$$ \|x + y\|^q \leq \|x\|^q + q\langle y, j_q(x) \rangle + k_q\|y\|^q. $$
The best constant k q will be called the q-uniform smoothness coefficient of X.
Lemma 3
([25] Lem. 3.1, 3.2). Let $X$ be a Banach space, let $A : X \to X$ be an $\alpha$-isa of order $q$ and let $B : X \to 2^X$ be an m-accretive operator. Then we have:
(i) For $r > 0$, $Fix(T_r^{A,B}) = (A + B)^{-1}(0)$.
(ii) For $0 < s \leq r$ and $x \in X$, $\|x - T_s^{A,B}x\| \leq 2\|x - T_r^{A,B}x\|$.
Lemma 4
([25] Lem. 3.3). Let $X$ be a uniformly convex and q-uniformly smooth Banach space for some $q \in (1, 2]$. Assume that $A$ is a single-valued $\alpha$-isa of order $q$ in $X$. Then, given $r > 0$, there exists a continuous, strictly increasing and convex function $\phi_q : \mathbb{R}^+ \to \mathbb{R}^+$ with $\phi_q(0) = 0$ such that, for all $x, y \in B_r$,
$$ \begin{aligned} \|T_r^{A,B}x - T_r^{A,B}y\|^q \leq\ & \|x - y\|^q - r(\alpha q - r^{q-1}k_q)\|Ax - Ay\|^q \\ & - \phi_q\big( \|(I - J_r^B)(I - rA)x - (I - J_r^B)(I - rA)y\| \big), \end{aligned} $$
where $k_q$ is the q-uniform smoothness coefficient of $X$.
Lemma 5
([26] Lem. 3.1). Let $\{a_n\}$ and $\{c_n\}$ be sequences of nonnegative real numbers such that
$$ a_{n+1} \leq (1 - \delta_n)a_n + b_n + c_n, \quad n \geq 1, $$
where $\{\delta_n\}$ is a sequence in $(0,1)$ and $\{b_n\}$ is a real sequence. Assume that $\sum_{n=1}^{\infty} c_n < \infty$. Then the following results hold:
(i) If $b_n \leq \delta_n M$ for some $M \geq 0$, then $\{a_n\}$ is a bounded sequence.
(ii) If $\sum_{n=1}^{\infty} \delta_n = \infty$ and $\limsup_{n \to \infty} b_n/\delta_n \leq 0$, then $\lim_{n \to \infty} a_n = 0$.
Lemma 6
(Maingé [30]). Let $\{\varphi_n\}$, $\{\delta_n\}$ and $\{\theta_n\}$ be sequences in $[0, +\infty)$ such that
$$ \varphi_{n+1} \leq \varphi_n + \theta_n(\varphi_n - \varphi_{n-1}) + \delta_n, \quad n \geq 1, \qquad \sum_{n=1}^{+\infty} \delta_n < +\infty, $$
and suppose there exists a real number $\theta$ with $0 \leq \theta_n \leq \theta < 1$ for all $n \in \mathbb{N}$. Then the following hold:
(i) $\sum_{n=1}^{+\infty} [\varphi_n - \varphi_{n-1}]_+ < +\infty$, where $[t]_+ := \max\{t, 0\}$;
(ii) there exists $\varphi^* \in [0, +\infty)$ such that $\lim_{n \to \infty} \varphi_n = \varphi^*$.
Lemma 7
(Goebel and Kirk [31]). Let $E$ be a uniformly convex Banach space, let $C \subseteq E$ be nonempty, closed and convex, and let $T : C \to C$ be a nonexpansive mapping. Then $I - T$ is demiclosed at 0.
Lemma 8
(Xu [28]). Let $E$ be a uniformly convex Banach space. Then the following inequality holds in $E$:
$$ \|x + y\|^q \leq \|x\|^q + q\langle j_q(x), y \rangle + c_q\|y\|^q, \quad \forall x, y \in E. $$
Lemma 9
(Xu [32]). Let $\{a_n\}$ be a sequence of nonnegative real numbers satisfying the following relation:
$$ a_{n+1} \leq (1 - \alpha_n)a_n + \alpha_n\sigma_n + \gamma_n, \quad n \geq 1, $$
where
(a) $\{\alpha_n\} \subset [0,1]$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(b) $\limsup_{n \to \infty} \sigma_n \leq 0$;
(c) $\gamma_n \geq 0$ ($n \geq 1$) and $\sum_{n=1}^{\infty} \gamma_n < \infty$.
Then $a_n \to 0$ as $n \to \infty$.
Notation: $x_n \rightharpoonup x$, $n \to \infty$, means that $\{x_n\}$ converges weakly to $x$, and $x_n \to x$, $n \to \infty$, means that $\{x_n\}$ converges strongly to $x$.

3. The Algorithm

In this section, we introduce our method and give its convergence analysis. Recall that $\ell_1$ is the space of all real sequences whose associated series converges absolutely.
Let $E$ be a uniformly convex Banach space and $T : E \to E$ a nonexpansive mapping with $Fix(T) \neq \emptyset$.
Algorithm 1 Inertial Krasnoselskii–Mann iteration.
1: Choose a sequence $\{\epsilon_n\} \in \ell_1$, a relaxation sequence $\{\lambda_n\} \subset (0,1)$ and pick $\theta \in [0,1)$. Select $x_0, x_1 \in E$ and set $n := 1$.
2: Given the iterates $x_{n-1}$ and $x_n$, choose $\theta_n$ such that $0 \leq \theta_n \leq \bar{\theta}_n$, where
$$ \bar{\theta}_n = \begin{cases} \min\left\{ \theta, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|^q}, \dfrac{\epsilon_n}{\|x_n - x_{n-1}\|} \right\}, & x_n \neq x_{n-1}, \\ \theta, & \text{otherwise}. \end{cases} \tag{11} $$
3: Compute
$$ w_n = x_n + \theta_n(x_n - x_{n-1}) \tag{12} $$
and
$$ x_{n+1} = (1 - \lambda_n)w_n + \lambda_n T w_n. \tag{13} $$
4: Set $n \leftarrow n + 1$ and go to 2.
Remark 1.
Observe that, since the value of $\|x_n - x_{n-1}\|$ is known before $\theta_n$ is chosen, Step 2 of Algorithm 1 is easily implemented. Furthermore, by the choice of $\theta_n$ and the assumption that $\{\epsilon_n\}_{n=1}^{\infty} \in \ell_1$, we have
$$ \sum_{n=1}^{\infty} \theta_n\|x_n - x_{n-1}\| < \infty \quad \text{and} \quad \sum_{n=1}^{\infty} \theta_n\|x_n - x_{n-1}\|^q < \infty. \tag{14} $$
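The following Python sketch (ours; the names and the choice $\epsilon_n = 1/n^2$ are illustrative) shows how Algorithm 1 can be implemented for a nonexpansive map $T$ acting on vectors:

```python
import numpy as np

def inertial_km(T, x0, x1, lam=0.5, theta=0.5, q=2, n_iter=200, eps=None):
    # Sketch of Algorithm 1; eps(n) plays the role of the summable
    # sequence {eps_n} (here 1/n^2, an illustrative l1 choice).
    if eps is None:
        eps = lambda n: 1.0 / n ** 2
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, n_iter + 1):
        d = np.linalg.norm(x - x_prev)
        # Step 2: pick theta_n bounded by theta_bar_n from (11)
        theta_n = theta if d == 0 else min(theta, eps(n) / d ** q, eps(n) / d)
        # Step 3: inertial extrapolation (12), then KM relaxation (13)
        w = x + theta_n * (x - x_prev)
        x_prev, x = x, (1.0 - lam) * w + lam * T(w)
    return x

# Toy usage: projection onto the unit ball of R^3 is nonexpansive.
T = lambda z: z / max(1.0, np.linalg.norm(z))
print(inertial_km(T, np.array([5.0, 0.0, 0.0]), np.array([4.0, 1.0, 0.0])))
```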

Convergence Analysis

We start with the weak convergence analysis of Algorithm 1 for nonexpansive mappings. Throughout our analysis, we assume that $E$ is a uniformly convex Banach space.
Theorem 1.
Suppose that $T : E \to E$ is a nonexpansive mapping with $Fix(T) \neq \emptyset$. Assume that $0 < a \leq \lambda_n \leq b < 1$. Then the sequence $\{x_n\}$ generated by Algorithm 1 converges weakly to a point in $Fix(T)$.
Proof. 
Take $z \in Fix(T)$. Then
$$ \begin{aligned} \|x_{n+1} - z\|^q &\leq \big( (1 - \lambda_n)\|w_n - z\| + \lambda_n\|Tw_n - z\| \big)^q \\ &= \big( (1 - \lambda_n)\|w_n - z\| + \lambda_n\|Tw_n - Tz\| \big)^q \\ &\leq \big( (1 - \lambda_n)\|w_n - z\| + \lambda_n\|w_n - z\| \big)^q = \|w_n - z\|^q \end{aligned} \tag{15} $$
and
$$ \begin{aligned} \|w_n - z\|^q &= \|x_n - z + \theta_n(x_n - x_{n-1})\|^q \\ &\leq \|x_n - z\|^q + q\theta_n\langle x_n - x_{n-1}, j_q(x_n - z) \rangle + c_q\theta_n^q\|x_n - x_{n-1}\|^q. \end{aligned} \tag{16} $$
Observe that
$$ q\langle x_n - x_{n-1}, j_q(x_n - z) \rangle \leq \|x_n - z\|^q - \|x_{n-1} - z\|^q + c_q\|x_n - x_{n-1}\|^q. \tag{17} $$
From (16) and (17), we have (noting that $\theta_n^q \leq \theta_n$)
$$ \begin{aligned} \|w_n - z\|^q &\leq \|x_n - z\|^q + \theta_n\big( \|x_n - z\|^q - \|x_{n-1} - z\|^q \big) + c_q(\theta_n + \theta_n^q)\|x_n - x_{n-1}\|^q \\ &\leq \|x_n - z\|^q + \theta_n\big( \|x_n - z\|^q - \|x_{n-1} - z\|^q \big) + 2c_q\theta_n\|x_n - x_{n-1}\|^q. \end{aligned} \tag{18} $$
It follows from (15) and (18) that
$$ \|x_{n+1} - z\|^q \leq \|x_n - z\|^q + \theta_n\big( \|x_n - z\|^q - \|x_{n-1} - z\|^q \big) + 2c_q\theta_n\|x_n - x_{n-1}\|^q. \tag{19} $$
By Lemma 6, we deduce that $\{\|x_n - z\|\}$ is convergent. Thus, $\{x_n\}$ is bounded and $\sum_{n=1}^{\infty} \big[\|x_{n+1} - z\|^q - \|x_n - z\|^q\big]_+ < \infty$.
We next show that $\lim_{n \to \infty}\|Tw_n - w_n\| = 0$. From the update of $x_{n+1}$ in Algorithm 1, we get
$$ \begin{aligned} \|x_{n+1} - z\|^q &= \|(1 - \lambda_n)(w_n - z) + \lambda_n(Tw_n - z)\|^q \\ &\leq (1 - \lambda_n)\|w_n - z\|^q + \lambda_n\|Tw_n - z\|^q - w_q(\lambda_n)\varphi(\|Tw_n - w_n\|) \\ &\leq \|w_n - z\|^q - w_q(\lambda_n)\varphi(\|Tw_n - w_n\|). \end{aligned} \tag{20} $$
Using (18) in (20), we get
$$ w_q(\lambda_n)\varphi(\|Tw_n - w_n\|) \leq \|x_n - z\|^q - \|x_{n+1} - z\|^q + \theta_n\big( \|x_n - z\|^q - \|x_{n-1} - z\|^q \big) + 2c_q\theta_n\|x_n - x_{n-1}\|^q. \tag{21} $$
Also,
$$ \|w_n - x_n\|^q = \theta_n^q\|x_n - x_{n-1}\|^q \leq \theta_n\|x_n - x_{n-1}\|^q \to 0, \quad n \to \infty. \tag{22} $$
Since $\lim_{n \to \infty} \theta_n\|x_n - x_{n-1}\|^q = 0$ and $\lim_{n \to \infty} \|x_n - z\|^q$ exists, we obtain from (21) that $\lim_{n \to \infty} w_q(\lambda_n)\varphi(\|Tw_n - w_n\|) = 0$. Since $\liminf_{n \to \infty} \lambda_n(1 - \lambda_n) > 0$, we get $\lim_{n \to \infty} \varphi(\|Tw_n - w_n\|) = 0$ and, by the properties of $\varphi$, $\lim_{n \to \infty} \|Tw_n - w_n\| = 0$.
Furthermore, since $\{x_n\}$ is bounded, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup p \in E$. By (22), we have that $w_{n_k} \rightharpoonup p$ as well. Using the demiclosedness of $I - T$ at 0 (Lemma 7), we get that $p \in Fix(T)$. By the results in [33], $\{x_n\}$ has exactly one weak limit point and hence $\{x_n\}$ is weakly convergent. This ends the proof. □
Remark 2. 
(a) 
We mention here that quasi-nonexpansiveness of T is a weaker condition that suffices for Theorem 1.
(b) 
It can also be shown in Theorem 1 that
$$ \|x_n - Tx_n\| \leq \|x_n - w_n\| + \|w_n - Tw_n\| + \|Tw_n - Tx_n\| \leq 2\|x_n - w_n\| + \|w_n - Tw_n\| \to 0, \quad n \to \infty. $$
Therefore, Algorithm 1 preserves certain properties of the Krasnoselskii–Mann iteration.
Now, taking $T := T_r^{A,B}$ in Algorithm 1, we obtain the following result for inclusion problem (1).
Theorem 2.
Let $E$ be a uniformly convex and q-uniformly smooth Banach space. Suppose that $A : E \to E$ is $\alpha$-isa of order $q$ and $B : E \to 2^E$ is an m-accretive operator. Assume that the solution set $S$ of inclusion problem (1) is nonempty and let $r \in \left(0, \left(\frac{\alpha q}{c_q}\right)^{\frac{1}{q-1}}\right)$. Then the sequence $\{x_n\}$ generated by Algorithm 1 with $T := T_r^{A,B}$ converges weakly to a point in $S$.
Proof. 
By Lemma 3 (i) and Lemma 4, we have that $Fix(T_r^{A,B}) = (A + B)^{-1}(0) = S$ and that $T_r^{A,B}$ is nonexpansive. Therefore, by Theorem 1, $\{x_n\}$ converges weakly to a point in $S$, and the desired result is obtained. □
We give two instances of strong convergence of the relaxed forward–backward Algorithm 1.
Theorem 3.
Let $E$ be a uniformly convex and q-uniformly smooth Banach space. Assume that the solution set $S$ of inclusion problem (1) is nonempty and that $\{\lambda_n\} \subset (0,1)$ satisfies $\sum_{n=1}^{\infty} \lambda_n = \infty$. Suppose that one of the following holds:
(i) 
$A$ is $\alpha$-isa of order $q$, $B$ is $\beta$-strongly accretive of order $q$, and $r \in \left(0, \left(\frac{\alpha q}{c_q}\right)^{\frac{1}{q-1}}\right)$.
(ii) 
$\beta \leq L$, $A$ is $\beta$-strongly accretive and L-Lipschitz on $E$, and $r \in \left(0, \frac{2\beta}{L^2}\right)$.
Then $\{x_n\}$ generated by Algorithm 1 with $T := T_r^{A,B}$ converges strongly to the unique point of $S$.
Proof. 
We first show that inclusion problem (1) has a unique solution by showing that, in each of the cases above, $T_r^{A,B}$ is a contraction mapping on $E$.
(i) For all $x, y \in E$, we have
$$ \begin{aligned} \|(I - rA)x - (I - rA)y\|^q &= \|x - y - r(Ax - Ay)\|^q \\ &\leq \|x - y\|^q + c_q r^q\|Ax - Ay\|^q - qr\langle Ax - Ay, j_q(x - y) \rangle \\ &\leq \|x - y\|^q - r(\alpha q - r^{q-1}c_q)\|Ax - Ay\|^q \leq \|x - y\|^q. \end{aligned} $$
Therefore, $I - rA$ is a nonexpansive mapping. Now let $x, y \in E$ and set $(u, v) := (J_r^B x, J_r^B y)$, so that $(x - u, y - v) \in rBu \times rBv$. Since $B$ is $\beta$-strongly accretive of order $q$, we have
$$ \langle (x - u) - (y - v), j_q(u - v) \rangle \geq r\beta\|u - v\|^q, $$
that is,
$$ \langle x - y, j_q(u - v) \rangle \geq (r\beta + 1)\|u - v\|^q. $$
Hence,
$$ (r\beta + 1)\|J_r^B x - J_r^B y\|^q \leq \langle x - y, j_q(J_r^B x - J_r^B y) \rangle \leq \|x - y\|\,\|j_q(J_r^B x - J_r^B y)\| = \|x - y\|\,\|J_r^B x - J_r^B y\|^{q-1}. $$
Therefore, $\|J_r^B x - J_r^B y\| \leq \frac{1}{r\beta + 1}\|x - y\|$, and so
$$ \|J_r^B(I - rA)x - J_r^B(I - rA)y\| \leq \frac{1}{r\beta + 1}\|x - y\| =: \tau\|x - y\|. $$
(ii) Observe that $r(\beta q - c_q r^{q-1}L^q) \in (0,1)$, and define $\tau := \big(1 - r(\beta q - c_q r^{q-1}L^q)\big)^{\frac{1}{q}}$. Then, for all $x, y \in E$,
$$ \begin{aligned} \|J_r^B(I - rA)x - J_r^B(I - rA)y\|^q &\leq \|(I - rA)x - (I - rA)y\|^q = \|x - y - r(Ax - Ay)\|^q \\ &\leq \|x - y\|^q - qr\langle Ax - Ay, j_q(x - y) \rangle + c_q r^q\|Ax - Ay\|^q \\ &\leq \|x - y\|^q - qr\beta\|x - y\|^q + c_q r^q L^q\|x - y\|^q \\ &= \big(1 - r(\beta q - c_q r^{q-1}L^q)\big)\|x - y\|^q. \end{aligned} $$
Therefore, in both cases (i) and (ii), $T_r^{A,B}$ is a contraction mapping on $E$ with constant $\tau$.
Each of the cases (i) and (ii) above implies that inclusion problem (1) has a unique solution $x^* \in S$. Consequently, using the update of $x_{n+1}$ in Algorithm 1 with $T = T_r^{A,B}$, we get
$$ \begin{aligned} \|x_{n+1} - x^*\| &\leq (1 - \lambda_n)\|w_n - x^*\| + \lambda_n\tau\|w_n - x^*\| = (1 - \lambda_n(1 - \tau))\|w_n - x^*\| \\ &\leq (1 - \lambda_n(1 - \tau))\big( \|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\| \big) \\ &\leq (1 - \lambda_n(1 - \tau))\|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\|. \end{aligned} $$
Observe that, by the choice of $\theta_n$ in Algorithm 1, we have $\sum_{n=1}^{\infty} \theta_n\|x_n - x_{n-1}\| < \infty$. Using Lemma 9, we get that $x_n \to x^*$ as $n \to \infty$, and the proof is complete. □
We next present a complexity bound for Algorithm 1.
Theorem 4.
Suppose that either condition (i) or (ii) of Theorem 3 is satisfied, and let $x^* \in S$ be the unique solution of inclusion problem (1). Let $\lambda_n = \lambda$ and $\epsilon_n = \epsilon$ be constant. Then, given $\rho \in (0, \lambda(1 - \tau))$, for any
$$ n \geq \bar{n} := \log_{1-\rho}\left( \frac{\epsilon}{\|x_0 - x^*\|} \cdot \frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho} \right), \tag{23} $$
assuming $\bar{n} \geq 0$, it holds that
$$ \|x_n - x^*\| \leq \epsilon\left( \frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho} + 1 \right), \tag{24} $$
where
(i) 
$\tau := \frac{1}{\beta r + 1}$ if $A$ is $\alpha$-isa of order $q$, $B$ is $\beta$-strongly accretive of order $q$, and $r \in (0, 2\alpha)$; and
(ii) 
$\tau := \big(1 - r(2\beta - rL^2)\big)^{\frac{1}{2}}$ if $\beta \leq L$, $A$ is $\beta$-strongly accretive and L-Lipschitz on $E$, and $r \in \left(0, \frac{2\beta}{L^2}\right)$.
Proof. 
From the proof of Theorem 3, for any $n \geq 1$ we get
$$ \|x_{n+1} - x^*\| \leq (1 - \lambda(1 - \tau))\big( \|x_n - x^*\| + \theta_n\|x_n - x_{n-1}\| \big) \leq (1 - \lambda(1 - \tau))\big( \|x_n - x^*\| + \epsilon \big). \tag{25} $$
Without loss of generality, we may assume that for every $n < \bar{n}$ we have
$$ \|x_n - x^*\| \geq \epsilon\,\frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho}. \tag{26} $$
Combining (25) and (26), we obtain, for every $n < \bar{n}$,
$$ \|x_{n+1} - x^*\| \leq (1 - \lambda(1 - \tau))\left( 1 + \frac{\lambda(1 - \tau) - \rho}{1 - \lambda(1 - \tau)} \right)\|x_n - x^*\| = (1 - \rho)\|x_n - x^*\|. $$
Therefore, by the definition of $\bar{n}$, it holds that
$$ \|x_{\bar{n}} - x^*\| \leq (1 - \rho)^{\bar{n}}\|x_0 - x^*\| \leq \epsilon\,\frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho}. $$
For any $n > \bar{n}$ there are two possibilities. If
$$ \|x_{n-1} - x^*\| \leq \epsilon\,\frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho}, $$
then, by (25) and recalling that $1 - \lambda(1 - \tau) \leq 1$, we obtain that $x_n$ satisfies (24). Otherwise, if
$$ \epsilon\,\frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho} \leq \|x_{n-1} - x^*\| \leq \epsilon\left( \frac{1 - \lambda(1 - \tau)}{\lambda(1 - \tau) - \rho} + 1 \right), $$
then
$$ \|x_n - x^*\| \leq (1 - \rho)\|x_{n-1} - x^*\| \leq \|x_{n-1} - x^*\|, $$
and the desired result holds. □
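As a quick numeric illustration of the bound in Theorem 4 (ours; the parameter values below are arbitrary and only assume $\rho < \lambda(1 - \tau)$ and $\bar{n} \geq 0$):

```python
import math

lam, tau, eps, rho, d0 = 0.5, 0.8, 1e-3, 0.05, 10.0  # rho < lam*(1 - tau) = 0.1
gap = lam * (1.0 - tau) - rho                        # = 0.05
# n_bar from (23) and the accuracy radius from (24)
n_bar = math.log(eps / d0 * (1.0 - lam * (1.0 - tau)) / gap, 1.0 - rho)
bound = eps * ((1.0 - lam * (1.0 - tau)) / gap + 1.0)
print(math.ceil(n_bar), bound)  # about 124 iterations; radius 0.019
```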
Remark 3.
We observe that, in contrast with the assumptions of Theorem 2, the summability of $\{\epsilon_n\}$ is not required in Theorem 4. However, if one wants a good bound in (24), then a small value of $\epsilon$ must be set; in this case, though, only small values of $\theta_n$ are allowed.
To summarize and emphasize the novelty and major advantages of our proposed scheme, we next list several relations to recent works.
Remark 4. 
1.
Our result in Theorem 1 extends the results in [17,26,30,34,35] from Hilbert spaces to uniformly convex and q-uniformly smooth Banach spaces. Furthermore, when $\theta_n = 0$ in Algorithm 1, Theorem 1 reduces to the results in [33] and other related papers.
2.
Our Theorem 2 extends the results in [16,19,21,22,24,36] from Hilbert spaces to uniformly convex and q-uniformly smooth Banach spaces.
3.
Shehu [37] obtained a nonasymptotic $O(1/n)$ convergence rate result for a Krasnoselskii–Mann iteration with inertial extrapolation step in real Hilbert spaces under the stringent condition of Boţ et al. [17] (Theorem 5). In this paper, we obtain results for the Krasnoselskii–Mann iteration with inertial extrapolation step under milder assumptions and give some complexity results in uniformly convex Banach spaces.
4.
Themelis and Patrinos [38] studied a Newton-type generalization of the classical Krasnoselskii–Mann iteration in Hilbert spaces and obtained superlinear convergence when the direction satisfies the Dennis–Moré condition. However, they do not consider the Krasnoselskii–Mann iteration with inertial steps. Our results here involve the inertial Krasnoselskii–Mann iteration in a more general setting, namely uniformly convex Banach spaces, which extend Hilbert spaces.
5.
In [39], Phon-on et al. established an inertial S-iteration in Banach spaces and obtained convergence under the boundedness of some generated sequence. In the present paper, the boundedness assumption on the generated sequence is dispensed with; therefore, our results improve on those of [39].

4. Numerical Illustration

In this section, we present two numerical examples in order to illustrate the behaviour of our proposed method. The first example concerns the split convex feasibility problem (SCFP) (Censor and Elfving [40]) in an infinite-dimensional Hilbert space. Let $H_1$ and $H_2$ be two real Hilbert spaces, $T : H_1 \to H_2$ a bounded linear operator and $T^*$ its adjoint. Let $C \subseteq H_1$ and $Q \subseteq H_2$ be nonempty, closed and convex sets. The split convex feasibility problem is formulated as follows:
$$ \text{find a point } x \in C \text{ such that } Tx \in Q. $$
If we take $Ax := \nabla\frac{1}{2}\|Tx - P_Q Tx\|^2 = T^*(I - P_Q)Tx$, where $P_Q$ is the metric projection onto $Q$ and $\nabla$ denotes the gradient, and $B = \partial i_C$, the subdifferential of the indicator function of the set $C$ ($i_C(x) = 0$ if $x \in C$ and $i_C(x) = \infty$ if $x \notin C$), then the SCFP has the inclusion structure (1). It can be seen that $A$ is Lipschitz continuous with constant $L = \|T\|^2$ and that $B$ is maximal monotone; see, e.g., [41].
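As a concrete (finite-dimensional, purely illustrative) rendering of this operator structure, the following sketch builds the callables $A$ and $J_r^B$ for a stand-in matrix $T$, a half-space $C$ and a ball $Q$; these can be passed directly to the iteration sketches given earlier.

```python
import numpy as np

T = np.array([[1.0, 2.0], [0.0, 1.0]])            # stand-in bounded linear operator
P_Q = lambda y: y / max(1.0, np.linalg.norm(y))   # projection onto the unit ball Q

def A(x):
    # A(x) = T*(I - P_Q)(Tx); Lipschitz with constant L = ||T||^2
    Tx = T @ x
    return T.T @ (Tx - P_Q(Tx))

def resolvent_B(z, r):
    # (I + r*dI_C)^{-1}(z) = P_C(z), independent of r; here C is the
    # half-space {x : <a, x> <= 1} with a = (1, 1)
    a = np.array([1.0, 1.0])
    s = float(a @ z)
    return z if s <= 1.0 else z - (s - 1.0) / float(a @ a) * a
```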
Example 1.
Let $H_1 = L^2([0, 2\pi]) = H_2$, with norm $\|x\| := \left( \int_0^{2\pi} |x(t)|^2\,dt \right)^{1/2}$ and inner product $\langle x, y \rangle := \int_0^{2\pi} x(t)y(t)\,dt$, $x, y \in H$. Consider the half-space
$$ C := \{ x \in L^2([0, 2\pi]) : \langle 1, x \rangle \leq 1 \} = \left\{ x \in L^2([0, 2\pi]) : \int_0^{2\pi} x(t)\,dt \leq 1 \right\}, $$
where $1 \equiv 1 \in L^2([0, 2\pi])$. In addition, let $Q$ be the closed ball centered at $\sin \in L^2([0, 2\pi])$ with radius 4:
$$ Q := \{ x \in L^2([0, 2\pi]) : \|x - \sin\|_{L^2}^2 \leq 16 \} = \left\{ x \in L^2([0, 2\pi]) : \int_0^{2\pi} |x(t) - \sin(t)|^2\,dt \leq 16 \right\}. $$
Consider the mapping $T : L^2([0, 2\pi]) \to L^2([0, 2\pi])$ defined by $(Tx)(s) = x(s)$ for all $x \in L^2([0, 2\pi])$. Then $(T^*x)(s) = x(s)$ and $\|T\| = 1$. So, we wish to solve the following problem:
$$ \text{find } x^* \in C \text{ such that } Tx^* \in Q. \tag{30} $$
Observe that, since $(Tx)(s) = x(s)$ for all $x \in L^2([0, 2\pi])$, (30) reduces to the well-known convex feasibility problem
$$ \text{find } x^* \in C \cap Q. $$
Moreover, the solution set of (30) is nonempty, since clearly $x(t) \equiv 0$ is a solution. As explained before, we define $Ax := \nabla\frac{1}{2}\|Tx - P_Q Tx\|^2 = T^*(I - P_Q)Tx$ and $B = \partial i_C$, and translate (30) into an inclusion of the form (1).
We implement our algorithm with different starting points $x_0(t) = x_1(t)$, $t \in [0, 2\pi]$. We choose the stopping criterion $\|x_n - w_n\| < 10^{-5}$, and the other parameters are chosen as $\epsilon_n = 1/n^2$, $\lambda_n = 1/n$, $\theta = 0.5$ and $r = 0.5$. To justify our algorithm's name, we compare it with the standard Krasnoselskii–Mann method, which is the update of $x_{n+1}$ in Algorithm 1 with $w_n$ replaced by $x_n$ and $\lambda_n \in (0, 1)$. The results for the different starting points are presented in Table 1.
Recall the definition of the operator $T_r^{A,B}$ in (10). Following [41] (Example 23.4) and [42], we get the following formulas. For $z \in L^2([0, 2\pi])$ we have
$$ (I + \lambda_n B)^{-1}(z) = (I + \lambda_n\partial i_C)^{-1}(z) = \operatorname*{arg\,min}_{u \in L^2([0, 2\pi])}\left\{ i_C(u) + \frac{1}{2\lambda_n}\|u - z\|_{L^2}^2 \right\} = P_C(z). $$
Moreover, by [42],
$$ P_C(z) = \begin{cases} z + \dfrac{1 - \int_0^{2\pi} z(t)\,dt}{2\pi} \cdot 1, & \int_0^{2\pi} z(t)\,dt > 1, \\ z, & \int_0^{2\pi} z(t)\,dt \leq 1. \end{cases} $$
For $w \in L^2([0, 2\pi])$ we also have
$$ P_Q(w) = \begin{cases} \sin + \dfrac{4}{\sqrt{\int_0^{2\pi} |w(t) - \sin(t)|^2\,dt}}\,(w - \sin), & \int_0^{2\pi} |w(t) - \sin(t)|^2\,dt > 16, \\ w, & \int_0^{2\pi} |w(t) - \sin(t)|^2\,dt \leq 16. \end{cases} $$
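A possible discretized implementation of these two projections (ours; trapezoidal quadrature on a uniform grid, with the grid size chosen arbitrarily, and using $\|1\|_{L^2}^2 = 2\pi$ in $P_C$):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1001)   # uniform grid on [0, 2*pi]
h = t[1] - t[0]

def integral(f):
    # trapezoidal rule on the uniform grid t
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * h))

def P_C(z):
    # projection onto C = {x : integral of x <= 1}
    s = integral(z)
    return z if s <= 1.0 else z + (1.0 - s) / (2.0 * np.pi)

def P_Q(w):
    # projection onto the L^2-ball of radius 4 centred at sin
    d2 = integral((w - np.sin(t)) ** 2)
    return w if d2 <= 16.0 else np.sin(t) + (4.0 / np.sqrt(d2)) * (w - np.sin(t))
```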
Example 2.
Take $E = L^p([0, 2\pi])$, $2 \leq p < \infty$. Then $E$ is 2-uniformly smooth and uniformly convex, and so $q = 2$ in Algorithm 1. Define $(Tx)(s) := \max\{x(s), 0\}$ for all $x \in L^p([0, 2\pi])$. Then $T$ is nonexpansive and $0 \in Fix(T)$. In the numerical illustration below we choose $p = 4, 10, 100$, starting points $x_0 = x_1 = 2\sin(5t)$, and the other parameters as in the previous example. Based on the example setting and Remark 4, we find Shehu's algorithm [37] (Equation (3) there) the most suitable for comparison with our Algorithm 1. The results are reported in Table 2.
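Reusing the grid t and the inertial_km sketch given earlier (all names illustrative), the mapping of Example 2 and a run of Algorithm 1 on it take only a few lines:

```python
# (Tx)(s) = max{x(s), 0}, discretized on the grid t
T_pos = lambda z: np.maximum(z, 0.0)
x0 = 2.0 * np.sin(5.0 * t)                  # starting point of Example 2
x_out = inertial_km(T_pos, x0, x0.copy())   # approaches a fixed point of T
```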

5. Conclusions

In this paper, we presented weak and strong convergence results for a relaxed inertial forward-backward splitting method in uniformly convex and q-uniformly smooth Banach spaces under appropriate conditions. Our results are new in Banach spaces and generalize some existing results in the literature. In future work, we intend to generalize the results of this paper to finding zeros of maximal monotone operators in more general Banach spaces.

Author Contributions

Y.S. and A.G. contributed equally to this paper with regard to all aspects, such as conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing—original draft preparation, writing—review and editing, visualization, supervision, project administration, and funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We are very grateful to the anonymous referees and the Editor whose insightful comments helped to considerably improve an earlier version of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
isa: inverse strongly accretive
SCFP: split convex feasibility problem

References

1. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202.
2. Bertsekas, D.P.; Tsitsiklis, J.N. Parallel and Distributed Computation: Numerical Methods; Athena Scientific: Belmont, MA, USA, 1997.
3. Combettes, P.L.; Wajs, V.R. Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 2005, 4, 1168–1200.
4. Duchi, J.; Shalev-Shwartz, S.; Singer, Y.; Chandra, T. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008.
5. Tibshirani, R. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 1996, 58, 267–288.
6. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
7. Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 1979, 72, 383–390.
8. Brézis, H.; Lions, P.L. Produits infinis de résolvantes. Isr. J. Math. 1978, 29, 329–345.
9. Chen, G.H.G.; Rockafellar, R.T. Convergence rates in forward-backward splitting. SIAM J. Optim. 1997, 7, 421–444.
10. Güler, O. On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29, 403–419.
11. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 1970, 4, 154–158.
12. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
13. Dunn, J.C. Convexity, monotonicity, and gradient processes in Hilbert space. J. Math. Anal. Appl. 1976, 53, 145–158.
14. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17.
15. Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446.
16. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
17. Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487.
18. Chen, C.; Chan, R.H.; Ma, S.; Yang, J. Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 2015, 8, 2239–2267.
19. Lorenz, D.A.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
20. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
21. Pesquet, J.-C.; Pustelnik, N. A parallel inertial proximal optimization method. Pac. J. Optim. 2012, 8, 273–305.
22. Attouch, H.; Cabot, A. Convergence of a relaxed inertial forward–backward algorithm for structured monotone inclusions. Appl. Math. Optim. 2019, 80, 547–598.
23. Cholamjiak, W.; Cholamjiak, P.; Suantai, S. An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2018, 20, 42.
24. Dong, Q.L.; Jiang, D.; Cholamjiak, P.; Shehu, Y. A strong convergence result involving an inertial forward-backward algorithm for monotone inclusions. J. Fixed Point Theory Appl. 2017, 19, 3097–3118.
25. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 109236.
26. Maingé, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
27. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer Academic Publishers: Berlin, Germany, 1990.
28. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
29. Chidume, C. Geometric Properties of Banach Spaces and Nonlinear Iterations; Springer: Berlin, Germany, 2009.
30. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236.
31. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: Cambridge, UK, 1990.
32. Xu, H.K. Iterative algorithms for nonlinear operators. J. London Math. Soc. 2002, 66, 240–256.
33. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
34. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102.
35. Maingé, P.E. Inertial iterative process for fixed points of certain quasi-nonexpansive mappings. Set-Valued Anal. 2007, 15, 67–79.
36. Boţ, R.I.; Csetnek, E.R. An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 2016, 71, 519–540.
37. Shehu, Y. Convergence rate analysis of inertial Krasnoselskii–Mann type iteration with applications. Numer. Funct. Anal. Optim. 2018, 39, 1077–1091.
38. Themelis, A.; Patrinos, P. SuperMann: A superlinearly convergent algorithm for finding fixed points of nonexpansive operators. IEEE Trans. Automat. Control 2019, 64, 4875–4890.
39. Phon-on, A.; Makaje, N.; Sama-Ae, A.; Khongraphan, K. An inertial S-iteration process. Fixed Point Theory Appl. 2019, 2019, 4.
40. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
41. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
42. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics 2057; Springer: Berlin, Germany, 2012.
Table 1. Comparison of Algorithm 1 and the classical Krasnoselskii–Mann (KM) method.

Starting points | CPU time, Algorithm 1 | CPU time, classical KM | Iterations, Algorithm 1 | Iterations, classical KM
x_0 = x_1 = t^2/10 | 0.054 | 0.201 | 7 | 17
x_0 = x_1 = 2t/16 | 0.056 | 0.254 | 11 | 28
x_0 = x_1 = 2 sin(5t) - (4/3) cos(2t) | 0.0653 | 0.103 | 4 | 15
x_0 = x_1 = t^2 exp(t)/525 | 0.0732 | 0.142 | 5 | 15
x_0 = x_1 = (1/2) exp(t^3/3) - 2 | 0.103 | 0.243 | 9 | 14
Table 2. Comparison of Algorithm 1 and Shehu's algorithm [37].

p | CPU time, Algorithm 1 | CPU time, Shehu's algorithm | ||x||_{L^p}, Algorithm 1 | ||x||_{L^p}, Shehu's algorithm
4 | 1.1250 | 1.2813 | 11.7172 | 11.7172
10 | 0.9531 | 1.3906 | 3.8892 | 4.8892
100 | 0.0625 | 0.2500 | 2.1135 | 2.8135
