Article

Weak and Strong Convergence Theorems for the Inclusion Problem and the Fixed-Point Problem of Nonexpansive Mappings

by Prasit Cholamjiak, Suparat Kesornprom and Nattawut Pholasa *
School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(2), 167; https://doi.org/10.3390/math7020167
Submission received: 18 December 2018 / Revised: 4 February 2019 / Accepted: 7 February 2019 / Published: 13 February 2019
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract: In this work, we study the inclusion problem for the sum of two monotone operators and the fixed-point problem of nonexpansive mappings in Hilbert spaces. We prove weak and strong convergence theorems under some weakened conditions. Some numerical experiments are also given to support our main theorems.

1. Introduction

Let $H$ be a real Hilbert space. We study the following inclusion problem: find $\hat{x} \in H$ such that
$$0 \in A\hat{x} + B\hat{x}, \tag{1}$$
where $A : H \to H$ is an operator and $B : H \to 2^{H}$ is a set-valued operator.
If $A := \nabla F$ and $B := \partial G$, where $\nabla F$ is the gradient of $F$ and $\partial G$ is the subdifferential of $G$ defined by
$$\partial G(x) = \{ z \in H : \langle y - x, z \rangle + G(x) \le G(y),\ \forall y \in H \},$$
then problem (1) becomes the following minimization problem:
$$\min_{x \in H} F(x) + G(x).$$
To solve the inclusion problem via fixed-point theory, let us define, for $r > 0$, the mapping $T_r : H \to H$ as follows:
$$T_r = (I + rB)^{-1}(I - rA).$$
It is known that solutions of the inclusion problem involving $A$ and $B$ can be characterized via the fixed-point equation:
$$T_r x = x \iff x = (I + rB)^{-1}(x - rAx) \iff x - rAx \in x + rBx \iff 0 \in Ax + Bx,$$
which suggests the following iteration process: $x_1 \in H$ and
$$x_{n+1} = (I + r_nB)^{-1}(x_n - r_nAx_n), \quad n \ge 1,$$
where $\{r_n\} \subset (0, \infty)$.
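For orientation, here is a minimal Python sketch (ours, not code from the paper) of this forward-backward iteration; the operator `A` and the resolvent `resolvent_B` of $B$ are assumed to be supplied by the user:

```python
import numpy as np

def forward_backward(x1, A, resolvent_B, r, tol=1e-8, max_iter=10000):
    """Iterate x_{n+1} = (I + r B)^{-1}(x_n - r A x_n).

    A           : callable implementing the single-valued operator A
    resolvent_B : callable (v, r) -> (I + r B)^{-1} v
    r           : a fixed step size r_n = r > 0 (kept constant for simplicity)
    """
    x = np.asarray(x1, dtype=float)
    for _ in range(max_iter):
        x_next = resolvent_B(x - r * A(x), r)   # forward step on A, backward (resolvent) step on B
        if np.linalg.norm(x_next - x) < tol:    # stop when successive iterates are close
            return x_next
        x = x_next
    return x
```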
Xu [1] and Kamimura-Takahashi [2] introduced the following inexact iteration process: $u, x_1 \in H$ and
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)J_{r_n}x_n + e_n, \quad n \ge 1, \tag{6}$$
where $\{\alpha_n\} \subset (0,1)$, $\{r_n\} \subset (0,\infty)$, $\{e_n\} \subset H$ and $J_{r_n} = (I + r_nB)^{-1}$. Strong convergence was proved under some mild conditions. This scheme was subsequently investigated in [3,4,5] under different conditions. In [6], Yao-Noor proposed the following generalized version of scheme (6): $u, x_1 \in H$ and
$$x_{n+1} = \alpha_n u + \beta_n x_n + (1 - \beta_n - \alpha_n)J_{r_n}x_n + e_n, \quad n \ge 1, \tag{7}$$
where $\{\alpha_n\}, \{\beta_n\} \subset (0,1)$ with $0 \le \alpha_n + \beta_n \le 1$, $\{r_n\} \subset (0,\infty)$ and $\{e_n\} \subset H$. Strong convergence was discussed under suitable conditions. Recently, Wang-Cui [7] also studied the contraction-proximal point algorithm (7) under relaxed conditions on the parameters: $\alpha_n \to 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$, $\limsup_{n\to\infty}\beta_n < 1$, $\liminf_{n\to\infty}r_n > 0$, and either $\sum_{n=1}^{\infty}\|e_n\| < \infty$ or $\|e_n\|/\alpha_n \to 0$.
Takahashi et al. [8] introduced the following Halpern-type iteration process: $u, x_1 \in H$ and
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)J_{r_n}(x_n - r_nAx_n), \quad n \ge 1, \tag{8}$$
where $\{\alpha_n\} \subset (0,1)$, $\{r_n\} \subset (0,\infty)$, $A$ is an $\alpha$-inverse strongly monotone operator on $H$ and $B$ is a maximal monotone operator on $H$. They proved that $\{x_n\}$ defined by (8) converges strongly to a zero of $A + B$ if the following conditions hold:
(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(ii) $\sum_{n=1}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$;
(iii) $\sum_{n=1}^{\infty}|r_{n+1} - r_n| < \infty$;
(iv) $0 < a \le r_n < 2\alpha$.
Takahashi et al. [8] also studied the following iterative scheme: $u, x_1 \in H$ and
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)\big(\alpha_n u + (1 - \alpha_n)J_{r_n}(x_n - r_nAx_n)\big), \quad n \ge 1, \tag{9}$$
where $\{\alpha_n\}, \{\beta_n\} \subset (0,1)$ and $\{r_n\} \subset (0,\infty)$. They proved that $\{x_n\}$ defined by (9) converges strongly to a zero of $A + B$ if the following conditions hold:
(i) $\lim_{n\to\infty}\alpha_n = 0$, $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(ii) $0 < b \le \beta_n \le c < 1$;
(iii) $\lim_{n\to\infty}|r_{n+1} - r_n| = 0$;
(iv) $0 < a \le r_n < 2\alpha$.
There have been, in the literature, many methods constructed to solve the inclusion problem for maximal monotone operators in Hilbert or Banach spaces; see, for example, [9,10,11].
Let $C$ be a nonempty, closed, and convex subset of a Hilbert space $H$ and let $T$ be a nonexpansive mapping of $C$ into itself, that is,
$$\|Tx - Ty\| \le \|x - y\|$$
for all $x, y \in C$. We denote by $F(T)$ the set of fixed points of $T$.
The iteration procedure of Mann's type for approximating fixed points of a nonexpansive mapping $T$ is the following: $x_1 \in C$ and
$$x_{n+1} = \alpha_n x_n + (1 - \alpha_n)Tx_n, \quad n \ge 1,$$
where $\{\alpha_n\}$ is a sequence in $[0,1]$.
On the other hand, the iteration procedure of Halpern's type is the following: $x_1 = x \in C$ and
$$x_{n+1} = \alpha_n x + (1 - \alpha_n)Tx_n, \quad n \ge 1,$$
where $\{\alpha_n\}$ is a sequence in $[0,1]$.
Recently, Takahashi et al. [12] proved the following theorem for solving the inclusion problem and the fixed-point problem of nonexpansive mappings.
Theorem 1.
([12]) Let $C$ be a closed and convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping of $C$ into $H$ and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda = (I + \lambda B)^{-1}$ be the resolvent of $B$ for $\lambda > 0$ and let $T$ be a nonexpansive mapping of $C$ into itself such that $F(T) \cap (A+B)^{-1}0 \ne \emptyset$. Let $x_1 = x \in C$ and let $\{x_n\} \subset C$ be a sequence generated by
$$x_{n+1} = \beta_n x_n + (1 - \beta_n)T\big(\alpha_n x + (1 - \alpha_n)J_{\lambda_n}(x_n - \lambda_nAx_n)\big) \tag{13}$$
for all $n \in \mathbb{N}$, where $\{\lambda_n\} \subset (0, 2\alpha)$, $\{\beta_n\} \subset (0,1)$ and $\{\alpha_n\} \subset (0,1)$ satisfy
$$0 < a \le \lambda_n \le b < 2\alpha, \quad 0 < c \le \beta_n \le d < 1,$$
$$\lim_{n\to\infty}(\lambda_n - \lambda_{n+1}) = 0, \quad \lim_{n\to\infty}\alpha_n = 0 \quad \text{and} \quad \sum_{n=1}^{\infty}\alpha_n = \infty.$$
Then $\{x_n\}$ converges strongly to a point of $F(T) \cap (A+B)^{-1}0$.
In this paper, motivated by Takahashi et al. [13] and Halpern [14], we introduce an iterative method for finding a common point of the set of fixed points of nonexpansive mappings and the set of solutions of inclusion problems for inverse strongly monotone mappings and maximal monotone operators by using the inertial technique (see [15,16]). We then prove strong and weak convergence theorems under suitable conditions. Finally, we provide some numerical examples to support our iterative methods.

2. Preliminaries

In this section, we provide some basic concepts, definitions, and lemmas which will be used in the sequel. Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. When $\{x_n\}$ is a sequence in $H$, $x_n \rightharpoonup x$ means that $\{x_n\}$ converges weakly to $x$ and $x_n \to x$ means strong convergence. In a real Hilbert space, we have
$$\|\lambda x + (1-\lambda)y\|^2 = \lambda\|x\|^2 + (1-\lambda)\|y\|^2 - \lambda(1-\lambda)\|x-y\|^2$$
for all $x, y \in H$ and $\lambda \in \mathbb{R}$.
We also have the following Opial condition:
$$\liminf_{n\to\infty}\|x_n - u\| < \liminf_{n\to\infty}\|x_n - v\|$$
whenever $x_n \rightharpoonup u$ and $u \ne v$.
Let $C$ be a nonempty, closed, and convex subset of a Hilbert space $H$. The nearest point projection of $H$ onto $C$ is denoted by $P_C$, that is, $\|x - P_Cx\| \le \|x - y\|$ for all $x \in H$ and $y \in C$. The operator $P_C$ is called the metric projection of $H$ onto $C$. We know that the metric projection $P_C$ is firmly nonexpansive, that is, for all $x, y \in H$,
$$\|P_Cx - P_Cy\|^2 \le \langle P_Cx - P_Cy, x - y\rangle,$$
or equivalently
$$\|P_Cx - P_Cy\|^2 \le \|x - y\|^2 - \|(I - P_C)x - (I - P_C)y\|^2.$$
It is well known that $P_C$ is characterized by the inequality
$$\langle x - P_Cx, y - P_Cx\rangle \le 0$$
for all $x \in H$ and $y \in C$.
In a real Hilbert space $H$, we have the following equality:
$$\langle x, y\rangle = \tfrac{1}{2}\|x\|^2 + \tfrac{1}{2}\|y\|^2 - \tfrac{1}{2}\|x - y\|^2 \tag{19}$$
and the subdifferential inequality
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle$$
for all $x, y \in H$.
Let $\alpha > 0$. A mapping $A : C \to H$ is said to be $\alpha$-inverse strongly monotone iff
$$\langle x - y, Ax - Ay\rangle \ge \alpha\|Ax - Ay\|^2$$
for all $x, y \in C$.
A mapping $f : H \to H$ is said to be a contraction if there exists $a \in (0,1)$ such that
$$\|f(x) - f(y)\| \le a\|x - y\|$$
for all $x, y \in H$.
Let $B$ be a mapping of $H$ into $2^H$. The effective domain of $B$ is denoted by $dom(B)$, that is, $dom(B) = \{x \in H : Bx \ne \emptyset\}$. A multi-valued mapping $B$ is said to be a monotone operator on $H$ iff $\langle x - y, u - v\rangle \ge 0$ for all $x, y \in dom(B)$, $u \in Bx$ and $v \in By$. A monotone operator $B$ on $H$ is said to be maximal iff its graph is not properly contained in the graph of any other monotone operator on $H$. For a maximal monotone operator $B$ on $H$ and $r > 0$, we define the single-valued operator $J_r = (I + rB)^{-1} : H \to dom(B)$, which is called the resolvent of $B$ for $r$.
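As a simple illustration (ours, not part of the original text), take $H = \mathbb{R}$ and $B = \partial G$ with $G(x) = |x|$. Writing $u = J_r(x)$, the definition of the resolvent gives
$$u = (I + r\,\partial|\cdot|)^{-1}(x) \iff x - u \in r\,\partial|u| \iff u = \operatorname{sign}(x)\max\{|x| - r,\ 0\},$$
which is the soft-thresholding (shrinkage) operator that reappears, componentwise, in Section 5.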
Lemma 1.
([17]) Let $\{a_n\}$ and $\{c_n\}$ be sequences of nonnegative real numbers such that
$$a_{n+1} \le (1 - \delta_n)a_n + b_n + c_n, \quad n \ge 1,$$
where $\{\delta_n\}$ is a sequence in $(0,1)$ and $\{b_n\}$ is a real sequence. Assume $\sum_{n=1}^{\infty}c_n < \infty$. Then the following results hold:
(i) If $b_n \le \delta_n M$ for some $M \ge 0$, then $\{a_n\}$ is a bounded sequence.
(ii) If $\sum_{n=1}^{\infty}\delta_n = \infty$ and $\limsup_{n\to\infty} b_n/\delta_n \le 0$, then $\lim_{n\to\infty}a_n = 0$.
Lemma 2.
([17]) Let $\{\Gamma_n\}$ be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence $\{\Gamma_{n_i}\}$ of $\{\Gamma_n\}$ which satisfies $\Gamma_{n_i} < \Gamma_{n_i+1}$ for all $i \in \mathbb{N}$. Define the sequence $\{\psi(n)\}_{n \ge n_0}$ of integers as follows:
$$\psi(n) = \max\{k \le n : \Gamma_k < \Gamma_{k+1}\},$$
where $n_0 \in \mathbb{N}$ is such that $\{k \le n_0 : \Gamma_k < \Gamma_{k+1}\} \ne \emptyset$. Then, the following hold:
(i) $\psi(n_0) \le \psi(n_0 + 1) \le \cdots$ and $\psi(n) \to \infty$;
(ii) $\Gamma_{\psi(n)} \le \Gamma_{\psi(n)+1}$ and $\Gamma_n \le \Gamma_{\psi(n)+1}$ for all $n \ge n_0$.
Lemma 3.
([18]) Let $H$ be a Hilbert space and $\{x_n\}$ a sequence in $H$ such that there exists a nonempty set $S \subset H$ satisfying:
(i) For every $\tilde{x} \in S$, $\lim_{n\to\infty}\|x_n - \tilde{x}\|$ exists;
(ii) Any weak cluster point of $\{x_n\}$ belongs to $S$.
Then, there exists $\tilde{x} \in S$ such that $\{x_n\}$ converges weakly to $\tilde{x}$.
Lemma 4.
([18]) Let $\{\phi_n\} \subset [0, \infty)$ and $\{\delta_n\} \subset [0, \infty)$ satisfy:
(i) $\phi_{n+1} - \phi_n \le \theta_n(\phi_n - \phi_{n-1}) + \delta_n$;
(ii) $\sum_{n=1}^{\infty}\delta_n < \infty$;
(iii) $\{\theta_n\} \subset [0, \theta]$, where $\theta \in [0, 1)$.
Then $\{\phi_n\}$ is a convergent sequence and $\sum_{n=1}^{\infty}[\phi_{n+1} - \phi_n]_+ < \infty$, where $[t]_+ := \max\{t, 0\}$ for any $t \in \mathbb{R}$.

3. Strong Convergence Theorem

In this section, we are now ready to prove the strong convergence theorem in Hilbert spaces.
Theorem 2.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping of $H$ into itself and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda = (I + \lambda B)^{-1}$ be the resolvent of $B$ for $\lambda > 0$ and let $S$ be a nonexpansive mapping of $C$ into itself such that $F(S) \cap (A+B)^{-1}0 \ne \emptyset$. Let $f : C \to C$ be a contraction. Let $x_0, x_1 \in C$ and let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}),\\ x_{n+1} = \beta_n x_n + (1 - \beta_n)S\big(\alpha_n f(x_n) + (1 - \alpha_n)J_{\lambda_n}(y_n - \lambda_nAy_n)\big) \end{cases} \tag{25}$$
for all $n \in \mathbb{N}$, where $\{\alpha_n\} \subset (0,1)$, $\{\beta_n\} \subset (0,1)$, $\{\lambda_n\} \subset (0, 2\alpha)$ and $\{\theta_n\} \subset [0, \theta]$ with $\theta \in [0,1)$ satisfy
(C1) $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2) $\liminf_{n\to\infty}\beta_n(1 - \beta_n) > 0$;
(C3) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\alpha$;
(C4) $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$.
Then $\{x_n\}$ converges strongly to a point of $F(S) \cap (A+B)^{-1}0$.
Proof. 
Let $z = P_{F(S)\cap(A+B)^{-1}0}f(z)$. Then $z = J_{\lambda_n}(z - \lambda_nAz)$ for all $n \ge 1$. By the firm nonexpansivity of $J_{\lambda_n}$, it follows that
$$\begin{aligned}
\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|^2 &= \|J_{\lambda_n}(y_n - \lambda_nAy_n) - J_{\lambda_n}(z - \lambda_nAz)\|^2\\
&\le \|(y_n - \lambda_nAy_n) - (z - \lambda_nAz)\|^2 - \|(I - J_{\lambda_n})(y_n - \lambda_nAy_n) - (I - J_{\lambda_n})(z - \lambda_nAz)\|^2\\
&= \|(y_n - z) - \lambda_n(Ay_n - Az)\|^2 - \|y_n - \lambda_nAy_n - J_{\lambda_n}(y_n - \lambda_nAy_n) - z + \lambda_nAz + J_{\lambda_n}(z - \lambda_nAz)\|^2\\
&= \|(y_n - z) - \lambda_n(Ay_n - Az)\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&= \|y_n - z\|^2 - 2\lambda_n\langle y_n - z, Ay_n - Az\rangle + \lambda_n^2\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&\le \|y_n - z\|^2 - 2\lambda_n\alpha\|Ay_n - Az\|^2 + \lambda_n^2\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&= \|y_n - z\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2.
\end{aligned} \tag{26}$$
By (C3), we obtain
$$\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\| \le \|y_n - z\|. \tag{27}$$
On the other hand, since $y_n = x_n + \theta_n(x_n - x_{n-1})$, it follows that
$$\|y_n - z\| = \|x_n - z + \theta_n(x_n - x_{n-1})\| \le \|x_n - z\| + \theta_n\|x_n - x_{n-1}\|. \tag{28}$$
Hence $\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\| \le \|x_n - z\| + \theta_n\|x_n - x_{n-1}\|$ by (27) and (28).
Let $w_n = \alpha_n f(x_n) + (1 - \alpha_n)J_{\lambda_n}(y_n - \lambda_nAy_n)$ for all $n \ge 1$. Then we obtain
$$\begin{aligned}
\|w_n - z\| &= \|\alpha_n(f(x_n) - z) + (1 - \alpha_n)(J_{\lambda_n}(y_n - \lambda_nAy_n) - z)\|\\
&\le \alpha_n\|f(x_n) - f(z)\| + \alpha_n\|f(z) - z\| + (1 - \alpha_n)\|x_n - z\| + \theta_n(1 - \alpha_n)\|x_n - x_{n-1}\|\\
&\le \alpha_n a\|x_n - z\| + \alpha_n\|f(z) - z\| + (1 - \alpha_n)\|x_n - z\| + \theta_n(1 - \alpha_n)\|x_n - x_{n-1}\|\\
&= (1 - \alpha_n(1 - a))\|x_n - z\| + \alpha_n\|f(z) - z\| + \theta_n(1 - \alpha_n)\|x_n - x_{n-1}\|.
\end{aligned}$$
So, we have
$$\begin{aligned}
\|x_{n+1} - z\| &= \|\beta_n(x_n - z) + (1 - \beta_n)(Sw_n - z)\|\\
&\le \beta_n\|x_n - z\| + (1 - \beta_n)\|Sw_n - z\|\\
&\le \beta_n\|x_n - z\| + (1 - \beta_n)\|w_n - z\|\\
&\le \beta_n\|x_n - z\| + (1 - \beta_n)\big[(1 - \alpha_n(1 - a))\|x_n - z\| + \alpha_n\|f(z) - z\| + \theta_n(1 - \alpha_n)\|x_n - x_{n-1}\|\big]\\
&= (1 - \alpha_n(1 - \beta_n)(1 - a))\|x_n - z\| + \alpha_n(1 - \beta_n)(1 - a)\left[\frac{\|f(z) - z\|}{1 - a} + \frac{\theta_n(1 - \alpha_n)}{\alpha_n(1 - a)}\|x_n - x_{n-1}\|\right].
\end{aligned}$$
By Lemma 1(i), we have that { x n } is bounded. We see that
$$\begin{aligned}
\|x_{n+1} - z\|^2 &= \|\beta_n(x_n - z) + (1 - \beta_n)(Sw_n - z)\|^2\\
&= \beta_n\|x_n - z\|^2 + (1 - \beta_n)\|Sw_n - z\|^2 - \beta_n(1 - \beta_n)\|x_n - Sw_n\|^2\\
&\le \beta_n\|x_n - z\|^2 + (1 - \beta_n)\|w_n - z\|^2 - \beta_n(1 - \beta_n)\|Sw_n - x_n\|^2.
\end{aligned} \tag{29}$$
We next estimate the following:
$$\begin{aligned}
\|w_n - z\|^2 &= \langle w_n - z, w_n - z\rangle\\
&= \langle \alpha_n(f(x_n) - z) + (1 - \alpha_n)(J_{\lambda_n}(y_n - \lambda_nAy_n) - z), w_n - z\rangle\\
&= \alpha_n\langle f(x_n) - f(z), w_n - z\rangle + \alpha_n\langle f(z) - z, w_n - z\rangle + (1 - \alpha_n)\langle J_{\lambda_n}(y_n - \lambda_nAy_n) - z, w_n - z\rangle\\
&\le \alpha_n\|f(x_n) - f(z)\|\|w_n - z\| + (1 - \alpha_n)\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|\|w_n - z\| + \alpha_n\langle f(z) - z, w_n - z\rangle\\
&\le \alpha_n a\|x_n - z\|\|w_n - z\| + (1 - \alpha_n)\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|\|w_n - z\| + \alpha_n\langle f(z) - z, w_n - z\rangle\\
&\le \tfrac{1}{2}\alpha_n a\|x_n - z\|^2 + \tfrac{1}{2}\alpha_n a\|w_n - z\|^2 + \tfrac{1}{2}(1 - \alpha_n)\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|^2 + \tfrac{1}{2}(1 - \alpha_n)\|w_n - z\|^2 + \alpha_n\langle f(z) - z, w_n - z\rangle\\
&= \tfrac{1}{2}\alpha_n a\|x_n - z\|^2 + \tfrac{1}{2}(1 - \alpha_n)\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|^2 + \tfrac{1}{2}(1 - \alpha_n(1 - a))\|w_n - z\|^2 + \alpha_n\langle f(z) - z, w_n - z\rangle.
\end{aligned}$$
It follows that
$$\|w_n - z\|^2 \le \frac{\alpha_n a}{1 - \alpha_n(a - 1)}\|x_n - z\|^2 + \frac{1 - \alpha_n}{1 - \alpha_n(a - 1)}\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|^2 + \frac{2\alpha_n}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle. \tag{30}$$
We also have, using (19)
$$\begin{aligned}
\|y_n - z\|^2 &= \|(x_n - z) + \theta_n(x_n - x_{n-1})\|^2\\
&= \|x_n - z\|^2 + 2\theta_n\langle x_n - z, x_n - x_{n-1}\rangle + \theta_n^2\|x_n - x_{n-1}\|^2\\
&= \|x_n - z\|^2 + 2\theta_n\left[\tfrac{1}{2}\|x_n - z\|^2 + \tfrac{1}{2}\|x_n - x_{n-1}\|^2 - \tfrac{1}{2}\|x_{n-1} - z\|^2\right] + \theta_n^2\|x_n - x_{n-1}\|^2\\
&= \|x_n - z\|^2 + \theta_n\left[\|x_n - z\|^2 + \|x_n - x_{n-1}\|^2 - \|x_{n-1} - z\|^2\right] + \theta_n^2\|x_n - x_{n-1}\|^2\\
&= \|x_n - z\|^2 + \theta_n\left[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\right] + (\theta_n^2 + \theta_n)\|x_n - x_{n-1}\|^2\\
&\le \|x_n - z\|^2 + \theta_n\left[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\right] + 2\theta_n\|x_n - x_{n-1}\|^2.
\end{aligned} \tag{31}$$
Combining (26) and (31), we get
$$\begin{aligned}
\|J_{\lambda_n}(y_n - \lambda_nAy_n) - z\|^2 &\le \|x_n - z\|^2 + \theta_n\left[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\right] + 2\theta_n\|x_n - x_{n-1}\|^2\\
&\quad - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2.
\end{aligned} \tag{32}$$
Combining (30) and (32), we obtain
$$\begin{aligned}
\|w_n - z\|^2 &\le \frac{\alpha_n a}{1 - \alpha_n(a - 1)}\|x_n - z\|^2 + \frac{1 - \alpha_n}{1 - \alpha_n(a - 1)}\Big[\|x_n - z\|^2 + \theta_n\|x_n - z\|^2 - \theta_n\|x_{n-1} - z\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2\\
&\quad - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\Big] + \frac{2\alpha_n}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle\\
&= \frac{1 - \alpha_n(1 - a)}{1 - \alpha_n(a - 1)}\|x_n - z\|^2 + \frac{\theta_n(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\left[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\right] + \frac{2\theta_n(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2\\
&\quad - \frac{\lambda_n(1 - \alpha_n)(2\alpha - \lambda_n)}{1 - \alpha_n(a - 1)}\|Ay_n - Az\|^2 - \frac{1 - \alpha_n}{1 - \alpha_n(a - 1)}\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&\quad + \frac{2\alpha_n}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle.
\end{aligned} \tag{33}$$
From (29) and (33), we have
$$\begin{aligned}
\|x_{n+1} - z\|^2 &\le \beta_n\|x_n - z\|^2 + (1 - \beta_n)\Big[\frac{1 - \alpha_n(1 - a)}{1 - \alpha_n(a - 1)}\|x_n - z\|^2 + \frac{\theta_n(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\big(\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big)\\
&\quad + \frac{2\theta_n(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2 - \frac{\lambda_n(1 - \alpha_n)(2\alpha - \lambda_n)}{1 - \alpha_n(a - 1)}\|Ay_n - Az\|^2\\
&\quad - \frac{1 - \alpha_n}{1 - \alpha_n(a - 1)}\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2 + \frac{2\alpha_n}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle\Big]\\
&\quad - \beta_n(1 - \beta_n)\|Sw_n - x_n\|^2\\
&= \left(1 - \frac{2\alpha_n(1 - a)(1 - \beta_n)}{1 - \alpha_n(a - 1)}\right)\|x_n - z\|^2 + \frac{\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\big(\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big)\\
&\quad + \frac{2\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2 - \frac{\lambda_n(1 - \alpha_n)(1 - \beta_n)(2\alpha - \lambda_n)}{1 - \alpha_n(a - 1)}\|Ay_n - Az\|^2\\
&\quad - \frac{(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&\quad + \frac{2\alpha_n(1 - \beta_n)}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle - \beta_n(1 - \beta_n)\|Sw_n - x_n\|^2.
\end{aligned} \tag{34}$$
Set $\Gamma_n = \|x_n - z\|^2$, $n \ge 1$. We next consider two cases.
Case 1: Suppose that there exists a natural number $N$ such that $\Gamma_{n+1} \le \Gamma_n$ for all $n \ge N$. In this case, $\{\Gamma_n\}$ is convergent. From (34) we obtain
$$\begin{aligned}
\Gamma_{n+1} &\le \left(1 - \frac{2\alpha_n(1 - a)(1 - \beta_n)}{1 - \alpha_n(a - 1)}\right)\Gamma_n + \frac{\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}(\Gamma_n - \Gamma_{n-1}) + \frac{2\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2\\
&\quad - \frac{\lambda_n(1 - \alpha_n)(1 - \beta_n)(2\alpha - \lambda_n)}{1 - \alpha_n(a - 1)}\|Ay_n - Az\|^2 - \frac{(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\\
&\quad + \frac{2\alpha_n(1 - \beta_n)}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle - \beta_n(1 - \beta_n)\|Sw_n - x_n\|^2.
\end{aligned}$$
It follows that
$$\begin{aligned}
\frac{\lambda_n(1 - \alpha_n)(1 - \beta_n)(2\alpha - \lambda_n)}{1 - \alpha_n(a - 1)}\|Ay_n - Az\|^2 &\le \Gamma_n - \Gamma_{n+1} + \frac{\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}(\Gamma_n - \Gamma_{n-1})\\
&\quad + \frac{2\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2 + \frac{2\alpha_n(1 - \beta_n)}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle.
\end{aligned}$$
Also, we obtain
$$\begin{aligned}
\frac{(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2 &\le \Gamma_n - \Gamma_{n+1} + \frac{\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}(\Gamma_n - \Gamma_{n-1})\\
&\quad + \frac{2\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2 + \frac{2\alpha_n(1 - \beta_n)}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle.
\end{aligned}$$
We also have
$$\begin{aligned}
\beta_n(1 - \beta_n)\|Sw_n - x_n\|^2 &\le \Gamma_n - \Gamma_{n+1} + \frac{\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}(\Gamma_n - \Gamma_{n-1})\\
&\quad + \frac{2\theta_n(1 - \beta_n)(1 - \alpha_n)}{1 - \alpha_n(a - 1)}\|x_n - x_{n-1}\|^2 + \frac{2\alpha_n(1 - \beta_n)}{1 - \alpha_n(a - 1)}\langle f(z) - z, w_n - z\rangle.
\end{aligned}$$
Since $\lim_{n\to\infty}\frac{\theta_n}{\alpha_n}\|x_n - x_{n-1}\| = 0$, $\lim_{n\to\infty}\alpha_n = 0$ and $\{\Gamma_n\}$ converges, we have
$$\|Ay_n - Az\| \to 0, \quad \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\| \to 0 \quad\text{and}\quad \|Sw_n - x_n\| \to 0$$
as $n \to \infty$. We next show that $\|J_{\lambda_n}(y_n - \lambda_nAy_n) - y_n\| \to 0$ as $n \to \infty$. We see that
$$\begin{aligned}
\|J_{\lambda_n}(y_n - \lambda_nAy_n) - y_n\| &= \|J_{\lambda_n}(y_n - \lambda_nAy_n) - \lambda_n(Ay_n - Az) + \lambda_n(Ay_n - Az) - y_n\|\\
&\le \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\| + \lambda_n\|Ay_n - Az\| \to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
We also have
$$\begin{aligned}
\|w_n - x_n\| &= \|\alpha_n(f(x_n) - x_n) + (1 - \alpha_n)(J_{\lambda_n}(y_n - \lambda_nAy_n) - x_n)\|\\
&\le \alpha_n\|f(x_n) - x_n\| + \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\| + \lambda_n\|Ay_n - Az\| + \|x_n - y_n\| + \alpha_n\|J_{\lambda_n}(y_n - \lambda_nAy_n) - x_n\|\\
&= \alpha_n\|f(x_n) - x_n\| + \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\| + \lambda_n\|Ay_n - Az\| + \theta_n\|x_n - x_{n-1}\| + \alpha_n\|J_{\lambda_n}(y_n - \lambda_nAy_n) - x_n\|\\
&\to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
We next show that $\|Sx_n - x_n\| \to 0$ as $n \to \infty$. We see that
$$\|Sx_n - x_n\| \le \|Sx_n - Sw_n\| + \|Sw_n - x_n\| \le \|x_n - w_n\| + \|Sw_n - x_n\| \to 0, \quad\text{as } n \to \infty.$$
Since $\{x_n\}$ is bounded, we can choose a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ which converges weakly to a point $x^* \in C$. Suppose that $x^* \ne Sx^*$. Then by Opial's condition we obtain
$$\begin{aligned}
\liminf_{i\to\infty}\|x_{n_i} - x^*\| &< \liminf_{i\to\infty}\|x_{n_i} - Sx^*\|\\
&= \liminf_{i\to\infty}\|x_{n_i} - Sx_{n_i} + Sx_{n_i} - Sx^*\|\\
&\le \liminf_{i\to\infty}\|x_{n_i} - Sx_{n_i}\| + \liminf_{i\to\infty}\|Sx_{n_i} - Sx^*\|\\
&\le \liminf_{i\to\infty}\|x_{n_i} - x^*\|.
\end{aligned}$$
This is a contradiction. Hence $x^* \in F(S)$. From $w_n = \alpha_n f(x_n) + (1 - \alpha_n)J_{\lambda_n}(y_n - \lambda_nAy_n)$, we have
$$\frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n} = J_{\lambda_n}(y_n - \lambda_nAy_n).$$
From $J_{\lambda_n} = (I + \lambda_nB)^{-1}$, we also have
$$\frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n} = (I + \lambda_nB)^{-1}(y_n - \lambda_nAy_n).$$
This gives
$$y_n - \lambda_nAy_n \in \frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n} + \lambda_nB\left(\frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n}\right).$$
So, we obtain
$$\frac{(1 - \alpha_n)(y_n - \lambda_nAy_n) - w_n + \alpha_n f(x_n)}{\lambda_n(1 - \alpha_n)} \in B\left(\frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n}\right).$$
Since $B$ is monotone, we have, for $(p, q)$ in the graph of $B$,
$$\left\langle \frac{w_n - \alpha_n f(x_n)}{1 - \alpha_n} - p,\ \frac{(1 - \alpha_n)(y_n - \lambda_nAy_n) - w_n + \alpha_n f(x_n)}{\lambda_n(1 - \alpha_n)} - q\right\rangle \ge 0.$$
So, we have
$$\big\langle \lambda_n(w_n - \alpha_n f(x_n)) - p\lambda_n(1 - \alpha_n),\ (1 - \alpha_n)y_n - (1 - \alpha_n)\lambda_nAy_n - w_n + \alpha_n f(x_n) - q\lambda_n(1 - \alpha_n)\big\rangle \ge 0,$$
which implies
$$\big\langle \lambda_nw_n - p\lambda_n - \lambda_n\alpha_n(f(x_n) - p),\ y_n - w_n - \alpha_n(y_n - f(x_n)) - \lambda_n(1 - \alpha_n)(Ay_n + q)\big\rangle \ge 0. \tag{35}$$
Since $\langle y_n - x^*, Ay_n - Ax^*\rangle \ge \alpha\|Ay_n - Ax^*\|^2$, $\|Ay_n - Az\| \to 0$ and $y_{n_i} \rightharpoonup x^*$ (since $\|x_n - y_n\| \to 0$), we have $\alpha\|Ay_{n_i} - Ax^*\|^2 \to 0$ and thus $Az = Ax^*$. Letting $i \to \infty$ in (35) along the subsequence $\{n_i\}$, we have $\langle x^* - p, -Ax^* - q\rangle \ge 0$.
Since $B$ is maximal monotone, we have $-Ax^* \in Bx^*$. Hence $0 \in (A + B)x^*$ and thus we have $x^* \in F(S) \cap (A + B)^{-1}0$.
We will show that $\limsup_{n\to\infty}\langle f(z) - z, w_n - z\rangle \le 0$. Since $\{w_n\}$ is bounded and $\|x_n - w_n\| \to 0$, there exists a subsequence $\{w_{n_i}\}$ of $\{w_n\}$ with $w_{n_i} \rightharpoonup x^*$ such that
$$\limsup_{n\to\infty}\langle f(z) - z, w_n - z\rangle = \lim_{i\to\infty}\langle f(z) - z, w_{n_i} - z\rangle = \langle f(z) - z, x^* - z\rangle \le 0,$$
where the last inequality follows from the characterization of the metric projection, since $z = P_{F(S)\cap(A+B)^{-1}0}f(z)$ and $x^* \in F(S) \cap (A + B)^{-1}0$.
We know that
$$\Gamma_{n+1} \le \left(1 - \frac{2\alpha_n(1 - a)(1 - \beta_n)}{1 - \alpha_n(a - 1)}\right)\Gamma_n + \frac{2\alpha_n(1 - a)(1 - \beta_n)}{1 - \alpha_n(a - 1)}\left[\frac{\theta_n(1 - \alpha_n)}{\alpha_n(1 - a)}\|x_n - x_{n-1}\|^2 + \frac{1}{1 - a}\langle f(z) - z, w_n - z\rangle\right].$$
Since $\limsup_{n\to\infty}\left[\frac{\theta_n(1 - \alpha_n)}{\alpha_n(1 - a)}\|x_n - x_{n-1}\|^2 + \frac{1}{1 - a}\langle f(z) - z, w_n - z\rangle\right] \le 0$, by Lemma 1(ii) we obtain $\lim_{n\to\infty}\Gamma_n = 0$. So $x_n \to z$.
Case 2: Suppose that there exists a subsequence $\{\Gamma_{n_i}\}$ of the sequence $\{\Gamma_n\}$ such that $\Gamma_{n_i} < \Gamma_{n_i+1}$ for all $i \in \mathbb{N}$. In this case, we define $\psi : \mathbb{N} \to \mathbb{N}$ as in Lemma 2. Then, by Lemma 2, we have $\Gamma_{\psi(n)} \le \Gamma_{\psi(n)+1}$. We see that
$$\|x_{\psi(n)+1} - x_{\psi(n)}\| = (1 - \beta_{\psi(n)})\|Sw_{\psi(n)} - x_{\psi(n)}\| \to 0, \quad\text{as } n \to \infty.$$
From (34) we have
$$\begin{aligned}
\Gamma_{\psi(n)+1} &\le \left(1 - \frac{2\alpha_{\psi(n)}(1 - a)(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\right)\Gamma_{\psi(n)} + \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1})\\
&\quad + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2 - \frac{\lambda_{\psi(n)}(1 - \alpha_{\psi(n)})(1 - \beta_{\psi(n)})(2\alpha - \lambda_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|Ay_{\psi(n)} - Az\|^2\\
&\quad - \frac{(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|y_{\psi(n)} - \lambda_{\psi(n)}(Ay_{\psi(n)} - Az) - J_{\lambda_{\psi(n)}}(y_{\psi(n)} - \lambda_{\psi(n)}Ay_{\psi(n)})\|^2\\
&\quad + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle - \beta_{\psi(n)}(1 - \beta_{\psi(n)})\|Sw_{\psi(n)} - x_{\psi(n)}\|^2.
\end{aligned}$$
It follows that
$$\begin{aligned}
\frac{\lambda_{\psi(n)}(1 - \alpha_{\psi(n)})(1 - \beta_{\psi(n)})(2\alpha - \lambda_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|Ay_{\psi(n)} - Az\|^2
&\le \Gamma_{\psi(n)} - \Gamma_{\psi(n)+1} - \frac{2\alpha_{\psi(n)}(1 - a)(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\Gamma_{\psi(n)} + \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1})\\
&\quad + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2 + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle\\
&\le \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}) + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2\\
&\quad + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle.
\end{aligned} \tag{36}$$
We also have
$$\begin{aligned}
\beta_{\psi(n)}(1 - \beta_{\psi(n)})\|Sw_{\psi(n)} - x_{\psi(n)}\|^2
&\le \Gamma_{\psi(n)} - \Gamma_{\psi(n)+1} - \frac{2\alpha_{\psi(n)}(1 - a)(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\Gamma_{\psi(n)} + \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1})\\
&\quad + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2 + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle\\
&\le \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}) + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2\\
&\quad + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle.
\end{aligned} \tag{37}$$
We also have
$$\begin{aligned}
\frac{(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|y_{\psi(n)} - \lambda_{\psi(n)}(Ay_{\psi(n)} - Az) - J_{\lambda_{\psi(n)}}(y_{\psi(n)} - \lambda_{\psi(n)}Ay_{\psi(n)})\|^2
&\le \Gamma_{\psi(n)} - \Gamma_{\psi(n)+1} - \frac{2\alpha_{\psi(n)}(1 - a)(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\Gamma_{\psi(n)}\\
&\quad + \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}) + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2\\
&\quad + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle\\
&\le \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}) + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2\\
&\quad + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle.
\end{aligned} \tag{38}$$
We know that
$$\begin{aligned}
\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1} &= \|x_{\psi(n)} - z\|^2 - \|x_{\psi(n)-1} - z\|^2\\
&= \big[\|x_{\psi(n)} - z\| - \|x_{\psi(n)-1} - z\|\big]\big[\|x_{\psi(n)} - z\| + \|x_{\psi(n)-1} - z\|\big]\\
&\le \|x_{\psi(n)} - x_{\psi(n)-1}\|\big[\|x_{\psi(n)} - z\| + \|x_{\psi(n)-1} - z\|\big] \to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
From (36)–(38), we have
$$\|Ay_{\psi(n)} - Az\| \to 0, \quad \|y_{\psi(n)} - \lambda_{\psi(n)}(Ay_{\psi(n)} - Az) - J_{\lambda_{\psi(n)}}(y_{\psi(n)} - \lambda_{\psi(n)}Ay_{\psi(n)})\| \to 0$$
and $\|Sw_{\psi(n)} - x_{\psi(n)}\| \to 0$. Now, repeating the argument of the proof in Case 1, we obtain $\limsup_{n\to\infty}\langle f(z) - z, w_{\psi(n)} - z\rangle \le 0$. We note that
$$\frac{2\alpha_{\psi(n)}(1 - a)(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\Gamma_{\psi(n)} \le \frac{\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}) + \frac{2\theta_{\psi(n)}(1 - \beta_{\psi(n)})(1 - \alpha_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2 + \frac{2\alpha_{\psi(n)}(1 - \beta_{\psi(n)})}{1 - \alpha_{\psi(n)}(a - 1)}\langle f(z) - z, w_{\psi(n)} - z\rangle.$$
This gives
$$\Gamma_{\psi(n)} \le \frac{\theta_{\psi(n)}(1 - \alpha_{\psi(n)})}{2\alpha_{\psi(n)}(1 - a)}\big(\Gamma_{\psi(n)} - \Gamma_{\psi(n)-1}\big) + \frac{\theta_{\psi(n)}(1 - \alpha_{\psi(n)})}{\alpha_{\psi(n)}(1 - a)}\|x_{\psi(n)} - x_{\psi(n)-1}\|^2 + \frac{1}{1 - a}\langle f(z) - z, w_{\psi(n)} - z\rangle.$$
So $\limsup_{n\to\infty}\Gamma_{\psi(n)} \le 0$. This means $\lim_{n\to\infty}\Gamma_{\psi(n)} = \lim_{n\to\infty}\|x_{\psi(n)} - z\|^2 = 0$. Hence $x_{\psi(n)} \to z$. It follows that
$$\|x_{\psi(n)+1} - z\| \le \|x_{\psi(n)+1} - x_{\psi(n)}\| + \|x_{\psi(n)} - z\| \to 0, \quad\text{as } n \to \infty.$$
By Lemma 2, we have $\Gamma_n \le \Gamma_{\psi(n)+1}$. Thus, we obtain
$$\Gamma_n = \|x_n - z\|^2 \le \|x_{\psi(n)+1} - z\|^2 \to 0, \quad\text{as } n \to \infty.$$
Hence $\Gamma_n \to 0$ and thus $x_n \to z$. This completes the proof. □
Remark 1.
It is noted that the condition
$$\lim_{n\to\infty}(\lambda_n - \lambda_{n+1}) = 0$$
required in Theorem 1 of Takahashi et al. [12] is removed in Theorem 2.
Remark 2.
([17]) We remark here that condition (C4) is easily implemented in numerical computation since the value of $\|x_n - x_{n-1}\|$ is known before choosing $\theta_n$. Indeed, the parameter $\theta_n$ can be chosen such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min\left\{\dfrac{\omega_n}{\|x_n - x_{n-1}\|},\ \theta\right\}, & \text{if } x_n \ne x_{n-1},\\[1ex] \theta, & \text{otherwise}, \end{cases}$$
where $\{\omega_n\}$ is a positive sequence such that $\omega_n = o(\alpha_n)$.
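For concreteness, the following Python sketch (our illustration, not code from the paper) runs scheme (25) with $\theta_n$ chosen as in Remark 2. The operator `A`, the resolvent `J`, the nonexpansive mapping `S`, the contraction `f`, and the parameter sequences are assumed to be supplied by the caller:

```python
import numpy as np

def inertial_viscosity_fb(x0, x1, A, J, S, f, alpha, beta, lam, theta=0.5, omega=None, n_iter=200):
    """Sketch of scheme (25):
        y_n     = x_n + theta_n (x_n - x_{n-1})
        x_{n+1} = beta_n x_n + (1 - beta_n) S( alpha_n f(x_n)
                  + (1 - alpha_n) J(y_n - lam_n A(y_n), lam_n) )

    alpha, beta, lam : callables n -> parameter value (chosen to satisfy (C1)-(C4))
    omega            : callable n -> omega_n with omega_n = o(alpha_n), used for theta_n as in Remark 2
    """
    if omega is None:
        omega = lambda n: 1.0 / (n + 1) ** 3
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        diff = np.linalg.norm(x - x_prev)
        theta_n = theta if diff == 0 else min(omega(n) / diff, theta)   # the choice of Remark 2
        y = x + theta_n * (x - x_prev)
        u = alpha(n) * f(x) + (1 - alpha(n)) * J(y - lam(n) * A(y), lam(n))
        x_prev, x = x, beta(n) * x + (1 - beta(n)) * S(u)
    return x
```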

4. Weak Convergence Theorem

In this section, we prove the weak convergence theorem.
Theorem 3.
Let $C$ be a nonempty, closed, and convex subset of a real Hilbert space $H$. Let $A$ be an $\alpha$-inverse strongly monotone mapping of $H$ into itself and let $B$ be a maximal monotone operator on $H$ such that the domain of $B$ is included in $C$. Let $J_\lambda = (I + \lambda B)^{-1}$ be the resolvent of $B$ for $\lambda > 0$ and let $S$ be a nonexpansive mapping of $C$ into itself such that $F(S) \cap (A+B)^{-1}0 \ne \emptyset$. Let $x_0, x_1 \in C$ and let $\{x_n\} \subset C$ be a sequence generated by
$$\begin{cases} y_n = x_n + \theta_n(x_n - x_{n-1}),\\ x_{n+1} = \beta_n x_n + (1 - \beta_n)S\big(J_{\lambda_n}(y_n - \lambda_nAy_n)\big) \end{cases} \tag{39}$$
for all $n \in \mathbb{N}$, where $\{\lambda_n\} \subset (0, 2\alpha)$, $\{\beta_n\} \subset (0,1)$ and $\{\theta_n\} \subset [0, \theta]$ with $\theta \in [0,1)$ satisfy
(C1) $\liminf_{n\to\infty}\beta_n(1 - \beta_n) > 0$;
(C2) $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < 2\alpha$;
(C3) $\sum_{n=1}^{\infty}\theta_n\|x_n - x_{n-1}\|^2 < \infty$.
Then $\{x_n\}$ converges weakly to a point of $F(S) \cap (A+B)^{-1}0$.
Proof. 
Let $z \in F(S) \cap (A+B)^{-1}0$ and $w_n = J_{\lambda_n}(y_n - \lambda_nAy_n)$ for all $n \ge 1$. Then $z = J_{\lambda_n}(z - \lambda_nAz)$. From the proof of Theorem 2 we have
$$\|x_{n+1} - z\|^2 \le \beta_n\|x_n - z\|^2 + (1 - \beta_n)\|w_n - z\|^2 - \beta_n(1 - \beta_n)\|x_n - Sw_n\|^2, \tag{40}$$
$$\|w_n - z\|^2 \le \|y_n - z\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2 \tag{41}$$
and
$$\|y_n - z\|^2 \le (1 + \theta_n)\|x_n - z\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2 - \theta_n\|x_{n-1} - z\|^2. \tag{42}$$
Combining (42) and (41), we obtain
$$\begin{aligned}
\|w_n - z\|^2 &\le (1 + \theta_n)\|x_n - z\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2 - \theta_n\|x_{n-1} - z\|^2\\
&\quad - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2.
\end{aligned} \tag{43}$$
Combining (40) and (43), we also have
$$\begin{aligned}
\|x_{n+1} - z\|^2 &\le \beta_n\|x_n - z\|^2 + (1 - \beta_n)\big[(1 + \theta_n)\|x_n - z\|^2 + 2\theta_n\|x_n - x_{n-1}\|^2 - \theta_n\|x_{n-1} - z\|^2\\
&\quad - \lambda_n(2\alpha - \lambda_n)\|Ay_n - Az\|^2 - \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2\big] - \beta_n(1 - \beta_n)\|x_n - Sw_n\|^2\\
&= \beta_n\|x_n - z\|^2 + (1 - \beta_n)(1 + \theta_n)\|x_n - z\|^2 + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2 - \theta_n(1 - \beta_n)\|x_{n-1} - z\|^2\\
&\quad - \lambda_n(2\alpha - \lambda_n)(1 - \beta_n)\|Ay_n - Az\|^2 - (1 - \beta_n)\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2 - \beta_n(1 - \beta_n)\|x_n - Sw_n\|^2\\
&\le \|x_n - z\|^2 + \theta_n(1 - \beta_n)\|x_n - z\|^2 + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2 - \theta_n(1 - \beta_n)\|x_{n-1} - z\|^2.
\end{aligned} \tag{44}$$
This shows that
$$\|x_{n+1} - z\|^2 - \|x_n - z\|^2 \le \theta_n(1 - \beta_n)\big[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big] + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2.$$
By Lemma 4 and (C3), $\{\|x_n - z\|^2\}$ is convergent; thus $\lim_{n\to\infty}\|x_n - z\|^2$ exists. So, by (44) we have
$$\begin{aligned}
\lambda_n(2\alpha - \lambda_n)(1 - \beta_n)\|Ay_n - Az\|^2 &\le \theta_n(1 - \beta_n)\big[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big] + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2\\
&\quad + \|x_n - z\|^2 - \|x_{n+1} - z\|^2 \to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
We also have
$$\begin{aligned}
(1 - \beta_n)\|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\|^2 &\le \theta_n(1 - \beta_n)\big[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big] + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2\\
&\quad + \|x_n - z\|^2 - \|x_{n+1} - z\|^2 \to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
Moreover, we obtain
$$\begin{aligned}
\beta_n(1 - \beta_n)\|x_n - Sw_n\|^2 &\le \theta_n(1 - \beta_n)\big[\|x_n - z\|^2 - \|x_{n-1} - z\|^2\big] + 2\theta_n(1 - \beta_n)\|x_n - x_{n-1}\|^2\\
&\quad + \|x_n - z\|^2 - \|x_{n+1} - z\|^2 \to 0, \quad\text{as } n \to \infty.
\end{aligned}$$
It follows that
$$\|Ay_n - Az\| \to 0, \quad \|y_n - \lambda_n(Ay_n - Az) - J_{\lambda_n}(y_n - \lambda_nAy_n)\| \to 0 \quad\text{and}\quad \|x_n - Sw_n\| \to 0.$$
By a similar proof to that of Theorem 2, we can show that if there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup x^*$, then $x^* \in F(S) \cap (A+B)^{-1}0$. By Lemma 3, we conclude that $\{x_n\}$ converges weakly to a point in $F(S) \cap (A+B)^{-1}0$. This completes the proof. □
Remark 3.
([18]) We remark here that condition (C3) is easily implemented in numerical computation. Indeed, once $x_n$ and $x_{n-1}$ are given, it suffices to compute the update $x_{n+1}$ with (39) by choosing $\theta_n$ such that $0 \le \theta_n \le \bar{\theta}_n$, where
$$\bar{\theta}_n = \begin{cases} \min\left\{\dfrac{\varepsilon_n}{\|x_n - x_{n-1}\|^2},\ \theta\right\}, & \text{if } x_n \ne x_{n-1},\\[1ex] \theta, & \text{otherwise}, \end{cases}$$
where $\{\varepsilon_n\} \subset [0, \infty)$ is such that $\sum_{n=1}^{\infty}\varepsilon_n < \infty$.

5. Numerical Examples

In this section, we give some numerical experiments to show the efficiency of the proposed method and a comparison with another method.
Example 1.
Solve the following minimization problem:
$$\min_{x \in \mathbb{R}^3}\ \|x\|_2^2 + (3, 5, 1)x + 9 + \|x\|_1,$$
where $x = (y_1, y_2, y_3) \in \mathbb{R}^3$, together with the fixed-point problem of $S : \mathbb{R}^3 \to \mathbb{R}^3$ defined by
$$S(x) = (-2 - y_1,\ -4 - y_2,\ -y_3).$$
For each $x \in \mathbb{R}^3$, we set $F(x) = \|x\|_2^2 + (3, 5, 1)x + 9$ and $G(x) = \|x\|_1$. Put $A = \nabla F$ and $B = \partial G$ in Theorem 2. We can check that $F$ is convex and differentiable on $\mathbb{R}^3$ with a 2-Lipschitz continuous gradient. Moreover, $G$ is convex and lower semicontinuous but not differentiable on $\mathbb{R}^3$. We know that, for $r > 0$,
$$(I + r\partial G)^{-1}(x) = \big(\max\{|y_1| - r, 0\}\,\mathrm{sign}(y_1),\ \max\{|y_2| - r, 0\}\,\mathrm{sign}(y_2),\ \max\{|y_3| - r, 0\}\,\mathrm{sign}(y_3)\big).$$
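A small Python helper (an illustrative sketch of ours, not code from the paper) implementing this resolvent, i.e., componentwise soft-thresholding:

```python
import numpy as np

def resolvent_l1(v, r):
    """(I + r * subdifferential of ||.||_1)^{-1}(v): componentwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - r, 0.0)
```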
We choose $\alpha_n = \frac{1}{100n + 1}$, $\beta_n = \frac{3n}{5n + 1}$, $\lambda_n = 0.0001$ for all $n \in \mathbb{N}$ and $\theta = 0.5$. For each $n \in \mathbb{N}$, let $\omega_n = \frac{1}{(n+1)^3}$ and define $\theta_n = \bar{\theta}_n$ as in Remark 2. The stopping criterion is defined by
$$E_n = \|x_n - J_{\lambda_n}(I - \nabla F)x_n\| + \|x_n - Sx_n\| < 10^{-3}.$$
We now study the behavior of Equations (13) and (25), in terms of convergence and CPU time, for different choices of $x_0$ and $x_1$ as follows; see Table 1.
• Choice 1: $x_0 = (1, 2, 1)$ and $x_1 = (1, 5, 1)$;
• Choice 2: $x_0 = (0, 2, 2)$ and $x_1 = (2, 0, 3)$;
• Choice 3: $x_0 = (5, 4, 6)$ and $x_1 = (3, 5, 9)$;
• Choice 4: $x_0 = (1, 2, 3)$ and $x_1 = (8, 7, 3)$.
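The experiment could be set up along the following lines (a hedged Python sketch built on the helpers above; the signs in the problem data follow the reconstruction given earlier, and the contraction $f$ is our own choice since the example does not specify one):

```python
import numpy as np

c = np.array([3.0, 5.0, 1.0])                   # linear term of F, as reconstructed above
A = lambda x: 2.0 * x + c                       # gradient of F(x) = ||x||_2^2 + c.x + 9
J = lambda v, r: resolvent_l1(v, r)             # resolvent of B = subdifferential of ||.||_1
S = lambda x: np.array([-2.0, -4.0, 0.0]) - x   # the mapping S, as reconstructed above (nonexpansive)
f = lambda x: 0.5 * x                           # an illustrative contraction with constant a = 0.5

alpha = lambda n: 1.0 / (100.0 * n + 1.0)
beta  = lambda n: 3.0 * n / (5.0 * n + 1.0)
lam   = lambda n: 0.0001

x0, x1 = np.array([1.0, 2.0, 1.0]), np.array([1.0, 5.0, 1.0])   # Choice 1
x = inertial_viscosity_fb(x0, x1, A, J, S, f, alpha, beta, lam, theta=0.5)
print(x)
```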
The error plots of Equations (13) and (25) for each choice are shown in Figures 1–4, respectively.

Author Contributions

N.P., methodology; S.K., writing—original draft preparation; and P.C., supervision.

Funding

The authors would like to thank the University of Phayao. P. Cholamjiak was supported by the Thailand Research Fund and the University of Phayao under grant RSA6180084.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Xu, H.K. A regularization method for the proximal point algorithm. J. Glob. Optim. 2006, 36, 115–125. [Google Scholar] [CrossRef]
  2. Kamimura, S.; Takahashi, W. Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106, 226–240. [Google Scholar] [CrossRef]
  3. Boikanyo, O.A.; Morosanu, G. A proximal point algorithm converging strongly for general errors. Optim. Lett. 2010, 4, 635–641. [Google Scholar] [CrossRef]
  4. Boikanyo, O.A.; Morosanu, G. Strong convergence of a proximal point algorithm with bounded error sequence. Optim. Lett. 2013, 7, 415–420. [Google Scholar] [CrossRef]
  5. Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 2004, 3, 791–808. [Google Scholar]
  6. Yao, Y.; Noor, M.A. On convergence criteria of generalized proximal point algorithms. J. Comput. Appl. Math. 2008, 217, 46–55. [Google Scholar] [CrossRef]
  7. Wang, F.; Cui, H. On the contraction-proximal point algorithms with multi-parameters. J. Glob. Optim. 2012, 54, 485–491. [Google Scholar] [CrossRef]
  8. Takahashi, W. Viscosity approximation methods for resolvents of accretive operators in Banach spaces. J. Fixed Point Theory Appl. 2007, 1, 135–147. [Google Scholar] [CrossRef]
  9. Combettes, P.L. Iterative construction of the resolvent of a sum of maximal monotone operators. J. Convex Anal. 2009, 16, 727–748. [Google Scholar]
  10. Lopez, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 109236. [Google Scholar] [CrossRef]
  11. Lehdili, N.; Moudafi, A. Combining the proximal algorithm and Tikhonov regularization. Optimization 1996, 37, 239–252. [Google Scholar] [CrossRef]
  12. Takahashi, S.; Takahashi, W.; Toyoda, M. Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 2010, 147, 27–41. [Google Scholar] [CrossRef]
  13. Takahashi, W.; Tamura, T. Convergence theorems for a pair of nonexpansive mappings. J. Convex Anal. 1998, 5, 45–56. [Google Scholar]
  14. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
  15. Dong, Q.L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704. [Google Scholar] [CrossRef]
  16. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102. [Google Scholar] [CrossRef]
  17. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 14, 1595–1615. [Google Scholar] [CrossRef]
  18. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
Figure 1. Comparison of Equations (13) and (25) for Choice 1.
Figure 2. Comparison of Equations (13) and (25) for Choice 2.
Figure 3. Comparison of Equations (13) and (25) for Choice 3.
Figure 4. Comparison of Equations (13) and (25) for Choice 4.
Table 1. Using Equations (13) and (25) with different choices of $x_0$ and $x_1$.

                                                          Equation (13)   Equation (25)
Choice 1   x_0 = (1, 2, 1), x_1 = (1, 5, 1)   No. of Iter.   92              6
                                              CPU (Time)     0.045106        0.016301
Choice 2   x_0 = (0, 2, 2), x_1 = (2, 0, 3)   No. of Iter.   92              14
                                              CPU (Time)     0.039239        0.014759
Choice 3   x_0 = (5, 4, 6), x_1 = (3, 5, 9)   No. of Iter.   92              14
                                              CPU (Time)     0.064943        0.010813
Choice 4   x_0 = (1, 2, 3), x_1 = (8, 7, 3)   No. of Iter.   92              14
                                              CPU (Time)     0.066736        0.047984
