Article

Modified Mann-Type Algorithm for Two Countable Families of Nonexpansive Mappings and Application to Monotone Inclusion and Image Restoration Problems

by
Kasamsuk Ungchittrakool
1,2,
Somyot Plubtieng
1,2,
Natthaphon Artsawang
1,2,* and
Purit Thammasiri
1
1
Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
2
Research Center for Academic Excellence in Mathematics, Faculty of Science, Naresuan University, Phitsanulok 65000, Thailand
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2927; https://doi.org/10.3390/math11132927
Submission received: 7 June 2023 / Revised: 25 June 2023 / Accepted: 27 June 2023 / Published: 29 June 2023
(This article belongs to the Section Computational and Applied Mathematics)

Abstract:
In this paper, we introduce and study a modified Mann-type algorithm that combines inertial terms for solving common fixed point problems of two countable families of nonexpansive mappings in Hilbert spaces. Under appropriate assumptions on the sequences of parameters, we establish a strong convergence result for the sequence generated by the proposed method in finding a common fixed point of two countable families of nonexpansive mappings. This method can be applied to solve the monotone inclusion problem. Additionally, we employ a modified Mann-type iterative algorithm to address image restoration problems. Furthermore, we present numerical results across different scenarios to demonstrate the superior efficiency of our algorithm compared to existing algorithms.

1. Introduction

Let $H$ be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and associated norm $\|\cdot\|=\sqrt{\langle\cdot,\cdot\rangle}$. Recall that a mapping $T:H\to H$ is said to be a strict pseudo-contraction if there exists $k\in(-\infty,1)$ such that
$$\|Tx-Ty\|^2 \le \|x-y\|^2 + k\|(I-T)x-(I-T)y\|^2, \quad \forall x,y\in H. \tag{1}$$
In particular, it is well known that if $k=-1$ in (1), then the mapping $T$ in (1) is said to be a firmly nonexpansive mapping, and the equivalent form of this mapping can be written as $\|Tx-Ty\|^2 \le \langle x-y,\,Tx-Ty\rangle$ for all $x,y\in H$. Furthermore, if $k=0$ in (1), then the mapping $T$ in (1) is called a nonexpansive mapping; in this particular case (1) reads $\|Tx-Ty\| \le \|x-y\|$ for all $x,y\in H$; see [1,2] for more details.
We denote by $\mathrm{Fix}(T):=\{x\in H : Tx=x\}$ the set of all fixed points of $T$. Let $C$ be a nonempty closed convex subset of $H$. Then for every $z\in H$ there exists a unique $x^*\in C$ such that $\|z-x^*\|=\inf_{x\in C}\|z-x\|$, and if we define $P_C:H\to C$ by $P_C(z)=x^*$ for all $z\in H$, then $P_C$ is called the metric projection (or nearest point projection) of $H$ onto $C$; see [3,4,5] for more details.
Fixed Point Problem: The fixed point problem for the mapping $T$ is usually stated as follows:
$$\text{find } x\in H \text{ such that } x=Tx. \tag{2}$$
Fixed point problems have found wide-ranging applications across various fields of mathematics, engineering, computer science, and other disciplines. These problems involve finding points in a function’s domain that remain unchanged after applying the function, making them essential in solving equations, optimization, economic equilibrium analysis, signal processing, control systems, machine learning, and fractal geometry. By providing a framework for iterative methods and stability analysis, fixed point problems offer powerful tools to tackle complex problems and uncover solutions in diverse areas of study.
The following well-known and important iterative algorithm for finding a solution of (2) was proposed by Mann [6]:
$$x_1\in C,\qquad x_{n+1}=(1-\alpha_n)x_n+\alpha_n Tx_n,\quad n\ge 1, \tag{3}$$
where $(\alpha_n)_{n\ge1}$ is chosen from $[0,1]$. Reich [7] showed that if $T$ in (3) is nonexpansive with a fixed point and $(\alpha_n)_{n\ge1}$ satisfies some appropriate conditions, then (3) converges weakly to a fixed point of $T$. Later, Ishikawa [8] built on Mann's ideas and proposed the following iterative method for a Lipschitzian pseudo-contraction in Hilbert spaces:
$$x_1\in C,\qquad y_n=(1-\alpha_n)x_n+\alpha_n Tx_n,\qquad x_{n+1}=(1-\beta_n)x_n+\beta_n Ty_n,\quad n\ge 1. \tag{4}$$
Under certain suitable conditions on $C$, $(\alpha_n)_{n\ge1}$, and $(\beta_n)_{n\ge1}$, he proved that (4) converges strongly to a fixed point of $T$. On the other hand, Halpern [9] presented a new approach for finding a fixed point of a nonexpansive mapping $T$, which differs from Mann's method by anchoring the iteration at a fixed vector:
$$u,x_1\in C,\qquad x_{n+1}=(1-\alpha_n)u+\alpha_n Tx_n,\quad n\ge 1, \tag{5}$$
where $(\alpha_n)_{n\ge1}$ is chosen from $[0,1]$. It can be proved, under some appropriate conditions, that (5) converges strongly to a fixed point of $T$. After that, Moudafi [10] developed a new iterative method with guaranteed strong convergence, later known as the viscosity approximation method, by combining the Halpern iteration with a contraction mapping. Many researchers have studied the viscosity method and developed it in many directions; see [11,12,13,14,15] for more details. In 2019, Bot et al. [16] modified and developed (3) so as to induce strong convergence to a fixed point of a nonexpansive mapping without relying on the viscosity approximation method. Their method is as follows:
$$x_1\in H,\qquad x_{n+1}=(1-\alpha_n)\delta_n x_n+\alpha_n T(\delta_n x_n),\quad n\ge 1, \tag{6}$$
where $(\alpha_n)_{n\ge1}$ and $(\delta_n)_{n\ge1}$ are sequences chosen from $(0,1]$ under suitable assumptions.
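The relaxation schemes above translate directly into code. The sketch below (our illustration, not from the paper) implements the Mann iteration (3); the map $T(v)=-v$ is a hypothetical nonexpansive operator on $\mathbb{R}^2$ with $\mathrm{Fix}(T)=\{0\}$.

```python
import numpy as np

def mann(T, x1, alphas):
    # Mann iteration (3): x_{n+1} = (1 - a_n) x_n + a_n T(x_n)
    x = np.asarray(x1, dtype=float)
    for a in alphas:
        x = (1 - a) * x + a * T(x)
    return x

# T(v) = -v is nonexpansive with Fix(T) = {0}; with a_n = 1/2 the
# averaged step lands on the fixed point immediately and stays there.
T = lambda v: -v
x = mann(T, [4.0, -2.0], [0.5] * 60)
```

For this particular $T$, every $\alpha_n=\tfrac12$ step averages $x_n$ and $-x_n$, so the iterate reaches $0$ after one step and remains there.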
On the other hand, it is well known that the fixed point problem (2) is closely related to the monotone inclusion problem, which can be written as follows:
$$\text{find } x\in H \text{ such that } 0\in Vx, \tag{7}$$
where $V:H\to 2^H$ is a multivalued operator. Indeed, for example, if $V$ is a multivalued maximal monotone operator, then the resolvent operator $J_{\gamma V}:H\to H$ is single-valued and firmly nonexpansive for every $\gamma>0$. Furthermore, solving (7) is equivalent to solving the fixed point problem of $J_{\gamma V}$; that is, $\mathrm{zer}(V)=\mathrm{Fix}(J_{\gamma V})$, where $\mathrm{zer}(V)=\{x\in H : 0\in Vx\}$ is the set of all zero points of $V$ (see Section 4 and [3,4] for more details).
In 1964, Polyak [17] proposed several strategies to improve the convergence rate of iterative methods. These techniques include modifications of the classical iterative schemes, such as variable relaxation parameters and acceleration via the inertial extrapolation term $\theta_n(x_n-x_{n-1})$, where $(\theta_n)$ is a sequence satisfying appropriate conditions. Consequently, inertial extrapolation has received attention and has been studied by many authors; see [18,19,20,21,22,23,24,25] for more details. In 2019, Shehu [26] proposed an algorithm that combines the inertial method, the Halpern method, and error terms to find a fixed point of a nonexpansive mapping. Later, Kitkuan et al. [27] employed the idea of inertial viscosity to find solutions of monotone inclusion problems and applied it to image restoration problems. Recently, Akutsah et al. [28] introduced a new algorithm for finding a common fixed point of generalized nonexpansive mappings. Moreover, fixed point problems have been studied in many other respects; see, for instance, [29,30,31,32,33]. Inspired by Bot et al. [16], Artsawang and Ungchittrakool [25] introduced the following inertial Mann-type algorithm for finding a fixed point of a nonexpansive mapping, with applications to monotone inclusion and image restoration problems:
$$x_0,x_1\in H,\qquad y_n=x_n+\theta_n(x_n-x_{n-1}),\qquad x_{n+1}=\delta_n y_n+\alpha_n\big(T(\delta_n y_n)-\delta_n y_n\big)+\varepsilon_n,\quad n\ge 1,$$
where $(\theta_n)_{n\ge1}$, $(\alpha_n)_{n\ge1}$, $(\delta_n)_{n\ge1}$ are some appropriate sequences chosen from $[0,1]$.
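The inertial extrapolation step is easy to implement. The sketch below (ours, with the error terms $\varepsilon_n$ dropped and a hypothetical nonexpansive map $T(v)=-v$) runs the inertial Mann-type scheme of [25]:

```python
import numpy as np

def inertial_mann(T, x0, x1, thetas, alphas, deltas):
    # Inertial Mann-type scheme of Artsawang and Ungchittrakool [25]:
    #   y_n     = x_n + theta_n (x_n - x_{n-1})          (inertial extrapolation)
    #   x_{n+1} = d_n y_n + a_n (T(d_n y_n) - d_n y_n)   (error terms omitted)
    prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for th, a, d in zip(thetas, alphas, deltas):
        y = x + th * (x - prev)
        dy = d * y
        prev, x = x, dy + a * (T(dy) - dy)
    return x

N = 200
T = lambda v: -v          # hypothetical nonexpansive map, Fix(T) = {0}
x = inertial_mann(T, [1.0], [2.0],
                  thetas=[0.3] * N, alphas=[0.5] * N,
                  deltas=[1 - 1 / (k + 2) for k in range(N)])
```

The relaxation sequence $\delta_n = 1 - \frac{1}{n+2}$ mirrors the kind of choice used later in Remark 1.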
Common Fixed Point Problem: the common fixed point problem for the mappings $S$ and $T$ is generally stated as follows:
$$\text{find } x\in H \text{ such that } x=Sx=Tx. \tag{8}$$
The common fixed point problem (8) has many applications in various disciplines, such as numerical analysis, optimization, dynamical systems, game theory, control theory, topology, computer science, and economics. To approximate the common fixed points of two mappings, Das and Debata [34] and Takahashi and Tamura [35] generalized the Ishikawa iterative algorithm for mappings $S$ and $T$ as follows:
$$x_1\in C,\qquad y_n=(1-\alpha_n)x_n+\alpha_n Sx_n,\qquad x_{n+1}=(1-\beta_n)x_n+\beta_n Ty_n,\quad n\ge 1,$$
where $(\alpha_n)_{n\ge1}$, $(\beta_n)_{n\ge1}$ are some sequences chosen in $[0,1]$. Later, Khan and Fukhar-ud-din [36] introduced and studied a scheme with errors for two nonexpansive mappings.
In 2007, Aoyama et al. [37] introduced and studied a Halpern iterative algorithm for a countable family of nonexpansive mappings and provided some important conditions on a countable family of nonexpansive mappings, later known as the AKTT condition, to guarantee convergence to a common fixed point.
As mentioned above, all of this research inspired and drove us to develop a new iterative algorithm, which is as follows:
Let $(S_n)_{n\ge0},(T_n)_{n\ge0}:H\to H$ be any two countable families of nonexpansive mappings such that $\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)\neq\emptyset$. We propose the following algorithm:
$$x_0,x_1\in H,\qquad y_n=x_n+\theta_n(x_n-x_{n-1}),\qquad z_n=(1-\alpha_n)\delta_n y_n+\alpha_n S_n(\delta_n y_n),\qquad x_{n+1}=z_n+\beta_n\big(T_n(\delta_n y_n)-z_n\big)+\varepsilon_n, \tag{9}$$
for all $n\ge 1$, where $(\theta_n)_{n\ge0}\subset[0,\theta]$ with $\theta\in[0,1)$; $(\alpha_n)_{n\ge0}$, $(\beta_n)_{n\ge0}$, and $(\delta_n)_{n\ge0}$ are sequences in $(0,1]$; and $(\varepsilon_n)_{n\ge0}$ is a sequence in $H$.
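As a quick illustration (ours, not one of the paper's experiments), the scheme (9) can be coded directly. The error terms $\varepsilon_n$ are dropped, and the two constant families $S_n\equiv -I$ and $T_n\equiv$ a planar rotation are hypothetical nonexpansive maps on $\mathbb{R}^2$ with common fixed point $\{0\}$, to which the iterates should converge:

```python
import numpy as np

def modified_mann(S, T, x0, x1, thetas, alphas, betas, deltas):
    # Proposed scheme (9), error terms omitted:
    #   y_n     = x_n + theta_n (x_n - x_{n-1})
    #   z_n     = (1 - a_n) d_n y_n + a_n S_n(d_n y_n)
    #   x_{n+1} = z_n + b_n (T_n(d_n y_n) - z_n)
    prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n, (th, a, b, d) in enumerate(zip(thetas, alphas, betas, deltas)):
        y = x + th * (x - prev)
        dy = d * y
        z = (1 - a) * dy + a * S(n, dy)
        prev, x = x, z + b * (T(n, dy) - z)
    return x

# Hypothetical constant families: S_n = -I and T_n = 90-degree rotation,
# both nonexpansive with common fixed point {0}.
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
S = lambda n, v: -v
T = lambda n, v: rot @ v
N = 300
x = modified_mann(S, T, [1.0, 1.0], [2.0, -1.0],
                  thetas=[0.2] * N,
                  alphas=[0.25 - 1 / (k + 3) ** 2 for k in range(N)],
                  betas=[0.25 - 1 / (k + 3) ** 2 for k in range(N)],
                  deltas=[1 - 1 / (k + 2) for k in range(N)])
```

The parameter sequences follow Remark 1 below; for these constant families the AKTT condition holds trivially (cf. Corollary 1).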
Moreover, we can apply Equation (9) as a tool to find a zero point of the sum of three monotone operators $U,V,W$:
$$\text{find } x\in H \text{ such that } 0\in\big(\mu_1 Ux+\eta Vx+\nu_1 Wx\big)\cap\big(\mu_2 Ux+\eta Vx+\nu_2 Wx\big), \tag{10}$$
where $U,V$ are monotone operators on a Hilbert space $H$, $W$ is $\kappa$-cocoercive with parameter $\kappa$, and $\mu_1,\mu_2,\eta>0$ and $\nu_1,\nu_2\in(0,2\kappa)$ (see Section 4 for more details). Problem (10) can be viewed as a generalization of Artsawang and Ungchittrakool [25] and Davis and Yin [38]. It is worth noting that (10) can be rewritten as a common fixed point problem of nonexpansive mappings (see Section 4 for more details). For this reason, the ideas of common fixed point problems and fixed point iterative algorithms can be applied to solve the monotone inclusion problem (10). For practical applications, we can use by-products obtained from Equation (9) to solve image restoration problems. Furthermore, the effectiveness of our novel algorithm is substantiated by numerical results in different scenarios. These results establish that our algorithm outperforms its predecessors, as evidenced by the higher performance exhibited in the numerical analysis.

2. Preliminaries

In this section, we collect results on real Hilbert spaces that are relevant to the convergence analysis conducted in this study.
Lemma 1
([3,4]). Let $H$ be a real Hilbert space. The following statements hold.
1. 
$\|x+y\|^2 \le \|x\|^2 + 2\langle x+y,\,y\rangle$ for all $x,y\in H$;
2. 
$\|tx+(1-t)y\|^2 = t\|x\|^2+(1-t)\|y\|^2-t(1-t)\|x-y\|^2$ for all $t\in\mathbb{R}$ and $x,y\in H$.
Lemma 2
([39,40]). Let $(s_n)_{n\ge0},(\epsilon_n)_{n\ge0}\subset[0,+\infty)$, $(\sigma_n)_{n\ge0}\subset[0,1]$, and $(\nu_n)_{n\ge0}\subset\mathbb{R}$ be sequences such that
$$s_{n+1}\le(1-\sigma_n)s_n+\sigma_n\nu_n+\epsilon_n,\quad \forall n\ge 0.$$
Assume that $\sum_{n=0}^{\infty}\epsilon_n<+\infty$. Then, the statements below are true.
1. 
If $\sigma_n\nu_n\le r\sigma_n$ (where $r\ge0$), then $(s_n)_{n\ge0}$ is bounded.
2. 
If $\sum_{n=0}^{\infty}\sigma_n=+\infty$ and $\limsup_{n\to\infty}\nu_n\le 0$, then $\lim_{n\to\infty}s_n=0$.
Lemma 3
([5]). Let $T:H\to H$ be a nonexpansive operator, $(x_n)_{n\ge0}\subset H$, and $x\in H$ be such that $x_n\rightharpoonup x$ (weakly) as $n\to\infty$ and $x_n-Tx_n\to 0$ as $n\to\infty$. Then $x\in\mathrm{Fix}(T)$.
The following lemma gives an important characterization of the metric projection.
Lemma 4
([3,4,5]). Let $C$ be a nonempty closed convex subset of $H$. Then for every $z\in H$ and $x\in C$,
$x=P_C(z)$ if and only if $\langle z-x,\,y-x\rangle\le 0$ for all $y\in C$.
Aoyama, Kimura, Takahashi, and Toyoda [37] (see also Plubtieng and Ungchittrakool [41]) observed some behaviors of a countable family of mappings and provided a nice condition as the following definition.
Definition 1
(AKTT condition, Aoyama et al. [37]). Let $D$ be a nonempty subset of a Hilbert space $H$ and let $(T_n)_{n\ge0}$ be a countable family of mappings from $D$ to $H$. For a subset $Q$ of $D$, we say that $\big((T_n)_{n\ge0},Q\big)$ satisfies condition AKTT if $\sum_{n=1}^{\infty}\sup\{\|T_n z-T_{n-1}z\| : z\in Q\}<+\infty$.
Additionally, they verified the following result, which is important to our main task.
Lemma 5 
([37]). Let $D$ be a nonempty subset of a Hilbert space $H$ and let $(T_n)_{n\ge0}$ be a countable family of mappings from $D$ to $H$. Let $Q$ be a subset of $D$ such that $\big((T_n)_{n\ge0},Q\big)$ satisfies condition AKTT. Then there exists a mapping $T:Q\to H$ such that
$$Tx=\lim_{n\to\infty}T_n x,\quad \forall x\in Q,$$
and $\lim_{n\to\infty}\sup\{\|Tz-T_n z\| : z\in Q\}=0$.

3. Main Results

In this section, we analyze the convergence of Equation (9), starting with the boundedness property of the algorithm, as stated in the following lemma.
Assumption 1.
Let $(\alpha_n)_{n\ge0}\subset(0,1]$, $(\beta_n)_{n\ge0},(\delta_n)_{n\ge0}\subset[0,1]$, and $(\varepsilon_n)_{n\ge0}\subset H$ be consistent with the following conditions:
1. 
$\liminf_{n\to\infty}\beta_n(1-\beta_n)\alpha_n>0$.
2. 
$\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<+\infty$.
3. 
$\sum_{n=1}^{\infty}|\beta_n-\beta_{n-1}|<+\infty$.
4. 
(a) 
$\lim_{n\to\infty}\delta_n=1$.
(b) 
$\sum_{n=0}^{\infty}(1-\delta_n)=+\infty$ and $\sum_{n=1}^{\infty}|\delta_n-\delta_{n-1}|<+\infty$.
5. 
$\sum_{n=0}^{\infty}\|\varepsilon_n\|<+\infty$.
Assumption 1 can indeed be satisfied, as demonstrated in the following remark.
Remark 1.
Let $z\in H$. We set $\delta_n=1-\frac{1}{n+2}$, $\alpha_n=\beta_n=\frac{1}{4}-\frac{1}{(n+3)^2}$, and $\varepsilon_n=\frac{z}{(n+1)^3}$ for all $n\ge0$. It is easy to see that Assumption 1 is satisfied.
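These choices can be probed numerically; the finite-sum sanity check below (ours, not a proof) inspects the quantities appearing in conditions 1., 2., and 4(b):

```python
# Probing the parameter choices of Remark 1 numerically (a finite-sum
# sanity check, not a proof): d_n = 1 - 1/(n+2), a_n = b_n = 1/4 - 1/(n+3)^2.
N = 100000
delta = [1 - 1 / (n + 2) for n in range(N)]
alpha = [0.25 - 1 / (n + 3) ** 2 for n in range(N)]       # = beta_n

# condition 1: liminf b_n (1 - b_n) a_n > 0 (the limit is (1/4)(3/4)(1/4) = 3/64)
tail = alpha[-1] * (1 - alpha[-1]) * alpha[-1]

# conditions 2-3: sum |a_n - a_{n-1}| telescopes to 1/9 - 1/(N+2)^2 < 1/9
var_a = sum(abs(alpha[n] - alpha[n - 1]) for n in range(1, N))

# condition 4(b): sum (1 - d_n) = sum 1/(n+2) diverges (grows like log N)
div = sum(1 - d for d in delta)
```

The bounded-variation sums are finite by telescoping, while the series $\sum(1-\delta_n)$ is a shifted harmonic series and diverges, as required.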
Lemma 6. 
Let $(S_n)_{n\ge0},(T_n)_{n\ge0}:H\to H$ be two countable families of nonexpansive mappings such that $\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by Equation (9). Let $(\theta_n)_{n\ge0}$ be a sequence in $[0,\theta]$ with $\theta\in[0,1)$ such that $\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|<+\infty$. Suppose condition 5. in Assumption 1 holds. Then $(x_n)_{n\ge0}$ is bounded.
Proof. 
Let $p\in\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)$. Then, let us consider
$$\|y_n-p\|=\|x_n+\theta_n(x_n-x_{n-1})-p\|=\|x_n-p+\theta_n(x_n-x_{n-1})\|\le\|x_n-p\|+\theta_n\|x_n-x_{n-1}\|. \tag{11}$$
By using (11), we obtain
$$\|\delta_n y_n-p\|=\|\delta_n y_n-\delta_n p+\delta_n p-p\|\le\delta_n\|y_n-p\|+(1-\delta_n)\|p\|\le\delta_n\|x_n-p\|+\theta_n\|x_n-x_{n-1}\|+(1-\delta_n)\|p\|. \tag{12}$$
Further, we observe that
$$\|z_n-p\|=\|(1-\alpha_n)\delta_n y_n+\alpha_n S_n(\delta_n y_n)-p\|\le(1-\alpha_n)\|\delta_n y_n-p\|+\alpha_n\|S_n(\delta_n y_n)-p\|\le\|\delta_n y_n-p\|. \tag{13}$$
Using (12) and (13), we have
$$\begin{aligned}\|x_{n+1}-p\|&=\|(1-\beta_n)(z_n-p)+\beta_n\big(T_n(\delta_n y_n)-p\big)+\varepsilon_n\|\\&\le(1-\beta_n)\|z_n-p\|+\beta_n\|T_n(\delta_n y_n)-p\|+\|\varepsilon_n\|\\&\le(1-\beta_n)\|\delta_n y_n-p\|+\beta_n\|\delta_n y_n-p\|+\|\varepsilon_n\|=\|\delta_n y_n-p\|+\|\varepsilon_n\|\\&\le\delta_n\|x_n-p\|+\theta_n\|x_n-x_{n-1}\|+(1-\delta_n)\|p\|+\|\varepsilon_n\|\\&=\big(1-(1-\delta_n)\big)\|x_n-p\|+(1-\delta_n)\|p\|+\theta_n\|x_n-x_{n-1}\|+\|\varepsilon_n\|.\end{aligned} \tag{14}$$
Notice that $\sum_{n=1}^{\infty}\big(\theta_n\|x_n-x_{n-1}\|+\|\varepsilon_n\|\big)<+\infty$; hence, applying Lemma 2 (1) to (14) shows that $(x_n)_{n\ge0}$ is bounded. □
Lemma 7. 
Let $(S_n)_{n\ge0},(T_n)_{n\ge0}:H\to H$ be two countable families of nonexpansive mappings such that $\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by Equation (9). Let $(\theta_n)_{n\ge0}$ be a sequence in $[0,\theta]$ with $\theta\in[0,1)$ such that $\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|<+\infty$. Suppose conditions 2., 3., 4(b), and 5. in Assumption 1 hold and, for any bounded subset $Q$ of $H$, $\big((S_n)_{n\ge0},Q\big)$ and $\big((T_n)_{n\ge0},Q\big)$ satisfy the AKTT condition. Then $\|x_{n+1}-x_n\|\to 0$ as $n\to\infty$.
Proof. 
Let us consider
$$\|y_n-y_{n-1}\|=\|x_n+\theta_n(x_n-x_{n-1})-x_{n-1}-\theta_{n-1}(x_{n-1}-x_{n-2})\|\le\|x_n-x_{n-1}\|+\theta_n\|x_n-x_{n-1}\|+\theta_{n-1}\|x_{n-1}-x_{n-2}\|. \tag{15}$$
By using (15), it can be observed that
$$\begin{aligned}\|\delta_n y_n-\delta_{n-1}y_{n-1}\|&=\|\delta_n(y_n-y_{n-1})+(\delta_n-\delta_{n-1})y_{n-1}\|\le\delta_n\|y_n-y_{n-1}\|+|\delta_n-\delta_{n-1}|\,\|y_{n-1}\|\\&\le\delta_n\|x_n-x_{n-1}\|+\theta_n\|x_n-x_{n-1}\|+\theta_{n-1}\|x_{n-1}-x_{n-2}\|+|\delta_n-\delta_{n-1}|M_1\\&=\delta_n\|x_n-x_{n-1}\|+\xi_n^{(1)},\end{aligned} \tag{16}$$
where $\xi_n^{(1)}:=\theta_n\|x_n-x_{n-1}\|+\theta_{n-1}\|x_{n-1}-x_{n-2}\|+|\delta_n-\delta_{n-1}|M_1$ and $M_1:=\sup\{\|y_{n-1}\| : n\in\mathbb{N}\}$.
Moreover, we can find that
$$\begin{aligned}\|z_n-z_{n-1}\|&=\|(1-\alpha_n)\delta_n y_n+\alpha_n S_n(\delta_n y_n)-(1-\alpha_{n-1})\delta_{n-1}y_{n-1}-\alpha_{n-1}S_{n-1}(\delta_{n-1}y_{n-1})\|\\&=\|(1-\alpha_n)(\delta_n y_n-\delta_{n-1}y_{n-1})+\alpha_n\big(S_n(\delta_n y_n)-S_n(\delta_{n-1}y_{n-1})\big)+(\alpha_n-\alpha_{n-1})S_n(\delta_{n-1}y_{n-1})\\&\qquad+\alpha_{n-1}\big(S_n(\delta_{n-1}y_{n-1})-S_{n-1}(\delta_{n-1}y_{n-1})\big)+(\alpha_{n-1}-\alpha_n)\delta_{n-1}y_{n-1}\|\\&\le(1-\alpha_n)\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+\alpha_n\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+|\alpha_n-\alpha_{n-1}|\big(\|S_n(\delta_{n-1}y_{n-1})\|+\|\delta_{n-1}y_{n-1}\|\big)\\&\qquad+\alpha_{n-1}\|S_n(\delta_{n-1}y_{n-1})-S_{n-1}(\delta_{n-1}y_{n-1})\|\\&\le\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+|\alpha_n-\alpha_{n-1}|M_2+\sup\{\|S_n z-S_{n-1}z\| : z\in Q\}\\&=\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+\xi_n^{(2)},\end{aligned} \tag{17}$$
where $M_2:=\sup\{\|S_n(\delta_{n-1}y_{n-1})\|+\|\delta_{n-1}y_{n-1}\| : n\in\mathbb{N}\}$, $Q\subset H$ is some bounded set such that $(\delta_{n-1}y_{n-1})_{n\ge1}\subset Q$, and $\xi_n^{(2)}:=|\alpha_n-\alpha_{n-1}|M_2+\sup\{\|S_n z-S_{n-1}z\| : z\in Q\}$. In the final step, by employing (16) and (17), we obtain
$$\begin{aligned}\|x_{n+1}-x_n\|&=\|(1-\beta_n)z_n+\beta_n T_n(\delta_n y_n)+\varepsilon_n-(1-\beta_{n-1})z_{n-1}-\beta_{n-1}T_{n-1}(\delta_{n-1}y_{n-1})-\varepsilon_{n-1}\|\\&=\|(1-\beta_n)(z_n-z_{n-1})+\beta_n\big(T_n(\delta_n y_n)-T_n(\delta_{n-1}y_{n-1})\big)+(\beta_n-\beta_{n-1})T_n(\delta_{n-1}y_{n-1})\\&\qquad+\beta_{n-1}\big(T_n(\delta_{n-1}y_{n-1})-T_{n-1}(\delta_{n-1}y_{n-1})\big)+(\beta_{n-1}-\beta_n)z_{n-1}+(\varepsilon_n-\varepsilon_{n-1})\|\\&\le(1-\beta_n)\|z_n-z_{n-1}\|+\beta_n\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+|\beta_n-\beta_{n-1}|\big(\|T_n(\delta_{n-1}y_{n-1})\|+\|z_{n-1}\|\big)\\&\qquad+\beta_{n-1}\|T_n(\delta_{n-1}y_{n-1})-T_{n-1}(\delta_{n-1}y_{n-1})\|+\|\varepsilon_n\|+\|\varepsilon_{n-1}\|\\&\le\|\delta_n y_n-\delta_{n-1}y_{n-1}\|+\xi_n^{(2)}+\xi_n^{(3)}\le\delta_n\|x_n-x_{n-1}\|+\xi_n^{(1)}+\xi_n^{(2)}+\xi_n^{(3)}\\&=\big(1-(1-\delta_n)\big)\|x_n-x_{n-1}\|+(1-\delta_n)\cdot 0+\xi_n^{(1)}+\xi_n^{(2)}+\xi_n^{(3)},\end{aligned} \tag{18}$$
where $M_3:=\sup\{\|T_n(\delta_{n-1}y_{n-1})\|+\|z_{n-1}\| : n\in\mathbb{N}\}$ and $\xi_n^{(3)}:=|\beta_n-\beta_{n-1}|M_3+\sup\{\|T_n z-T_{n-1}z\| : z\in Q\}+\|\varepsilon_n\|+\|\varepsilon_{n-1}\|$. By using conditions 2., 3., 4(b), and 5. in Assumption 1 and the AKTT condition, we obtain $\sum_{n=1}^{\infty}\big(\xi_n^{(1)}+\xi_n^{(2)}+\xi_n^{(3)}\big)<+\infty$. Applying Lemma 2 (2) to (18) yields $\|x_{n+1}-x_n\|\to 0$ as $n\to+\infty$. □
Theorem 1.
Let $(S_n)_{n\ge0},(T_n)_{n\ge0}:H\to H$ be two countable families of nonexpansive mappings such that $\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by Equation (9). Let $(\theta_n)_{n\ge0}$ be a sequence in $[0,\theta]$ with $\theta\in[0,1)$ such that $\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|<+\infty$. Suppose Assumption 1 holds and, for any bounded subset $Q$ of $H$, $\big((S_n)_{n\ge0},Q\big)$ and $\big((T_n)_{n\ge0},Q\big)$ satisfy the AKTT condition. Let $S,T:H\to H$ be defined by $Sz=\lim_{n\to\infty}S_n z$ and $Tz=\lim_{n\to\infty}T_n z$ for all $z\in H$, suppose that $\mathrm{Fix}(S)=\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)$ and $\mathrm{Fix}(T)=\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)$, and let $\Omega:=\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)\cap\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)=\mathrm{Fix}(S)\cap\mathrm{Fix}(T)$. Then, the sequence $(x_n)_{n\ge0}$ strongly converges to $x^*:=P_{\Omega}(0)$.
Proof. 
From Lemma 6, we have that $(x_n)_{n\ge0}$ is bounded. By (11) and (13), it is not hard to see that $(y_n)_{n\ge0}$ and $(z_n)_{n\ge0}$ are also bounded. Let $x^*=P_{\Omega}(0)$. As a first step, consider the following inequality and equality:
$$\begin{aligned}\|z_n-x^*\|^2&=\|(1-\alpha_n)\delta_n y_n+\alpha_n S_n(\delta_n y_n)-x^*\|^2=\|(1-\alpha_n)(\delta_n y_n-x^*)+\alpha_n\big(S_n(\delta_n y_n)-x^*\big)\|^2\\&=(1-\alpha_n)\|\delta_n y_n-x^*\|^2+\alpha_n\|S_n(\delta_n y_n)-x^*\|^2-\alpha_n(1-\alpha_n)\|\delta_n y_n-S_n(\delta_n y_n)\|^2\\&\le\|\delta_n y_n-x^*\|^2-\alpha_n(1-\alpha_n)\|\delta_n y_n-S_n(\delta_n y_n)\|^2,\end{aligned} \tag{19}$$
and
$$\begin{aligned}\|z_n-T_n(\delta_n y_n)\|^2&=\|(1-\alpha_n)\big(\delta_n y_n-T_n(\delta_n y_n)\big)+\alpha_n\big(S_n(\delta_n y_n)-T_n(\delta_n y_n)\big)\|^2\\&=(1-\alpha_n)\|\delta_n y_n-T_n(\delta_n y_n)\|^2+\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|^2-\alpha_n(1-\alpha_n)\|\delta_n y_n-S_n(\delta_n y_n)\|^2.\end{aligned} \tag{20}$$
By using Lemma 1 (1), we obtain the following inequality:
$$\begin{aligned}\|\delta_n y_n-x^*\|^2&=\|\delta_n(y_n-x^*)+(\delta_n-1)x^*\|^2=\delta_n^2\|y_n-x^*\|^2+2\delta_n(1-\delta_n)\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)^2\|x^*\|^2\\&\le\delta_n\|y_n-x^*\|^2+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)\\&\le\delta_n\|x_n-x^*\|^2+2\delta_n\theta_n\langle y_n-x^*,\,x_n-x_{n-1}\rangle+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)\\&\le\delta_n\|x_n-x^*\|^2+K_1\theta_n\|x_n-x_{n-1}\|+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big),\end{aligned} \tag{21}$$
where $K_1:=\sup\{2\delta_n\|y_n-x^*\| : n\in\mathbb{N}\}$. Applying Lemma 1 (1) together with (19) and (20), we can see that
$$\begin{aligned}\|x_{n+1}-x^*\|^2&=\|(1-\beta_n)(z_n-x^*)+\beta_n\big(T_n(\delta_n y_n)-x^*\big)+\varepsilon_n\|^2\\&\le\|(1-\beta_n)(z_n-x^*)+\beta_n\big(T_n(\delta_n y_n)-x^*\big)\|^2+2\langle\varepsilon_n,\,x_{n+1}-x^*\rangle\\&=(1-\beta_n)\|z_n-x^*\|^2+\beta_n\|T_n(\delta_n y_n)-x^*\|^2-\beta_n(1-\beta_n)\|z_n-T_n(\delta_n y_n)\|^2+2\langle\varepsilon_n,\,x_{n+1}-x^*\rangle\\&\le\|\delta_n y_n-x^*\|^2-(1-\beta_n)^2\alpha_n(1-\alpha_n)\|\delta_n y_n-S_n(\delta_n y_n)\|^2+K_2\|\varepsilon_n\|\\&\qquad-\beta_n(1-\beta_n)(1-\alpha_n)\|\delta_n y_n-T_n(\delta_n y_n)\|^2-\beta_n(1-\beta_n)\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|^2\\&\le\|x_n-x^*\|^2+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)+K_1\theta_n\|x_n-x_{n-1}\|+K_2\|\varepsilon_n\|\\&\qquad-\beta_n(1-\beta_n)\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|^2,\end{aligned} \tag{22}$$
where $K_2:=\sup\{2\|x_{n+1}-x^*\| : n\in\mathbb{N}\}$. Consequently,
$$\begin{aligned}\beta_n(1-\beta_n)\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|^2&\le\|x_n-x^*\|^2-\|x_{n+1}-x^*\|^2+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)\\&\qquad+K_1\theta_n\|x_n-x_{n-1}\|+K_2\|\varepsilon_n\|\\&\le\|x_n-x_{n+1}\|\big(\|x_n-x^*\|+\|x_{n+1}-x^*\|\big)+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)\\&\qquad+K_1\theta_n\|x_n-x_{n-1}\|+K_2\|\varepsilon_n\|.\end{aligned} \tag{23}$$
Since $\liminf_{n\to\infty}\beta_n(1-\beta_n)\alpha_n>0$, (23) yields that
$$\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|\to 0 \text{ as } n\to\infty. \tag{24}$$
On the other hand, let us consider
$$\begin{aligned}\|T_n(\delta_n y_n)-\delta_n y_n\|&\le\|T_n(\delta_n y_n)-x_{n+1}\|+\|x_{n+1}-\delta_n y_n\|\\&\le(1-\beta_n)\|z_n-T_n(\delta_n y_n)\|+\|\varepsilon_n\|+(1-\delta_n)\|x_{n+1}\|+\delta_n\big(\|x_{n+1}-x_n\|+\theta_n\|x_n-x_{n-1}\|\big)\\&\le(1-\beta_n)\Big((1-\alpha_n)\|\delta_n y_n-T_n(\delta_n y_n)\|+\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|\Big)\\&\qquad+\|\varepsilon_n\|+(1-\delta_n)\|x_{n+1}\|+\delta_n\big(\|x_{n+1}-x_n\|+\theta_n\|x_n-x_{n-1}\|\big)\\&\le(1-\alpha_n)\|\delta_n y_n-T_n(\delta_n y_n)\|+(1-\beta_n)\alpha_n\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|\\&\qquad+\|\varepsilon_n\|+(1-\delta_n)\|x_{n+1}\|+\delta_n\big(\|x_{n+1}-x_n\|+\theta_n\|x_n-x_{n-1}\|\big).\end{aligned} \tag{25}$$
By rearranging (25) and then using (24) together with Lemma 7, it yields the following result:
$$\|T_n(\delta_n y_n)-\delta_n y_n\|\le(1-\beta_n)\|S_n(\delta_n y_n)-T_n(\delta_n y_n)\|+\frac{1}{\alpha_n}\Big(\|\varepsilon_n\|+(1-\delta_n)\|x_{n+1}\|+\delta_n\big(\|x_{n+1}-x_n\|+\theta_n\|x_n-x_{n-1}\|\big)\Big)\to 0 \text{ as } n\to\infty. \tag{26}$$
Next, by employing (24) and (26), we obtain the following result:
$$\|\delta_n y_n-S_n(\delta_n y_n)\|\le\|\delta_n y_n-T_n(\delta_n y_n)\|+\|T_n(\delta_n y_n)-S_n(\delta_n y_n)\|\to 0 \text{ as } n\to\infty. \tag{27}$$
By virtue of (27) and Lemma 5, it allows us to obtain
$$\|S(\delta_n y_n)-\delta_n y_n\|\le\|S(\delta_n y_n)-S_n(\delta_n y_n)\|+\|S_n(\delta_n y_n)-\delta_n y_n\|\le\sup\{\|Sz-S_n z\| : z\in Q\}+\|S_n(\delta_n y_n)-\delta_n y_n\|\to 0 \text{ as } n\to\infty. \tag{28}$$
Similarly, the inequality (26) yields the result below:
$$\|T(\delta_n y_n)-\delta_n y_n\|\le\|T(\delta_n y_n)-T_n(\delta_n y_n)\|+\|T_n(\delta_n y_n)-\delta_n y_n\|\le\sup\{\|Tz-T_n z\| : z\in Q\}+\|T_n(\delta_n y_n)-\delta_n y_n\|\to 0 \text{ as } n\to\infty. \tag{29}$$
It is not hard to see that (22) can be rewritten as
$$\|x_{n+1}-x^*\|^2\le\big(1-(1-\delta_n)\big)\|x_n-x^*\|^2+(1-\delta_n)\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)+K_1\theta_n\|x_n-x_{n-1}\|+K_2\|\varepsilon_n\|. \tag{30}$$
In order to prove that $(x_n)_{n\ge0}$ converges strongly to $x^*$, it is enough to show that
$$\limsup_{n\to\infty}\langle -x^*,\,y_n-x^*\rangle\le 0. \tag{31}$$
Assume to the contrary that (31) is not true. Then there exist a real number $k>0$ and a subsequence $(y_{n_i})_{i\ge0}$ of $(y_n)_{n\ge0}$ such that
$$\langle -x^*,\,y_{n_i}-x^*\rangle\ge k>0,\quad \forall i\ge0. \tag{32}$$
Since $(y_{n_i})_{i\ge0}$ is bounded in the Hilbert space $H$, there exists a subsequence $(y_{n_{i_j}})_{j\ge0}$ of $(y_{n_i})_{i\ge0}$ such that $y_{n_{i_j}}\rightharpoonup y\in H$ as $j\to\infty$. Therefore,
$$0<k\le\lim_{j\to\infty}\langle -x^*,\,y_{n_{i_j}}-x^*\rangle=\langle -x^*,\,y-x^*\rangle.$$
Notice that $\lim_{n\to\infty}\delta_n=1$; hence $\delta_{n_{i_j}}y_{n_{i_j}}\rightharpoonup y$ as $j\to\infty$. Applying (28), (29), and Lemma 3, we obtain $y\in\mathrm{Fix}(S)\cap\mathrm{Fix}(T)$. The characterization of the metric projection in Lemma 4 then yields $\langle -x^*,\,y-x^*\rangle=\langle 0-x^*,\,y-x^*\rangle\le 0$, which is a contradiction. For this reason, the inequality (31) holds, and consequently
$$\limsup_{n\to\infty}\big(2\delta_n\langle -x^*,\,y_n-x^*\rangle+(1-\delta_n)\|x^*\|^2\big)\le 0.$$
Applying Lemma 2 (2) to (30), we conclude that $\lim_{n\to\infty}x_n=x^*$. This completes the proof. □
Remark 2.
The assumption on the sequence $(\theta_n)_{n\ge0}$ in Theorem 1 is satisfied if we choose $\theta_n$ such that $0\le\theta_n\le\bar{\theta}_n$, where
$$\bar{\theta}_n=\begin{cases}\min\Big\{\theta,\,\dfrac{c_n}{\|x_n-x_{n-1}\|}\Big\}, & \text{if } x_n\neq x_{n-1},\\[4pt] \theta, & \text{otherwise},\end{cases}$$
and $\sum_{n\ge0}c_n<+\infty$.
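In code, this choice of $\theta_n$ is an online rule; $c_n=1/(n+1)^2$ below is one hypothetical summable sequence. By construction $\theta_n\|x_n-x_{n-1}\|\le c_n$, so the series required in Theorem 1 is summable regardless of how the iterates behave:

```python
import numpy as np

def inertial_theta(x, x_prev, n, theta=0.5):
    # theta_bar_n from Remark 2 with the hypothetical choice c_n = 1/(n+1)^2:
    #   theta_n = min(theta, c_n / ||x_n - x_{n-1}||)  if x_n != x_{n-1}
    #   theta_n = theta                                otherwise
    c_n = 1.0 / (n + 1) ** 2
    gap = np.linalg.norm(np.asarray(x, float) - np.asarray(x_prev, float))
    return theta if gap == 0.0 else min(theta, c_n / gap)
```

For instance, at $n=0$ with a unit gap the cap $c_0=1$ is inactive and $\theta_0=\theta$, while at $n=9$ with gap $2$ the rule returns $c_9/2=0.005$.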
It is worth mentioning that Theorem 1 can be applied to recover a wide range of previously existing results, as follows.
The first concerns constant sequences of nonexpansive mappings:
Corollary 1.
Let $S,T:H\to H$ be two nonexpansive mappings such that $\mathrm{Fix}(S)\cap\mathrm{Fix}(T)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by
$$x_0,x_1\in H,\qquad y_n=x_n+\theta_n(x_n-x_{n-1}),\qquad z_n=(1-\alpha_n)\delta_n y_n+\alpha_n S(\delta_n y_n),\qquad x_{n+1}=z_n+\beta_n\big(T(\delta_n y_n)-z_n\big)+\varepsilon_n, \tag{33}$$
where $(\theta_n)_{n\ge0}$ is a sequence in $[0,\theta]$ with $\theta\in[0,1)$ such that $\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|<+\infty$. Suppose Assumption 1 holds. Then, the sequence $(x_n)_{n\ge0}$ strongly converges to $x^*:=P_{\mathrm{Fix}(S)\cap\mathrm{Fix}(T)}(0)$.
Proof. 
If $S_n=S$ and $T_n=T$ for all $n\in\mathbb{N}\cup\{0\}$, where $S,T:H\to H$ are nonexpansive mappings, then it is obvious that $\big((S_n)_{n\ge0},Q\big)$ and $\big((T_n)_{n\ge0},Q\big)$ satisfy the AKTT condition for any nonempty subset $Q\subset H$. Moreover, $\mathrm{Fix}(S)=\bigcap_{n=0}^{\infty}\mathrm{Fix}(S_n)$ and $\mathrm{Fix}(T)=\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)$. Therefore, the desired result is obtained from Theorem 1. □
Lemma 8. 
Let $S:H\to H$ be a nonexpansive mapping with $\mathrm{Fix}(S)\neq\emptyset$. Define $T_n:=(1-\alpha_n)I+\alpha_n S$ for all $n\in\mathbb{N}\cup\{0\}$. Then the following statements hold:
(i) 
$T_n:H\to H$ is a nonexpansive mapping for every $n\in\mathbb{N}\cup\{0\}$.
(ii) 
If $(\alpha_n)_{n\ge0}\subset[0,1]$ with the property that $\liminf_{n\to\infty}\alpha_n>0$, then $\mathrm{Fix}(S)=\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)$.
(iii) 
If $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<+\infty$, then $\big((T_n)_{n\ge0},Q\big)$ satisfies the AKTT condition for any nonempty bounded subset $Q$ of $H$.
Proof. 
The proof of (i) is straightforward. For the proof of (ii), let us consider
$$x\in\mathrm{Fix}(S)\;\Leftrightarrow\;x=Sx\;\Rightarrow\;0=\alpha_n(Sx-x)\;\Rightarrow\;x=x+\alpha_n(Sx-x)=(1-\alpha_n)x+\alpha_n Sx=T_n x,\;\forall n\;\Rightarrow\;x\in\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n).$$
On the other hand, it is found that
$$x\in\bigcap_{n=0}^{\infty}\mathrm{Fix}(T_n)\;\Rightarrow\;x=T_n x=(1-\alpha_n)x+\alpha_n Sx=x+\alpha_n(Sx-x),\;\forall n\;\Rightarrow\;0=\alpha_n\|Sx-x\|,\;\forall n\;\Rightarrow\;\Big(\liminf_{n\to\infty}\alpha_n\Big)\|Sx-x\|=0.$$
Since $\liminf_{n\to\infty}\alpha_n>0$, it follows that $\|Sx-x\|=0$, that is, $x\in\mathrm{Fix}(S)$. Therefore, Lemma 8 (ii) holds. For the proof of Lemma 8 (iii), let $Q\subset H$ be a nonempty and bounded subset. Then, since $\mathrm{Fix}(S)\neq\emptyset$ and $S$ is nonexpansive, it is not hard to see that the set $\{\|z\|+\|Sz\| : z\in Q\}$ is bounded. It can then be observed that, for any $n\in\mathbb{N}$,
$$\|T_n z-T_{n-1}z\|=\|(\alpha_{n-1}-\alpha_n)z+(\alpha_n-\alpha_{n-1})Sz\|\le|\alpha_n-\alpha_{n-1}|\,\|z\|+|\alpha_n-\alpha_{n-1}|\,\|Sz\|=|\alpha_n-\alpha_{n-1}|\big(\|z\|+\|Sz\|\big),\quad \forall z\in Q.$$
This implies that $\sup\{\|T_n z-T_{n-1}z\| : z\in Q\}\le L|\alpha_n-\alpha_{n-1}|$, where $L:=\sup\{\|z\|+\|Sz\| : z\in Q\}$. Therefore,
$$\sum_{n=1}^{\infty}\sup\{\|T_n z-T_{n-1}z\| : z\in Q\}\le L\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<+\infty.$$
This shows that Lemma 8 (iii) is true. □
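Lemma 8 (iii) can be checked numerically for a concrete family. In the sketch below (ours), $S=\sin$ is a hypothetical nonexpansive map on $\mathbb{R}$ and $\alpha_n$ is taken from Remark 1, so the AKTT partial sums must stay bounded:

```python
import numpy as np

a = lambda n: 0.25 - 1.0 / (n + 3) ** 2        # sum |a_n - a_{n-1}| < +inf
S = np.sin                                      # nonexpansive on R, Fix(S) = {0}
T = lambda n, z: (1 - a(n)) * z + a(n) * S(z)   # Lemma 8: T_n = (1 - a_n) I + a_n S

Q = np.linspace(-1.0, 1.0, 2001)                # a bounded test set
# AKTT quantity: sup_{z in Q} |T_n z - T_{n-1} z| = |a_n - a_{n-1}| sup|S z - z|
sups = [np.max(np.abs(T(n, Q) - T(n - 1, Q))) for n in range(1, 2000)]
total = sum(sups)                               # partial sum stays small and bounded
```

Here $T_nz-T_{n-1}z=(\alpha_n-\alpha_{n-1})(Sz-z)$, so each supremum is exactly $|\alpha_n-\alpha_{n-1}|\sup_{z\in Q}|\sin z-z|$, and the series telescopes to a finite value.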
Corollary 2 
([25]). Let $S:H\to H$ be a nonexpansive mapping such that $\mathrm{Fix}(S)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by
$$x_0,x_1\in H,\qquad y_n=x_n+\theta_n(x_n-x_{n-1}),\qquad x_{n+1}=\delta_n y_n+\alpha_n\big(S(\delta_n y_n)-\delta_n y_n\big)+\varepsilon_n,$$
where $(\theta_n)_{n\ge0}$ is a sequence in $[0,\theta]$ with $\theta\in[0,1)$ such that $\sum_{n=1}^{\infty}\theta_n\|x_n-x_{n-1}\|<+\infty$. Suppose that $(\alpha_n)_{n\ge0},(\delta_n)_{n\ge0}\subset(0,1]$ and $(\varepsilon_n)_{n\ge0}\subset H$ are consistent with the following conditions:
(a) 
$\liminf_{n\to\infty}\alpha_n>0$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<+\infty$;
(b) 
$\lim_{n\to\infty}\delta_n=1$, $\sum_{n=0}^{\infty}(1-\delta_n)=+\infty$, and $\sum_{n=1}^{\infty}|\delta_n-\delta_{n-1}|<+\infty$;
(c) 
$\sum_{n=0}^{\infty}\|\varepsilon_n\|<+\infty$.
Then, the sequence $(x_n)_{n\ge0}$ strongly converges to $x^*:=P_{\mathrm{Fix}(S)}(0)$.
Proof. 
First, it is not hard to verify that
$$x_{n+1}=\delta_n y_n+\alpha_n\big(S(\delta_n y_n)-\delta_n y_n\big)+\varepsilon_n\;\Longleftrightarrow\;\begin{cases}z_n=(1-\alpha_n)\delta_n y_n+\alpha_n S(\delta_n y_n),\\[2pt] x_{n+1}=z_n+\tfrac{1}{2}\big(T_n(\delta_n y_n)-z_n\big)+\varepsilon_n,\end{cases} \tag{34}$$
where $z_n=(1-\alpha_n)\delta_n y_n+\alpha_n S(\delta_n y_n)$ and $T_n=(1-\alpha_n)I+\alpha_n S$. Indeed, it is observed that
$$x_{n+1}=\delta_n y_n+\alpha_n\big(S(\delta_n y_n)-\delta_n y_n\big)+\varepsilon_n=(1-\alpha_n)\delta_n y_n+\alpha_n S(\delta_n y_n)+\varepsilon_n=z_n+\tfrac{1}{2}(z_n-z_n)+\varepsilon_n=z_n+\tfrac{1}{2}\Big(\big((1-\alpha_n)I+\alpha_n S\big)(\delta_n y_n)-z_n\Big)+\varepsilon_n=z_n+\tfrac{1}{2}\big(T_n(\delta_n y_n)-z_n\big)+\varepsilon_n.$$
Setting $\beta_n=\tfrac{1}{2}$ for all $n\in\mathbb{N}\cup\{0\}$, and since $\liminf_{n\to\infty}\alpha_n>0$, we have $\liminf_{n\to\infty}\beta_n(1-\beta_n)\alpha_n=\liminf_{n\to\infty}\tfrac{1}{4}\alpha_n=\tfrac{1}{4}\liminf_{n\to\infty}\alpha_n>0$. On the other hand, the definition of $(T_n)_{n\ge0}$ and condition (a) imply that $(T_n)_{n\ge0}$ satisfies (i)–(iii) of Lemma 8. Then, by applying Theorem 1, we obtain the desired result. □
Corollary 3
([16], Theorem 3). Let $S:H\to H$ be a nonexpansive mapping such that $\mathrm{Fix}(S)\neq\emptyset$ and let $(x_n)_{n\ge0}$ be generated by
$$x_0\in H,\qquad x_{n+1}=\delta_n x_n+\alpha_n\big(S(\delta_n x_n)-\delta_n x_n\big).$$
Suppose that $(\alpha_n)_{n\ge0},(\delta_n)_{n\ge0}\subset(0,1]$ are consistent with the following conditions:
1. 
$\liminf_{n\to\infty}\alpha_n>0$ and $\sum_{n=1}^{\infty}|\alpha_n-\alpha_{n-1}|<+\infty$;
2. 
$\lim_{n\to\infty}\delta_n=1$, $\sum_{n=0}^{\infty}(1-\delta_n)=+\infty$, and $\sum_{n=1}^{\infty}|\delta_n-\delta_{n-1}|<+\infty$.
Then, the sequence $(x_n)_{n\ge0}$ strongly converges to $x^*:=P_{\mathrm{Fix}(S)}(0)$.
Proof. 
By letting $\theta_n=0$ and $\varepsilon_n=0$ for all $n\in\mathbb{N}\cup\{0\}$ in Corollary 2, we obtain the desired result. □

4. Applications

This section explores potential uses of Equation (9), and of special cases derived from it, for solving monotone inclusion problems.
Recall that the operator $W:H\to H$ is said to be:
  • monotone if it satisfies $\langle Wx-Wy,\,x-y\rangle\ge 0$ for all $x,y\in H$;
  • κ-cocoercive if there exists $\kappa>0$ such that $\langle Wx-Wy,\,x-y\rangle\ge\kappa\|Wx-Wy\|^2$ for all $x,y\in H$.
The set of all zeros of the operator $W$ is denoted by $\mathrm{zer}(W):=\{z\in H : 0=Wz\}$.
Let $G(V):=\{(x,y)\in H\times H : y\in Vx\}$ be the graph of a multivalued operator $V:H\to 2^H$. Then, $V$ is called:
  • monotone if for all $x,y\in H$, $u\in Vx$ and $v\in Vy$ imply $\langle u-v,\,x-y\rangle\ge 0$;
  • maximal monotone if $G(V)$ is not properly contained in the graph of any other multivalued monotone operator; that is, if $\hat{V}:H\to 2^H$ is a multivalued monotone operator such that $G(V)\subset G(\hat{V})$, then $G(V)=G(\hat{V})$;
  • λ-strongly monotone if there exists $\lambda>0$ such that $\langle x-y,\,u-v\rangle\ge\lambda\|x-y\|^2$ for all $(x,u),(y,v)\in G(V)$.
The multivalued operator $J_V:H\to 2^H$ defined by
$$J_V(x)=(I+V)^{-1}(x),\quad x\in H,$$
is called the resolvent operator associated with $V$. It is well known that if $V:H\to 2^H$ is maximal monotone and $\gamma>0$, then $J_{\gamma V}$ is single-valued and firmly nonexpansive. By employing $J_{\gamma V}$, the Yosida approximation $V_\gamma$ of $V$ is defined by $V_\gamma:=\frac{1}{\gamma}\big(I-J_{\gamma V}\big)$.
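For a concrete instance (ours, not from the paper): taking $V=\partial\|\cdot\|_1$, the subdifferential of the $\ell_1$-norm, which is maximal monotone, the resolvent $J_{\gamma V}=(I+\gamma V)^{-1}$ is the well-known componentwise soft-thresholding map, and the Yosida approximation then follows from its definition:

```python
import numpy as np

def resolvent_l1(x, gamma):
    # J_{gamma V} = (I + gamma V)^{-1} for V = subdifferential of the l1-norm;
    # this resolvent is the componentwise soft-thresholding operator.
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def yosida_l1(x, gamma):
    # Yosida approximation V_gamma = (1/gamma) (I - J_{gamma V})
    x = np.asarray(x, dtype=float)
    return (x - resolvent_l1(x, gamma)) / gamma
```

For example, with $\gamma=1$ the entry $3.0$ is shrunk to $2.0$ while entries of magnitude at most $1$ are mapped to $0$, i.e. to the fixed points of the resolvent restricted to $\mathrm{zer}(V)$-adjacent inputs.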
Let us consider the following monotone inclusion problem:
$$\text{find } x\in H \text{ such that } 0\in\big(\mu_1 Ux+\eta Vx+\nu_1 Wx\big)\cap\big(\mu_2 Ux+\eta Vx+\nu_2 Wx\big), \tag{35}$$
where $\mu_1,\mu_2,\eta,\nu_1,\nu_2>0$, $U,V:H\to 2^H$ are maximal monotone operators, and $W:H\to H$ is a $\kappa$-cocoercive operator with $\kappa>0$. For applying Equation (9) to solve (35), the important tool is as follows:
Proposition 1
([38], Proposition 2.1). Let $T_1,T_2:H\to H$ be two firmly nonexpansive operators and let $W$ be a κ-cocoercive operator with $\kappa>0$. Let $\nu\in(0,2\kappa)$. Then the operator
$$T:=T_1\big(2T_2-I-\nu WT_2\big)+I-T_2$$
is α-averaged with coefficient $\alpha:=\frac{2\kappa}{4\kappa-\nu}<1$. In particular, the following inequality holds for all $x,y\in H$:
$$\|Tx-Ty\|^2\le\|x-y\|^2-\frac{1-\alpha}{\alpha}\|(I-T)x-(I-T)y\|^2.$$
The following lemma provides a characterization of $\operatorname{zer}(\mu U + \eta V + \nu W)$.
Lemma 9 
(Fixed point encoding). Let $\mu, \eta, \nu > 0$, let $U, V : H \to 2^H$ be maximal monotone operators, and let $W : H \to H$ be an operator. Suppose that $\operatorname{zer}(\mu U + \eta V + \nu W) \neq \emptyset$. Then
$$\operatorname{zer}(\mu U + \eta V + \nu W) = J_{\eta V}(\operatorname{Fix}(T)),$$
where $T = J_{\mu U}(2J_{\eta V} - I - \nu W J_{\eta V}) + I - J_{\eta V}$.
Proof. 
Let $x \in \operatorname{zer}(\mu U + \eta V + \nu W)$. Then $0 \in \mu Ux + \eta Vx + \nu Wx$, so there exist $y_U \in Ux$ and $y_V \in Vx$ such that $\mu y_U + \eta y_V + \nu Wx = 0$. We claim that $x = J_{\eta V}z$ for some $z \in \operatorname{Fix}(T)$. Set $z = x + \eta y_V$; since $z \in x + \eta Vx = (I + \eta V)x$, it follows that $x = J_{\eta V}z$. Moreover, $2J_{\eta V}z - z - \nu W J_{\eta V}z = 2x - z - \nu Wx = x - \eta y_V - \nu Wx = x + \mu y_U$. It remains to show that $z \in \operatorname{Fix}(T)$. Consider
$$Tz = J_{\mu U}(2J_{\eta V}z - z - \nu W J_{\eta V}z) + z - J_{\eta V}z = J_{\mu U}(x + \mu y_U) + z - x = x + z - x = z,$$
where $J_{\mu U}(x + \mu y_U) = x$ because $x + \mu y_U \in (I + \mu U)x$. This shows that $z \in \operatorname{Fix}(T)$, and hence $x = J_{\eta V}z \in J_{\eta V}(\operatorname{Fix}(T))$.
On the other hand, let $x \in J_{\eta V}(\operatorname{Fix}(T))$. Then there exists $z \in \operatorname{Fix}(T)$ such that $x = J_{\eta V}z$, so
$$z = Tz = J_{\mu U}(2J_{\eta V}z - z - \nu W J_{\eta V}z) + z - J_{\eta V}z.$$
This implies that $x = J_{\eta V}z = J_{\mu U}(2J_{\eta V}z - z - \nu W J_{\eta V}z)$. Using the Yosida approximations $\frac{1}{\mu}(I - J_{\mu U})$ of $U$ and $\frac{1}{\eta}(I - J_{\eta V})$ of $V$, it follows that
$$\begin{aligned}
z = Tz &= \big(J_{\mu U}(2J_{\eta V} - I - \nu W J_{\eta V}) + I - J_{\eta V}\big)z\\
&= z + J_{\mu U}(2J_{\eta V}z - z - \nu W J_{\eta V}z) - (2J_{\eta V}z - z - \nu W J_{\eta V}z) + (J_{\eta V}z - z) - \nu W J_{\eta V}z\\
&= z - \mu \cdot \tfrac{1}{\mu}(I - J_{\mu U})(2J_{\eta V}z - z - \nu W J_{\eta V}z) - \eta \cdot \tfrac{1}{\eta}(I - J_{\eta V})z - \nu W J_{\eta V}z\\
&\in z - \mu U\big(J_{\mu U}(2J_{\eta V}z - z - \nu W J_{\eta V}z)\big) - \eta V(J_{\eta V}z) - \nu W J_{\eta V}z\\
&= z - \mu Ux - \eta Vx - \nu Wx.
\end{aligned}$$
Hence $0 \in \mu Ux + \eta Vx + \nu Wx$, that is,
$$x \in \operatorname{zer}(\mu U + \eta V + \nu W).$$
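The fixed point encoding of Lemma 9 can be observed concretely. The toy check below runs on $H = \mathbb{R}$ under illustrative assumptions not taken from the paper: $U$ is the normal cone of $[0, \infty)$ (resolvent = projection), $V = \partial|\cdot|$ (resolvent = soft-thresholding), and $W = I$. For these operators the unique zero of $\mu U + \eta V + \nu W$ is $x^* = 0$, so every fixed point of $T$ found on a grid should map to $0$ under $J_{\eta V}$.

```python
import numpy as np

# Illustrative scalar operators (assumptions, not from the paper):
soft = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)  # J_{eta V}, V = d|.|
proj = lambda x: np.maximum(x, 0.0)  # J_{mu U}, U = normal cone of [0, inf)
W = lambda x: x                      # 1-cocoercive
mu, eta, nu = 1.0, 0.5, 0.7

def T(z):
    # T = J_{mu U}(2 J_{eta V} - I - nu W J_{eta V}) + I - J_{eta V}
    j = soft(z, eta)
    return proj(2 * j - z - nu * W(j)) + z - j

zs = np.linspace(-3.0, 3.0, 6001)
fixed = zs[np.abs(T(zs) - zs) < 1e-9]   # grid approximation of Fix(T)
assert fixed.size > 0
# Lemma 9: J_{eta V}(Fix(T)) = zer(mu U + eta V + nu W) = {0} in this example
assert np.all(np.abs(soft(fixed, eta)) < 1e-9)
```

Note that $\operatorname{Fix}(T)$ is a whole interval here while its image under $J_{\eta V}$ collapses to the single zero, illustrating why the lemma composes $\operatorname{Fix}(T)$ with the resolvent.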
Set
$$S := J_{\mu_1 U}(2J_{\eta V} - I - \nu_1 W J_{\eta V}) + I - J_{\eta V}$$
and
$$T := J_{\mu_2 U}(2J_{\eta V} - I - \nu_2 W J_{\eta V}) + I - J_{\eta V}.$$
Assume that $\operatorname{Fix}(S) \cap \operatorname{Fix}(T) \neq \emptyset$. By employing (33), the following algorithm can be constructed for solving (35):
$$\begin{cases}
x_0, x_1 \in H,\\
y_n = x_n + \theta_n(x_n - x_{n-1}),\\
j_n = J_{\eta V}(\delta_n y_n),\\
z_n = \delta_n y_n + \alpha_n\big(J_{\mu_1 U}\!\big(2j_n - \delta_n y_n - \nu_1 W(j_n)\big) - j_n\big),\\
x_{n+1} = (1 - \beta_n)z_n + \beta_n\big(J_{\mu_2 U}\!\big(2j_n - \delta_n y_n - \nu_2 W(j_n)\big) + \delta_n y_n - j_n\big) + \varepsilon_n,
\end{cases} \tag{36}$$
for all $n \ge 1$, where $\mu_1, \mu_2, \eta > 0$ and $\nu_1, \nu_2 \in (0, 2\kappa)$.
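A minimal runnable sketch of iteration (36) on $H = \mathbb{R}$ is given below. All operator and parameter choices are illustrative assumptions, not from the paper: $U$ is the normal cone of $[0, \infty)$, $V = \partial|\cdot|$, $W = I$ ($\kappa = 1$), constant $\theta_n, \alpha_n, \beta_n$, $\delta_n = n/(n+1) \to 1$, and $\varepsilon_n = 0$. For these operators the common zero of both inclusions is $0$, so $j_n$ should approach $0$.

```python
import numpy as np

# Illustrative scalar resolvents (assumptions, not from the paper):
soft = lambda x, g: np.sign(x) * max(abs(x) - g, 0.0)  # J_{eta V}, V = d|.|
proj = lambda x: max(x, 0.0)                           # J_{mu U}, U = N_[0,inf)
W = lambda x: x                                        # 1-cocoercive (kappa = 1)
eta, nu1, nu2 = 0.5, 0.7, 0.9                          # nu_i in (0, 2*kappa)

x_prev, x = 2.0, 1.5
for n in range(1, 201):
    theta, alpha, beta = 0.3, 0.9, 0.9                 # illustrative constants
    delta = n / (n + 1.0)                              # delta_n -> 1
    dy = delta * (x + theta * (x - x_prev))            # delta_n * y_n
    j = soft(dy, eta)                                  # j_n = J_{eta V}(delta_n y_n)
    z = dy + alpha * (proj(2 * j - dy - nu1 * W(j)) - j)
    x_prev, x = x, (1 - beta) * z + beta * (proj(2 * j - dy - nu2 * W(j)) + dy - j)

assert abs(j) < 1e-3 and abs(x) < 0.05  # j_n and x_n approach the common zero 0
```

The summability condition $\sum_n \theta_n\|x_n - x_{n-1}\| < +\infty$ of the theorem is satisfied here in practice because the iterates converge quickly; in general $\theta_n$ would be chosen adaptively, as in the experiments below.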
Theorem 2.
Let $U, V : H \to 2^H$ be two maximal monotone operators and let $W : H \to H$ be $\kappa$-cocoercive with $\kappa > 0$. Suppose that $\mu_1, \mu_2, \eta > 0$ and $\nu_1, \nu_2 \in (0, 2\kappa)$. Let $S = J_{\mu_1 U}(2J_{\eta V} - I - \nu_1 W J_{\eta V}) + I - J_{\eta V}$ and $T = J_{\mu_2 U}(2J_{\eta V} - I - \nu_2 W J_{\eta V}) + I - J_{\eta V}$, and assume that $\operatorname{Fix}(S) \cap \operatorname{Fix}(T) \neq \emptyset$, so that $J_{\eta V}(\operatorname{Fix}(S) \cap \operatorname{Fix}(T)) \subseteq \operatorname{zer}(\mu_1 U + \eta V + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \eta V + \nu_2 W)$. Let $(\theta_n)_{n \ge 1}$ be a sequence in $[0, \theta]$ with $\theta \in [0, 1)$, and let $(x_n)_{n \ge 0}$ and $(j_n)_{n \ge 1}$ be generated by Equation (36). Assume that $\sum_{n=1}^{\infty} \theta_n\|x_n - x_{n-1}\| < +\infty$ and that Assumption 1 holds. Then the following statements are true:
(A) 
$(x_n)_{n \ge 0}$ converges strongly to $x^* := P_{\operatorname{Fix}(S) \cap \operatorname{Fix}(T)}(0)$.
(B) 
$(j_n)_{n \ge 1}$ converges strongly to $J_{\eta V}(x^*) \in \operatorname{zer}(\mu_1 U + \eta V + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \eta V + \nu_2 W)$.
Proof. 
(A): Let $(x_n)_{n \ge 0}$ be generated by Equation (36). By Proposition 1, $S$ and $T$ are nonexpansive. Applying Corollary 1, the sequence $(x_n)_{n \ge 0}$ converges strongly to $x^* := P_{\operatorname{Fix}(S) \cap \operatorname{Fix}(T)}(0)$ as $n \to +\infty$.
(B): Notice that $y_n \to x^*$ and $\delta_n \to 1$ as $n \to +\infty$, so $\delta_n y_n \to x^*$. Since $J_{\eta V}$ is continuous, $j_n = J_{\eta V}(\delta_n y_n) \to J_{\eta V}(x^*) \in J_{\eta V}(\operatorname{Fix}(S) \cap \operatorname{Fix}(T))$. By Lemma 9, $\operatorname{zer}(\mu_1 U + \eta V + \nu_1 W) = J_{\eta V}(\operatorname{Fix}(S))$ and $\operatorname{zer}(\mu_2 U + \eta V + \nu_2 W) = J_{\eta V}(\operatorname{Fix}(T))$. Then
$$J_{\eta V}(\operatorname{Fix}(S) \cap \operatorname{Fix}(T)) \subseteq J_{\eta V}(\operatorname{Fix}(S)) \cap J_{\eta V}(\operatorname{Fix}(T)) = \operatorname{zer}(\mu_1 U + \eta V + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \eta V + \nu_2 W).$$
This shows that $j_n \to J_{\eta V}(x^*) \in \operatorname{zer}(\mu_1 U + \eta V + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \eta V + \nu_2 W)$.   □
In Theorem 2, if we set $Vx = 0$ for all $x \in H$, then $J_{\eta V} = I$. Applying Lemma 9 yields $\operatorname{zer}(\mu_1 U + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \nu_2 W) = \operatorname{Fix}(S) \cap \operatorname{Fix}(T)$, where $S := J_{\mu_1 U}(I - \nu_1 W)$ and $T := J_{\mu_2 U}(I - \nu_2 W)$. All of this leads to the following corollary.
Corollary 4.
Let $U : H \to 2^H$ be a maximal monotone operator and $W : H \to H$ a $\kappa$-cocoercive operator with $\kappa > 0$, and suppose that $\operatorname{zer}(\mu_1 U + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \nu_2 W) \neq \emptyset$. Let $(x_n)_{n \ge 0}$ be generated by
$$\begin{cases}
x_0, x_1 \in H,\\
y_n = x_n + \theta_n(x_n - x_{n-1}),\\
z_n = (1 - \alpha_n)\delta_n y_n + \alpha_n J_{\mu_1 U}\!\big(\delta_n y_n - \nu_1 W(\delta_n y_n)\big),\\
x_{n+1} = (1 - \beta_n)z_n + \beta_n J_{\mu_2 U}\!\big(\delta_n y_n - \nu_2 W(\delta_n y_n)\big) + \varepsilon_n,
\end{cases} \tag{37}$$
for all $n \ge 1$, where $\mu_1, \mu_2 > 0$, $\nu_1, \nu_2 \in (0, 2\kappa)$, and $(\theta_n)_{n \ge 1} \subseteq [0, \theta]$ with $\theta \in [0, 1)$. Assume that Assumption 1 holds and $\sum_{n=1}^{\infty} \theta_n\|x_n - x_{n-1}\| < +\infty$. Then the sequence $(x_n)_{n \ge 0}$ converges strongly to $P_{\operatorname{zer}(\mu_1 U + \nu_1 W) \cap \operatorname{zer}(\mu_2 U + \nu_2 W)}(0)$.
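This corollary can be exercised on a small $\ell_1$-regularized least-squares problem. The sketch below is illustrative, not the paper's experiment: it assumes $U = \partial(\tau\|\cdot\|_1)$ (so $J_{\mu U}$ is soft-thresholding with level $\mu\tau$) and $W = \nabla\big(\frac{1}{2}\|Ax - b\|^2\big)$, which is $\kappa$-cocoercive with $\kappa = 1/\|A^{\mathsf T}A\|$. We take $\mu_1 = \nu_1$ and $\mu_2 = \nu_2$ so that both inclusions in the corollary share the same solution set; all matrices and constants are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
b = A @ np.array([1.0, 0.0, 0.0, -2.0, 0.0]) + 0.01 * rng.normal(size=20)
tau = 0.1
kappa = 1.0 / np.linalg.norm(A.T @ A, 2)        # cocoercivity constant of W
W = lambda x: A.T @ (A @ x - b)                 # gradient of 0.5*||Ax - b||^2
soft = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)

nu1, nu2 = 1.0 * kappa, 1.5 * kappa             # both in (0, 2*kappa)
mu1, mu2 = nu1, nu2                             # equal ratios -> common solution set
x_prev, x = np.zeros(5), np.zeros(5)
for n in range(1, 20001):
    theta, alpha, beta = 0.3, 0.9, 0.9          # illustrative constants
    delta = n / (n + 1.0)                       # delta_n -> 1
    dy = delta * (x + theta * (x - x_prev))
    z = (1 - alpha) * dy + alpha * soft(dy - nu1 * W(dy), mu1 * tau)
    x_prev, x = x, (1 - beta) * z + beta * soft(dy - nu2 * W(dy), mu2 * tau)

# fixed-point residual of the forward-backward map at the limit point
res = np.linalg.norm(x - soft(x - nu1 * W(x), mu1 * tau))
assert res < 1e-2
```

The residual check confirms that the limit is (approximately) a zero of $\mu_1 U + \nu_1 W$, i.e., a minimizer of $\frac{1}{2}\|Ax - b\|^2 + \tau\|x\|_1$.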

5. Numerical Experiments

To examine the performance of the proposed iterative method, we conducted numerical experiments on image restoration problems. All experiments were run in MATLAB R2016b on a MacBook Air (13-inch, early 2017) with a 1.8 GHz Intel Core i5 processor and 8 GB of 1600 MHz DDR3 memory.

Image Restoration Problems

In this section, we apply the suggested algorithm to image restoration problems, encompassing the deblurring and denoising of images. We consider the degradation model
$$y = \Pi x + w, \tag{38}$$
where $y$, $\Pi$, $x$, and $w$ denote the corrupted image, the degradation (blurring) operator, the pristine image, and additive noise, respectively.
To obtain the reconstructed image, we solve the regularized least-squares problem
$$\min_{x} \ \frac{1}{2}\|\Pi x - y\|_2^2 + \tau\,\phi(x), \tag{39}$$
where $\tau > 0$ is the regularization parameter and $\phi(\cdot)$ is the regularization function. A classical regularization approach for suppressing noise in restoration problems is Tikhonov regularization [42]; here we adopt the sparsity-promoting $\ell_1$ norm, so that problem (39) can be expressed as
$$\text{find } x^* \in \arg\min_{x \in \mathbb{R}^k} \ \frac{1}{2}\|\Pi x - y\|_2^2 + \tau\|x\|_1, \tag{40}$$
where $y$ refers to the corrupted image and $\Pi$ represents a bounded linear operator. Problem (40) is a specific instance of problem (10) obtained by setting $U = \partial f(\cdot)$, $V = 0$, and $W = \nabla L(\cdot)$, where $f(x) = \|x\|_1$, $\tau = 0.001$, and $L(x) = \frac{1}{2}\|\Pi x - y\|_2^2$. With this configuration, $W(x) = \nabla L(x) = \Pi^{\mathsf T}(\Pi x - y)$, where $\Pi^{\mathsf T}$ denotes the transpose of $\Pi$. We first select test images and corrupt them with various blurring techniques. Using the configuration outlined in Corollary 4, our algorithm is applied to problem (40) with the settings $\alpha_n = 0.99 + \frac{1}{100n^2}$, $\beta_n = 0.99 + \frac{1}{100n^2}$, $\delta_n = 1 - \frac{1}{100n^2}$, $\mu_1 = \mu_2 = 0.000099$, $\nu_1 = 0.7$, $\nu_2 = 0.99$, $\varepsilon_n = 0$, and $\theta_n$ defined by
$$\theta_n = \begin{cases}
\min\left\{ \dfrac{70n - 9}{100n},\ \dfrac{1}{(n+1)^2\|x_n - x_{n-1}\|} \right\}, & \text{if } x_n \neq x_{n-1},\\[2mm]
\dfrac{70n - 9}{100n}, & \text{otherwise}.
\end{cases}$$
We compare our proposed algorithm with Algorithm (4.1) of [28], Equation (34) of [25], and the inertial Mann-type algorithm introduced by Kitkuan et al. [27].
For Algorithm (4.1) of [28], we set $\alpha_n = \beta_n = \gamma_n = 0.99 + \frac{1}{100n^2}$. For Equation (34) of [25], we select $\alpha_n = 0.99 + \frac{1}{100n^2}$, $\delta_n = 1 - \frac{1}{100n + 1}$, and $\lambda_n = 0.7$. For the algorithm in ([27], Theorem 3.1), we choose $\varsigma_n = \theta_n$, $\alpha_n = \frac{1}{(n+10)^2}$, $\lambda_n = 0.7$, and $h(x) = \frac{x^2}{12}$. We assess the quality of the reconstructed image using the signal-to-noise ratio (SNR), defined by
$$\mathrm{SNR}(n) = 20\log_{10}\!\left(\frac{\|x\|_2^2}{\|x - x_n\|_2^2}\right),$$
where $x$ and $x_n$ denote the original image and the restored image at iteration $n$, respectively.
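For reproducibility, the SNR measure above translates directly into code. The sketch below follows the definition exactly as stated (squared norms inside the logarithm); the function name and sample arrays are illustrative.

```python
import numpy as np

def snr_db(x, x_n):
    # SNR(n) = 20 * log10( ||x||_2^2 / ||x - x_n||_2^2 ), as defined above
    return 20.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_n) ** 2))

x = np.array([3.0, 4.0])    # ||x||_2^2 = 25
x_n = np.array([3.0, 9.0])  # ||x - x_n||_2^2 = 25, so the ratio is 1 and SNR = 0 dB
assert abs(snr_db(x, x_n)) < 1e-12
```

Larger SNR values indicate a restored image closer to the original.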
The corresponding numerical results for the aforementioned selections are presented in Figure 1, Figure 2, Figure 3 and Figure 4.
Our algorithm has demonstrated superior performance in image restoration compared to other algorithms, as shown in Table 1.

6. Conclusions

This paper presents and studies a modified Mann-type algorithm, Equation (9), for two countable families of nonexpansive mappings, incorporating an inertial extrapolation term. Under mild conditions on the nonexpansive mappings, together with the AKTT condition (see Definition 1) and appropriate control conditions on the scalar parameters, the sequence generated by this method converges strongly to a common fixed point of the two countable families of nonexpansive mappings. In terms of applications, we applied Equation (36), a by-product of Equation (9), to solve the monotone inclusion problem (35). Furthermore, for the image restoration problem (40), we applied Equation (37), a by-product of Equation (36), to find a solution of (40). Numerical results in various instances were provided to showcase the efficiency of the newly developed algorithm; in these experiments, it consistently achieves higher SNR than the compared methods.

Author Contributions

Conceptualization, K.U., S.P. and N.A.; methodology, K.U. and N.A.; software, K.U. and N.A.; validation, K.U., S.P. and N.A.; convergence analysis, K.U. and N.A.; investigation, K.U., S.P. and N.A.; writing—original draft preparation, K.U., S.P., N.A. and P.T.; writing—review and editing, K.U., N.A. and P.T.; visualization, K.U. and N.A.; project administration, K.U. and N.A.; funding acquisition, K.U. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

K.U. and S.P. are supported by the National Research Council of Thailand (Grant No. R2565B071).

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the editors and anonymous referees for their valuable comments and suggestions, which have contributed to the enhancement of the paper’s quality and presentation. This research received financial support from the National Research Council of Thailand (Grant No. R2565B071) and the Faculty of Science, Naresuan University.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Browder, F.E.; Petryshyn, W.V. Construction of fixed points of nonlinear mappings in Hilbert spaces. J. Math. Anal. Appl. 1967, 20, 197–228.
  2. Ungchittrakool, K. Existence and convergence of fixed points for a strict pseudo-contraction via an iterative shrinking projection technique. J. Nonlinear Convex Anal. 2014, 15, 693–710.
  3. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  4. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
  5. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; CMS Books in Mathematics; Springer: New York, NY, USA, 2011.
  6. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510.
  7. Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276.
  8. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150.
  9. Halpern, B. Fixed points of nonexpansive maps. Bull. Am. Math. Soc. 1967, 73, 957–961.
  10. Moudafi, A. Viscosity approximation methods for fixed points problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  11. Takahashi, S.; Takahashi, W. Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 331, 506–515.
  12. Plubtieng, S.; Punpaeng, R. A general iterative method for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 2007, 336, 455–469.
  13. Plubtieng, S.; Ungchittrakool, K. Viscosity approximation methods for equilibrium problems and zeroes of an accretive operator in Hilbert spaces. Int. Math. Forum 2008, 3, 1387–1400.
  14. Cholamjiak, P.; Suantai, S. Viscosity approximation methods for a nonexpansive semigroup in Banach spaces with gauge functions. J. Glob. Optim. 2012, 54, 185–197.
  15. Nimit, N.; Narin, P. Viscosity approximation methods for split variational inclusion and fixed point problems in Hilbert spaces. In Proceedings of the International Multi-Conference of Engineers and Computer Scientists (IMECS 2014), Hong Kong, China, 12–14 March 2014; Volume II.
  16. Boţ, R.I.; Csetnek, E.R.; Meier, D. Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces. Optim. Methods Softw. 2019, 34, 489–514.
  17. Polyak, B.T. Some methods of speeding up the convergence of iterative methods. Zh. Vychisl. Mat. Mat. Fiz. 1964, 4, 1–17.
  18. Nesterov, Y. A method for solving a convex programming problem with convergence rate O(1/k²). Dokl. Math. 1983, 27, 367–372.
  19. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
  20. Moudafi, A.; Oliny, M. Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 2003, 155, 447–454.
  21. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782.
  22. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 2013, 137, 91–129.
  23. Beck, A.; Teboulle, M. Accelerated gradient methods for nonconvex optimization. Math. Program. 2014, 144, 1–35.
  24. Lorenz, D.; Pock, T. An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325.
  25. Artsawang, N.; Ungchittrakool, K. Inertial Mann-type algorithm for a nonexpansive mapping to solve monotone inclusion and image restoration problems. Symmetry 2020, 12, 750.
  26. Shehu, Y.; Iyiola, O.S.; Ogbuisi, F.U. Iterative method with inertial terms for nonexpansive mappings: Applications to compressed sensing. Numer. Algor. 2019, 83, 1321–1347.
  27. Kitkuan, D.; Kumam, P.; Martínez-Moreno, J.; Sitthithakerngkiet, K. Inertial viscosity forward–backward splitting algorithm for monotone inclusions and its application to image restoration problems. Int. J. Comput. Math. 2019, 97, 482–497.
  28. Akutsah, F.; Mebawondu, A.A.; Pillay, P.; Narain, O.K.; Igiri, C.P. A new iterative method for solving constrained minimization, variational inequality and split feasibility problems in the framework of Banach spaces. Sahand Commun. Math. Anal. 2023, 20, 147–172.
  29. Akram, M.; Dilshad, M.; Rajpoot, A.K.; Babu, F.; Ahmad, R.; Yao, J.C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics 2022, 10, 2098.
  30. Balooee, J.; Yao, J.C. Graph convergence with an application for system of variational inclusions and fixed-point problems. J. Inequal. Appl. 2022, 1, 112.
  31. Yao, Y.; Shehu, Y.; Li, X.H.; Dong, Q.L. A method with inertial extrapolation step for split monotone inclusion problems. Optimization 2021, 70, 741–761.
  32. Zhao, X.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, 82, 43–52.
  33. Zhu, L.J.; Yao, Y. Algorithms for approximating solutions of split variational inclusion and fixed point problems. Mathematics 2023, 11, 641.
  34. Das, G.; Debata, J.P. Fixed points of quasi-nonexpansive mappings. Indian J. Pure Appl. Math. 1986, 17, 1263–1269.
  35. Takahashi, W.; Tamura, T. Convergence theorems for a pair of nonexpansive mappings. J. Convex Anal. 1995, 5, 45–56.
  36. Khan, S.H.; Fukhar-ud-din, H. Weak and strong convergence of a scheme with errors for two nonexpansive mappings. Nonlinear Anal. 2005, 61, 1295–1301.
  37. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360.
  38. Davis, D.; Yin, W. A three-operator splitting scheme and its optimization applications. Set-Valued Var. Anal. 2017, 25, 829–858.
  39. Xu, H.-K. Iterative algorithms for nonlinear operators. J. London Math. Soc. 2002, 66, 240–256.
  40. Mainge, P.E. Approximation methods for common fixed points of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2007, 325, 469–479.
  41. Plubtieng, S.; Ungchittrakool, K. Approximation of common fixed points for a countable family of relatively nonexpansive mappings in a Banach space and applications. Nonlinear Anal. 2010, 72, 2896–2908.
  42. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems. SIAM Rev. 1979, 21, 266–267.
Figure 1. (a) The original image 'Car'; (b) the image degraded by average blur; (c–f) the reconstructed images obtained using the Kitkuan et al. algorithm [27], the Artsawang algorithm [25], the Akutsah et al. algorithm [28], and our algorithm (Equation (37)), respectively.
Figure 2. (a) The original image 'peppers'; (b) the image affected by motion blur; (c–f) the reconstructed images obtained using the Kitkuan et al. algorithm [27], the Artsawang algorithm [25], the Akutsah et al. algorithm [28], and our algorithm (Equation (37)), respectively.
Figure 3. (a–c) Comparison of the SNR behavior for Figure 1 over iterations 0–1000, 200–1000, and 500–1000, respectively [25,27,28].
Figure 4. (a–c) Comparison of the SNR behavior for Figure 2 over iterations 0–1000, 200–1000, and 500–1000, respectively [25,27,28].
Table 1. The performance of the signal to noise ratio (SNR) is assessed for two images.
| n | Kitkuan Alg. (Car) | Artsawang Alg. (Car) | Akutsah Alg. (Car) | Our Alg. (Car) | Kitkuan Alg. (Peppers) | Artsawang Alg. (Peppers) | Akutsah Alg. (Peppers) | Our Alg. (Peppers) |
|---|---|---|---|---|---|---|---|---|
| 1 | 29.1979 | 31.2609 | 30.8224 | 31.2717 | 20.0085 | 21.7953 | 21.4784 | 21.8055 |
| 50 | 34.1247 | 34.2846 | 33.8063 | 34.4055 | 28.6085 | 28.9211 | 26.1883 | 29.1169 |
| 100 | 34.5512 | 34.6842 | 34.3091 | 34.8931 | 31.3053 | 31.6493 | 28.4961 | 31.8981 |
| 200 | 34.9508 | 35.0692 | 34.7918 | 35.4585 | 33.9631 | 34.2778 | 31.2576 | 34.5329 |
| 300 | 35.1721 | 35.2809 | 35.0944 | 35.8563 | 35.3848 | 35.6774 | 32.8735 | 35.9750 |
| 500 | 35.3848 | 35.4698 | 35.5312 | 36.4348 | 37.0055 | 37.2620 | 34.7713 | 37.7622 |
| 1000 | 35.2761 | 35.2815 | 36.2586 | 37.3087 | 38.6425 | 38.7880 | 37.2158 | 40.1373 |