Article

Efficiency of a New Iterative Algorithm Using Fixed-Point Approach in the Settings of Uniformly Convex Banach Spaces

1 Department of Mathematics and Statistics, University of Victoria, Victoria, BC V8W 3R4, Canada
2 Department of Mathematics, University of Sialkot, Sialkot 51310, Pakistan
3 Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al Majmaah 11952, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2024, 13(8), 502; https://doi.org/10.3390/axioms13080502
Submission received: 26 April 2024 / Revised: 24 June 2024 / Accepted: 18 July 2024 / Published: 26 July 2024
(This article belongs to the Special Issue Mathematical Analysis and Applications IV)

Abstract

In the setting of Banach spaces, a novel iterative algorithm is presented in this study using the Chatterjea–Suzuki–C (CSC) condition, and the corresponding convergence theorems are established. The efficacy of the proposed algorithm is discussed analytically and numerically. We explain the solution of a Caputo fractional differential problem using our main result and then provide a numerical simulation to validate the results. Moreover, we use MATLAB R2021a to compare the numerical results obtained with the new iterative algorithm against some efficient existing algorithms. The work contributes to the current advancement of fixed-point approximation iterative techniques in Banach spaces.

1. Introduction and Motivation

To approximate the value of a fixed point (a point that does not change under certain conditions), various iterative algorithms have been introduced over time in the field of fixed-point theory [1]. Fixed-point theory is a fundamental concept in mathematics with significant applications. For example, by identifying fixed points [2], researchers can gain insights into the behavior of iterative algorithms, which are commonly used in logic programming [3], machine learning [4], and other artificial intelligence applications [5]. This provides a framework for understanding the convergence of these algorithms and making predictions about their performance. Moreover, fixed-point theory is a crucial tool in the geometry of figures [6,7,8], which is important for improving the performance of modern systems and for advancing artificial intelligence through the development and improvement of fractal antennas. The speed of convergence is an important factor when choosing between different iterative algorithms for approximating fixed points. Once the existence of a fixed point for a given mapping has been established, determining its value becomes a challenging task. Therefore, some basic iterative algorithms are discussed in [9,10,11], and further modifications of these fundamental algorithms have been developed in [12,13,14]. By studying the literature [9,10,11,12,13,14], it can be observed that every modified iterative algorithm has an improved rate of convergence compared to the previous one, and the authors supported their claims with numerical examples. Furthermore, the Banach contraction principle [15] provides a powerful tool for establishing the existence and uniqueness of a fixed point, but it does not provide any direct method for computing the fixed point itself; the Banach contraction theorem relies simply on Picard's iterative scheme [10] to approximate the value of a fixed point. Therefore, the role of iterative methods becomes more significant for this purpose. A related review of the literature shows that novel and generalized classes of Kannan-type contractions are used to discuss new results in [16,17,18]. Generalized nonexpansive mappings on Banach spaces are also discussed in [19,20,21,22]. Moreover, significant results are available in the literature after extensive study of mappings with the Suzuki (C)-condition. Khatoon and Uddin [23], Wairojjana [24], and Hasanen [25,26,27] also presented iterative algorithms in this respect. More recently, solutions to non-linear problems have been found using iterative schemes. For instance, an innovative iterative method was presented for determining an approximate solution of a certain kind of fractional differential equation in [28].
Ullah and Arshad [22] presented the M-Iterative algorithm along with condition (C). Furthermore, [22] contains a beautiful discussion about why each new iteration process is preferred over a large class of the existing iterative algorithms. More recently, [29] used the M-Iterative algorithm defined in [22] along with the Chatterjea–Suzuki–C condition and established new results related to strong and weak convergence. Hence, according to the above review of the literature, the following question arises, which is posed and answered in this research.
Question: Is there any iterative scheme with a better rate of convergence compared to the algorithm discussed in [29]?
To answer this question, we introduce a new three-step iterative algorithm, the Z-Iterative algorithm, in this research. To present the efficiency of our proposed algorithm, we plan this article as follows:
In Section 2, we review the prerequisites and necessary terminology pertaining to various iterations. Then, in Section 3, we establish strong and weak convergence results for the Z-iterative approach in the context of uniformly convex Banach spaces; these results are proved for nonexpansive mappings enhanced with the Chatterjea–Suzuki–C condition and are further refined in several theorems. Section 4 offers an application to fractional differential equations. In light of this, Section 5 presents a range of numerical outcomes for the suggested iteration technique obtained by considering various parameter values; this section also compares these numerical values for a class of existing iterations and the newly proposed iteration, and tabular and graphical illustrations are used to analyze the number of iterations required. To demonstrate that the examined scheme is superior to the current ones, we also make a numerical comparison of the Z-iterative algorithm's convergence speed with some other well-known schemes. Section 6 concludes and provides further discussion of these findings and comparisons; future directions for this work are also included in the last section.

2. Preliminaries

Fundamental definitions and theorems that are necessary to demonstrate our new findings are provided in this section. In the subsequent definitions, ℵ denotes a nonempty subset of a uniformly convex Banach space ℜ.
Definition 1 
([30,31]). A mapping τ : ℵ → ℵ is said to be a contraction if, for all elements o, p ∈ ℵ, there exists α ∈ [0, 1) such that
‖τ(o) − τ(p)‖ ≤ α‖o − p‖.
Definition 2. 
τ : ℵ → ℵ is a nonexpansive mapping over the uniformly convex Banach space ℜ if, for all o, p ∈ ℵ,
‖τ(o) − τ(p)‖ ≤ ‖o − p‖.
Definition 3. 
A point s_0 ∈ ℵ is a fixed point of the mapping τ if
s_0 = τ(s_0).
We denote the set of all such fixed points of τ by F_τ.
Definition 4. 
A mapping τ : ℵ → ℵ is said to be endowed with condition (C) (or called a Suzuki mapping) if the following implication holds:
(1/2)‖e − τ(e)‖ ≤ ‖e − y‖ ⟹ ‖τ(e) − τ(y)‖ ≤ ‖e − y‖.
Definition 5. 
A mapping τ : ℵ → ℵ is said to satisfy the Chatterjea–Suzuki–C condition if the following implication holds:
(1/2)‖e − τ(e)‖ ≤ ‖e − y‖ ⟹ ‖τ(e) − τ(y)‖ ≤ (1/2)( ‖e − τ(y)‖ + ‖y − τ(e)‖ ).
Definition 6 
([32]). A mapping τ defined on a subset ℵ of a Banach space is a contraction on the uniformly convex Banach space if and only if there is a function
ς : [0, ∞) → [0, ∞) with ς(0) = 0 and ς(u) > 0 for all u ∈ [0, ∞) ∖ {0}
such that
‖e − τ(e)‖ ≥ ς( d(e, F_τ) ) for all e ∈ ℵ,
where d(e, F_τ) denotes the distance from e to F_τ.
Definition 7. 
Suppose ℵ is a closed and convex subset of ℜ and { a_m } is a bounded sequence in ℜ. The asymptotic radius of { a_m } relative to ℵ is defined by
ς(ℵ, { a_m }) = inf { lim sup_{m→∞} ‖a_m − a‖ : a ∈ ℵ }.
Similarly, the asymptotic center of { a_m } relative to ℵ is defined by
A(ℵ, { a_m }) = { a ∈ ℵ : lim sup_{m→∞} ‖a_m − a‖ = ς(ℵ, { a_m }) }.
Definition 8 
([33]). A Banach space ℜ is said to satisfy Opial's condition if, whenever a sequence { a_m } in ℜ converges weakly to a_0, we have
lim sup_{m→∞} ‖a_m − a_0‖ < lim sup_{m→∞} ‖a_m − b‖ for all b ∈ ℜ ∖ { a_0 }.
Remark 1 
([34]). If ℜ is a uniformly convex Banach space, then the set A(ℵ, { a_m }) contains exactly one element. Moreover, if ℵ is weakly compact and convex, then A(ℵ, { a_m }) is a convex set. More details can be seen in [35,36].
Proposition 1. 
For a nonempty closed subset ℵ of a Banach space ℜ and a self-mapping τ : ℵ → ℵ, we have the following results:
(a) 
If τ is enhanced with the Chatterjea–Suzuki–C condition and F_τ ≠ ∅, then
‖τ(x) − a‖ ≤ ‖x − a‖ for all x ∈ ℵ and a ∈ F_τ.
(b) 
If τ is enhanced with the Chatterjea–Suzuki–C condition, then F_τ is closed. Moreover, if ℜ is strictly convex and ℵ is convex, then F_τ is also convex.
(c) 
If τ is enhanced with the Chatterjea–Suzuki–C condition, then for any x, y ∈ ℵ,
‖x − τ(y)‖ ≤ 5‖x − τ(x)‖ + ‖x − y‖.
(d) 
If τ is enhanced with the Chatterjea–Suzuki–C condition, { a_m } converges weakly to a, and
lim_{m→∞} ‖τ(a_m) − a_m‖ = 0,
then a ∈ F_τ, provided that ℜ satisfies Opial's condition.
The subsequent result is due to Schu [37].
Lemma 1. 
Suppose 0 < p ≤ k_m ≤ q < 1 for all m and let e ≥ 0 be a real number. If { a_m } and { b_m } are sequences in ℜ satisfying
lim sup_{m→∞} ‖a_m‖ ≤ e, lim sup_{m→∞} ‖b_m‖ ≤ e,
and
lim_{m→∞} ‖(1 − k_m) a_m + k_m b_m‖ = e,
then
lim_{m→∞} ‖a_m − b_m‖ = 0.

Iterative Algorithms

The simplest and most basic among the existing iterative algorithms is the Picard [10] iterative algorithm, defined by a_{m+1} = τ(a_m), which is commonly used to find approximations. In the following, we list some more advanced and recent iterative algorithms that are used and compared with our new iteration in this study. Moreover, σ_m, β_m and γ_m denote sequences in (0, 1) in the following iteration schemes.
Agarwal et al. [12] defined the iterative algorithm as given by,
a_1 ∈ ℵ,
b_m = (1 − σ_m) a_m + σ_m τ(a_m),
a_{m+1} = (1 − β_m) τ(a_m) + β_m τ(b_m).
Abbas and Nazir [13] defined the iterative algorithm as given by,
a_1 ∈ ℵ,
c_m = (1 − σ_m) a_m + σ_m τ(a_m),
b_m = (1 − β_m) τ(a_m) + β_m τ(c_m),
a_{m+1} = (1 − γ_m) τ(b_m) + γ_m τ(c_m).
Thakur et al. [14] defined the iterative algorithm as given by,
a_1 ∈ ℵ,
c_m = (1 − σ_m) a_m + σ_m τ(a_m),
b_m = (1 − β_m) τ(c_m) + β_m τ(a_m),
a_{m+1} = (1 − γ_m) τ(c_m) + γ_m τ(b_m).
The M-iterative algorithm is defined in [22] (see also [29]) by
a_1 ∈ ℵ,
c_m = (1 − σ_m) a_m + σ_m τ(a_m),
b_m = τ(c_m),
a_{m+1} = τ(b_m).
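For readers who prefer to experiment, the update rules listed above translate directly into code. The following minimal Python sketch (the paper's own experiments were carried out in MATLAB R2021a; the helper names iterate, picard_step, and m_step are ours) shows the Picard and M-iteration schemes for a generic mapping τ and parameter sequence σ_m; the Agarwal, Abbas–Nazir, and Thakur schemes follow the same pattern.

```python
from typing import Callable

def iterate(step: Callable[[float, int], float], a1: float,
            tol: float = 1e-15, max_iter: int = 1000) -> tuple[float, int]:
    """Run a one-step iteration scheme until successive iterates agree to tol."""
    a = a1
    for m in range(1, max_iter + 1):
        a_next = step(a, m)
        if abs(a_next - a) <= tol:
            return a_next, m
        a = a_next
    return a, max_iter

def picard_step(tau: Callable[[float], float]) -> Callable[[float, int], float]:
    # Picard: a_{m+1} = tau(a_m)
    return lambda a, m: tau(a)

def m_step(tau: Callable[[float], float],
           sigma: Callable[[int], float]) -> Callable[[float, int], float]:
    # M-iteration: c_m = (1 - sigma_m) a_m + sigma_m tau(a_m),
    #              b_m = tau(c_m),  a_{m+1} = tau(b_m)
    def step(a: float, m: int) -> float:
        c = (1 - sigma(m)) * a + sigma(m) * tau(a)
        return tau(tau(c))
    return step

# Example: tau(e) = (e + 7) / 2 has the fixed point 7.
tau = lambda e: (e + 7.0) / 2.0
print(iterate(picard_step(tau), 7.5))
print(iterate(m_step(tau, lambda m: 0.5), 7.5))
```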

3. Main Results

In this section, we propose our new iterative algorithm, which we name the Z-iterative algorithm, defined as follows:
a_1 ∈ ℵ,
c_m = τ((1 − σ_m) a_m + σ_m τ(a_m)),
b_m = τ((1 − β_m) c_m + β_m τ(c_m)),
a_{m+1} = τ(b_m),    (9)
where σ_m and β_m are sequences in (0, 1). The main results are obtained using (9) and demonstrate the efficiency of our proposed algorithm.
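A direct transcription of (9) reads as follows; this is only a sketch under the assumption that σ_m and β_m are supplied as callables returning values in (0, 1), and the helper name z_step is ours.

```python
from typing import Callable

def z_step(tau: Callable[[float], float],
           sigma: Callable[[int], float],
           beta: Callable[[int], float]) -> Callable[[float, int], float]:
    """One step of the Z-iteration (9): a_{m+1} = tau(b_m)."""
    def step(a: float, m: int) -> float:
        c = tau((1 - sigma(m)) * a + sigma(m) * tau(a))   # c_m
        b = tau((1 - beta(m)) * c + beta(m) * tau(c))      # b_m
        return tau(b)                                      # a_{m+1}
    return step

# Example with a simple contraction whose fixed point is 7.
tau = lambda e: (e + 7.0) / 2.0
step = z_step(tau, lambda n: 0.5, lambda n: 0.5)
a, m = 7.5, 0
while abs(a - 7.0) > 1e-15:
    a = step(a, m + 1)
    m += 1
print(m, a)   # number of iterations and the approximated fixed point
```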
Lemma 2. 
Let ℜ be a uniformly convex Banach space and ℵ be a nonempty closed convex subset of ℜ. Suppose τ : ℵ → ℵ is enhanced with the Chatterjea–Suzuki–C condition and F_τ ≠ ∅. If the sequence { a_m } is as defined in (9), then lim_{m→∞} ‖a_m − r_0‖ exists for each r_0 ∈ F_τ.
Proof. 
Let r_0 be an arbitrary element of F_τ. Then, by Proposition 1 (a), we obtain
‖τ(x) − r_0‖ ≤ ‖x − r_0‖ for all x ∈ ℵ.    (10)
Next, using the algorithm (9), we have
‖c_m − r_0‖ = ‖τ((1 − σ_m) a_m + σ_m τ(a_m)) − r_0‖.
Then, making use of (10), we obtain
‖c_m − r_0‖ ≤ ‖(1 − σ_m) a_m + σ_m τ(a_m) − r_0‖
= ‖(1 − σ_m)(a_m − r_0) + σ_m(τ(a_m) − r_0)‖
≤ (1 − σ_m)‖a_m − r_0‖ + σ_m‖τ(a_m) − r_0‖.    (11)
Now, using the Chatterjea–Suzuki–C condition with τ(r_0) = r_0 (the premise (1/2)‖r_0 − τ(r_0)‖ = 0 ≤ ‖r_0 − a_m‖ is trivially satisfied), we have
‖τ(a_m) − r_0‖ = ‖τ(a_m) − τ(r_0)‖ ≤ (1/2)( ‖a_m − τ(r_0)‖ + ‖r_0 − τ(a_m)‖ )
= (1/2)( ‖a_m − r_0‖ + ‖τ(a_m) − r_0‖ ),
which gives
‖τ(a_m) − r_0‖ ≤ ‖a_m − r_0‖.
Using the above inequality in (11), we obtain
‖c_m − r_0‖ ≤ (1 − σ_m)‖a_m − r_0‖ + σ_m‖a_m − r_0‖ = ‖a_m − r_0‖,
that is, ‖c_m − r_0‖ ≤ ‖a_m − r_0‖.    (12)
Next, we have
‖b_m − r_0‖ = ‖τ((1 − β_m) c_m + β_m τ(c_m)) − r_0‖
≤ ‖(1 − β_m)(c_m − r_0) + β_m(τ(c_m) − r_0)‖
≤ (1 − β_m)‖c_m − r_0‖ + β_m‖τ(c_m) − r_0‖.    (13)
By using the Chatterjea–Suzuki–C condition with τ(r_0) = r_0, we obtain
‖τ(c_m) − r_0‖ ≤ (1/2)( ‖c_m − τ(r_0)‖ + ‖r_0 − τ(c_m)‖ ) = (1/2)( ‖c_m − r_0‖ + ‖τ(c_m) − r_0‖ ),
so that ‖τ(c_m) − r_0‖ ≤ ‖c_m − r_0‖.
Next, by combining it with (13), we obtain
‖b_m − r_0‖ ≤ (1 − β_m)‖c_m − r_0‖ + β_m‖c_m − r_0‖ = ‖c_m − r_0‖,
that is, ‖b_m − r_0‖ ≤ ‖c_m − r_0‖.    (14)
Hence, we obtain
‖a_{m+1} − r_0‖ = ‖τ(b_m) − r_0‖ ≤ ‖b_m − r_0‖,    (15)
and using (14), we can write
‖a_{m+1} − r_0‖ ≤ ‖b_m − r_0‖ ≤ ‖c_m − r_0‖.    (16)
Continuing in this way, from (12), (14), and (16), we conclude that
‖a_{m+1} − r_0‖ ≤ ‖b_m − r_0‖ ≤ ‖c_m − r_0‖ ≤ ‖a_m − r_0‖,
which implies that
‖a_{m+1} − r_0‖ ≤ ‖a_m − r_0‖.
This shows that { ‖a_m − r_0‖ } is nonincreasing and bounded below for each r_0 ∈ F_τ. Hence,
lim_{m→∞} ‖a_m − r_0‖ exists. □
Theorem 1. 
Let ℵ, ℜ, τ, F_τ and { a_m } be the same as in Lemma 2. Then F_τ ≠ ∅ if and only if the sequence { a_m } is bounded and lim_{m→∞} ‖a_m − τ(a_m)‖ = 0.
Proof. 
Suppose F_τ ≠ ∅ and let r_0 ∈ F_τ. Then, from Lemma 2, we infer that lim_{m→∞} ‖a_m − r_0‖ exists and { a_m } is bounded. Assume that
lim_{m→∞} ‖a_m − r_0‖ = e,    (17)
and we want to show that
lim_{m→∞} ‖a_m − τ(a_m)‖ = 0.
Therefore, from (12), we have
‖c_m − r_0‖ ≤ ‖a_m − r_0‖,
which implies that
lim sup_{m→∞} ‖c_m − r_0‖ ≤ lim sup_{m→∞} ‖a_m − r_0‖,
and combining it with (17), we obtain
lim sup_{m→∞} ‖c_m − r_0‖ ≤ e.    (18)
Since r_0 ∈ F_τ, using Proposition 1 (a), we obtain
‖τ(a_m) − r_0‖ ≤ ‖a_m − r_0‖,
which implies that
lim sup_{m→∞} ‖τ(a_m) − r_0‖ ≤ e.    (19)
Then, using (16), we have
‖a_{m+1} − r_0‖ ≤ ‖c_m − r_0‖,
and combining it with (17), we obtain
e ≤ lim inf_{m→∞} ‖c_m − r_0‖.    (20)
Next, using (18) and (20), we obtain
lim_{m→∞} ‖c_m − r_0‖ = e.
Moreover,
e = lim_{m→∞} ‖c_m − r_0‖ = lim_{m→∞} ‖τ((1 − σ_m) a_m + σ_m τ(a_m)) − r_0‖
≤ lim inf_{m→∞} ‖(1 − σ_m)(a_m − r_0) + σ_m(τ(a_m) − r_0)‖
≤ lim sup_{m→∞} ‖(1 − σ_m)(a_m − r_0) + σ_m(τ(a_m) − r_0)‖
≤ lim sup_{m→∞} [ (1 − σ_m)‖a_m − r_0‖ + σ_m‖τ(a_m) − r_0‖ ]
≤ lim sup_{m→∞} ‖a_m − r_0‖ = e.
Hence,
lim_{m→∞} ‖(1 − σ_m)(a_m − r_0) + σ_m(τ(a_m) − r_0)‖ = e.    (21)
Then using (17), (19), and (21) along with Lemma 1, we obtain
lim_{m→∞} ‖(a_m − r_0) − (τ(a_m) − r_0)‖ = lim_{m→∞} ‖a_m − τ(a_m)‖ = 0.
Conversely, let { a_m } be bounded with
lim_{m→∞} ‖a_m − τ(a_m)‖ = 0;
we will show that F_τ ≠ ∅. Let r_0 ∈ A(ℵ, { a_m }); then, using Definition 7,
ς(τ(r_0), { a_m }) = lim sup_{m→∞} ‖a_m − τ(r_0)‖,
and using Proposition 1 (c), we obtain
ς(τ(r_0), { a_m }) ≤ lim sup_{m→∞} ( 5‖a_m − τ(a_m)‖ + ‖a_m − r_0‖ )
≤ 5 lim sup_{m→∞} ‖a_m − τ(a_m)‖ + lim sup_{m→∞} ‖a_m − r_0‖
= lim sup_{m→∞} ‖a_m − r_0‖ = ς(r_0, { a_m }).
Hence, τ(r_0) ∈ A(ℵ, { a_m }). Since A(ℵ, { a_m }) is a singleton by Remark 1, we conclude that τ(r_0) = r_0, i.e., r_0 ∈ F_τ, and hence F_τ ≠ ∅. □
Theorem 2. 
Suppose ℵ, F_τ, τ and { a_m } are the same as in Lemma 2, and ℵ is a weakly compact and convex subset of ℜ. Then { a_m } converges weakly to a point of F_τ, provided that ℜ satisfies Opial's condition.
Proof. 
Lemma 2 implies that lim_{m→∞} ‖a_m − r_0‖ exists, and we want to show that { a_m } has a unique weak subsequential limit in F_τ. Since ℵ is weakly compact, let a_0 and a_0′ be weak subsequential limits of subsequences { a_{m_t} } and { a_{m_s} } of { a_m }, respectively. By Theorem 1 and Proposition 1 (d), we have
lim_{t→∞} ‖a_{m_t} − τ(a_{m_t})‖ = 0, which yields a_0 ∈ F_τ,
and
lim_{s→∞} ‖a_{m_s} − τ(a_{m_s})‖ = 0, which yields a_0′ ∈ F_τ.
At this step, we aim to show uniqueness. If a_0 ≠ a_0′, then by Opial's condition, we have
lim_{m→∞} ‖a_m − a_0‖ = lim_{t→∞} ‖a_{m_t} − a_0‖ < lim_{t→∞} ‖a_{m_t} − a_0′‖ = lim_{m→∞} ‖a_m − a_0′‖ = lim_{s→∞} ‖a_{m_s} − a_0′‖ < lim_{s→∞} ‖a_{m_s} − a_0‖ = lim_{m→∞} ‖a_m − a_0‖,
which shows that
lim_{m→∞} ‖a_m − a_0‖ < lim_{m→∞} ‖a_m − a_0‖.
This is a contradiction. Hence, { a_m } converges weakly to a point of F_τ. □
Theorem 3. 
Suppose ℵ, F_τ, τ and { a_m } are the same as in Lemma 2, and ℵ is a compact and convex subset of ℜ. Then { a_m } converges strongly to a point of F_τ.
Proof. 
By Theorem 1, we have
lim_{m→∞} ‖a_m − τ(a_m)‖ = 0,
and since ℵ is compact and { a_m } ⊂ ℵ, the sequence { a_m } has a subsequence { a_{m_t} } converging strongly to some z_0 ∈ ℵ, that is,
lim_{t→∞} ‖a_{m_t} − z_0‖ = 0.
Furthermore, because of Proposition 1 (c), we have
‖a_{m_t} − τ(z_0)‖ ≤ 5‖a_{m_t} − τ(a_{m_t})‖ + ‖a_{m_t} − z_0‖,
which shows that a_{m_t} → τ(z_0) as t → ∞. This implies that τ(z_0) = z_0, i.e., z_0 ∈ F_τ, and lim_{m→∞} ‖a_m − z_0‖ exists by Lemma 2. Hence, { a_m } converges strongly to z_0 ∈ F_τ. □
Lemma 3. 
The sequence { a_m } converges strongly to a point of F_τ whenever lim inf_{m→∞} d(a_m, F_τ) = 0, where ℵ, ℜ, F_τ, τ and { a_m } are assumed to have the same properties as in Lemma 2.
Proof. 
For all r_0 ∈ F_τ, Lemma 2 guarantees the existence of
lim_{m→∞} ‖a_m − r_0‖,
and since ‖a_{m+1} − r_0‖ ≤ ‖a_m − r_0‖ for every r_0 ∈ F_τ, the sequence d(a_m, F_τ) is nonincreasing; together with the assumption, it follows that
lim_{m→∞} d(a_m, F_τ) = 0.
According to Proposition 1 (b), the set F_τ is closed in ℵ; the remaining proof closely follows the proof of ([36], Theorem 2) and is therefore omitted. □
This leads us to suggest another strong convergence theorem that does not require assuming the compactness of the domain. This is an exciting advancement that expands the applicability of the theorem.
Theorem 4. 
Let ℵ, ℜ, F_τ, τ and { a_m } be the same as in Lemma 2. Then { a_m } converges strongly to a point of F_τ whenever τ is a contraction on the uniformly convex Banach space in the sense of Definition 6.
Proof. 
From Theorem 1, we have
lim_{m→∞} ‖a_m − τ(a_m)‖ = 0.
Since τ satisfies Definition 6, ‖a_m − τ(a_m)‖ ≥ ς( d(a_m, F_τ) ), and the properties of ς then imply that
lim inf_{m→∞} d(a_m, F_τ) = 0.
All the assumptions of Lemma 3 are therefore satisfied, so the sequence { a_m } converges strongly to a point of F_τ. This is a significant result that demonstrates the reliability and effectiveness of the method. □

4. Application to Caputo Fractional Differential Equation

Recent decades have seen the extensive study of fractional differential equations (FDEs) due to their interesting and important applications in different areas of science. For example, fractional differential equations have been used in the modeling of complex phenomena such as fractals, anomalous diffusion, and non-local interactions. Applications have also been found in finance, biology, image processing, and other fields. Overall, fractional differential equations have proven to be a powerful tool in the modeling and analysis of complex systems in various fields.
Different types of fractional derivatives are used in the literature according to the model of the problem. We use Caputo-type fractional derivatives generally defined by
D^η φ(t) = (1 / Γ(r − η)) ∫_0^t (t − s)^{r−η−1} φ^{(r)}(s) ds,
where φ(t), t > 0, is a real-valued function, η > 0 is the order of the Caputo-type fractional derivative, r is an integer with η ∈ (r − 1, r), and D^η φ(t) = ^C D_{0,t}^η φ(t). New fixed-point results are obtained using non-linear operators in [38], and functional-integral equations are solved in [39,40]. In general, many such problems are challenging to solve using analytical techniques; however, it is still possible to solve them by finding an approximate value through alternative methods. Some researchers have used fixed-point techniques for nonexpansive operators to solve fractional differential equations; see, for example, [41].
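For intuition, the Caputo derivative above can also be evaluated numerically. In the following sketch (our own illustration, not part of the paper), the weak endpoint singularity of the kernel is removed by the substitution v = (t − s)^{r−η} before applying the trapezoid rule, and the result is checked against the known closed form D^η t² = 2 t^{2−η} / Γ(3 − η).

```python
import numpy as np
from math import gamma

def caputo_derivative(phi_r, t, eta, r, n=4000):
    """Approximate the Caputo derivative of order eta at time t.

    phi_r : the r-th ordinary derivative of phi (vectorized callable)
    r     : the integer with eta in (r - 1, r)
    The substitution v = (t - s)**(r - eta) turns the weakly singular
    integral into a smooth one, so the trapezoid rule applies directly.
    """
    v = np.linspace(0.0, t ** (r - eta), n)
    s = t - v ** (1.0 / (r - eta))       # back-substituted nodes in [0, t]
    integrand = phi_r(s) / (r - eta)     # Jacobian of the substitution
    return np.trapz(integrand, v) / gamma(r - eta)

# Check with phi(t) = t**2, phi''(t) = 2, eta = 1.5 (so r = 2):
eta, t = 1.5, 0.8
approx = caputo_derivative(lambda s: 2.0 * np.ones_like(s), t, eta, r=2)
exact = 2.0 * t ** (2 - eta) / gamma(3 - eta)
print(approx, exact)   # the two values agree to several digits
```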
Let D^η represent the Caputo fractional derivative of order η and let ω : [0, 1] × ℝ → ℝ. We apply the Z-iterative algorithm (9) under the Chatterjea–Suzuki–C condition to the following fractional differential equation:
D^η f(l) + ω(l, f(l)) = 0, 0 ≤ l ≤ 1, 1 < η < 2,
f(0) = f(1) = 0.    (22)
Let ℵ = C[0, 1] be the collection of continuous functions mapping the interval [0, 1] to ℝ, equipped with the usual maximum norm. The Green's function associated with the fractional differential Equation (22) is defined by
A(l, s) = (1 / Γ(η)) [ l(1 − s)^{η−1} − (l − s)^{η−1} ],  0 ≤ s ≤ l ≤ 1,
A(l, s) = (1 / Γ(η)) l(1 − s)^{η−1},                      0 ≤ l ≤ s ≤ 1.
Now we proceed to formulate and prove the following theorem.
Theorem 5. 
Consider an operator τ : C[0, 1] → C[0, 1] defined by
τ(f(l)) = ∫_0^1 A(l, s) ω(s, f(s)) ds, f ∈ C[0, 1].
If the following condition holds:
|ω(s, f(s)) − ω(s, g(s))| ≤ (1/2)( |f(s) − τ(g(s))| + |g(s) − τ(f(s))| ),
then the Z-iterative algorithm (9) converges to a solution of the fractional differential Equation (22) whenever lim inf_{n→∞} d(a_n, S) = 0, where S denotes the set of solutions of (22).
Proof. 
An element f of ℵ is a solution of the fractional differential Equation (22) if and only if it is a solution of the integral equation [29]
f(l) = ∫_0^1 A(l, s) ω(s, f(s)) ds.
Now, for any f, g ∈ ℵ and 0 ≤ l ≤ 1, it follows that
|τ(f(l)) − τ(g(l))| = | ∫_0^1 A(l, s) ω(s, f(s)) ds − ∫_0^1 A(l, s) ω(s, g(s)) ds |
≤ ∫_0^1 A(l, s) |ω(s, f(s)) − ω(s, g(s))| ds
≤ ∫_0^1 A(l, s) ( (1/2)|f(s) − τ(g(s))| + (1/2)|g(s) − τ(f(s))| ) ds
≤ ( (1/2)‖f − τ(g)‖ + (1/2)‖g − τ(f)‖ ) ∫_0^1 A(l, s) ds
≤ (1/2)( ‖f − τ(g)‖ + ‖g − τ(f)‖ ).
Consequently, we obtain
‖τ(f) − τ(g)‖ ≤ (1/2)( ‖f − τ(g)‖ + ‖g − τ(f)‖ ).
Thus, τ satisfies the Chatterjea–Suzuki–C condition. By Lemma 3, the sequence produced by (9) converges to a fixed point of the mapping τ, and this fixed point is a solution of the given fractional differential equation. □
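To illustrate Theorem 5 numerically, one can discretize the integral operator τ with the Green's function A(l, s) and run the Z-iteration (9) on the resulting grid. The sketch below is our own illustration under assumed choices (η = 1.5, the nonlinearity ω(l, f) = cos(l)/(1 + |f|), constant parameters σ_m = β_m = 0.5, and a trapezoid quadrature); it is not the simulation reported in the paper.

```python
import numpy as np
from math import gamma

def greens_matrix(grid, eta):
    """Green's function A(l, s) of (22) evaluated on grid x grid."""
    L, S = np.meshgrid(grid, grid, indexing="ij")
    A = L * (1.0 - S) ** (eta - 1.0) / gamma(eta)
    lower = S <= L
    A[lower] -= (L[lower] - S[lower]) ** (eta - 1.0) / gamma(eta)
    return A

def make_tau(omega, eta, n=201):
    s = np.linspace(0.0, 1.0, n)
    A = greens_matrix(s, eta)
    w = np.full(n, 1.0 / (n - 1))        # trapezoid quadrature weights
    w[0] = w[-1] = 0.5 / (n - 1)
    def tau(f):
        # (tau f)(l) = integral_0^1 A(l, s) omega(s, f(s)) ds
        return A @ (w * omega(s, f))
    return s, tau

def z_iteration(tau, f0, sigma=0.5, beta=0.5, tol=1e-12, max_iter=500):
    f = f0
    for m in range(1, max_iter + 1):
        c = tau((1 - sigma) * f + sigma * tau(f))
        b = tau((1 - beta) * c + beta * tau(c))
        f_next = tau(b)
        if np.max(np.abs(f_next - f)) < tol:
            return f_next, m
        f = f_next
    return f, max_iter

omega = lambda l, f: np.cos(l) / (1.0 + np.abs(f))   # assumed test nonlinearity
s, tau = make_tau(omega, eta=1.5)
f_star, iters = z_iteration(tau, np.zeros_like(s))
print(iters, float(np.max(f_star)))   # iterations used and max of the approximate solution
```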

5. Numerical Simulation

In this section, we show, with the help of a numerical example, that the convergence speed of the Z-iterative algorithm is better than that of the modern algorithms discussed in [12,13,14,29].
Example 1. 
Let ℵ = [7, 13] be endowed with the usual norm |·| and let τ : ℵ → ℵ be the function defined by
τ(e) = 7 if e = 13, and τ(e) = (e + 7)/2 elsewhere;
we will show that the following conditions hold:
  • F_τ ≠ ∅.
  • τ does not satisfy condition (C).
  • τ satisfies the Chatterjea–Suzuki–C condition.
Proof. 
We discuss the above conditions one by one as follows:
  • Since F_τ = { 7 }, τ possesses a single fixed point, and F_τ ≠ ∅.
  • If we take e = 11.5 and y = 13, then (1/2)|e − τ(e)| = 1.125 ≤ 1.5 = |e − y|, while |τ(e) − τ(y)| = |9.25 − 7| = 2.25 > 1.5 = |e − y|; hence τ does not satisfy condition (C).
  • To prove the Chatterjea–Suzuki–C condition, we have 4 cases:
  • (Case-1) If e = 13 = y, then |τ(e) − τ(y)| = 0. Hence,
    (1/2)( |e − τ(y)| + |y − τ(e)| ) ≥ 0 = |τ(e) − τ(y)|.
  • (Case-2) If 7 ≤ e, y < 13, then |τ(e) − τ(y)| = |e − y|/2.
Hence,
(1/2)( |e − τ(y)| + |y − τ(e)| ) = (1/2)|e − (y + 7)/2| + (1/2)|y − (e + 7)/2|
≥ (1/2)| ( e − (y + 7)/2 ) − ( y − (e + 7)/2 ) | = (1/2)|3(e − y)/2| = 3|e − y|/4 ≥ |e − y|/2 = |τ(e) − τ(y)|.
  • (Case-3) If e = 13 and 7 ≤ y < 13, then |τ(e) − τ(y)| = (y − 7)/2.
Hence,
(1/2)( |e − τ(y)| + |y − τ(e)| ) = (1/2)|e − (y + 7)/2| + (y − 7)/2 ≥ (y − 7)/2 = |τ(e) − τ(y)|.
  • (Case-4) If y = 13 and 7 ≤ e < 13, then |τ(e) − τ(y)| = (e − 7)/2.
Hence,
(1/2)( |e − τ(y)| + |y − τ(e)| ) = (e − 7)/2 + (1/2)|y − (e + 7)/2| ≥ (e − 7)/2 = |τ(e) − τ(y)|.
Hence, from the above cases (1–4), part (3) of Example 1 is proved. □
We choose σ_m = 1 − 6n/(7n + 5)², β_m = 1 − 1/(n + 7)², and γ_m = 1 − 2n/(9n + 8)² with the initial point 7.5 and generate Table 1 by performing 30 iterations. We observe that the Agarwal et al. iterative algorithm [35] converges at the 29th iteration, while the Abbas and Nazir [13] and M [22] iterative algorithms need 19 and 18 iterations, respectively. It is significant to remark that our proposed new algorithm converges to the solution at the 11th iteration, which is a noticeable improvement. Since the M-iterative algorithm has already been proven efficient over the others [22,29], the main purpose now is to compare the Z-iterative algorithm with the M-iterative algorithm. We continue in this way by changing the choice of sequences and initial guesses to obtain the numerical values in Table 2 and Table 3. Figure 1 and Figure 2 validate our results.
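The comparison behind Table 1 can be re-run in a few lines. The sketch below is a hypothetical reproduction: the mapping τ and the initial guess 7.5 are those of Example 1, but the parameter sequences are simple illustrative stand-ins rather than the σ_m, β_m quoted above, so the iteration counts will differ slightly from those in Table 1.

```python
def tau(e):
    # Mapping of Example 1 on [7, 13]; its only fixed point is 7.
    return 7.0 if e == 13 else (e + 7.0) / 2.0

sigma = lambda n: n / (n + 1.0)   # assumed sequence in (0, 1)
beta = lambda n: 0.5              # assumed sequence in (0, 1)

def count_iterations(step, a1=7.5, target=7.0, tol=1e-15, max_iter=100):
    a, n = a1, 0
    while abs(a - target) > tol and n < max_iter:
        n += 1
        a = step(a, n)
    return n

def z_step(a, n):        # Z-iteration (9)
    c = tau((1 - sigma(n)) * a + sigma(n) * tau(a))
    b = tau((1 - beta(n)) * c + beta(n) * tau(c))
    return tau(b)

def m_step(a, n):        # M-iteration [22]
    c = (1 - sigma(n)) * a + sigma(n) * tau(a)
    return tau(tau(c))

print("Z-iteration:", count_iterations(z_step), "iterations")
print("M-iteration:", count_iterations(m_step), "iterations")
```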
By considering the sequences σ_m = 1 − n/(n + 5)², β_m = n/(6n + 5)ⁿ, and γ_m = 1 − n/(n³ + 6)⁶, a significant improvement can be observed: the Agarwal et al. iterative algorithm needs 52 iterations, compared with 13 for our proposed algorithm. It is interesting to note that, by changing the initial guess from 9 to 9.5, the number of iterations for each of the listed algorithms changes only slightly; see Figure 1 and Table 2. Furthermore, by considering the sequences σ_m = (n² + 2)/(n² + n + 3), β_m = (n + 3)/(2n + 6), and γ_m = (n + 2)/(n² + 3), a significant improvement can again be observed: the Agarwal et al. iterative algorithm needs 54 iterations, compared with 11 for our proposed algorithm. It is interesting to note that, by changing the initial guess from 10 to 10.5, miscellaneous variation in the number of iterations of the listed algorithms can be seen in Figure 2 and Table 3.
The graphical comparison between the Z-iterative algorithm and the other existing algorithms shows the fast convergence of the Z-iterative algorithm, as demonstrated in Figure 1 and Figure 2. This faster convergence is a significant advantage of the Z-iterative algorithm, as it allows us to reach the desired solution more quickly and efficiently.

6. Conclusions and Further Discussions

The Z-iterative algorithm for operators enhanced with the Chatterjea–Suzuki–C condition is examined in this study. We show that, under suitable conditions on the operator or the domain, this scheme converges both weakly and strongly to a fixed point of a mapping equipped with the Chatterjea–Suzuki–C condition. Furthermore, we use operators enhanced with the Chatterjea–Suzuki–C condition to solve a fractional differential equation. In addition, some tables and graphs are presented to demonstrate the Z-iterative scheme's higher accuracy in comparison with other existing schemes [12,13,29]. Let us discuss the advantages of our proposed iterative scheme:
  • Compared to other schemes in the literature, our approach demonstrates superior convergence to a fixed point. This means that it reaches a stable solution more efficiently and effectively.
  • Our proposed iterative scheme stands out by utilizing two scalar sequences σ_m, β_m instead of three. This unique approach leads to better convergence in comparison with various other iterative techniques described in the literature.
  • The proposed iterative scheme has been proven to be stable when it comes to initial points and sequences of scalars. This stability is demonstrated by the data presented in tabular and graphical forms, which clearly shows the consistent and reliable performance of the scheme.
    In light of the above discussion, we further compared it with the Mann Iterative algorithm ([9]) in Figure 3.
Similarly, we compare it with the Ishikawa iterative algorithm [11] in Figure 4.
Likewise, the efficiency of our proposed algorithm is evident in comparison with the existing, well-known algorithms (Figure 5, Figure 6 and Figure 7).
As a result, a decision must be made between various iteration methodologies, with crucial factors taken into account. For example, simplicity and convergence speed are the two most important elements in determining whether an iteration strategy is more effective than others. In such cases, the following question unavoidably arises: which of these iteration strategies accelerates convergence? This article demonstrates that our proposed iteration scheme converges faster than the present modern iteration schemes. In future work, we may enhance the results using more generalized (C)-conditions [42], which may help to improve the convergence rate of our proposed iteration.

Author Contributions

Each author contributed equally to writing and finalizing the article. Conceptualization, R.S. and W.A.; methodology, W.A. and A.T.; software, N.A.; validation, N.A. and A.T.; formal analysis, N.A. and A.T.; investigation, R.S. and W.A.; resources, A.T.; writing—original draft preparation, W.A.; writing—review and editing, A.T.; visualization, A.T.; supervision, R.S.; project administration, A.T. and W.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were generated for this study.

Acknowledgments

The authors extend their appreciation to the Deanship of Postgraduate Studies and Scientific Research at Majmaah University for funding this research work through project number (PGR-2024-1062).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tassaddiq, A.; Ahmed, W.; Zaman, S.; Raza, A.; Islam, U.; Nantomah, K. A Modified Iterative Approach for Fixed Point Problem in Hadamard Spaces. J. Funct. Spaces 2024, 1, 5583824. [Google Scholar] [CrossRef]
  2. Tassaddiq, A.; Kanwal, S.; Perveen, S.; Srivastava, R. Fixed points of single-valued and multi-valued mappings in sb-metric spaces. J. Inequalities Appl. 2022, 1, 85. [Google Scholar] [CrossRef]
  3. Khachay, M.Y.; Ogorodnikov, Y.Y. Efficient approximation of the capacitated vehicle routing problem in a metric space of an arbitrary fixed doubling dimension. Dokl. Math. 2020, 102, 324–329. [Google Scholar] [CrossRef]
  4. Khamsi, M.A.; Misane, D. Fixed point theorems in logic programming. Ann. Math. Artif. Intell. 1997, 21, 231–243. [Google Scholar] [CrossRef]
  5. Camelo, M.; Papadimitriou, D.; Fàbrega, L.; Vilà, P. Geometric routing with word-metric spaces. IEEE Commun. Lett. 2014, 18, 2125–2128. [Google Scholar] [CrossRef]
  6. Tassaddiq, A. General escape criteria for the generation of fractals in extended Jungck–Noor orbit. Math. Comput. Simul. 2022, 196, 1–14. [Google Scholar] [CrossRef]
  7. Tassaddiq, A.; Kalsoom, A.; Rashid, M.; Sehr, K.; Almutairi, D.K. Generating Geometric Patterns Using Complex Polynomials and Iterative Schemes. Axioms 2024, 13, 204. [Google Scholar] [CrossRef]
  8. Tassaddiq, A.; Tanveer, M.; Azhar, M.; Lakhani, F.; Nazeer, W.; Afzal, Z. Escape criterion for generating fractals using Picard–Thakur hybrid iteration. Alex. Eng. J. 2024, 100, 331–339. [Google Scholar] [CrossRef]
  9. Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  10. Picard, É. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Mathématiques Pures Appliquées 1890, 6, 145–210. [Google Scholar]
  11. Ishikawa, S. Fixed points by a new iteration method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  12. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Iterative construction of fixed points of nearly asymptotically nonexpansive mappings. J. Nonlinear Convex Anal. 2007, 1, 61. [Google Scholar]
  13. Abbas, M.; Nazir, T. Some new faster iteration process applied to constrained minimization and feasibility problems. Mat. Vestn. 2014, 66, 223–234. [Google Scholar]
  14. Thakur, B.S.; Thakur, D.; Postolache, M. A new iterative scheme for numerical reckoning fixed points of Suzuki’s generalized nonexpansive mappings. Appl. Math. Comput. 2016, 275, 147–155. [Google Scholar] [CrossRef]
  15. Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fundam. Math. 1922, 3, 133–181. [Google Scholar] [CrossRef]
  16. Konwar, N.; Srivastava, R.; Debnath, P.; Srivastava, H.M. Some new results for a class of multivalued interpolative Kannan-type contractions. Axioms 2022, 11, 76. [Google Scholar] [CrossRef]
  17. Debnath, P.; Srivastava, H.M. New extensions of Kannan’s and Reich’s fixed point theorems for multivalued maps using Wardowski’s technique with application to integral equations. Symmetry 2020, 12, 1090. [Google Scholar] [CrossRef]
  18. Debnath, P.; Mitrović, Z.D.; Srivastava, H.M. Fixed points of some asymptotically regular multivalued mappings satisfying a Kannan-type condition. Axioms 2021, 10, 24. [Google Scholar] [CrossRef]
  19. Suzuki, T. Fixed point theorems and convergence theorems for some generalized nonexpansive mappings. J. Math. Anal. Appl. 2008, 340, 1088–1095. [Google Scholar] [CrossRef]
  20. Ahmad, J.; Ullah, K.; George, R. Numerical algorithms for solutions of nonlinear problems in some distance spaces. AIMS Math. 2023, 8, 8460–8477. [Google Scholar] [CrossRef]
  21. Ullah, K.; Ahmad, J.; Mlaiki, N. On Noor iterative process for multi-valued nonexpansive mappings with application. Int. J. Math. Anal. 2019, 13, 293–307. [Google Scholar] [CrossRef]
  22. Ullah, K.; Arshad, M. Numerical reckoning fixed points for Suzuki’s generalized nonexpansive mappings via new iteration process. Filomat 2018, 32, 187–196. [Google Scholar] [CrossRef]
  23. Khatoon, S.; Uddin, I. Convergence analysis of modified Abbas iteration process for two G-nonexpansive mappings. Rend. Circ. Mat. Palermo Ser. 2 2021, 70, 31–44. [Google Scholar] [CrossRef]
  24. Wairojjana, N.; Pakkaranang, N.; Pholasa, N. Strong convergence inertial projection algorithm with self-adaptive step size rule for pseudomonotone variational inequalities in Hilbert spaces. Demonstr. Math. 2021, 54, 110–128. [Google Scholar] [CrossRef]
  25. Hammad, H.A.; Rehman, H.U.; Zayed, M. Applying faster algorithm for obtaining convergence, stability, and data dependence results with application to functional-integral equations. AIMS Math. 2022, 7, 19026–19056. [Google Scholar] [CrossRef]
  26. Hammad, H.A.; Rehman, H.U.; De la Sen, M. A novel four-step iterative scheme for approximating the fixed point with a supportive application. Inf. Sci. Lett. 2021, 10, 333–339. [Google Scholar]
  27. Hammad, H.A.; Rehman, H.U.; De la Sen, M. Shrinking projection methods for accelerating relaxed inertial Tseng-type algorithm with applications. Math. Probl. Eng. 2020, 2020, 7487383. [Google Scholar] [CrossRef]
  28. Jia, Y.; Xu, M.; Lin, Y.; Jiang, D. An efficient technique based on least-squares method for fractional integro-differential equations. Alex. Eng. J. 2023, 64, 97–105. [Google Scholar] [CrossRef]
  29. Ahmad, J.; Ullah, K.; Hammad, H.A.; George, R.A. Solution of a fractional differential equation via novel fixed-point approaches in Banach spaces. AIMS Math. 2023, 8, 12657–12670. [Google Scholar] [CrossRef]
  30. Browder, F.E. Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 1965, 54, 1041–1044. [Google Scholar] [CrossRef]
  31. Göhde, D. Zum prinzip der kontraktiven Abbildung. Math. Nach. 1965, 30, 251–258. [Google Scholar] [CrossRef]
  32. Senter, H.F.; Dotson, W.G. Approximating fixed points of nonexpansive mappings. Proc. Am. Math. Soc. 1974, 44, 375–380. [Google Scholar] [CrossRef]
  33. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Amer. Math. Soc. 1967, 73, 591–597. [Google Scholar] [CrossRef]
  34. Clarkson, J.A. Uniformly convex spaces. Trans. Am. Math. Soc. 1936, 40, 396–414. [Google Scholar] [CrossRef]
  35. Agarwal, R.P.; O’Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: New York, NY, USA, 2009. [Google Scholar]
  36. Takahashi, W. Nonlinear functional analysis. In Fixed Point Theory and its Applications; Yokohama Publishers: Yokohama, Japan, 2000. [Google Scholar]
  37. Schu, J. Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Aust. Math. Soc. 1991, 43, 153–159. [Google Scholar] [CrossRef]
  38. Srivastava, H.M.; Ali, A.; Hussain, A.; Arshad, M.U.; Al-Sulami, H.A. A certain class of θL-type non-linear operators and some related fixed point results. J. Nonlinear Var. Anal. 2022, 6, 69–87. [Google Scholar]
  39. Srivastava, H.M.; Deep, A.; Abbas, S.; Hazarika, B. Solvability for a class of generalized functional-integral equations by means of Petryshyn’s fixed point theorem. J. Nonlinear Convex Anal. 2021, 22, 2715–2737. [Google Scholar]
  40. Srivastava, H.M.; Shehata, A.; Moustafa, S.I. Some fixed point theorems for F(ψ,φ)-contractions and their application to fractional differential equations. Russ. J. Math. Phys. 2020, 27, 385–398. [Google Scholar] [CrossRef]
  41. Hammad, H.A.; Zayed, M. Solving a system of differential equations with infinite delay by using tripled fixed point techniques on graphs. Symmetry 2022, 14, 1388. [Google Scholar] [CrossRef]
  42. Karapınar, E.; Taş, K. Generalized (C)-conditions and related fixed point theorems. Comput. Math. Appl. 2011, 61, 3370–3380. [Google Scholar] [CrossRef]
Figure 1. Comparison of the Z-iterative algorithm with other iterative algorithms with initial points 9 and 9.5.
Figure 2. Comparison of the Z-iterative algorithm with other iterative algorithms with initial points 10 and 10.5.
Figure 3. Comparison between the Z-iterative algorithm (ZIA) and the Mann iterative algorithm [9].
Figure 4. Comparison between the Z-iterative algorithm (ZIA) and the Ishikawa iterative algorithm [11].
Figure 5. Further comparison between the Z-iterative algorithm (ZIA) and the Agarwal iterative algorithm.
Figure 6. Further comparison between the Z-iterative algorithm (ZIA) and the Abbas iterative algorithm.
Figure 7. Further comparison between the Z-iterative algorithm (ZIA) and the M-iterative algorithm.
Table 1. Numerical outcomes of different iterative algorithms with an initial guess of 7.5.

n | Z-Iterative Algorithm | M-Iteration [22] | Abbas and Nazir [13] | Agarwal et al. [35]
1 | 7.500000000000000 | 7.500000000000000 | 7.500000000000000 | 7.500000000000000
2 | 7.015655895074208 | 7.062501507040895 | 7.063055266033983 | 7.125247148819911
3 | 7.000489920215704 | 7.007812726243000 | 7.007939333106386 | 7.031354890441289
4 | 7.000015325339965 | 7.000976592259887 | 7.000998266519663 | 7.007846573196534
5 | 7.000000479276985 | 7.000122074107349 | 7.000125392389422 | 7.001963118312863
6 | 7.000000014986078 | 7.000015259267889 | 7.000015738891011 | 7.000491063737875
7 | 7.000000000468528 | 7.000001907408786 | 7.000001974395973 | 7.000122821832627
8 | 7.000000000014647 | 7.000000238426120 | 7.000000247574630 | 7.000030716651000
9 | 7.000000000000458 | 7.000000029803267 | 7.000000031033345 | 7.000007681438493
10 | 7.000000000000014 | 7.000000003725408 | 7.000000003888928 | 7.000001920828534
11 | 7.000000000000000 | 7.000000000465675 | 7.000000000487228 | 7.000000480304887
12 | 7.000000000000000 | 7.000000000058209 | 7.000000000061030 | 7.000000120096814
13 | 7.000000000000000 | 7.000000000007276 | 7.000000000007643 | 7.000000030028581
14 | 7.000000000000000 | 7.000000000000909 | 7.000000000000957 | 7.000000007508084
15 | 7.000000000000000 | 7.000000000000114 | 7.000000000000120 | 7.000000001877224
16 | 7.000000000000000 | 7.000000000000014 | 7.000000000000015 | 7.000000000469351
17 | 7.000000000000000 | 7.000000000000002 | 7.000000000000002 | 7.000000000117347
18 | 7.000000000000000 | 7.000000000000000 | 7.000000000000001 | 7.000000000029339
19 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000007335
21 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000001833
22 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000458
23 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000115
25 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000001833
26 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000028
27 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000007
28 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000002
29 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000
30 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000 | 7.000000000000000
Table 2. σ_m = 1 − n/(n + 5)², β_m = n/(6n + 5)ⁿ, and γ_m = 1 − n/(n³ + 6)⁶.

Algorithms | Number of Iterations (Initial Point 9) | Number of Iterations (Initial Point 9.5)
Agarwal et al. [35] | 52 | 53
Abbas and Nazir [13] | 27 | 26
Thakur et al. [14] | 26 | 24
M-Iterative Algorithm [22] | 17 | 16
Z-Iterative Algorithm | 13 | 14
Table 3. σ_m = (n² + 2)/(n² + n + 3), β_m = (n + 3)/(2n + 6), and γ_m = (n + 2)/(n² + 3).

Algorithms | Number of Iterations (Initial Point 10) | Number of Iterations (Initial Point 10.5)
Agarwal et al. [35] | 54 | 53
Abbas and Nazir [13] | 22 | 23
Thakur et al. [14] | 21 | 22
M-Iterative Algorithm [22] | 16 | 17
Z-Iterative Algorithm | 11 | 12
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
