Article

Four-Step T-Stable Generalized Iterative Technique with Improved Convergence and Various Applications

1 School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Sector H-12, Islamabad 44000, Pakistan
2 School of Natural Sciences, National University of Sciences and Technology, Sector H-12, Islamabad 44000, Pakistan
* Author to whom correspondence should be addressed.
Axioms 2025, 14(1), 71; https://doi.org/10.3390/axioms14010071
Submission received: 31 October 2024 / Revised: 3 December 2024 / Accepted: 11 December 2024 / Published: 20 January 2025
(This article belongs to the Special Issue Advances in Fixed Point Theory with Applications)

Abstract

This research presents a new iterative technique for Garcia-Falset mappings that outperforms previous iterative methods for contraction mappings. We illustrate this fact through comparison and present the findings graphically. The research also investigates the convergence of the new iteration in uniformly convex Banach spaces and explores its stability. To further support our findings, we apply the scheme to a BV problem and to a delay DE. Finally, we propose a design of an implicit neural network that can be considered an extension of a traditional feedforward network.

1. Introduction

The theory of fixed points (FP) is a discipline that studies the criteria for the existence and uniqueness of FPs for specific mappings in abstract spaces. This theory, built on the basis of the Banach contraction principle (BCP), demonstrates that the FP of a contraction mapping is unique in a complete metric space. In the years that followed, scholars expanded the BCP to encompass multiple abstract spaces and developed additional types of contraction mappings. Initially, the notion of fixed points focused solely on locating them. However, in many cases it is impossible to determine fixed points using direct approaches, particularly for nonlinear mappings. As a result, iterative approximations are offered as an alternative method for estimating the fixed point. Many issues in applied physics and engineering are too complex to solve with traditional analytical approaches [1,2,3,4,5]. In such instances, FP theory proposes several alternate strategies for obtaining the desired results. To begin, we convert the problem to an FP problem, ensuring that the FP set of one problem is equivalent to the solution set of the other. This establishes that the existence of an FP for an equation implies the existence of a solution for it. Let $\mathbb{N}$ and $\mathbb{R}$ denote the sets of natural and real numbers, respectively. Consider $C$ to be a closed convex subset of a uniformly convex Banach space (UCBS). $T : C \to C$ represents a contraction (resp., nonexpansive) mapping if there is a constant $H \in [0, 1)$ (resp., $H = 1$) that satisfies:
$$\|Tx - Ty\| \le H\|x - y\|$$
for all $x, y \in C$. The set of FPs of $T$ is denoted by $F(T)$.
In 1965, the authors of [6,7,8] independently developed FP theorems for nonexpansive mappings in a UCBS. A Banach space $S$ is uniformly convex if for any $\epsilon > 0$ there exists $\delta > 0$ such that $\left\|\frac{x + y}{2}\right\| \le 1 - \delta$ holds whenever $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| \ge \epsilon$. Soon after, Goebel [9] established a basic proof of the Kirk–Browder–Göhde Theorem. Several writers have explained that nonexpansive mappings occur spontaneously while studying nonlinear problems in various distance space topologies. Thus, it is entirely reasonable to investigate extensions of these mappings.
BCP ensures the existence and uniqueness of an FP for every contraction $T$ on a Banach space. A similar claim for nonexpansive mappings is no longer accurate.
Additionally, even if a nonexpansive $T$ has a unique FP, the Picard iteration $x_n = T^n x_0$ may not be useful for finding it.
Thanks to these aspects, nonexpansive mappings have been a significant research topic in nonlinear analysis. In 2008, Suzuki [10] extended nonexpansive mappings by replacing the defining inequality with a weaker one. A self-map $T : C \to C$ is said to satisfy condition (C) if $\|Tx - Ty\| \le \|x - y\|$ holds for each $x, y \in C$ whenever:
$$\frac{1}{2}\|x - Tx\| \le \|x - y\|.$$
It is easy to see that a nonexpansive mapping satisfies condition (C), but [10] demonstrated with an example that the converse does not generally hold. Thus, Suzuki mappings form a broader class than nonexpansive mappings. Along similar lines, Garcia-Falset et al. [11] established a new class that is more general than the class of Suzuki (C) mappings and can be defined as follows:
$T : C \to C$ is a Garcia-Falset map (or a map $T$ with condition (E)) if there is $\mu \ge 1$ such that, for all $x, y \in C$:
$$\|x - Ty\| \le \mu\|x - Tx\| + \|x - y\|.$$
It is trivial to verify that every Suzuki mapping $T$ satisfies condition (E) with $\mu = 3$. We now present an example of a map $T : C \to C$ with condition (E) and show that it is not nonexpansive.
Example 1.
Consider $C = [2, 7]$; a self-map $T : C \to C$ is defined by:
$$Tx = \begin{cases} \dfrac{2x + 5}{3}, & \text{if } x \in C_1 = [2, 7), \\[4pt] 5, & \text{if } x \in C_2 = \{7\}. \end{cases}$$
To verify that $T$ satisfies condition (E), we need to check
$$\|x - Ty\| \le \mu\|x - Tx\| + \|x - y\|,$$
for some $\mu \ge 1$ and all $x, y \in C$.
Fixing the value of $\mu = 4$, we discuss the following cases. $(M_1)$: Select $x, y \in C_2$; then $Tx = 5 = Ty$. We have:
$$\|x - Ty\| = |x - Ty| = |x - 5| = |x - Tx| \le 4|x - Tx| + |x - y| = 4\|x - Tx\| + \|x - y\|.$$
$(M_2)$: Choose any $x, y \in C_1$; then $Tx = \frac{2x+5}{3}$ and $Ty = \frac{2y+5}{3}$. We have:
$$\|x - Ty\| = |x - Ty| \le |x - Tx| + |Tx - Ty| = |x - Tx| + \left|\tfrac{2x}{3} - \tfrac{2y}{3}\right| = |x - Tx| + \tfrac{2}{3}|x - y| \le |x - Tx| + |x - y| \le 4|x - Tx| + |x - y| = 4\|x - Tx\| + \|x - y\|.$$
$(M_3)$: Choose any $x \in C_1$ and $y \in C_2$; then $Tx = \frac{2x+5}{3}$ and $Ty = 5$. We have:
$$\|x - Ty\| = |x - Ty| = |x - 5| = 3\left|\tfrac{x - 5}{3}\right| = 3\left|x - \tfrac{2x+5}{3}\right| = 3|x - Tx| \le 4|x - Tx| \le 4|x - Tx| + |x - y| = 4\|x - Tx\| + \|x - y\|.$$
$(M_4)$: Choose any $x \in C_2$ and $y \in C_1$; then $Tx = 5$ and $Ty = \frac{2y+5}{3}$. We have:
$$\|x - Ty\| = |x - Ty| = \left|x - \tfrac{2y+5}{3}\right| = \left|\tfrac{3x - 2y - 5}{3}\right| = \left|\tfrac{2x + x - 2y - 5}{3}\right| \le \left|\tfrac{2(x - y)}{3}\right| + \left|\tfrac{x - 5}{3}\right| = \tfrac{2}{3}|x - y| + \tfrac{1}{3}|x - 5| \le |x - Tx| + |x - y| \le 4|x - Tx| + |x - y| = 4\|x - Tx\| + \|x - y\|.$$
In the above, we proved that $T$ satisfies condition (E) in each case. Choosing $x = 6.5$ and $y = 7$, with the fixed point $p = 5$, it can easily be seen that $1 = \|Tx - Ty\| > \|x - y\| = 0.5$ but $1 = \|Tx - Tp\| < \|x - p\| = 1.5$. Hence, $T : C \to C$ is a Garcia-Falset mapping that is not nonexpansive.
Remark 1.
Not every Garcia-Falset mapping is a nonexpansive mapping.
Soon after this discovery, the approximation of FPs of these mappings under the Ishikawa iteration in Busemann spaces was first studied by Bagherboum [12].

Iterative Techniques

Iterative techniques can solve a variety of issues, including minimization, equilibrium, and viscosity approximation in various domains [13,14,15,16]. Picard [17] proposed an iterative scheme for approximation of the FP in 1890 as follows:
$$x_1 = x \in C, \qquad x_{n+1} = T x_n, \quad n \in \mathbb{N}.$$
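For readers who prefer code, the following is a minimal Python sketch of the Picard iteration; the mapping `T`, the starting point and the tolerance are illustrative choices and are not taken from the paper.

```python
def picard(T, x0, tol=1e-10, max_iter=1000):
    """Picard iteration: repeatedly apply T until successive iterates are close."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:   # stop once the update becomes negligible
            return x_next
        x = x_next
    return x

# Illustrative contraction with fixed point sqrt(2)
print(picard(lambda x: (x + 2 / x) / 2, x0=1.0))   # ~1.41421356
```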
Some well-known iteration processes that are often used to approximate FPs of nonexpansive mappings include Picard [17], Mann [18], Ishikawa [19], Noor [20], Abbas and Nazir [21] and Thakur et al. [22] iterations. A Thakur iteration [22] is as follows:
$$\begin{aligned} x_{n+1} &= (1 - M_n) T z_n + M_n T y_n, \\ y_n &= T((1 - N_n) z_n + N_n T z_n), \\ z_n &= T((1 - O_n) x_n + O_n T x_n), \quad n \in \mathbb{N}, \end{aligned}$$
where { M n } , { N n } and { O n } are in (0, 1). They established that, for contractive mappings, it converges faster than iterations of [17,18,19,20,21].
The Asghar Rahimi iteration [23] is as follows:
$$\begin{aligned} x_{n+1} &= (1 - M_n) T y_n + M_n T z_n, \\ y_n &= T((1 - N_n) x_n + N_n z_n), \\ z_n &= T((1 - O_n) x_n + O_n T x_n), \quad n \in \mathbb{N}, \end{aligned}$$
where $\{M_n\}$, $\{N_n\}$ and $\{O_n\}$ are in (0, 1). This process was also proven by the authors to be faster than those of [17,18,19,20,21,22].
A Picard–Thakur hybrid iteration [12] is as follows:
$$\begin{aligned} j_{n+1} &= V k_n, \\ k_n &= (1 - M_n) V m_n + M_n V l_n, \\ l_n &= (1 - N_n) m_n + N_n V m_n, \\ m_n &= (1 - O_n) j_n + O_n V j_n, \quad n \in \mathbb{N}, \end{aligned}$$
where { M n } , { N n } and { O n } are in (0, 1). They proved that this process converges faster than iteration processes (4)–(6) for contraction mappings.
Akanimo iteration [24] is as follows:
$$\begin{aligned} x_{n+1} &= (1 - M_n) T z_n + M_n T y_n, \\ y_n &= T(T z_n), \\ z_n &= T((1 - N_n) x_n + N_n T x_n), \quad n \in \mathbb{N}, \end{aligned}$$
where { M n } , { N n } are in (0, 1). They proved that it converges faster than iterations (4)–(7) for contractive mappings.
Iterative methods based on fixed-point theory are essential in order to solve the heat equation.
Time delays add complexity to delay differential equations, which is why fixed-point iterative techniques (4)–(7) are employed. By approximating the system’s state at each time step, these methods iteratively approach a solution, aiding in the stabilization and precise solution of equations when delays impact future behavior.
Implicit neural networks can be regarded as extensions of feedforward neural networks that allow for the transfer of training parameters between layers through input-injected weight tying. In fact, backpropagation in implicit networks is achieved by implicit differentiation in gradient computation, while evaluation of function is carried out by solving a FP equation. Compared to typical neural networks, implicit models have better memory efficiency and greater flexibility because of these special characteristics. However, because their FP equations are nonlinear, implicit networks may experience instability during training.
The fact that deep learning models are very effective when equipped with implicit layers has been demonstrated by numerous publications in learning theory. These learning models substitute a rule of composition, which may be a fixed-point scheme or a differential equation solution for the concept of layers. Known deep learning frameworks that use implicit infinite-depth layers are neural ODEs [25], implicit deep learning [26] and deep equilibrium networks [27]. In [28], the convergence of specific classes of implicit networks to global minima is examined.
The research of implicit-depth learning models’ well-posedness and numerical stability has attracted attention lately. An adequate spectral requirement for the convergence and well-posedness of the Picard’s scheme, connected to the implicit network fixed-point equation, is presented by [26].
Robust and well-posed implicit neural networks for the non-Euclidean norm $l_\infty$ are built on the basis of contraction theory [29]. In order to create durable models, they develop a training problem that is constrained by the average iteration and the well-posedness condition. The input–output Lipschitz constant is used as a regularizer in this process. A new kind of deep learning model that works on the basis of implicit prediction rules is presented in [26]. These rules are unlike modern neural networks as they are not produced through a recursive approach across multiple layers. Instead, they rely on solving a fixed-point problem in a "state" vector $x \in \mathbb{R}^n$.
The structure of the paper is as follows. First, we present a novel iterative technique for Garcia-Falset mapping and evaluate its convergence and stability in UCBS. Secondly, we compare this with different well-known iterative schemes, and finally, we illustrate its applications in various problems, namely the BV problem, delay DE and a training problem for an implicit neural network.

2. Rate of Convergence

In this paper, we prove that our novel iterative scheme converges faster than iteration processes (4)–(6). Our novel proposed iteration is given by:
$$\begin{aligned} x_{n+1} &= T(T(p_n)), \\ p_n &= T((1 - M_n) T z_n + M_n T y_n), \\ y_n &= T((1 - N_n) x_n + N_n T z_n), \\ z_n &= T((1 - O_n) x_n + O_n T x_n), \quad n \in \mathbb{N}, \end{aligned}$$
where { M n } , { N n } and { O n } are in (0,1).
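As an illustration, here is a minimal Python sketch of one run of scheme (9); the mapping `T` and the (here constant) parameters `M`, `N`, `O` are assumed inputs chosen by the user, and the function name is hypothetical.

```python
def new_iteration(T, x1, M=0.5, N=0.5, O=0.5, n_steps=20):
    """Four-step scheme (9): z_n, y_n, p_n are auxiliary points and x_{n+1} = T(T(p_n))."""
    x = x1
    history = [x]
    for _ in range(n_steps):
        z = T((1 - O) * x + O * T(x))
        y = T((1 - N) * x + N * T(z))
        p = T((1 - M) * T(z) + M * T(y))
        x = T(T(p))
        history.append(x)
    return history
```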
Theorem 1.
Let $C$ be a closed and convex subset of a UCBS. Let $T$ be a contraction mapping with some constant $H \in (0, 1)$ and fixed point $p$. Let $\{u_n\}$ be defined by (6) and $\{x_n\}$ by (9), where $\{M_n\}$, $\{N_n\}$, $\{O_n\}$ are sequences in $[\epsilon, 1 - \epsilon]$ for every $n \in \mathbb{N}$ and some $\epsilon \in (0, 1)$. Then, the rate of convergence of $\{x_n\}$ is greater than the rate of $\{u_n\}$.
Proof. 
As proven in Theorem 2 of [23], we have:
$$\|u_{n+1} - p\| \le H^n [1 - (1 - H) N O]^n \|u_1 - p\|.$$
Let
$$u_n = H^n [1 - (1 - H) N O]^n \|u_1 - p\|.$$
Using the iterative scheme (9), we have:
$$\begin{aligned} \|z_n - p\| &= \|T[(1 - O_n) x_n + O_n T x_n] - p\| \\ &\le H \|(1 - O_n)(x_n - p) + O_n (T x_n - p)\| \\ &\le H [(1 - O_n)\|x_n - p\| + O_n \|T x_n - p\|] \\ &\le H [(1 - O_n)\|x_n - p\| + O_n H \|x_n - p\|] \\ &= H [1 - (1 - H) O_n] \|x_n - p\|, \end{aligned}$$
so that
$$\begin{aligned} \|y_n - p\| &= \|T[(1 - N_n) z_n + N_n T z_n] - p\| \\ &\le H \|(1 - N_n)(z_n - p) + N_n (T z_n - p)\| \\ &\le H [(1 - N_n)\|z_n - p\| + N_n \|T z_n - p\|] \\ &\le H [(1 - N_n)\|z_n - p\| + N_n H \|z_n - p\|] \\ &= H [1 - (1 - H) N_n] \|z_n - p\| \\ &< H (1 - (1 - H) N_n)(1 - (1 - H) O_n) \|x_n - p\|, \end{aligned}$$
and
$$\begin{aligned} \|p_n - p\| &= \|T[(1 - M_n) T z_n + M_n T y_n] - p\| \\ &\le H \|(1 - M_n)(T z_n - p) + M_n (T y_n - p)\| \\ &\le H [(1 - M_n)\|T z_n - p\| + M_n \|T y_n - p\|] \\ &\le H [(1 - M_n) H \|z_n - p\| + M_n H \|y_n - p\|] \\ &\le H \big[(1 - M_n) H^2 (1 - (1 - H) O_n)\|x_n - p\| + M_n H^2 (1 - (1 - H) O_n)(1 - (1 - H) N_n)\|x_n - p\|\big] \\ &\le H^3 (1 - (1 - H) O_n)\big[(1 - M_n) + M_n (1 - (1 - H) N_n)\big] \|x_n - p\| \\ &= H^3 (1 - (1 - H) O_n)\big[1 - (1 - H) M_n N_n\big] \|x_n - p\| \\ &= H^3 \big[1 - (1 - H) M_n N_n - (1 - H) O_n + (1 - H)^2 O_n N_n M_n\big] \|x_n - p\| \\ &\le H^3 \big[1 - (1 - H) M_n N_n - (1 - H) O_n + (1 - H) O_n N_n M_n\big] \|x_n - p\| \\ &= H^3 \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big] \|x_n - p\| \\ &< H \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big] \|x_n - p\|. \end{aligned}$$
Thus,
$$\begin{aligned} \|x_{n+1} - p\| &= \|T(T p_n) - p\| \\ &\le H \|T p_n - p\| \\ &\le H^2 \|p_n - p\| \\ &\le H^3 \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big] \|x_n - p\|. \end{aligned}$$
Let
$$b_n = H^{3n} \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big]^n \|x_1 - p\|.$$
Then,
$$\frac{b_n}{u_n} = \frac{H^{3n} \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big]^n \|x_1 - p\|}{H^n \big[1 - (1 - H) N O\big]^n \|u_1 - p\|} \to 0 \quad \text{as } n \to \infty.$$
From Definition 3.4 of [30], { x n } converges faster than { u n } . □
Remark 2.
As proven in [31], the Thakur iteration (5) is faster than the Ishikawa [19], Noor [20], and Abbas and Nazir [21] iterations. So, we will compare our proposed iteration with the Picard (4), Thakur (5) and Asghar Rahimi (6) iterations.
We now demonstrate with an example that our novel iteration (9) converges at a greater rate than Picard (4), Thakur (5) and Asghar Rahimi iteration processes (6).
Example 2.
Consider $T : \mathbb{R} \to \mathbb{R}$ to be a mapping defined by $T(x) = \sqrt{x^2 - 6x + 30}$ for any $x \in \mathbb{R}$. Choose $M_n = N_n = O_n = \frac{1}{2}$, with an initial guess $x_1 = 30$. It can easily be seen that $x = 5$ is a fixed point of $T$. Table 1 and Figure 1 below show the convergence of all the iterative schemes to the fixed point of $T$ over 21 iterations.
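The comparison in Table 1 can be checked with a short script such as the sketch below, which runs the Thakur scheme (5) and the proposed scheme (9) for the mapping of Example 2 with $M_n = N_n = O_n = \frac{1}{2}$ and $x_1 = 30$ as stated; the printed iterates should approach the fixed point 5, mirroring the behaviour reported in Table 1 (exact digits depend on floating-point rounding).

```python
import math

T = lambda x: math.sqrt(x * x - 6 * x + 30)   # mapping of Example 2, fixed point x = 5
M = N = O = 0.5
x_thakur = x_new = 30.0                       # initial guess from Example 2

for n in range(2, 12):
    # Thakur scheme (5)
    z = T((1 - O) * x_thakur + O * T(x_thakur))
    y = T((1 - N) * z + N * T(z))
    x_thakur = (1 - M) * T(z) + M * T(y)
    # proposed scheme (9)
    z = T((1 - O) * x_new + O * T(x_new))
    y = T((1 - N) * x_new + N * T(z))
    p = T((1 - M) * T(z) + M * T(y))
    x_new = T(T(p))
    print(n, round(x_thakur, 10), round(x_new, 10))
```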

3. Convergence Results for Garcia-Falset Mapping

Lemma 1.
Let $C$ be a convex and closed subset of a Banach space $S$ and $T : C \to C$ be a mapping with condition (E). If $\{x_n\}$ is defined by (9), then $\lim_{n\to\infty}\|x_n - p\|$ exists for every fixed point $p$, provided that $F(T) \neq \emptyset$.
Proof. 
Recall that $F(T) \neq \emptyset$. By Lemma 6 of [10], we have:
$$\begin{aligned} \|z_n - p\| &= \|T[(1 - O_n) x_n + O_n T x_n] - p\| \\ &\le \|(1 - O_n)(x_n - p) + O_n (T x_n - p)\| \\ &\le (1 - O_n)\|x_n - p\| + O_n \|T x_n - p\| \\ &\le (1 - O_n)\|x_n - p\| + O_n \|x_n - p\| \\ &= \|x_n - p\|, \end{aligned}$$
so that
$$\begin{aligned} \|y_n - p\| &= \|T[(1 - N_n) z_n + N_n T z_n] - p\| \\ &\le \|(1 - N_n)(z_n - p) + N_n (T z_n - p)\| \\ &\le (1 - N_n)\|z_n - p\| + N_n \|T z_n - p\| \\ &\le (1 - N_n)\|z_n - p\| + N_n \|z_n - p\| \\ &= \|z_n - p\| \le \|x_n - p\|, \end{aligned}$$
and
$$\begin{aligned} \|p_n - p\| &= \|T[(1 - M_n) T z_n + M_n T y_n] - p\| \\ &\le \|(1 - M_n)(T z_n - p) + M_n (T y_n - p)\| \\ &\le (1 - M_n)\|T z_n - p\| + M_n \|T y_n - p\| \\ &\le (1 - M_n)\|z_n - p\| + M_n \|y_n - p\| \\ &\le (1 - M_n)\|x_n - p\| + M_n \|x_n - p\| \\ &= \|x_n - p\|. \end{aligned}$$
Thus,
$$\|x_{n+1} - p\| = \|T(T p_n) - p\| \le \|T p_n - p\| \le \|p_n - p\| \le \|x_n - p\|.$$
Hence, $\{\|x_n - p\|\}$ is bounded and nonincreasing for every $p \in F(T)$. Thus, $\lim_{n\to\infty}\|x_n - p\|$ exists. □
Lemma 2.
Let $C$ be a convex and closed subset of a Banach space $S$ and $T : C \to C$ be a Garcia-Falset mapping. Suppose $\{x_n\}$ is defined by (9) and $F(T) \neq \emptyset$; then
$$\lim_{n\to\infty}\|T x_n - x_n\| = 0.$$
Proof. 
From Lemma 1, for each $p \in F(T)$, $\lim_{n\to\infty}\|x_n - p\|$ exists. Suppose that there exists $t \ge 0$ such that:
$$\lim_{n\to\infty}\|x_n - p\| = t.$$
From the proof of Lemma 1 we obtain that $\|z_n - p\| \le \|x_n - p\|$. Accordingly, one has
$$\limsup_{n\to\infty}\|z_n - p\| \le \limsup_{n\to\infty}\|x_n - p\| = t.$$
Now $p$ is a fixed point, and by Lemma 6 of [10], we obtain that $\|T x_n - p\| \le \|x_n - p\|$, so
$$\limsup_{n\to\infty}\|T x_n - p\| \le \limsup_{n\to\infty}\|x_n - p\| = t.$$
Again, by the proof of Lemma 1, we obtain that $\|y_n - p\| \le \|z_n - p\|$. So,
$$\|x_{n+1} - p\| \le (1 - M_n)\|x_n - p\| + M_n\|y_n - p\| \le (1 - M_n)\|x_n - p\| + M_n\|z_n - p\|.$$
Hence,
$$\|x_{n+1} - p\| - \|x_n - p\| \le \frac{\|x_{n+1} - p\| - \|x_n - p\|}{M_n} \le \|z_n - p\| - \|x_n - p\|.$$
So,
$$\|x_{n+1} - p\| \le \|z_n - p\|.$$
From (18), we have
$$t \le \liminf_{n\to\infty}\|z_n - p\|.$$
Hence, from (19) and (21), we have
$$t = \lim_{n\to\infty}\|z_n - p\|.$$
From (9) and from (18), we have
$$\begin{aligned} t = \lim_{n\to\infty}\|z_n - p\| &= \lim_{n\to\infty}\|T[(1 - O_n) x_n + O_n T x_n] - p\| \\ &\le \lim_{n\to\infty}\|(1 - O_n)(x_n - p) + O_n (T x_n - p)\| \\ &\le \lim_{n\to\infty}\big[(1 - O_n)\|x_n - p\| + O_n\|T x_n - p\|\big] \le t. \end{aligned}$$
Hence,
$$t = \lim_{n\to\infty}\|(1 - O_n)(x_n - p) + O_n (T x_n - p)\|.$$
Now, from (19), (20), (22) and Lemma 2.5 of [30], $\lim_{n\to\infty}\|T x_n - x_n\| = 0$. □

4. Stability

In this section, we establish the stability for Garcia-Falset mappings via our novel iteration process (9).
Theorem 2.
The iterative process { x n } defined by Equation (9) is T-stable in the sense of Harder and Hicks [32].
Proof. 
Suppose $\{u_n\}$ is an arbitrary sequence in $C$, $u_{n+1} = f(T, u_n)$ denotes the sequence generated by (9), which converges to a fixed point $p$ (by Theorem 1), and $\delta_n = \|u_{n+1} - f(T, u_n)\|$ for all $n \in \mathbb{Z}^+$. We have to prove that $\lim_{n\to\infty}\delta_n = 0$ if and only if $\lim_{n\to\infty}u_n = p$. Suppose $\lim_{n\to\infty}\delta_n = 0$; then, by iteration process (9), we have
$$\begin{aligned} \|u_{n+1} - p\| &\le \|u_{n+1} - f(T, u_n)\| + \|f(T, u_n) - p\| \\ &\le \delta_n + \|f(T, u_n) - p\| \\ &\le \delta_n + H^3 \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big]\|u_n - p\|. \end{aligned}$$
Since
$$0 < 1 - (1 - H) M_n N_n \le 1$$
and
$$0 < 1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n) \le 1,$$
we obtain $\|u_{n+1} - p\| \le \delta_n + H^3 \|u_n - p\|$.
Define $v_n = \|u_n - p\|$. Then, $v_{n+1} \le H^3 v_n + \delta_n$. Since $\lim_{n\to\infty}\delta_n = 0$, we have $\lim_{n\to\infty}v_n = 0$, i.e., $\lim_{n\to\infty}u_n = p$. Conversely, suppose $\lim_{n\to\infty}u_n = p$; we have:
$$\begin{aligned} \delta_n &= \|u_{n+1} - f(T, u_n)\| \\ &\le \|u_{n+1} - p\| + \|f(T, u_n) - p\| \\ &\le \|u_{n+1} - p\| + H^3 \big[1 - (1 - H)(M_n N_n + O_n - O_n N_n M_n)\big]\|u_n - p\| \\ &\le \|u_{n+1} - p\| + H^3 \|u_n - p\|. \end{aligned}$$
This implies that $\lim_{n\to\infty}\delta_n = 0$. Hence, iteration process (9) is T-stable. □
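To see the stability statement in action, the following numerical sketch perturbs each iterate of scheme (9) for an illustrative contraction ($T(x) = 0.5\cos x$, not from the paper) by an amount $\delta_n$ that tends to zero; the perturbed sequence still approaches the fixed point, as Theorem 2 predicts.

```python
import math
import random

T = lambda x: 0.5 * math.cos(x)   # an illustrative contraction on R
M = N = O = 0.5

def step(x):
    """One pass of scheme (9)."""
    z = T((1 - O) * x + O * T(x))
    y = T((1 - N) * x + N * T(z))
    p = T((1 - M) * T(z) + M * T(y))
    return T(T(p))

u = 1.0
for n in range(1, 16):
    u = step(u) + 0.5 ** n * random.uniform(-1, 1)   # perturbation, so delta_n <= 0.5**n -> 0
    print(n, u)                                       # u_n still tends to the fixed point (~0.450)
```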

5. Application

Thermal analysis and engineering both make extensive use of the heat equation, which represents the temperature distribution over time in a specific location. By adding delays, delay differential equations expand on conventional differential equations and can be used in systems where past states affect future behavior, such as population dynamics. Implicit neural networks use implicit functions to define their outputs, resulting in reliable and effective solutions to challenging problems.

5.1. Heat Equation

Consider the following one-dimensional heat equation:
$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}, \quad 0 < x < L, \quad 0 < t < \infty.$$
The initial and homogeneous boundary conditions are given as follows:
$$u(x, 0) = f(x),$$
and
$$u(0, t) = 0 \quad \text{and} \quad u(L, t) = 0,$$
where $f(x)$ is a continuous function. Utilizing iterative scheme (9), a solution of problem (24) is approximated under the following assumption:
$$|f(t, a) - f(t, b)| \le \max|a - b|, \quad \text{for all } 0 < t < \infty.$$
Theorem 3.
Suppose $T : C \to C$ is given by:
$$T(u(t, x)) = B_n f(t, u(t, x)), \quad x \in C.$$
Let $\{x_n\}$ be the sequence defined by (9) for $T$, and suppose (27) is satisfied. Then the sequence $\{x_n\}$ converges to a solution $u(t, x)$ of problem (24).
Proof. 
$u(t, x)$ satisfies (24) only if it satisfies the following equation:
$$u(t, x) = B_n f(t, u(t, x)), \quad x \in C.$$
Let $u, v \in S$; using (27), we obtain
$$\begin{aligned} |T(u(t, x)) - T(v(t, x))| &= |B_n f(t, u(t, x)) - B_n f(t, v(t, x))| \\ &\le B_n |f(t, u(t, x)) - f(t, v(t, x))| \\ &\le B_n \max|u(t, x) - v(t, x)| \\ &\le B_n \|u(t, x) - v(t, x)\|, \end{aligned}$$
such that
$$B_n = \begin{cases} 0, & \text{for even } n, \\ \dfrac{4}{n\pi}, & \text{for odd } n. \end{cases}$$
To apply our Theorem, we must have a nonexpansive mapping. To make (30) nonexpansive, we set $B_n = 1$. As we see in (31), we take only the odd case. When we take $n = \frac{4}{\pi}$, we obtain
$$\|T(u(t, x)) - T(v(t, x))\| = \|u(t, x) - v(t, x)\|.$$
Thus, $T$ is a nonexpansive mapping. Hence, (9) converges to the solution of (24). □
Example 3.
Consider the following problem:
$$\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}, \quad 0 < x < 2, \quad t = 0.5, \quad \alpha = 0.5,$$
with initial and boundary conditions given as follows:
$$u(x, 0) = 1,$$
and
$$u(0, t) = 0 \quad \text{and} \quad u(2, t) = 0.$$
The problem (33) has an exact solution given by:
$$u(t, x) = \sum_{n = 1,\ n\ \mathrm{odd}}^{\infty} \frac{4}{n\pi}\, e^{-0.25\left(\frac{n\pi}{2}\right)^2} \sin\frac{n\pi x}{2}.$$
$T : C \to C$ is defined by:
$$T u(t, x) = \sum_{n = 1,\ n\ \mathrm{odd}}^{\infty} \frac{4}{n\pi}\, e^{-0.25\left(\frac{n\pi}{2}\right)^2} \sin\frac{n\pi x}{2}.$$
The iterative scheme (9) converges to (34) for the operator defined in Equation (35), as shown in Table 2 and Figure 2.
Remark 3.
It is visible that after 7 iterations, the new iteration (9) converges to the solution of problem (33) up to 2 decimal places faster than the iterative techniques [22,23,24], and after 50 iterations, its convergence rate remains faster than that of the iterative techniques [22,23,24], as shown in Table 2 and Figure 2.
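For completeness, a small Python sketch that evaluates the truncated series solution (34) is given below; the truncation level and the evaluation point are arbitrary choices, and the snippet only evaluates the closed-form series rather than re-running the iterative schemes reported in Table 2.

```python
import math

def u(t, x, alpha=0.5, L=2.0, n_terms=200):
    """Truncated Fourier series solution of the heat equation with f(x) = 1."""
    total = 0.0
    for n in range(1, 2 * n_terms, 2):   # odd n only
        coefficient = 4.0 / (n * math.pi)
        decay = math.exp(-alpha * t * (n * math.pi / L) ** 2)
        total += coefficient * decay * math.sin(n * math.pi * x / L)
    return total

print(u(0.5, 1.0))   # temperature at the midpoint x = 1 after t = 0.5
```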

5.2. Delay Differential Equations

A delay differential equation (DDE) is a class of differential equations that includes terms dependent on past values of the solution. It takes the following form:
$$\frac{dy(t)}{dt} = f(t, y(t), y(t - \tau)),$$
where $y(t)$ is the unknown function of time $t$, $\tau$ is the delay parameter and $f$ is a function that describes how the rate of change of $y$ depends on both the current state $y(t)$ and the state at an earlier time $y(t - \tau)$. This inclusion of past states allows DDEs to model systems where historical information impacts future dynamics, such as in biological, engineering or economic processes.
In this subsection, we use novel iterative scheme (9) to find the solution.
Consider $C([u, v])$, the Banach space [33] of all continuous functions on the closed interval $[u, v]$, equipped with the Chebyshev norm:
$$\|x - z\| = \sup_{r \in [u, v]} |x(r) - z(r)|.$$
We consider the following delay differential equation:
$$x'(r) = \psi(r, x(r), x(r - \gamma)), \quad r \in [r_0, v],$$
with initial condition
$$x(r) = \zeta(r), \quad r \in [r_0 - \gamma, r_0].$$
A function $x \in C([r_0 - \gamma, v], \mathbb{R}) \cap C^1([r_0, v], \mathbb{R})$ that satisfies (36) and (37) is a solution of (36) and (37).
We further suppose that the following conditions are satisfied:
  • $r_0, v \in \mathbb{R}$, $\gamma > 0$;
  • $\psi \in C([r_0, v] \times \mathbb{R}^2, \mathbb{R})$;
  • $\zeta \in C([r_0 - \gamma, v], \mathbb{R})$;
  • There exists $Q_\psi > 0$ such that:
    $$|\psi(r, s_1, s_2) - \psi(r, t_1, t_2)| \le Q_\psi \sum_{i=1}^{2} |s_i - t_i|, \quad s_i, t_i \in \mathbb{R},\ r \in [r_0, v];$$
  • $2 Q_\psi (v - r_0) < 1$.
Using (36) and (37), we construct the integral equation as follows:
$$x(r) = \begin{cases} \zeta(r), & r \in [r_0 - \gamma, r_0], \\ \zeta(r_0) + \displaystyle\int_{r_0}^{r} \psi(t, x(t), x(t - \gamma))\, dt, & r \in [r_0, v]. \end{cases}$$
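To make the integral form concrete, the sketch below applies a Picard-type fixed-point iteration to (39) for a simple test problem; the particular $\psi$, $\zeta$, delay and grid are hypothetical choices used only to illustrate the operator, not the full scheme (9).

```python
import numpy as np

# Test problem: x'(r) = -x(r - 1) on [0, 1], with x(r) = 1 on [-1, 0] (hypothetical choice)
r0, v, gamma = 0.0, 1.0, 1.0
grid = np.linspace(r0, v, 201)

def zeta(r):
    return np.ones_like(r)   # initial function on [r0 - gamma, r0]

def history(x, r):
    """Value of the current approximation at time r (falls back to zeta for r <= r0)."""
    return np.where(r <= r0, zeta(r), np.interp(r, grid, x))

def T(x):
    """Integral operator (39): (Tx)(r) = zeta(r0) + integral of psi(s, x(s), x(s - gamma)) ds."""
    psi_vals = -history(x, grid - gamma)                              # psi = -x(s - gamma) here
    increments = (psi_vals[1:] + psi_vals[:-1]) / 2 * np.diff(grid)   # trapezoidal rule
    return 1.0 + np.concatenate(([0.0], np.cumsum(increments)))       # zeta(r0) = 1

x = np.ones_like(grid)   # initial guess
for _ in range(20):
    x = T(x)             # Picard-type fixed-point iteration
print(x[-1])             # exact solution gives x(1) = 0 for this test problem
```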
We present the following result as a generalization of the result of Coman et al. [34].
Theorem 4.
Suppose conditions (1) to (5) are satisfied. Then, (36) and (37) have the unique solution $x^* \in C([r_0 - \gamma, v], \mathbb{R}) \cap C^1([r_0, v], \mathbb{R})$, the iterative scheme (9) converges to $x^*$, and
$$x^* = \lim_{n\to\infty} T^n(x).$$
Proof. 
Let $\{x_n\}$ be the iterative scheme defined by (9) for the operator $T$ given in (39).
Let $x^* \in F(T)$. We show that $x_n \to x^*$ as $n \to \infty$. For $r \in [r_0 - \gamma, r_0]$ this is immediate, since both sides equal $\zeta(r)$.
Now, for each $r \in [r_0, v]$, we have
$$\begin{aligned} \|z_n - x^*\| &= \|T((1 - O_n) x_n + O_n T x_n) - x^*\| \\ &\le \sup_{r \in [r_0, v]} \Big| \int_{r_0}^{r} \big[ \psi\big(s, ((1 - O_n) x_n + O_n T x_n)(s), ((1 - O_n) x_n + O_n T x_n)(s - \gamma)\big) \\ &\qquad\qquad - \psi\big(s, x^*(s), x^*(s - \gamma)\big) \big]\, ds \Big| \\ &\le \int_{r_0}^{r} Q_\psi \big( \|((1 - O_n) x_n + O_n T x_n)(s) - x^*(s)\| + \|((1 - O_n) x_n + O_n T x_n)(s - \gamma) - x^*(s - \gamma)\| \big)\, ds \\ &\le 2 Q_\psi (v - r_0)\, \|(1 - O_n) x_n + O_n T x_n - x^*\| \end{aligned}$$
and
$$\begin{aligned} \|(1 - O_n) x_n + O_n T x_n - x^*\| &\le (1 - O_n)\|x_n - x^*\| + O_n \|T x_n - T x^*\| \\ &\le (1 - O_n)\|x_n - x^*\| + O_n \cdot 2 Q_\psi (v - r_0)\|x_n - x^*\| \\ &= \big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\, \|x_n - x^*\|, \end{aligned}$$
so that $\|z_n - x^*\| \le \big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\, \|x_n - x^*\|$.
Moreover, proceeding in the same way,
$$\|y_n - x^*\| = \|T((1 - N_n) x_n + N_n T z_n) - x^*\| \le 2 Q_\psi (v - r_0)\, \|(1 - N_n) x_n + N_n T z_n - x^*\|$$
and
$$\begin{aligned} \|(1 - N_n) x_n + N_n T z_n - x^*\| &\le (1 - N_n)\|x_n - x^*\| + N_n \|T z_n - T x^*\| \\ &\le (1 - N_n)\|x_n - x^*\| + N_n \cdot 2 Q_\psi (v - r_0)\|z_n - x^*\| \\ &\le (1 - N_n)\|x_n - x^*\| + N_n \cdot 2 Q_\psi (v - r_0)\big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\|x_n - x^*\| \\ &\le \big[1 - N_n(1 - 2 Q_\psi (v - r_0))\big]\big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\, \|x_n - x^*\|, \end{aligned}$$
and, similarly,
$$\|p_n - x^*\| = \|T((1 - M_n) T z_n + M_n T y_n) - x^*\| \le 2 Q_\psi (v - r_0)\, \|(1 - M_n) T z_n + M_n T y_n - x^*\|$$
with
$$\begin{aligned} \|(1 - M_n) T z_n + M_n T y_n - x^*\| &\le (1 - M_n)\|T z_n - T x^*\| + M_n \|T y_n - T x^*\| \\ &\le (1 - M_n) \cdot 2 Q_\psi (v - r_0)\|z_n - x^*\| + M_n \cdot 2 Q_\psi (v - r_0)\|y_n - x^*\| \\ &\le (1 - M_n) \cdot 2 Q_\psi (v - r_0)\big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\|x_n - x^*\| \\ &\quad + M_n \cdot 2 Q_\psi (v - r_0)\big[1 - N_n(1 - 2 Q_\psi (v - r_0))\big]\big[1 - O_n(1 - 2 Q_\psi (v - r_0))\big]\|x_n - x^*\| \\ &\le 2 Q_\psi (v - r_0)\big[1 - (1 - 2 Q_\psi (v - r_0))(M_n N_n + O_n - M_n N_n O_n)\big]\, \|x_n - x^*\|. \end{aligned}$$
Finally,
$$\|x_{n+1} - x^*\| = \|T(T p_n) - x^*\| \le 2 Q_\psi (v - r_0)\, \|T p_n - x^*\|$$
and
$$\begin{aligned} \|T p_n - x^*\| &\le 2 Q_\psi (v - r_0)\, \|p_n - x^*\| \\ &\le \big(2 Q_\psi (v - r_0)\big)^2 \big[1 - (1 - 2 Q_\psi (v - r_0))(M_n N_n + O_n - M_n N_n O_n)\big]\, \|x_n - x^*\|. \end{aligned}$$
Remark 4.
By condition (5), and since $\big[1 - (1 - 2 Q_\psi (v - r_0))(M_n N_n + O_n - M_n N_n O_n)\big] = \tau_n < 1$ and $\|x_n - x^*\| = r_n$, the conditions of Lemma 3 of [35] are satisfied. Hence, $\lim_{n\to\infty}\|x_n - x^*\| = 0$.

5.3. Implicit Neural Network

In this section, we present a modified implicit neural network which can be regarded as an extension of the traditional feed-forward neural network.
Deep equilibrium (DEQ) models, an emerging class of implicit models that map inputs to fixed points in neural networks, are gaining popularity in the deep learning community. A deep equilibrium (DEQ) model deviates from classical depth by solving for the fixed point of a single nonlinear layer R. This structure allows for the decoupling of the layer’s internal structure (which regulates representational capacity) from how the fixed point is determined (which affects inference-time efficiency), which is typically carried out using classic techniques.
We aim to build the neural network given in Figure 3 that translates from a data space $x$ to an inference space $y$. The implicit component of the network utilizes a latent space $X$, and data are translated into this latent space through a map $L$ from $x$ to $X$. We define a network operator $T$ that transforms $X \times x \to X$ by:
$$T(X, x) \triangleq R(X, L(x)).$$
The objective is to find the unique fixed point $X_x^*$ of $T(\cdot\,, x)$ given input data $x$. We will then use a final mapping $J : X \to y$ to transfer $X_x^*$ to the inference space $y$. Because of this, we can create an implicit network $N$ by
$$N(x) \triangleq J(X_x^*), \quad \text{where } T(X_x^*, x) = X_x^*.$$
Implicit models specify their outputs as solutions to nonlinear dynamical systems, as opposed to stacking a number of operators hierarchically. For instance, the outputs of DEQ models, which are the subject of this work, are defined as fixed points (i.e., equilibria) of a layer R and input x; that is, output
$$x^* = R(x^*, x).$$
Theorem 5.
Let $C$ be a convex and closed subset of a UCBS and $\sigma : C \to C$ be a contraction mapping (activation function). Then, Equation (53) models a well-posed and robust neural network provided that the weights $\{M_t\}$, $\{N_t\}$, $\{O_t\}$, $\{W_{(\cdot)}\}$ and biases $\{b_{(\cdot)}\}$ are in $[\delta, 1 - \delta]$ for every $t \in \mathbb{N}$ and some $\delta \in (0, 1)$.
$$\begin{aligned} q_t &= \sigma\big((1 - O_t) X + O_t\, \sigma(W_1 X + W_2 P_{t-1} + b_q)\big), \\ r_t &= \sigma\big((1 - N_t) X + N_t\, \sigma(W_3 q_t + b_r)\big), \\ z_t &= \sigma\big((1 - M_t)\, \sigma(W_3 q_t + b_r) + M_t\, \sigma(W_4 r_t + b_z)\big), \\ P_t &= \sigma(W_5 z_t + b_P), \\ X_t &= \sigma(W_6 P_t + b_y). \end{aligned}$$
Proof. 
Under the given conditions, Equation (53) models a well-posed system, as the existence of a unique fixed point for the contraction mapping $\sigma$ is guaranteed [17]. The robustness of the system can be verified by Theorem 2, where the iterative scheme is proved to be T-stable, which shows that small perturbations to the system do not affect the output. □
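A minimal NumPy sketch of the forward pass defined by (53) is given below; the latent dimension, the randomly drawn weights, the zero initial state $P_0$ and the sigmoid activation are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = lambda s: 1.0 / (1.0 + np.exp(-s))   # assumed contraction-type activation

d = 4                                        # latent dimension (arbitrary)
W1, W2, W3, W4, W5, W6 = (0.1 * rng.standard_normal((d, d)) for _ in range(6))
b_q = b_r = b_z = b_P = b_y = 0.1 * np.ones(d)
O_t, N_t, M_t = 0.3, 0.4, 0.5                # step parameters in (0, 1)

def forward(X, P_prev):
    """One time step of the implicit network (53)."""
    q = sigma((1 - O_t) * X + O_t * sigma(W1 @ X + W2 @ P_prev + b_q))
    r = sigma((1 - N_t) * X + N_t * sigma(W3 @ q + b_r))
    z = sigma((1 - M_t) * sigma(W3 @ q + b_r) + M_t * sigma(W4 @ r + b_z))
    P = sigma(W5 @ z + b_P)
    X_out = sigma(W6 @ P + b_y)
    return X_out, P

X = np.ones(d)       # normalized input
P = np.zeros(d)      # assumed initial latent state
for t in range(6):
    y_t, P = forward(X, P)
print(y_t)
```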
Example 4.
Training an implicit neural network. Suppose we want to build a neural network that can predict an exam score based on the number of hours studied, by assuming that whoever studies for $x \in [0, 3]$ hours tends to achieve an exam score of $y \in [0, 10]$, which can be described by the function $f(x) = \frac{10}{3}x$. For this purpose, we train a network that takes an input of 3 (hours) and gives an output of 10 (score).
Key steps for solving iteration using implicit neural networks:
  • Formulate the Problem: Represent the task as a fixed-point equation.
  • Choose an Iterative Solver: Select an appropriate iterative method (4)–(7) to solve the implicit equation.
  • Initialization: Define an initial guess for x and determine the conditions for convergence.
  • Backpropagation through Iterations: Implement the backpropagation process, considering the differentiation of the fixed-point iteration with respect to the parameters using implicit differentiation.
  • Convergence and Stability Analysis: Analyze the convergence behavior and stability of the iterative process, ensuring that the method reliably finds a solution within the desired tolerance.
Given:
  • Maximum input (max) = 3;
  • Minimum input (min) = 0;
  • Input value = 3.
Normalizing the input and output:
$$\text{normalized\_value} = \frac{\text{value} - \min}{\max - \min},$$
we get:
$$\text{normalized\_value} = \frac{3 - 0}{3 - 0} = \frac{3}{3} = 1.$$
By normalization, input 3 becomes 1, and similarly, output 10 also becomes 1. We start the Example by taking the following:
  • Input X = 1 ;
  • Weights: W 1 = 0.1 , W 2 = 0.2 , W 3 = 0.3 , W 4 = 0.4 , W 5 = 0.5 , W 6 = 0.3 ;
  • Bias: b q = 0.5 , b r = 0.3 , b z = 0.1 , b P = 0.3 , b y = 0.4 ;
  • O = 0.3 , N = 0.4 , M = 0.5 ;
  • Learning rate α = 0.02 .
Novel Proposed Method:
We will compute $y_t$ using (53). For this, let us start by taking the activation function as $\sigma(x) = \frac{1}{1 + \exp(-x)}$, which is a contraction on $(0, \infty)$.
At Time Step t = 1 :
Now applying our proposed iterative method, we obtain:
$$q_1 = 0.69, \quad r_1 = 0.65, \quad z_1 = 0.34, \quad P_1 = 0.44, \quad X_1 = 0.48.$$
This will be the output $y_t$ at time step $t = 1$.
Compute Loss:
The mean square error cost function is defined as follows:
$$C(W, b) = \sum_x \frac{1}{2}\,\|y(x) - y_t\|^2,$$
where:
  • x: input;
  • y t : output at time step t;
  • W: weights collected in the network;
  • b: biases;
  • $\|v\|$: the usual length of vector $v$.
Let us use the Mean Squared Error (MSE) Loss. The true output is $y = 1.00$;
$$Loss = \frac{1}{2}(y - y_1)^2 = \frac{1}{2}(1.00 - 0.48)^2 \approx 0.52.$$
Backpropagation:
Now, by using backpropagation, we will update weights and biases.
$$\frac{\partial Loss}{\partial y_1} = -(1.00 - 0.48) = -0.52.$$
Gradient Learning Algorithm:
We will update weights and biases using the gradient learning algorithm. This technique can be written as follows:
$$P_{n+1} = P_n - \alpha \nabla f(P_n).$$
Updating $W_6$:
$$\frac{\partial Loss}{\partial W_6} = \frac{\partial Loss}{\partial y_1} \cdot \frac{\partial y_1}{\partial \sigma} \cdot \frac{\partial \sigma}{\partial W_6} = -0.52 \times (1 - \tanh^2(0.48)) \times P_1 = -0.52 \times 0.76 \times 0.44 \approx -0.18,$$
$$\text{Updated } W_6 = W_6(\text{previous}) - 0.2 \times \frac{\partial Loss}{\partial W_6} = 0.33.$$
Updating $b_y$:
$$\frac{\partial Loss}{\partial b_y} = \frac{\partial Loss}{\partial y_1} \cdot \frac{\partial y_1}{\partial \sigma} \cdot \frac{\partial \sigma}{\partial b_y} = -0.52 \times (1 - \tanh^2(0.48)) \approx -0.41,$$
$$\text{Updated } b_y = b_y(\text{previous}) - 0.2 \times \frac{\partial Loss}{\partial b_y} = 0.48.$$
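The two updates above can be checked with a few lines of Python; the snippet follows the text in using the tanh derivative evaluated at 0.48 and a step size of 0.2, so, because of rounding of the intermediate quantities, the printed values may differ slightly from the rounded figures shown in the text.

```python
import math

y_true, y1, P1 = 1.00, 0.48, 0.44
d_loss_d_y1 = -(y_true - y1)            # -0.52
d_act = 1 - math.tanh(y1) ** 2          # derivative used in the text (about 0.8)

grad_W6 = d_loss_d_y1 * d_act * P1      # chain rule, about -0.18
grad_by = d_loss_d_y1 * d_act           # about -0.42

W6_new = 0.3 - 0.2 * grad_W6            # about 0.34 (text reports 0.33)
by_new = 0.4 - 0.2 * grad_by            # about 0.48
print(round(W6_new, 2), round(by_new, 2))
```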
In a similar way, we will calculate other weights and biases as well.
Updated Weights and Biases:
Weights: $W_1 = 0.10$, $W_2 = 0.20$, $W_3 = 0.30$, $W_4 = 0.40$, $W_5 = 0.50$, $W_6 = 0.33$.
Biases: $b_q = 0.50$, $b_r = 0.30$, $b_z = 0.10$, $b_P = 0.31$, $b_y = 0.48$.
At Time Step t = 2 :
We will calculate output y 2 by using updated weights and biases.
Using the same procedure, we calculated
y 2 = 0.55
Compute L o s s :
Let us use the Mean Squared Error (MSE) L o s s .
L o s s 0.45 .
By repeating the same procedure:
At Time Step t = 6 :
After 6 iterations, our model is trained and we obtain the following loss:
L o s s 0.27 .
Now, as our model is trained, we check the output at the normalized input $x = 0.5$. The output $y$ at this normalized input is
$$y = 0.50.$$
We thus obtain an estimated output of $y = 5.0$ (score) for an input of $x = 1.5$ hours.
It is proven in Section 2 that the rate of convergence of the iterative scheme (9) is higher than that of several other schemes; since the trained network (53) operates on the basis of this scheme, it also has an improved convergence rate compared with many traditional networks such as FNNs and RNNs.

6. Conclusions

This research presents a novel iterative scheme that converges faster than the Picard [17], Thakur et al. [22], Asghar Rahimi [23], Sintunavarat [36], Jubair et al. [37], Okeke and Abbas [38], Agarwal et al. [39] and Akanimo [24] iterations. The comparison of (9) with other iterative schemes is also demonstrated through Example 2. Finally, applications arising from various fields of science are given to show the applicability of our scheme. As future work, it would be very interesting to see the applicability of this iterative scheme for solving nonlinear ODEs and PDEs after discretization. For similar work, see [40].

Author Contributions

Conceptualization, Methodology, Supervision, Review: Q.K. Original draft preparation and editing: S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alagoz, O.; Birol, G.; Sezgin, G. Numerical reckoning fixed points for Berinde mappings via a faster iteration process. Facta Univ. Ser. Math. Inform. 2008, 33, 295–305. [Google Scholar]
  2. Ullah, K.; Ahmad, J.; Arshad, M.; Ma, Z. Approximating fixed points using a faster iterative method and application to split feasibility problems. Computation 2021, 9, 90. [Google Scholar] [CrossRef]
  3. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  4. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Prob. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  5. López, G.; Márquez, V.M.; Xu, H.K. Halpern iteration for Nonexpansive mappings. Contemp. Math. 2010, 513, 211–231. [Google Scholar]
  6. Kirk, W.A. A fixed point Theorem for mappings which do not increase distances. Amer. Math. Mon. 1965, 72, 1004–1006. [Google Scholar] [CrossRef]
  7. Browder, F.E. Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 1965, 54, 1041–1044. [Google Scholar] [CrossRef]
  8. Göhde, D. Zum Prinzip der kontraktiven Abbildung. Math. Nachr. 1965, 30, 251–258. [Google Scholar] [CrossRef]
  9. Goebel, K. An elementary proof of the fixed-point Theorem of Browder and Kirk. Michigan Math. J. 1969, 16, 381–383. [Google Scholar] [CrossRef]
  10. Suzuki, T. Fixed point Theorems and convergence Theorems for some generalized nonexpansive mappings. J. Math. Anal. Appl. 2008, 340, 1088–1095. [Google Scholar] [CrossRef]
  11. Garcia-Falset, J.; Llorens-Fuster, E.; Suzuki, T. Fixed point theory for a class of generalized nonexpansive mappings. J. Math. Anal. Appl. 2011, 375, 185–195. [Google Scholar] [CrossRef]
  12. Bagherboum, M. Approximating fixed points of mappings satisfying condition (E) in Busemann space. Numer. Algorithms 2016, 71, 25–39. [Google Scholar] [CrossRef]
  13. Kitkuan, D.; Muangchoo, K.; Padcharoen, A.; Pakkaranang, N.; Kumam, P. A viscosity forward-backward splitting approximation method in Banach spaces and its application to convex optimization and image restoration problems. Comput. Math. Methods 2020, 2, e1098. [Google Scholar] [CrossRef]
  14. Kumam, W.; Pakkaranang, N.; Kumam, P.; Cholamjiak, P. Convergence analysis of modified Picard’s hybrid iterative algorithms for total asymptotically nonexpansive mappings in Hadamard spaces. Int. J. Comput. Math. 2020, 97, 157–188. [Google Scholar] [CrossRef]
  15. Sunthrayuth, P.; Pakkaranang, N.; Kumam, P.; Thounthong, P.; Cholamjiak, P. Convergence Theorems for generalized viscosity explicit methods for nonexpansive mappings in Banach spaces and some applications. Mathematics 2019, 7, 161. [Google Scholar] [CrossRef]
  16. Thounthong, P.; Pakkaranang, N.; Cho, Y.J.; Kumam, W.; Kumam, P. The numerical reckoning of modified proximal point methods for minimization problems in non-positive curvature metric spaces. J. Nonlinear Sci. Appl. 2020, 97, 245–262. [Google Scholar] [CrossRef]
  17. Picard, E. Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives. J. Math. Pures Appl. 1890, 6, 145–210. [Google Scholar]
  18. Mann, W.R. Mean value methods in iteration. Proc. Amer. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
  19. Ishikawa, S. Fixed points by a new iteration method. Proc. Amer. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
  20. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef]
  21. Abbas, M.; Nazir, T. A new faster iteration process applied to constrained minimization and feasibility problems. Mate. Vesnik 2014, 66, 223–234. [Google Scholar]
  22. Thakur, B.S.; Thakur, D.; Postolache, M. A new iteration scheme for approximating fixed points of nonexpansive mappings. Filomat 2015, 30, 2711–2720. [Google Scholar] [CrossRef]
  23. Rahimi, A.; Rezaei, A.; Daraby, B.; Ghasemi, M. A new faster iteration process to fixed points of generalized alpha-nonexpansive mappings in Banach spaces. Int. J. Nonlinear Anal. Appl. 2024, 15, 1–10. [Google Scholar]
  24. Okeke, G.A.; Udo, A.V.; Alqahtani, R.T.; Kaplan, M.; Ahmed, W.E. A novel iterative scheme for solving delay differential equations and third order boundary value problems via Green’s functions. AIMS Math. 2024, 9, 6468–6498. [Google Scholar] [CrossRef]
  25. Chen, T.Q.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D. Neural ordinary differential equations. In Proceedings of the Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Montréal, QC, Canada, 3–8 December 2018; Volume 31. [Google Scholar]
  26. Ghaoui, E.; Gu, F.; Travacca, B.; Askari, A.; Tsai, A. Implicit deep learning. Siam J. Math. Data Sci. 2021, 3, 930–950. [Google Scholar] [CrossRef]
  27. Bai, S.; Kolter, J.Z.; Koltun, V. Deep equilibrium models. arXiv 2019, arXiv:1909.01377. [Google Scholar]
  28. Kawaguchi, K. On the theory of implicit deep learning: Global convergence with implicit layers. arXiv 2021, arXiv:2102.07346. [Google Scholar]
  29. Jafarpour, S.; Davydov, A.; Proskurnikov, A.V.; Bullo, F. Robust implicit networks via non-Euclidean contractions. arXiv 2022, arXiv:2106.03194. [Google Scholar]
  30. Sahu, D.R. Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12, 187–204. [Google Scholar]
  31. Gopi, R.; Pragadeeswarar, V.; De La Sen, M. Thakur’s Iterative Scheme for Approximating Common Fixed Points to a Pair of Relatively Nonexpansive Mappings. J. Math. 2022, 2022, 55377686. [Google Scholar] [CrossRef]
  32. Harder, A.M.; Hicks, T.L. A Stable Iteration Procedure for Nonexpansive Mappings. Math. Japon. 1988, 33, 687–692. [Google Scholar]
  33. Heammerlin, G.; Hoffmann, K.H. Numerical Mathematics; Springer Science and Business Media: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  34. Coman, G.; Rus, I.; Pavel, G.; Rus, I. Introduction in the Operational Equations Theory; Dacia: Cluj-Napoca, Romania, 1976. [Google Scholar]
  35. Weng, X. Fixed point iteration for local strictly pseudo-contractive mapping. Proc. Am. Math. Soc. 1991, 113, 727–731. [Google Scholar] [CrossRef]
  36. Sintunavarat, W.; Pitea, A. On a new iteration scheme for numerical reckoning fixed points of Berinde mappings with convergence analysis. J. Nonlinear Sci. Appl. 2016, 9, 2553–2562. [Google Scholar] [CrossRef]
  37. Ali, F.; Ali, J. Convergence, stability and data dependence of a new iterative algorithm with an application. Comput. Appl. Math. 2020, 39, 267. [Google Scholar] [CrossRef]
  38. Okeke, G.A.; Abbas, M. A solution of delay differential equations via Picard, Krasnoselskii hybrid iterative process. Arab. J. Math. 2017, 6, 21–29. [Google Scholar] [CrossRef]
  39. Agarwal, R.P.; O’Regan, D.; Sahu, D. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  40. Wang, X. Fixed-Point Iterative Method with Eighth-Order Constructed by Undetermined Parameter Technique for Solving Nonlinear Systems. Symmetry 2021, 13, 863. [Google Scholar] [CrossRef]
Figure 1. Convergence rate of iterative schemes [22].
Figure 2. Comparison of iteration processes convergence [22].
Figure 3. Feed-forward networks act by computing J L . Implicit networks add a fixed point condition using R. When R is Garcia-Falset, repeatedly applying R to update a latent variable x k converges to a fixed point x * = R ( x * ; L ( x ) ) .
Table 1. Comparing iteration convergence.
No. of Iterations | Picard | Thakur | Asghar Rahimi | New Iteration
1 | 30 | 30 | 30 | 30
2 | 27.3861278753 | 25.4605027258 | 23.2381722116 | 14.6168424629
3 | 24.8129650132 | 21.0691659384 | 16.8793944946 | 5.5974190240
4 | 22.2891328380 | 16.8924163440 | 11.2697340648 | 5.0032637416
5 | 19.8260093221 | 13.0431036573 | 7.1650814763 | 5.0000149868
6 | 17.4388815498 | 9.7163929865 | 5.3849641099 | 5.0000000688
7 | 15.1486402165 | 7.2125863864 | 5.0454052746 | 5.0000000003
8 | 12.9842003647 | 5.7772680541 | 5.0049230360 | 5.0000000000
9 | 10.9856386670 | 5.2154250367 | 5.0005284002 | 5.0000000000
10 | 9.2070855823 | 5.0535678237 | 5.0000566520 | 5.0000000000
11 | 7.7154333272 | 5.0128902394 | 5.0000060732 | 5.0000000000
12 | 6.5753563754 | 5.0030760332 | 5.0000006510 | 5.0000000000
13 | 5.8123294135 | 5.0007325602 | 5.0000000698 | 5.0000000000
14 | 5.3767273252 | 5.0001743757 | 5.0000000075 | 5.0000000000
15 | 5.1622507474 | 5.0000415029 | 5.0000000008 | 5.0000000000
16 | 5.0670828190 | 5.0000098778 | 5.0000000001 | 5.0000000000
17 | 5.0272091045 | 5.0000023509 | 5.0000000000 | 5.0000000000
18 | 5.0109456945 | 5.0000005595 | 5.0000000000 | 5.0000000000
19 | 5.0043883329 | 5.0000001332 | 5.0000000000 | 5.0000000000
20 | 5.0017569502 | 5.0000000317 | 5.0000000000 | 5.0000000000
21 | 5.0007030393 | 5.0000000075 | 5.0000000000 | 5.0000000000
Table 2. Comparing iteration convergence.
No. of Iterations | Akanimo | Thakur | Asghar Rahimi | New Iteration
1 | 0.1 | 0.1 | 0.1 | 0.1
2 | −0.0644720187 | −0.0869402962 | −0.1630096033 | −0.3803139327
3 | −0.1922289988 | −0.2264595551 | −0.3322517610 | −0.5494920044
4 | −0.2915201418 | −0.3306827011 | −0.4414827416 | −0.6100326607
5 | −0.3687681179 | −0.4086549542 | −0.5122262545 | −0.6318737760
6 | −0.4289375510 | −0.4670790018 | −0.5581749961 | −0.6397788231
7 | −0.4758578291 | −0.5109175395 | −0.5880826744 | −0.6426433848
8 | −0.5124838838 | −0.5438511695 | −0.6075783795 | −0.6436818788
9 | −0.5410994344 | −0.5686166193 | −0.6202998349 | −0.6440584262
10 | −0.5634729424 | −0.5872541993 | −0.6286065855 | −0.6441949665
48 | −0.6442665057 | −0.6442716845 | −0.6442726463 | −0.6442726473
49 | −0.6442678310 | −0.6442719208 | −0.6442726467 | −0.6442726473
50 | −0.6442688702 | −0.6442720992 | −0.6442726469 | −0.6442726473
