Article

Extension of an Eighth-Order Iterative Technique to Address Non-Linear Problems

1
Escuela Politécnica Superior de Zamora, Universidad de Salamanca, Avda. de Requejo 33, 49029 Zamora, Spain
2
Scientific Computing Group, Universidad de Salamanca, Plaza de la Merced, 37008 Salamanca, Spain
3
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4
Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
5
Department of Mathematics, Saveetha School of Engineering, SIMATS, Chennai 602105, India
*
Author to whom correspondence should be addressed.
Axioms 2024, 13(11), 802; https://doi.org/10.3390/axioms13110802
Submission received: 22 October 2024 / Revised: 12 November 2024 / Accepted: 15 November 2024 / Published: 18 November 2024
(This article belongs to the Section Mathematical Analysis)

Abstract

The convergence order of an iterative method used to solve equations is usually determined by means of Taylor series expansions, which in turn require the existence of high-order derivatives that do not necessarily appear in the method. Therefore, such a convergence analysis cannot guarantee the theoretical convergence of the method to a solution if these derivatives do not exist; the method, however, may still converge. This indicates that the sufficient convergence conditions required by the Taylor approach can be replaced by weaker ones. Other drawbacks also exist, such as the lack of information on the isolation of simple solutions or on the number of iterations that must be performed to achieve a desired error tolerance. This paper positively addresses all these issues by considering a technique that uses only the operators appearing in the method, together with Ω-generalized continuity to control the derivative. Moreover, both local and semi-local convergence analyses are presented for Banach space-valued operators. The technique can be used to extend the applicability of other methods along the same lines. A large number of concrete examples in which the convergence conditions are fulfilled are shown.
MSC:
65H10; 65Y20; 65G99; 41A58

1. Introduction

In this study, our goal is to obtain a solution $\varkappa^*$ of the non-linear equation
$P(x) = 0, \qquad (1)$
where $P : D \subseteq G_1 \to G_2$ is assumed to be a differentiable operator in the Fréchet sense, $G_1$ and $G_2$ are Banach spaces, and $D$ is an open and convex set. Only in certain cases is it possible to obtain the solution $\varkappa^*$ exactly. As a result, given certain assumptions on the initial estimate, researchers rely on the construction of iterative methods that produce a sequence converging to $\varkappa^*$. The fixed-point method (also called the successive substitutions or Picard method) is defined by
$x_{n+1} = Q(x_n),$
where $Q : G_1 \to G_1$ is a continuous operator. This method is of convergence order one [1,2,3,4].
Newton’s method [1,2,3,4,5,6] is a well-known one-step iterative procedure, defined for $x_0 \in D$ and each $n = 0, 1, 2, \ldots$ by
$x_{n+1} = x_n - P'(x_n)^{-1} P(x_n).$
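As a point of reference for the implementations used in the numerical sections, here is a minimal sketch of Newton's method for systems, assuming P and dP are user-supplied callables for $P$ and $P'$ (the names are ours):

```python
import numpy as np

def newton(P, dP, x0, tol=1e-12, max_iter=100):
    """Newton's method: solve P'(x_n) s_n = P(x_n) and set x_{n+1} = x_n - s_n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = np.linalg.solve(dP(x), P(x))  # one linear solve per iteration
        x = x - s
        if np.linalg.norm(s) < tol:
            break
    return x
```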
This method has second-order convergence if $x_0$ is chosen close enough to the solution $\varkappa^*$. By adding substeps to a one-step method, higher-order convergence methods (of order greater than two) have been obtained, as found in the literature [7,8,9,10,11,12,13,14]. As an example, the Traub two-step method
$y_n = x_n - P'(x_n)^{-1} P(x_n), \quad x_{n+1} = y_n - P'(x_n)^{-1} P(y_n)$
is of order three, whereas the two-step method
$y_n = x_n - P'(x_n)^{-1} P(x_n), \quad x_{n+1} = y_n - P'(y_n)^{-1} P(y_n)$
is of order four.
is of order four. Furthermore, numerous one- and multi-step methods have been proposed to improve the convergence order and computational cost of Newton’s method [15,16,17,18,19,20,21,22,23].
For a given $x_0 \in D$, let us consider the following multi-step iterative approach to solving (1), given in [24]:
$y_n = x_n - P'(x_n)^{-1} P(x_n),$
$A_n = P'(x_n)^{-1} \big( P'(x_n) - P'(y_n) \big),$
$z_n = y_n - \Big[ I + \Big( I + \tfrac{5}{4} A_n \Big) A_n \Big] P'(x_n)^{-1} P(y_n),$
$x_{n+1} = z_n - \Big[ I + \Big( I + \tfrac{3}{2} A_n \Big) A_n \Big] P'(x_n)^{-1} P(z_n), \quad n = 0, 1, 2, \ldots \qquad (2)$
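To make the scheme concrete, the following is a minimal NumPy/SciPy sketch of one possible implementation of (2). The callables P and dP (standing for $P$ and $P'$) and the helper correct are our own naming, not part of [24]; note that all substeps reuse the single LU factorization of $P'(x_n)$, in line with the one-inversion-per-iteration property discussed below.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def method2(P, dP, x0, tol=1e-12, max_iter=50):
    """Sketch of the eighth-order method (2) for non-linear systems."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        J = dP(x)
        lu = lu_factor(J)                 # the only factorization per iteration
        y = x - lu_solve(lu, P(x))        # first substep (Newton step)
        A = lu_solve(lu, J - dP(y))       # A_n = P'(x_n)^{-1} (P'(x_n) - P'(y_n))

        def correct(w, c):
            # returns w - [I + (I + c A) A] P'(x_n)^{-1} P(w)
            s = lu_solve(lu, P(w))
            return w - (s + A @ (s + c * (A @ s)))

        z = correct(y, 5.0 / 4.0)         # second substep
        x_new = correct(z, 3.0 / 2.0)     # third substep
        if np.linalg.norm(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter
```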
It was shown that this method presents eighth-order convergence, and it is worth highlighting that it only uses one operator inversion per iteration. Its high efficiency was demonstrated by comparison with other methods that have appeared in the literature. However, certain problems limit the applicability of method (2), particularly when Taylor series are used to establish convergence. These problems are as follows.
(P1) The convergence order eight was established in [24] provided that $G_1 = G_2 = \mathbb{R}^k$ and assuming the existence and boundedness of the higher-order derivatives $P^{(i)}$, $i = 2, 3, \ldots, 9$, which do not appear in the formulation of the method. Let us see an example with $G_1 = G_2 = \mathbb{R}$ and $D = [-1.25, 1.25]$. Define the function $p : D \to \mathbb{R}$ by $p(t) = \theta_1 t^2 \log t^2 + \theta_2 t^3 + \theta_3 t^4$ if $t \neq 0$ and $p(t) = 0$ if $t = 0$, where $\theta_1 \neq 0$ and $\theta_2 + \theta_3 = 0$. It is clear that $t^* = 1 \in D$ solves the equation $p(t) = 0$. However, for $j \geq 2$, the derivatives $p^{(j)}(t)$ are not continuous at $t = 0$. Hence, the results in [24] cannot ensure the convergence of the method to $t^* = 1$, although the method does converge when taking, for example, $t_0 = 1.2 \in D$. Consequently, the convergence conditions in [24] can be replaced by weaker ones.
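This behavior can be checked numerically. Below is a small sketch for the concrete choice $\theta_1 = 1$, $\theta_2 = 1$, $\theta_3 = -1$ (our choice, satisfying $\theta_2 + \theta_3 = 0$), using the Newton substep of (2) as a stand-in for the full method:

```python
import numpy as np

def p(t):
    # p(t) = t^2 log t^2 + t^3 - t^4 for t != 0, p(0) = 0;
    # p'' is unbounded near t = 0, so the Taylor hypotheses of [24] fail on D
    return t**2 * np.log(t**2) + t**3 - t**4 if t != 0 else 0.0

def dp(t):
    # p'(t) = 2 t log t^2 + 2 t + 3 t^2 - 4 t^3
    return 2*t*np.log(t**2) + 2*t + 3*t**2 - 4*t**3

t = 1.2  # starting point inside D = [-1.25, 1.25]
for _ in range(8):
    t = t - p(t) / dp(t)
print(t)  # converges to the solution t* = 1
```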
In addition, there are other factors that restrict the applicability of this method.
(P2) No prior error estimates are provided, and the number of iterations that must be performed to achieve a predetermined error tolerance is unknown.
(P3) Since the radius of convergence is not given in [24], selecting initial estimates that guarantee convergence to $\varkappa^*$ is difficult in general.
(P4) The uniqueness of $\varkappa^*$ in a neighborhood around it is not established.
(P5) The study in [24] is restricted to $\mathbb{R}^k$.
(P6) The semi-local convergence analysis, which is in fact the most interesting, is not developed in [24].
Therefore, issues (P1)–(P6) must be addressed in our study. It is also worth noting that Taylor series expansions constitute the dominant technique for the study of multi-step iterative methods, especially when it comes to showing the convergence order.
As a novelty, our study provides the following new insights.
(P1) The new local conditions are based on controlling the first derivative $P'$, which is the only derivative present in the method.
(P2) An a priori estimate of $\|\varkappa^* - x_n\|$ is developed. Thus, the number of iterations to be performed can be known in advance.
(P3) A radius of convergence is provided.
(P4) A set is determined that contains only one solution.
(P5) The studies are provided for Banach space-valued operators.
(P6) A semi-local analysis based on majorizing sequences [2,4] is carried out.
Both types of analyses rely on generalized continuity conditions on $P'$, which are employed to control it [1,2,3]. The same set of conditions is used in both analyses. It is also worth noting that the methodology of this study can be applied to other methods using Taylor series and inverses of linear operators, such as those in [6,15,16,17,18,19,20,21,22,23,25,26].
A detailed historical overview of convergence analysis methods can also be found in [2,4,5,21,26]. The flow chart of the proposed convergence technique is divided into two parts.
(1) Local convergence analysis
Sufficient local conditions are provided to establish the convergence of the method. The iterates are shown to exist in a ball centered at the solution, with a well-defined radius. Convergence is assured provided that the initial point $x_0$ is chosen from this ball. It is also shown that the sequence $\{\|x_n - \varkappa^*\|\}_{n=0}^{\infty}$ converges to zero. Furthermore, a certain ball is specified that contains only one solution of Equation (1).
(2) Semi-local convergence analysis
Scalar sequences majorize (control) the iterates, which are shown to exist in a ball centered at $x_0$ with a certain computable radius. Convergence is established as long as $\|P'(x_0)^{-1} P(x_0)\|$ is small enough. A priori estimates of $\|x_{n+1} - x_n\|$ and $\|\varkappa^* - x_n\|$ determine the number of iterations that must be performed to reach a predetermined error tolerance. The uniqueness of the solution is established in a ball of finite radius centered at $x_0$, completing this type of analysis.
The rest of the study is structured as follows: local and semi-local analyses are developed in Section 2 and Section 3, respectively; Section 4 contains concrete numerical applications; and Section 5 presents some concluding remarks.

2. Analysis of the Local Convergence

Let $T = [0, +\infty)$. The following abbreviations will be used in the analysis.
FCND: a function that is continuous as well as nondecreasing on an interval.
SPS: the smallest positive solution.
The local analysis relies on the following conditions.
(H1) There exists an FCND $\omega_0 : T \to T$ such that the equation $\omega_0(t) - 1 = 0$ has an SPS, denoted by $s_0$. Set $T_0 = [0, s_0)$.
(H2) There exists an FCND $\omega : T_0 \to T$ such that, for $l_1 : T_0 \to T$ defined by
$l_1(t) = \dfrac{\int_0^1 \omega((1-\theta)t)\, d\theta}{1 - \omega_0(t)},$
the equation $l_1(t) - 1 = 0$ has an SPS in $T_0$. Let $r_1$ stand for this SPS.
(H3) The equation $\omega_0(l_1(t)t) - 1 = 0$ has solutions in $T_0$. Let us denote by $s_1$ the SPS in $T_0$. Set $T_1 = [0, s_1)$.
Define the functions $\bar{\omega} : T_1 \to T$ and $l_2 : T_1 \to T$ by
$\bar{\omega}(t) = \omega\big((1 + l_1(t))t\big) \quad \text{or} \quad \omega_0(t) + \omega_0(l_1(t)t)$
and
$l_2(t) = \left[ \dfrac{\int_0^1 \omega((1-\theta)l_1(t)t)\, d\theta}{1 - \omega_0(l_1(t)t)} + \dfrac{\bar{\omega}(t)\Big(1 + \int_0^1 \omega_0(\theta l_1(t)t)\, d\theta\Big)}{\big(1 - \omega_0(t)\big)\big(1 - \omega_0(l_1(t)t)\big)} + \Big(1 + \tfrac{5}{4}\, \dfrac{\bar{\omega}(t)}{1 - \omega_0(t)}\Big) \dfrac{\bar{\omega}(t)\Big(1 + \int_0^1 \omega_0(\theta l_1(t)t)\, d\theta\Big)}{\big(1 - \omega_0(t)\big)^2} \right] l_1(t).$
The choice of $\bar{\omega}(t)$ depends on the functions $\omega_0(t)$ and $\omega(t)$; we will choose the one that provides the largest radius of convergence.
(H4) The equation $l_2(t) - 1 = 0$ has an SPS in $T_1 \setminus \{0\}$, denoted by $r_2$.
(H5) The equation $\omega_0(l_2(t)t) - 1 = 0$ has an SPS in $T_1 \setminus \{0\}$, denoted by $s_2$. Set $T_2 = [0, s_2)$, define the function $\bar{\bar{\omega}} : T_2 \to T$ by
$\bar{\bar{\omega}}(t) = \omega\big((1 + l_2(t))t\big) \quad \text{or} \quad \omega_0(t) + \omega_0(l_2(t)t),$
and define the function $l_3 : T_2 \to T$ by
$l_3(t) = \left[ \dfrac{\int_0^1 \omega((1-\theta)l_2(t)t)\, d\theta}{1 - \omega_0(l_2(t)t)} + \dfrac{\bar{\bar{\omega}}(t)\Big(1 + \int_0^1 \omega_0(\theta l_2(t)t)\, d\theta\Big)}{\big(1 - \omega_0(t)\big)\big(1 - \omega_0(l_2(t)t)\big)} + \Big(1 + \tfrac{3}{2}\, \dfrac{\bar{\omega}(t)}{1 - \omega_0(t)}\Big) \dfrac{\bar{\omega}(t)\Big(1 + \int_0^1 \omega_0(\theta l_2(t)t)\, d\theta\Big)}{\big(1 - \omega_0(t)\big)^2} \right] l_2(t).$
As before, the choice of $\bar{\bar{\omega}}(t)$ depends on the functions $\omega_0(t)$ and $\omega(t)$; we will choose the one that provides the largest radius of convergence.
(H6) The equation $l_3(t) - 1 = 0$ has a solution in $T_2 \setminus \{0\}$. Let us denote by $r_3$ its SPS. Let
$r = \min\{r_i\}, \; i = 1, 2, 3, \quad \text{and} \quad T^* = [0, r). \qquad (3)$
These definitions imply that, for each $t \in T^*$, it holds that
$0 \leq \omega_0(t) < 1, \qquad (4)$
$0 \leq \omega_0(l_1(t)t) < 1, \qquad (5)$
$0 \leq \omega_0(l_2(t)t) < 1, \qquad (6)$
and
$0 \leq l_i(t) < 1, \; i = 1, 2, 3. \qquad (7)$
From now on, by $E(x, \lambda)$ we mean the open ball with center $x \in D$ and radius $\lambda > 0$. Moreover, the closure of $E(x, \lambda)$ is denoted by $E[x, \lambda]$.
Furthermore, we assume the existence of a linear operator related to the functions $\omega_0$ and $\omega$, as defined below:
(H7) There exists $\varkappa^* \in D$ solving the equation $P(x) = 0$, and there exists an invertible linear operator $M$ such that, for each $x \in D$,
$\|M^{-1}(P'(x) - M)\| \leq \omega_0(\|x - \varkappa^*\|).$
Set $D_1 = D \cap E(\varkappa^*, s_0)$.
(H8) For any $\tilde{x}, \bar{x} \in D_1$, we have
$\|M^{-1}(P'(\tilde{x}) - P'(\bar{x}))\| \leq \omega(\|\tilde{x} - \bar{x}\|).$
(H9) $E[\varkappa^*, r] \subseteq D$.
Remark 1.
The choice $M = P'(\varkappa^*)$ is popular, but not necessarily the most flexible one; note that in this case $\varkappa^*$ is a simple solution. We do not adopt such an assumption in conditions (H1)–(H8). Consequently, our approach can be used to obtain solutions of multiplicity greater than one, provided that $M \neq P'(\varkappa^*)$. Other choices can be $M = I$ (the identity operator) or $M = P'(\tilde{x})$, where $\tilde{x} \in D$ is some auxiliary point.
The main local convergence analysis for method (2) is presented below.
Theorem 1.
Suppose that conditions (H1)–(H9) hold. If $x_0 \in E(\varkappa^*, r) \setminus \{\varkappa^*\}$, then the sequence $\{x_n\}_{n=0}^{\infty}$ generated by method (2) is well defined in the ball $E(\varkappa^*, r)$. Moreover, for each $n = 0, 1, 2, \ldots$, it holds that
$\{x_n\} \subset E(\varkappa^*, r), \qquad (8)$
$\|y_n - \varkappa^*\| \leq l_1(\|x_n - \varkappa^*\|)\, \|x_n - \varkappa^*\| \leq \|x_n - \varkappa^*\|, \qquad (9)$
$\|z_n - \varkappa^*\| \leq l_2(\|x_n - \varkappa^*\|)\, \|x_n - \varkappa^*\| \leq \|x_n - \varkappa^*\|, \qquad (10)$
$\|x_{n+1} - \varkappa^*\| \leq l_3(\|x_n - \varkappa^*\|)\, \|x_n - \varkappa^*\| \leq \|x_n - \varkappa^*\|, \qquad (11)$
where the radius of convergence $r$ is given in (3), and the functions $l_i$ are those defined previously.
Proof. 
By hypothesis, $x_0 \in E(\varkappa^*, r) \setminus \{\varkappa^*\} \subset E(\varkappa^*, r)$, so (8) clearly holds if $n = 0$. Let $u \in E(\varkappa^*, r)$. The application of condition (H7) and Formula (3) gives
$\|M^{-1}(P'(u) - M)\| \leq \omega_0(\|u - \varkappa^*\|) \leq \omega_0(r) < 1. \qquad (12)$
The estimate (12), in combination with the Banach perturbation lemma on linear invertible operators [2], implies that $P'(u)^{-1}$ exists and verifies
$\|P'(u)^{-1} M\| \leq \dfrac{1}{1 - \omega_0(\|u - \varkappa^*\|)}. \qquad (13)$
In particular, for $u = x_0$, the linear operator $P'(x_0)^{-1}$ exists. Thus, the iterate $y_0$ exists, and we can write
$y_0 - \varkappa^* = x_0 - \varkappa^* - P'(x_0)^{-1} P(x_0) = \int_0^1 P'(x_0)^{-1} \big[ P'(x_0) - P'\big(\varkappa^* + \theta(x_0 - \varkappa^*)\big) \big]\, d\theta\, (x_0 - \varkappa^*). \qquad (14)$
Using condition (H8), (13) (for $u = x_0$), (7) (for $i = 1$), and Formulas (3) and (14), we arrive at
$\|y_0 - \varkappa^*\| \leq \dfrac{\int_0^1 \omega\big((1-\theta)\|x_0 - \varkappa^*\|\big)\, d\theta\, \|x_0 - \varkappa^*\|}{1 - \omega_0(\|x_0 - \varkappa^*\|)} = l_1(\|x_0 - \varkappa^*\|)\, \|x_0 - \varkappa^*\| \leq \|x_0 - \varkappa^*\| < r. \qquad (15)$
Thus, the iterate $y_0 \in E(\varkappa^*, r)$, and (9) holds if $n = 0$. Note that $z_0$ and $x_1$ are given by the second and third steps of (2), from which we can write
$z_0 - \varkappa^* = \big( y_0 - \varkappa^* - P'(y_0)^{-1} P(y_0) \big) + P'(y_0)^{-1} \big( P'(x_0) - P'(y_0) \big) P'(x_0)^{-1} P(y_0) - \Big( I + \tfrac{5}{4} A_0 \Big) A_0\, P'(x_0)^{-1} P(y_0),$
leading to
$\|z_0 - \varkappa^*\| \leq \left[ \dfrac{\int_0^1 \omega\big((1-\theta)\|y_0 - \varkappa^*\|\big)\, d\theta}{1 - \omega_0(\|y_0 - \varkappa^*\|)} + \dfrac{\bar{\omega}_0 \Big(1 + \int_0^1 \omega_0\big(\theta \|y_0 - \varkappa^*\|\big)\, d\theta\Big)}{\big(1 - \omega_0(\|x_0 - \varkappa^*\|)\big)\big(1 - \omega_0(\|y_0 - \varkappa^*\|)\big)} + \Big(1 + \tfrac{5}{4}\, \dfrac{\bar{\omega}_0}{1 - \omega_0(\|x_0 - \varkappa^*\|)}\Big) \dfrac{\bar{\omega}_0 \Big(1 + \int_0^1 \omega_0\big(\theta \|y_0 - \varkappa^*\|\big)\, d\theta\Big)}{\big(1 - \omega_0(\|x_0 - \varkappa^*\|)\big)^2} \right] \|y_0 - \varkappa^*\| \leq l_2(\|x_0 - \varkappa^*\|)\, \|x_0 - \varkappa^*\| \leq \|x_0 - \varkappa^*\|, \qquad (16)$
where $\bar{\omega}_0 = \bar{\omega}(\|x_0 - \varkappa^*\|)$, and we also use the estimates
$\|M^{-1}(P'(y_0) - P'(x_0))\| \leq \omega(\|y_0 - x_0\|) \leq \omega\big(\|y_0 - \varkappa^*\| + \|x_0 - \varkappa^*\|\big) \leq \omega\big((1 + l_1(\|x_0 - \varkappa^*\|))\|x_0 - \varkappa^*\|\big) = \bar{\omega}_0, \qquad (17)$
or
$\|M^{-1}(P'(y_0) - P'(x_0))\| \leq \|M^{-1}(P'(y_0) - M)\| + \|M^{-1}(P'(x_0) - M)\| \leq \omega_0(\|y_0 - \varkappa^*\|) + \omega_0(\|x_0 - \varkappa^*\|) \leq \omega_0\big(l_1(\|x_0 - \varkappa^*\|)\|x_0 - \varkappa^*\|\big) + \omega_0(\|x_0 - \varkappa^*\|) = \bar{\omega}_0, \qquad (18)$
and
$P(y_0) = P(y_0) - P(\varkappa^*) = \int_0^1 P'\big(\varkappa^* + \theta(y_0 - \varkappa^*)\big)\, d\theta\, (y_0 - \varkappa^*),$
leading to
$\|M^{-1} P(y_0)\| \leq \int_0^1 \big\| M^{-1}\big( P'(\varkappa^* + \theta(y_0 - \varkappa^*)) - M \big) + I \big\|\, d\theta\, \|y_0 - \varkappa^*\| \leq \Big( 1 + \int_0^1 \omega_0\big(\theta \|y_0 - \varkappa^*\|\big)\, d\theta \Big) \|y_0 - \varkappa^*\|. \qquad (19)$
Furthermore, from the last step of (2), we can write
$x_1 - \varkappa^* = \big( z_0 - \varkappa^* - P'(z_0)^{-1} P(z_0) \big) + P'(z_0)^{-1} \big( P'(x_0) - P'(z_0) \big) P'(x_0)^{-1} P(z_0) - \Big( I + \tfrac{3}{2} A_0 \Big) A_0\, P'(x_0)^{-1} P(z_0). \qquad (20)$
Thus, from (7) (for $i = 3$), (13) (for $u = x_0, z_0$), (15)–(19) (with $z_0$ in place of $y_0$), and (20), we arrive at
$\|x_1 - \varkappa^*\| \leq \left[ \dfrac{\int_0^1 \omega\big((1-\theta)\|z_0 - \varkappa^*\|\big)\, d\theta}{1 - \omega_0(\|z_0 - \varkappa^*\|)} + \dfrac{\bar{\bar{\omega}}_0 \Big(1 + \int_0^1 \omega_0\big(\theta \|z_0 - \varkappa^*\|\big)\, d\theta\Big)}{\big(1 - \omega_0(\|x_0 - \varkappa^*\|)\big)\big(1 - \omega_0(\|z_0 - \varkappa^*\|)\big)} + \Big(1 + \tfrac{3}{2}\, \dfrac{\bar{\omega}_0}{1 - \omega_0(\|x_0 - \varkappa^*\|)}\Big) \dfrac{\bar{\omega}_0 \Big(1 + \int_0^1 \omega_0\big(\theta \|z_0 - \varkappa^*\|\big)\, d\theta\Big)}{\big(1 - \omega_0(\|x_0 - \varkappa^*\|)\big)^2} \right] \|z_0 - \varkappa^*\| \leq l_3(\|x_0 - \varkappa^*\|)\, \|x_0 - \varkappa^*\| \leq \|x_0 - \varkappa^*\|, \qquad (21)$
where $\bar{\bar{\omega}}_0 = \bar{\bar{\omega}}(\|x_0 - \varkappa^*\|)$.
Hence, the iterate $x_1 \in E(\varkappa^*, r)$, and (8) and (11) hold for $n = 1$ and $n = 0$, respectively. Proceeding inductively, one can prove (8)–(11), provided that the iterates $x_m, y_m, z_m, x_{m+1}$ replace $x_0, y_0, z_0, x_1$, respectively, in the preceding calculations. Finally, from the estimate
$\|x_{m+1} - \varkappa^*\| \leq c\, \|x_m - \varkappa^*\| \leq \cdots \leq c^{m+1}\, \|x_0 - \varkappa^*\| < r, \qquad (22)$
where $c = l_3(\|x_0 - \varkappa^*\|) \in [0, 1)$, we conclude that the iterate $x_{m+1} \in E(\varkappa^*, r)$ and $\lim_{m \to \infty} x_m = \varkappa^*$. The uniqueness of the solution $\varkappa^*$ in a certain neighborhood containing it is given in the next result. □
Proposition 1.
Suppose that there exists $s_3 > 0$ such that condition (H7) is satisfied in the ball $E(\varkappa^*, s_3)$, and that there exists $s_4 \geq s_3$ such that, for the function $\omega_0$ defined in (H1), it holds that
$\int_0^1 \omega_0(\theta s_4)\, d\theta < 1. \qquad (23)$
Set $D_2 = D \cap E[\varkappa^*, s_4]$. Then, the equation $P(x) = 0$ has a unique solution $\varkappa^*$ in $D_2$.
Proof.
We proceed by contradiction. Suppose that there exists $\varkappa^{**} \in D_2$, $\varkappa^{**} \neq \varkappa^*$, such that $P(\varkappa^{**}) = 0$. Define the linear operator $L_1 = \int_0^1 P'\big(\varkappa^* + \theta(\varkappa^{**} - \varkappa^*)\big)\, d\theta$. It follows from (23) and (H7) that
$\|M^{-1}(L_1 - M)\| \leq \int_0^1 \omega_0\big(\theta \|\varkappa^{**} - \varkappa^*\|\big)\, d\theta \leq \int_0^1 \omega_0(\theta s_4)\, d\theta < 1. \qquad (24)$
Thus, the operator $L_1$ is invertible. Consequently, from the identity
$\varkappa^{**} - \varkappa^* = L_1^{-1}\big( P(\varkappa^{**}) - P(\varkappa^*) \big) = L_1^{-1}(0) = 0,$
we conclude that $\varkappa^{**} = \varkappa^*$. □
Remark 2.
If all conditions (H1)–(H9) hold, then we can set $s_3 = r$ in Proposition 1.

3. Semi-Local Analysis for Method (2)

The conditions for this case, as well as the calculations, are analogous to those of the local case; now, the point $\varkappa^*$ and the functions $\omega_0$ and $\omega$ are replaced by $x_0$ and the functions $v_0$ and $v$, respectively.
Now, we assume the following.
(C1) There exists an FCND $v_0 : T \to T$ such that the equation $v_0(t) - 1 = 0$ has an SPS, denoted by $\rho_0$. Set $T_3 = [0, \rho_0)$.
(C2) There exists an FCND $v : T_3 \to T$ such that the equation $v(t) - 1 = 0$ has an SPS in $T_3$. Define the sequences $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ for $\alpha_0 = 0$, some $\beta_0 \geq 0$, and each $n = 0, 1, 2, \ldots$ by
$\lambda_n = \int_0^1 v\big((1-\theta)(\beta_n - \alpha_n)\big)\, d\theta\, (\beta_n - \alpha_n),$
$\bar{v}_n = v(\beta_n - \alpha_n) \quad \text{or} \quad v_0(\alpha_n) + v_0(\beta_n),$
$\gamma_n = \beta_n + \Big( 1 + \Big( 1 + \tfrac{5}{4}\, \dfrac{\bar{v}_n}{1 - v_0(\alpha_n)} \Big) \dfrac{\bar{v}_n}{1 - v_0(\alpha_n)} \Big) \dfrac{\lambda_n}{1 - v_0(\alpha_n)},$
$\mu_n = \Big( 1 + \int_0^1 v_0\big(\beta_n + \theta(\gamma_n - \beta_n)\big)\, d\theta \Big) (\gamma_n - \beta_n) + \lambda_n,$
$\alpha_{n+1} = \gamma_n + \Big( 1 + \Big( 1 + \tfrac{3}{2}\, \dfrac{\bar{v}_n}{1 - v_0(\alpha_n)} \Big) \dfrac{\bar{v}_n}{1 - v_0(\alpha_n)} \Big) \dfrac{\mu_n}{1 - v_0(\alpha_n)},$
$\delta_{n+1} = \int_0^1 v\big((1-\theta)(\alpha_{n+1} - \alpha_n)\big)\, d\theta\, (\alpha_{n+1} - \alpha_n) + \big( 1 + v_0(\alpha_n) \big) (\alpha_{n+1} - \beta_n),$
and
$\beta_{n+1} = \alpha_{n+1} + \dfrac{\delta_{n+1}}{1 - v_0(\alpha_{n+1})}. \qquad (26)$
Note that the sequences $\{\alpha_n\}$, $\{\beta_n\}$, and $\{\gamma_n\}$ are majorizing for $\{x_n\}$, $\{y_n\}$, and $\{z_n\}$, respectively (see Theorem 2); a computational sketch of these sequences is given after Remark 3.
(C3) There exists $\rho_1 \in [0, \rho_0)$ such that, for each $n = 0, 1, 2, \ldots$,
$v_0(\alpha_n) < 1 \quad \text{and} \quad \alpha_n \leq \rho_1.$
By adopting the above conditions in (26), we obtain
$0 \leq \alpha_n \leq \beta_n \leq \gamma_n \leq \alpha_{n+1} \leq \rho_1,$
and there exists $\alpha^* \in [0, \rho_1]$ such that $\lim_{n \to \infty} \alpha_n = \alpha^*$.
(C4) There exist a point $x_0 \in D$ and an invertible linear operator $K$ such that, for each $x \in D$,
$\|K^{-1}(P'(x) - K)\| \leq v_0(\|x - x_0\|).$
Set $D_3 = D \cap E(x_0, \rho_0)$. Notice that, for $x = x_0$, we have
$\|K^{-1}(P'(x_0) - K)\| \leq v_0(0) < 1.$
Thus, $P'(x_0)$ is invertible. Hence, we can take $\beta_0 \geq \|P'(x_0)^{-1} P(x_0)\|$.
(C5) For each $\bar{x}, \tilde{x} \in D_3$, we have
$\|K^{-1}(P'(\tilde{x}) - P'(\bar{x}))\| \leq v(\|\tilde{x} - \bar{x}\|).$
(C6) $E[x_0, \alpha^*] \subseteq D$.
Remark 3.
We can take $K = P'(x_0)$, although this is not the most flexible choice. Other choices can be $K = I$ or $K = P'(\bar{x})$, where $\bar{x} \in D$ is an auxiliary point.
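Before stating the main result, the following sketch (the function names and the quadrature choice are ours) shows how the scalar sequences of condition (C2) can be generated for given FCNDs v0 and v; condition (C3) can then be checked by monitoring $v_0(\alpha_n) < 1$ and the boundedness of $\alpha_n$:

```python
from scipy.integrate import quad

def majorizing(v0, v, beta0, n_steps=20):
    """Generate (alpha_n, beta_n) from (26), with alpha_0 = 0 and beta_0 given."""
    a, b = 0.0, beta0
    out = [(a, b)]
    for _ in range(n_steps):
        d = 1.0 - v0(a)                      # must stay positive, by (C3)
        lam = quad(lambda th: v((1 - th) * (b - a)), 0, 1)[0] * (b - a)
        vbar = v(b - a)                      # one admissible choice of vbar_n
        g = b + (1 + (1 + 1.25 * vbar / d) * vbar / d) * lam / d
        mu = (1 + quad(lambda th: v0(b + th * (g - b)), 0, 1)[0]) * (g - b) + lam
        a_next = g + (1 + (1 + 1.5 * vbar / d) * vbar / d) * mu / d
        delta = quad(lambda th: v((1 - th) * (a_next - a)), 0, 1)[0] * (a_next - a) \
                + (1 + v0(a)) * (a_next - b)
        a, b = a_next, a_next + delta / (1.0 - v0(a_next))
        out.append((a, b))
    return out
```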
The main semi-local analysis of convergence follows for method (2).
Theorem 2.
Suppose that conditions (C1)–(C6) hold. Then, the sequence $\{x_n\}_{n=0}^{\infty}$ generated by method (2) is well defined and it holds that
$\{x_n\} \subset E(x_0, \alpha^*), \qquad (27)$
$\|y_n - x_n\| \leq \beta_n - \alpha_n, \qquad (28)$
$\|z_n - y_n\| \leq \gamma_n - \beta_n, \qquad (29)$
and
$\|x_{n+1} - z_n\| \leq \alpha_{n+1} - \gamma_n. \qquad (30)$
Furthermore, there exists $\varkappa^* \in E[x_0, \alpha^*]$ solving Equation (1) such that
$\|\varkappa^* - x_n\| \leq \alpha^* - \alpha_n. \qquad (31)$
Proof. 
As in the local analysis, mathematical induction is employed to prove items (27)–(30). From the definition of $\beta_0$, (26), and the first substep of method (2), we obtain
$\|y_0 - x_0\| = \|P'(x_0)^{-1} P(x_0)\| \leq \beta_0 = \beta_0 - \alpha_0 < \alpha^*.$
Thus, we have that $y_0 \in E(x_0, \alpha^*)$, and (28) holds if $m = 0$. Let $u \in E(x_0, \alpha^*)$. From the definition of $\alpha^*$, (26), and condition (C4), we obtain
$\|K^{-1}(P'(u) - K)\| \leq v_0(\|u - x_0\|) \leq v_0(\alpha^*) < 1.$
Therefore, the linear operator $P'(u)$ is invertible, and
$\|P'(u)^{-1} K\| \leq \dfrac{1}{1 - v_0(\|u - x_0\|)}. \qquad (32)$
If $u = x_0$ in (32), then $P'(x_0)$ is invertible. Consequently, the iterates $y_0$, $z_0$, and $x_1$ are well defined. We can write
$P(y_m) = P(y_m) - P(x_m) - P'(x_m)(y_m - x_m) = \int_0^1 \big[ P'\big(x_m + \theta(y_m - x_m)\big) - P'(x_m) \big]\, d\theta\, (y_m - x_m).$
Hence, we obtain
$\|K^{-1} P(y_m)\| \leq \int_0^1 v\big((1-\theta)\|y_m - x_m\|\big)\, d\theta\, \|y_m - x_m\| \leq \int_0^1 v\big((1-\theta)(\beta_m - \alpha_m)\big)\, d\theta\, (\beta_m - \alpha_m) = \lambda_m, \qquad (33)$
$\|K^{-1}(P'(y_m) - P'(x_m))\| \leq v(\|y_m - x_m\|) \leq v(\beta_m - \alpha_m) \leq \bar{v}_m, \qquad (34)$
or
$\|K^{-1}(P'(y_m) - P'(x_m))\| \leq \|K^{-1}(P'(y_m) - K)\| + \|K^{-1}(P'(x_m) - K)\| \leq v_0(\|y_m - x_0\|) + v_0(\|x_m - x_0\|) \leq v_0(\beta_m) + v_0(\alpha_m) = \bar{v}_m. \qquad (35)$
Then, from the second substep of (2), (26), (32) (for $u = x_m$), and (33)–(35), we have that
$\|z_m - y_m\| \leq \Big( 1 + \Big( 1 + \tfrac{5}{4}\, \dfrac{\bar{v}_m}{1 - v_0(\alpha_m)} \Big) \dfrac{\bar{v}_m}{1 - v_0(\alpha_m)} \Big) \dfrac{\lambda_m}{1 - v_0(\alpha_m)} = \gamma_m - \beta_m \qquad (36)$
and
$\|z_m - x_0\| \leq \|z_m - y_m\| + \|y_m - x_0\| \leq \gamma_m - \beta_m + \beta_m - \alpha_0 = \gamma_m < \alpha^*.$
Thus, the iterate $z_m \in E(x_0, \alpha^*)$, and (29) holds if $m = 0$. We can write
$P(z_m) = P(z_m) - P(y_m) + P(y_m).$
Therefore, we obtain
$\|K^{-1} P(z_m)\| \leq \Big( 1 + \int_0^1 v_0\big(\beta_m + \theta(\gamma_m - \beta_m)\big)\, d\theta \Big) (\gamma_m - \beta_m) + \lambda_m = \mu_m, \qquad (37)$
where we have used
$P(z_m) - P(y_m) = \int_0^1 P'\big(y_m + \theta(z_m - y_m)\big)\, d\theta\, (z_m - y_m),$
which leads to
$\|K^{-1}(P(z_m) - P(y_m))\| \leq \int_0^1 \big\| K^{-1}\big( P'(y_m + \theta(z_m - y_m)) - K \big) + I \big\|\, d\theta\, \|z_m - y_m\| \leq \Big( 1 + \int_0^1 v_0\big(\|y_m - x_0\| + \theta \|z_m - y_m\|\big)\, d\theta \Big) \|z_m - y_m\| \leq \Big( 1 + \int_0^1 v_0\big(\beta_m + \theta(\gamma_m - \beta_m)\big)\, d\theta \Big) (\gamma_m - \beta_m). \qquad (38)$
Next, from the third substep of method (2), (26), and (37), we obtain in turn
$\|x_{m+1} - z_m\| \leq \big\| I + \big( I + \tfrac{3}{2} A_m \big) A_m \big\|\, \|P'(x_m)^{-1} K\|\, \|K^{-1} P(z_m)\| \leq \Big( 1 + \Big( 1 + \tfrac{3}{2}\, \dfrac{\bar{v}_m}{1 - v_0(\alpha_m)} \Big) \dfrac{\bar{v}_m}{1 - v_0(\alpha_m)} \Big) \dfrac{\mu_m}{1 - v_0(\alpha_m)} = \alpha_{m+1} - \gamma_m$
and
$\|x_{m+1} - x_0\| \leq \|x_{m+1} - z_m\| + \|z_m - x_0\| \leq \alpha_{m+1} - \gamma_m + \gamma_m - \alpha_0 = \alpha_{m+1} < \alpha^*.$
Thus, the iterate $x_{m+1} \in E(x_0, \alpha^*)$, and (30) holds.
We can also write
$P(x_{m+1}) = P(x_{m+1}) - P(x_m) - P'(x_m)(x_{m+1} - x_m) + P'(x_m)(x_{m+1} - y_m), \qquad (39)$
so that
$\|K^{-1} P(x_{m+1})\| \leq \bar{\delta}_{m+1} \leq \delta_{m+1},$
where $\bar{\delta}_{m+1}$ denotes the bound obtained from (39) by taking norms, leading to
$\|y_{m+1} - x_{m+1}\| \leq \|P'(x_{m+1})^{-1} K\|\, \|K^{-1} P(x_{m+1})\| \leq \dfrac{\bar{\delta}_{m+1}}{1 - v_0(\|x_{m+1} - x_0\|)} \leq \dfrac{\delta_{m+1}}{1 - v_0(\alpha_{m+1})} = \beta_{m+1} - \alpha_{m+1} \qquad (40)$
and
$\|y_{m+1} - x_0\| \leq \|y_{m+1} - x_{m+1}\| + \|x_{m+1} - x_0\| \leq \beta_{m+1} - \alpha_{m+1} + \alpha_{m+1} - \alpha_0 = \beta_{m+1} < \alpha^*.$
This completes the induction. The sequence $\{\alpha_m\}$ is nondecreasing and bounded above by $\rho_1$, and hence convergent; since $\|x_{m+1} - x_m\| \leq \alpha_{m+1} - \alpha_m$, the sequence $\{x_m\}$ is Cauchy in the Banach space $G_1$, and thus converges to some $\varkappa^* \in E[x_0, \alpha^*]$. Taking $m \to +\infty$ in (39) and using the continuity of $P$, we deduce that $P(\varkappa^*) = 0$. Finally, the estimate (31) follows from
$\|x_{m+j} - x_m\| \leq \alpha_{m+j} - \alpha_m$
by letting $j \to +\infty$. □
As in the local analysis, the uniqueness of the solution of the equation $P(x) = 0$ can be obtained as follows.
Proposition 2.
Suppose that there exists a solution $y^* \in E(x_0, \rho_2)$ of the equation $P(x) = 0$ for some $\rho_2 > 0$, that condition (C4) holds in the ball $E(x_0, \rho_2)$, and that there exists $\rho_3 \geq \rho_2$ such that
$\int_0^1 v_0\big((1-\theta)\rho_2 + \theta \rho_3\big)\, d\theta < 1. \qquad (41)$
Set $D_4 = D \cap E[x_0, \rho_3]$. Then, the equation $P(x) = 0$ has only one solution in $D_4$.
Proof.
Suppose that there also exists $z^* \in D_4$, $z^* \neq y^*$, such that $P(z^*) = 0$. Define the linear operator $L_2 = \int_0^1 P'\big(y^* + \theta(z^* - y^*)\big)\, d\theta$. It follows from (C4) and (41) that
$\|K^{-1}(L_2 - K)\| \leq \int_0^1 v_0\big((1-\theta)\|y^* - x_0\| + \theta \|z^* - x_0\|\big)\, d\theta \leq \int_0^1 v_0\big((1-\theta)\rho_2 + \theta \rho_3\big)\, d\theta < 1.$
Therefore, the linear operator $L_2$ is invertible, and from $0 = P(z^*) - P(y^*) = L_2(z^* - y^*)$ we conclude that $z^* = y^*$. □
Remark 4.
(i) The limit point $\alpha^*$ can be replaced by $\rho_0$ in condition (C6).
(ii) If all conditions (C1)–(C6) hold, then we can set $\rho_2 = \alpha^*$ and $y^* = \varkappa^*$ in Proposition 2.

4. Numerical Examples

In this section, we present computational results obtained through a variety of numerical examples. Two of them are academic in nature, while the remaining four involve applied science problems, including a Hammerstein integral equation, Fisher's equation, and a boundary value problem (BVP). The corresponding results are collected in Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. In addition, we compute the computational order of convergence (COC) based on the formula
$\lambda = \dfrac{\ln\big( \|x_{j+1} - \varkappa^*\| / \|x_j - \varkappa^*\| \big)}{\ln\big( \|x_j - \varkappa^*\| / \|x_{j-1} - \varkappa^*\| \big)}, \quad \text{for } j = 1, 2, \ldots, \qquad (42)$
or the approximate computational order of convergence (ACOC) [25,26] given by
$\lambda^* = \dfrac{\ln\big( \|x_{j+1} - x_j\| / \|x_j - x_{j-1}\| \big)}{\ln\big( \|x_j - x_{j-1}\| / \|x_{j-1} - x_{j-2}\| \big)}, \quad \text{for } j = 2, 3, \ldots. \qquad (43)$
The stopping criteria and error tolerance are established on the following basis:
(i) $\|x_{j+1} - x_j\| < \epsilon$;
(ii) $\|P(x_j)\| < \epsilon$, where $\epsilon = 10^{-400}$.
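In practice, $\lambda^*$ can be computed directly from the stored iterates; a small sketch (the helper name is ours) follows:

```python
import numpy as np

def acoc(xs):
    """ACOC (43) from the last four iterates in the list xs."""
    e1 = np.linalg.norm(xs[-1] - xs[-2])
    e2 = np.linalg.norm(xs[-2] - xs[-3])
    e3 = np.linalg.norm(xs[-3] - xs[-4])
    return np.log(e1 / e2) / np.log(e2 / e3)
```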
Mathematica 11 with multiple-precision arithmetic is used for the computational results.
The first two examples are used to validate local convergence through conditions (H1)–(H9), provided that $M = P'(\varkappa^*)$.
Example 1.
Let us assume that $G_1 = G_2 = \mathbb{R}^3$ and $D = E(0, 1)$. In addition, we choose $P$ on $D$, with $u = (u_1, u_2, u_3)^T$, as
$P(u) = \Big( u_3,\; \dfrac{e-1}{2}\, u_2^2 + u_2,\; e^{u_1} - 1 \Big)^T. \qquad (44)$
It follows from (44) that
$P'(u) = \begin{pmatrix} 0 & 0 & 1 \\ 0 & (e-1)u_2 + 1 & 0 \\ e^{u_1} & 0 & 0 \end{pmatrix}.$
It is straightforward to see that $\varkappa^* = (0, 0, 0)^T$. Then, conditions (H1)–(H9) hold, taking
$\omega_0(t) = (e-1)t \quad \text{and} \quad \omega(t) = e^{\frac{1}{e-1}}\, t.$
In Table 1, we collect the obtained radii of convergence for Example 1.
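The entries of Table 1 can be reproduced by solving the scalar equations of (H1)–(H6) numerically; a sketch for $s_0$ and $r_1$ (using SciPy's brentq as our root-finder of choice) is shown below:

```python
import numpy as np
from scipy.optimize import brentq

e = np.e
w0 = lambda t: (e - 1.0) * t                 # omega_0(t)
w  = lambda t: np.exp(1.0 / (e - 1.0)) * t   # omega(t)

s0 = brentq(lambda t: w0(t) - 1.0, 1e-12, 1.0)        # ~0.58198
# l1(t) = (int_0^1 w((1 - th) t) dth) / (1 - w0(t)) = w(t) / (2 (1 - w0(t)))
# for this linear omega
l1 = lambda t: 0.5 * w(t) / (1.0 - w0(t))
r1 = brentq(lambda t: l1(t) - 1.0, 1e-12, s0 - 1e-9)  # ~0.38269
print(s0, r1)
```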
Example 2.
We consider $G_1 = G_2 = C[0, 1]$ and $D = E(0, 1)$, together with the well-known Hammerstein non-linear integral operator $P$ given by
$P(w)(x) = w(x) - 2 \int_0^1 x\, \xi\, w(\xi)^3\, d\xi.$
Then, we obtain
$\big(P'(w)(p)\big)(x) = p(x) - 6 \int_0^1 x\, \xi\, w(\xi)^2\, p(\xi)\, d\xi,$
for each $p \in C[0, 1]$.
By substituting these values in conditions (H1)–(H9), we obtain
$\omega_0(t) = 6t \quad \text{and} \quad \omega(t) = 12t.$
The radii of convergence when using (2) for Example 2 are depicted in Table 2.
The rest of the examples test the performance of method (2).
Example 3.
Let us consider the following boundary value problem [2]:
$w'' + \mu^2 (w')^2 + 1 = 0, \qquad (45)$
with $w(0) = 0$ and $w(1) = 1$. The interval $[0, 1]$ is uniformly divided into $l$ subintervals, which yields
$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{l-1} < \gamma_l = 1, \quad \gamma_{j+1} = \gamma_j + h, \quad h = \dfrac{1}{l}.$
Then, we can choose $w_0 = w(\gamma_0) = 0$, $w_1 = w(\gamma_1)$, $\ldots$, $w_{l-1} = w(\gamma_{l-1})$, $w_l = w(\gamma_l) = 1$. Using the classical second-order approximations of the derivatives
$w'_\theta = \dfrac{w_{\theta+1} - w_{\theta-1}}{2h}, \quad w''_\theta = \dfrac{w_{\theta-1} - 2w_\theta + w_{\theta+1}}{h^2},$
we discretize (45) and obtain the following non-linear system of equations:
$w_{\theta-1} - 2w_\theta + w_{\theta+1} + \dfrac{\mu^2}{4}\, (w_{\theta+1} - w_{\theta-1})^2 + h^2 = 0, \quad \theta = 1, 2, \ldots, l-1. \qquad (46)$
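A vectorized residual for system (46) can be written as follows (a sketch with our own naming; its Jacobian can be supplied analytically or by differences and then fed to an implementation of (2)):

```python
import numpy as np

def bvp_residual(w, l=71, mu=0.5):
    """Residual of (46); w holds the interior unknowns w_1, ..., w_{l-1},
    with the boundary values w_0 = 0 and w_l = 1 fixed."""
    h = 1.0 / l
    full = np.concatenate(([0.0], w, [1.0]))
    wm, wc, wp = full[:-2], full[1:-1], full[2:]   # w_{t-1}, w_t, w_{t+1}
    return wm - 2.0 * wc + wp + (mu**2 / 4.0) * (wp - wm)**2 + h**2
```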
In Table 3, we present the computational results after solving the system (46) with method (2), taking $l = 71$ and $\mu = \frac{1}{2}$. The approximate root after five iterations is
x 5 = ( 0.02371 , 0.04708 , 0.07012 , 0.09283 , 0.1152 , 0.1372 , 0.1590 , 0.1804 , 0.2015 , 0.2223 , 0.2429 , 0.2631 , 0.2830 , 0.3026 , 0.3219 , 0.3409 , 0.3597 , 0.3781 , 0.3963 , 0.4142 , 0.4318 , 0.4491 , 0.4662 , 0.4830 , 0.4995 , 0.5158 , 0.5318 , 0.5475 , 0.5630 , 0.5782 , 0.5932 , 0.6079 , 0.6224 , 0.6366 , 0.6506 , 0.6643 , 0.6777 , 0.6910 , 0.7040 , 0.7167 , 0.7292 , 0.7415 , 0.7535 , 0.7653 , 0.7769 , 0.7883 , 0.7994 , 0.8103 , 0.8209 , 0.8313 , 0.8415 , 0.8515 , 0.8613 , 0.8708 , 0.8801 , 0.8892 , 0.8981 , 0.9068 , 0.9152 , 0.9234 , 0.9315 , 0.9393 , 0.9468 , 0.9542 , 0.9614 , 0.9683 , 0.9751 , 0.9816 , 0.9879 , 0.9940 ) T
Example 4.
Let us consider the well-known Fisher's problem [6], which is given by
$\mu_t = \eta\, \mu_{xx} + \mu (1 - \mu), \qquad (47)$
with initial condition and homogeneous Neumann boundary conditions
$\mu(x, 0) = 1.5 + 0.5 \cos(\pi x), \; 0 \leq x \leq 1; \quad \mu_x(0, t) = 0, \; \mu_x(1, t) = 0, \; 0 \leq t \leq 1,$
where $\eta$ is a diffusion parameter. We apply a finite difference discretization to (47), considering a mesh with $N_1$ points in the spatial direction and $N_2$ points in the temporal direction. At the grid points of the mesh, $w_{i,j}$ denotes the approximate value of the solution, i.e., $w_{i,j} \approx \mu(x_i, t_j)$. The corresponding step sizes are $h = 1/N_1$ and $l = 1/N_2$, respectively. Using the following centered, backward, and forward approximations of the derivatives
$\mu_{xx}(x_i, t_j) \approx \dfrac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2}, \quad \mu_t(x_i, t_j) \approx \dfrac{w_{i,j} - w_{i,j-1}}{l}, \quad \mu_x(x_i, t_j) \approx \dfrac{w_{i+1,j} - w_{i,j}}{h},$
we obtain a discretization of problem (47) given by the system
$\dfrac{w_{i,j} - w_{i,j-1}}{l} - w_{i,j}\,(1 - w_{i,j}) - \eta\, \dfrac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h^2} = 0, \quad i = 1, 2, \ldots, N_1; \; j = 1, 2, \ldots, N_2.$
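One plausible realization of this discretization is sketched below (the node placement and the mirroring used for the Neumann conditions are our assumptions; the original experiments may treat the boundary differently):

```python
import numpy as np

def fisher_residual(W, eta=1.0, N1=11, N2=11):
    """Residual of the discretized Fisher problem; W is an N1-by-N2 array
    of unknowns w_{i,j}. The initial condition enters at j = 0 and the
    Neumann conditions are imposed by mirroring the neighboring value."""
    h, l = 1.0 / N1, 1.0 / N2
    x = np.arange(N1) * h                       # assumed spatial nodes
    R = np.zeros_like(W)
    for j in range(N2):
        past = W[:, j - 1] if j > 0 else 1.5 + 0.5 * np.cos(np.pi * x)
        for i in range(N1):
            left = W[i - 1, j] if i > 0 else W[i + 1, j]        # mirror at x = 0
            right = W[i + 1, j] if i < N1 - 1 else W[i - 1, j]  # mirror at x = 1
            R[i, j] = (W[i, j] - past[i]) / l \
                      - W[i, j] * (1.0 - W[i, j]) \
                      - eta * (right - 2.0 * W[i, j] + left) / h**2
    return R.ravel()
```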
Choosing N 1 = N 2 = 11 , we obtain a system of 121 equations with 121 unknowns. For the numerical computations, we take η = 1 . The approximate solution of this non-linear system after five iterations of method (2) is given below:
x 5 = ( 1.645 , 1.473 , 1.375 , 1.312 , 1.269 , 1.236 , 1.210 , 1.188 , 1.169 , 1.153 , 1.138 , 1.623 , 1.464 , 1.370 , 1.310 , 1.268 , 1.236 , 1.210 , 1.188 , 1.169 , 1.153 , 1.138 , 1.583 , 1.445 , 1.361 , 1.306 , 1.266 , 1.235 , 1.209 , 1.188 , 1.169 , 1.153 , 1.138 , 1.528 , 1.419 , 1.349 , 1.300 , 1.263 , 1.233 , 1.208 , 1.187 , 1.169 , 1.153 , 1.138 , 1.463 , 1.388 , 1.334 , 1.292 , 1.259 , 1.231 , 1.208 , 1.187 , 1.169 , 1.153 , 1.138 , 1.395 , 1.355 , 1.317 , 1.284 , 1.255 , 1.229 , 1.207 , 1.186 , 1.168 , 1.152 , 1.138 , 1.328 , 1.322 , 1.301 , 1.276 , 1.251 , 1.227 , 1.206 , 1.186 , 1.168 , 1.152 , 1.138 , 1.268 , 1.292 , 1.286 , 1.269 , 1.248 , 1.226 , 1.205 , 1.186 , 1.168 , 1.152 , 1.138 , 1.219 , 1.267 , 1.274 , 1.263 , 1.245 , 1.224 , 1.204 , 1.185 , 1.168 , 1.152 , 1.138 , 1.185 , 1.250 , 1.265 , 1.258 , 1.242 , 1.223 , 1.204 , 1.185 , 1.168 , 1.152 , 1.138 , 1.168 , 1.240 , 1.261 , 1.256 , 1.241 , 1.223 , 1.203 , 1.185 , 1.168 , 1.152 , 1.138 ) T .
We illustrate the numerical results in Table 4.
Example 5.
We study a Hammerstein non-linear integral equation (details can be found in [2] (pp. 19–20)), a standard applied science example in computational analysis:
$x(s) = 1 + \dfrac{1}{5} \int_0^1 G(s, t)\, x(t)^3\, dt,$
where $x \in C[0, 1]$, $s, t \in [0, 1]$, and the kernel $G$ is
$G(s, t) = \begin{cases} (1 - s)\, t, & t \leq s, \\ s\, (1 - t), & s \leq t. \end{cases}$
The Gauss–Legendre quadrature formula can be used to transform the above equation into a finite-dimensional problem. We approximate the integral by $\int_0^1 g(t)\, dt \approx \sum_{j=1}^{10} w_j\, g(t_j)$, considering appropriate weights $w_j$ and abscissas $t_j$; for 10 nodes, they are depicted in Table 5. Adopting $x_i$ ($i = 1, 2, \ldots, 10$) to represent the approximations of $x(t_i)$, we obtain the following system of non-linear equations:
$5 x_i - 5 - \sum_{j=1}^{10} a_{ij}\, x_j^3 = 0,$
where
$a_{ij} = \begin{cases} w_j\, t_j\, (1 - t_i), & j \leq i, \\ w_j\, t_i\, (1 - t_j), & i < j. \end{cases}$
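The finite-dimensional system can be assembled directly from the nodes and weights; a sketch (our naming) follows, where the nodes on $[0, 1]$ may be obtained from NumPy's Gauss–Legendre rule:

```python
import numpy as np

def hammerstein_system(x, t, w):
    """Residual 5 x_i - 5 - sum_j a_ij x_j^3, with a_ij as defined above."""
    n = len(t)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = w[j] * t[j] * (1.0 - t[i]) if j <= i else w[j] * t[i] * (1.0 - t[j])
    return 5.0 * x - 5.0 - A @ x**3

# nodes/weights mapped from [-1, 1] to [0, 1]:
g, gw = np.polynomial.legendre.leggauss(10)
t, w = (g + 1.0) / 2.0, gw / 2.0
```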
In Table 6, we collect the obtained data with method (2). The approximate root after four iterations is
x 4 = ( 1.001 , 1.006 , 1.014 , 1.021 , 1.026 , 1.026 , 1.021 , 1.014 , 1.006 , 1.0013 ) T .
Example 6.
Finally, we examine a larger system of non-linear equations, consisting of 300 equations in 300 unknowns. This example highlights the method's ability to handle significant computational challenges and demonstrates its applicability to a wide range of practical applications involving large-scale non-linear systems. We consider the following system:
$P(X) = \begin{cases} x_j^2\, x_{j+1} - 1 = 0, & 1 \leq j \leq 299, \\ x_{300}^2\, x_1 - 1 = 0, & j = 300. \end{cases}$
The desired zero of this problem is $\varkappa^* = (1, 1, \ldots, 1)^T$. The number of iterations, the CPU time, the norm of the function at the corresponding point, the absolute residual error, and the COC for Example 6 are shown in Table 7.
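A compact residual/Jacobian pair for this system (our naming; the Jacobian is sparse, with a diagonal, a superdiagonal, and one corner entry) can be written as:

```python
import numpy as np

def P6(x):
    """x_j^2 x_{j+1} - 1, with the index wrapping around at j = 300."""
    return x**2 * np.roll(x, -1) - 1.0

def dP6(x):
    n = x.size
    J = np.zeros((n, n))
    idx = np.arange(n)
    J[idx, idx] = 2.0 * x * np.roll(x, -1)   # d/dx_j
    J[idx, (idx + 1) % n] = x**2             # d/dx_{j+1} (corner entry for j = 300)
    return J

x0 = np.full(300, 0.95)                      # starting point used in Table 7
```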

5. Concluding Remarks

In this study, certain drawbacks are identified that limit the applicability of iterative methods when the usual Taylor series expansion approach is utilized to demonstrate convergence. Motivated by these issues, and in order to extend the applicability of iterative methods, a different technique is developed that does not use Taylor series. In this way, both local and semi-local convergence analyses are based solely on the operators that define the methods, which extends their applicability to more abstract settings, such as Banach spaces. Although the technique has been demonstrated with method (2), it can also be used to analyze other methods, such as those in [6,15,16,17,18,19,20,21,22,23,25,26]; this will be the direction of our future studies. In addition, we will try to further weaken the sufficient convergence conditions and even consider necessary ones. Numerous concrete examples have been included to demonstrate the presented approach.

Author Contributions

Conceptualization, R.B., I.K.A. and H.R.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B., I.K.A. and H.R.; formal analysis, R.B., I.K.A., H.R. and H.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B., I.K.A. and H.R.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A., H.R. and H.A.; visualization, R.B., I.K.A., H.R. and H.A.; supervision, R.B., I.K.A. and H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyros, G.I.; Regmi, S.; Argyros, I.K.; George, S. Contemporary Algorithms, 4th ed.; Nova Publisher: New York, NY, USA, 2024. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  3. Argyros, I.K. Unified Convergence Criteria for iterative Banach space valued methods with applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  4. Argyros, I.K. Theory and Applications of Iterative Methods, 2nd ed.; Engineering Series; CRC Press/Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  5. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966. [Google Scholar]
  6. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601. [Google Scholar] [CrossRef]
  7. Ogbereyivwe, O.; Ojo-Orobosa, V. Family of optimal two-step fourth order iterative method and its extension for solving nonlinear equations. J. Interdiscip. Math. 2021, 24, 1347–1365. [Google Scholar] [CrossRef]
  8. Akram, S.; Khalid, M.; Junjua, M.U.D.; Altaf, S.; Kumar, S. Extension of King’s iterative scheme by means of memory for nonlinear equations. Symmetry 2023, 15, 1116. [Google Scholar] [CrossRef]
  9. Panday, S.; Mittal, S.K.; Stoenoiu, C.E.; Jäntschi, L. A new adaptive eleventh-order memory algorithm for solving nonlinear equations. Mathematics 2024, 12, 1809. [Google Scholar] [CrossRef]
  10. Sharma, H.; Kansal, M. A modified Chebyshev-Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569. [Google Scholar] [CrossRef]
  11. Wang, X.; Tao, Y. A new Newton method with memory for solving nonlinear equations. Mathematics 2020, 8, 108. [Google Scholar] [CrossRef]
  12. Torkashvand, V. A two-step method adaptive with memory with eighth-order for solving nonlinear equations and its dynamic. Comput. Methods Differ. Equat. 2022, 10, 1007–1026. [Google Scholar] [CrossRef]
  13. Thangkhenpau, G.; Panday, S.; Mittal, S.K.; Jäntschi, L. Novel parametric families of with and without memory iterative methods for multiple roots of nonlinear equations. Mathematics 2023, 11, 2036. [Google Scholar] [CrossRef]
  14. Zheng, Q.; Zhao, X.; Liu, Y. An optimal biparametric multipoint family and its self-acceleration with memory for solving nonlinear equations. Algorithms 2015, 8, 1111–1120. [Google Scholar] [CrossRef]
  15. Li, X.; Mu, C.; Ma, J.; Wang, C. Sixteenth-order method for nonlinear equations. Appl. Math. Comput. 2010, 215, 3754–3758. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Sharma, R. A family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458. [Google Scholar] [CrossRef]
  17. Thukral, R.; Petkovic, M.S. A family of three-point methods of optimal order for solving nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2278–2284. [Google Scholar] [CrossRef]
  18. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra-Ptak’s method with optimal fourth and eighth order of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
  19. Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454. [Google Scholar] [CrossRef]
  20. Liu, L.; Wang, X. New eighth-order methods for solving nonlinear equations. J. Comput. Appl. Math. 2010, 234, 1611–1620. [Google Scholar]
  21. Nedzhibov, G.H. A family of multi-point iterative methods for nonlinear equations. J. Comput. Appl. Math. 2008, 222, 244–250. [Google Scholar] [CrossRef]
  22. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  23. Kou, J.; Li, Y.; Wang, X. Some modification of Newton’s method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152. [Google Scholar] [CrossRef]
  24. Wang, X. Fixed-point iterative method with eighth-order constructed by undetermined parameter technique for solving nonlinear systems. Symmetry 2021, 13, 863. [Google Scholar] [CrossRef]
  25. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J.M. On some computational orders of convergence. Appl. Math. Lett. 2010, 23, 472–478. [Google Scholar] [CrossRef]
  26. Zhanlav, T.; Otgondorj, K.H. Higher order Jarratt-like iterations for solving systems of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849. [Google Scholar] [CrossRef]
Table 1. Example 1: Radii of convergence.
Method (2): $s_0 = 0.58198$, $r_1 = 0.38269$, $s_1 = 0.42236$, $r_2 = 0.21234$, $r_3 = 0.17579$, $r = 0.17579$.
Table 2. Example 2: Radii of convergence.
Method (2): $s_0 = 0.16667$, $r_1 = 0.083333$, $s_1 = 0.103006$, $r_2 = 0.039488$, $r_3 = 0.031547$, $r = 0.031547$.
Table 3. Computational results of Example 3.
Method (2): $x_0 = \big(\tfrac{1001}{1000}, \ldots, \tfrac{1001}{1000}\big)^T$; $\|P(x_n)\| = 1.6 \times 10^{-1494}$; $\|x_{n+1} - x_n\| = 3.4 \times 10^{-1493}$; iterations: 4; $\lambda^* = 6.0364$; CPU time: 291.68.
Table 4. Computational results of Example 4.
Method (2): $x_0 = \big(1 + \tfrac{1}{121}, 1 + \tfrac{2}{121}, \ldots, 1 + \tfrac{121}{121}\big)^T$; $\|P(x_n)\| = 3.3 \times 10^{-1197}$; $\|x_{n+1} - x_n\| = 2.2 \times 10^{-1198}$; iterations: 4; $\lambda^* = 5.9812$; CPU time: 198.29.
Table 5. The values of the abscissas $t_j$ and weights $w_j$.
j    $t_j$    $w_j$
1 0.01304673574141413996101799 0.03333567215434406879678440
2 0.06746831665550774463395165 0.07472567457529029657288816
3 0.16029521585048779688283632 0.10954318125799102199776746
4 0.28330230293537640460036703 0.13463335965499817754561346
5 0.42556283050918439455758700 0.14776211235737643508694649
6 0.57443716949081560544241300 0.14776211235737643508694649
7 0.71669769706462359539963297 0.13463335965499817754561346
8 0.83970478414951220311716368 0.10954318125799102199776746
9 0.93253168334449225536604834 0.07472567457529029657288816
10 0.98695326425858586003898201 0.03333567215434406879678440
Table 6. Computational results of Example 5.
Method (2): $x_0 = \big(\tfrac{9}{10}, \ldots, \tfrac{9}{10}\big)^T$; $\|P(x_n)\| = 4.7 \times 10^{-510}$; $\|x_{n+1} - x_n\| = 9.9 \times 10^{-511}$; iterations: 3; $\lambda^* = 5.9783$; CPU time: 0.457194.
Table 7. Computational results of Example 6.
Method (2): $x_0 = (0.95, \ldots, 0.95)^T$; $\|P(x_n)\| = 7.0 \times 10^{-468}$; $\|x_{n+1} - x_n\| = 2.3 \times 10^{-468}$; iterations: 3; $\lambda = 8.1734$; CPU time: 698.052.
