Article

Order of Convergence, Extensions of Newton–Simpson Method for Solving Nonlinear Equations and Their Dynamics

1 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(2), 163; https://doi.org/10.3390/fractalfract7020163
Submission received: 29 December 2022 / Revised: 2 February 2023 / Accepted: 4 February 2023 / Published: 6 February 2023
(This article belongs to the Special Issue Applications of Iterative Methods in Solving Nonlinear Equations)

Abstract: Local convergence of order three has been established for the Newton–Simpson method (NS), provided that derivatives up to order four exist. However, these derivatives may not exist, and yet the NS method may still converge. For this reason, we recover the convergence order based only on the first two derivatives. Moreover, the semilocal convergence of NS and of some of its extensions, not given before, is developed. Furthermore, the dynamics of these methods are explored with many illustrations. The study contains examples verifying the theoretical conditions.
MSC:
49M15; 47H99; 65J15; 65G99; 41A25

1. Introduction

It is common practice to approximate a solution $a^*$ of the nonlinear Equation (1) using Newton's method:
$$F(a) = 0, \tag{1}$$
where $F : \Omega \subseteq B \to B_1$ is a Fréchet differentiable operator between the Banach spaces $B$ and $B_1$, and $\Omega$ is an open convex set.
Several modifications of Newton's method have been studied [1,2,3,4,5,6,7,8,9,10] to accelerate the convergence (i.e., improve the convergence order) or to reduce the number of functional evaluations per step (i.e., improve the computational efficiency). In [1] (see also [11]), Cordero and Torregrosa considered the following modification of Newton's method, called the Newton–Simpson (NS) method, defined for $n = 0, 1, 2, \ldots$ by
$$b_n = a_n - F'(a_n)^{-1}F(a_n), \qquad a_{n+1} = a_n - 6A_n^{-1}F(a_n), \tag{2}$$
where $A_n = F'(a_n) + 4F'\left(\frac{a_n+b_n}{2}\right) + F'(b_n)$, when $B = B_1 = \mathbb{R}^j$. It is proved in [1] that method (2) is of order three. Recall that an iterative method has order of convergence $p > 0$ if, for $\epsilon_n = a_n - a^*$,
$$\|\epsilon_{n+1}\| \le \sigma \|\epsilon_n\|^p.$$
The parameter $\sigma$ is called the rate of convergence.
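To make the scheme concrete, here is a minimal illustrative sketch (our own, not taken from [1]) of method (2) for a scalar equation, where $A_n$ reduces to $F'(a_n) + 4F'((a_n+b_n)/2) + F'(b_n)$; the test equation $a^3 - 2 = 0$, the starting point and the tolerance are our own choices:

```python
def newton_simpson(F, dF, a0, tol=1e-12, max_iter=50):
    """Method (2): Newton predictor b_n, then Simpson-type corrector."""
    a = a0
    iterates = [a]
    for _ in range(max_iter):
        b = a - F(a) / dF(a)                     # b_n = a_n - F'(a_n)^{-1} F(a_n)
        A = dF(a) + 4 * dF((a + b) / 2) + dF(b)  # A_n: Simpson-type average of F'
        a = a - 6 * F(a) / A                     # a_{n+1} = a_n - 6 A_n^{-1} F(a_n)
        iterates.append(a)
        if abs(F(a)) < tol:
            break
    return iterates

# Approximate the cube root of 2, i.e. the root of F(a) = a^3 - 2.
its = newton_simpson(lambda a: a**3 - 2, lambda a: 3 * a**2, 1.0)
```

Note that each step costs one $F$-evaluation and three $F'$-evaluations, which is the trade-off for the third order discussed below.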
The proof in [1] requires the operator $F$ to be at least four times differentiable, which reduces the applicability of the NS method. Moreover, the analysis in [1] is based on Taylor expansions and is restricted to Euclidean spaces.
The NS method is studied in this paper in the general Banach space setting, and our analysis does not depend on Taylor expansions. Hence, we do not require assumptions on the derivatives of $F$ of order higher than two. In fact, we obtain convergence order three for the NS method using assumptions on the derivatives of $F$ of order up to two only. Thus, our analysis improves the applicability of these methods.
For example, let $B = B_1 = \mathbb{R}$ and $\Omega = [-0.5, 1.5]$. Define $f$ on $\Omega$ by
$$f(s) = \begin{cases} s^4 \log s^2 + s^6 - s^5 & \text{if } s \neq 0 \\ 0 & \text{if } s = 0. \end{cases}$$
Then, we get $f(1) = 0$ and
$$f^{(4)}(s) = 24 \log s^2 + 360 s^2 - 120 s + 100.$$
Obviously, $f^{(4)}(s)$ is not bounded on $\Omega$. Thus, the convergence of method (2) is not guaranteed by the analysis in [1], although it may converge.
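As a quick numerical check of this motivating example, the sketch below (our own illustration) applies method (2), which evaluates only $f$ and $f'$, to this $f$ starting from the assumed initial guess $s_0 = 1.2$; it converges to the solution $s^* = 1$ even though $f^{(4)}$ is unbounded on $\Omega$:

```python
import math

def f(s):
    # f(s) = s^4 log(s^2) + s^6 - s^5 for s != 0, f(0) = 0
    return s**4 * math.log(s**2) + s**6 - s**5 if s != 0 else 0.0

def df(s):
    # f'(s) = 4 s^3 log(s^2) + 2 s^3 + 6 s^5 - 5 s^4 for s != 0
    return 4 * s**3 * math.log(s**2) + 2 * s**3 + 6 * s**5 - 5 * s**4 if s != 0 else 0.0

s = 1.2
for _ in range(20):
    b = s - f(s) / df(s)
    A = df(s) + 4 * df((s + b) / 2) + df(b)
    s = s - 6 * f(s) / A
```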
The main contributions of this paper are: (1) we obtain convergence order three for method (2) under assumptions on $F'$ and $F''$ only, so the applicability of method (2) is extended to problems involving operators whose first and second derivatives exist (the analysis in [1] requires the operator to be at least four times differentiable); (2) we extend method (2) to a method of convergence order five (see (3) below) and to a method of convergence order six (see (4) below); (3) the semilocal convergence of methods (2)–(4) is provided in this paper.
In Section 2, we prove that the method (2) is of order three. Furthermore, we extend method (2) to methods with order of convergence five in Section 3 and order of convergence six in Section 4, using the technique of Cordero et al. [12,13]. The extended fifth order method is defined for $n = 0, 1, 2, \ldots$ as follows:
$$b_n = a_n - F'(a_n)^{-1}F(a_n), \quad c_n = a_n - 6A_n^{-1}F(a_n), \quad a_{n+1} = c_n - F'(b_n)^{-1}F(c_n), \tag{3}$$
and the extended sixth order method is defined for $n = 0, 1, 2, \ldots$ as follows:
$$b_n = a_n - F'(a_n)^{-1}F(a_n), \quad c_n = a_n - 6A_n^{-1}F(a_n), \quad a_{n+1} = c_n - F'(c_n)^{-1}F(c_n). \tag{4}$$
Semilocal convergences of methods (2)–(4) are given in Section 5; numerical examples are given in Section 6. The dynamics of the methods are given in Section 7, and the paper ends with a conclusion in Section 8.
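For a scalar equation, methods (3) and (4) differ from (2) only in the added third substep. The following sketch (our own illustration, with $F(a) = a^3 - 2$ as an assumed test problem) implements all three variants and counts the iterations each needs:

```python
def solve(F, dF, a0, variant=None, tol=1e-14, max_iter=50):
    """variant=None: method (2); 'fifth': method (3); 'sixth': method (4)."""
    a = a0
    for n in range(1, max_iter + 1):
        b = a - F(a) / dF(a)
        A = dF(a) + 4 * dF((a + b) / 2) + dF(b)
        c = a - 6 * F(a) / A
        if variant == "fifth":    # a_{n+1} = c_n - F'(b_n)^{-1} F(c_n)
            a = c - F(c) / dF(b)
        elif variant == "sixth":  # a_{n+1} = c_n - F'(c_n)^{-1} F(c_n)
            a = c - F(c) / dF(c)
        else:                     # method (2) stops at c_n
            a = c
        if abs(F(a)) < tol:
            return a, n
    return a, max_iter

F, dF = (lambda a: a**3 - 2), (lambda a: 3 * a**2)
results = {m: solve(F, dF, 1.0, m) for m in (None, "fifth", "sixth")}
```

The extra substep reuses an already-computed derivative (method (3)) or adds one more (method (4)), which is how the order rises from three to five and six.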

2. Order of Convergence for Method (2)

In this section, the solution $a^* \in \Omega$ is assumed to be simple. The convergence assumptions are:
(a1) $F'(a^*)^{-1} \in L(B_1, B)$,
(a2) $\|F'(a^*)^{-1}(F'(a) - F'(b))\| \le L\|a - b\|$ for all $a, b \in \Omega$,
(a3) $\|F'(a^*)^{-1}F''(b)\| \le L_1$ for all $b \in \Omega$,
(a4) $\|F'(a^*)^{-1}(F''(a) - F''(b))\| \le L_2\|a - b\|$ for all $a, b \in \Omega$, and
(a5) $\|F'(a^*)^{-1}F'(a)\| \le L_3$ for all $a \in \Omega$.
Consider the functions $\varphi, \varphi_1, h_1 : [0, \frac{1}{L}) \to \mathbb{R}$ defined by
$$\varphi(t) = \frac{L}{2(1 - Lt)}, \qquad \varphi_1(t) = \frac{L}{2}\left(1 + \frac{Lt}{2(1 - Lt)}\right)$$
and
$$h_1(t) = \varphi_1(t)\,t - 1.$$
Then $h_1$ is a nondecreasing continuous function satisfying
$$h_1(0) = -1 < 0 \quad \text{and} \quad \lim_{t \to (\frac{1}{L})^-} h_1(t) = +\infty.$$
Therefore, $h_1$ has a smallest zero $r_1 \in (0, \frac{1}{L})$.
Next, we define the functions $\psi_1, \delta_1 : [0, r_1) \to \mathbb{R}$ by
$$\psi_1(t) = \frac{L_1\,\varphi(t)}{2(1 - \varphi_1(t)t)} + \frac{L_2 L_3}{24(1 - \varphi_1(t)t)(1 - Lt)}$$
and $\delta_1(t) = \psi_1(t)t^2 - 1$. Then, $\delta_1$ is a nondecreasing continuous function satisfying $\delta_1(0) = -1 < 0$ and $\lim_{t \to r_1^-} \delta_1(t) = +\infty$. Therefore, $\delta_1$ has a smallest zero $r_2 \in (0, r_1)$. Let
$$r = \min\left\{\frac{2}{3L},\, r_2\right\}.$$
Then, for all $t \in [0, r)$, we have
$$0 \le \varphi(t)t < 1,$$
$$0 \le \varphi_1(t)t < 1$$
and
$$0 \le \psi_1(t)t^2 < 1.$$
In the rest of this paper, we use the notation $B(a^*, \lambda) = \{x \in B : \|x - a^*\| < \lambda\}$ and $\bar{B}(a^*, \lambda) = \{x \in B : \|x - a^*\| \le \lambda\}$ for some $\lambda > 0$ and $a^* \in B$.
Theorem 1. 
Under the assumptions (a1)–(a5), the sequence $\{a_n\}$ defined by (2) with initial point $a_0 \in B(a^*, r) \setminus \{a^*\}$ is well defined, remains in $\bar{B}(a^*, r)$ for $n = 0, 1, 2, \ldots$, and converges to the solution $a^*$ of (1). Moreover, the following estimates hold:
$$\|b_n - a^*\| \le \varphi(r)\|\epsilon_n\|^2 \tag{9}$$
and
$$\|\epsilon_{n+1}\| \le \psi_1(r)\|\epsilon_n\|^3. \tag{10}$$
Proof. 
Mathematical induction is employed for the proof. Suppose $x \in B(a^*, r)$. Then, by (a2), we have
$$\|F'(a^*)^{-1}(F'(x) - F'(a^*))\| \le L\|x - a^*\| \le Lr < 1,$$
so, by the Banach lemma on invertible operators [14], we have
$$\|F'(x)^{-1}F'(a^*)\| \le \frac{1}{1 - L\|x - a^*\|}. \tag{11}$$
The Mean Value Theorem gives
$$F(a_0) = F(a_0) - F(a^*) = \int_0^1 F'(a^* + t(a_0 - a^*))\,dt\,(a_0 - a^*),$$
so, for the method (2), we have
$$b_0 - a^* = a_0 - a^* - F'(a_0)^{-1}F(a_0) = F'(a_0)^{-1}F'(a^*)\int_0^1 F'(a^*)^{-1}\big(F'(a_0) - F'(a^* + t(a_0 - a^*))\big)\,dt\,(a_0 - a^*).$$
Thus, by (11) and (a2), we have
$$\|b_0 - a^*\| \le \frac{L}{2(1 - L\|\epsilon_0\|)}\|\epsilon_0\|^2 = \varphi(\|\epsilon_0\|)\|\epsilon_0\|^2 < \|\epsilon_0\| < r.$$
Therefore, (9) holds for $n = 0$ and $b_0 \in B(a^*, r)$. Next, we shall prove that $A_0^{-1}$ is well defined. Note that
$$\|(6F'(a^*))^{-1}(A_0 - 6F'(a^*))\| \le \frac{1}{6}\Big(\|F'(a^*)^{-1}(F'(a_0) - F'(a^*))\| + \|F'(a^*)^{-1}(F'(b_0) - F'(a^*))\| + 4\Big\|F'(a^*)^{-1}\Big(F'\Big(\frac{a_0 + b_0}{2}\Big) - F'(a^*)\Big)\Big\|\Big)$$
$$\le \frac{3L}{6}\big[\|\epsilon_0\| + \|b_0 - a^*\|\big] = \frac{L}{2}\Big[\|\epsilon_0\| + \frac{L\|\epsilon_0\|^2}{2(1 - L\|\epsilon_0\|)}\Big] = \varphi_1(\|\epsilon_0\|)\|\epsilon_0\| < 1.$$
Thus, by the Banach lemma for inverses [14], $A_0^{-1}$ exists and
$$\|A_0^{-1}F'(a^*)\| \le \frac{1}{6(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}. \tag{12}$$
From (2) and (12), it follows that
$$a_1 - a^* = a_0 - a^* - 6A_0^{-1}F(a_0) = A_0^{-1}\int_0^1\big[A_0 - 6F'(a^* + t(a_0 - a^*))\big]\,dt\,(a_0 - a^*)$$
$$= A_0^{-1}\int_0^1\Big[F'(a_0) - F'(a^* + t(a_0 - a^*)) + F'(b_0) - F'(a^* + t(a_0 - a^*)) + 4\Big(F'\Big(\frac{a_0 + b_0}{2}\Big) - F'(a^* + t(a_0 - a^*))\Big)\Big]\,dt\,(a_0 - a^*).$$
The Mean Value Theorem gives
$$a_1 - a^* = A_0^{-1}\Big[\int_0^1\int_0^1 G_0(\theta, t)\,d\theta\,(1 - 2t)\,dt\,(a_0 - a^*)^2 + \int_0^1\int_0^1 H_0(\theta, t)\,d\theta\,(b_0 - a^*)\,dt\,(a_0 - a^*)$$
$$+ \int_0^1\int_0^1\big(G_0(\theta, t) - H_0(\theta, t)\big)\,d\theta\,t\,dt\,(a_0 - a^*)^2 + \int_0^1 4\Big(F'\Big(\frac{a_0 + b_0}{2}\Big) - F'(a^* + t(a_0 - a^*))\Big)\,dt\,(a_0 - a^*)\Big] =: K_1 + K_2 + K_3 + K_4,$$
where
$$G_0(\theta, t) = F''\big(a^* + t(a_0 - a^*) + \theta(a_0 - a^* - t(a_0 - a^*))\big),$$
$$H_0(\theta, t) = F''\big(a^* + t(a_0 - a^*) + \theta(b_0 - a^* - t(a_0 - a^*))\big),$$
$$K_1 = A_0^{-1}\int_0^1\int_0^1 G_0(\theta, t)\,d\theta\,(1 - 2t)\,dt\,(a_0 - a^*)^2,$$
$$K_2 = A_0^{-1}\int_0^1\int_0^1 H_0(\theta, t)\,d\theta\,(b_0 - a^*)\,dt\,(a_0 - a^*),$$
$$K_3 = A_0^{-1}\int_0^1\int_0^1\big(G_0(\theta, t) - H_0(\theta, t)\big)\,d\theta\,t\,dt\,(a_0 - a^*)^2$$
and
$$K_4 = A_0^{-1}\int_0^1 4\Big[F'\Big(\frac{a_0 + b_0}{2}\Big) - F'(a^* + t(a_0 - a^*))\Big]\,dt\,(a_0 - a^*).$$
From (12), the above definitions and the assumptions (a1)–(a5), we get
$$\|K_1\| \le \|A_0^{-1}F'(a^*)\|\,\max_t\Big\|\int_0^1 F'(a^*)^{-1}G_0(\theta, t)\,d\theta\Big\|\,\Big|\int_0^1(1 - 2t)\,dt\Big|\,\|a_0 - a^*\|^2 = 0,$$
$$\|K_2\| \le \|A_0^{-1}F'(a^*)\|\int_0^1\int_0^1\|F'(a^*)^{-1}H_0(\theta, t)\|\,d\theta\,\|b_0 - a^*\|\,dt\,\|\epsilon_0\| \le \frac{L_1}{6(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}\,\varphi(\|\epsilon_0\|)\,\|\epsilon_0\|^3,$$
$$\|K_3\| \le \|A_0^{-1}F'(a^*)\|\int_0^1\int_0^1\|F'(a^*)^{-1}(G_0(\theta, t) - H_0(\theta, t))\|\,d\theta\,t\,dt\,\|\epsilon_0\|^2 \le \frac{L_2}{24(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}\,\|a_0 - b_0\|\,\|\epsilon_0\|^2$$
$$\le \frac{L_2}{24(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}\,\|F'(a_0)^{-1}F'(a^*)\|\,\Big\|\int_0^1 F'(a^*)^{-1}F'(a^* + t(a_0 - a^*))\,dt\Big\|\,\|\epsilon_0\|^3 \le \frac{L_2 L_3}{24(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)(1 - L\|\epsilon_0\|)}\,\|\epsilon_0\|^3.$$
Let $H_1(\theta, t) = F''\big(a^* + t(a_0 - a^*) + \theta\big(\frac{a_0 + b_0}{2} - a^* - t(a_0 - a^*)\big)\big)$. Then, since $\frac{a_0 + b_0}{2} - a^* - t(a_0 - a^*) = \frac{1}{2}(1 - 2t)(a_0 - a^*) + \frac{1}{2}(b_0 - a^*)$,
$$\|K_4\| \le 4\|A_0^{-1}F'(a^*)\|\int_0^1\int_0^1\Big\|F'(a^*)^{-1}H_1(\theta, t)\Big(\frac{a_0 + b_0}{2} - a^* - t(a_0 - a^*)\Big)\Big\|\,d\theta\,dt\,\|\epsilon_0\|$$
$$\le 2\|A_0^{-1}F'(a^*)\|\Big[\max_t\Big\|\int_0^1 F'(a^*)^{-1}H_1(\theta, t)\,d\theta\Big\|\,\Big|\int_0^1(1 - 2t)\,dt\Big|\,\|\epsilon_0\|^2 + \int_0^1\int_0^1\|F'(a^*)^{-1}H_1(\theta, t)\|\,\|b_0 - a^*\|\,d\theta\,dt\,\|\epsilon_0\|\Big]$$
$$\le \frac{2L_1}{6(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}\,\|b_0 - a^*\|\,\|\epsilon_0\| \le \frac{L_1}{3(1 - \varphi_1(\|\epsilon_0\|)\|\epsilon_0\|)}\,\varphi(\|\epsilon_0\|)\,\|\epsilon_0\|^3.$$
Combining the decomposition of $a_1 - a^*$ with the bounds on $K_1$–$K_4$, we have
$$\|\epsilon_1\| \le \|K_1\| + \|K_2\| + \|K_3\| + \|K_4\| \le \psi_1(r)\|\epsilon_0\|^3.$$
Therefore, since $\psi_1(r)r^2 < 1$, we have $\|\epsilon_1\| < r$, so the iterate $a_1 \in B(a^*, r)$.
The proof of (9) and (10) is completed by replacing $a_0, b_0, a_1$ in the above estimates with $a_n, b_n, a_{n+1}$. □

3. Order of Convergence for Method (3)

Let $\psi_2, \delta_2 : [0, \sqrt[3]{\frac{1}{L}}) \to \mathbb{R}$ be defined by
$$\psi_2(t) = \frac{L}{1 - L\varphi(t)t^2}\Big(\varphi(t) + \frac{1}{2}\psi_1(t)t\Big)\psi_1(t)$$
and $\delta_2(t) = \psi_2(t)t^4 - 1$. Then, $\delta_2(0) = -1$ and $\delta_2(t) \to +\infty$ as $t \to \big(\sqrt[3]{\frac{1}{L}}\big)^-$. Therefore, $\delta_2$ has a smallest zero $r_3 \in (0, \sqrt[3]{\frac{1}{L}})$. Let
$$R = \min\{r, r_3\}.$$
Then, for all $t \in [0, R)$,
$$0 \le \varphi(t)t < 1, \qquad 0 \le \psi_1(t)t^2 < 1$$
and
$$0 \le \psi_2(t)t^4 < 1.$$
The next theorem provides the convergence order of method (3).
Theorem 2. 
Under the conditions (a1)–(a5), the sequence $\{a_n\}$ defined by (3) with initial point $a_0 \in B(a^*, R) \setminus \{a^*\}$ is well defined, remains in $\bar{B}(a^*, R)$ for $n = 0, 1, 2, \ldots$, and converges to the solution $a^*$ of (1). Moreover, the following estimates hold:
$$\|b_n - a^*\| \le \varphi(R)\|\epsilon_n\|^2, \tag{22}$$
$$\|c_n - a^*\| \le \psi_1(R)\|\epsilon_n\|^3, \tag{23}$$
$$\|\epsilon_{n+1}\| \le \psi_2(R)\|\epsilon_n\|^5. \tag{24}$$
Proof. 
The proofs of (22) and (23) follow as in Theorem 1. To prove (24), observe that
$$a_{n+1} - a^* = c_n - a^* - F'(b_n)^{-1}F(c_n) = F'(b_n)^{-1}\int_0^1\big(F'(b_n) - F'(a^* + t(c_n - a^*))\big)\,dt\,(c_n - a^*)$$
$$= F'(b_n)^{-1}F'(a^*)\int_0^1 F'(a^*)^{-1}\big(F'(b_n) - F'(a^* + t(c_n - a^*))\big)\,dt\,(c_n - a^*),$$
and hence, by (a2) and (11), we have
$$\|\epsilon_{n+1}\| \le \frac{L}{1 - L\|b_n - a^*\|}\Big(\|b_n - a^*\| + \frac{1}{2}\|c_n - a^*\|\Big)\|c_n - a^*\| \le \frac{L}{1 - L\varphi(\|\epsilon_n\|)\|\epsilon_n\|^2}\Big(\varphi(\|\epsilon_n\|) + \frac{1}{2}\psi_1(\|\epsilon_n\|)\|\epsilon_n\|\Big)\psi_1(\|\epsilon_n\|)\|\epsilon_n\|^5 \le \psi_2(R)\|\epsilon_n\|^5.$$
Therefore, since $\psi_2(R)R^4 < 1$, the iterate $a_{n+1} \in B(a^*, R)$. The rest of the proof is analogous to that of Theorem 1. □

4. Order of Convergence for Method (4)

Consider the continuous nondecreasing function $\alpha : [0, \frac{1}{L}) \to \mathbb{R}$ defined by
$$\alpha(t) = L\psi_1(t)t^3 - 1.$$
Then, $\alpha(0) = -1$ and $\alpha(t) \to +\infty$. So, there exists a smallest $\rho > 0$ such that $\alpha(\rho) = 0$.
Let $\psi_3, \delta_3 : [0, \rho) \to \mathbb{R}$ be defined by
$$\psi_3(t) = \frac{L}{2(1 - L\psi_1(t)t^3)}\,\psi_1(t)^2$$
and $\delta_3(t) = \psi_3(t)t^5 - 1$. Then, $\delta_3(0) = -1$ and $\delta_3(t) \to +\infty$ as $t \to \rho^-$. Therefore, $\delta_3$ has a smallest zero $r_4 \in (0, \rho)$. Let
$$R_1 = \min\{r, r_4\}.$$
Then, for all $t \in [0, R_1)$, we have
$$0 \le \psi_3(t)t^5 < 1.$$
Theorem 3. 
Under the assumptions (a1)–(a5), the sequence $\{a_n\}$ defined by (4), starting from $a_0 \in B(a^*, R_1) \setminus \{a^*\}$, is well defined, remains in $\bar{B}(a^*, R_1)$ for $n = 0, 1, 2, \ldots$, and converges to the solution $a^*$ of (1). Moreover, the following estimates hold:
$$\|b_n - a^*\| \le \varphi(R_1)\|\epsilon_n\|^2, \tag{26}$$
$$\|c_n - a^*\| \le \psi_1(R_1)\|\epsilon_n\|^3 \tag{27}$$
and
$$\|\epsilon_{n+1}\| \le \psi_3(R_1)\|\epsilon_n\|^6. \tag{28}$$
Proof. 
The proofs of (26) and (27) follow as in Theorem 1. To prove (28), observe that
$$a_{n+1} - a^* = c_n - a^* - F'(c_n)^{-1}F(c_n) = F'(c_n)^{-1}\int_0^1\big(F'(c_n) - F'(a^* + t(c_n - a^*))\big)\,dt\,(c_n - a^*)$$
$$= F'(c_n)^{-1}F'(a^*)\int_0^1 F'(a^*)^{-1}\big(F'(c_n) - F'(a^* + t(c_n - a^*))\big)\,dt\,(c_n - a^*).$$
By (a2) and (11), we have
$$\|\epsilon_{n+1}\| \le \frac{L}{2(1 - L\|c_n - a^*\|)}\|c_n - a^*\|^2 \le \frac{L}{2(1 - L\psi_1(\|\epsilon_n\|)\|\epsilon_n\|^3)}\,\psi_1^2(\|\epsilon_n\|)\,\|\epsilon_n\|^6.$$
Since $\psi_3(R_1)R_1^5 < 1$, the iterate $a_{n+1} \in B(a^*, R_1)$. The rest of the proof is analogous to that of Theorem 1. □
Next, a result on the uniqueness of the solution a * is presented.
Proposition 1. 
Assume:
(1) 
The element a * B ( a * , ρ ) is a simple solution of (1), and (a2) holds.
(2) 
There exists $\delta \ge \rho$ so that
$$L\delta < 2. \tag{31}$$
Set Ω 1 = Ω B ¯ ( a * , δ ) . Then, a * is the unique solution of Equation (1) in the domain Ω 1 .
Proof. 
Let $q \in \Omega_1$ with $F(q) = 0$. Define $T = \int_0^1 F'(a^* + \theta(q - a^*))\,d\theta$. Using (a2) and (31), one obtains
$$\|F'(a^*)^{-1}(T - F'(a^*))\| \le L\int_0^1 \theta\,\|q - a^*\|\,d\theta \le \frac{L}{2}\delta < 1,$$
so $q = a^*$ follows from the invertibility of $T$ and the identity $T(q - a^*) = F(q) - F(a^*) = 0 - 0 = 0$. □

5. Semilocal Convergence

We develop a common analysis based on scalar majorizing sequences and the concept of $\omega$-continuity [14,15].
Let us first deal with the method (2). Define the scalar sequences $\{\alpha_n\}$ and $\{\beta_n\}$, using two continuous and nondecreasing functions $\omega_0 : [0, +\infty) \to \mathbb{R}$ and $\omega : [0, +\infty) \to \mathbb{R}$, for $\alpha_0 = 0$ and $\beta_0 \ge 0$ by
$$\bar{\omega}_n = 4\omega\Big(\frac{\beta_n - \alpha_n}{2}\Big) + \omega(\beta_n - \alpha_n) \quad \text{or} \quad \bar{\omega}_n = 7\omega(\alpha_n) + 4\omega_0\Big(\frac{\alpha_n + \beta_n}{2}\Big) + \omega_0(\beta_n),$$
$$q_n = \frac{1}{6}\Big[\omega_0(\alpha_n) + 4\omega_0\Big(\frac{\alpha_n + \beta_n}{2}\Big) + \omega_0(\beta_n)\Big],$$
$$\alpha_{n+1} = \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{6(1 - q_n)},$$
$$\delta_{n+1} = \int_0^1 \omega(\theta(\alpha_{n+1} - \alpha_n))\,d\theta\,(\alpha_{n+1} - \alpha_n) + (1 + \omega_0(\alpha_n))(\alpha_{n+1} - \beta_n),$$
$$\beta_{n+1} = \alpha_{n+1} + \frac{\delta_{n+1}}{1 - \omega_0(\alpha_{n+1})}. \tag{32}$$
These sequences are shown to be majorizing for the method (2) in Theorem 4. First, a general convergence result is presented for them.
Lemma 1. 
Suppose that there exists $\mu \ge 0$ such that, for each $n = 0, 1, 2, \ldots$,
$$\omega_0(\alpha_n) < 1, \quad q_n < 1 \quad \text{and} \quad \alpha_n \le \mu. \tag{33}$$
Then, the sequences $\{\alpha_n\}, \{\beta_n\}$ generated by the Formula (32) converge to some $\lambda \in [\beta_0, \mu]$, and $0 \le \alpha_n \le \beta_n \le \alpha_{n+1} \le \lambda$.
Proof. 
It follows from the Formula (32), the properties of the functions $\omega_0, \omega$, and the condition (33) that the sequences $\{\alpha_n\}, \{\beta_n\}$ are nondecreasing and bounded from above by $\mu$. Hence, they converge to some $\lambda \in [\beta_0, \mu]$. □
Remark 1. 
(i) The parameter λ is the unique and common least upper bound of the sequences { α n } and { β n } .
(ii) 
If the function $\omega_0$ is strictly increasing, then a possible choice is $\mu = \omega_0^{-1}(1)$.
(iii) 
Suppose that the function $\omega_0(t) - 1$ has a minimal zero $\rho \in (0, +\infty)$. Then, the function $\omega$ can be restricted to the interval $(0, \rho)$ and $\mu \le \rho$.
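As an illustration (our own, not from the paper), the recursion (32) can be iterated directly for a concrete choice of the majorant functions. Here we assume the Lipschitz-type choices $\omega_0(t) = \omega(t) = Lt$ (so that $\int_0^1 \omega(\theta s)\,d\theta\,s = Ls^2/2$), take the first option for $\bar{\omega}_n$, and check the monotonicity and convergence asserted in Lemma 1:

```python
def majorizing(L, beta0, steps=25):
    """Iterate (32) with omega0(t) = omega(t) = L*t; return (alphas, betas)."""
    alpha, beta = 0.0, beta0
    alphas, betas = [alpha], [beta]
    for _ in range(steps):
        h = beta - alpha
        wbar = 4 * L * h / 2 + L * h                          # 4*omega(h/2) + omega(h)
        q = (L * alpha + 4 * L * (alpha + beta) / 2 + L * beta) / 6
        alpha_new = beta + wbar * h / (6 * (1 - q))
        delta = (L * (alpha_new - alpha) ** 2 / 2              # integral term
                 + (1 + L * alpha) * (alpha_new - beta))
        beta_new = alpha_new + delta / (1 - L * alpha_new)
        alpha, beta = alpha_new, beta_new
        alphas.append(alpha)
        betas.append(beta)
    return alphas, betas

alphas, betas = majorizing(L=1.0, beta0=0.1)
```

For these assumed values, the two sequences increase monotonically and squeeze together toward a common limit $\lambda$, as Lemma 1 predicts.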
Next, we connect the functions ω 0 , ω , the sequence (32) and the limit point λ to the operators on the method (2).
Suppose:
(c1)
There exist a starting point $a_0 \in \Omega$ and a parameter $\beta_0 \ge 0$ such that $F'(a_0)^{-1} \in L(B_1, B)$ and $\|F'(a_0)^{-1}F(a_0)\| \le \beta_0$.
(c2)
$\|F'(a_0)^{-1}(F'(v) - F'(a_0))\| \le \omega_0(\|v - a_0\|)$ for all $v \in \Omega$.
Set $\Omega_0 = \Omega \cap B(a_0, \rho)$.
(c3)
$\|F'(a_0)^{-1}(F'(v_2) - F'(v_1))\| \le \omega(\|v_2 - v_1\|)$ for all $v_1, v_2 \in \Omega_0$.
(c4)
The condition (33) holds for $\mu = \rho$, and
(c5)
$\bar{B}(a_0, \lambda) \subseteq \Omega$.
We now have the tools to develop the semilocal convergence for the method (2).
Theorem 4. 
Suppose that the conditions (c1)–(c5) hold. Then, the sequences $\{a_n\}, \{b_n\}$ generated by the method (2) converge to some $a^* \in B(a_0, \lambda)$ solving the equation $F(a) = 0$.
Proof. 
The verification of the following assertions is needed:
$$\|b_n - a_n\| \le \beta_n - \alpha_n \tag{34}$$
and
$$\|a_{n+1} - b_n\| \le \alpha_{n+1} - \beta_n. \tag{35}$$
The method of mathematical induction is employed. From the condition (c1) and the Formula (32), it follows that
$$\|b_0 - a_0\| = \|F'(a_0)^{-1}F(a_0)\| \le \beta_0 = \beta_0 - \alpha_0 < \lambda.$$
Thus, the assertion (34) holds if n = 0 and the iterate b 0 B ( a 0 , λ ) . Pick a point v B ( a 0 , λ ) . Then, the application of the condition (c2) and the definition of λ give
$$\|F'(a_0)^{-1}(F'(v) - F'(a_0))\| \le \omega_0(\|v - a_0\|) \le \omega_0(\lambda) < 1.$$
Hence, we get F ( v ) 1 L ( B 1 , B ) and
$$\|F'(v)^{-1}F'(a_0)\| \le \frac{1}{1 - \omega_0(\|v - a_0\|)}. \tag{36}$$
We also need the estimate
$$\|(6F'(a_0))^{-1}(A_k - 6F'(a_0))\| \le \frac{1}{6}\Big(\|F'(a_0)^{-1}(F'(a_k) - F'(a_0))\| + \|F'(a_0)^{-1}(F'(b_k) - F'(a_0))\| + 4\Big\|F'(a_0)^{-1}\Big(F'\Big(\frac{a_k + b_k}{2}\Big) - F'(a_0)\Big)\Big\|\Big)$$
$$\le \frac{1}{6}\Big[\omega_0(\|a_k - a_0\|) + 4\omega_0\Big(\frac{\|a_k - a_0\| + \|b_k - a_0\|}{2}\Big) + \omega_0(\|b_k - a_0\|)\Big] \le q_k < 1,$$
so
$$\|A_k^{-1}F'(a_0)\| \le \frac{1}{6(1 - q_k)}. \tag{37}$$
Suppose that the assertions (34) and (35) hold ∀ n = 0 , 1 , 2 , , k . The induction hypothesis gives
$$\|a_{k+1} - a_0\| \le \|a_{k+1} - b_k\| + \|b_k - a_0\| \le \alpha_{k+1} - \beta_k + \beta_k - \alpha_0 = \alpha_{k+1} < \lambda.$$
Furthermore, we can write in turn following the second substep of the method (2)
$$a_{k+1} - b_k = (F'(a_k)^{-1} - 6A_k^{-1})F(a_k) = A_k^{-1}(A_k - 6F'(a_k))F'(a_k)^{-1}F(a_k) = A_k^{-1}(6F'(a_k) - A_k)(b_k - a_k). \tag{38}$$
We also get
$$\|F'(a_0)^{-1}(6F'(a_k) - A_k)\| \le 4\Big\|F'(a_0)^{-1}\Big(F'(a_k) - F'\Big(\frac{a_k + b_k}{2}\Big)\Big)\Big\| + \|F'(a_0)^{-1}(F'(a_k) - F'(b_k))\| \le \bar{\omega}_k. \tag{39}$$
In view of (32), (36) (for v = b k ), (37)–(39), we obtain
$$\|a_{k+1} - b_k\| \le \frac{\bar{\omega}_k(\beta_k - \alpha_k)}{6(1 - q_k)} = \alpha_{k+1} - \beta_k.$$
These estimates show that the iterate $a_{k+1} \in \bar{B}(a_0, \lambda)$ and that the assertion (35) holds. Then, we can write, following the first substep of the method (2),
$$F(a_{k+1}) = F(a_{k+1}) - F(a_k) - F'(a_k)(b_k - a_k) = F(a_{k+1}) - F(a_k) - F'(a_k)(a_{k+1} - a_k) + F'(a_k)(a_{k+1} - b_k). \tag{41}$$
It follows from (32), (c2), (c3) and (41) that
$$\|F'(a_0)^{-1}F(a_{k+1})\| \le \int_0^1 \omega(\theta\|a_{k+1} - a_k\|)\,d\theta\,\|a_{k+1} - a_k\| + \|F'(a_0)^{-1}(F'(a_k) - F'(a_0) + F'(a_0))\|\,\|a_{k+1} - b_k\|$$
$$\le \int_0^1 \omega(\theta(\alpha_{k+1} - \alpha_k))\,d\theta\,(\alpha_{k+1} - \alpha_k) + (1 + \omega_0(\alpha_k))(\alpha_{k+1} - \beta_k) = \delta_{k+1}. \tag{42}$$
Hence, the first substep of the method (2), (36) (for v = a k + 1 ) and (42) give
$$\|b_{k+1} - a_{k+1}\| \le \|F'(a_{k+1})^{-1}F'(a_0)\|\,\|F'(a_0)^{-1}F(a_{k+1})\| \le \frac{\delta_{k+1}}{1 - \omega_0(\|a_{k+1} - a_0\|)} \le \frac{\delta_{k+1}}{1 - \omega_0(\alpha_{k+1})} = \beta_{k+1} - \alpha_{k+1},$$
and
$$\|b_{k+1} - a_0\| \le \|b_{k+1} - a_{k+1}\| + \|a_{k+1} - a_0\| \le \beta_{k+1} - \alpha_{k+1} + \alpha_{k+1} - \alpha_0 = \beta_{k+1} < \lambda.$$
Therefore, the induction for the assertions (34) and (35) is completed, and $a_k, b_k \in B(a_0, \lambda)$ for all $k = 0, 1, 2, \ldots$ By the condition (c4) and Lemma 1, the sequences $\{\alpha_k\}, \{\beta_k\}$ are Cauchy. Consequently, by (34) and (35), the sequences $\{a_k\}, \{b_k\}$ are also Cauchy, and as such convergent to some $a^* \in \bar{B}(a_0, \lambda)$. Finally, the continuity of the operator $F$ and (42), as $k \to +\infty$, imply $F(a^*) = 0$. □
Next, a uniqueness result for the solution region is presented.
Proposition 2. 
Suppose:
(a) 
There exists a simple solution $d \in B(a_0, r_0)$ of the equation $F(a) = 0$ for some $r_0 > 0$.
(b) 
The condition (c2) holds on the ball $B(a_0, r_0)$.
(c) 
There exists $r \ge r_0$ such that
$$\int_0^1 \omega_0((1 - \theta)r_0 + \theta r)\,d\theta < 1. \tag{43}$$
Set $\Omega_1 = \Omega \cap \bar{B}(a_0, r)$.
Then the equation F ( a ) = 0 is uniquely solvable by d in the region Ω 1 .
Proof. 
Let $d_1 \in \Omega_1$ with $F(d_1) = 0$. Define the linear operator $M = \int_0^1 F'(d + \theta(d_1 - d))\,d\theta$. By applying (c2) and (43), we get in turn that
$$\|F'(a_0)^{-1}(M - F'(a_0))\| \le \int_0^1 \omega_0((1 - \theta)\|d - a_0\| + \theta\|d_1 - a_0\|)\,d\theta \le \int_0^1 \omega_0((1 - \theta)r_0 + \theta r)\,d\theta < 1,$$
so $d_1 = d$. □
Remark 2. (a) The condition (c5) can be replaced by (c5′): $\bar{B}(a_0, \rho) \subseteq \Omega$.
(b) Under all the conditions (c1)–(c5), we can choose $d = a^*$ and $r_0 = \lambda$.
Similarly, we develop the semilocal convergence analysis of the method (3) and the method (4).
Majorizing sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}$ for the method (3) are given by
$$\gamma_n = \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{6(1 - q_n)},$$
$$p_n = \Big(1 + \int_0^1 \omega_0(\beta_n + \theta(\gamma_n - \beta_n))\,d\theta\Big)(\gamma_n - \beta_n) + \int_0^1 \omega(\theta(\beta_n - \alpha_n))\,d\theta\,(\beta_n - \alpha_n),$$
$$\alpha_{n+1} = \gamma_n + \frac{p_n}{1 - \omega_0(\beta_n)}, \qquad \beta_{n+1} = \alpha_{n+1} + \frac{\delta_{n+1}}{1 - \omega_0(\alpha_{n+1})}.$$
Moreover, a majorizing sequence for the method (4) is
$$\gamma_n = \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{6(1 - q_n)}, \qquad \alpha_{n+1} = \gamma_n + \frac{p_n}{1 - \omega_0(\gamma_n)}, \qquad \beta_{n+1} = \alpha_{n+1} + \frac{\delta_{n+1}}{1 - \omega_0(\alpha_{n+1})}.$$
Clearly, the convergence conditions corresponding to (33) are, respectively,
$$\omega_0(\alpha_n) < 1, \quad \omega_0(\beta_n) < 1, \quad q_n < 1, \quad \alpha_n \le \mu$$
and
$$\omega_0(\alpha_n) < 1, \quad \omega_0(\gamma_n) < 1, \quad q_n < 1, \quad \alpha_n \le \mu.$$
These conditions replace (c4), respectively.
The limit point is not necessarily the same for all three methods, but in order to simplify the notation, we use the same symbol λ . Under these modifications, we present the semilocal convergence of the method (3) and the method (4).
Theorem 5. 
Suppose that the conditions (c1)–(c5) hold. Then, there exists a * B ¯ ( a 0 , λ ) satisfying F ( a * ) = 0 under the method (3).
Proof. 
We proceed as in the proof of Theorem 4, but with some differences. We get
$$c_k - b_k = A_k^{-1}(6F'(a_k) - A_k)(b_k - a_k),$$
so
$$\|c_k - b_k\| \le \frac{\bar{\omega}_k(\beta_k - \alpha_k)}{6(1 - q_k)} = \gamma_k - \beta_k.$$
Notice that
$$F(c_k) = F(c_k) - F(b_k) + F(b_k) = \int_0^1 F'(b_k + \theta(c_k - b_k))\,d\theta\,(c_k - b_k) + F(b_k) - F(a_k) - F'(a_k)(b_k - a_k)$$
$$= \Big[\int_0^1 F'(b_k + \theta(c_k - b_k))\,d\theta - F'(a_0) + F'(a_0)\Big](c_k - b_k) + \int_0^1\big[F'(a_k + \theta(b_k - a_k)) - F'(a_k)\big]\,d\theta\,(b_k - a_k),$$
leading to
$$\|F'(a_0)^{-1}F(c_k)\| \le \Big(1 + \int_0^1 \omega_0(\beta_k + \theta(\gamma_k - \beta_k))\,d\theta\Big)(\gamma_k - \beta_k) + \int_0^1 \omega(\theta(\beta_k - \alpha_k))\,d\theta\,(\beta_k - \alpha_k) = p_k.$$
So,
$$\|a_{k+1} - c_k\| \le \|F'(b_k)^{-1}F'(a_0)\|\,\|F'(a_0)^{-1}F(c_k)\| \le \frac{p_k}{1 - \omega_0(\beta_k)} = \alpha_{k+1} - \gamma_k,$$
$$\|c_k - a_0\| \le \|c_k - b_k\| + \|b_k - a_0\| \le \gamma_k - \beta_k + \beta_k - \alpha_0 = \gamma_k < \lambda$$
and
$$\|a_{k+1} - a_0\| \le \|a_{k+1} - c_k\| + \|c_k - a_0\| \le \alpha_{k+1} - \gamma_k + \gamma_k - \alpha_0 = \alpha_{k+1} < \lambda.$$
The rest is identical to Theorem 4. □
Theorem 6. 
Suppose that the conditions (c1)–(c5) hold. Then, there exists a * B ¯ ( a 0 , λ ) satisfying F ( a * ) = 0 under the method (4).
Proof. 
The third substep of the method (4) gives, instead,
$$\|a_{k+1} - c_k\| \le \frac{p_k}{1 - \omega_0(\gamma_k)} = \alpha_{k+1} - \gamma_k.$$
The rest follows as in Theorem 5. □
Notice that the uniqueness of the solution region has been given in Proposition 2.

6. Examples

Here, we present two examples verifying the parameters used to prove the theorems, and one example comparing the convergence with that of the Noor–Waseem-type method studied in [16]. The notation $K[0,1]$ stands for the space of continuous functions on the interval $[0,1]$, equipped with the maximum norm.
Example 1. 
Let $B = B_1 = K[0,1]$ and $\Omega = \bar{B}(0,1)$. Consider the operator $F$ on $\Omega$ given by
$$F(\psi)(x) = \psi(x) - 5\int_0^1 x\theta\,\psi(\theta)^3\,d\theta.$$
The derivative $F'$ is
$$[F'(\psi)\xi](x) = \xi(x) - 15\int_0^1 x\theta\,\psi(\theta)^2\,\xi(\theta)\,d\theta \quad \text{for each } \xi \in \Omega.$$
Note that $a^* = 0$. The conditions (a1)–(a5) hold with $L = 15$, $L_3 = L_2 = 8.5$ and $L_1 = 31$. Then, the parameters are:
$$r_1 = 0.050929, \quad r_2 = 0.039970 = r = R_1, \quad \frac{2}{3L} = 0.0444, \quad R = r_3 = 0.039032, \quad r_4 = 0.040450.$$
Example 2. 
Let $B = B_1 = \mathbb{R}^3$, $\Omega = \bar{B}(0,1)$ and $a^* = (0, 0, 1)^{T}$. Define the mapping $F$ on $\Omega$ for $w = (\lambda_1, \lambda_2, \lambda_3)^{T}$ by
$$F(w) = \Big(\sin\lambda_1,\; \frac{\lambda_2^2}{5} + \lambda_2,\; \lambda_3 - 1\Big)^{T}.$$
Then,
$$F'(w) = \begin{pmatrix} \cos\lambda_1 & 0 & 0 \\ 0 & \frac{2\lambda_2}{5} + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
and $F''(w)$ is the bilinear operator whose only nonzero entries are $\frac{\partial^2 F_1}{\partial \lambda_1^2} = -\sin\lambda_1$ and $\frac{\partial^2 F_2}{\partial \lambda_2^2} = \frac{2}{5}$. Then, (a1)–(a5) hold with $L = L_2 = 1$, $L_3 = \frac{7}{5}$ and $L_1 = \frac{2}{5}$. The parameters are:
$$r_1 = 0.763932, \quad r_2 = 0.696295, \quad \frac{2}{3L} = 0.6667 = r = R_1, \quad R = r_3 = 0.650184, \quad r_4 = 0.694554.$$
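The radii above can be reproduced numerically. With $\varphi, \varphi_1, \psi_1$ as defined in Section 2, $r_1$ solves $\varphi_1(t)t = 1$ (which, for these particular functions, reduces to the closed form $r_1 = (3 - \sqrt{5})/L$) and $r_2$ solves $\psi_1(t)t^2 = 1$. The sketch below (our own check) recovers the reported values of $r_1$ and $r_2$ for Examples 1 and 2 by bisection:

```python
import math

def radii(L, L1, L2, L3):
    phi  = lambda t: L / (2 * (1 - L * t))
    phi1 = lambda t: (L / 2) * (1 + L * t / (2 * (1 - L * t)))
    psi1 = lambda t: (L1 * phi(t) / (2 * (1 - phi1(t) * t))
                      + L2 * L3 / (24 * (1 - phi1(t) * t) * (1 - L * t)))
    r1 = (3 - math.sqrt(5)) / L           # smallest zero of phi1(t)*t - 1
    lo, hi = 0.0, r1                      # bisection for psi1(t)*t**2 = 1
    for _ in range(200):
        mid = (lo + hi) / 2
        if psi1(mid) * mid**2 < 1:
            lo = mid
        else:
            hi = mid
    r2 = (lo + hi) / 2
    return r1, r2

r1_ex1, r2_ex1 = radii(L=15, L1=31, L2=8.5, L3=8.5)   # Example 1
r1_ex2, r2_ex2 = radii(L=1, L1=2/5, L2=1, L3=7/5)     # Example 2
```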
Next, the Noor–Waseem-type method studied in [16] is compared to the methods (2)–(4).
Example 3. 
The system [17]
$$3t_1^2 t_2 + t_2^2 = 1, \qquad t_1^4 + t_1 t_2^3 = 1$$
is solved. Its solutions $a^*$ are approximately $(1, 0.2)$, $(0.4, 1.3)$ and $(0.9, 0.3)$. We approximate the solution $(0.9, 0.3)$ using the methods (2)–(4), starting from the initial point $(2, 1)$. Table 1, Table 2 and Table 3 report the obtained results, where $a_n$ is the $n$th iterate.
Remark 3. 
Note that the Ratio columns in the tables show that the methods (2)–(4) are of orders 3, 5 and 6, respectively (ignoring the first few iterates). From the tables, one can also observe that the higher the order, the faster the convergence.
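The order can also be estimated without the tables, using the standard ratio $p \approx \dfrac{\ln(r_{n+1}/r_n)}{\ln(r_n/r_{n-1})}$ on the residuals $r_n = |F(a_n)|$. The sketch below (our own check, on the assumed scalar test problem $F(a) = a^3 - 2$) runs method (2) in exact rational arithmetic so that the very small residuals remain representable, and recovers an order close to 3:

```python
import math
from fractions import Fraction

def F(a):
    return a**3 - 2

def dF(a):
    return 3 * a**2

a = Fraction(6, 5)              # a_0 = 1.2, kept as an exact rational
residuals = [abs(F(a))]
for _ in range(4):              # method (2) uses only +, -, *, /
    b = a - F(a) / dF(a)
    A = dF(a) + 4 * dF((a + b) / 2) + dF(b)
    a = a - 6 * F(a) / A
    residuals.append(abs(F(a)))

# log of a Fraction via big-integer logs (avoids float underflow)
logs = [math.log(r.numerator) - math.log(r.denominator) for r in residuals]
orders = [(logs[n + 1] - logs[n]) / (logs[n] - logs[n - 1])
          for n in range(1, len(logs) - 1)]
```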

7. Basins of Attraction

In order to obtain a visual region of convergence, we study the Fatou and Julia sets of the methods (2)–(4). Recall that, for a sequence $\{\xi_i\}$ produced by the above methods starting from $\xi_0$ and converging to $\xi^*$, the set $S = \{\xi_0 : \xi_i \text{ converges to the zero } \xi^* \text{ as } i \to \infty\}$ is called the Basin of Attraction (BA) or Fatou set [18], and its complement $S^c$ is known as the Julia set. The BAs associated with the roots of three systems of equations are studied for the methods (2)–(4).
Example 4. 
$$s^3 - t = 0, \qquad t^3 - s = 0,$$
with solutions $\{(1, 1), (0, 0), (-1, -1)\}$.
Example 5. 
$$3s^2 t - t^3 = 0, \qquad s^3 - 3st^2 - 1 = 0,$$
with solutions $\{(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}), (-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}), (1, 0)\}$.
Example 6. 
$$s^2 + t^2 - 4 = 0, \qquad 3s^2 + 7t^2 - 16 = 0,$$
with solutions $\{(\sqrt{3}, 1), (\sqrt{3}, -1), (-\sqrt{3}, 1), (-\sqrt{3}, -1)\}$.
We consider the rectangular region $R = \{(s, t) \in \mathbb{R}^2 : -2 \le s \le 2, -2 \le t \le 2\}$ and find the basins of attraction associated with each root in $R$. We take an equidistant grid of $401 \times 401$ points in $R$ and use each grid point as an initial point, checking whether it leads to convergence to any of the roots. A maximum of 50 iterations is performed for each point, and a point that does not yield convergence within an error tolerance of $10^{-8}$ is regarded as a point at which the iterative method does not converge. A color is assigned to each root, and the initial points converging to that root are marked with the corresponding color. Dark regions represent the points from which the methods do not converge.
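The procedure just described can be sketched as follows (our own illustration, for method (2) applied to Example 4; the grid here is coarsened to $41 \times 41$ to keep the run cheap, and the coloring/plotting step is omitted in favor of simply labelling each starting point with the index of the root it reaches):

```python
def F(v):
    s, t = v
    return (s**3 - t, t**3 - s)

def J(v):                                   # Jacobian of Example 4
    s, t = v
    return ((3 * s**2, -1.0), (-1.0, 3 * t**2))

def solve2(M, r):                           # solve the 2x2 system M d = r
    (a, b), (c, d) = M
    det = a * d - b * c
    return ((d * r[0] - b * r[1]) / det, (a * r[1] - c * r[0]) / det)

def ns_step(v):                             # one step of method (2)
    f = F(v)
    d = solve2(J(v), f)
    b = (v[0] - d[0], v[1] - d[1])
    m = ((v[0] + b[0]) / 2, (v[1] + b[1]) / 2)
    Ja, Jm, Jb = J(v), J(m), J(b)
    A = tuple(tuple(Ja[i][j] + 4 * Jm[i][j] + Jb[i][j] for j in range(2))
              for i in range(2))
    d6 = solve2(A, (6 * f[0], 6 * f[1]))
    return (v[0] - d6[0], v[1] - d6[1])

ROOTS = [(1.0, 1.0), (0.0, 0.0), (-1.0, -1.0)]

def basin_index(v, max_iter=50, tol=1e-8):
    """Index of the root reached from v, or -1 (Julia set / no convergence)."""
    for _ in range(max_iter):
        try:
            v = ns_step(v)
        except (ZeroDivisionError, OverflowError):  # singular system or blow-up
            return -1
        for k, r in enumerate(ROOTS):
            if abs(v[0] - r[0]) < tol and abs(v[1] - r[1]) < tol:
                return k
    return -1

n = 41                                      # coarse stand-in for the 401x401 grid
grid = [(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1))
        for i in range(n) for j in range(n)]
labels = [basin_index(v) for v in grid]
```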
Figure 1, Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show the BA corresponding to each root of the above examples (Examples 4–6) for the methods (2)–(4). It is clear to see that the Julia set (black region) comprises all the initial guesses from which the iterative approach does not converge to any of the roots.
All the calculations in this paper were performed on a 16-core 64-bit Windows machine with Intel Core i7-10700 CPU @ 2.90GHz, using MATLAB.
In Figure 1, Figure 2 and Figure 3 (corresponding to Example 4), the red region is the set of all initial points from which the respective iterates (2), (3) and (4) converge to $(1, 1)$, the blue region is the set of all initial points from which they converge to $(0, 0)$, and the green region is the set of all initial points from which they converge to $(-1, -1)$. The black region represents the Julia set.
In Figure 4, Figure 5 and Figure 6 (corresponding to Example 5), the red region is the set of all initial points from which the respective iterates (2), (3) and (4) converge to $(-1/2, \sqrt{3}/2)$, the blue region is the set of all initial points from which they converge to $(-1/2, -\sqrt{3}/2)$, and the green region is the set of all initial points from which they converge to $(1, 0)$. The black region represents the Julia set.
In Figure 7, Figure 8 and Figure 9 (corresponding to Example 6), the red, blue, green and yellow regions are the sets of all initial points from which the respective iterates (2), (3) and (4) converge to the four roots $(\sqrt{3}, 1)$, $(\sqrt{3}, -1)$, $(-\sqrt{3}, 1)$ and $(-\sqrt{3}, -1)$, respectively. The black region represents the Julia set.

8. Conclusions

Without employing Taylor expansions, and using assumptions only on derivatives of order at most two, the orders of convergence of Newton–Simpson-type methods are determined. Our idea can be used to obtain the convergence order of other similar methods. The theoretical results obtained in this paper are further justified using numerical experiments. In the future, it is envisaged that a unified convergence analysis can be provided for methods of the form (2)–(4).

Author Contributions

Conceptualization, S.G., A.K., R.S., J.P. and I.K.A.; methodology, S.G., A.K., R.S., J.P. and I.K.A.; software, S.G., A.K., R.S., J.P. and I.K.A.; validation, S.G., A.K., R.S., J.P. and I.K.A.; formal analysis, S.G., A.K., R.S., J.P. and I.K.A.; investigation, S.G., A.K., R.S., J.P. and I.K.A.; resources, S.G., A.K., R.S., J.P. and I.K.A.; data curation, S.G., A.K., R.S., J.P. and I.K.A.; writing—original draft preparation, S.G., A.K., R.S., J.P. and I.K.A.; writing—review and editing, S.G., A.K., R.S., J.P. and I.K.A.; visualization, S.G., A.K., R.S., J.P. and I.K.A.; supervision, S.G., A.K., R.S., J.P. and I.K.A.; project administration, S.G., A.K., R.S., J.P. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Acknowledgments

The authors Santhosh George, Jidesh P and Ajil K wish to thank the SERB, Govt. of India for the Project No. CRG/2021/004776.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar]
  2. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
  3. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635. [Google Scholar] [CrossRef]
  4. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
  5. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
  6. Khirallah, M.Q.; Hafiz, M.A. Novel three order methods for solving a system of nonlinear equations. Bull. Math. Sci. Appl. 2012, 2, 1–14. [Google Scholar]
  7. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131. [Google Scholar] [CrossRef]
  8. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. J. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  9. Podisuk, M.; Chundang, U.; Sanprasert, W. Single-step formulas and multi-step formulas of integration method for solving the initial value problem of ordinary differential equation. Appl. Math. Comput. 2007, 190, 1438–1444. [Google Scholar] [CrossRef]
  10. Saeed K, M.; Remesh, K.; George, S.; Padikkal, J.; Argyros, I.K. Local Convergence of Traub’s Method and Its Extensions. Fractal Fract. 2023, 7, 98. [Google Scholar] [CrossRef]
  11. Liu, Z.; Zheng, Q.; Huang, C. Third- and fifth-order Newton–Gauss methods for solving nonlinear equations with n variables. Appl. Math. Comput. 2016, 290, 250–257. [Google Scholar] [CrossRef]
  12. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef] [Green Version]
  13. Cordero, A.; Martínez, E.; Toregrossa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2012, 231, 541–551. [Google Scholar] [CrossRef]
  14. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  15. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Schemes; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
  16. George, S.; Sadananda, R.; Jidesh, P.; Argyros, I.K. On the Order of Convergence of Noor-Waseem Method. Mathematics 2022, 10, 4544. [Google Scholar] [CrossRef]
  17. Iliev, A.; Iliev, I. Numerical method with order t for solving system nonlinear equations. In Proceedings of the Collection of Works from the Scientific Conference Dedicated to 30 Years of FMI, Plovdiv, Bulgaria, 1–3 November 2000; pp. 105–112. [Google Scholar]
  18. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
Figure 1. Dynamics of the method (2) with BA for Example 4.
Figure 2. Dynamics of the method (3) associated with BA of Example 4.
Figure 3. Dynamics of the method (4) associated with BA of Example 4.
Figure 4. Dynamics of the method (2) associated with BA of Example 5.
Figure 5. Dynamics of the method (3) associated with BA of Example 5.
Figure 6. Dynamics of the method (4) associated with BA of Example 5.
Figure 7. Dynamics of the method (2) associated with BA of Example 6.
Figure 8. Dynamics of the method (3) associated with BA of Example 6.
Figure 9. Dynamics of the method (4) associated with BA of Example 6.
Table 1. Method—Order 3.

| n | Noor–Waseem method [16]: a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^3 | Newton–Simpson method (2): a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^3 |
|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | – | (2.000000, 1.000000) | – |
| 1 | (1.264067, 0.166747) | 0.052791 | (1.263927, 0.166887) | 0.052792 |
| 2 | (1.019624, 0.265386) | 0.259247 | (1.019452, 0.265424) | 0.259156 |
| 3 | (0.992854, 0.306346) | 1.578713 | (0.992853, 0.306348) | 1.580144 |
| 4 | (0.992780, 0.306440) | 1.977941 | (0.992780, 0.306440) | 1.977957 |
| 5 | (0.992780, 0.306440) | 1.979028 | (0.992780, 0.306440) | 1.979028 |
Table 2. Method—Order 5.

| n | Noor–Waseem method [16]: a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^5 | Newton–Simpson method (3): a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^5 |
|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | – | (2.000000, 1.000000) | – |
| 1 | (1.127204, 0.054887) | 0.004363 | (1.127146, 0.054883) | 0.004363 |
| 2 | (0.993331, 0.305731) | 0.501551 | (0.993328, 0.305734) | 0.501670 |
| 3 | (0.992780, 0.306440) | 3.889725 | (0.992780, 0.306440) | 3.889832 |
| 4 | (0.992780, 0.306440) | 3.916553 | (0.992780, 0.306440) | 3.916553 |
Table 3. Method—Order 6.

| n | Noor–Waseem method [16]: a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^6 | Newton–Simpson method (4): a_n = (t_1^n, t_2^n) | Ratio ϵ_{n+1}/ϵ_n^6 |
|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | – | (2.000000, 1.000000) | – |
| 1 | (1.067979, 0.174843) | 0.001211 | (1.067906, 0.174885) | 0.001211 |
| 2 | (0.992784, 0.306436) | 1.383068 | (0.992784, 0.306436) | 1.384152 |
| 3 | (0.992780, 0.306440) | 5.509412 | (0.992780, 0.306440) | 5.509414 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
