Article

Order of Convergence and Dynamics of Newton–Gauss-Type Methods

1 Department of Mathematical & Computational Science, National Institute of Technology Karnataka, Surathkal 575 025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
Fractal Fract. 2023, 7(2), 185; https://doi.org/10.3390/fractalfract7020185
Submission received: 9 January 2023 / Revised: 4 February 2023 / Accepted: 9 February 2023 / Published: 13 February 2023
(This article belongs to the Section General Mathematics, Analysis)

Abstract

On the basis of the iterative technique designed by Zhongli Liu in 2016, with convergence orders three and five, an extension to order six is presented in this paper. The study of high-convergence-order iterative methods under weak conditions is of extreme importance, because a higher order means that fewer iterations are needed to achieve a predetermined error tolerance. To enhance the practicality of Liu's methods, the convergence analysis is carried out without the application of Taylor expansion and requires the operator to be only two times differentiable, unlike in earlier studies. A semilocal convergence analysis is also provided. Furthermore, numerical experiments verifying the convergence criteria, comparative studies and the dynamics of the methods are discussed for better interpretation.
MSC:
49M15; 47H99; 65D99; 65J15; 65G99

1. Introduction

Driven by the needs of applications in applied mathematics [1,2,3,4,5,6], finding a solution $x^*$ of the equation
$$L(x) = 0,$$
is considered significant, where $L:\Omega\subseteq T\to T_1$ is a nonlinear Fréchet differentiable operator. Here and below, $T$ and $T_1$ are Banach spaces and $\Omega$ is an open convex set in $T$. Due to the difficulty of finding closed-form solutions to (1), it is usually necessary to adopt iterative methods to find $x^*$. Newton's method has been modified in numerous ways in the studies found in [7,8,9,10,11,12,13,14]. This paper is based on the Newton–Gauss iterative method studied by Zhongli Liu et al. in [15]. Precisely, in [15], the following method (see (2) below) was constructed by employing the two-point Gauss quadrature formula, given for each $n = 0, 1, 2, \ldots$ as
$$y_n = x_n - L'(x_n)^{-1}L(x_n), \qquad x_{n+1} = x_n - 2A_n^{-1}L(x_n),$$
where $A_n = L'(u_n - v_n) + L'(u_n + v_n)$, $u_n = \dfrac{x_n + y_n}{2}$ and $v_n = \dfrac{y_n - x_n}{2\sqrt{3}}$.
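In computational terms, one step of method (2) is a Newton predictor followed by a correction built from the two Gauss quadrature nodes. The following minimal NumPy sketch illustrates this; the toy system $F$ and its Jacobian $J$ below are illustrative choices, not taken from the paper.

```python
import numpy as np

def newton_gauss_step(F, J, x):
    """One step of method (2): Newton predictor y_n, then the corrected
    step x_{n+1} = x_n - 2 A_n^{-1} F(x_n), with A_n assembled from the
    two Gauss quadrature nodes u_n - v_n and u_n + v_n."""
    Fx = F(x)
    y = x - np.linalg.solve(J(x), Fx)        # y_n = x_n - F'(x_n)^{-1} F(x_n)
    u = (x + y) / 2                          # u_n = (x_n + y_n)/2
    v = (y - x) / (2 * np.sqrt(3))           # v_n = (y_n - x_n)/(2 sqrt(3))
    A = J(u - v) + J(u + v)                  # A_n = F'(u_n - v_n) + F'(u_n + v_n)
    return x - 2 * np.linalg.solve(A, Fx)

# Illustrative diagonal system with root (1, 2): F(x) = (x1^2 - 1, x2^2 - 4).
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 4.0])
J = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
x = np.array([1.5, 2.5])
for _ in range(6):
    x = newton_gauss_step(F, J, x)
```

Each step costs two linear solves, one with $J(x_n)$ and one with $A_n$, which is what makes the cubic order attractive relative to plain Newton.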
This was further extended to a method of order five given by
$$y_n = x_n - L'(x_n)^{-1}L(x_n), \qquad z_n = x_n - 2A_n^{-1}L(x_n), \qquad x_{n+1} = z_n - L'(y_n)^{-1}L(z_n).$$
Recall [1] that a sequence $\{x_n\}$ converges to $x^*$ with convergence order $p > 0$ if, for $\epsilon_n^x = \|x_n - x^*\|$,
$$\epsilon_{n+1}^x \le c\,(\epsilon_n^x)^p,$$
where $c$ is called the rate of convergence.
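In practice the order $p$ can be estimated from three consecutive errors via $p \approx \ln(\epsilon_{n+1}/\epsilon_n)/\ln(\epsilon_n/\epsilon_{n-1})$; this is the ratio reported later in the tables. A small sketch using Newton's method (order two) on the illustrative scalar equation $t^2 - 2 = 0$:

```python
import math

# Newton's method on f(t) = t^2 - 2 with root t* = sqrt(2).
t_star = math.sqrt(2.0)
t = 1.5
errors = []
for _ in range(3):
    t = t - (t * t - 2.0) / (2.0 * t)       # Newton step
    errors.append(abs(t - t_star))

# Computational order of convergence from three consecutive errors;
# it should be close to 2 for Newton's method.
p = math.log(errors[-1] / errors[-2]) / math.log(errors[-2] / errors[-3])
```

Only a few iterates can be used this way, since once the error reaches machine precision the ratio becomes meaningless.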
The current level of research is established in [7,8,9,10,11,12,13,14,15]. In these references, the convergence order was established using higher order derivatives. Thus, these results cannot be applied to solve equations involving operators that are not at least five times differentiable (when the order of the method is four, say). This limits their applicability, and other problems also exist:
(a)
No computable error bounds are provided;
(b)
There is no information on the uniqueness domain of the solution;
(c)
The local convergence analysis is provided only when $T = T_1 = \mathbb{R}^m$;
(d)
The semilocal convergence analysis, which is more interesting than the local one, is not provided.
We address all of these concerns in the more general setting of a Banach space using generalized conditions and majorizing sequences. Moreover, our extended method (see (4)) is of order six, not five.
The study of the order of convergence of (2) and (3) in [15] involves Taylor expansion. The major concern in [15] is the necessity of assumptions on the derivatives of $L$ up to order five, which reduces the utility of the above methods. As an example, consider $f:[-2,2]\to\mathbb{R}$ given by
$$f(t) = \begin{cases} \frac{1}{20}\left(t^4\log t^2 + t^6 - t^5\right) & \text{if } t \in [-2,2]\setminus\{0\}, \\ 0 & \text{if } t = 0. \end{cases}$$
It follows that
$$f'(t) = \frac{1}{20}\left(2t^3 + 4t^3\log t^2 + 6t^5 - 5t^4\right),$$
$$f''(t) = \frac{1}{20}\left(14t^2 + 12t^2\log t^2 + 30t^4 - 20t^3\right),$$
$$f'''(t) = \frac{1}{20}\left(52t + 24t\log t^2 + 120t^3 - 60t^2\right),$$
$$f^{IV}(t) = \frac{1}{20}\left(24\log t^2 + 360t^2 - 120t + 100\right).$$
Observe that $f^{IV}(t)$ is unbounded near $t = 0$, because of the $\log t^2$ term.
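This claim is easy to check numerically; the sketch below evaluates $f^{IV}$ at $t = 10^{-k}$ and shows $|f^{IV}(t)|$ growing without bound as $t \to 0$:

```python
import math

def f4(t):
    # f^{IV}(t) = (1/20)(24 log t^2 + 360 t^2 - 120 t + 100), for t != 0
    return (24.0 * math.log(t * t) + 360.0 * t * t - 120.0 * t + 100.0) / 20.0

# The log t^2 term dominates near the origin, so |f^{IV}| blows up.
values = [abs(f4(10.0 ** -k)) for k in range(1, 8)]
```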
Our study is solely based on obtaining the required order of convergence of the above methods without using Taylor expansion and with assumptions only on $L'$ and $L''$. This enhances the applicability of the considered iterative methods to a wider range of practical problems. Our approach can be used to study other similar methods [5,6,16,17,18].
Furthermore, we have extended this to a method of order six, using the ideas in [3,4], defined for each $n = 0, 1, 2, \ldots$ by
$$y_n = x_n - L'(x_n)^{-1}L(x_n), \qquad z_n = x_n - 2A_n^{-1}L(x_n), \qquad x_{n+1} = z_n - L'(z_n)^{-1}L(z_n).$$
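Computationally, methods (3) and (4) share the first two substeps and differ only in where the Jacobian of the final correction is evaluated. A minimal NumPy sketch, again on an illustrative toy system that is not from the paper:

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def first_two_substeps(F, J, x):
    """Shared substeps of methods (3) and (4): Newton predictor y_n and
    the method-(2) iterate z_n."""
    Fx = F(x)
    y = x - np.linalg.solve(J(x), Fx)
    u, v = (x + y) / 2, (y - x) / (2 * SQRT3)
    A = J(u - v) + J(u + v)
    z = x - 2 * np.linalg.solve(A, Fx)
    return y, z

def step_order5(F, J, x):
    """Fifth-order method (3): final correction uses the Jacobian at y_n."""
    y, z = first_two_substeps(F, J, x)
    return z - np.linalg.solve(J(y), F(z))

def step_order6(F, J, x):
    """Sixth-order method (4): final correction uses the Jacobian at z_n."""
    _, z = first_two_substeps(F, J, x)
    return z - np.linalg.solve(J(z), F(z))

# Illustrative diagonal system with root (1, 2).
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 4.0])
J = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
x5 = x6 = np.array([1.5, 2.5])
for _ in range(4):
    x5, x6 = step_order5(F, J, x5), step_order6(F, J, x6)
```

Since $z_n$ is a better approximation than $y_n$, evaluating the Jacobian there lifts the order from five to six at the cost of one Jacobian evaluation.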
The outline of the article is as follows. We discuss the convergence of the methods (2), (3) and (4) in Section 2, Section 3 and Section 4 respectively. Semilocal convergence of the methods is developed in Section 5. Section 6 deals with examples. Section 7 is dedicated to the dynamics and the basins of attraction of the methods (2), (3) and (4). Section 8 gives the conclusion.

2. Convergence Analysis of (2)

Hereafter, $B(x^*,\tau) = \{x \in T : \|x - x^*\| < \tau\}$ and $\bar{B}(x^*,\tau) = \{x \in T : \|x - x^*\| \le \tau\}$ for some $\tau > 0$.
The following assumptions are made in our study:
(A1)
a simple solution $x^*$ of (1) exists and $L'(x^*)^{-1} \in \mathcal{L}(T_1, T)$;
(A2)
$\|L'(x^*)^{-1}(L'(w_1) - L'(w_2))\| \le L\|w_1 - w_2\|$ for all $w_1, w_2 \in \Omega$;
(A3)
$\|L'(x^*)^{-1}(L''(w_1) - L''(w_2))\| \le L_2\|w_1 - w_2\|$ for all $w_1, w_2 \in \Omega$;
(A4)
$\|L'(x^*)^{-1}L''(w_2)\| \le L_1$ for all $w_2 \in \Omega$;
(A5)
$\|L'(w_1)^{-1}(L'(w_1) - L'(w_2))\| \le L_3\|w_1 - w_2\|$ for all $w_1, w_2 \in \Omega$;
(A6)
$B(x^*, \frac{1+\sqrt{3}}{\sqrt{3}}r) \subseteq \Omega$, for a parameter $r > 0$ to be specified in what follows, where $L > 0$, $L_1 > 0$, $L_2 > 0$ and $L_3 > 0$ are scalars.
We define the functions $\phi, \phi_1, h_1 : [0, \frac{1}{L}) \to \mathbb{R}$ by
$$\phi(t) = \frac{L}{2(1 - Lt)},$$
$$\phi_1(t) = \frac{L}{2}\left(1 + \frac{Lt}{2(1 - Lt)}\right)$$
and let $h_1(t) = \phi_1(t)t - 1$. Observe that $h_1$ is a continuous nondecreasing function. Furthermore, $h_1(0) = -1 < 0$ and $\lim_{t \to (\frac{1}{L})^-} h_1(t) = +\infty$. Therefore, there exists a smallest zero $r_1 \in (0, \frac{1}{L})$ of $h_1(t) = 0$.
Let the functions $\phi_2, h_2 : [0, \frac{1}{L}) \to \mathbb{R}$ be defined by
$$\phi_2(t) = \frac{1}{1 - \phi_1(t)t}\left[\frac{(1+\sqrt{3})L_1\phi(t)}{2\sqrt{3}} + \frac{L_2(1 + \phi(t)t)}{24}\right]$$
and $h_2(t) = \phi_2(t)t^2 - 1$. Observe that $h_2$ is a nondecreasing and continuous function, $h_2(0) = -1 < 0$ and $\lim_{t \to r_1^-} h_2(t) = +\infty$. Therefore, $h_2$ has a smallest zero $\rho \in (0, r_1)$.
Let
$$r = \min\left\{\frac{2}{3L}, \rho\right\}.$$
Then, $0 \le \phi(t)t < 1$, $0 \le \phi_1(t)t < 1$ and $0 \le \phi_2(t)t^2 < 1$ for all $t \in [0, r)$.
For convenience, we use the notation $\epsilon_n^x = \|x_n - x^*\|$, $\epsilon_n^y = \|y_n - x^*\|$, $\epsilon_n^z = \|z_n - x^*\|$ and $B^* = B(x^*, r) \setminus \{x^*\}$.
Theorem 1.
Assume (A1)–(A6) hold, and let $r$ be as in (5). Then, the sequence $\{x_n\}$ given by the formula (2) exists for $x_0 \in B^*$, $x_n \in \bar{B}(x^*, r)$ for all $n = 0, 1, 2, \ldots$ and $\lim_{n \to \infty} x_n = x^*$. Furthermore,
$$\epsilon_n^y \le \phi(r)(\epsilon_n^x)^2$$
and
$$\epsilon_{n+1}^x \le \phi_2(r)(\epsilon_n^x)^3.$$
Proof. 
This is proved using induction. Let $x \in B(x^*, r)$. Using (A2) gives
$$\|L'(x^*)^{-1}(L'(x) - L'(x^*))\| \le L\|x - x^*\| \le Lr < 1.$$
By the Banach result on invertible operators [1,2], it follows that
$$\|L'(x)^{-1}L'(x^*)\| \le \frac{1}{1 - L\|x - x^*\|}.$$
The Mean Value Theorem (MVT) leads to
$$L(\xi) = L(\xi) - L(x^*) = \int_0^1 L'(x^* + t(\xi - x^*))\,dt\,(\xi - x^*),$$
for any $\xi \in T$. Next, because $y_0 = x_0 - L'(x_0)^{-1}L(x_0)$, by (9) (with $\xi = x_0$), we have
$$y_0 - x^* = x_0 - x^* - L'(x_0)^{-1}L(x_0) = \int_0^1 L'(x_0)^{-1}\big(L'(x_0) - L'(x^* + t(x_0 - x^*))\big)\,dt\,(x_0 - x^*),$$
so
$$\epsilon_0^y \le \int_0^1 \|L'(x_0)^{-1}L'(x^*)\|\,\|L'(x^*)^{-1}\big(L'(x_0) - L'(x^* + t(x_0 - x^*))\big)\|\,dt\,\epsilon_0^x.$$
Thus, by our assumptions,
$$\epsilon_0^y \le \frac{L}{2(1 - L\epsilon_0^x)}(\epsilon_0^x)^2 = \phi(\epsilon_0^x)(\epsilon_0^x)^2 < \epsilon_0^x < r.$$
So, the iterate $y_0 \in B(x^*, r)$ and the result holds for $n = 0$. In addition, $A_0^{-1}$ is well defined. In fact,
$$\begin{aligned} \|(2L'(x^*))^{-1}(A_0 - 2L'(x^*))\| &= \|(2L'(x^*))^{-1}(L'(u_0 - v_0) + L'(u_0 + v_0) - 2L'(x^*))\| \\ &\le \frac{1}{2}\bigg[\Big\|L'(x^*)^{-1}\Big(L'\Big(\frac{(1+\sqrt{3})x_0 + (\sqrt{3}-1)y_0}{2\sqrt{3}}\Big) - L'(x^*)\Big)\Big\| + \Big\|L'(x^*)^{-1}\Big(L'\Big(\frac{(\sqrt{3}-1)x_0 + (\sqrt{3}+1)y_0}{2\sqrt{3}}\Big) - L'(x^*)\Big)\Big\|\bigg] \\ &\le \frac{L}{2}\bigg[\Big\|\frac{(1+\sqrt{3})x_0 + (\sqrt{3}-1)y_0}{2\sqrt{3}} - x^*\Big\| + \Big\|\frac{(\sqrt{3}-1)x_0 + (\sqrt{3}+1)y_0}{2\sqrt{3}} - x^*\Big\|\bigg] \\ &\le \frac{L}{2}\big[\epsilon_0^x + \epsilon_0^y\big] \le \frac{L}{2}\Big[1 + \frac{L}{2(1 - Lr)}\epsilon_0^x\Big]\epsilon_0^x \le \phi_1(r)\epsilon_0^x < 1, \end{aligned}$$
where we also used
$$\|u_0 + v_0 - x^*\| = \Big\|\frac{\sqrt{3}(x_0 - x^*) + \sqrt{3}(y_0 - x^*) + (y_0 - x^*) + (x^* - x_0)}{2\sqrt{3}}\Big\| \le \frac{1+\sqrt{3}}{\sqrt{3}}\,r$$
and
$$\|u_0 - v_0 - x^*\| = \Big\|\frac{\sqrt{3}(x_0 - x^*) + \sqrt{3}(y_0 - x^*) + (x_0 - x^*) + (x^* - y_0)}{2\sqrt{3}}\Big\| \le \frac{1+\sqrt{3}}{\sqrt{3}}\,r.$$
That is, $u_0 + v_0, u_0 - v_0 \in B(x^*, \frac{1+\sqrt{3}}{\sqrt{3}}r)$. Therefore, by the Banach lemma on invertible operators [1], $A_0^{-1}$ exists and
$$\|A_0^{-1}L'(x^*)\| \le \frac{1}{2(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}.$$
From (2) and (9), we obtain
$$\begin{aligned} x_1 - x^* &= x_0 - x^* - 2A_0^{-1}L(x_0) = A_0^{-1}\Big(A_0 - 2\int_0^1 L'(x^* + t(x_0 - x^*))\,dt\Big)(x_0 - x^*) \\ &= A_0^{-1}\Big(L'(u_0 - v_0) + L'(u_0 + v_0) - 2\int_0^1 L'(x^* + t(x_0 - x^*))\,dt\Big)(x_0 - x^*) \\ &= A_0^{-1}\int_0^1\Big[L'\Big(\frac{(1+\sqrt{3})x_0 + (\sqrt{3}-1)y_0}{2\sqrt{3}}\Big) - L'(x^* + t(x_0 - x^*))\Big]dt\,(x_0 - x^*) \\ &\quad + A_0^{-1}\int_0^1\Big[L'\Big(\frac{(\sqrt{3}-1)x_0 + (\sqrt{3}+1)y_0}{2\sqrt{3}}\Big) - L'(x^* + t(x_0 - x^*))\Big]dt\,(x_0 - x^*). \end{aligned}$$
Let
$$G_0(\theta, t) = L''\Big(x^* + t(x_0 - x^*) + \theta\Big(\frac{(1+\sqrt{3})x_0 + (\sqrt{3}-1)y_0}{2\sqrt{3}} - x^* - t(x_0 - x^*)\Big)\Big),$$
$$H_0(\theta, t) = L''\Big(x^* + t(x_0 - x^*) + \theta\Big(\frac{(\sqrt{3}-1)x_0 + (\sqrt{3}+1)y_0}{2\sqrt{3}} - x^* - t(x_0 - x^*)\Big)\Big).$$
By the MVT for second derivatives, (12) gives the decomposition
$$x_1 - x^* = M_1 + M_2 + M_3 + M_4 + M_5,$$
where
$$M_1 = \frac{A_0^{-1}}{2}\int_0^1\int_0^1 G_0(\theta, t)\,d\theta\,(1 - 2t)\,dt\,(x_0 - x^*)^2,$$
$$M_2 = -\frac{(1+\sqrt{3})A_0^{-1}}{2\sqrt{3}}\int_0^1\int_0^1 G_0(\theta, t)\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*),$$
$$M_3 = \frac{A_0^{-1}}{2}\int_0^1\int_0^1 H_0(\theta, t)\,d\theta\,(1 - 2t)\,dt\,(x_0 - x^*)^2,$$
$$M_4 = \frac{(1+\sqrt{3})A_0^{-1}}{2\sqrt{3}}\int_0^1\int_0^1 H_0(\theta, t)\,d\theta\,(y_0 - x^*)\,dt\,(x_0 - x^*)$$
and
$$M_5 = \frac{A_0^{-1}}{2\sqrt{3}}\int_0^1\int_0^1 (G_0(\theta, t) - H_0(\theta, t))\,d\theta\,dt\,(x_0 - x^*)^2.$$
Then, by (12) and assumption (A4),
$$\|M_1\| \le \Big\|\frac{A_0^{-1}}{2}L'(x^*)\Big\|\,\max_t\Big\|\int_0^1 L'(x^*)^{-1}G_0(\theta, t)\,d\theta\Big\|\,\Big\|\int_0^1 (1 - 2t)\,(x_0 - x^*)^2\,dt\Big\| = 0.$$
Similarly, one can observe that
$$\|M_3\| = 0.$$
By the assumptions (A1)–(A5) and (12), we get
$$\|M_2\| \le \frac{1+\sqrt{3}}{2\sqrt{3}}\,L_1\,\frac{1}{2(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}\,\epsilon_0^y\,\epsilon_0^x.$$
Similarly,
$$\|M_4\| \le \frac{1+\sqrt{3}}{2\sqrt{3}}\,L_1\,\frac{1}{2(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}\,\epsilon_0^y\,\epsilon_0^x.$$
Note that
$$G_0(\theta, t) - H_0(\theta, t) = L''(X_1) - L''(X_2),$$
where
$$X_1 = x^* + t(x_0 - x^*) + \theta\Big(\frac{(1+\sqrt{3})x_0 + (\sqrt{3}-1)y_0}{2\sqrt{3}} - x^* - t(x_0 - x^*)\Big),$$
$$X_2 = x^* + t(x_0 - x^*) + \theta\Big(\frac{(\sqrt{3}-1)x_0 + (\sqrt{3}+1)y_0}{2\sqrt{3}} - x^* - t(x_0 - x^*)\Big).$$
Then
$$X_1 - X_2 = \frac{\theta}{\sqrt{3}}\,\big(x_0 - x^* - (y_0 - x^*)\big).$$
So, by (12) and assumption (A3),
$$\begin{aligned} \|M_5\| &\le \frac{L_2}{4\sqrt{3}(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}\int_0^1\int_0^1 \|X_1 - X_2\|\,d\theta\,dt\,(\epsilon_0^x)^2 \\ &\le \frac{L_2}{24(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}\,\big[\epsilon_0^x + \epsilon_0^y\big]\,(\epsilon_0^x)^2 \\ &\le \frac{L_2}{24(1 - \phi_1(\epsilon_0^x)\epsilon_0^x)}\,\big[1 + \phi(\epsilon_0^x)\epsilon_0^x\big]\,(\epsilon_0^x)^3. \end{aligned}$$
Combining (14)–(19) gives
$$\begin{aligned} \epsilon_1^x &\le \|M_1\| + \|M_2\| + \|M_3\| + \|M_4\| + \|M_5\| \\ &\le \frac{1}{1 - \phi_1(\epsilon_0^x)\epsilon_0^x}\bigg[\frac{(1+\sqrt{3})L_1\phi(\epsilon_0^x)}{2\sqrt{3}} + \frac{L_2(1 + \phi(\epsilon_0^x)\epsilon_0^x)}{24}\bigg](\epsilon_0^x)^3 \le \phi_2(r)(\epsilon_0^x)^3. \end{aligned}$$
Therefore, because $\phi_2(r)r^2 < 1$, we have $\|x_1 - x^*\| < r$, so the iterate $x_1 \in B(x^*, r)$.
The induction for (6) and (7) is completed if one replaces $x_0, y_0, x_1$ in the above conclusions with $x_n, y_n, x_{n+1}$, respectively. □

3. Convergence Analysis of (3)

Let $\phi_3, h_3 : [0, r_1) \to \mathbb{R}$ be given by
$$\phi_3(t) = L_3\Big(\phi(t) + \frac{1}{2}\phi_2(t)t\Big)\phi_2(t)$$
and $h_3(t) = \phi_3(t)t^4 - 1$. Then, $h_3(0) = -1$ and $\lim_{t \to r_1^-} h_3(t) = +\infty$. Therefore, $h_3$ has a smallest zero $r_2 \in (0, r_1)$. Let
$$R = \min\{r, r_2\}.$$
Then,
$$0 \le \phi_3(t)t^4 < 1 \quad \text{for all } t \in [0, R).$$
Set $B_1^* = B(x^*, R)\setminus\{x^*\}$. For Method (3), we have the following theorem:
Theorem 2.
Assume (A1)–(A6) hold, and let $R$ be as in (20). Then, the sequence $\{x_n\}$ given by the formula (3) exists for $x_0 \in B_1^*$, $x_n \in \bar{B}(x^*, R)$ for all $n = 0, 1, 2, \ldots$ and $\lim_{n \to \infty} x_n = x^*$. Furthermore,
$$\epsilon_n^y \le \phi(R)(\epsilon_n^x)^2,$$
$$\epsilon_n^z \le \phi_2(R)(\epsilon_n^x)^3$$
and
$$\epsilon_{n+1}^x \le \phi_3(R)(\epsilon_n^x)^5.$$
Proof. 
Note that (21) and (22) follow as in Theorem 1, by letting $r = R$ and $x_{n+1} = z_n$ in Theorem 1, and hence $z_n \in B(x^*, R)$. Observe that
$$x_{n+1} - x^* = z_n - x^* - L'(y_n)^{-1}L(z_n) = \int_0^1 L'(y_n)^{-1}\big[L'(y_n) - L'(x^* + t(z_n - x^*))\big]\,dt\,(z_n - x^*).$$
So, by (A5), we get
$$\begin{aligned} \epsilon_{n+1}^x &\le L_3\Big(\epsilon_n^y + \frac{1}{2}\epsilon_n^z\Big)\epsilon_n^z \le L_3\Big(\phi(\epsilon_n^x)(\epsilon_n^x)^2 + \frac{1}{2}\phi_2(\epsilon_n^x)(\epsilon_n^x)^3\Big)\phi_2(\epsilon_n^x)(\epsilon_n^x)^3 \\ &\le L_3\Big(\phi(\epsilon_n^x) + \frac{1}{2}\phi_2(\epsilon_n^x)\epsilon_n^x\Big)\phi_2(\epsilon_n^x)(\epsilon_n^x)^5 \le \phi_3(R)(\epsilon_n^x)^5. \ \Box \end{aligned}$$

4. Convergence Analysis of (4)

Let $\phi_4, h_4 : [0, r_1) \to \mathbb{R}$ be given by
$$\phi_4(t) = \frac{L_3}{2}(\phi_2(t))^2$$
and $h_4(t) = \phi_4(t)t^5 - 1$. Then, $h_4(0) = -1$ and $\lim_{t \to r_1^-} h_4(t) = +\infty$. Therefore, $h_4$ has a smallest zero $r_3 \in (0, r_1)$.
Let
$$R_1 = \min\{r, r_3\}.$$
Then,
$$0 \le \phi_4(t)t^5 < 1 \quad \text{for all } t \in [0, R_1).$$
Set $B_2^* = B(x^*, R_1)\setminus\{x^*\}$. Similarly, for Method (4), we develop:
Theorem 3.
Assume that (A1)–(A6) hold, and let $R_1$ be as in (24). Then, the sequence $\{x_n\}$ given by the formula (4) exists for $x_0 \in B_2^*$, $x_n \in \bar{B}(x^*, R_1)$ for all $n = 0, 1, 2, \ldots$ and $\lim_{n \to \infty} x_n = x^*$. Furthermore,
$$\epsilon_n^y \le \phi(R_1)(\epsilon_n^x)^2,$$
$$\epsilon_n^z \le \phi_2(R_1)(\epsilon_n^x)^3$$
and
$$\epsilon_{n+1}^x \le \phi_4(R_1)(\epsilon_n^x)^6.$$
Proof. 
Observe that (25) and (26) follow as in Theorem 2. Note that
$$x_{n+1} - x^* = z_n - x^* - L'(z_n)^{-1}L(z_n) = \int_0^1 L'(z_n)^{-1}\big[L'(z_n) - L'(x^* + t(z_n - x^*))\big]\,dt\,(z_n - x^*).$$
Hence, by (A5), we have
$$\epsilon_{n+1}^x \le \frac{L_3}{2}(\epsilon_n^z)^2 \le \frac{L_3}{2}(\phi_2(\epsilon_n^x))^2(\epsilon_n^x)^6 \le \phi_4(R_1)(\epsilon_n^x)^6. \ \Box$$
Next, we provide a uniqueness result for the solution $x^*$.
Proposition 1.
Suppose:
(1) there exist $\rho > 0$ and $K > 0$ such that $x^* \in B(x^*, \rho)$ satisfies $L(x^*) = 0$ and
$$\|L'(x^*)^{-1}(L'(x^*) - L'(x))\| \le K\|x^* - x\|$$
for each $x \in B(x^*, \rho)$;
(2) there exists $\rho_1 \ge \rho$ such that
$$\rho_1 < \frac{2}{K}.$$
Let $S = \bar{B}(x^*, \rho_1) \cap \Omega$. Then, the Equation (1) has a unique solution $x^*$ in $S$.
Proof. 
Let $\gamma \in S$ be such that $L(\gamma) = 0$ and define $M = \int_0^1 L'(x^* + \tau(\gamma - x^*))\,d\tau$. Then, by the conditions (28) and (29), we have
$$\|L'(x^*)^{-1}(M - L'(x^*))\| \le K\int_0^1 \tau\|x^* - \gamma\|\,d\tau \le \frac{K}{2}\rho_1 < 1.$$
So, the linear operator $M$ is invertible and
$$\gamma - x^* = M^{-1}(L(\gamma) - L(x^*)) = M^{-1}(0) = 0,$$
so $\gamma = x^*$. □

5. Semilocal Convergence

Generalized $\omega$-continuity conditions, as well as real majorizing sequences, are utilized to show the semilocal convergence of the three methods under the same set of conditions [1,2,5].
Suppose there exists a nondecreasing and continuous real function $\omega_0$ defined on $[0, +\infty)$ such that the function $\omega_0(t) - 1$ has a minimal root $\mu > 0$. Let $\omega$ be a nondecreasing and continuous real function on $[0, \mu)$.
We shall show that the following scalar sequences are majorizing for Method (2), Method (3) and Method (4), respectively, where $\alpha_0 = 0$, $\beta_0 \ge 0$ and, for each $n = 0, 1, \ldots$,
$$\begin{aligned} q_n &= \omega_0\Big(\frac{\sqrt{3}(\alpha_n + \beta_n) + \beta_n - \alpha_n}{2\sqrt{3}}\Big), \\ \bar{\omega}_n &= \xi_n^1 = \omega\Big(\frac{\sqrt{3}-1}{2\sqrt{3}}(\beta_n - \alpha_n)\Big) + \omega\Big(\frac{\sqrt{3}+1}{2\sqrt{3}}(\beta_n - \alpha_n)\Big) \quad \text{or} \quad \bar{\omega}_n = \xi_n^2 = 2(q_n + \omega_0(\alpha_n)), \\ \alpha_{n+1} &= \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{2(1 - q_n)}, \\ a_{n+1} &= (1 + \omega_0(\alpha_n))(\alpha_{n+1} - \beta_n) + \int_0^1 \omega(\theta(\alpha_{n+1} - \alpha_n))\,d\theta\,(\alpha_{n+1} - \alpha_n) \end{aligned}$$
and
$$\beta_{n+1} = \alpha_{n+1} + \frac{a_{n+1}}{1 - \omega_0(\alpha_{n+1})};$$
for Method (3),
$$\begin{aligned} \gamma_n &= \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{2(1 - q_n)}, \\ p_n &= \Big(1 + \int_0^1 \omega_0(\beta_n + \theta(\gamma_n - \beta_n))\,d\theta\Big)(\gamma_n - \beta_n) + \int_0^1 \omega(\theta(\beta_n - \alpha_n))\,d\theta\,(\beta_n - \alpha_n), \\ \alpha_{n+1} &= \gamma_n + \frac{p_n}{1 - \omega_0(\beta_n)} \end{aligned}$$
and
$$\beta_{n+1} = \alpha_{n+1} + \frac{a_{n+1}}{1 - \omega_0(\alpha_{n+1})};$$
and for Method (4),
$$\gamma_n = \beta_n + \frac{\bar{\omega}_n(\beta_n - \alpha_n)}{2(1 - q_n)}, \qquad \alpha_{n+1} = \gamma_n + \frac{p_n}{1 - \omega_0(\gamma_n)}$$
and
$$\beta_{n+1} = \alpha_{n+1} + \frac{a_{n+1}}{1 - \omega_0(\alpha_{n+1})}.$$
We choose in practice the smallest version of the possible sequences  ω ¯ n .
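To make these recursions concrete, the sketch below iterates the sequence (30) with the purely illustrative (hypothetical) choices $\omega_0(t) = \omega(t) = t$ and $\beta_0 = 0.1$; for linear $\omega$, the integral term becomes $(\alpha_{n+1}-\alpha_n)^2/2$ and $\bar{\omega}_n = \xi_n^1 = \beta_n - \alpha_n$.

```python
import math

# Majorizing sequence (30) with omega0(t) = omega(t) = t (an illustrative
# choice, not from the paper) and alpha_0 = 0, beta_0 = 0.1.
S3 = math.sqrt(3.0)
omega0 = omega = lambda t: t

alpha, beta = [0.0], [0.1]
for n in range(20):
    a_n, b_n = alpha[-1], beta[-1]
    q = omega0((S3 * (a_n + b_n) + b_n - a_n) / (2 * S3))
    # xi_n^1; for linear omega this collapses to (beta_n - alpha_n)
    w_bar = (omega((S3 - 1) / (2 * S3) * (b_n - a_n))
             + omega((S3 + 1) / (2 * S3) * (b_n - a_n)))
    a_next = b_n + w_bar * (b_n - a_n) / (2 * (1 - q))
    # a_{n+1}, with the integral of a linear omega done in closed form
    a_small = (1 + omega0(a_n)) * (a_next - b_n) + (a_next - a_n) ** 2 / 2
    alpha.append(a_next)
    beta.append(a_next + a_small / (1 - omega0(a_next)))
```

The interlacing $\alpha_n \le \beta_n \le \alpha_{n+1}$ and the rapid shrinking of the increments illustrate the Cauchy property required in Lemma 1.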
Next, the convergence is developed for these sequences.
Lemma 1.
Let $\lambda \in [0, \mu]$ be such that for each $n = 0, 1, 2, \ldots$,
$$\omega_0(\alpha_n) < 1, \quad q_n < 1 \quad \text{and} \quad \alpha_n < \lambda.$$
Then, the scalar sequences generated by the formulas (30), (31) and (32) are convergent to some $\lambda^* \in [0, \lambda]$.
Proof. 
It follows from the definitions and the condition (33) that these sequences are nondecreasing and bounded from above by $\lambda$. Hence, they are convergent to some $\lambda^* \in [0, \lambda]$. □
Remark 1.
The limit $\lambda^*$ is the least upper bound of each of these scalar sequences; it is unique, but it is not, in general, the same for the three methods.
The functions $\omega_0, \omega$, the parameter $\lambda^*$ and the scalar sequences are associated with the operator $L$ as follows:
(E1)
There exist $x_0 \in \Omega$ and a parameter $\beta_0 \ge 0$ such that the operator $L'(x_0)^{-1}$ is well defined and $\|L'(x_0)^{-1}L(x_0)\| \le \beta_0$.
(E2)
$\|L'(x_0)^{-1}(L'(y) - L'(x_0))\| \le \omega_0(\|y - x_0\|)$ for each $y \in \Omega$.
Set $\Omega_0 = \Omega \cap B(x_0, \mu)$.
(E3)
$\|L'(x_0)^{-1}(L'(v_1) - L'(v_2))\| \le \omega(\|v_1 - v_2\|)$ for each $v_1, v_2 \in \Omega_0$.
(E4)
The conditions of Lemma 1 hold, and
(E5)
$\bar{B}(x_0, \frac{1+\sqrt{3}}{\sqrt{3}}\lambda^*) \subseteq \Omega$.
Next, we first present the convergence of Method (2).
Theorem 4.
Under the conditions (E1)–(E5), the sequence $\{x_n\}$ generated by the formula (2) is convergent to some $x^* \in \bar{B}(x_0, \lambda^*)$ satisfying $L(x^*) = 0$ and
$$\|x^* - x_n\| \le \lambda^* - \alpha_n.$$
Proof. 
The assertions
$$\|y_k - x_k\| \le \beta_k - \alpha_k$$
and
$$\|x_{k+1} - y_k\| \le \alpha_{k+1} - \beta_k$$
shall be established by induction. The assertion (35) holds for $k = 0$ by (E1) and the first substep of Method (2). Then, we also have that $y_0 \in B(x_0, \lambda^*)$.
Then, as in (11), but using $\omega_0, x_0$ in place of $L, x^*$, respectively, we obtain
$$\begin{aligned} \|(2L'(x_0))^{-1}(A_k - 2L'(x_0))\| &\le \frac{1}{2}\bigg[\Big\|L'(x_0)^{-1}\Big(L'\Big(\frac{(1+\sqrt{3})x_k + (\sqrt{3}-1)y_k}{2\sqrt{3}}\Big) - L'(x_0)\Big)\Big\| + \Big\|L'(x_0)^{-1}\Big(L'\Big(\frac{(\sqrt{3}-1)x_k + (\sqrt{3}+1)y_k}{2\sqrt{3}}\Big) - L'(x_0)\Big)\Big\|\bigg] \\ &\le \frac{1}{2}\bigg[\omega_0\Big(\frac{\sqrt{3}\|x_k - x_0\| + \sqrt{3}\|y_k - x_0\| + \|y_k - x_k\|}{2\sqrt{3}}\Big) + \omega_0\Big(\frac{\sqrt{3}\|x_k - x_0\| + \sqrt{3}\|y_k - x_0\| + \|y_k - x_k\|}{2\sqrt{3}}\Big)\bigg] \\ &\le \omega_0\Big(\frac{\sqrt{3}(\alpha_k + \beta_k) + \beta_k - \alpha_k}{2\sqrt{3}}\Big) = q_k < 1, \end{aligned}$$
thus
$$\|A_k^{-1}L'(x_0)\| \le \frac{1}{2(1 - q_k)},$$
where we also used
$$\|u_k + v_k - x_0\| = \frac{1}{2\sqrt{3}}\|\sqrt{3}(x_k - x_0) + \sqrt{3}(y_k - x_0) + (y_k - x_0) + (x_0 - x_k)\| \le \frac{1+\sqrt{3}}{\sqrt{3}}\lambda^*$$
and
$$\|u_k - v_k - x_0\| = \frac{1}{2\sqrt{3}}\|\sqrt{3}(x_k - x_0) + \sqrt{3}(y_k - x_0) + (x_k - x_0) + (x_0 - y_k)\| \le \frac{1+\sqrt{3}}{\sqrt{3}}\lambda^*,$$
so $u_k + v_k, u_k - v_k \in B(x_0, \frac{1+\sqrt{3}}{\sqrt{3}}\lambda^*) \subseteq \Omega$.
Similarly, for $u \in B(x_0, \lambda^*)$,
$$\|L'(x_0)^{-1}(L'(u) - L'(x_0))\| \le \omega_0(\|u - x_0\|) \le \omega_0(\lambda^*) < 1,$$
thus
$$\|L'(u)^{-1}L'(x_0)\| \le \frac{1}{1 - \omega_0(\|u - x_0\|)}.$$
It follows from (37) and (38) that $y_k, x_{k+1}$ exist. Moreover, we get
$$x_{k+1} - y_k = (L'(x_k)^{-1} - 2A_k^{-1})L(x_k) = -A_k^{-1}(2L'(x_k) - A_k)L'(x_k)^{-1}L(x_k) = A_k^{-1}(2L'(x_k) - A_k)(y_k - x_k).$$
We also need the estimate
$$\begin{aligned} \|L'(x_0)^{-1}(A_k - 2L'(x_k))\| &\le \|L'(x_0)^{-1}(L'(u_k - v_k) - L'(x_k))\| + \|L'(x_0)^{-1}(L'(u_k + v_k) - L'(x_k))\| \\ &\le \omega\Big(\frac{\sqrt{3}-1}{2\sqrt{3}}\|y_k - x_k\|\Big) + \omega\Big(\frac{\sqrt{3}+1}{2\sqrt{3}}\|y_k - x_k\|\Big) \le \xi_k^1 \le \bar{\omega}_k \end{aligned}$$
or
$$\begin{aligned} \|L'(x_0)^{-1}(A_k - 2L'(x_k))\| &\le \|L'(x_0)^{-1}(L'(u_k - v_k) - L'(x_0))\| + \|L'(x_0)^{-1}(L'(u_k + v_k) - L'(x_0))\| + 2\|L'(x_0)^{-1}(L'(x_k) - L'(x_0))\| \\ &\le 2(q_k + \omega_0(\alpha_k)) = \xi_k^2 \le \bar{\omega}_k. \end{aligned}$$
By using (30) and (37)–(41), we get
$$\|x_{k+1} - y_k\| \le \|A_k^{-1}L'(x_0)\|\,\|L'(x_0)^{-1}(A_k - 2L'(x_k))\|\,\|y_k - x_k\| \le \frac{\bar{\omega}_k(\beta_k - \alpha_k)}{2(1 - q_k)} = \alpha_{k+1} - \beta_k$$
and
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - y_k\| + \|y_k - x_0\| \le \alpha_{k+1} - \beta_k + \beta_k - \alpha_0 = \alpha_{k+1} < \lambda^*.$$
Thus, the iterate $x_{k+1} \in B(x_0, \lambda^*)$ and the assertions (36) and (38) (for $u = x_k$) hold. Furthermore, by the first substep of Method (2), the iterate $y_{k+1}$ is well defined and
$$y_{k+1} - x_{k+1} = -\big[L'(x_{k+1})^{-1}L'(x_0)\big]\big[L'(x_0)^{-1}L(x_{k+1})\big].$$
Notice that
$$\begin{aligned} \|L'(x_0)^{-1}L(x_{k+1})\| &\le \int_0^1 \|L'(x_0)^{-1}(L'(x_k + \theta(x_{k+1} - x_k)) - L'(x_k))\|\,d\theta\,\|x_{k+1} - x_k\| + \|L'(x_0)^{-1}(L'(x_k) - L'(x_0) + L'(x_0))\|\,\|x_{k+1} - y_k\| \\ &\le \int_0^1 \omega(\theta\|x_{k+1} - x_k\|)\,d\theta\,\|x_{k+1} - x_k\| + (1 + \omega_0(\|x_k - x_0\|))\|x_{k+1} - y_k\| \\ &\le \int_0^1 \omega(\theta(\alpha_{k+1} - \alpha_k))\,d\theta\,(\alpha_{k+1} - \alpha_k) + (1 + \omega_0(\alpha_k))(\alpha_{k+1} - \beta_k) = a_{k+1}. \end{aligned}$$
Then, by (30), (43), (44) and (38) (for $u = x_{k+1}$),
$$\|y_{k+1} - x_{k+1}\| \le \|L'(x_{k+1})^{-1}L'(x_0)\|\,\|L'(x_0)^{-1}L(x_{k+1})\| \le \frac{a_{k+1}}{1 - \omega_0(\alpha_{k+1})} = \beta_{k+1} - \alpha_{k+1}$$
and
$$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le \beta_{k+1} - \alpha_{k+1} + \alpha_{k+1} - \alpha_0 = \beta_{k+1} < \lambda^*.$$
Hence, the induction for the assertions (35) and (36) is completed, and the iterates $x_k, y_k \in B(x_0, \lambda^*)$. By the condition (E4), the sequence $\{\alpha_n\}$ is Cauchy. It follows from (42) and (45) that the sequence $\{x_n\}$ is also Cauchy in $T$, so it converges to some $x^* \in \bar{B}(x_0, \lambda^*)$. In view of (44) and the continuity of the operator $L$, it follows that $L(x^*) = 0$ (let $k \to +\infty$). Let $m \ge 0$ be an integer. Then, we have the estimate
$$\|x_{k+m} - x_k\| \le \alpha_{k+m} - \alpha_k.$$
By letting $m \to +\infty$ in (48), we show the assertion (34). □
Similarly, we show the convergence for Method (3) and Method (4), respectively.
Theorem 5.
Under the conditions (E1)–(E5), the sequence $\{x_n\}$ given by (3) is convergent to some $x^* \in \bar{B}(x_0, \lambda^*)$ satisfying $L(x^*) = 0$ and
$$\|y_n - x_n\| \le \beta_n - \alpha_n,$$
$$\|z_n - y_n\| \le \gamma_n - \beta_n,$$
$$\|x_{n+1} - z_n\| \le \alpha_{n+1} - \gamma_n$$
and
$$\|x^* - x_n\| \le \lambda^* - \alpha_n.$$
Proof. 
The assertions (49), (50) and (52) are established as in Theorem 4. From the last substep of Method (3), we have
$$x_{k+1} - z_k = -L'(y_k)^{-1}L(z_k).$$
However,
$$\begin{aligned} L(z_k) &= L(z_k) - L(y_k) + L(y_k) \\ &= \int_0^1 L'(y_k + \theta(z_k - y_k))\,d\theta\,(z_k - y_k) + L(y_k) - L(x_k) - L'(x_k)(y_k - x_k) \\ &= \int_0^1 L'(y_k + \theta(z_k - y_k))\,d\theta\,(z_k - y_k) + \int_0^1 \big[L'(x_k + \theta(y_k - x_k)) - L'(x_k)\big]\,d\theta\,(y_k - x_k), \end{aligned}$$
so we get
$$\|L'(x_0)^{-1}L(z_k)\| \le \Big(1 + \int_0^1 \omega_0(\|y_k - x_0\| + \theta\|z_k - y_k\|)\,d\theta\Big)\|z_k - y_k\| + \int_0^1 \omega(\theta\|y_k - x_k\|)\,d\theta\,\|y_k - x_k\| = \bar{p}_k \le p_k.$$
Consequently, we obtain
$$\|x_{k+1} - z_k\| \le \|L'(y_k)^{-1}L'(x_0)\|\,\|L'(x_0)^{-1}L(z_k)\| \le \frac{\bar{p}_k}{1 - \omega_0(\|y_k - x_0\|)} \le \frac{p_k}{1 - \omega_0(\beta_k)} = \alpha_{k+1} - \gamma_k. \ \Box$$
Theorem 6.
Under the conditions (E1)–(E5), the sequence $\{x_n\}$ given by Method (4) is convergent to some $x^* \in \bar{B}(x_0, \lambda^*)$ satisfying $L(x^*) = 0$,
$$\|y_n - x_n\| \le \beta_n - \alpha_n,$$
$$\|z_n - y_n\| \le \gamma_n - \beta_n,$$
$$\|x_{n+1} - z_n\| \le \alpha_{n+1} - \gamma_n$$
and
$$\|x^* - x_n\| \le \lambda^* - \alpha_n,$$
where the sequences $\{\alpha_n\}, \{\beta_n\}, \{\gamma_n\}$ are given by the formula (32).
Proof. 
The proof is as in the proof of Theorem 5, but we use Method (4) to obtain instead
$$\|x_{k+1} - z_k\| \le \frac{\bar{p}_k}{1 - \omega_0(\|z_k - x_0\|)} \le \frac{p_k}{1 - \omega_0(\gamma_k)} = \alpha_{k+1} - \gamma_k. \ \Box$$
Next, a region containing the unique solution is specified.
Proposition 2.
Suppose:
(1) there exists a solution $b^* \in B(x_0, \mu_1)$ of the equation $L(x) = 0$ for some $\mu_1 > 0$;
(2) the condition (E2) holds on the ball $B(x_0, \mu_1)$;
(3) there exists $\mu_2 \ge \mu_1$ such that
$$\int_0^1 \omega_0((1-\theta)\mu_1 + \theta\mu_2)\,d\theta < 1.$$
Define the region $\Omega_1 = \Omega \cap \bar{B}(x_0, \mu_2)$. Then, the unique solution of the equation $L(x) = 0$ in the region $\Omega_1$ is $b^*$.
Proof. 
Define the linear operator
$$G = \int_0^1 L'(b^* + \theta(c^* - b^*))\,d\theta$$
for some $c^* \in \Omega_1$ with $L(c^*) = 0$. It follows, by the conditions (E2) and (59) in turn, that
$$\|L'(x_0)^{-1}(G - L'(x_0))\| \le \int_0^1 \omega_0((1-\theta)\|b^* - x_0\| + \theta\|c^* - x_0\|)\,d\theta \le \int_0^1 \omega_0((1-\theta)\mu_1 + \theta\mu_2)\,d\theta < 1.$$
Therefore, $G$ is invertible and we get
$$c^* - b^* = G^{-1}(L(c^*) - L(b^*)) = G^{-1}(0) = 0,$$
and consequently, $c^* = b^*$. □
Remark 2.
(1) Under all the conditions (E1)–(E5), we can choose $b^* = x^*$ and $\mu_1 = \lambda^*$.
(2) The limit point $\lambda^*$ in the condition (E5) can be replaced by $\lambda$ or $\mu$ given in Lemma 1.

6. Numerical Examples

This section gives two examples verifying the parameters used in the theorems discussed above, together with example comparisons of these methods with a Noor–Waseem-type method [19] and a Newton–Simpson-type method [7].
Example 1.
Let $T = T_1 = \mathbb{R}^3$ and $\Omega = \bar{B}(0, 1)$, equipped with the max norm, and let $x^* = (0, 0, 0)$. Consider $L$ on $\Omega$ defined for $w = (x, y, z)$ by
$$L(w) = \Big(\sin x,\ \frac{y^2}{5} + y,\ z\Big).$$
It follows that
$$L'(w) = \begin{pmatrix} \cos x & 0 & 0 \\ 0 & \frac{2y}{5} + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
and $L''(w)$ is the bilinear operator whose only nonzero actions are $-\sin x$ in the first component and $\frac{2}{5}$ in the second, i.e., $L''(w)(h, h) = \big(-\sin x\, h_1^2,\ \frac{2}{5}h_2^2,\ 0\big)$ for $h = (h_1, h_2, h_3)$. Then, (A1)–(A5) hold with $L = L_2 = L_3 = 1$ and $L_1 = 0.84147$. The parameters are $\rho = R = R_1 = r = 0.636863$, $r_3 = 0.679151$, $r_2 = 0.643479$ and $r_1 = 0.763932$.
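These radii can be reproduced numerically from the definitions of $\phi, \phi_1, \phi_2, \phi_3$ and $\phi_4$ in Sections 2–4; the sketch below locates the smallest zeros of the $h$-functions by a coarse scan followed by bisection.

```python
import math

# Radii for Example 1: L = L2 = L3 = 1 and L1 = sin(1) ~ 0.84147.
L = L2 = L3 = 1.0
L1 = math.sin(1.0)
S3 = math.sqrt(3.0)

phi = lambda t: L / (2 * (1 - L * t))
phi1 = lambda t: (L / 2) * (1 + L * t / (2 * (1 - L * t)))
phi2 = lambda t: ((1 + S3) * L1 * phi(t) / (2 * S3)
                  + L2 * (1 + phi(t) * t) / 24) / (1 - phi1(t) * t)
phi3 = lambda t: L3 * (phi(t) + 0.5 * phi2(t) * t) * phi2(t)
phi4 = lambda t: (L3 / 2) * phi2(t) ** 2

def smallest_zero(h, hi):
    """Smallest zero of h on (0, hi): coarse scan, then bisection."""
    t, step = 1e-8, 1e-4
    while t + step < hi:
        if h(t + step) >= 0:          # h starts at -1, so the first sign
            lo, up = t, t + step      # change brackets the smallest zero
            for _ in range(60):
                mid = (lo + up) / 2
                lo, up = (mid, up) if h(mid) < 0 else (lo, mid)
            return (lo + up) / 2
        t += step
    raise ValueError("no sign change found")

r1 = smallest_zero(lambda t: phi1(t) * t - 1, 1 / L)        # ~0.763932
rho = smallest_zero(lambda t: phi2(t) * t ** 2 - 1, r1)     # ~0.636863
r = min(2 / (3 * L), rho)                                   # r = rho here
r2 = smallest_zero(lambda t: phi3(t) * t ** 4 - 1, r1)      # ~0.643479
r3 = smallest_zero(lambda t: phi4(t) * t ** 5 - 1, r1)      # ~0.679151
R, R1 = min(r, r2), min(r, r3)
```

Note that $r_1 = 3 - \sqrt{5}$ exactly for these constants, which matches the reported $0.763932$.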
Example 2.
Let $T = T_1 = C[0, 1]$ and $\Omega = \bar{B}(0, 1)$. Define the operator $L$ on $\Omega$ by
$$L(\psi)(x) = \psi(x) - 5\int_0^1 x\tau\,\psi(\tau)^3\,d\tau.$$
We have that
$$L'(\psi)(\xi)(x) = \xi(x) - 15\int_0^1 x\tau\,\psi(\tau)^2\xi(\tau)\,d\tau \quad \text{for each } \xi \in \Omega.$$
Then, for $x^* = 0$, we can take $L = L_3 = 15$, $L_2 = 8.5$ and $L_1 = 31$. The parameters are $R = \rho = R_1 = r = 0.036784$, $r_3 = 0.041032$, $r_1 = 0.050929$ and $r_2 = 0.038704$.
In the next two examples [15,17], we compare the Noor–Waseem-type method studied in [19] and the Newton–Simpson-type method in [7] with the methods (2), (3) and (4).
Example 3.
Let $T = T_1 = \mathbb{R}^2$. We solve the system
$$3t_1^2 t_2 + t_2^2 = 1, \qquad t_1^4 + t_1 t_2^3 = 1.$$
The solutions are $(0.99277999485112325,\ 0.30644044651102043)$, $(-1.0066889708043846,\ 0.29942840960301947)$ and $(-0.42803253976074306,\ -1.3118929070441660)$.
We consider $(0.99277999485112325,\ 0.30644044651102043)$ for approximation using the methods (2), (3) and (4) with the initial guess $(2, 1)$. The obtained results are displayed in Table 1, Table 2 and Table 3.
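For instance, method (2) applied to this system can be sketched as follows; starting from $(2, 1)$, the iterates settle on the first solution.

```python
import numpy as np

# Example 3 under method (2), from the initial guess (2, 1).
def F(t):
    t1, t2 = t
    return np.array([3 * t1**2 * t2 + t2**2 - 1, t1**4 + t1 * t2**3 - 1])

def J(t):
    # Jacobian of F, computed by hand.
    t1, t2 = t
    return np.array([[6 * t1 * t2, 3 * t1**2 + 2 * t2],
                     [4 * t1**3 + t2**3, 3 * t1 * t2**2]])

x = np.array([2.0, 1.0])
for _ in range(10):
    Fx = F(x)
    y = x - np.linalg.solve(J(x), Fx)                      # Newton predictor
    u, v = (x + y) / 2, (y - x) / (2 * np.sqrt(3))         # quadrature nodes
    x = x - 2 * np.linalg.solve(J(u - v) + J(u + v), Fx)   # corrected step
```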
Remark 3.
Note that the columns corresponding to the ratios in the tables show that Methods (2), (3) and (4) are of orders $3$, $5$ and $6$, respectively (ignoring the first few iterates).
Example 4.
Let $T = T_1 = \mathbb{R}^2$. Consider the system of equations
$$(t_1 - 1)^4 + e^{-t_2} - t_2^2 + 3t_2 + 1 = 0, \qquad 4\sin(t_1 - 1) - \ln(t_1^2 - t_1 + 1) - t_2^2 = 0.$$
We approximate the solution $(1.2713843079501316,\ -0.8808190731026610)$ using these methods, with the initial guess $(1, -1.5)$. The obtained results are displayed in Table 4, Table 5 and Table 6.

7. Basins of Attraction

To obtain the convergence regions of Methods (2), (3) and (4), we study the basins of attraction (BA) (i.e., the collection of all initial points from which the iterative method converges to a solution of a given equation) and the Julia sets (JS) (i.e., the complements of the basins of attraction) [18]. In fact, we study the BA associated with the roots of the three systems of equations given in Examples 5–7.
Example 5.
$$s_1^3 - s_2 = 0, \qquad s_2^3 - s_1 = 0.$$
The solutions are $(-1, -1)$, $(0, 0)$ and $(1, 1)$.
Example 6.
$$3s_1^2 s_2 - s_2^3 = 0, \qquad s_1^3 - 3s_1 s_2^2 - 1 = 0.$$
The solutions are $\big(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\big)$, $\big(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\big)$ and $(1, 0)$.
Example 7.
$$s_1^2 + s_2^2 - 4 = 0, \qquad 3s_1^2 + 7s_2^2 - 16 = 0.$$
The solutions are $(\sqrt{3}, 1)$, $(\sqrt{3}, -1)$, $(-\sqrt{3}, 1)$ and $(-\sqrt{3}, -1)$.
The region $\mathcal{R} = \{(x, y) \in \mathbb{R}^2 : -2 \le x \le 2,\ -2 \le y \le 2\}$, which contains all the roots of Examples 5–7, is used to plot the BA and JS. We choose an equidistant grid of $401 \times 401$ points in $\mathcal{R}$ as the initial guesses $x_0$ for Methods (2), (3) and (4). A tolerance level of $10^{-8}$ and a maximum of 50 iterations are used. A color is assigned to each attraction basin corresponding to each root, and if the desired tolerance is not reached within the fixed number of iterations, we assign the color black (i.e., we decide that the iterative method starting at $x_0$ does not converge to any of the roots). In this way, we distinguish the BA by their respective colors for the distinct roots of each method.
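This procedure is straightforward to sketch in code. The NumPy version below computes the basins for Example 5 under method (2), with the paper's tolerance $10^{-8}$ and 50-iteration cap, but on a coarser $41 \times 41$ grid so the sketch stays fast.

```python
import numpy as np

np.seterr(all="ignore")  # diverging grid points may overflow harmlessly

# Example 5: s1^3 - s2 = 0, s2^3 - s1 = 0, with roots (-1,-1), (0,0), (1,1).
F = lambda s: np.array([s[0]**3 - s[1], s[1]**3 - s[0]])
J = lambda s: np.array([[3 * s[0]**2, -1.0], [-1.0, 3 * s[1]**2]])
roots = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])

def basin_index(x0, tol=1e-8, itmax=50):
    """Index of the root that method (2) reaches from x0, or -1 for a
    Julia-set point (no convergence within itmax iterations)."""
    x = np.array(x0, dtype=float)
    for _ in range(itmax):
        try:
            Fx = F(x)
            y = x - np.linalg.solve(J(x), Fx)
            u, v = (x + y) / 2, (y - x) / (2 * np.sqrt(3))
            x = x - 2 * np.linalg.solve(J(u - v) + J(u + v), Fx)
        except np.linalg.LinAlgError:
            return -1
        d = np.linalg.norm(roots - x, axis=1)
        if d.min() < tol:
            return int(d.argmin())
    return -1

grid = np.linspace(-2.0, 2.0, 41)
basins = [[basin_index((a, b)) for a in grid] for b in grid]
```

Plotting `basins` with one color per index (and black for -1) reproduces pictures of the kind shown in Figure 1.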
Figure 1, Figure 2 and Figure 3 demonstrate the BA corresponding to each root of the above examples (Examples 5–7) for Methods (2), (3) and (4). The JS (black region), which contains all the initial points from which the iterative method does not converge to any of the roots, can easily be observed in the figures.
All the calculations in this paper were performed on a 16-core 64-bit Windows machine with Intel Core i7-10700 CPU @ 2.90GHz, using MATLAB.
In Figure 1 (corresponding to Example 5), the red and green regions are the sets of initial points from which the iterates (2), (3) and (4) converge to $(1, 1)$ and $(-1, -1)$ (one color per root), and the blue region is the set of initial points from which they converge to $(0, 0)$. The black region represents the Julia set.
In Figure 2 (corresponding to Example 6), the red and blue regions are the sets of initial points from which the iterates (2), (3) and (4) converge to $\big(-\tfrac{1}{2}, \tfrac{\sqrt{3}}{2}\big)$ and $\big(-\tfrac{1}{2}, -\tfrac{\sqrt{3}}{2}\big)$ (one color per root), and the green region is the set of initial points from which they converge to $(1, 0)$. The black region represents the Julia set.
In Figure 3 (corresponding to Example 7), the red, blue, green and yellow regions are the sets of initial points from which the iterates (2), (3) and (4) converge to the four solutions $(\pm\sqrt{3}, \pm 1)$, one color per root. The black region represents the Julia set.

8. Conclusions

In this article, Method (2) is extended to methods with better orders of convergence. The analyses of the local and semilocal convergence criteria for these methods, Methods (2), (3) and (4), are carried out with the assumptions imposed only on the first and second derivatives of the operator involved and without the application of Taylor series expansion. Additionally, the Fatou and Julia sets corresponding to these methods, including appropriate comparisons and examples, are displayed, thus verifying the theoretical approach.

Author Contributions

Conceptualization, R.S., S.G., I.K.A. and J.P.; methodology, R.S., S.G., I.K.A. and J.P.; software, R.S., S.G., I.K.A. and J.P.; validation, R.S., S.G., I.K.A. and J.P.; formal analysis, R.S., S.G., I.K.A. and J.P.; investigation, R.S., S.G., I.K.A. and J.P.; resources, R.S., S.G., I.K.A. and J.P.; data curation, R.S., S.G., I.K.A. and J.P.; writing—original draft preparation, R.S., S.G., I.K.A. and J.P.; writing—review and editing, R.S., S.G., I.K.A. and J.P.; visualization, R.S., S.G., I.K.A. and J.P.; supervision, R.S., S.G., I.K.A. and J.P.; project administration, S.G., J.P., R.S. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors Jidesh Padikkal and Santhosh George wish to thank the SERB, Govt. of India, for the Project No. CRG/2021/004776.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

1. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022.
2. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Schemes; Elsevier, Academic Press: New York, NY, USA, 2018.
3. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
4. Cordero, A.; Martínez, E.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2012, 231, 541–551.
5. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131.
6. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839... for solving nonlinear least squares problems. Appl. Math. Applic. 2005, 161, 253–264.
7. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
8. Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261.
9. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635.
10. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782.
11. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169.
12. Khirallah, M.Q.; Hafiz, M.A. Novel three order methods for solving a system of nonlinear equations. Bull. Math. Sci. Appl. 2012, 2, 1–14.
13. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. J. Comput. Math. Appl. 2009, 57, 101–106.
14. Podisuk, M.; Chundang, U.; Sanprasert, W. Single-step formulas and multi-step formulas of integration method for solving the initial value problem of ordinary differential equation. Appl. Math. Comput. 2007, 190, 1438–1444.
15. Liu, Z.; Zheng, Q.; Huang, C. Third- and fifth-order Newton–Gauss methods for solving nonlinear equations with n variables. Appl. Math. Comput. 2016, 290, 250–257.
16. Behl, R.; Maroju, P.; Martínez, E.; Singh, S. A study of the local convergence of a fifth order iterative scheme. Indian J. Pure Appl. Math. 2020, 51, 439–455.
17. Iliev, A.; Iliev, I. Numerical method with order t for solving system nonlinear equations. In Proceedings of the Collection of Works from the Scientific Conference Dedicated to 30 Years of FMI, Plovdiv, Bulgaria, 1–3 November 2000; pp. 105–112.
18. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538.
19. George, S.; Sadananda, R.; Jidesh, P.; Argyros, I.K. On the Order of Convergence of Noor-Waseem Method. Mathematics 2022, 10, 4544.
Figure 1. Dynamics of (2), (3) and (4), respectively, with the basins of attraction (BA), for Example 5.
Figure 2. Dynamics of (2), (3) and (4), respectively, with the basins of attraction (BA), for Example 6.
Figure 3. Dynamics of (2), (3) and (4), respectively, with the basins of attraction (BA), for Example 7.
Table 1. Methods of order 3.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Ratio ‖ε_{k+1}‖/‖ε_k‖^3 | Newton–Simpson method [7], x_k | Ratio | Newton–Gauss method (2), x_k | Ratio |
|---|---|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | | (2.000000, 1.000000) | | (2.000000, 1.000000) | |
| 1 | (1.264067, 0.166747) | 0.052791 | (1.263927, 0.166887) | 0.052792 | (1.263927, 0.166887) | 0.052792 |
| 2 | (1.019624, 0.265386) | 0.259247 | (1.019452, 0.265424) | 0.259156 | (1.019452, 0.265424) | 0.259156 |
| 3 | (0.992854, 0.306346) | 1.578713 | (0.992853, 0.306348) | 1.580144 | (0.992853, 0.306348) | 1.580144 |
| 4 | (0.992780, 0.306440) | 1.977941 | (0.992780, 0.306440) | 1.977957 | (0.992780, 0.306440) | 1.977957 |
| 5 | (0.992780, 0.306440) | 1.979028 | (0.992780, 0.306440) | 1.979028 | (0.992780, 0.306440) | 1.979028 |
Table 2. Methods of order 5.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Ratio ‖ε_{k+1}‖/‖ε_k‖^5 | Newton–Simpson method [7], x_k | Ratio | Newton–Gauss method (3), x_k | Ratio |
|---|---|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | | (2.000000, 1.000000) | | (2.000000, 1.000000) | |
| 1 | (1.127204, 0.054887) | 0.004363 | (1.127146, 0.054883) | 0.004363 | (1.127146, 0.054883) | 0.004363 |
| 2 | (0.993331, 0.305731) | 0.501551 | (0.993328, 0.305734) | 0.501670 | (0.993328, 0.305734) | 0.501670 |
| 3 | (0.992780, 0.306440) | 3.889725 | (0.992780, 0.306440) | 3.889832 | (0.992780, 0.306440) | 3.889832 |
| 4 | (0.992780, 0.306440) | 3.916553 | (0.992780, 0.306440) | 3.916553 | (0.992780, 0.306440) | 3.916553 |
Table 3. Methods of order 6.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Ratio ‖ε_{k+1}‖/‖ε_k‖^6 | Newton–Simpson method [7], x_k | Ratio | Newton–Gauss method (4), x_k | Ratio |
|---|---|---|---|---|---|---|
| 0 | (2.000000, 1.000000) | | (2.000000, 1.000000) | | (2.000000, 1.000000) | |
| 1 | (1.067979, 0.174843) | 0.001211 | (1.067906, 0.174885) | 0.001211 | (1.067906, 0.174885) | 0.001211 |
| 2 | (0.992784, 0.306436) | 1.383068 | (0.992784, 0.306436) | 1.384152 | (0.992784, 0.306436) | 1.384152 |
| 3 | (0.992780, 0.306440) | 5.509412 | (0.992780, 0.306440) | 5.509414 | (0.992780, 0.306440) | 5.509414 |
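The Ratio columns in Tables 1–3 report ‖ε_{k+1}‖/‖ε_k‖^p with ε_k = x_k − x* and p the expected order; the ratio settling to a constant is the numerical signature of convergence of order p. A minimal sketch of that computation (the Euclidean norm and the synthetic cubic sequence used below are our assumptions; the paper's choice of norm may differ):

```python
import math

def error_ratios(iterates, x_star, p):
    """Ratios ||x_{k+1} - x*|| / ||x_k - x*||^p for a sequence of iterates.

    A roughly constant tail of the returned list indicates that the
    sequence converges with order p.
    """
    errs = [math.dist(x, x_star) for x in iterates]  # Euclidean error norms
    return [errs[k + 1] / errs[k] ** p               # order-p error ratios
            for k in range(len(errs) - 1) if errs[k] > 0]
```

For a sequence satisfying ‖ε_{k+1}‖ = C‖ε_k‖³ exactly, error_ratios with p = 3 returns the constant C at every step.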
Table 4. Method of order 3.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Newton–Simpson method [7], x_k | Newton–Gauss method (2), x_k |
|---|---|---|---|
| 0 | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) |
| 1 | (1.316634871110971, −0.905663824661460) | (1.314989088706198, −0.903348274351101) | (1.315156227449432, −0.903583355982947) |
| 2 | (1.271411313380937, −0.880832720130447) | (1.271407142827536, −0.880830551463514) | (1.271407525251500, −0.880830738549647) |
| 3 | (1.271384307950135, −0.880819073102663) | (1.271384307950134, −0.880819073102662) | (1.271384307950134, −0.880819073102662) |
| 4 | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) |
Table 5. Method of order 5.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Newton–Simpson method [7], x_k | Newton–Gauss method (3), x_k |
|---|---|---|---|
| 0 | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) |
| 1 | (1.282088857420137, −0.883233404186709) | (1.281438557013089, −0.883099635427437) | (1.281504327445928, −0.883113109114400) |
| 2 | (1.271384307959147, −0.880819073106927) | (1.271384307956672, −0.880819073105759) | (1.271384307956891, −0.880819073105862) |
| 3 | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) |
Table 6. Method of order 6.

| k | Noor–Waseem method [19], x_k = (t_1^k, t_2^k) | Newton–Simpson method [7], x_k | Newton–Gauss method (4), x_k |
|---|---|---|---|
| 0 | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) | (1.000000000000000, −1.500000000000000) |
| 1 | (1.270238276431529, −0.880218508041528) | (1.270356433740484, −0.880274526138580) | (1.270344819162619, −0.880269006195813) |
| 2 | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) | (1.271384307950131, −0.880819073102661) |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sadananda, R.; George, S.; Argyros, I.K.; Padikkal, J. Order of Convergence and Dynamics of Newton–Gauss-Type Methods. Fractal Fract. 2023, 7, 185. https://doi.org/10.3390/fractalfract7020185

