Article

A Novel n-Point Newton-Type Root-Finding Method of High Computational Efficiency

School of Mathematical Sciences, Bohai University, Jinzhou 121000, China
Mathematics 2022, 10(7), 1144; https://doi.org/10.3390/math10071144
Submission received: 10 March 2022 / Revised: 29 March 2022 / Accepted: 31 March 2022 / Published: 2 April 2022
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis)

Abstract
A novel Newton-type n-point iterative method with memory is proposed for solving nonlinear equations, constructed by Hermite interpolation. The proposed method with memory reaches the order $\big(2^n+2^{n-1}-1+\sqrt{2^{2n+1}+2^{2n-2}+2^n+1}\big)/2$ by using $n$ variable parameters. The computational efficiency of the proposed method is higher than that of the existing Newton-type methods with and without memory. To observe the stability of the proposed method, some complex functions are considered under basins of attraction. The basins of attraction show that the proposed method has better stability and requires fewer iterations than various well-known methods. The numerical results support the theoretical results.

1. Introduction

Finding a solution $a$ of the nonlinear equation $f(x)=0$, where $f:D\subset\mathbb{R}\to\mathbb{R}$ is a sufficiently differentiable function in an open set $D$, is a fundamental problem in numerical analysis. Multipoint iterative methods with high computational efficiency were introduced by Traub [1] and Petković [2]; they are very suitable for finding solutions of nonlinear equations. Design techniques for multipoint iterative methods include: the weight function method [3,4,5], the interpolation method [6,7], the rational function method [8,9], the undetermined coefficient method [10], the inverse interpolation method [11,12] and the symbolic computation method [13]. Using these techniques, many efficient multipoint iterative methods have been proposed for solving nonlinear equations; see [14,15,16,17,18,19,20] and the references therein. Among them, n-point iterative methods are worth studying because of their high computational efficiency. Kung–Traub's method [14], Zheng's method [15], Petković–Džunić's method [16] and Wang–Zhang's method [17] are well-known derivative-free n-point iterative methods. Furthermore, some efficient n-point Newton-type iterative methods with and without memory have been proposed. Kung and Traub [14] proposed an optimal $2^n$th-order Newton-type method as follows:
$$y_{k,1}=t_k-\frac{f(t_k)}{f'(t_k)},\qquad y_{k,j}=S_j(0),\ j=2,\dots,n,\qquad t_{k+1}=y_{k,n},$$
where $S_j(y)$ is the inverse interpolating polynomial such that $S_j(f(t_k))=t_k$, $S_j'(f(t_k))=1/f'(t_k)$, $S_j(f(y_{k,j}))=y_{k,j}$, $(j=2,\dots,n)$. Petković [18] derived the following n-point Newton-type method:
$$\phi_1(t)=t-\frac{f(t)}{f'(t)},\quad \phi_2(t)=\psi_f(t),\quad \phi_3(t)=\phi_2(t)-\frac{f(\phi_2(t))}{h_2'(\phi_2(t))},\ \dots,\ \phi_n(t)=\phi_{n-1}(t)-\frac{f(\phi_{n-1}(t))}{h_{n-1}'(\phi_{n-1}(t))},$$
where $h_i(t)$, $(i=2,3,\dots,n-1)$, is the Hermite interpolation polynomial satisfying the conditions $h_{m+1}(\phi_j)=f(\phi_j)$, $(j=0,1,\dots,n-1)$, and $h_{m+1}'(\phi_0)=f'(\phi_0)$, and $\psi_f(t)$ is an iterative function with an optimal order of 4. Cordero et al. [19] studied the stability of method (2) for $n=2,3$. In [20], we obtained the following one-parameter n-point Newton-type method without memory:
$$t_{k,1}=t_{k,0}-\frac{f(t_{k,0})}{f'(t_{k,0})+Lf(t_{k,0})},\quad t_{k,2}=t_{k,1}-\frac{f(t_{k,1})}{f[t_{k,1},t_{k,0}]+f[t_{k,1},t_{k,0},t_{k,0}](t_{k,1}-t_{k,0})},\ \dots,\ t_{k,n}=t_{k,n-1}-\frac{f(t_{k,n-1})}{N(t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0})},$$
where $N(t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0})=f[t_{k,n-1},t_{k,n-2}]+\cdots+f[t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0},t_{k,0}](t_{k,n-1}-t_{k,n-2})\cdots(t_{k,n-1}-t_{k,0})$, $t_{k,0}=t_k$, and $L\in\mathbb{R}$ is a constant. By replacing the constant parameter $L$ in (3) with a variable parameter $L_k$, method (3) can be transformed into the following one-parameter n-point Newton-type iterative method with memory:
$$t_{k,1}=t_{k,0}-\frac{f(t_{k,0})}{f'(t_{k,0})+L_kf(t_{k,0})},\quad t_{k,2}=t_{k,1}-\frac{f(t_{k,1})}{f[t_{k,1},t_{k,0}]+f[t_{k,1},t_{k,0},t_{k,0}](t_{k,1}-t_{k,0})},\ \dots,\ t_{k,n}=t_{k,n-1}-\frac{f(t_{k,n-1})}{N(t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0})},$$
where $N(t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0})=f[t_{k,n-1},t_{k,n-2}]+\cdots+f[t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0},t_{k,0}](t_{k,n-1}-t_{k,n-2})\cdots(t_{k,n-1}-t_{k,0})$, $t_{k,0}=t_k$, $L_k=-H_4''(y_{k,0})/(2f'(y_{k,0}))$ and $H_4(x)$ is a Hermite interpolating polynomial of degree 4. Method (4) improves the convergence order of method (3) without any additional functional evaluations, which implies that a variable parameter can improve the computational efficiency of a Newton-type iterative method. Thus, we conclude that the convergence order of the n-point Newton-type iterative method can be improved further by increasing the number of variable parameters in the iterative scheme.
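The structure of these schemes is easy to prototype. Below is a minimal Python sketch of the two-step case (n = 2) of method (3); the function name `two_step`, the test equation $t^3-2=0$ and the stopping rule are illustrative choices, not part of the paper.

```python
def two_step(f, df, t0, L=0.01, tol=1e-13, max_iter=50):
    """Two-step case (n = 2) of the one-parameter scheme (3):
    a damped Newton step followed by a divided-difference step."""
    t = t0
    for _ in range(max_iter):
        ft, dft = f(t), df(t)
        t1 = t - ft / (dft + L * ft)        # first sub-step
        if t1 == t:                         # already converged
            return t1
        f1 = f(t1)
        d1 = (f1 - ft) / (t1 - t)           # f[t1, t]
        d2 = (d1 - dft) / (t1 - t)          # f[t1, t, t]
        t_new = t1 - f1 / (d1 + d2 * (t1 - t))
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

root = two_step(lambda t: t**3 - 2, lambda t: 3 * t * t, 1.5)
```

Each cycle uses only $f(t_{k,0})$, $f'(t_{k,0})$ and $f(t_{k,1})$, which is why the divided-difference second step is cheaper than a second Newton step.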
The aims of this work are to improve the computational efficiency and convergence order of the n-point Newton-type method and to produce a general n-point Newton-type iterative method with memory for solving nonlinear equations. The paper is organized as follows: In Section 2, we propose a general n-point Newton-type iterative method with optimal order $2^n$ by using Hermite interpolation polynomials. Based on the optimal n-point Newton-type iterative method, a general n-point Newton-type iterative method with memory is proposed by using $n$ variable parameters in Section 3. The convergence order of the n-point Newton-type iterative method with memory is analyzed; it is higher than that of the existing Newton-type iterative methods. In Section 4, the stability of the presented iterative method is analyzed with the help of basins of attraction. Several numerical tests are performed to confirm the theoretical results in Section 5. Conclusions are given in Section 6.

2. The n-Parameter n-Point Newton-Type Method with Optimal Order $2^n$

Combining the first step of method (3) with Newton’s method [21,22], we construct the following one-parameter Newton-type method:
$$y_k=t_k-\frac{f(t_k)}{L_1f(t_k)+f'(t_k)},\qquad t_{k+1}=y_k-\frac{f(y_k)}{f'(y_k)},$$
where $L_1\in\mathbb{R}$ is a real parameter. To reduce the computational cost of method (5), we approximate $f'(y_k)$ by the derivative of the Hermite interpolation polynomial $H_3(t)$. The interpolation polynomial $H_3(t)$ is given by:
$$H_3(t)=f(y_k)+f[y_k,t_k](t-y_k)+f[y_k,t_k,t_k](t-y_k)(t-t_k)+L_2(t-y_k)(t-t_k)^2,$$
such that $H_3(t_k)=f(t_k)$, $H_3(y_k)=f(y_k)$, $H_3'(t_k)=f'(t_k)$ and $L_2\in\mathbb{R}$. The derivative of $H_3(t)$ at $y_k$ is:
$$H_3'(y_k)=f[y_k,t_k]+f[y_k,t_k,t_k](y_k-t_k)+L_2(y_k-t_k)^2.$$
Replacing $f'(y_k)$ with $H_3'(y_k)$ in (5), we obtain a two-parameter Newton-type method:
$$y_k=t_k-\frac{f(t_k)}{L_1f(t_k)+f'(t_k)},\qquad t_{k+1}=y_k-\frac{f(y_k)}{f[y_k,t_k]+f[y_k,t_k,t_k](y_k-t_k)+L_2(y_k-t_k)^2},$$
where $f[y_k,t_k,t_k]=\dfrac{f[y_k,t_k]-f'(t_k)}{y_k-t_k}$ and $L_1,L_2\in\mathbb{R}$ are real parameters. Combining method (8) with Newton's method, we obtain the following three-step method:
$$y_k=t_k-\frac{f(t_k)}{L_1f(t_k)+f'(t_k)},\quad z_k=y_k-\frac{f(y_k)}{f[y_k,t_k]+f[y_k,t_k,t_k](y_k-t_k)+L_2(y_k-t_k)^2},\quad t_{k+1}=z_k-\frac{f(z_k)}{f'(z_k)}.$$
We approximate $f'(z_k)$ in (9) by the derivative of the following interpolation polynomial $H_4(t)$ of degree four:
$$H_4(t)=f(z_k)+f[z_k,y_k](t-z_k)+f[z_k,y_k,t_k](t-z_k)(t-y_k)+f[z_k,y_k,t_k,t_k](t-z_k)(t-y_k)(t-t_k)+L_3(t-z_k)(t-y_k)(t-t_k)^2.$$
$H_4(t)$ satisfies the interpolation conditions $H_4(t_k)=f(t_k)$, $H_4(y_k)=f(y_k)$, $H_4(z_k)=f(z_k)$ and $H_4'(t_k)=f'(t_k)$. The derivative of $H_4(t)$ at $z_k$ is:
$$H_4'(z_k)=f[z_k,y_k]+f[z_k,y_k,t_k](z_k-y_k)+f[z_k,y_k,t_k,t_k](z_k-y_k)(z_k-t_k)+L_3(z_k-y_k)(z_k-t_k)^2,$$
where $L_3\in\mathbb{R}$. Replacing $f'(z_k)$ with $H_4'(z_k)$ in (9), we obtain the following three-parameter method:
$$y_k=t_k-\frac{f(t_k)}{L_1f(t_k)+f'(t_k)},\quad z_k=y_k-\frac{f(y_k)}{f[y_k,t_k]+f[y_k,t_k,t_k](y_k-t_k)+L_2(y_k-t_k)^2},\quad t_{k+1}=z_k-\frac{f(z_k)}{H_4'(z_k)},$$
where $H_4'(z_k)=f[z_k,y_k]+f[z_k,y_k,t_k](z_k-y_k)+f[z_k,y_k,t_k,t_k](z_k-y_k)(z_k-t_k)+L_3(z_k-y_k)(z_k-t_k)^2$ and $L_i\in\mathbb{R}$, $(i=1,2,3)$.
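The three-step scheme (12) can be prototyped directly from the divided-difference form of $H_4'(z_k)$. The Python sketch below (function name, test equation and stopping rule are illustrative assumptions, not from the paper) follows the three sub-steps literally.

```python
def three_step(f, df, t0, L1=0.01, L2=0.01, L3=0.01, tol=1e-13, max_iter=50):
    """Three-parameter three-step method (12): each cycle evaluates
    f(t), f'(t), f(y) and f(z) only."""
    t = t0
    for _ in range(max_iter):
        ft, dft = f(t), df(t)
        y = t - ft / (L1 * ft + dft)                    # first sub-step
        if y == t:
            return y
        fy = f(y)
        fyt = (fy - ft) / (y - t)                       # f[y, t]
        fytt = (fyt - dft) / (y - t)                    # f[y, t, t]
        z = y - fy / (fyt + fytt * (y - t) + L2 * (y - t) ** 2)
        if z == y or z == t:
            return z
        fz = f(z)
        fzy = (fz - fy) / (z - y)                       # f[z, y]
        fzyt = (fzy - fyt) / (z - t)                    # f[z, y, t]
        fzytt = (fzyt - fytt) / (z - t)                 # f[z, y, t, t]
        h4p = (fzy + fzyt * (z - y) + fzytt * (z - y) * (z - t)
               + L3 * (z - y) * (z - t) ** 2)           # H4'(z)
        t_new = z - fz / h4p
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

root8 = three_step(lambda t: t**3 - 2, lambda t: 3 * t * t, 1.5)
```

Note that $H_4'(z_k)$ reuses the divided differences already computed in the second sub-step, so the third sub-step costs only one extra function evaluation.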
Furthermore, we construct the following n-parameter n-point Newton-type method without memory:
$$t_{k,1}=t_{k,0}-\frac{f(t_{k,0})}{L_1f(t_{k,0})+f'(t_{k,0})},\quad t_{k,2}=t_{k,1}-\frac{f(t_{k,1})}{f[t_{k,1},t_{k,0}]+f[t_{k,1},t_{k,0},t_{k,0}](t_{k,1}-t_{k,0})+L_2(t_{k,1}-t_{k,0})^2},\quad t_{k,3}=t_{k,2}-\frac{f(t_{k,2})}{H_4'(t_{k,2})},\ \dots,\ t_{k,n}=t_{k,n-1}-\frac{f(t_{k,n-1})}{H_{n+1}'(t_{k,n-1})},$$
where
$$H_{n+1}'(t_{k,n-1})=f[t_{k,n-1},t_{k,n-2}]+\sum_{j=3}^{n}f[t_{k,n-1},t_{k,n-2},\dots,t_{k,n-j}]\prod_{i=2}^{j-1}(t_{k,n-1}-t_{k,n-i})+f[t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0},t_{k,0}]\prod_{j=2}^{n}(t_{k,n-1}-t_{k,n-j})+L_n(t_{k,n-1}-t_{k,0})\prod_{t=2}^{n}(t_{k,n-1}-t_{k,n-t}),$$
$t_{k,0}=t_k$ and $L_i\in\mathbb{R}$, $(i=1,2,\dots,n)$. Petković's method (2) and Wang's method (3) are two special cases of method (13) for $L_i=0$, $(i=1,2,\dots,n)$ and $L_i=0$, $(i=2,3,\dots,n)$, respectively.
Theorem 1.
Let $a\in\mathbb{R}$ be a zero of a sufficiently differentiable function $f:D\subset\mathbb{R}\to\mathbb{R}$ in an open set $D$. Assume that the initial approximation $t_0$ is sufficiently close to $a$. Then, method (8) reaches the optimal convergence order four and satisfies the following error equation:
$$e_{k+1}=(c_2+L_1)\Big[c_2(c_2+L_1)-c_3+\frac{L_2}{f'(a)}\Big]e_k^4+O(e_k^5),$$
where $c_n=\dfrac{f^{(n)}(a)}{n!\,f'(a)}$, $n\ge2$.
Proof. 
Let $e_k=t_k-a$, $e_{y_k}=y_k-a$, $e_{z_k}=z_k-a$ and $c_n=\dfrac{f^{(n)}(a)}{n!\,f'(a)}$, $n\ge2$. Using the Taylor expansion of $f$ at $a$, we obtain:
$$f(t_k)=f'(a)\big(e_k+c_2e_k^2+c_3e_k^3+c_4e_k^4+c_5e_k^5+c_6e_k^6+c_7e_k^7+c_8e_k^8+O(e_k^9)\big),$$
and
$$f'(t_k)=f'(a)\big(1+2c_2e_k+3c_3e_k^2+4c_4e_k^3+5c_5e_k^4+6c_6e_k^5+7c_7e_k^6+8c_8e_k^7+O(e_k^8)\big).$$
According to (12), (15) and (16), we have:
$$e_{y_k}=(c_2+L_1)e_k^2+(-2c_2^2+2c_3-2c_2L_1-L_1^2)e_k^3+(4c_2^3-7c_2c_3+3c_4+5c_2^2L_1-4c_3L_1+3c_2L_1^2+L_1^3)e_k^4+O(e_k^5).$$
Using the Taylor expansion of f ( y k ) at a, we obtain:
$$f(y_k)=f'(a)\big(e_{y_k}+c_2e_{y_k}^2+c_3e_{y_k}^3+c_4e_{y_k}^4+c_5e_{y_k}^5+c_6e_{y_k}^6+c_7e_{y_k}^7+c_8e_{y_k}^8+O(e_{y_k}^9)\big),$$
and
$$f[y_k,t_k]=f'(a)\big[1+c_2e_k+(c_2^2+c_3+c_2L_1)e_k^2+(-2c_2^3+3c_2c_3+c_4-2c_2^2L_1+c_3L_1-c_2L_1^2)e_k^3+\big(4c_2^4+2c_3^2+c_5+5c_2^3L_1+c_4L_1+c_2^2(-8c_3+3L_1^2)+c_2(4c_4-4c_3L_1+L_1^3)\big)e_k^4\big]+O(e_k^5).$$
From (16) and (19), we obtain:
$$f[y_k,t_k,t_k]=\frac{f[y_k,t_k]-f'(t_k)}{y_k-t_k}=f'(a)\big(c_2+2c_3e_k+(c_2c_3+3c_4+c_3L_1)e_k^2+\big(-2c_2^2c_3+2c_3^2+4c_5+2c_4L_1-c_3L_1^2+2c_2(c_4-c_3L_1)\big)e_k^3\big)+O(e_k^4).$$
Using (8) and (18)–(20), we attain:
$$e_{z_k}=(c_2+L_1)\Big[c_2(c_2+L_1)+\frac{L_2}{f'(a)}-c_3\Big]e_k^4+O(e_k^5).$$
This means that the convergence order of method (8) is four. This concludes the proof. □
For method (12), we can obtain the following convergence theorem.
Theorem 2.
Let $a\in\mathbb{R}$ be a zero of a sufficiently differentiable function $f:D\subset\mathbb{R}\to\mathbb{R}$ in an open set $D$. Assume that the initial approximation $t_0$ is sufficiently close to $a$. Then, method (12) reaches the optimal order eight and satisfies the following error equation:
$$e_{k+1}=(c_2+L_1)^2\Big[c_2(c_2+L_1)-c_3+\frac{L_2}{f'(a)}\Big]\Big\{c_2\Big[c_2(c_2+L_1)-c_3+\frac{L_2}{f'(a)}\Big]+c_4-\frac{L_3}{f'(a)}\Big\}e_k^8+O(e_k^9),$$
where $c_n=\dfrac{f^{(n)}(a)}{n!\,f'(a)}$, $n\ge2$.
Proof. 
Let $e_k=t_k-a$, $e_{y_k}=y_k-a$, $e_{z_k}=z_k-a$ and $c_n=\dfrac{f^{(n)}(a)}{n!\,f'(a)}$, $n\ge2$. Using the Taylor expansion of $f(z_k)$ at $a$, we obtain:
$$f(z_k)=f'(a)\big(e_{z_k}+c_2e_{z_k}^2+c_3e_{z_k}^3+c_4e_{z_k}^4+c_5e_{z_k}^5+O(e_{z_k}^6)\big),$$
$$f[z_k,y_k]=f'(a)+c_2f'(a)(c_2+L_1)e_k^2-c_2f'(a)(2c_2^2-2c_3+2c_2L_1+L_1^2)e_k^3+\big(5c_2^4f'(a)+7c_2^3f'(a)L_1+c_3f'(a)L_1^2+c_2^2(-7c_3f'(a)+4f'(a)L_1^2+L_2)+c_2\big(3c_4f'(a)+L_1(-3c_3f'(a)+f'(a)L_1^2+L_2)\big)\big)e_k^4+O(e_k^5).$$
From (19) and (24), we obtain:
$$f[z_k,y_k,t_k]=\frac{f[z_k,y_k]-f[y_k,t_k]}{z_k-t_k}=f'(a)\big(c_2+c_3e_k+(c_2c_3+c_4+c_3L_1)e_k^2+\big(-2c_2^2c_3+2c_3^2+c_5+c_4L_1-c_3L_1^2+c_2(c_4-2c_3L_1)\big)e_k^3\big)+O(e_k^4).$$
Using (20) and (25), we have
$$f[z_k,y_k,t_k,t_k]=\frac{f[z_k,y_k,t_k]-f[y_k,t_k,t_k]}{z_k-t_k}=f'(a)\big(c_3+2c_4e_k+(c_2c_4+3c_5+c_4L_1)e_k^2+\big(-2c_2^2c_4+2c_3c_4+4c_6+2c_5L_1-c_4L_1^2+2c_2(c_5-c_4L_1)\big)e_k^3\big)+O(e_k^4).$$
Therefore, from (12), (25) and (26), we obtain:
$$e_{k+1}=(c_2+L_1)^2\Big[c_2(c_2+L_1)-c_3+\frac{L_2}{f'(a)}\Big]\times\Big\{c_2\Big[c_2(c_2+L_1)-c_3+\frac{L_2}{f'(a)}\Big]+c_4-\frac{L_3}{f'(a)}\Big\}e_k^8+O(e_k^9).$$
This means that method (12) arrives at the optimal order eight. This concludes the proof. □
According to the above study, we can obtain the following convergence theorem.
Theorem 3.
Let $a\in\mathbb{R}$ be a zero of a sufficiently differentiable function $f:D\subset\mathbb{R}\to\mathbb{R}$ in an open set $D$. Assume that the initial approximation $t_0$ is sufficiently close to $a$. Then, the n-parameter n-point Newton-type iterative scheme (13) without memory reaches the optimal order $2^n$ and its error relation is
$$e_{k+1}=e_{k,n}=t_{k,n}-a=q_ne_{k,0}\prod_{i=0}^{n-1}e_{k,i}=q_nq_{n-1}q_{n-2}^2\cdots q_1^{2^{n-2}}q_0^{2^{n-1}}e_k^{2^n}+O(e_k^{2^n+1}),$$
where $e_{k,j}=t_{k,j}-a$, $(j=0,1,2,\dots,n)$, $e_k=e_{k,0}=t_{k,0}-a$, $q_0=1$, $q_1=c_2+L_1$ and $q_n=c_2q_{n-1}+(-1)^{n-1}c_{n+1}+(-1)^n\dfrac{L_n}{f'(a)}$, $n=2,3,\dots$.
Proof. 
The theorem is proved by induction. From Theorems 1 and 2, we know that the theorem is valid for $n=2$ and $n=3$. Suppose that Equation (28) is true for $n=N-1$; then we have the error relation
$$e_{k,N-1}=t_{k,N-1}-a=q_{N-1}e_{k,0}\prod_{i=0}^{N-2}e_{k,i}=q_{N-1}q_{N-2}q_{N-3}^2\cdots q_1^{2^{N-3}}q_0^{2^{N-2}}e_k^{2^{N-1}}+O(e_k^{2^{N-1}+1}).$$
Noting that $e_{k,0}\cdot e_{k,0}e_{k,1}e_{k,2}\cdots e_{k,N-1}=O(e_k^{1+1+2+2^2+\cdots+2^{N-1}})=O(e_k^{2^N})$ and taking $n=N$, we obtain:
$$\begin{aligned}
e_{k+1}=e_{k,N}&=t_{k,N}-a=e_{k,N-1}-\frac{f[t_{k,N-1},a]\,e_{k,N-1}}{H_N'(t_{k,N-1})}=e_{k,N-1}\,\frac{H_N'(t_{k,N-1})-f[t_{k,N-1},a]}{H_N'(t_{k,N-1})}\\
&=e_{k,N-1}\big(f[t_{k,N-1},t_{k,N-2}]+f[t_{k,N-1},t_{k,N-2},t_{k,N-3}](t_{k,N-1}-t_{k,N-2})+\cdots\\
&\quad+f[t_{k,N-1},t_{k,N-2},\dots,t_{k,1},t_{k,0},t_{k,0}](t_{k,N-1}-t_{k,N-2})(t_{k,N-1}-t_{k,N-3})\cdots(t_{k,N-1}-t_{k,0})\\
&\quad+(-1)^NL_Ne_{k,N-2}e_{k,N-3}\cdots e_{k,0}^2-f[t_{k,N-1},a]\big)\big[f'(a)+O(e_k)\big]^{-1}\\
&=e_{k,N-1}\big\{f[t_{k,N-2},t_{k,N-1},a]e_{k,N-2}+f[t_{k,N-1},t_{k,N-2},t_{k,N-3}]e_{k,N-1}\\
&\quad-f[t_{k,N-1},t_{k,N-2},t_{k,N-3}]e_{k,N-2}+\cdots+(-1)^{N-1}f[t_{k,N-1},t_{k,N-2},\dots,t_{k,1},t_{k,0},t_{k,0}]\\
&\quad\times e_{k,N-2}e_{k,N-3}\cdots e_{k,0}+(-1)^NL_Ne_{k,N-2}e_{k,N-3}\cdots e_{k,0}^2\big\}\big[f'(a)+O(e_k)\big]^{-1}\\
&=e_{k,N-1}\big\{f[t_{k,N-1},t_{k,N-2},t_{k,N-3}]e_{k,N-1}+(-1)^{N-1}f[t_{k,N-1},t_{k,N-2},\dots,t_{k,1},t_{k,0},t_{k,0},a]\\
&\quad\times e_{k,N-2}e_{k,N-3}\cdots e_{k,0}^2+(-1)^NL_Ne_{k,N-2}e_{k,N-3}\cdots e_{k,0}^2+O(e_k^{2^{N-1}+1})\big\}\big[f'(a)+O(e_k)\big]^{-1}\\
&=e_{k,N-1}\Big[c_2q_{N-1}e_{k,0}\prod_{i=0}^{N-2}e_{k,i}+(-1)^{N-1}c_{N+1}e_{k,0}\prod_{i=0}^{N-2}e_{k,i}+(-1)^N\frac{L_N}{f'(a)}e_{k,0}\prod_{i=0}^{N-2}e_{k,i}+O(e_k^{2^{N-1}+1})\Big]\\
&=e_{k,0}\prod_{i=0}^{N-1}e_{k,i}\Big[c_2q_{N-1}+(-1)^{N-1}c_{N+1}+(-1)^N\frac{L_N}{f'(a)}\Big]+O(e_k^{2^N+1})\\
&=q_Ne_{k,0}\prod_{i=0}^{N-1}e_{k,i}+O(e_k^{2^N+1}).
\end{aligned}$$
Hence, we obtain:
$$e_{k+1}=e_{k,n}=t_{k,n}-a=q_ne_{k,0}\prod_{i=0}^{n-1}e_{k,i}=q_nq_{n-1}q_{n-2}^2\cdots q_1^{2^{n-2}}q_0^{2^{n-1}}e_k^{2^n}+O(e_k^{2^n+1}).$$
The proof is completed. □

3. A General n-Point Newton-Type Multipoint Iterative Method with Memory

Theorem 3 shows that the n-parameter n-point method (13) without memory reaches the optimal order $2^n$. Taking $L_1=-c_2$ and $L_i=\dfrac{f^{(i+1)}(a)}{(i+1)!}$, $(i=2,3,\dots,n)$ in (13), the convergence order of method (13) can be improved. In this section, we replace the constant parameters $L_i$, $(i=1,2,\dots,n)$ of method (13) with variable parameters $L_{k,i}$ and obtain a new n-parameter n-point method with memory. The variable parameters $L_{k,i}$ are constructed from the iterative sequences of the current and previous iterations and satisfy the conditions $\lim_{k\to\infty}L_{k,1}=-c_2=-\dfrac{f''(a)}{2f'(a)}$ and $\lim_{k\to\infty}L_{k,i}=\dfrac{f^{(i+1)}(a)}{(i+1)!}$. To obtain the maximal convergence order of the n-parameter n-point Newton-type method, we design the following variable parameters $L_{k,i}$, $(i=1,\dots,n)$ by using Hermite interpolation polynomials:
$$L_{k,1}=-\frac{H_{n+2}''(t_{k,0})}{2f'(t_{k,0})},$$
and
$$L_{k,i+1}=\frac{H_{n+i+2}^{(i+2)}(t_{k,i})}{(i+2)!},\quad(1\le i\le n-1),$$
where
$$H_{n+2}(t)=H_{n+2}\big(t;\,t_{k,0},t_{k,0},t_{k-1,n-1},\dots,t_{k-1,0},t_{k-1,0}\big),$$
and
$$H_{n+i+2}(t)=H_{n+i+2}\big(t;\,t_{k,i},t_{k,i-1},\dots,t_{k,1},t_{k,0},t_{k,0},t_{k-1,n-1},\dots,t_{k-1,0},t_{k-1,0}\big).$$
Replacing the constant parameters $L_i$ with variable parameters $L_{k,i}$ in (13), we obtain a general n-parameter n-point method with memory as follows:
$$t_{k,1}=t_{k,0}-\frac{f(t_{k,0})}{L_{k,1}f(t_{k,0})+f'(t_{k,0})},\quad t_{k,2}=t_{k,1}-\frac{f(t_{k,1})}{f[t_{k,1},t_{k,0}]+f[t_{k,1},t_{k,0},t_{k,0}](t_{k,1}-t_{k,0})+L_{k,2}(t_{k,1}-t_{k,0})^2},\ \dots,\ t_{k,n}=t_{k,n-1}-\frac{f(t_{k,n-1})}{H_n'(t_{k,n-1})},$$
where $L_{k,i}$, $(i=1,\dots,n)$ are the variable parameters constructed by (32)–(33) and $H_n'(t_{k,n-1})=f[t_{k,n-1},t_{k,n-2}]+\sum_{j=3}^{n}\big\{f[t_{k,n-1},t_{k,n-2},\dots,t_{k,n-j}]\prod_{i=2}^{j-1}(t_{k,n-1}-t_{k,n-i})\big\}+L_{k,n}(t_{k,n-1}-t_{k,0})\prod_{t=2}^{n}(t_{k,n-1}-t_{k,n-t})+f[t_{k,n-1},t_{k,n-2},\dots,t_{k,1},t_{k,0},t_{k,0}]\prod_{j=2}^{n}(t_{k,n-1}-t_{k,n-j})$, $(n\ge2)$.
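To make the self-accelerating mechanism concrete, here is a hedged Python sketch of the n = 2 case of method (34). Instead of the Hermite polynomials prescribed by (32)–(33), it approximates the limits $-c_2$ and $f'''(a)/3!$ by ordinary divided differences over current and previous nodes — a simplification of the kind that yields a lower order than the full scheme — so it only illustrates how memory updates the parameters; all names and the test equation are assumptions.

```python
def divided_difference(xs, fs):
    """Newton divided difference f[xs[0], ..., xs[-1]] (distinct nodes)."""
    c = list(fs)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c[-1]

def two_step_memory(f, df, t0, L1=0.01, L2=0.01, tol=1e-12, max_iter=30):
    """n = 2 case of method (34) with simplified parameter updates:
    L1 -> -f''(a)/(2f'(a)) and L2 -> f'''(a)/3! as the nodes approach a."""
    t, prev = t0, None          # prev = (t_{k-1,0}, t_{k-1,1}, f-values)
    for _ in range(max_iter):
        ft, dft = f(t), df(t)
        if abs(ft) < tol:
            return t
        if prev is not None:
            p0, p1, fp0, fp1 = prev
            # second divided difference ~ f''/2, so L1 ~ -c2
            L1 = -divided_difference([t, p1, p0], [ft, fp1, fp0]) / dft
        t1 = t - ft / (L1 * ft + dft)
        if t1 == t:
            return t1
        f1 = f(t1)
        if prev is not None:
            # third divided difference ~ f'''/3!
            L2 = divided_difference([t1, t, p1, p0], [f1, ft, fp1, fp0])
        d1 = (f1 - ft) / (t1 - t)
        d2 = (d1 - dft) / (t1 - t)
        t2 = t1 - f1 / (d1 + d2 * (t1 - t) + L2 * (t1 - t) ** 2)
        prev = (t, t1, ft, f1)
        t = t2
    return t

root_mem = two_step_memory(lambda t: t**3 - 2, lambda t: 3 * t * t, 1.5)
```

The parameter updates reuse only already-computed function values, which is the reason methods with memory gain order at no extra evaluation cost.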
From (14), (22) and (28), the error relations of method (34) can be obtained
$$e_{k,1}=(c_2+L_{k,1})e_k^2+O(e_k^3),$$
$$e_{k,2}=(c_2+L_{k,1})\Big[c_2(c_2+L_{k,1})-c_3+\frac{L_{k,2}}{f'(a)}\Big]e_k^4+O(e_k^5),$$
and
$$e_{k,j}=t_{k,j}-a=q_{k,j}e_{k,0}\prod_{i=0}^{j-1}e_{k,i}=q_{k,j}q_{k,j-1}q_{k,j-2}^2\cdots q_{k,1}^{2^{j-2}}q_{k,0}^{2^{j-1}}e_k^{2^j}+O(e_k^{2^j+1}),\quad 2\le j\le n,$$
where $e_{k,j}=t_{k,j}-a$, $(j=0,1,2,\dots,n)$, $e_k=e_{k,0}=t_{k,0}-a$, $q_{k,0}=1$, $q_{k,1}=c_2+L_{k,1}$, $q_{k,n}=(-1)^{n-1}c_{n+1}+(-1)^n\dfrac{L_{k,n}}{f'(a)}+c_2q_{k,n-1}$, $(n\ge2)$.
The above consideration leads to the following Lemma.
Lemma 1.
Let $H_{n+i+2}(t)$, $(i=0,1,2,\dots,n)$ be the Hermite interpolation polynomial of degree $n+i+2$ satisfying $H_{n+i+2}(t_{k,l})=f(t_{k,l})$, $(l=0,1,\dots,i)$, $H_{n+i+2}(t_{k-1,n-j})=f(t_{k-1,n-j})$, $(j=1,\dots,n)$, $H_{n+i+2}'(t_{k,0})=f'(t_{k,0})$ and $H_{n+i+2}'(t_{k-1,0})=f'(t_{k-1,0})$. Let $f(t)$ and its derivative $f^{(n+i+3)}$ be continuous in the interval $D$. The nodes $t_{k,i},\dots,t_{k,1},t_{k,0},t_{k,0},t_{k-1,n-1},\dots,t_{k-1,1},t_{k-1,0},t_{k-1,0}$ are contained in $D$ and are sufficiently close to the simple zero $a$ of $f(t)$. Define the errors $e_{k,l}=t_{k,l}-a$ and assume that the condition $e_{k,l}=O(e_{k-1,n}e_{k-1,0}^2)$, $(0\le l\le n)$ holds. Then
$$\frac{H_{n+i+2}^{(i+2)}(t_{k,i})}{(i+2)!}\sim f'(a)\Big[c_{i+2}-(-1)^{n+1}c_{n+i+3}e_{k-1,0}\prod_{j=1}^{n}e_{k-1,n-j}\Big],\quad(0\le i\le n).$$
Proof. 
The error relation of the Hermite interpolation polynomial $H_{n+i+2}(t)$ is given by
$$f(t)-H_{n+i+2}(t)=\frac{f^{(n+i+3)}(\eta)}{(n+i+3)!}\Big[(t-t_{k,0})\prod_{l=0}^{i}(t-t_{k,l})\Big]\Big[(t-t_{k-1,0})\prod_{j=1}^{n}(t-t_{k-1,n-j})\Big],\quad\eta\in D.$$
Differentiating (39) $i+2$ times with respect to $t$ and writing $u(t)=(t-t_{k,0})\prod_{l=0}^{i}(t-t_{k,l})\,(t-t_{k-1,0})\prod_{j=1}^{n}(t-t_{k-1,n-j})$, we obtain:
$$f^{(i+2)}(t)-H_{n+i+2}^{(i+2)}(t)=\frac{1}{(n+i+3)!}\Big\{u(t)\frac{d^{i+2}}{dt^{i+2}}\big[f^{(n+i+3)}(\eta)\big]+\sum_{m=1}^{i+1}\binom{i+2}{m}u^{(m)}(t)\frac{d^{i+2-m}}{dt^{i+2-m}}\big[f^{(n+i+3)}(\eta)\big]+u^{(i+2)}(t)f^{(n+i+3)}(\eta)\Big\},$$
and
$$u^{(i+2)}(t)=(i+2)!\Big[(t-t_{k-1,0})\prod_{j=1}^{n}(t-t_{k-1,n-j})+P_{n+1}(t)\Big],\quad\eta\in D,$$
where $P_{n+1}(t)$ is a polynomial of degree $n+1$. The Taylor series of $f'(t)$ and of the derivatives $f^{(i+2)}(t)$ and $f^{(n+i+3)}(\eta)$ about the zero $a$ are
$$f'(t_{k,i})=\big[1+2c_2e_{k,i}+3c_3e_{k,i}^2+O(e_{k,i}^3)\big]f'(a),$$
$$f^{(i+2)}(t_{k,i})=\big[(i+2)!\,c_{i+2}+(i+3)!\,c_{i+3}e_{k,i}+O(e_{k,i}^2)\big]f'(a),$$
and
$$f^{(n+i+3)}(\eta)=\big[(n+i+3)!\,c_{n+i+3}+(n+i+4)!\,c_{n+i+4}e_{\eta}+O(e_{\eta}^2)\big]f'(a),$$
where $t_{k,i}\in D$, $\eta\in D$ and $e_{\eta}=\eta-a$.
Taking $t=t_{k,i}$ in (41) and using (44), we obtain:
$$H_{n+i+2}^{(i+2)}(t_{k,i})\sim f^{(i+2)}(t_{k,i})-(i+2)!\,\frac{f^{(n+i+3)}(\eta)}{(n+i+3)!}\Big[P_{n+1}(t_{k,i})+(t_{k,i}-t_{k-1,0})\prod_{j=1}^{n}(t_{k,i}-t_{k-1,n-j})\Big]\sim(i+2)!\,f'(a)\Big[c_{i+2}-(-1)^{n+1}c_{n+i+3}e_{k-1,0}\prod_{j=1}^{n}e_{k-1,n-j}\Big].$$
The proof is completed. □
From (33) and (45), we obtain:
$$L_{k,i+1}=\frac{H_{n+i+2}^{(i+2)}(t_{k,i})}{(i+2)!}\sim f'(a)\Big[c_{i+2}-(-1)^{n+1}c_{n+i+3}e_{k-1,0}\prod_{j=1}^{n}e_{k-1,n-j}\Big],$$
and
$$c_{i+2}-\frac{L_{k,i+1}}{f'(a)}\sim(-1)^{n+1}c_{n+i+3}e_{k-1,0}\prod_{j=1}^{n}e_{k-1,n-j}.$$
If the sequence $\{t_k\}$ converges to the zero $a$ with order $r$, we can write
$$e_{k+1}=e_{k+1,0}=e_{k,n}\sim Q_{k,n}e_{k,0}^{r},$$
where $e_{k,n}=t_{k,n}-a$ and $Q_{k,n}$ is the asymptotic error constant. Supposing that the sequence $\{t_{k,j}\}$ has convergence order $r_j$, we obtain:
$$e_{k,j}\sim Q_{k,j}e_{k,0}^{r_j},\quad 1\le j\le n-1,$$
and
$$c_{i+2}-\frac{L_{k,i+1}}{f'(a)}\sim(-1)^{n+1}c_{n+i+3}e_{k-1,0}\prod_{j=1}^{n}e_{k-1,n-j}\sim(-1)^{n+1}c_{n+i+3}e_{k-1,0}^{\,r_0+r_0+r_1+\cdots+r_{n-1}}\prod_{j=1}^{n}Q_{k-1,n-j}\sim T_{k,i+1}e_{k-1,0}^{R},$$
where $r_0=1$, $R=r_0+\sum_{i=0}^{n-1}r_i$ and $T_{k,i+1}=(-1)^{n+1}c_{n+i+3}\prod_{j=1}^{n}Q_{k-1,n-j}$.
From (32) and (35), we obtain:
$$q_{k,1}=c_2+L_{k,1}\sim T_{k,1}e_{k-1,0}^{R},$$
where $T_{k,1}=(-1)^{n+1}c_{n+3}\prod_{j=1}^{n}Q_{k-1,n-j}$, and hence
$$e_{k,1}\sim q_{k,1}e_{k,0}^2\sim T_{k,1}e_{k,0}^2e_{k-1,0}^{R}.$$
Lemma 2.
Let $a\in D$ be a simple zero of a sufficiently differentiable function $f$; then the $q_{k,j}$ in (37) satisfy
$$q_{k,j}\sim M_{k,j}e_{k-1,0}^{R},\quad(2\le j\le n),$$
where $r_0=1$, $R=r_0+\sum_{i=0}^{n-1}r_i$, $T_{k,i}=(-1)^{n+1}c_{n+i+2}\prod_{j=1}^{n}Q_{k-1,n-j}$ and $M_{k,j}=\sum_{i=1}^{j}(-1)^{i-1}c_2^{\,j-i}T_{k,i}$.
Proof. 
We prove the lemma by induction. Using (50)–(51) and taking $j=2$ in (37), we obtain:
$$e_{k,2}\sim q_{k,2}e_{k,1}e_{k,0}^2\sim\Big\{c_2q_{k,1}-c_3+\frac{L_{k,2}}{f'(a)}\Big\}e_{k,1}e_{k,0}^2\sim(c_2T_{k,1}-T_{k,2})e_{k,1}e_{k,0}^2e_{k-1,0}^{R}.$$
Letting $M_{k,2}=c_2T_{k,1}-T_{k,2}$ in (54), we obtain:
$$e_{k,2}\sim q_{k,2}e_{k,1}e_{k,0}^2\sim M_{k,2}e_{k,1}e_{k,0}^2e_{k-1,0}^{R},$$
and
$$q_{k,2}\sim M_{k,2}e_{k-1,0}^{R}.$$
Using (50) and (51) and taking $j=3$ in (37), we get
$$e_{k,3}\sim q_{k,3}e_{k,2}e_{k,1}e_{k,0}^2\sim\Big\{c_2q_{k,2}+c_4-\frac{L_{k,3}}{f'(a)}\Big\}e_{k,2}e_{k,1}e_{k,0}^2\sim\big(c_2(c_2T_{k,1}-T_{k,2})+T_{k,3}\big)e_{k,2}e_{k,1}e_{k,0}^2e_{k-1,0}^{R}.$$
Letting $M_{k,3}=c_2(c_2T_{k,1}-T_{k,2})+T_{k,3}$, we obtain:
$$e_{k,3}\sim q_{k,3}e_{k,2}e_{k,1}e_{k,0}^2\sim M_{k,3}e_{k,2}e_{k,1}e_{k,0}^2e_{k-1,0}^{R},$$
and
$$q_{k,3}\sim M_{k,3}e_{k-1,0}^{R}.$$
Assume that (53) holds for $j=n-1$; then
$$q_{k,n-1}\sim M_{k,n-1}e_{k-1,0}^{R}.$$
Using (50) and (51) and taking $j=n$ in (37), we get
$$e_{k,n}\sim q_{k,n}e_{k,0}\prod_{i=0}^{n-1}e_{k,i}\sim\Big\{c_2q_{k,n-1}+(-1)^{n-1}c_{n+1}+(-1)^n\frac{L_{k,n}}{f'(a)}\Big\}e_{k,0}\prod_{i=0}^{n-1}e_{k,i}\sim\big((-1)^{n-1}T_{k,n}+c_2M_{k,n-1}\big)e_{k,0}\prod_{i=0}^{n-1}e_{k,i}\,e_{k-1,0}^{R}\sim M_{k,n}e_{k,0}\prod_{i=0}^{n-1}e_{k,i}\,e_{k-1,0}^{R},$$
and
$$q_{k,n}\sim M_{k,n}e_{k-1,0}^{R}.$$
The proof is completed. □
According to Lemma 1 and Lemma 2, we obtain the following convergence theorem.
Theorem 4.
Suppose that $t_0$ is sufficiently close to a simple zero $a$ of the sufficiently differentiable function $f(t)$ and that the variable parameters $L_{k,1}$ and $L_{k,i}$, $(2\le i\le n)$ of method (34) are calculated by (32) and (33), respectively. Then, the n-parameter n-point Newton-type iterative method (34) with memory reaches the order $\big(2^n+2^{n-1}-1+\sqrt{2^{2n+1}+2^{2n-2}+2^n+1}\big)/2$.
Proof. 
From (48) and (49), we obtain:
$$e_{k+1}=e_{k,n}\sim Q_{k,n}\big(Q_{k-1,n}e_{k-1,0}^{r}\big)^{r}\sim Q_{k,n}Q_{k-1,n}^{r}e_{k-1,0}^{r^2},\qquad e_{k,n}=t_{k,n}-a,$$
$$e_{k,j}\sim Q_{k,j}\big(Q_{k-1,n}e_{k-1,0}^{r}\big)^{r_j}\sim Q_{k,j}Q_{k-1,n}^{r_j}e_{k-1,0}^{rr_j},\quad 1\le j\le n-1.$$
Using (37) and (49), we arrive at:
$$e_{k,j}=t_{k,j}-a\sim q_{k,j}e_{k,0}\prod_{i=0}^{j-1}e_{k,i}\sim q_{k,j}e_{k,0}^2\prod_{i=1}^{j-1}\big(Q_{k,i}e_{k,0}^{r_i}\big)\sim q_{k,j}S_{k,j}e_{k,0}^{2+r_1+r_2+\cdots+r_{j-1}}\sim M_{k,j}e_{k-1,0}^{R}S_{k,j}e_{k,0}^{2+r_1+r_2+\cdots+r_{j-1}}\sim M_{k,j}S_{k,j}Q_{k-1,n}^{2+r_1+r_2+\cdots+r_{j-1}}e_{k-1,0}^{R+r(2+r_1+r_2+\cdots+r_{j-1})},$$
where $S_{k,j}=\prod_{i=1}^{j-1}Q_{k,i}$, $2\le j\le n$.
According to (48) and (53), we have:
e k , 1 q k , 1 e k , 0 2 T k , 1 e k 1 , 0 R ( Q k 1 , n e k 1 , 0 r ) 2
T k , 1 Q k 1 , n 2 e k 1 , 0 2 r + R .
Comparing the exponents of the error $e_{k-1,0}$ in the pairs ((64), (66)) for $j=1$, ((63), (65)) for $j=n$ and ((64), (65)) for $2\le j\le n-1$, we obtain:
$$r^2=r(2+r_1+r_2+\cdots+r_{n-1})+R=rR+R,\quad rr_{n-1}=r(2+r_1+r_2+\cdots+r_{n-2})+R,\quad rr_j=r(2+r_1+r_2+\cdots+r_{j-1})+R,\ (j=2,\dots,n-2),\quad rr_1=2r+R.$$
According to (67), we obtain:
$$\frac{r^2}{1+r}=R=2+r_1+r_2+\cdots+r_{n-1},$$
$$r_j=2^{j-1}r_1,\ (j=2,\dots,n-1),$$
$$r=2r_{n-1},$$
$$rr_1=2r+R.$$
From (68)–(71), we obtain:
$$r^2-(2^n+2^{n-1}-1)r-2^n=0.$$
The positive solution of Equation (72) is $\big(2^n+2^{n-1}-1+\sqrt{2^{2n+1}+2^{2n-2}+2^n+1}\big)/2$. Thus, the n-parameter n-point Newton-type method with memory has the convergence order $\big(2^n+2^{n-1}-1+\sqrt{2^{2n+1}+2^{2n-2}+2^n+1}\big)/2$. □
Remark 1.
According to Theorem 4, we conclude that the maximal order of the n-parameter n-point Newton-type iterative method (34) with memory is $\big(2^n+2^{n-1}-1+\sqrt{2^{2n+1}+2^{2n-2}+2^n+1}\big)/2$. The variable parameters $L_{k,i}$ of method (34) can also be designed by simpler interpolation polynomials, but this decreases the order of convergence of method (34); therefore, we do not discuss low-order interpolation polynomials in this paper. For $n=2$, the order of method (34) with memory is $r=(5+\sqrt{41})/2\approx5.701$. For $n=3$, the order of (34) with memory is $r=(11+\sqrt{153})/2\approx11.684$.
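As a quick numerical check of Remark 1 (a helper of our own, with the assumed name `memory_order`, not code from the paper), the order can be computed as the positive root of Equation (72):

```python
import math

def memory_order(n):
    """Positive root of r^2 - (2^n + 2^(n-1) - 1) r - 2^n = 0."""
    b = 2 ** n + 2 ** (n - 1) - 1
    # b^2 + 2^(n+2) equals the radicand 2^(2n+1) + 2^(2n-2) + 2^n + 1 of Theorem 4.
    return (b + math.sqrt(b * b + 2 ** (n + 2))) / 2

orders = [memory_order(n) for n in (2, 3)]   # approx. 5.70 and 11.68
```

The quadratic-formula radicand rearranges exactly to the expression $2^{2n+1}+2^{2n-2}+2^n+1$ quoted in Theorem 4, which is a useful consistency check.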

4. Basins of Attraction

The basins of attraction can be applied to analyze the stability of multipoint iterative methods [23,24,25,26,27,28,29,30,31,32], and they help us to select the iterative schemes with qualitatively better behavior. Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show the basins of attraction of different methods. Our methods (13) (n = 3) and (34) (n = 2, 3) are compared with methods (2) (n = 3), (3) (n = 3), (4) (n = 3), (73) and (76) for solving the complex equations $z^n-1=0$, $(n=2,3,4,5,6)$. The region $D=[-5.0,5.0]\times[-5.0,5.0]\subset\mathbb{C}$ is divided into a grid of $500\times500$ points in Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5. An initial point $z_0$ is painted black if no root is reached after 25 iterations. If the sequence generated by the iterative method reaches a zero of the polynomial, the initial point $z_0$ is painted in a color previously selected for this zero. Within the same basin of attraction, the number of iterations needed to reach the solution is shown in darker or brighter colors (the fewer the iterations, the brighter the color). The tolerance $|z_k-z^*|<10^{-3}$ is used in the programs. Table 1, Table 2, Table 3, Table 4 and Table 5 show the average number of iterations (ANI) and the percentage of points (POP) that converge to the roots of the complex equations $z^n-1=0$, $(n=2,3,4,5,6)$.
Figure 1, Figure 2, Figure 3, Figure 4 and Figure 5 show that the iterative methods without memory (2), (3) and (13) have similar convergence behavior. The black areas of the basins of attraction for methods (73) and (76) are larger than those of the other methods, which implies that the convergence behavior of the iterative methods with memory (73) and (76) is poor. The basins of attraction of method (34) with memory are brighter than those of the other methods with and without memory, meaning that iterative method (34) requires fewer iterations than the other methods. Table 1, Table 2, Table 3, Table 4 and Table 5 show that, compared with the other methods, our method (34) (n = 3) has the highest percentage of points converging to the roots of the complex equations and requires fewer iterations than various well-known methods. Thus, our method (34) (n = 3) has good stability for solving simple nonlinear equations.
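The bookkeeping behind these figures and tables can be reproduced with a small script. The sketch below uses plain Newton iteration on $z^3-1=0$, a much coarser grid than the paper's $500\times500$, and hypothetical helper names; it computes the ANI and POP statistics in the same way as described above.

```python
import cmath

ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_basin(z0, max_iter=25, tol=1e-3):
    """Return (root_index, iterations) or (None, max_iter) on failure."""
    z = z0
    for it in range(max_iter):
        for idx, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return idx, it
        dz = 3 * z * z
        if dz == 0:
            break
        z = z - (z ** 3 - 1) / dz
    return None, max_iter

def stats(n=60, lo=-5.0, hi=5.0):
    """ANI and POP over an n x n grid on [lo, hi] x [lo, hi]."""
    total_it, converged, points = 0, 0, 0
    for i in range(n):
        for j in range(n):
            z0 = complex(lo + (hi - lo) * i / (n - 1),
                         lo + (hi - lo) * j / (n - 1))
            idx, it = newton_basin(z0)
            points += 1
            if idx is not None:
                converged += 1
                total_it += it
    return total_it / max(converged, 1), 100.0 * converged / points

ani, pop = stats(n=40)
```

Coloring each grid point by `root_index`, with brightness decreasing in `iterations`, reproduces the kind of basin pictures shown in Figures 1–5.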

5. Numerical Examples

The general n-parameter n-point Newton-type methods (13) and (34) are compared with Petković's two-step Newton-type method with memory (73) [12], Wang's Newton-type method with memory (76) [33], Petković's n-point Newton-type method (2) without memory and Wang's n-point Newton-type methods (3) and (4) for solving nonlinear equations.
Petković’s two-step Newton-type iterative method with memory is calculated as follows [12]:
$$y_k=\phi(y_{k-1})f(t_k)^2+M(t_k),\qquad t_{k+1}=\phi(y_k)f(t_k)^2+M(t_k),$$
where
$$M(t_k)=t_k-\frac{f(t_k)}{f'(t_k)},$$
$$\phi(t)=\frac{t-t_k-\dfrac{f(t)}{f'(t_k)}}{f(t_k)\big(f(t)-f(t_k)\big)}.$$
Wang’s Newton-type iterative method with memory is calculated as follows [33]:
$$z_k=t_k-\frac{f(t_k)}{f'(t_k)},\qquad y_k=t_k+\frac{z_k-t_k}{1-L_k(z_k-t_k)},\qquad t_{k+1}=y_k-\frac{f(y_k)f'(t_k)}{f[t_k,y_k]^2},$$
where $L_k=\dfrac{t_k-z_{k-1}}{(z_{k-1}-t_{k-1})(t_k-t_{k-1})}$.
Iterative methods are applied to solve the following nonlinear equations:
$$f_1(t)=\cos(t)+t^2-te^{t}=0,\quad a\approx0.63915409633200758,\quad t_0=0.5,$$
$$f_2(t)=t^4+\log(t)-5=0,\quad a\approx1.4658939193282127,\quad t_0=1.6,$$
$$f_3(t)=\sin(t)-9t^2+1=0,\quad a\approx0.3918469070026,\quad t_0=0.3,$$
$$f_4(t)=t-\pi-t^3\sin(t)=0,\quad a\approx3.1415926535898,\quad t_0=2.9.$$
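For reference, the four test problems can be written down directly in code; the sign patterns in $f_3$ and $f_4$ are the readings consistent with the quoted roots (the text rendering lost some minus signs), so treat them as a best-effort transcription rather than authoritative.

```python
import math

f1 = lambda t: math.cos(t) + t ** 2 - t * math.exp(t)
f2 = lambda t: t ** 4 + math.log(t) - 5
f3 = lambda t: math.sin(t) - 9 * t ** 2 + 1   # reconstructed sign pattern
f4 = lambda t: t - math.pi - t ** 3 * math.sin(t)  # reconstructed sign pattern

# Each quoted root should (approximately) zero the corresponding function.
problems = [
    (f1, 0.63915409633200758, 0.5),
    (f2, 1.4658939193282127, 1.6),
    (f3, 0.3918469070026, 0.3),
    (f4, 3.1415926535898, 2.9),
]
residuals = [abs(f(a)) for f, a, _ in problems]
```

Evaluating each function at its quoted root gives residuals at roundoff level, which is a cheap sanity check before running any of the iterative methods.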
The initial parameters $L=L_1=0.01$ and $L_{k,i}=0.01$, $(i=0,1,2)$ are used in the iterative methods (3), (4), (13), (34), (73) and (76). Table 6, Table 7, Table 8 and Table 9 show the absolute error $|t_k-a|$ for the first four steps and the approximate computational order of convergence (ACOC) [34]:
$$R=\frac{\ln\big(|t_{n+1}-t_n|/|t_n-t_{n-1}|\big)}{\ln\big(|t_n-t_{n-1}|/|t_{n-1}-t_{n-2}|\big)}.$$
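In code, the ACOC can be estimated from the last four iterates; the helper below is an illustrative utility of our own (not from the paper), checked against Newton's method, whose ACOC should approach 2.

```python
import math

def acoc(ts):
    """Approximate computational order of convergence from the last four iterates."""
    t3, t2, t1, t0 = ts[-4:]
    return (math.log(abs(t0 - t1) / abs(t1 - t2))
            / math.log(abs(t1 - t2) / abs(t2 - t3)))

# Check on Newton's method for t^2 - 2 = 0 (quadratic convergence).
xs = [1.5]
for _ in range(4):
    t = xs[-1]
    xs.append(t - (t * t - 2) / (2 * t))
order = acoc(xs)
```

Because the last difference must stay above roundoff, the estimate is applied to the first four Newton steps only; one more step would make $|t_{n+1}-t_n|$ vanish in double precision.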
Table 6, Table 7, Table 8 and Table 9 show that the numerical results coincide with the theory developed in this paper. Our n-parameter n-point Newton-type method (34) has a higher convergence order and better computational accuracy than the existing Newton-type methods. Method (34) greatly improves the convergence order of method (13) by using n variable parameters. The variable parameters L k , i in method (34) are computed from the iteration sequences of the current and previous iterations, so they do not increase the computational cost of the method. This implies that our method (34) with memory possesses very high computational efficiency.

6. Conclusions

In order to improve the convergence order of Newton-type multipoint iterative methods, a general n-parameter n-point Newton-type iterative method with memory is designed in this paper. Firstly, an n-point Newton-type multipoint method (13) with optimal order 2^n is proposed. Based on method (13), a general n-parameter n-point Newton-type multipoint iterative method (34) with n variable parameters is proposed for solving nonlinear equations. The maximal order of method (34) is superior to that of the existing Newton-type iterative methods with and without memory. The basins of attraction show that the proposed method (34) has the highest percentage of points that converge to the roots of the complex equations, and the ANI of method (34) is less than that of the other methods in Table 1, Table 2, Table 3, Table 4 and Table 5. This implies that method (34) has good stability. The numerical results show that method (34) greatly improves the computational efficiency and convergence order of Newton-type iterative methods.

Funding

This research was supported by the National Natural Science Foundation of China (No. 61976027), the Liaoning Revitalization Talents Program (XLYC2008002), the Educational Commission Foundation of Liaoning Province of China (No. LJ2020015) and the University-Industry Collaborative Education Program (No. 202102030037).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  2. Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Boston, MA, USA, 2013. [Google Scholar]
  3. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  4. Soleymani, F.; Sharifi, M.; Mousavi, B.S. An improvement of Ostrowski’s and King’s techniques with optimal convergence order eight. J. Optim. Theory Appl. 2012, 153, 225–236. [Google Scholar] [CrossRef]
  5. Geum, Y.H.; Kim, Y.I. A uniparametric family of three-step eighth-order multipoint iterative methods for simple roots. Appl. Math. Lett. 2011, 24, 929–935. [Google Scholar] [CrossRef]
  6. Petković, M.S. On a general class of multipoint root-finding methods of high computational efficiency. SIAM J. Numer. Anal. 2010, 47, 4402–4414. [Google Scholar] [CrossRef]
  7. Ren, H.; Wu, Q.; Bi, W. A class of two-step Steffensen type methods with fourth-order convergence. Appl. Math. Comput. 2009, 209, 206–210. [Google Scholar] [CrossRef]
  8. Wang, X.; Zhu, M. Two Iterative Methods with Memory Constructed by the Method of Inverse Interpolation and Their Dynamics. Mathematics 2020, 8, 1080. [Google Scholar] [CrossRef]
  9. Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algor. 2010, 54, 445–458. [Google Scholar] [CrossRef]
  10. Chun, C.; Neta, B. Certain improvements of Newton’s method with fourth-order convergence. Appl. Math. Comput. 2009, 215, 821–828. [Google Scholar] [CrossRef] [Green Version]
  11. Neta, B.; Petković, M.S. Construction of optimal order nonlinear solvers using inverse interpolation. Appl. Math. Comput. 2010, 217, 2448–2455. [Google Scholar]
  12. Petković, M.S.; Džunić, J.; Neta, B. Interpolatory multipoint methods with memory for solving nonlinear equations. Appl. Math. Comput. 2011, 218, 2533–2541. [Google Scholar] [CrossRef]
  13. Petković, L.D.; Petković, M.S.; Džunić, J. A class of three-point root-solvers of optimal order of convergence. Appl. Math. Comput. 2010, 216, 671–676. [Google Scholar] [CrossRef]
  14. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Math. 1974, 21, 634–651. [Google Scholar] [CrossRef]
  15. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
  16. Džunić, J.; Petković, M.S. On generalized biparametric multipoint root finding methods with memory. J. Comput. Appl. Math. 2014, 255, 362–375. [Google Scholar] [CrossRef]
  17. Wang, X.; Zhang, T. Efficient n-point iterative methods with memory for solving nonlinear equations. Numer. Algor. 2015, 70, 357–375. [Google Scholar] [CrossRef]
  18. Petković, M.S. Remarks on “On a general class of multipoint root-finding methods of high computational efficiency”. SIAM J. Numer. Anal. 2011, 49, 1317–1319. [Google Scholar]
  19. Cordero, A.; Torregrosa, J.R.; Triguero-Navarro, P. A general optimal iterative scheme with arbitrary order of convergence. Symmetry 2021, 13, 884. [Google Scholar] [CrossRef]
  20. Wang, X.; Qin, Y.; Qian, W.; Zhang, S.; Fan, X. A family of newton type iterative methods for solving nonlinear equations. Algorithms 2015, 8, 786–798. [Google Scholar] [CrossRef] [Green Version]
  21. Kelley, C.T. Solving Nonlinear Equations with Newton’s Method; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  22. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  23. Wang, X.; Chen, X. The Dynamical Analysis of a Biparametric Family of Six-Order Ostrowski-Type Method under the Möbius Conjugacy Map. Fractal Fract. 2022, 6, 174. [Google Scholar] [CrossRef]
  24. Wang, X.; Chen, X. Derivative-Free Kurchatov-Type Accelerating Iterative Method for Solving Nonlinear Systems: Dynamics and Applications. Fractal Fract. 2022, 6, 59. [Google Scholar] [CrossRef]
  25. Sharma, D.; Argyros, I.K.; Parhi, S.K.; Sunanda, S.K. Local Convergence and Dynamical Analysis of a Third and Fourth Order Class of Equation Solvers. Fractal Fract. 2021, 5, 27. [Google Scholar] [CrossRef]
  26. Behl, R.; Cordero, A.; Torregrosa, J.R. High order family of multivariate iterative methods: Convergence and stability. J. Comput. Appl. Math. 2020, 405, 113053. [Google Scholar] [CrossRef]
  27. Neta, B.; Scott, M.; Chun, C. Basin attractors for various methods for multiple roots. Appl. Math. Comput. 2012, 218, 5043–5066. [Google Scholar]
  28. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
  29. Galilea, V.; Gutiérrez, J.M. A Characterization of the Dynamics of Schröder’s Method for Polynomials with Two Roots. Fractal Fract. 2021, 5, 25. [Google Scholar] [CrossRef]
  30. Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090. [Google Scholar] [CrossRef] [Green Version]
  31. Mallawi, F.O.; Behl, R.; Maroju, P. On Global Convergence of Third-Order Chebyshev-Type Method under General Continuity Conditions. Fractal Fract. 2022, 6, 46. [Google Scholar] [CrossRef]
  32. Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95. [Google Scholar] [CrossRef]
  33. Wang, X. A new accelerating technique applied to a variant of Cordero-Torregrosa method. J. Comput. Appl. Math. 2018, 330, 695–709. [Google Scholar] [CrossRef]
  34. Cordero, A.; Torregrosa, J.R. Variants of Newton’s Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar]
Figure 1. Dynamical planes for z^2 - 1 = 0.
Figure 2. Dynamical planes for z^3 - 1 = 0.
Figure 3. Dynamical planes for z^4 - 1 = 0.
Figure 4. Dynamical planes for z^5 - 1 = 0.
Figure 5. Dynamical planes for z^6 - 1 = 0.
Table 1. Numerical results of different methods for z^2 - 1 = 0.

| Method | (2) | (3) | (4) | (13) (n = 3) | (73) | (76) | (34) (n = 2) | (34) (n = 3) |
|---|---|---|---|---|---|---|---|---|
| POP | 99.80% | 100% | 100% | 100% | 99.80% | 96.02% | 100% | 100% |
| ANI | 2.2781 | 2.2415 | 1.1430 | 2.2233 | 3.4107 | 11.401 | 1.8806 | 1.1371 |
Table 2. Numerical results of different methods for z^3 - 1 = 0.

| Method | (2) | (3) | (4) | (13) (n = 3) | (73) | (76) | (34) (n = 2) | (34) (n = 3) |
|---|---|---|---|---|---|---|---|---|
| POP | 100% | 99.99% | 100% | 100% | 99.62% | 77.53% | 100% | 100% |
| ANI | 2.6354 | 2.6416 | 1.3591 | 2.6386 | 4.4523 | 15.669 | 2.1994 | 1.3430 |
Table 3. Numerical results of different methods for z^4 - 1 = 0.

| Method | (2) | (3) | (4) | (13) (n = 3) | (73) | (76) | (34) (n = 2) | (34) (n = 3) |
|---|---|---|---|---|---|---|---|---|
| POP | 99.60% | 100% | 100% | 100% | 97.10% | 63.42% | 100% | 100% |
| ANI | 3.5018 | 3.4208 | 2.1952 | 3.4200 | 6.3275 | 18.667 | 2.5931 | 1.9959 |
Table 4. Numerical results of different methods for z^5 - 1 = 0.

| Method | (2) | (3) | (4) | (13) (n = 3) | (73) | (76) | (34) (n = 2) | (34) (n = 3) |
|---|---|---|---|---|---|---|---|---|
| POP | 99.83% | 98.34% | 99.98% | 98.35% | 95.75% | 61.57% | 99.99% | 100% |
| ANI | 4.2132 | 4.3115 | 3.5273 | 4.3119 | 6.9659 | 19.920 | 2.7746 | 2.2169 |
Table 5. Numerical results of different methods for z^6 - 1 = 0.

| Method | (2) | (3) | (4) | (13) (n = 3) | (73) | (76) | (34) (n = 2) | (34) (n = 3) |
|---|---|---|---|---|---|---|---|---|
| POP | 99.60% | 99.98% | 98.09% | 99.98% | 94.27% | 53.35% | 99.92% | 99.99% |
| ANI | 4.6530 | 4.5337 | 5.1056 | 4.5347 | 7.8804 | 21.428 | 3.3563 | 2.4707 |
Table 6. Numerical results for f_1(t).

| Method | \|t_1 - a\| | \|t_2 - a\| | \|t_3 - a\| | \|t_4 - a\| | ACOC |
|---|---|---|---|---|---|
| (2) n = 2 | 0.51126 × 10^-4 | 0.97721 × 10^-18 | 0.13043 × 10^-72 | 0.41401 × 10^-292 | 4.0000003 |
| (3) n = 2 | 0.54528 × 10^-4 | 0.13358 × 10^-17 | 0.48098 × 10^-72 | 0.80864 × 10^-290 | 4.0000000 |
| (13) n = 2 | 0.52925 × 10^-4 | 0.11597 × 10^-17 | 0.26735 × 10^-72 | 0.75510 × 10^-291 | 4.0000000 |
| (73) | 0.44088 × 10^-5 | 0.64006 × 10^-25 | 0.44087 × 10^-115 | 0.32433 × 10^-526 | 4.5599449 |
| (76) | 0.27347 × 10^-3 | 0.29224 × 10^-16 | 0.13268 × 10^-70 | 0.60206 × 10^-301 | 4.2386945 |
| (34) n = 2 | 0.52925 × 10^-4 | 0.21668 × 10^-27 | 0.48274 × 10^-159 | 0.86928 × 10^-910 | 5.7024880 |
| (2) n = 3 | 0.40835 × 10^-8 | 0.24321 × 10^-68 | 0.38502 × 10^-550 | | 8.0008692 |
| (3) n = 3 | 0.45332 × 10^-8 | 0.61330 × 10^-68 | 0.68830 × 10^-547 | | 8.0000000 |
| (13) n = 3 | 0.44026 × 10^-8 | 0.47630 × 10^-68 | 0.89387 × 10^-548 | | 8.0000000 |
| (4) n = 3 | 0.45332 × 10^-8 | 0.14236 × 10^-85 | 0.62560 × 10^-861 | | 10.004219 |
| (34) n = 3 | 0.44026 × 10^-8 | 0.29405 × 10^-104 | 0.86439 × 10^-1222 | | 11.619740 |
Table 7. Numerical results for f_2(t).

| Method | \|t_1 - a\| | \|t_2 - a\| | \|t_3 - a\| | \|t_4 - a\| | ACOC |
|---|---|---|---|---|---|
| (2) n = 2 | 0.11793 × 10^-3 | 0.84626 × 10^-16 | 0.22447 × 10^-64 | 0.11110 × 10^-258 | 4.0000000 |
| (3) n = 2 | 0.12130 × 10^-3 | 0.97704 × 10^-16 | 0.41138 × 10^-64 | 0.12928 × 10^-257 | 4.0000000 |
| (13) n = 2 | 0.12145 × 10^-3 | 0.98370 × 10^-16 | 0.42339 × 10^-64 | 0.14529 × 10^-257 | 4.0000000 |
| (73) | 0.62678 × 10^-5 | 0.88468 × 10^-24 | 0.18867 × 10^-109 | 0.41780 × 10^-500 | 4.5599373 |
| (76) | 0.31113 × 10^-3 | 0.69098 × 10^-15 | 0.29916 × 10^-64 | 0.23336 × 10^-273 | 4.2360780 |
| (34) n = 2 | 0.12145 × 10^-3 | 0.42142 × 10^-24 | 0.33957 × 10^-142 | 0.70177 × 10^-815 | 5.6961912 |
| (2) n = 3 | 0.15535 × 10^-7 | 0.72017 × 10^-63 | 0.15361 × 10^-505 | | 8.0000000 |
| (3) n = 3 | 0.16390 × 10^-7 | 0.11729 × 10^-62 | 0.80666 × 10^-504 | | 8.0000000 |
| (13) n = 3 | 0.16405 × 10^-7 | 0.11831 × 10^-62 | 0.86592 × 10^-504 | | 8.0000000 |
| (4) n = 3 | 0.16390 × 10^-7 | 0.46896 × 10^-79 | 0.17682 × 10^-794 | | 9.9998487 |
| (34) n = 3 | 0.16405 × 10^-7 | 0.65259 × 10^-95 | 0.93155 × 10^-1120 | | 11.725876 |
Table 8. Numerical results for f_3(t).

| Method | \|t_1 - a\| | \|t_2 - a\| | \|t_3 - a\| | \|t_4 - a\| | ACOC |
|---|---|---|---|---|---|
| (2) n = 2 | 0.42633 × 10^-3 | 0.10988 × 10^-12 | 0.48611 × 10^-51 | 0.18620 × 10^-204 | 4.0000000 |
| (3) n = 2 | 0.43209 × 10^-3 | 0.11749 × 10^-12 | 0.64401 × 10^-51 | 0.58130 × 10^-204 | 4.0000000 |
| (13) n = 2 | 0.43241 × 10^-3 | 0.11792 × 10^-12 | 0.65398 × 10^-51 | 0.61858 × 10^-204 | 4.0000000 |
| (73) | 0.59903 × 10^-4 | 0.25674 × 10^-18 | 0.13907 × 10^-83 | 0.35294 × 10^-381 | 4.5597146 |
| (76) | 0.10103 × 10^-2 | 0.69855 × 10^-12 | 0.23785 × 10^-50 | 0.22107 × 10^-213 | 4.2381255 |
| (34) n = 2 | 0.43241 × 10^-3 | 0.48374 × 10^-19 | 0.18668 × 10^-114 | 0.20222 × 10^-656 | 5.6801734 |
| (2) n = 3 | 0.27207 × 10^-6 | 0.50023 × 10^-51 | 0.65317 × 10^-409 | | 8.0000000 |
| (3) n = 3 | 0.27947 × 10^-6 | 0.63668 × 10^-51 | 0.46203 × 10^-408 | | 8.0000000 |
| (13) n = 3 | 0.27977 × 10^-6 | 0.64293 × 10^-51 | 0.50004 × 10^-408 | | 8.0000000 |
| (4) n = 3 | 0.27947 × 10^-6 | 0.14545 × 10^-67 | 0.21179 × 10^-680 | | 10.000013 |
| (34) n = 3 | 0.27977 × 10^-6 | 0.19008 × 10^-76 | 0.75443 × 10^-913 | | 11.920006 |
Table 9. Numerical results for f_4(t).

| Method | \|t_1 - a\| | \|t_2 - a\| | \|t_3 - a\| | \|t_4 - a\| | ACOC |
|---|---|---|---|---|---|
| (2) n = 2 | 0.45791 × 10^-2 | 0.29033 × 10^-9 | 0.47507 × 10^-38 | 0.34058 × 10^-153 | 4.0000000 |
| (3) n = 2 | 0.46989 × 10^-2 | 0.32946 × 10^-9 | 0.80651 × 10^-38 | 0.28961 × 10^-152 | 4.0000000 |
| (13) n = 2 | 0.47013 × 10^-2 | 0.33026 × 10^-9 | 0.81472 × 10^-38 | 0.30172 × 10^-152 | 4.0000000 |
| (73) | 0.21843 × 10^-2 | 0.71702 × 10^-12 | 0.63014 × 10^-55 | 0.30506 × 10^-251 | 4.5595182 |
| (76) | 0.13208 × 10^-1 | 0.35938 × 10^-8 | 0.23138 × 10^-35 | 0.10762 × 10^-150 | 4.2415301 |
| (34) n = 2 | 0.47013 × 10^-2 | 0.78035 × 10^-14 | 0.22231 × 10^-82 | 0.33152 × 10^-472 | 5.6871333 |
| (2) n = 3 | 0.16926 × 10^-4 | 0.22742 × 10^-38 | 0.24157 × 10^-309 | | 8.0000000 |
| (3) n = 3 | 0.17852 × 10^-4 | 0.36590 × 10^-38 | 0.11400 × 10^-307 | | 8.0000000 |
| (13) n = 3 | 0.17864 × 10^-4 | 0.36802 × 10^-38 | 0.11943 × 10^-307 | | 7.9999986 |
| (4) n = 3 | 0.17852 × 10^-4 | 0.56501 × 10^-49 | 0.55091 × 10^-494 | | 10.115357 |
| (34) n = 3 | 0.17864 × 10^-4 | 0.38690 × 10^-57 | 0.18244 × 10^-682 | | 11.873804 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Wang, X. A Novel n-Point Newton-Type Root-Finding Method of High Computational Efficiency. Mathematics 2022, 10, 1144. https://doi.org/10.3390/math10071144

