Article

An Efficient Class of Weighted-Newton Multiple Root Solvers with Seventh Order Convergence

Janak Raj Sharma, Deepak Kumar and Carlo Cattani
1 Department of Mathematics, Sant Longowal Institute of Engineering & Technology, Longowal, Sangrur 148106, India
2 Engineering School (DEIM), University of Tuscia, 01100 Viterbo, Italy
3 Ton Duc Thang University, Ho Chi Minh City (HCMC) 758307, Vietnam
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(8), 1054; https://doi.org/10.3390/sym11081054
Submission received: 27 June 2019 / Revised: 7 August 2019 / Accepted: 13 August 2019 / Published: 16 August 2019
(This article belongs to the Special Issue Symmetry and Complexity 2019)

Abstract: In this work, we construct a family of seventh order iterative methods for finding multiple roots of a nonlinear function. The scheme consists of three steps, of which the first is Newton's step and the last two are weighted-Newton steps; hence the name 'weighted-Newton methods'. Theoretical results are studied exhaustively, together with the main theorem describing the convergence analysis. The stability and convergence domain of the proposed class are also demonstrated by means of a graphical technique, namely, basins of attraction. The boundaries of these basins are fractal-like shapes, across which the basins are symmetric. Efficacy is demonstrated through numerical experimentation on a variety of functions, which illustrates good convergence behavior. Moreover, the theoretical result concerning computational efficiency is verified by computing the elapsed CPU time. The overall comparison of the numerical results, including accuracy and CPU time, shows that the new methods are strong competitors of the existing methods.

1. Introduction

Finding numerically a root of an equation is an interesting and challenging problem. It is also very important in many diverse areas such as Mathematical Biology, Physics, Chemistry, Economics and Engineering, to name a few [1,2,3,4], since many problems from these disciplines ultimately reduce to finding the root of an equation. Researchers rely on iterative methods for approximating such roots, since closed-form solutions cannot be obtained in general. In particular, here we consider the problem of computing multiple roots of an equation f(x) = 0 by iterative methods. A root (say, α) of f(x) = 0 is called a multiple root with multiplicity m if f^{(j)}(α) = 0 for j = 0, 1, 2, …, m−1, and f^{(m)}(α) ≠ 0.
A basic and widely used iterative method is the well-known modified Newton’s method
x_{n+1} = x_n − m f(x_n)/f'(x_n),   n = 0, 1, 2, … .
This method efficiently locates the required multiple root with quadratic order of convergence, provided that the initial value x_0 is sufficiently close to the root [5]. In terms of Traub's classification (see [1]), Newton's method (1) is called a one-point method. Some other important methods belonging to this class have been developed in [6,7,8,9].
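For concreteness, a minimal Python sketch of iteration (1) is shown below; the test function f(x) = (x − 2)^3 and the tolerance are illustrative choices and are not taken from the paper.

```python
# Minimal sketch of the modified Newton iteration (1) for a root of known
# multiplicity m. The test function below is an illustrative choice.
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - m*f(x_n)/f'(x_n) until |f(x_n)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - m * fx / df(x)
    return x

# Example: f(x) = (x - 2)**3 has the root 2 with multiplicity m = 3.
root = modified_newton(lambda x: (x - 2)**3, lambda x: 3 * (x - 2)**2, 3.0, 3)
print(root)  # expected to be very close to 2
```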
Recently, numerous higher order methods, either independent or based on the modified Newton’s method (1), have been proposed and analyzed in the literature, see e.g., [10,11,12,13,14,15,16,17,18,19,20,21,22,23] and references cited therein. Such methods belong to the category of multipoint methods [1]. Multipoint iterative methods compute new approximations to root α by sampling the function f ( x ) , and its derivatives at several points of the independent variable, per each step. These methods have the strategy similar to Runge–Kutta methods for solving differential equations and Gaussian quadrature integration rules in the sense that they possess free parameters which can be used to ensure that the convergence speed is of a certain order, and that the sampling is done at some suitable points.
In particular, Geum et al. in [22,23] have proposed two- and three-point Newton-like methods with convergence order six for finding multiple roots. The two-point method [22], applicable for m > 1 , is given as
y_n = x_n − m f(x_n)/f'(x_n),
x_{n+1} = y_n − Q(u, s) f(y_n)/f'(y_n),
where u = (f(y_n)/f(x_n))^{1/m}, s = (f'(y_n)/f'(x_n))^{1/(m−1)}, and Q : ℂ² → ℂ is a holomorphic function in some neighborhood of the origin (0, 0). The three-point method [23], applicable for m ≥ 1, is given as
y_n = x_n − m f(x_n)/f'(x_n),
z_n = x_n − m Q_f(u) f(x_n)/f'(x_n),
x_{n+1} = x_n − m K_f(u, v) f(x_n)/f'(x_n),
wherein u = (f(y_n)/f(x_n))^{1/m} and v = (f(z_n)/f(x_n))^{1/m}. The function Q_f : ℂ → ℂ is analytic in a neighborhood of 0, and K_f : ℂ² → ℂ is holomorphic in a neighborhood of (0, 0). Both schemes (2) and (3) require four function evaluations per iteration to obtain sixth order convergence, with efficiency index (see [24]) 6^{1/4} ≈ 1.565.
The goal and motivation in constructing iterative methods is to attain as high an order of convergence as possible using as few function evaluations as possible. With these considerations, here we propose a family of three-point methods that attains seventh order of convergence for locating multiple roots. The methodology is based on Newton's and weighted-Newton iterations. The algorithm requires four function evaluations per iteration and therefore possesses the efficiency index 7^{1/4} ≈ 1.627. This shows that the proposed methods have better efficiency (1.627) than the efficiency (1.565) of the existing methods (2) and (3). Theoretical results concerning convergence order and computational efficiency are verified by numerical tests. In the comparison of numerical results with existing techniques, the proposed methods are observed to be computationally more efficient, since they require less computing time (CPU time) to reach a solution of the required accuracy.
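The efficiency-index comparison quoted here amounts to a one-line computation; the short snippet below simply reproduces the two numbers.

```python
# Efficiency index p**(1/d) in the sense of Ostrowski [24]: p is the order of
# convergence and d the number of function evaluations per iteration.
def efficiency_index(order, evaluations):
    return order ** (1.0 / evaluations)

print(efficiency_index(6, 4))  # schemes (2) and (3): about 1.565
print(efficiency_index(7, 4))  # proposed scheme:     about 1.627
```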
The contents of the article are summarized as follows. In Section 2, we describe the approach used to develop the new methods and prove their seventh order convergence. In Section 3, the stability of the methods is examined by means of a graphical technique called basins of attraction. In Section 4, numerical tests are performed to verify the theoretical results by applying the methods to several examples. Concluding remarks are given in Section 5.

2. Formulation of Method

Let m ≥ 1 be the multiplicity of a root of the equation f(x) = 0. To compute the root, let us consider the following three-step iterative scheme:
y_n = x_n − m f(x_n)/f'(x_n),
z_n = y_n − m u H(u) f(x_n)/f'(x_n),
x_{n+1} = z_n − m v G(u, w) f(x_n)/f'(x_n),
where u = (f(y_n)/f(x_n))^{1/m}, v = (f(z_n)/f(x_n))^{1/m}, w = (f(z_n)/f(y_n))^{1/m}, the function H : ℂ → ℂ is analytic in some neighborhood of 0, and G : ℂ² → ℂ is holomorphic in a neighborhood of (0, 0). Notice that the first step is the Newton iteration (1), whereas the second and third steps are weighted by the factors H(u) and G(u, w); for this reason we call the algorithm (4) a weighted-Newton method. The factors H and G are called weight factors or, more appropriately, weight functions.
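A minimal Python sketch of one full step of scheme (4) is given below; f, f', the weight functions H and G, the multiplicity m and the current iterate are assumed to be supplied by the caller, and the principal complex branch is used for the m-th roots (an implementation choice, not part of the scheme itself).

```python
# Sketch of one full step of the weighted-Newton scheme (4). The function f,
# its derivative df, the multiplicity m and the weight functions H and G are
# supplied by the caller. The m-th roots u, v, w are taken as principal
# complex branches, which is an implementation choice, not part of the scheme.
def weighted_newton_step(f, df, x, m, H, G):
    fx, dfx = f(x), df(x)
    y = x - m * fx / dfx                      # first step: Newton
    u = complex(f(y) / fx) ** (1.0 / m)
    z = y - m * u * H(u) * fx / dfx           # second step: weighted Newton
    fz = f(z)
    v = complex(fz / fx) ** (1.0 / m)
    w = complex(fz / f(y)) ** (1.0 / m)
    return z - m * v * G(u, w) * fx / dfx     # third step: weighted Newton
```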
In the sequel we shall find conditions under which the algorithm (4) achieves high convergence order. Thus, the following theorem is stated and proved:
Theorem 1.
Assume that f : ℂ → ℂ is an analytic function in a domain enclosing a root α with multiplicity m. Suppose that the initial point x_0 is sufficiently close to α. Then the iterative formula defined by (4) has seventh order of convergence if the functions H(u) and G(u, w) satisfy the conditions: H(0) = 1, H'(0) = 2, H''(0) = −2, G(0, 0) = 1, G_{10}(0, 0) = 2, G_{01}(0, 0) = 1, G_{20}(0, 0) = 0, |H'''(0)| < ∞ and |G_{11}(0, 0)| < ∞, where G_{ij}(0, 0) = ∂^{i+j}/(∂u^i ∂w^j) G(u, w) |_{(0,0)}.
Proof. 
Let e_n = x_n − α be the error at the n-th iteration. Taking into account that f^{(j)}(α) = 0, j = 0, 1, 2, …, m−1, we have by the Taylor expansion of f(x_n) about α
f(x_n) = (f^{(m)}(α)/m!) e_n^m + (f^{(m+1)}(α)/(m+1)!) e_n^{m+1} + (f^{(m+2)}(α)/(m+2)!) e_n^{m+2} + ⋯ + (f^{(m+7)}(α)/(m+7)!) e_n^{m+7} + O(e_n^{m+8}),
or:
f(x_n) = (f^{(m)}(α)/m!) e_n^m [1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + C_4 e_n^4 + C_5 e_n^5 + C_6 e_n^6 + C_7 e_n^7 + O(e_n^8)],
where C_k = (m!/(m+k)!) f^{(m+k)}(α)/f^{(m)}(α) for k ∈ ℕ.
Similarly,
f'(x_n) = (f^{(m)}(α)/m!) e_n^{m−1} [m + (m+1)C_1 e_n + (m+2)C_2 e_n^2 + (m+3)C_3 e_n^3 + (m+4)C_4 e_n^4 + (m+5)C_5 e_n^5 + (m+6)C_6 e_n^6 + (m+7)C_7 e_n^7 + O(e_n^8)].
Using (5) and (6) in the first step of (4), it follows that
y_n − α = (C_1/m) e_n^2 + Σ_{i=1}^{5} ω_i e_n^{i+2} + O(e_n^8),
where the coefficients ω_i = ω_i(m, C_1, C_2, …, C_7) are given in terms of m, C_1, C_2, …, C_7; the first two are, explicitly, ω_1 = (2m C_2 − (m+1) C_1^2)/m^2 and ω_2 = (1/m^3) (3m^2 C_3 + (m+1)^2 C_1^3 − m(4 + 3m) C_1 C_2). The remaining expressions for the ω_i are not reproduced explicitly since they are very lengthy.
Expansion of f ( y n ) about α yields
f(y_n) = (f^{(m)}(α)/m!) (C_1/m)^m e_n^{2m} [1 + ((2 C_2 m − C_1^2 (m+1))/C_1) e_n + (1/(2m C_1^2)) ((3 + 3m + 3m^2 + m^3) C_1^4 − 2m(2 + 3m + 2m^2) C_1^2 C_2 + 4(m − 1) m^2 C_2^2 + 6 m^2 C_1 C_3) e_n^2 + Σ_{i=1}^{4} ω̄_i e_n^{i+2} + O(e_n^8)],
where ω̄_i = ω̄_i(m, C_1, C_2, …, C_7).
Using (5) and (8) in the expression of u, it follows that
u = (C_1/m) e_n + ((2 C_2 m − C_1^2 (m+2))/m^2) e_n^2 + Σ_{i=1}^{5} η_i e_n^{i+2} + O(e_n^8),
where η_i = η_i(m, C_1, C_2, …, C_7), with the first coefficient given explicitly by η_1 = (1/(2m^3)) (C_1^3 (2m^2 + 7m + 7) + 6 C_3 m^2 − 2 C_2 C_1 m (3m + 7)).
Expanding the weight function H(u) in a neighborhood of 0,
H(u) ≈ H(0) + u H'(0) + (1/2!) u^2 H''(0) + (1/3!) u^3 H'''(0).
Inserting Equations (5), (8) and (10) into the second step of (4), after some simplifications we obtain
z_n − α = (A/m) C_1 e_n^2 + (1/m^2) (2m A C_2 + C_1^2 (−1 − mA + 3H(0) − H'(0))) e_n^3 + Σ_{i=1}^{4} γ_i e_n^{i+3} + O(e_n^8),
where A = 1 − H(0) and γ_i = γ_i(m, H(0), H'(0), H''(0), H'''(0), C_1, C_2, …, C_7).
In order to accelerate convergence, the coefficients of e n 2 and e n 3 should be equal to zero. That is possible only if we have
H(0) = 1,   H'(0) = 2.
By using the above values in (11), we obtain that
z_n − α = ((−2m C_1 C_2 + C_1^3 (9 + m − H''(0)))/(2m^3)) e_n^4 + Σ_{i=1}^{3} γ_i e_n^{i+4} + O(e_n^8).
Expansion of f ( z n ) about α yields
f(z_n) = (f^{(m)}(α)/m!) (z_n − α)^m [1 + C_1 (z_n − α) + C_2 (z_n − α)^2 + O((z_n − α)^3)].
From (5), (8) and (14), we obtain forms of v and w as
v = (((9 + m) C_1^3 − 2m C_1 C_2)/(2m^3)) e_n^3 + Σ_{i=1}^{4} τ_i e_n^{i+3} + O(e_n^8),
where τ_i = τ_i(m, H''(0), H'''(0), C_1, C_2, …, C_7), and
w = (((9 + m − H''(0)) C_1^2 − 2m C_2)/(2m^3)) e_n^2 + Σ_{i=1}^{5} ς_i e_n^{i+2} + O(e_n^8),
where ς_i = ς_i(m, H''(0), H'''(0), C_1, C_2, …, C_7).
Expanding G(u, w) in a neighborhood of the origin (0, 0) by Taylor series, it follows that
G(u, w) ≈ G_{00}(0, 0) + u G_{10}(0, 0) + (1/2) u^2 G_{20}(0, 0) + w (G_{01}(0, 0) + u G_{11}(0, 0)),
where G_{ij}(0, 0) = ∂^{i+j}/(∂u^i ∂w^j) G(u, w) |_{(0,0)}.
Then by substituting (5), (6), (15)–(17) into the last step of scheme (4), we obtain that
e_{n+1} = (1/(2m^3)) (−1 + G_{00}(0, 0)) C_1 (2m C_2 − (9 + m − H''(0)) C_1^2) e_n^4 + Σ_{i=1}^{3} ξ_i e_n^{i+4} + O(e_n^8),
where ξ_i = ξ_i(m, H''(0), H'''(0), G_{00}(0, 0), G_{10}(0, 0), G_{20}(0, 0), G_{01}(0, 0), G_{11}(0, 0), C_1, C_2, …, C_7).
From Equation (18) it is clear that we can obtain at least fifth order convergence when G_{00}(0, 0) = 1. In addition, using this value in ξ_1 = 0, we obtain
G_{10}(0, 0) = 2.
By using G 00 = 1 and (19) in ξ 2 = 0 , the following equation is obtained
C 1 2 m C 2 C 1 2 ( 9 + m H ( 0 ) ) ( 2 m C 2 ( 1 + G 01 ( 0 , 0 ) ) + C 1 2 ( 11 + m ( 1 + G 01 ( 0 , 0 ) ) ( 9 + H ( 0 ) ) G 01 ( 0 , 0 ) + G 20 ( 0 , 0 ) ) ) = 0 ,
which further yields
G_{01}(0, 0) = 1,   G_{20}(0, 0) = 0   and   H''(0) = −2.
Using the above values in (18), we obtain the error equation
e n + 1 = 1 360 m 6 ( 360 m 3 ( 39 + 5 m ) C 2 3 6 m C 3 2 10 m C 2 C 4 + 120 m 3 C 1 ( ( 515 + 78 m ) C 2 C 3 12 m C 5 ) 60 m 2 C 1 3 C 3 1383 + 845 m + 78 m 2 + 12 H ( 0 ) + 10 m C 1 4 C 2 ( 21571 + 8183 m 2 + 558 m 3 + 515 H ( 0 ) + 324 G 11 ( 0 , 0 ) + 36 m 667 + 6 H ( 0 ) + G 11 ( 0 , 0 ) ) 60 m 2 C 1 2 6 m ( 55 + 9 m ) C 4 + C 2 2 2619 + 1546 m + 135 m 2 + 24 H ( 0 ) + 6 G 11 ( 0 , 0 ) C 1 6 ( 55017 + 17005 m + 978 m 4 + 2775 H ( 0 ) + 7290 G 11 ( 0 , 0 ) + 15 m 2 ( 4463 + 40 H ( 0 ) + 6 G 11 ( 0 , 0 ) ) + 5 m ( 21571 + 515 H ( 0 ) + 324 G 11 ( 0 , 0 ) ) ) ) e n 7 + O ( e n 8 ) .
Thus, the seventh order convergence is established. □
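A rough numerical illustration of this order statement can be carried out in a few lines; the sketch below assumes the Case 1 and Case 5 weight functions introduced in the next subsections, an illustrative test function f(x) = (x − 1)^4 e^x with a root of multiplicity 4, an illustrative starting point, and mpmath for the multiple-precision arithmetic.

```python
# Rough numerical illustration of the seventh order of convergence, using the
# Case 1 / Case 5 weights of the next subsections. The test function
# f(x) = (x - 1)**4 * exp(x), its root x = 1 of multiplicity 4, the starting
# point 1.2 and the working precision are illustrative choices.
from mpmath import mp, mpf, exp, log

mp.dps = 400
f  = lambda x: (x - 1)**4 * exp(x)
df = lambda x: (x - 1)**3 * exp(x) * (x + 3)
H  = lambda u: 1 + 2*u - u**2                 # Case 1
G  = lambda u, w: 1 + 2*u + w                 # Case 5

def step(x, m=4):
    fx, dfx = f(x), df(x)
    y = x - m*fx/dfx
    u = (f(y)/fx)**(mpf(1)/m)
    z = y - m*u*H(u)*fx/dfx
    v = (f(z)/fx)**(mpf(1)/m)
    w = (f(z)/f(y))**(mpf(1)/m)
    return z - m*v*G(u, w)*fx/dfx

xs = [mpf('1.2')]
for _ in range(3):
    xs.append(step(xs[-1]))
errs = [abs(x - 1) for x in xs]
for k in range(2, len(errs)):
    print(log(errs[k]) / log(errs[k - 1]))    # ratios expected to settle near 7
```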
Based on the conditions on H(u) and G(u, w) given in Theorem 1, numerous methods of the family (4) can be generated. However, we restrict ourselves to the following simple forms:

2.1. Some Concrete Forms of H ( u )

Case 1. Considering H(u) a polynomial function, i.e.,
H(u) = A_0 + A_1 u + A_2 u^2.
Using the conditions of Theorem 1, we get A_0 = 1, A_1 = 2 and A_2 = −1. Then
H(u) = 1 + 2u − u^2.
Case 2. When H(u) is a rational function, i.e.,
H(u) = (1 + A_0 u)/(A_1 + A_2 u).
Using the conditions of Theorem 1, we get A_0 = 5/2, A_1 = 1 and A_2 = 1/2. So
H(u) = (2 + 5u)/(2 + u).
Case 3. Consider H(u) as another rational weight function, e.g.,
H(u) = (1 + A_0 u + A_1 u^2)/(1 + A_2 u).
Using the conditions of Theorem 1, we obtain A_0 = 3, A_1 = 1 and A_2 = 1. Then H(u) becomes
H(u) = (1 + 3u + u^2)/(1 + u).
Case 4. When H(u) is yet another rational function of the form
H(u) = (1 + A_0 u)/(1 + A_1 u + A_2 u^2).
Using the conditions of Theorem 1, we have A_0 = 1, A_1 = −1 and A_2 = 3. Then
H(u) = (1 + u)/(1 − u + 3u^2).

2.2. Some Concrete Forms of G ( u , w )

Case 5. Considering G(u, w) a polynomial function, e.g.,
G(u, w) = A_0 + A_1 u + A_2 u^2 + A_3 w.
From the conditions of Theorem 1, we get A_0 = 1, A_1 = 2, A_2 = 0 and A_3 = 1. So
G(u, w) = 1 + 2u + w.
Case 6. Considering G(u, w) a sum of two rational functions, that is,
G(u, w) = (A_0 + 2u)/(1 + A_1 u) + B_0/(1 + B_1 w).
By using the conditions of Theorem 1, we find that A_0 = 0, A_1 = 0, B_0 = 1 and B_1 = −1, so that G(u, w) becomes
G(u, w) = 2u + 1/(1 − w).
Case 7. When G(u, w) is a product of two rational functions, that is,
G(u, w) = ((1 + A_0 u)/(1 + A_1 u)) × (B_0/(1 + B_1 w)).
The conditions of Theorem 1 then yield A_0 = 2, A_1 = 0, B_0 = 1 and B_1 = −1. So
G(u, w) = (1 + 2u)/(1 − w).
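Written as Python callables (a sketch), these concrete choices pair up as follows; i = I, II corresponds to H from Cases 1–2 and j = a, b, c to G from Cases 5–7, matching the labels NM-i(j) used in the following sections.

```python
# The concrete weight functions above written as Python callables (a sketch).
# Pairing H from Cases 1-2 with G from Cases 5-7 gives the variants denoted
# NM-I(a-c) and NM-II(a-c) in the following sections.
H_case1 = lambda u: 1 + 2*u - u**2             # Case 1
H_case2 = lambda u: (2 + 5*u) / (2 + u)        # Case 2
G_case5 = lambda u, w: 1 + 2*u + w             # Case 5
G_case6 = lambda u, w: 2*u + 1 / (1 - w)       # Case 6
G_case7 = lambda u, w: (1 + 2*u) / (1 - w)     # Case 7

NM_I  = {'a': (H_case1, G_case5), 'b': (H_case1, G_case6), 'c': (H_case1, G_case7)}
NM_II = {'a': (H_case2, G_case5), 'b': (H_case2, G_case6), 'c': (H_case2, G_case7)}
```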

3. Complex Dynamics of Methods

Here we analyze the complex dynamics of the new methods with the help of a graphical tool, the basins of attraction of the roots of a polynomial p(z) in the Argand plane. Analysis of the basins gives important information about the stability and convergence region of an iterative method: the wider the convergence region (i.e., the basin), the better the stability. The idea of using complex dynamics in this context was introduced by Vrscay and Gilbert [25]. In recent times many authors have used this concept in their work; see, for example, [26,27] and the references therein. We consider some of the cases corresponding to the previously obtained forms of H(u) and G(u, w) in the family (4) to assess the basins of attraction. Let us select the combinations of cases 1 and 2 of H(u) with cases 5, 6 and 7 of G(u, w) in the scheme (4), and denote the corresponding methods by NM-i(j), i = I, II and j = a, b, c.
To start with, we take the initial point z_0 in a rectangular region R ⊂ ℂ that contains all the roots of the polynomial p(z). The iterative method, when started from a point z_0 in the rectangle, either converges to a root of p(z) or eventually diverges. The stopping criterion for convergence is taken to be 10^{−3}, with a maximum of 25 iterations. If the required accuracy is not achieved within 25 iterations, we conclude that the method starting from z_0 does not converge to any root. The strategy adopted is as follows: a color is allocated to each initial point z_0 lying in the basin of attraction of a root; if the iteration starting at z_0 converges, the point is painted with the color assigned to that basin, otherwise the non-convergent point is painted black.
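The procedure just described can be sketched as follows (illustrative Python using numpy and matplotlib; `method_step` stands for one step of any of the iterations under comparison, e.g. the weighted-Newton step of Section 2 with a fixed choice of H and G, and the colors are left to the default colormap, which is an illustrative choice).

```python
# Sketch of the basin-of-attraction computation for p(z) = (z**2 - 1)**3.
# `method_step(f, df, z, m)` stands for one step of the iteration under study.
# Tolerance 1e-3 and the cap of 25 iterations follow the text.
import numpy as np
import matplotlib.pyplot as plt

roots = [-1.0, 1.0]
f  = lambda z: (z**2 - 1)**3
df = lambda z: 6 * z * (z**2 - 1)**2

def basin_index(z0, method_step, m=3, tol=1e-3, max_iter=25):
    z = z0
    for _ in range(max_iter):
        try:
            z = method_step(f, df, z, m)
        except (ZeroDivisionError, OverflowError):
            break
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k                      # converged to root k
    return len(roots)                         # non-convergent: painted black

def plot_basins(method_step, n=400, box=3.0):
    pts = np.linspace(-box, box, n)
    img = [[basin_index(complex(x, y), method_step) for x in pts] for y in pts]
    plt.imshow(img, extent=[-box, box, -box, box], origin='lower')
    plt.show()
```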
To view the geometry in complex plane, we characterize attraction basins associated with the methods NM-I(a–c) and NM-II(a–c) considering the following four polynomials:
Problem 1.
Consider the polynomial p_1(z) = (z^2 − 1)^3, which has the roots {−1, 1}, each of multiplicity three. We use a grid of 400 × 400 points in a rectangle R ⊂ ℂ of size [−3, 3] × [−3, 3] and assign the red and green colors to initial points lying in the attraction basins of the roots −1 and 1, respectively. The basins so plotted for NM-I(a–c) and NM-II(a–c) are displayed in Figure 1. Looking at these graphics, we conclude that the method NM-II(c) possesses the best stability, followed by NM-I(c) and NM-II(b). Black zones in the figures indicate initial points from which a method does not converge.
Problem 2.
Let p_2(z) = (z^3 − 1)^2, which has the three roots {−0.5 ± 0.866025i, 1}, each of multiplicity two. To plot the graphics, we use a grid of 400 × 400 points in a rectangle R ⊂ ℂ of size [−3, 3] × [−3, 3] and assign the colors blue, green and red to points in the basins of attraction of 1, −0.5 + 0.866025i and −0.5 − 0.866025i, respectively. The basins drawn for the methods NM-I(a–c) and NM-II(a–c) are shown in Figure 2. As can be observed from the pictures, the methods NM-I(c) and NM-II(c) possess a small number of divergent points and therefore have better convergence than the remaining methods.
Problem 3.
Let p_3(z) = (z^6 − 1)^3, with the six roots {±1, 0.5 ± 0.866025i, −0.5 ± 0.866025i}, each of multiplicity m = 3. The basins obtained for the considered methods are presented in Figure 3. To draw the pictures, the red, blue, green, pink, cyan and magenta colors have been assigned to the attraction basins of the six roots. We observe from the graphics that the methods NM-I(c) and NM-II(c) have better convergence behavior, since they have a smaller number of divergent points. On the other hand, NM-I(a) and NM-II(a) contain large black regions, followed by NM-I(b) and NM-II(b), indicating that these methods fail to converge within 25 iterations when started from such points.
Problem 4.
Consider the polynomial p_4(z) = z^4 − 6z^2 + 8, which has the four simple roots {±2, ±√2 ≈ ±1.414}. In this case also, we use a grid of 400 × 400 points in a rectangle R ⊂ ℂ of size [−3, 3] × [−3, 3] and allocate the red, blue, green and yellow colors to the basins of attraction of these four roots. The basins obtained for the methods are shown in Figure 4. Observing the basins, we conclude that the method NM-II(c) possesses the best stability, followed by NM-I(c). The remaining methods show chaotic behavior along the boundaries of the attraction basins.
Looking at the graphics, one can easily judge the stability, and hence the convergence behavior, of each method. We reach a root if we start the iteration from a z_0 chosen anywhere inside the basin of that root. However, if z_0 is chosen in a region where different basins of attraction meet, it is difficult to predict which root the iteration starting from z_0 will reach, so such a choice of z_0 is not a good one. Neither the black regions nor the regions where several colors mix are suitable for choosing the initial guess z_0 when a particular root is required. The most intricate geometry occurs on the boundaries between the basins of attraction, which correspond to the cases where the method is most demanding with respect to the initial point. From the basins, one can conclude that the method NM-II(c) possesses the best stability, followed by NM-I(c), compared with the remaining methods.

4. Numerical Tests

In this section, we apply the special cases NM-i(j), i = I, II and j = a, b, c of the scheme (4), corresponding to the combinations of H(u) (cases 1 and 2) with G(u, w) (cases 5, 6 and 7), to solve some nonlinear equations and validate the theoretical results derived above. The theoretical seventh order of convergence is verified by calculating the computational order of convergence (COC)
COC = ln|(x_{n+1} − α)/(x_n − α)| / ln|(x_n − α)/(x_{n−1} − α)|
(see [28]). Performance is also compared with some existing methods, namely the sixth order methods by Geum et al. [22,23], already expressed by (2) and (3). To represent Q_f(u, s) in the formula (2), we choose the following four special cases and denote the respective methods by GKN-I(j), j = a, b, c, d:
(a)
Q f ( u , s ) = m ( 1 + 2 ( m 1 ) ( u s ) 4 u s + s 2 ) .
(b)
Q f ( u , s ) = m ( 1 + 2 ( m 1 ) ( u s ) u 2 2 u s ) .
(c)
Q f ( u , s ) = m + a u 1 + b u + c s + d u s , where a = 2 m m 1 , b = 2 2 m , c = 2 ( 2 2 m + m 2 ) m 1 , d = 2 m ( m 1 ) .
(d)
Q f ( u , s ) = m + a 1 u 1 + b 1 u + c 1 u 2 1 1 + d 1 s , where a 1 = 2 m ( 4 m 4 16 m 3 + 31 m 2 30 m + 13 ( m 1 ) ( 4 m 2 8 m + 7 ) , b 1 = 4 ( 2 m 2 4 m + 3 ) ( m 1 ) ( 4 m 2 8 m + 7 ) ,
c 1 = 4 m 2 8 m + 3 4 m 2 8 m + 7 , d 1 = 2 ( m 1 ) .
For the formula (3), we consider the following four combinations of the functions Q_f(u) and K_f(u, v), and denote the corresponding methods by GKN-II(j), j = a, b, c, d:
(a)
Q f ( u ) = 1 + u 2 1 u , K f ( u , v ) = 1 + u 2 v 1 u + ( u 2 ) v .
(b)
Q f ( u ) = 1 + u + 2 u 2 , K f ( u , v ) = 1 + u + 2 u 2 + ( 1 + 2 u ) v .
(c)
Q f ( u ) = 1 + u 2 1 u , K f ( u , v ) = 1 + u + 2 u 2 + 2 u 3 + 2 u 4 + ( 2 u + 1 ) v .
(d)
Q f ( u ) = ( 2 u 1 ) ( 4 u 1 ) 1 7 u + 13 u 2 , K f ( u , v ) = ( 2 u 1 ) ( 4 u 1 ) 1 7 u + 13 u 2 ( 1 6 u ) v .
The computational work is carried out in the Mathematica software package using multiple-precision arithmetic. The numerical results displayed in Tables 1–7 contain: (i) the number of iterations (n) needed to converge to the desired solution, (ii) the last three successive errors e_n = |x_{n+1} − x_n|, (iii) the computational order of convergence (COC) and (iv) the CPU time (CPU-t), in seconds, elapsed during the execution of the program. The required number of iterations (n) and the elapsed CPU time are computed using |x_{n+1} − x_n| + |f(x_n)| < 10^{−350} as the stopping condition.
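In the same spirit, the two quantities reported in the tables can be coded directly (a sketch using mpmath in place of the paper's Mathematica environment).

```python
# Sketch of the two quantities reported in the tables, using mpmath for the
# multiple-precision arithmetic (the experiments themselves use Mathematica).
from mpmath import mp, log, fabs

mp.dps = 500   # enough working digits for the 10**(-350) stopping threshold

def coc(x_prev, x_curr, x_next, alpha):
    """COC = ln|(x_{n+1}-a)/(x_n-a)| / ln|(x_n-a)/(x_{n-1}-a)|, a = alpha."""
    return log(fabs((x_next - alpha) / (x_curr - alpha))) / \
           log(fabs((x_curr - alpha) / (x_prev - alpha)))

def stop(x_next, x_curr, f):
    """Stopping condition |x_{n+1} - x_n| + |f(x_n)| < 10**(-350)."""
    return fabs(x_next - x_curr) + fabs(f(x_curr)) < mp.mpf(10) ** (-350)
```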
For the numerical tests we select seven problems. The first four problems are of practical interest, whereas the last three are of academic interest. In all problems the root multiplicity m does not need to be estimated: it is known a priori and is supplied to the algorithm.
Example 1 (Eigenvalue problem).
Finding the eigenvalues of a large sparse square matrix is a challenging task in applied mathematics and the engineering sciences. Even calculating the roots of the characteristic equation of a matrix of order greater than 4 is a considerable job. We consider the following 9 × 9 matrix.
A = 1 8 12 0 0 19 19 76 19 18 437 64 24 0 24 24 64 8 32 376 16 0 24 4 4 16 4 8 92 40 0 0 10 50 40 2 20 242 4 0 0 1 41 4 1 0 25 40 0 0 18 18 104 18 20 462 84 0 0 29 29 84 21 42 501 16 0 0 4 4 16 4 16 92 0 0 0 0 0 0 0 0 24 .
We calculate the characteristic polynomial of the matrix A as
f_1(x) = x^9 − 29x^8 + 349x^7 − 2261x^6 + 8455x^5 − 17663x^4 + 15927x^3 + 6993x^2 − 24732x + 12960.
This function has a multiple root α = 3 of multiplicity 4. We select the initial value x_0 = 2.25 and obtain the numerical results shown in Table 1.
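As a quick sanity check (a sketch using sympy rather than the Mathematica code used for the experiments), the stated root and multiplicity can be confirmed directly from the displayed coefficients.

```python
# Sanity check (sketch) of the stated root and multiplicity of f1 with sympy.
import sympy as sp

x = sp.symbols('x')
f1 = (x**9 - 29*x**8 + 349*x**7 - 2261*x**6 + 8455*x**5 - 17663*x**4
      + 15927*x**3 + 6993*x**2 - 24732*x + 12960)
# The factor (x - 3) should appear with exponent 4 ...
print(sp.factor_list(f1))
# ... equivalently, f1 and its first three derivatives vanish at x = 3.
print([sp.diff(f1, x, k).subs(x, 3) for k in range(5)])
```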
Example 2 (Isentropic supersonic flow).
Next, the problem of isentropic supersonic flow around a sharp expansion corner is chosen (see [2]). The relation between the Mach number before the corner (say M_1) and after the corner (say M_2) is given by
δ = b^{1/2} [tan^{−1}((M_2^2 − 1)/b)^{1/2} − tan^{−1}((M_1^2 − 1)/b)^{1/2}] − [tan^{−1}(M_2^2 − 1)^{1/2} − tan^{−1}(M_1^2 − 1)^{1/2}],
where b = (γ + 1)/(γ − 1) and γ is the specific heat ratio of the gas.
For a specific case, the above equation is solved for M_2, given that M_1 = 1.5, γ = 1.4 and δ = 10°. Then, we have that
tan^{−1}(√5/2) − tan^{−1}(√(x^2 − 1)) + √6 [tan^{−1}(√((x^2 − 1)/6)) − tan^{−1}((1/2)√(5/6))] − 11/63 = 0,
where x = M 2 .
Considering this particular case seven times over with the same parameter values, i.e., taking the seventh power of the left-hand side, we obtain the nonlinear function
f_2(x) = [tan^{−1}(√5/2) − tan^{−1}(√(x^2 − 1)) + √6 (tan^{−1}(√((x^2 − 1)/6)) − tan^{−1}((1/2)√(5/6))) − 11/63]^7.
This function has a root α = 1.8411027704… of multiplicity 7; the initial approximation is taken as x_0 = 1.5. The computed numerical results are shown in Table 2.
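In code, raising the bracket to the seventh power is all that is needed to manufacture the multiplicity (a sketch in plain double precision; the paper's experiments use multiple-precision arithmetic).

```python
# Definition of f2 (sketch): the bracket of the previous display raised to the
# seventh power, which is what produces the root of multiplicity 7.
from math import atan, sqrt

def f2(x):
    bracket = (atan(sqrt(5) / 2) - atan(sqrt(x**2 - 1))
               + sqrt(6) * (atan(sqrt((x**2 - 1) / 6)) - atan(0.5 * sqrt(5 / 6)))
               - 11 / 63)
    return bracket**7

print(f2(1.8411027704))   # expected to be extremely close to zero
```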
Example 3 (Beam designing model).
We consider the problem of beam positioning (see [4]), where a beam of length r units leans against the edge of a cubical box with sides of 1 unit each, such that one end of the beam touches the wall and the other end touches the floor, as depicted in Figure 5.
The problem is: what is the distance along the floor from the base of the wall to the bottom of the beam? Suppose that y is the distance along the beam from the floor to the edge of the box and that x is the distance from the bottom of the box to the bottom of the beam. For a given r, we obtain the equation
f_3(x) = x^4 + 4x^3 − 24x^2 + 16x + 16 = 0.
One of the roots of this equation is the double root x = 2. We select the initial guess x_0 = 3 to find this root. Numerical results for the various methods are shown in Table 3.
Example 4 (van der Waals equation).
Consider the van der Waals equation
(P + a_1 n^2/V^2)(V − n a_2) = nRT,
which describes the behavior of a real gas by introducing into the ideal gas equation two parameters, a_1 and a_2, that are specific for each gas. To find the volume V in terms of the remaining parameters, one has to solve the equation
P V^3 − (n a_2 P + nRT) V^2 + a_1 n^2 V = a_1 a_2 n^3.
Given a set of values of a_1 and a_2 for a particular gas, one can find values of n, P and T such that this equation has three real roots. Using a particular set of values (see [3]), we have the equation
f_4(x) = x^3 − 5.22x^2 + 9.0825x − 5.2675 = 0,
where x = V. This equation has a multiple root α = 1.75 of multiplicity 2. The initial guess chosen to obtain this root is x_0 = 2. Numerical results are shown in Table 4.
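A quick double-precision check of the stated double root (a sketch using numpy on the coefficients above):

```python
# Double-precision check (sketch) that the cubic f4 has x = 1.75 as a double
# root, using numpy on the coefficients displayed above.
import numpy as np

coeffs = [1.0, -5.22, 9.0825, -5.2675]
print(np.roots(coeffs))          # two roots near 1.75 and a simple root near 1.72
print(np.polyval(coeffs, 1.75))  # expected to be ~0
```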
Example 5.
Consider now the standard nonlinear test function (see [23])
f 5 ( x ) = 9 2 x 2 x 4 + cos 2 x 5 x x 4 sin 2 x .
The root α = 1.29173329244360 of multiplicity 2 is computed with initial guess x 0 = 1.5 . Numerical results are displayed in Table 5.
Example 6.
Let us assume another nonlinear test function given as (see [22])
f 6 ( x ) = x 3 x 3 cos π x 6 + 1 x 2 + 1 11 5 + 4 3 ( x 2 ) 4 .
The root α = 2 of this function is of multiplicity 5. This root is calculated assuming the initial approximation x 0 = 1.5 . Results so obtained are shown in Table 6.
Example 7.
Lastly, consider the test function
f_7(x) = (x^2 + 1) (2x e^{x^2+1} + x^3 − x) cosh^2(πx/2).
The function has a multiple root α = i of multiplicity 4. We choose the initial approximation x_0 = 1.25i to obtain this root. The results computed by the various methods are shown in Table 7.
From the numerical values of the errors we observe increasing accuracy in the successive approximations as the iteration proceeds, which points to the stable nature of the methods. Like the existing methods, the new methods also show consistent convergence behavior. Once the stopping criterion |x_{n+1} − x_n| + |f(x_n)| < 10^{−350} has been satisfied, we display the value '0' for |x_{n+1} − x_n|. From the computational order of convergence shown in the penultimate column of each table, we verify the theoretical seventh order of convergence. The entries of the last column in each table show that the new methods consume less CPU time during the execution of the program than the existing methods, which confirms their computationally more efficient nature. Among the new methods, the better performers in terms of accuracy are NM-I(c) and NM-II(c), since they produce approximations of the root with the smallest errors. This ranking does not always carry over to execution time, where a method that is better in one situation may be worse in another. The main purpose of applying the new methods to different types of nonlinear equations is to illustrate their better accuracy and better computational efficiency compared with existing techniques. Similar numerical experimentation, performed on a variety of problems of different kinds, confirmed the above remarks to a large extent.

5. Conclusions

In the present work, we have constructed a class of seventh order methods for solving nonlinear equations with multiple roots. A convergence analysis has been carried out, which proves the seventh order of convergence under standard assumptions on the function whose zeros we seek. Some particular cases of the family are presented. The stability of these cases is tested by visual display of the basins of attraction obtained when the methods are applied to different polynomials. The methods are also implemented to solve nonlinear equations, including some arising in practical problems. Their performance is compared with existing methods in numerical tests. The superiority of the proposed methods over known techniques is supported by the numerical results, including the CPU time elapsed in the execution of the programs.

Author Contributions

Methodology, J.R.S.; Formal analysis, J.R.S.; Investigation, D.K.; Data Curation, D.K.; Conceptualization, C.C.; Writing—review & editing, C.C.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
  2. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992.
  3. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999.
  4. Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012.
  5. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
  6. Chun, C.; Neta, B. A third order modification of Newton’s method for multiple roots. Appl. Math. Comput. 2009, 211, 474–479.
  7. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
  8. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
  9. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
  10. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
  11. Liu, B.; Zhou, X. A new family of fourth-order methods for multiple roots of nonlinear equations. Nonlinear Anal. Model. Control 2013, 18, 143–152.
  12. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
  13. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
  14. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
  15. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
  16. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
  17. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
  18. Soleymani, F.; Babajee, D.K.R. Computing multiple zeros using a class of quartically convergent methods. Alex. Eng. J. 2013, 52, 531–541.
  19. Zhou, X.; Chen, X.; Song, Y. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comput. 2013, 219, 6030–6038.
  20. Thukral, R. A new family of fourth-order iterative methods for solving nonlinear equations with multiple roots. J. Numer. Math. Stoch. 2014, 6, 37–44.
  21. Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880–892.
  22. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
  23. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified Newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140.
  24. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966.
  25. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16.
  26. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
  27. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
  28. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Figure 1. Basins of attraction of the methods NM-I(a–c) and NM-II(a–c) for the polynomial p_1(z).
Figure 2. Basins of attraction of the methods NM-I(a–c) and NM-II(a–c) for the polynomial p_2(z).
Figure 3. Basins of attraction of the methods NM-I(a–c) and NM-II(a–c) for the polynomial p_3(z).
Figure 4. Basins of attraction of the methods NM-I(a–c) and NM-II(a–c) for the polynomial p_4(z).
Figure 5. Beam positioning problem.
Table 1. Comparison of the numerical results for Example 1.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 1.06 × 10^{−9} | 3.86 × 10^{−56} | 9.03 × 10^{−335} | 6.0000 | 0.1567
GKN-I(b) | 4 | 1.06 × 10^{−9} | 3.91 × 10^{−56} | 9.85 × 10^{−335} | 6.0000 | 0.1583
GKN-I(c) | 4 | 1.06 × 10^{−9} | 4.34 × 10^{−56} | 2.02 × 10^{−334} | 6.0000 | 0.1525
GKN-I(d) | 4 | 1.07 × 10^{−9} | 1.17 × 10^{−55} | 2.02 × 10^{−331} | 6.0000 | 0.1600
GKN-II(a) | 4 | 1.19 × 10^{−6} | 5.39 × 10^{−38} | 4.56 × 10^{−226} | 5.9999 | 0.1835
GKN-II(b) | 4 | 1.20 × 10^{−6} | 1.61 × 10^{−37} | 9.49 × 10^{−223} | 5.9999 | 0.1640
GKN-II(c) | 4 | 1.20 × 10^{−6} | 1.12 × 10^{−37} | 7.51 × 10^{−224} | 5.9999 | 0.1718
GKN-II(d) | 4 | 1.20 × 10^{−6} | 1.87 × 10^{−37} | 2.76 × 10^{−222} | 5.9999 | 0.1680
NM-I(a) | 3 | 9.83 × 10^{−8} | 4.34 × 10^{−51} | 0 | 7.0000 | 0.1562
NM-I(b) | 3 | 1.16 × 10^{−9} | 1.38 × 10^{−64} | 0 | 7.0000 | 0.1170
NM-I(c) | 3 | 6.30 × 10^{−10} | 7.75 × 10^{−67} | 0 | 7.0000 | 0.1485
NM-II(a) | 3 | 9.83 × 10^{−8} | 4.41 × 10^{−51} | 0 | 7.0000 | 0.1367
NM-II(b) | 3 | 1.16 × 10^{−9} | 1.40 × 10^{−64} | 0 | 7.0000 | 0.1562
NM-II(c) | 3 | 6.30 × 10^{−10} | 8.07 × 10^{−67} | 0 | 7.0000 | 0.1405
Table 2. Comparison of the numerical results for Example 2.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 2.17 × 10^{−8} | 4.61 × 10^{−25} | 1.01 × 10^{−152} | 6.0000 | 1.4218
GKN-I(b) | 4 | 2.17 × 10^{−8} | 4.60 × 10^{−25} | 2.27 × 10^{−151} | 6.0000 | 1.4923
GKN-I(c) | 4 | 2.11 × 10^{−8} | 4.21 × 10^{−25} | 1.03 × 10^{−150} | 6.0000 | 1.4532
GKN-I(d) | 4 | 1.77 × 10^{−8} | 2.48 × 10^{−25} | 2.68 × 10^{−151} | 6.0000 | 1.4960
GKN-II(a) | 4 | 4.83 × 10^{−7} | 1.36 × 10^{−41} | 6.84 × 10^{−249} | 6.0000 | 1.3867
GKN-II(b) | 4 | 4.90 × 10^{−7} | 2.89 × 10^{−41} | 1.21 × 10^{−246} | 6.0000 | 1.3790
GKN-II(c) | 4 | 4.88 × 10^{−7} | 2.22 × 10^{−41} | 1.98 × 10^{−247} | 6.0000 | 1.4110
GKN-II(d) | 4 | 4.89 × 10^{−7} | 3.22 × 10^{−41} | 2.62 × 10^{−246} | 6.0000 | 1.3982
NM-I(a) | 3 | 1.65 × 10^{−8} | 2.82 × 10^{−58} | 0 | 7.0000 | 1.1367
NM-I(b) | 3 | 7.69 × 10^{−9} | 1.35 × 10^{−60} | 0 | 7.0000 | 1.1915
NM-I(c) | 3 | 3.65 × 10^{−9} | 3.19 × 10^{−63} | 0 | 7.0000 | 1.1407
NM-II(a) | 3 | 1.65 × 10^{−9} | 2.86 × 10^{−58} | 0 | 7.0000 | 1.1290
NM-II(b) | 3 | 7.69 × 10^{−9} | 1.36 × 10^{−60} | 0 | 7.0000 | 1.2540
NM-II(c) | 3 | 3.65 × 10^{−9} | 3.27 × 10^{−63} | 0 | 7.0000 | 1.1445
Table 3. Comparison of the numerical results for Example 3.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 1.29 × 10^{−3} | 5.18 × 10^{−20} | 2.19 × 10^{−118} | 6.0000 | 0.0313
GKN-I(b) | 4 | 1.48 × 10^{−3} | 1.63 × 10^{−19} | 2.19 × 10^{−115} | 5.9998 | 0.0390
GKN-I(c) | 4 | 1.45 × 10^{−3} | 1.76 × 10^{−19} | 5.56 × 10^{−115} | 5.9997 | 0.0352
GKN-I(d) | 4 | 1.97 × 10^{−3} | 1.80 × 10^{−18} | 1.07 × 10^{−108} | 5.9996 | 0.0428
GKN-II(a) | 4 | 5.67 × 10^{−4} | 1.20 × 10^{−22} | 1.06 × 10^{−134} | 5.9999 | 0.0314
GKN-II(b) | 4 | 2.39 × 10^{−3} | 5.78 × 10^{−18} | 1.16 × 10^{−105} | 5.9996 | 0.0396
GKN-II(c) | 4 | 1.70 × 10^{−3} | 4.26 × 10^{−19} | 1.08 × 10^{−112} | 5.9997 | 0.0392
GKN-II(d) | 4 | 1.55 × 10^{−2} | 5.18 × 10^{−13} | 7.23 × 10^{−76} | 6.0000 | 0.0354
NM-I(a) | 4 | 1.13 × 10^{−4} | 6.52 × 10^{−23} | 1.41 × 10^{−157} | 6.9998 | 0.0275
NM-I(b) | 4 | 9.26 × 10^{−4} | 1.63 × 10^{−23} | 8.75 × 10^{−162} | 6.9998 | 0.0313
NM-I(c) | 4 | 4.64 × 10^{−4} | 4.44 × 10^{−26} | 3.23 × 10^{−180} | 6.9998 | 0.0275
NM-II(a) | 4 | 1.13 × 10^{−4} | 6.83 × 10^{−23} | 2.00 × 10^{−157} | 6.9998 | 0.0316
NM-II(b) | 4 | 9.33 × 10^{−4} | 1.77 × 10^{−23} | 1.58 × 10^{−161} | 6.9998 | 0.0275
NM-II(c) | 4 | 4.78 × 10^{−4} | 5.86 × 10^{−26} | 2.43 × 10^{−179} | 6.9998 | 0.0354
Table 4. Comparison of the numerical results for Example 4.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 5 | 1.90 × 10^{−5} | 9.03 × 10^{−22} | 1.05 × 10^{−119} | 6.0000 | 0.0471
GKN-I(b) | 5 | 2.31 × 10^{−5} | 3.69 × 10^{−21} | 6.14 × 10^{−116} | 6.0000 | 0.0472
GKN-I(c) | 5 | 2.18 × 10^{−5} | 3.18 × 10^{−21} | 3.14 × 10^{−116} | 6.0000 | 0.0465
GKN-I(d) | 5 | 3.58 × 10^{−5} | 1.01 × 10^{−19} | 5.02 × 10^{−107} | 6.0000 | 0.0483
GKN-II(a) | 5 | 3.00 × 10^{−6} | 4.91 × 10^{−27} | 9.51 × 10^{−152} | 6.0000 | 0.0474
GKN-II(b) | 5 | 4.78 × 10^{−5} | 5.42 × 10^{−19} | 1.17 × 10^{−102} | 6.0000 | 0.0472
GKN-II(c) | 5 | 2.51 × 10^{−5} | 6.82 × 10^{−21} | 2.75 × 10^{−114} | 6.0000 | 0.0481
GKN-II(d) | 7 | 3.85 × 10^{−11} | 1.78 × 10^{−55} | 1.75 × 10^{−321} | 6.0000 | 0.0625
NM-I(a) | 5 | 1.06 × 10^{−5} | 4.09 × 10^{−26} | 5.33 × 10^{−169} | 7.0000 | 0.0368
NM-I(b) | 5 | 5.10 × 10^{−6} | 2.51 × 10^{−28} | 1.73 × 10^{−184} | 7.0000 | 0.0322
NM-I(c) | 5 | 1.15 × 10^{−6} | 2.55 × 10^{−33} | 6.75 × 10^{−220} | 7.0000 | 0.0327
NM-II(a) | 5 | 1.05 × 10^{−5} | 4.13 × 10^{−23} | 5.89 × 10^{−169} | 7.0000 | 0.0316
NM-II(b) | 5 | 5.16 × 10^{−6} | 2.76 × 10^{−23} | 3.48 × 10^{−184} | 7.0000 | 0.0323
NM-II(c) | 5 | 1.20 × 10^{−6} | 3.65 × 10^{−26} | 9.09 × 10^{−219} | 7.0000 | 0.0314
Table 5. Comparison of the numerical results for Example 5.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 1.12 × 10^{−4} | 5.78 × 10^{−24} | 1.10 × 10^{−139} | 6.0000 | 0.2772
GKN-I(b) | 4 | 1.55 × 10^{−4} | 7.30 × 10^{−23} | 8.07 × 10^{−133} | 6.0000 | 0.2462
GKN-I(c) | 4 | 1.39 × 10^{−4} | 4.40 × 10^{−23} | 4.43 × 10^{−134} | 6.0000 | 0.2497
GKN-I(d) | 4 | 2.32 × 10^{−4} | 1.95 × 10^{−21} | 6.85 × 10^{−124} | 6.0000 | 0.2812
GKN-II(a) | 4 | 3.36 × 10^{−5} | 8.72 × 10^{−28} | 2.66 × 10^{−163} | 6.0000 | 0.3397
GKN-II(b) | 4 | 3.39 × 10^{−5} | 2.19 × 10^{−20} | 1.57 × 10^{−117} | 6.0000 | 0.2695
GKN-II(c) | 4 | 2.16 × 10^{−5} | 7.70 × 10^{−22} | 1.58 × 10^{−126} | 6.0000 | 0.2460
GKN-II(d) | 4 | 3.51 × 10^{−3} | 3.25 × 10^{−14} | 2.03 × 10^{−80} | 6.0000 | 0.2342
NM-I(a) | 4 | 1.52 × 10^{−4} | 8.45 × 10^{−26} | 1.41 × 10^{−174} | 6.9999 | 0.1445
NM-I(b) | 4 | 1.25 × 10^{−4} | 2.22 × 10^{−26} | 1.23 × 10^{−178} | 6.9999 | 0.1522
NM-I(c) | 4 | 5.26 × 10^{−4} | 1.58 × 10^{−29} | 3.54 × 10^{−201} | 6.9999 | 0.1640
NM-II(a) | 4 | 1.52 × 10^{−4} | 9.05 × 10^{−26} | 2.36 × 10^{−174} | 6.9999 | 0.1482
NM-II(b) | 4 | 1.27 × 10^{−4} | 2.49 × 10^{−26} | 2.84 × 10^{−178} | 6.9999 | 0.1492
NM-II(c) | 4 | 5.54 × 10^{−4} | 2.51 × 10^{−29} | 9.82 × 10^{−200} | 6.9999 | 0.1642
Table 6. Comparison of the numerical results for Example 6.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 1.20 × 10^{−5} | 6.82 × 10^{−31} | 2.31 × 10^{−182} | 6.0000 | 0.6797
GKN-I(b) | 4 | 1.20 × 10^{−5} | 6.86 × 10^{−31} | 2.40 × 10^{−182} | 6.0000 | 0.6680
GKN-I(c) | 4 | 1.21 × 10^{−5} | 7.72 × 10^{−31} | 5.18 × 10^{−182} | 6.0000 | 0.6992
GKN-I(d) | 4 | 1.58 × 10^{−5} | 1.00 × 10^{−29} | 6.51 × 10^{−175} | 6.0000 | 0.6720
GKN-II(a) | 4 | 3.17 × 10^{−5} | 1.64 × 10^{−28} | 3.21 × 10^{−168} | 6.0000 | 0.8047
GKN-II(b) | 4 | 3.50 × 10^{−5} | 6.90 × 10^{−28} | 4.05 × 10^{−164} | 6.0000 | 0.8280
GKN-II(c) | 4 | 3.41 × 10^{−5} | 4.42 × 10^{−28} | 2.09 × 10^{−165} | 6.0000 | 0.7967
GKN-II(d) | 4 | 3.54 × 10^{−5} | 8.45 × 10^{−28} | 1.56 × 10^{−163} | 6.0000 | 0.8242
NM-I(a) | 4 | 5.14 × 10^{−6} | 4.35 × 10^{−38} | 1.35 × 10^{−262} | 7.0000 | 0.5625
NM-I(b) | 4 | 3.45 × 10^{−6} | 2.68 × 10^{−39} | 4.53 × 10^{−271} | 7.0000 | 0.5782
NM-I(c) | 4 | 2.05 × 10^{−6} | 2.95 × 10^{−41} | 3.76 × 10^{−285} | 7.0000 | 0.5277
NM-II(a) | 4 | 5.14 × 10^{−6} | 4.42 × 10^{−38} | 1.53 × 10^{−262} | 7.0000 | 0.4805
NM-II(b) | 4 | 3.45 × 10^{−6} | 2.73 × 10^{−39} | 5.24 × 10^{−271} | 7.0000 | 0.4725
NM-II(c) | 4 | 2.05 × 10^{−6} | 3.07 × 10^{−41} | 5.17 × 10^{−285} | 7.0000 | 0.4610
Table 7. Comparison of the numerical results for Example 7.
Methods | n | |e_{n−3}| | |e_{n−2}| | |e_{n−1}| | COC | CPU-t (s)
GKN-I(a) | 4 | 2.53 × 10^{−6} | 3.79 × 10^{−35} | 4.32 × 10^{−208} | 6.0000 | 1.1564
GKN-I(b) | 4 | 2.53 × 10^{−6} | 3.92 × 10^{−35} | 5.33 × 10^{−208} | 6.0000 | 1.1577
GKN-I(c) | 4 | 2.68 × 10^{−6} | 6.07 × 10^{−35} | 8.23 × 10^{−207} | 6.0000 | 1.1415
GKN-I(d) | 4 | 4.80 × 10^{−6} | 5.34 × 10^{−33} | 1.01 × 10^{−194} | 6.0000 | 1.0473
GKN-II(a) | 4 | 5.04 × 10^{−6} | 1.82 × 10^{−33} | 4.04 × 10^{−198} | 6.0000 | 1.0212
GKN-II(b) | 4 | 7.15 × 10^{−6} | 4.23 × 10^{−32} | 1.81 × 10^{−189} | 6.0000 | 1.1215
GKN-II(c) | 4 | 6.39 × 10^{−6} | 1.51 × 10^{−32} | 2.64 × 10^{−192} | 6.0000 | 1.2035
GKN-II(d) | 4 | 8.22 × 10^{−6} | 1.41 × 10^{−31} | 8.09 × 10^{−187} | 6.0000 | 1.1416
NM-I(a) | 4 | 1.08 × 10^{−6} | 6.96 × 10^{−43} | 3.13 × 10^{−296} | 7.0000 | 0.5787
NM-I(b) | 4 | 9.01 × 10^{−7} | 1.91 × 10^{−43} | 3.71 × 10^{−300} | 7.0000 | 0.5632
NM-I(c) | 4 | 4.64 × 10^{−7} | 7.44 × 10^{−46} | 2.01 × 10^{−317} | 7.0000 | 0.5586
NM-II(a) | 4 | 1.09 × 10^{−6} | 7.21 × 10^{−43} | 4.10 × 10^{−296} | 7.0000 | 0.5478
NM-II(b) | 4 | 9.04 × 10^{−7} | 2.00 × 10^{−43} | 5.10 × 10^{−300} | 7.0000 | 0.5946
NM-II(c) | 4 | 4.68 × 10^{−7} | 8.21 × 10^{−46} | 4.20 × 10^{−317} | 7.0000 | 0.5644
