Article

Beyond Newton: A New Root-Finding Fixed-Point Iteration for Nonlinear Equations

Ankush Aggarwal and Sanjay Pant
1 Glasgow Computational Engineering Centre, James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
2 Zienkiewicz Centre for Computational Engineering, College of Engineering, Swansea University, Swansea SA1 8EN, UK
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(4), 78; https://doi.org/10.3390/a13040078
Submission received: 3 March 2020 / Revised: 26 March 2020 / Accepted: 27 March 2020 / Published: 29 March 2020

Abstract:
Finding roots of equations is at the heart of most computational science. A well-known and widely used iterative algorithm is Newton’s method. However, its convergence depends heavily on the initial guess, with poor choices often leading to slow convergence or even divergence. In this short note, we seek to enlarge the basin of attraction of the classical Newton’s method. The key idea is to develop a relatively simple multiplicative transform of the original equations, which leads to a reduction in nonlinearity, thereby alleviating the limitation of Newton’s method. Based on this idea, we derive a new class of iterative methods and rediscover Halley’s method as the limit case. We present the application of these methods to several mathematical functions (real, complex, and vector equations). Across all examples, our numerical experiments suggest that the new methods converge for a significantly wider range of initial guesses. For scalar equations, the increase in computational cost per iteration is minimal. For vector functions, more extensive analysis is needed to compare the increase in cost per iteration and the improvement in convergence of specific problems.

1. Introduction

The Newton–Raphson fixed-point iteration, commonly referred to simply as Newton's method, has been the gold standard for numerically solving nonlinear equations for several centuries. To set the symbols and nomenclature, we start by defining a generic problem:
Find $x = x^*$ such that $r(x^*) = 0$ (1)
for a given differentiable function $r : \mathbb{K} \to \mathbb{K}$ ($\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$). The condition of differentiability is the same as that for Newton's method. To solve this problem, fixed-point iterations are commonly used: starting with a guess to the solution $x_{n=0} = x_0$, the guess is iteratively updated using $x_{n+1} = \phi(x_n)$, where $\phi(x)$ depends on $r(x)$ and the chosen numerical scheme. For the numerical scheme to converge to the true solution, it is required that $x_n \to x^*$ as $n \to \infty$, and, in practice, the fastest possible convergence is desired. If we expand $r(x)$ in a finite Taylor series up to second order about the point $x_n$ and evaluate it at $x = x^*$, we get
$r(x^*) = r(x_n) + \left.\dfrac{dr}{dx}\right|_{x_n} (x^* - x_n) + \dfrac{1}{2} \left.\dfrac{d^2 r}{dx^2}\right|_{\zeta} (x^* - x_n)^2 = 0, \qquad (2)$
for some $\zeta \in [x_n, x^*]$. The last term is called the remainder and can be written in other forms [1]. Since $\zeta$ is unknown, in the standard Newton's method we neglect the second-order term and approximate $x^*$ with an updated guess to the solution, $x_{n+1}$. Thus, we obtain Newton's fixed-point iteration (using the notation $r' = dr/dx$):
$x_{n+1} = x_n + \Delta x^N = x_n - \dfrac{r(x_n)}{r'(x_n)}. \qquad (3)$
Using (2) and (3), one can rearrange to get the evolution of the error $e_n = |x^* - x_n|$:
$e_{n+1} = N(\zeta, x_n)\, e_n^2, \qquad (4)$
where $N(\zeta, x_n) := \dfrac{r''(\zeta)}{2\, r'(x_n)}$ is the measure of nonlinearity (This nonlinearity measure $N$ is similar to the $\gamma$ number in Smale's $\alpha$-theory [2]). The above step proves (at least) quadratic convergence of Newton's method close to the solution when $r(x)$ has only simple roots. This quadratic convergence has led to a wide adoption of Newton's method for solving problems in all scientific disciplines.
Despite the aforementioned attractive convergence of Newton's method, it can be seen from (4) that to achieve quadratic convergence, the current guess $x_n$ must be “sufficiently close” to the solution. When the initial guess is not close to the solution, the convergence can be slower than quadratic, or, worse, the iterations may diverge if $N(\zeta, x_n)\, e_n > 1$. As an example, if we consider $r(x) = e^x - 500$ and solve it using (3) with $x_0 = 0$, the iterations diverge until we reach numerical overflow (Figure 1a, solid red line). Instead, if we start with $x_0 = 3$, the error decreases sub-quadratically before achieving quadratic convergence closer to the solution (Figure 1a, dashed red line).
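To make this behaviour concrete, the following minimal Python sketch (our own illustration; the paper does not provide code) applies iteration (3) to $r(x) = e^x - 500$ from the two starting guesses discussed above. The tolerance and iteration limit are arbitrary choices.

```python
# Minimal sketch of Newton's iteration (3) applied to r(x) = exp(x) - 500.
import math

def newton(r, dr, x0, tol=1e-12, max_iter=100):
    x = x0
    for n in range(1, max_iter + 1):
        step = -r(x) / dr(x)          # Delta x^N = -r(x_n) / r'(x_n)
        x += step
        if abs(step) < tol:
            return x, n
    raise RuntimeError("did not converge")

r = lambda x: math.exp(x) - 500.0
dr = lambda x: math.exp(x)

print(newton(r, dr, 3.0))             # converges to ln(500) ~ 6.2146
try:
    newton(r, dr, 0.0)
except OverflowError:
    print("x0 = 0: exp(x) overflows, mirroring the divergence in Figure 1a")
```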

2. Methods and Results

The basin of attraction of a root $x^*$ is defined as the set of initial guesses from which a given fixed-point iteration converges to $x^*$ (this can be seen as the domain of convergence for initial guesses). Naturally, it is desirable to have a large basin of attraction for the root so that convergence is achieved for a wide range of initial guesses. For convergence we need $e_{n+1} < e_n$, i.e., $N(\zeta, x_n)\, e_n < 1$ using (4). Thus, the size of the basin depends on the nonlinearity $N(\zeta, x_n)$.
We pose the following question: can we pre-multiply the original equation by a function so that we obtain a larger (or, if possible, infinite) basin of attraction? That is, instead of solving (1), can we solve
Find $x = x^*$ such that $P(x)\, r(x) = 0$, (5)
still using Newton's method? The idea is to choose a $P(x)$ that decreases the nonlinearity $N$ and, hence, gives a larger basin of attraction, while retaining at least quadratic convergence close to the root. For (5), the measure of nonlinearity is $N(\zeta, x_n) = \left.\left[P(x)\, r(x)\right]''\right|_{x=\zeta} \big/ \left( 2 \left.\left[P(x)\, r(x)\right]'\right|_{x=x_n} \right)$. To minimize $N$, we set $N = 0$, i.e., we require $\left[P(x)\, r(x)\right]'' = 0$ so that $P(x)\, r(x)$ is linear in $x$, which gives
$P(x) = \dfrac{\alpha\, (x - c)}{r(x)}, \qquad (6)$
for arbitrary integration constants $\alpha$ and $c$. We note that if $P(x)$ is not a function of $x$, we do not change the nonlinearity of the problem, and this case would fall under the purview of the highly developed field of linear preconditioning. Thus, without loss of generality, we set $\alpha = 1$. However, multiplying the above form of $P(x)$ by $r(x)$ has two problems: it (1) eliminates the original root $x = x^*$ and (2) introduces a new root $x = c$. To solve these problems, we add another constant $\kappa \neq 0$ to the denominator of $P(x)$ in (6), i.e., $P(x) = \dfrac{x - c}{r(x) + \kappa}$. This avoids the elimination of the $x^*$ root. In addition, we eliminate the undesirable new root $x = c$ by choosing $\kappa$ such that $r(c) + \kappa = 0$, i.e., $\kappa = -r(c)$ and $P(x) = \dfrac{x - c}{r(x) - r(c)}$. Thus, instead of applying Newton's method to $r(x) = 0$, if we apply Newton's method to our new equation
$\dfrac{(x - c)\, r(x)}{r(x) - r(c)} = 0, \qquad (7)$
we get a new fixed-point iteration
$x_{n+1} = x_n + \Delta x^{EN} = x_n - \dfrac{(x_n - c)\, r(x_n)}{r(x_n) - \dfrac{(x_n - c)\, r'(x_n)\, r(c)}{r(x_n) - r(c)}}. \qquad (8)$
The superscript EN denotes “Extended Newton” method.
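A minimal Python sketch of the update (8) is given below; the function and variable names are ours and the stopping criterion is an arbitrary choice. Note that $r(c)$ is evaluated only once, before the iterations begin.

```python
# Sketch of the Extended Newton update (8); requires c != x0 (see Section 2.2).
import math

def extended_newton(r, dr, x0, c, tol=1e-12, max_iter=100):
    rc = r(c)                           # r(c) is computed once
    x = x0
    for n in range(1, max_iter + 1):
        rx = r(x)
        denom = rx - (x - c) * dr(x) * rc / (rx - rc)
        step = -(x - c) * rx / denom    # Delta x^EN from Equation (8)
        x += step
        if abs(step) < tol:
            return x, n
    raise RuntimeError("did not converge")

r = lambda x: math.exp(x) - 500.0
dr = lambda x: math.exp(x)
print(extended_newton(r, dr, x0=0.0, c=1.0))   # converges even from x0 = 0
```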

2.1. Choice of c

For the $r(x) = e^x - 500$ example, the new equation to be solved is $(x - c)(e^x - 500)/(e^x - e^c) = 0$. Surprisingly, we find that our Extended Newton method (8) converges starting from $x_0 = 0$ (Figure 1a, gray lines), irrespective of the choice of $c$ in a wide range $c \in (-50, 50)$. Given the simplicity of the formulation, this is a remarkable result showing the potential of the EN method in comparison to the classical Newton's method.
The choice of $c$ makes the Extended Newton method a family of fixed-point iterations. We note that if $c = x^*$ such that $r(c) = 0$, we have $x_{n+1} = x_n + \Delta x^{EN} = x_n + (c - x_n) = c = x^*$, i.e., we will find the solution in a single iteration irrespective of the starting point $x_0$ and the function $r$. This is not surprising; if one could choose $c = x^*$, the solution would already be known, making the basin of attraction of the EN method infinite.
We solve the problem $e^x - H = 0$ for varying values of $c$, $x_0$, and $H$, and find significantly improved convergence using the proposed fixed-point iterations (Figure 1b). To verify the generality of the new scheme, we solve several different nonlinear equations and provide the results in the Supporting Information (SI). We find that, irrespective of the choice of $c$, the performance of EN is always better than that of the standard Newton's method. Therefore, we propose that the choice of $c$ can be seen as another initial guess that the user needs to provide. We find that even if we choose $c$ close to the starting guess $x_0$, the convergence is greatly improved. Therefore, when no prior information on the choice of $c$ is available, we recommend choosing $c = x_0 + \epsilon$ for a small random $\epsilon$. This random perturbation works well even in the case of multiple unknowns (see Section 2.3) and is proposed primarily for convenience, as the provision of a separate guess for $c$ is eliminated.
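For instance, using the extended_newton sketch above, a caller with no prior information might simply perturb the initial guess; the perturbation size below is an arbitrary choice, mirroring the range used later in Figure 2c.

```python
# c = x0 + eps with a small random eps; reuses extended_newton, r, dr from the
# sketch above.
import random

x0 = 0.0
c = x0 + 1e-4 * (random.random() - 0.5)
print(extended_newton(r, dr, x0, c))
```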

2.2. Limiting Case of $c \to x_n$

Next, we note that we must choose $c \neq x_n$. Choosing $c = x_n$ makes $x_n$ a stationary point of the fixed-point iteration (8), and the iteration gets “stuck” at this undesired point. This is because, even though $\lim_{x \to c} P(x)\, r(x) \neq 0$, the value $P(c)\, r(c)$ is indeterminate (of the form $0/0$). Practically, this does not pose a problem as long as $c \neq x_0$.
Nevertheless, we look at the limiting case $c \to x_n$. Rewriting (8) and taking this limit, we rediscover the classical Halley's fixed-point iteration [3] (The same limit applied to our modified Equation (7) gives
$\lim_{c \to x} \dfrac{(x - c)\, r(x)}{r(x) - r(c)} = \dfrac{r(x)}{r'(x)} = 0.$
If we apply Newton’s method to the above equation, we get
$\Delta x = \dfrac{-r(x_n)/r'(x_n)}{1 - r(x_n)\, r''(x_n)/r'(x_n)^2},$
which is similar to $\Delta x^H$ except for a factor of 2 in one term. This difference is likely due to the order in which the Newton iteration (which discards higher-order terms) and the limit are taken. Empirically, we have verified that (9) performs better than the above.)
$\lim_{c \to x_n} \Delta x^{EN} = \dfrac{-r(x_n)/r'(x_n)}{1 - r(x_n)\, r''(x_n)/\left(2\, r'(x_n)^2\right)} \equiv \Delta x^H. \qquad (9)$
The advantage of this limit case is that no arbitrary constant is involved; however, we have to pay the price of calculating the second derivative of $r(x)$.
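For completeness, a sketch of the limit-case update (9), i.e., Halley's method, in the same illustrative style as the snippets above:

```python
# Sketch of Halley's update (9); d2r is the second derivative of r.
import math

def halley(r, dr, d2r, x0, tol=1e-12, max_iter=100):
    x = x0
    for n in range(1, max_iter + 1):
        rx, rpx = r(x), dr(x)
        step = (-rx / rpx) / (1.0 - rx * d2r(x) / (2.0 * rpx**2))
        x += step
        if abs(step) < tol:
            return x, n
    raise RuntimeError("did not converge")

# For r(x) = exp(x) - 500, both derivatives are exp(x).
f = lambda x: math.exp(x) - 500.0
print(halley(f, math.exp, math.exp, 0.0))   # converges from x0 = 0 (Figure 1a, blue line)
```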
If we expand $r(x_{n+1})$ in a Taylor series up to second order about $x_n$, with $\Delta x = x_{n+1} - x_n$, and set it to zero, we get
$r(x_n) + \left[ r'(x_n) + \dfrac{r''(x_n)}{2} \Delta x \right] \Delta x = 0. \qquad (10)$
If we substitute the $\Delta x$ within the brackets with the standard Newton step (3) and rearrange, we recover (9); i.e., (9) can be thought of as a correction to Newton's equation using the second derivative $r''(x)$. Thus, we can write
$r(x_n) + \left[ r'(x_n) + \dfrac{r''(x_n)}{2} \Delta x^N \right] \Delta x^H = 0. \qquad (11)$
This property is useful in extending Halley's method to vector functions.

2.3. Extension to Multiple Unknowns

Extending to $m$ unknowns $x_i$ and $m$ equations $r_i(x_1, \dots, x_m) = 0$ for $i = 1, \dots, m$, our problem is
Find $x_i = x_i^*$ such that $r_i(\mathbf{x}^*) = 0$, $\quad i = 1, \dots, m$, (12)
where all $m$ unknowns are collectively written as $\mathbf{x}$. Applying the standard Newton's method gives us a system of $m$ linear equations to be solved for the step $\Delta x_j^N$:
$\sum_j r_{i,j}(\mathbf{x})\, \Delta x_j^N = -r_i(\mathbf{x}), \qquad (13)$
where $r_{i,j} = \partial r_i / \partial x_j$, and we have dropped the subscript $n$ denoting the current iteration value of $\mathbf{x}$, as it is implied.
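In code, one Newton step for a system is a single linear solve with the Jacobian. A minimal NumPy sketch (our notation, not from the paper) that the later examples build on:

```python
# Newton iteration loop for a system: solve J(x) dx = -r(x), Equation (13).
import numpy as np

def newton_system(res, jac, x0, tol=1e-10, max_iter=100):
    x = np.array(x0, dtype=float)
    for n in range(1, max_iter + 1):
        dx = np.linalg.solve(jac(x), -res(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x, n
    raise RuntimeError("did not converge")
```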
There could be several different ways of extending the EN method (7) to vector functions. If it is possible to identify a subset of the equations that are the most nonlinear, one may use the Extended Newton method for that subset and the standard Newton's method for the rest. Here, we consider only the case where all equations are transformed. Thus, one straightforward extension is the following modified set of equations:
$\dfrac{(x_i - c_i)\, r_i(\mathbf{x})}{r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i})} = 0, \qquad (14)$
for chosen values of $c_i$ ($i = 1, \dots, m$), where $\mathbf{x}^{c_i}$ is the vector $\mathbf{x}$ with its $i$-th element replaced by $c_i$. Applying Newton's method, we get our new system of linear equations
$\sum_j \left[ \dfrac{\delta_{ij}\, r_i(\mathbf{x})}{r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i})} + \dfrac{(x_i - c_i)\left( r_{i,j}(\mathbf{x}^{c_i})\, r_i(\mathbf{x})\,(1 - \delta_{ij}) - r_{i,j}(\mathbf{x})\, r_i(\mathbf{x}^{c_i}) \right)}{\left( r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i}) \right)^2} \right] \Delta x_j^{EN} = -\dfrac{(x_i - c_i)\, r_i(\mathbf{x})}{r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i})}, \qquad (15)$
where $\delta_{ij} = 1$ when $i = j$ and 0 otherwise. For the single-unknown case, we had to calculate $r(c)$ only once, and there was no need to compute $r'(c)$. However, in the above extension to multiple unknowns, $r_i(\mathbf{x}^{c_i})$ needs to be evaluated at every iteration along with $r_{i,j}(\mathbf{x}^{c_i})$. Thus, the EN method for the case with multiple unknowns leads to a cumbersome set of calculations.
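A direct, unoptimized sketch of one EN step for a system, assembling (15) entry by entry (names and structure are ours; a practical implementation would avoid recomputing full residuals and Jacobians at each $\mathbf{x}^{c_i}$):

```python
# One Extended Newton step for a system, assembling the linear system (15).
# res(x) returns the residual vector; jac(x) returns the Jacobian matrix.
import numpy as np

def en_step(res, jac, x, c):
    m = len(x)
    r, J = res(x), jac(x)
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):
        xc = x.copy()
        xc[i] = c[i]                    # x^{c_i}: i-th entry replaced by c_i
        rci = res(xc)[i]                # r_i(x^{c_i})
        Jci = jac(xc)[i, :]             # r_{i,j}(x^{c_i})
        v = r[i] - rci
        for j in range(m):
            A[i, j] = ((i == j) * r[i] / v
                       + (x[i] - c[i]) * ((i != j) * r[i] * Jci[j]
                                          - J[i, j] * rci) / v**2)
        b[i] = -(x[i] - c[i]) * r[i] / v
    return np.linalg.solve(A, b)        # Delta x^EN
```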
On the other hand, the extension of the limit case of EN, Halley's method (11), to multiple unknowns is simpler:
$\sum_j \left[ r_{i,j} + \dfrac{1}{2} \sum_k r_{i,jk}\, \Delta x_k^N \right] \Delta x_j^H = -r_i, \qquad (16)$
where $\Delta x_j^N$ is calculated from (13) and $r_{i,jk} = \partial^2 r_i / (\partial x_j \partial x_k)$. Thus, (13) and (16) can be solved successively.
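A sketch of one vector Halley step, with the second-derivative tensor supplied as a callable (hess(x)[i, j, k] standing for $r_{i,jk}$; names are ours):

```python
# One vector Halley step: solve (13) for the Newton step, then the corrected
# system (16).
import numpy as np

def halley_step(res, jac, hess, x):
    r, J = res(x), jac(x)
    dx_newton = np.linalg.solve(J, -r)                   # Equation (13)
    J_corr = J + 0.5 * np.einsum('ijk,k->ij', hess(x), dx_newton)
    return np.linalg.solve(J_corr, -r)                   # Delta x^H from (16)
```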
To avoid solving two systems of $m$ equations and instead obtain only one system of $m$ equations, we multiply both sides of (16) by $r_{i,i}$. Using the approximation $r_{i,i}\, r_{i,jk} \approx r_{i,k}\, r_{i,ji}$ and using (13), we get the simplified expression
$\sum_j \left[ r_{i,i}\, r_{i,j} - \dfrac{1}{2}\, r_{i,ji}\, r_i \right] \Delta x_j^{QH} = -r_i\, r_{i,i}. \qquad (17)$
This equation gives us the step $\Delta x_j^{QH}$ by solving only one system of $m$ equations, and we call it the quasi-Halley (QH) method. In numerical methods, computing the Jacobian $r_{i,j}$ is common, but calculating another gradient $r_{i,jk}$ for (16) may not be desirable. However, we note that in (17), if an expression for $r_{i,j}$ is available, $r_{i,ji}$ can be computed using finite differences in only $O(m)$ operations, and these can be easily parallelized (unlike the iterations of Newton's method).
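A sketch of one quasi-Halley step, approximating the slice $r_{i,ji}$ by a forward finite difference on the $i$-th Jacobian row (the step size fd_eps is an arbitrary choice):

```python
# One quasi-Halley step, assembling and solving (17).
import numpy as np

def quasi_halley_step(res, jac, x, fd_eps=1e-6):
    m = len(x)
    r, J = res(x), jac(x)
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):
        xp = x.copy()
        xp[i] += fd_eps
        d_row = (jac(xp)[i, :] - J[i, :]) / fd_eps   # r_{i,ji} for j = 1..m
        A[i, :] = J[i, i] * J[i, :] - 0.5 * d_row * r[i]
        b[i] = -r[i] * J[i, i]
    return np.linalg.solve(A, b)                     # Delta x^QH
```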
As an extension of the exponential example from the scalar case, we solve the following two equations:
$\left( e^{x_1} - e^{x_2 - x_1},\; e^{x_2 - x_1} - 1 \right) = \left( 0,\; 500 \right). \qquad (18)$
Similar to the single-equation case, the standard Newton's method fails to converge for the initial guess $x_1 = x_2 = 0$, whereas the Extended Newton (for $c_2 < 0.5$) and Halley's methods converge in fewer than 10 iterations (Figure 2a).
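With (18) written as a residual (the right-hand side moved to the left), the following sketch reuses the halley_step function above; from $x_1 = x_2 = 0$, the first Newton step is (500, 1000), which overflows, while the Halley iterations remain bounded (cf. Figure 2a). The derivative expressions below follow from our reconstruction of (18).

```python
# Residual, Jacobian, and second-derivative tensor for the system (18),
# with the right-hand side (0, 500) moved to the left.
import numpy as np

def res(x):
    a, b = np.exp(x[0]), np.exp(x[1] - x[0])
    return np.array([a - b, b - 501.0])

def jac(x):
    a, b = np.exp(x[0]), np.exp(x[1] - x[0])
    return np.array([[a + b, -b],
                     [-b,     b]])

def hess(x):
    a, b = np.exp(x[0]), np.exp(x[1] - x[0])
    return np.array([[[a - b,  b], [ b, -b]],     # second derivatives of r_1
                     [[ b,    -b], [-b,  b]]])    # second derivatives of r_2

x = np.zeros(2)
for it in range(10):
    x = x + halley_step(res, jac, hess, x)        # Halley step, Equations (13) and (16)
print(x)                                          # approaches (ln 501, 2 ln 501)
```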
As another example, we consider Easom's function of two variables
$f(x, y) = -\cos(x)\, \cos(y)\, \exp\!\left( -x^2 - y^2 \right) \qquad (19)$
and set its gradient equal to zero:
$\left( \dfrac{\partial f}{\partial x},\; \dfrac{\partial f}{\partial y} \right) = (0,\, 0). \qquad (20)$
The solution of (20) lies at $(x, y) = (0, 0)$, and its basin of attraction is relatively small using the standard Newton's method. Our proposed methods increase that basin significantly (Figure 2b). Surprisingly, the improvement using the quasi-Halley method is even greater than that of Halley's method. Furthermore, Figure 2c demonstrates that choosing $\mathbf{c}$ as a perturbation of the initial guess $\mathbf{x}_0$ also provides a significant improvement over the classical Newton's method.
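The sketch below sets up the gradient of (19) as the residual, approximates its Jacobian (the Hessian of $f$) by central finite differences, and compares Newton with the quasi_halley_step sketch above from a single trial start; sweeping x0 over a grid of starting points reproduces the kind of basin comparison shown in Figure 2b. All names and parameter values are illustrative choices of ours.

```python
# Gradient of Easom's function (19) and a finite-difference Jacobian; compare
# Newton and quasi-Halley iterations for solving (20).
import numpy as np

def grad_easom(v):
    x, y = v
    e = np.exp(-x**2 - y**2)
    return np.array([np.cos(y) * e * (np.sin(x) + 2*x*np.cos(x)),
                     np.cos(x) * e * (np.sin(y) + 2*y*np.cos(y))])

def jac_fd(v, h=1e-6):
    J = np.zeros((2, 2))
    for k in range(2):
        vp, vm = v.copy(), v.copy()
        vp[k] += h
        vm[k] -= h
        J[:, k] = (grad_easom(vp) - grad_easom(vm)) / (2*h)
    return J

def run(step_fn, x0, tol=1e-8, max_iter=50):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        try:
            dx = step_fn(x)
        except np.linalg.LinAlgError:
            return None                    # singular Jacobian: give up
        if not np.all(np.isfinite(dx)):
            return None
        x = x + dx
        if np.linalg.norm(dx) < tol:
            return x
    return None                            # did not converge

newton_step = lambda x: np.linalg.solve(jac_fd(x), -grad_easom(x))
qh_step = lambda x: quasi_halley_step(grad_easom, jac_fd, x, fd_eps=1e-5)

x0 = [0.6, 0.4]                            # an arbitrary trial start
print("Newton:      ", run(newton_step, x0))
print("quasi-Halley:", run(qh_step, x0))
```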

3. Discussion

A significant amount of work has focused on developing methods that approximate Newton's method, largely through approximations of $r_{i,j}$, leading to the so-called quasi-Newton schemes with superlinear convergence [4]. On the other hand, super-quadratically convergent methods have been proposed using higher-order derivatives; examples include Halley's method, Cauchy's method, and the Chebyshev method [3], Householder's methods of arbitrary convergence order [5], and other variants [6,7,8,9]. Newer higher-order methods are still being proposed [10], some of which are derivative-free [11]. However, the convergence of all these methods is examined when the starting guess is “sufficiently close” to the solution, which is known as local convergence.
Different approaches have been developed to analyze the semi-local convergence of iterative methods, such as those by Kantorovich [12], Smale [2], and others [13,14,15]. The Kantorovich approach has been studied and extended by many [16,17,18]. However, these approaches only analyzed the semi-local convergence of existing methods, rather than developing new ones.
The objective of this article is to go beyond the classical approaches by decreasing the system nonlinearity with a view to improving the convergence properties far from the solution. Starting with a scalar equation, we premultiplied the original equation by a suitable function and formulated a new equation with the same roots (7). Applying Newton's method to this equation, we proposed a new fixed-point iteration, which we call the Extended Newton (EN) method (8). The proposed extensions are summarized in Table 1 and discussed next.

3.1. Extended-Newton Method

The application of the EN method to a sample of scalar equations provided remarkable improvements. The equation $e^x - H = 0$ could be solved for extremely large $H$ in fewer than 10 iterations. Similar improvements were found for other examples (see SI). We also note that the EN method provides at least quadratic convergence close to the root, because it is Newton's method applied to $P(x)\, r(x)$. For scalar functions, the EN method adds negligible computational cost per iteration and is therefore a promising candidate. It should be noted that decreasing the nonlinearity can also be viewed as making the Jacobian Lipschitz, which is one of the parameters in the Kantorovich theorem [12]. The functions tested here do not natively have a Lipschitz-continuous derivative over their domain; the proposed transformation makes them Lipschitz in this sense, thus improving their semi-local convergence.
For a specific (known) equation form, a transformation other than premultiplication may be more suitable for decreasing its nonlinearity. For example, for the equation $e^x - H = 0$, one may write the equation as $e^x = H$ and then take the logarithm of both sides, resulting in a linear equation that is always solved in one iteration. However, this approach is problem-specific, requires identifying the two sides of the equation, and we have previously proposed it for nonlinear elasticity [19]. In contrast, the present approach is completely general and works for all kinds of equations and nonlinearities.
Although we used a constant value of $c$ during the iterations, there are other options for choosing its value at every iteration based on the current guess $x_n$, the previous guess $x_{n-1}$, and/or the form of $r(x)$. In fact, if we choose $c = x_{n-1}$, the EN method can be viewed as a combination of the secant method and Newton's method.

3.2. Rediscovery of Halley’s Method

Taking the limit $c \to x_n$, we rediscovered Halley's method (11), which is an extensively studied cubically convergent method, yet one rarely used in practice. Interestingly, although there are several cubically convergent methods reported in the literature, only Halley's method emerges naturally out of EN. One drawback of Halley's method is that it needs the second derivative $r''(x)$. However, with classical approximation approaches such as the finite difference method, and with the development of newer methods based on symbolic computation [20] and automatic differentiation [21], this limitation is becoming less important.

3.3. Vector Equations

Extension of the EN method to vector functions can be accomplished in multiple ways. We presented one option (15) and found that the convergence improved significantly. However, the presented approach entails additional computation (unlike for scalar functions). The extension of Halley's method to vector functions was straightforward and unique (16). Even though this approach worked perfectly, it has two computational disadvantages: the calculation and storage of $r_{i,jk}$ with $m^3$ terms, and the solution of two linear systems, first for the standard Newton step (13) and then for the correction (16).
As a simplification, we proposed its approximation (17), the quasi-Halley method. This approach only requires calculating one diagonal slice of $r_{i,jk}$, namely $r_{i,ji}$, and solving only one linear system of equations, making it an attractive option. To our knowledge, the quasi-Halley method has no previously reported counterpart. We believe that one of the reasons why these higher-order methods (Halley's or others) are not widely used in practice is that their faster convergence rates are offset by their higher computational cost per iteration. In this respect, the quasi-Halley method may provide a good balance between improved convergence properties and computational expense.

3.4. Future Directions

We believe there are many opportunities for extending this approach to other problems, and we end our discussion by mentioning a few potential avenues. For extending the EN method to boundary value problems, one could consider premultiplying the different (strong/weak/local conservation) forms and testing the convergence using different discretization methods. For the extension to vector functions, approaches to design efficient $P(\mathbf{x})$, which is a matrix, need to be developed. Similarly, this work motivates further exploration of quasi-Halley methods. In general, the extension of such root-finding methods to multiple variables is a broad area, which must also take additional factors, such as the sparsity and symmetry of the system, into account. The idea of multiplying by a function of $x$ instead of a constant can also be used in the development of preconditioners. Unlike linear preconditioning, which is well understood, this leads to nonlinear preconditioning, which is a relatively under-explored area [22]. Applying a similar approach to nonlinear least-squares problems will likely improve their convergence. Similarly, in optimization problems with multiple unknowns, a line search is commonly used, which is akin to solving a scalar equation. An approach similar to the EN method may provide an improved line search.

3.5. Conclusions

Methods based on the classical Newton's method took several decades to develop. We hope that the new approach presented in this work, which has yielded some new and some known results, will spark the interest of the scientific community to fully explore the properties of these methods, their applications, and the opportunities they present.

Supplementary Materials

The following are available online at https://www.mdpi.com/1999-4893/13/4/78/s1.

Author Contributions

Conceptualization, A.A. and S.P.; Formal analysis, A.A.; Investigation, A.A. and S.P.; Writing—original draft preparation, A.A.; Writing—review and editing, A.A. and S.P.; Funding acquisition, A.A. and S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Welsh Government and Higher Education Funding Council for Wales through the Sêr Cymru National Research Network in Advanced Engineering and Materials (Grant No. F28 to A.A.), and the Engineering and Physical Sciences Research Council of the UK (Grant No. EP/P018912/1 to A.A. and Grant No. EP/R010811/1 to S.P.).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Apostol, T.M. Calculus; John Wiley & Sons: New York, NY, USA, 1967.
2. Smale, S. Algorithms for solving equations. In Proceedings of the International Congress of Mathematicians, Berkeley, CA, USA, 3–11 August 1986; pp. 1263–1286.
3. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Soc.: Washington, WA, USA, 1982.
4. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995.
5. Householder, A. The Numerical Treatment of a Single Nonlinear Equation; McGraw-Hill: New York, NY, USA, 1970.
6. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
7. Amat, S.; Busquier, S.; Gutiérrez, J. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205.
8. Kou, J. Some variants of Cauchy's method with accelerated fourth-order convergence. J. Comput. Appl. Math. 2008, 213, 71–78.
9. Cordero, A.; Torregrosa, J.R.; Vindel, P. Dynamics of a family of Chebyshev–Halley type methods. Appl. Math. Comput. 2013, 219, 8568–8583.
10. Sahu, D.R.; Agarwal, R.P.; Singh, V.K. A Third Order Newton-Like Method and Its Applications. Mathematics 2018, 7, 31.
11. Li, J.; Wang, X.; Madhu, K. Higher-Order Derivative-Free Iterative Methods for Solving Nonlinear Equations and Their Basins of Attraction. Mathematics 2019, 7, 1052.
12. Kantorovich, L. Functional analysis and applied mathematics. Uspehi Mat. Nauk 1948, 3, 89–185.
13. Dedieu, J.P.; Priouret, P.; Malajovich, G. Newton's method on Riemannian manifolds: Covariant alpha theory. IMA J. Numer. Anal. 2003, 23, 395–419.
14. Argyros, I.K.; Magreñán, Á.A.; Orcos, L.; Sarría, Í. Advances in the Semilocal Convergence of Newton's Method with Real-World Applications. Mathematics 2019, 7, 299.
15. Yong, Z.; Gupta, N.; Jaiswal, J.P.; Madhu, K. On the Semilocal Convergence of the Multi-Point Variant of Jarratt Method: Unbounded Third Derivative Case. Mathematics 2019, 7, 540.
16. Dennis, J., Jr. On the Kantorovich hypothesis for Newton's method. SIAM J. Numer. Anal. 1969, 6, 493–507.
17. Ortega, J.M. The Newton-Kantorovich theorem. Am. Math. Mon. 1968, 75, 658–660.
18. Tapia, R. The Kantorovich theorem for Newton's method. Am. Math. Mon. 1971, 78, 389–392.
19. Mei, Y.; Hurtado, D.E.; Pant, S.; Aggarwal, A. On improving the numerical convergence of highly nonlinear elasticity problems. Comput. Methods Appl. Mech. Eng. 2018, 337, 110–127.
20. Buchberger, B. Symbolic computation: Computer algebra and logic. In Frontiers of Combining Systems; Baader, F., Schulz, K.U., Eds.; Springer: Dordrecht, The Netherlands, 1996; Volume 3, pp. 193–219.
21. Rall, L. Automatic Differentiation: Techniques and Applications. Lect. Notes Comput. Sci. 1981, 120, 173.
22. Cai, X.C.; Keyes, D.E. Nonlinearly preconditioned inexact Newton algorithms. SIAM J. Sci. Comput. 2002, 24, 183–200.
Figure 1. (a) Convergence of the standard Newton's method (red lines), the Extended Newton method with $c \in (-50, 50)$ (gray lines), and Halley's method (blue line) for solving $e^x - 500 = 0$ (solid curves with initial guess $x_0 = 0$ and dashed curve with initial guess $x_0 = 3$). (b) (From left) Iterations taken to converge for the standard Newton's method, the Extended Newton method, and Halley's method for solving $r(x) = e^x - H = 0$: with $H = 500$ and varying initial guess $x_0$ (top), and with $x_0 = 0$ and varying $H = e^{x^*}$ (bottom).
Figure 2. (a) Convergence for solving (18) with initial guess $x_1 = x_2 = 0$ using the standard Newton's method (red line), the Extended Newton method with $c_1 = c_2 = 0.5$ (gray line), and Halley's method (blue line). The inset shows the convergence of EN with the same initial condition and varying $c_1$ and $c_2$. (b) Convergence in finding the minimum of the two-variable Easom's function by solving its derivative equal to zero (20): (top) for the standard Newton's, Halley's, and quasi-Halley methods with varying initial conditions, and (bottom) for the Extended Newton method with varying $\mathbf{c}$ and three initial conditions. (c) Convergence in finding the minimum of the two-variable Easom's function with varying initial guesses when $c_i$ is a perturbation of the initial guess, i.e., $c_1 = x_0 + \epsilon_1$ and $c_2 = y_0 + \epsilon_2$, where $\epsilon_{1,2} \in \left( -\tfrac{1}{2} \times 10^{-4},\; \tfrac{1}{2} \times 10^{-4} \right)$.
Table 1. Summary of Newton's method and its proposed extensions.

Method | Scalar Function Update | Vector Function Update
N | $\Delta x^N = -\dfrac{r(x_n)}{r'(x_n)}$ | $\sum_j r_{i,j}(\mathbf{x})\, \Delta x_j^N = -r_i(\mathbf{x})$
EN | $\Delta x^{EN} = -\dfrac{(x_n - c)\, r(x_n)}{r(x_n) - \dfrac{(x_n - c)\, r'(x_n)\, r(c)}{r(x_n) - r(c)}}$ | $\sum_j \left[ \dfrac{\delta_{ij}\, r_i(\mathbf{x})}{r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i})} + \dfrac{(x_i - c_i)\left( r_{i,j}(\mathbf{x}^{c_i})\, r_i(\mathbf{x})\,(1 - \delta_{ij}) - r_{i,j}(\mathbf{x})\, r_i(\mathbf{x}^{c_i}) \right)}{\left( r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i}) \right)^2} \right] \Delta x_j^{EN} = -\dfrac{(x_i - c_i)\, r_i(\mathbf{x})}{r_i(\mathbf{x}) - r_i(\mathbf{x}^{c_i})}$
H | $\Delta x^H = \dfrac{-r(x_n)/r'(x_n)}{1 - r(x_n)\, r''(x_n)/\left(2\, r'(x_n)^2\right)}$ | $\sum_j \left[ r_{i,j} + \dfrac{1}{2} \sum_k r_{i,jk}\, \Delta x_k^N \right] \Delta x_j^H = -r_i$
QH | – | $\sum_j \left[ r_{i,i}\, r_{i,j} - \dfrac{1}{2}\, r_{i,ji}\, r_i \right] \Delta x_j^{QH} = -r_i\, r_{i,i}$
