Article

Advances in the Semilocal Convergence of Newton’s Method with Real-World Applications

Ioannis K. Argyros, Á. Alberto Magreñán, Lara Orcos and Íñigo Sarría
1  Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2  Departamento de Matemáticas y Computación, Universidad de La Rioja, 26006 Logroño, Spain
3  Departamento de Matemática Aplicada, Universitat Politècnica de València, 46022 València, Spain
4  Escuela Superior de Ingeniería y Tecnología, Universidad Internacional de La Rioja, 26006 Logroño, Spain
*
Author to whom correspondence should be addressed.
Mathematics 2019, 7(3), 299; https://doi.org/10.3390/math7030299
Submission received: 3 March 2019 / Revised: 20 March 2019 / Accepted: 21 March 2019 / Published: 24 March 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

The aim of this paper is to present a new semi-local convergence analysis for Newton's method in a Banach space setting. The novelty of this paper is that, by using more precise Lipschitz constants than in earlier studies together with our new idea of restricted convergence domains, we extend the applicability of Newton's method as follows: the convergence domain is extended; the error estimates are tighter; and the information on the location of the solution is at least as precise as before. These advantages are obtained using the same information as before, since the new Lipschitz constants are tighter than, and special cases of, the ones used before. Numerical examples and applications are used to compare the theoretical results favorably with earlier ones.

1. Introduction

In this study we are concerned with the problem of approximating a locally unique solution z^* of the equation
G(x) = 0,    (1)
where G is a Fréchet-differentiable operator defined on a nonempty, open, convex subset D of a Banach space E_1 with values in a Banach space E_2.
Many problems in computational disciplines such as Applied Mathematics, Optimization, Mathematical Biology, Chemistry, Economics, Medicine, Physics and Engineering can be solved by finding the solutions of equations of the form (1) by means of mathematical modelling [1,2,3,4,5,6,7]. The solutions of such equations are rarely available in closed form, so they are usually approximated by iterative methods. A very important problem in the study of iterative procedures is the convergence region, which is in general small. Therefore, it is important to enlarge the convergence region without additional hypotheses.
The study of the convergence of iterative algorithms is usually divided into two categories: semi-local and local convergence analysis. The semi-local convergence analysis is based on information around an initial point and provides conditions ensuring the convergence of these algorithms, while the local analysis is based on information around a solution and provides estimates of the radii of the convergence balls.
Newton's method, defined for all n = 0, 1, 2, ... by
z_{n+1} = z_n - G'(z_n)^{-1} G(z_n),    (2)
is undoubtedly the most popular method for generating a sequence {z_n} approximating z^*, where z_0 is an initial point. There is a plethora of convergence results for Newton's method [1,2,3,4,6,8,9,10,11,12,13,14]. We shall enlarge the convergence region by finding a more precise domain where the iterates {z_n} lie; this leads to smaller Lipschitz constants, which in turn lead to a tighter convergence analysis for Newton's method than in earlier studies. This technique can be applied to improve the convergence domain of other iterative methods in an analogous way.
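For readers who want to experiment numerically, the following is a minimal sketch of iteration (2) in Python (our own illustration; the helper names newton, G and dG are ours and not part of the paper):

```python
import numpy as np

def newton(G, dG, z0, tol=1e-12, max_iter=100):
    """Sketch of Newton's method (2): z_{n+1} = z_n - G'(z_n)^{-1} G(z_n)."""
    z = np.atleast_1d(np.asarray(z0, dtype=float))
    for _ in range(max_iter):
        # Solve the linear system G'(z_n) s = G(z_n) instead of forming the inverse.
        step = np.linalg.solve(np.atleast_2d(dG(z)), np.atleast_1d(G(z)))
        z = z - step
        if np.linalg.norm(step) < tol:
            break
    return z

# Usage example with the scalar equation x^3 - 0.45 = 0 used later in Section 3.
root = newton(lambda x: x**3 - 0.45, lambda x: 3 * x**2, 1.0)
print(root)  # approximately 0.766309
```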
Let us consider the conditions:
There exist z_0 ∈ D and η ≥ 0 such that
G'(z_0)^{-1} ∈ \mathcal{L}(E_2, E_1) and \|G'(z_0)^{-1} G(z_0)\| ≤ η;
There exists T ≥ 0 such that the Lipschitz condition
\|G'(z_0)^{-1}(G'(x) - G'(y))\| ≤ T \|x - y\|
holds for all x, y ∈ D.
Then, a sufficient convergence condition for Newton's method is given by the Kantorovich criterion, famous for its simplicity and clarity,
h_K = 2 T η ≤ 1.    (3)
Let us consider a motivational and academic example to show that this condition is not satisfied. Choose E_1 = E_2 = \mathbb{R}, z_0 = 1, p ∈ [0, 0.5), D = S(z_0, 1 - p) and define the function G on D by
G(x) = x^3 - p.
Then, we have T = 2(2 - p), and the Kantorovich condition is not satisfied, since h_K > 1 for all p ∈ [0, 0.5). We denote by I_K = ∅ the set of points p satisfying Equation (3). Hence, there is no guarantee that the Newton sequence starting at z_0 converges to z^* = \sqrt[3]{p}.
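This claim is easy to check numerically; the short sketch below (ours) evaluates h_K = 2Tη with T = 2(2 - p) as above and η = |G'(z_0)^{-1} G(z_0)| = (1 - p)/3:

```python
# Kantorovich quantity h_K = 2*T*eta for G(x) = x**3 - p, z0 = 1, D = S(z0, 1 - p).
for p in (0.0, 0.1, 0.25, 0.4, 0.49):
    T = 2 * (2 - p)       # Lipschitz constant on D
    eta = (1 - p) / 3     # |G'(z0)^{-1} G(z0)|
    print(f"p = {p:4.2f}   h_K = {2 * T * eta:.4f}")  # exceeds 1 for every p in [0, 0.5)
```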
The rest of the paper is structured as follows: in Section 2 we present the semi-local convergence analysis of Newton's method (2); numerical examples are presented in Section 3 and a real-world application in Section 4.

2. Semi-Local Convergence Analysis

We need an auxiliary result on majorizing sequences for Newton’s method.
Lemma 1.
Let H > 0, K > 0, L > 0, L_0 > 0 and η > 0 be parameters. Suppose that
h_5 = L_5 η ≤ 1,    (4)
where
L_5 = \begin{cases} \frac{1}{L_0 + H}, & \text{if } b := L K + 2 δ L_0 (K - 2H) = 0, \\ \frac{-2 δ (L_0 + H) + \sqrt{4 δ^2 (L_0 + H)^2 + 4 δ (L K + 2 δ L_0 (K - 2H))}}{L K + 2 δ L_0 (K - 2H)}, & \text{if } b > 0, \\ \frac{-2 δ (L_0 + H) - \sqrt{4 δ^2 (L_0 + H)^2 + 4 δ (L K + 2 δ L_0 (K - 2H))}}{L K + 2 δ L_0 (K - 2H)}, & \text{if } b < 0, \end{cases}
and
δ = \frac{2 L}{L + \sqrt{L^2 + 8 L_0 L}}.
holds. Then, scalar sequence { t n } given by
t 0 = 0 , t 1 = η , t 2 = t 1 + K ( t 1 t 0 ) 2 2 ( 1 H t 1 ) , t n + 2 = t n + 1 + L ( t n + 1 t n ) 2 2 ( 1 L 0 t n + 1 ) f o r a l l n = 1 , 2 , ,
is well defined, increasing, bounded from above by
t * * = η + ( 1 + δ 0 1 δ ) K η 2 2 ( 1 H η )
and converges to its unique least upper bound t * which satisfies
t 2 t * t * * ,
where δ 0 = L ( t 2 t 1 ) 2 ( 1 L 0 t 2 ) . Moreover, the following estimates hold:
0 < t n + 2 t n + 1 δ 0 δ n 1 K η 2 2 ( 1 H η ) f o r a l l n = 1 , 2 ,
and
t * t n δ 0 ( t 2 t 1 ) 1 δ δ n 2 f o r a l l n = 2 , 3 , .
Proof. 
By induction, we show that
0 < \frac{L (t_{k+1} - t_k)}{2 (1 - L_0 t_{k+1})} ≤ δ    (10)
holds for all k = 1, 2, \ldots. Estimate (10) is true for k = 1 by Equation (4). Then, we have by Equation (5) that
0 < t_3 - t_2 ≤ δ_0 (t_2 - t_1) \;\Longrightarrow\; t_3 ≤ t_2 + δ_0 (t_2 - t_1) ≤ t_1 + (1 + δ_0)(t_2 - t_1) = t_1 + \frac{1 - δ_0^2}{1 - δ_0}(t_2 - t_1) < t^{**}
and, for m = 2, 3, \ldots,
t_{m+2} ≤ t_{m+1} + δ_0 δ^{m-1} (t_2 - t_1) ≤ t_m + δ_0 δ^{m-2} (t_2 - t_1) + δ_0 δ^{m-1} (t_2 - t_1) ≤ \cdots ≤ t_1 + \bigl(1 + δ_0 (1 + δ + \cdots + δ^{m-1})\bigr)(t_2 - t_1) = t_1 + \left(1 + δ_0 \frac{1 - δ^m}{1 - δ}\right)(t_2 - t_1) ≤ t^{**}.
Assume that estimate (10) holds for all natural integers n ≤ m. Then, we get by Equations (5) and (10) that
0 < t_{m+2} - t_{m+1} ≤ δ_0 δ^{m-1} (t_2 - t_1) ≤ δ^m (t_2 - t_1)
and
t_{m+2} ≤ t_1 + \left(1 + δ_0 \frac{1 - δ^m}{1 - δ}\right)(t_2 - t_1) ≤ t_1 + \frac{1 - δ^{m+1}}{1 - δ}(t_2 - t_1) < t^{**}.
Evidently, estimate (10) remains true when m is replaced by m + 1, provided that
\frac{L}{2}(t_{m+2} - t_{m+1}) ≤ δ (1 - L_0 t_{m+2}),
or
\frac{L}{2}(t_{m+2} - t_{m+1}) + δ L_0 t_{m+2} - δ ≤ 0,
or
\frac{L}{2} δ^m (t_2 - t_1) + δ L_0 \left(t_1 + \frac{1 - δ^{m+1}}{1 - δ}(t_2 - t_1)\right) - δ ≤ 0.    (11)
Estimate (11) motivates us to define recurrent functions {ψ_m} on [0, 1) by
ψ_m(s) = \frac{L}{2}(t_2 - t_1) s^m + L_0 (1 + s + s^2 + \cdots + s^m)(t_2 - t_1) s - (1 - L_0 t_1) s.
We need a relationship between two consecutive functions ψ_m. We get that
ψ_{m+1}(s) = \frac{L}{2}(t_2 - t_1) s^{m+1} + L_0 (1 + s + \cdots + s^{m+1})(t_2 - t_1) s - (1 - L_0 t_1) s = \frac{L}{2}(t_2 - t_1) s^{m+1} + L_0 (1 + s + \cdots + s^{m+1})(t_2 - t_1) s - (1 - L_0 t_1) s - \frac{L}{2}(t_2 - t_1) s^m - L_0 (1 + s + \cdots + s^m)(t_2 - t_1) s + (1 - L_0 t_1) s + ψ_m(s).
Therefore, we deduce that
ψ_{m+1}(s) = ψ_m(s) + \frac{1}{2}(2 L_0 s^2 + L s - L) s^m (t_2 - t_1).    (12)
Estimate (11) is satisfied, if
ψ_m(δ) ≤ 0 \quad \text{holds for all } m = 1, 2, \ldots.    (13)
Using Equation (12) and the choice of δ (which satisfies 2 L_0 δ^2 + L δ - L = 0), we obtain that
ψ_{m+1}(δ) = ψ_m(δ) \quad \text{for all } m = 1, 2, \ldots.    (14)
Let us now define the function ψ_∞ on [0, 1) by
ψ_∞(s) = \lim_{m \to \infty} ψ_m(s).
Then, we have by Equation (14) that
ψ_∞(δ) = ψ_m(δ) \quad \text{for all } m = 1, 2, \ldots.
Hence, Equation (13) is satisfied, if
ψ_∞(δ) ≤ 0.    (15)
Letting m \to \infty in the definition of ψ_m, we get that
ψ_∞(δ) = \left(\frac{L_0}{1 - δ}(t_2 - t_1) + L_0 t_1 - 1\right) δ.
It then follows from Equation (4) and the definition of δ that Equation (15) is satisfied. The induction is now completed. Hence, the sequence {t_n} is increasing, bounded from above by t^{**} given by Equation (6), and as such it converges to its unique least upper bound t^*, which satisfies Equation (7). □
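The scalar sequence (5) is straightforward to evaluate. The sketch below (our own illustration, not part of the paper) computes it for given constants and, as a usage example, reproduces the limit t^* ≈ 0.0348859 reported for the constants of the application in Section 4:

```python
def majorizing_sequence(H, K, L, L0, eta, n_terms=20):
    """Majorizing sequence (5): t0 = 0, t1 = eta,
    t2 = t1 + K (t1 - t0)^2 / (2 (1 - H t1)),
    t_{n+2} = t_{n+1} + L (t_{n+1} - t_n)^2 / (2 (1 - L0 t_{n+1})), n >= 1."""
    t = [0.0, eta]
    t.append(t[1] + K * (t[1] - t[0]) ** 2 / (2 * (1 - H * t[1])))
    while len(t) < n_terms:
        t.append(t[-1] + L * (t[-1] - t[-2]) ** 2 / (2 * (1 - L0 * t[-1])))
    return t

# Constants reported in Section 4 (Planck's radiation law problem).
t = majorizing_sequence(H=0.0354792, K=0.0354792, L=0.094771,
                        L0=0.0599067, eta=0.0348643)
print(t[-1])  # approximately 0.0348859, the value of t* given in Section 4
```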
Let S(z, ϱ) and \bar S(z, ϱ) stand, respectively, for the open and closed balls in E_1 with center z ∈ E_1 and radius ϱ > 0.
The conditions (A) for the semi-local convergence are:
(A1) G : D ⊆ E_1 → E_2 is Fréchet differentiable and there exist z_0 ∈ D, η ≥ 0 such that G'(z_0)^{-1} ∈ \mathcal{L}(E_2, E_1) and
\|G'(z_0)^{-1} G(z_0)\| ≤ η.
(A2) There exists L_0 > 0 such that for all x ∈ D
\|G'(z_0)^{-1}(G'(x) - G'(z_0))\| ≤ L_0 \|x - z_0\|.
(A3) L_0 η < 1 and there exists L > 0 such that
\|G'(z_0)^{-1}(G'(x) - G'(y))\| ≤ L \|x - y\|
for all x, y ∈ D_0 := S(z_1, \frac{1}{L_0} - \|G'(z_0)^{-1} G(z_0)\|) ∩ D.
(A4) There exists H > 0 such that
\|G'(z_0)^{-1}(G'(z_1) - G'(z_0))\| ≤ H \|z_1 - z_0\|,
where z_1 = z_0 - G'(z_0)^{-1} G(z_0).
(A5) There exists K > 0 such that for all θ ∈ [0, 1]
\|G'(z_0)^{-1}(G'(z_0 + θ(z_1 - z_0)) - G'(z_0))\| ≤ K θ \|z_1 - z_0\|.
Notice that (A2) ⟹ (A5) ⟹ (A4). Clearly, we have that
H ≤ K ≤ L_0
and L/L_0 can be arbitrarily large [9]. It is worth noticing that (A3)-(A5) are not hypotheses additional to (A2), since in practice the computation of the Lipschitz constant T requires the computation of the other constants as special cases.
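The ordering H ≤ K ≤ L_0 can be checked directly on the constants reported later in the paper; the following lines (ours) do so for Example 2 and for the application of Section 4:

```python
# (H, K, L0) triples reported in Example 2 and in Section 4 of the paper.
for name, H, K, L0 in [("Example 2", 1.28, 2.28, 2.6),
                       ("Section 4", 0.0354792, 0.0354792, 0.0599067)]:
    assert H <= K <= L0, name
    print(f"{name}: H = {H} <= K = {K} <= L0 = {L0}")
```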
Next, we present a semi-local convergence result relating the majorizing sequence {t_n} to Newton's method under the hypotheses (A).
Theorem 1.
Suppose that the hypotheses (A), the hypotheses of Lemma 1 and \bar S(z_0, t^*) ⊆ D hold, where t^* is given in Lemma 1. Then, the sequence {z_n} generated by Newton's method is well defined, remains in \bar S(z_0, t^*) and converges to a solution z^* ∈ \bar S(z_0, t^*) of the equation G(x) = 0. Moreover, the following estimates hold:
\|z_{n+1} - z_n\| ≤ t_{n+1} - t_n    (18)
and
\|z_n - z^*\| ≤ t^* - t_n \quad \text{for all } n = 0, 1, 2, \ldots,    (19)
where the sequence {t_n} is given in Lemma 1. Furthermore, if there exists R ≥ t^* such that
\bar S(z_0, R) ⊆ D \quad \text{and} \quad L_0 (t^* + R) < 2,
then the solution z^* of the equation G(x) = 0 is unique in \bar S(z_0, R).
Proof. 
We use mathematical induction to prove that
\|z_{k+1} - z_k\| ≤ t_{k+1} - t_k    (20)
and
\bar S(z_{k+1}, t^* - t_{k+1}) ⊆ \bar S(z_k, t^* - t_k)    (21)
for all k = 0, 1, 2, \ldots. Let z ∈ \bar S(z_1, t^* - t_1). Then, we obtain that
\|z - z_0\| ≤ \|z - z_1\| + \|z_1 - z_0\| ≤ t^* - t_1 + t_1 - t_0 = t^* - t_0,
which implies z ∈ \bar S(z_0, t^* - t_0). Note also that
\|z_1 - z_0\| = \|G'(z_0)^{-1} G(z_0)\| ≤ η = t_1 - t_0.
Hence, estimates (20) and (21) hold for k = 0. Suppose that these estimates hold for all n ≤ k. Then, we have that
\|z_{k+1} - z_0\| ≤ \sum_{i=1}^{k+1} \|z_i - z_{i-1}\| ≤ \sum_{i=1}^{k+1} (t_i - t_{i-1}) = t_{k+1} - t_0 = t_{k+1}
and
\|z_k + θ(z_{k+1} - z_k) - z_0\| ≤ t_k + θ(t_{k+1} - t_k) ≤ t^*
for all θ ∈ (0, 1). Using Lemma 1 and the induction hypotheses, we get in turn that
\|G'(z_0)^{-1}(G'(z_{k+1}) - G'(z_0))\| ≤ M \|z_{k+1} - z_0\| ≤ M (t_{k+1} - t_0) = M t_{k+1} < 1,    (22)
where
M = H \text{ if } k = 0 \quad \text{and} \quad M = L_0 \text{ if } k = 1, 2, \ldots.
It follows from Equation (22) and the Banach lemma on invertible operators that G'(z_{k+1})^{-1} exists and
\|G'(z_{k+1})^{-1} G'(z_0)\| ≤ (1 - M \|z_{k+1} - z_0\|)^{-1} ≤ (1 - M t_{k+1})^{-1}.    (23)
Using the Newton iteration, we obtain the identity
G(z_{k+1}) = G(z_{k+1}) - G(z_k) - G'(z_k)(z_{k+1} - z_k) = \int_0^1 \bigl(G'(z_k + θ(z_{k+1} - z_k)) - G'(z_k)\bigr)(z_{k+1} - z_k)\, dθ.    (24)
Then, by Equation (24) we get in turn
\|G'(z_0)^{-1} G(z_{k+1})\| ≤ \int_0^1 \|G'(z_0)^{-1}(G'(z_k + θ(z_{k+1} - z_k)) - G'(z_k))\| \, \|z_{k+1} - z_k\| \, dθ ≤ M_1 \int_0^1 θ \|z_{k+1} - z_k\| \, \|z_{k+1} - z_k\| \, dθ ≤ \frac{M_1}{2}(t_{k+1} - t_k)^2,    (25)
where
M_1 = K \text{ if } k = 0 \quad \text{and} \quad M_1 = L \text{ if } k = 1, 2, \ldots.
Moreover, by the Newton iteration, Equations (23) and (25) and the induction hypotheses, we get that
\|z_{k+2} - z_{k+1}\| = \|(G'(z_{k+1})^{-1} G'(z_0))(G'(z_0)^{-1} G(z_{k+1}))\| ≤ \|G'(z_{k+1})^{-1} G'(z_0)\| \, \|G'(z_0)^{-1} G(z_{k+1})\| ≤ \frac{M_1 (t_{k+1} - t_k)^2}{2 (1 - M t_{k+1})} = t_{k+2} - t_{k+1}.
That is, we have shown that Equation (20) holds for all k ≥ 0. Furthermore, let z ∈ \bar S(z_{k+2}, t^* - t_{k+2}). Then, we have that
\|z - z_{k+1}\| ≤ \|z - z_{k+2}\| + \|z_{k+2} - z_{k+1}\| ≤ t^* - t_{k+2} + t_{k+2} - t_{k+1} = t^* - t_{k+1}.
That is, z ∈ \bar S(z_{k+1}, t^* - t_{k+1}). The induction for Equations (20) and (21) is now complete. Lemma 1 implies that {t_n} is a complete (Cauchy) sequence. It follows from Equations (20) and (21) that {z_n} is also a complete sequence in the Banach space E_1 and as such it converges to some z^* ∈ \bar S(z_0, t^*) (since \bar S(z_0, t^*) is a closed set). By letting k → ∞ in Equation (25) we get G(z^*) = 0. Estimate (19) is obtained from Equation (18) (cf. [4,6,12]) by using standard majorization techniques. The proof of the uniqueness part has been given in [9]. □
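To illustrate estimate (19) numerically, the sketch below (our own, using the constants computed in Section 3 for the cubic example with p = 0.45 and η = (1 - p)/3) compares the Newton errors |z_n - z^*| with the majorizing gaps t^* - t_n:

```python
# Cubic example with p = 0.45: eta = (1-p)/3, H = (5+p)/3, K = 2,
# L0 = 3 - p and L = 2(-2p^2 + 5p + 6) / (3(3 - p)); z0 = 1, z* = p**(1/3).
p = 0.45
eta, H, K = (1 - p) / 3, (5 + p) / 3, 2.0
L0, L = 3 - p, 2 * (-2 * p**2 + 5 * p + 6) / (3 * (3 - p))

t = [0.0, eta, eta + K * eta**2 / (2 * (1 - H * eta))]   # sequence (5)
for _ in range(20):
    t.append(t[-1] + L * (t[-1] - t[-2]) ** 2 / (2 * (1 - L0 * t[-1])))
t_star = t[-1]

z, z_star = 1.0, p ** (1 / 3)
for n in range(5):
    print(f"n = {n}:  |z_n - z*| = {abs(z - z_star):.6f}  <=  t* - t_n = {t_star - t[n]:.6f}")
    z = z - (z**3 - p) / (3 * z**2)   # Newton step (2)
```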
The sufficient convergence criteria for Newton's method using the conditions (A) and the constants T, L, L_0 and η, given in affine invariant form, are:
  • Kantorovich [6]
    h_K = 2 T η ≤ 1.    (26)
  • Argyros [9]
    h_1 = (L_0 + T) η ≤ 1.    (27)
  • Argyros [3]
    h_2 = \frac{1}{4}\left(T + 4 L_0 + \sqrt{T^2 + 8 L_0 T}\right) η ≤ 1.    (28)
  • Argyros [11]
    h_3 = \frac{1}{4}\left(4 L_0 + \sqrt{L_0 T + 8 L_0^2} + \sqrt{L_0 T}\right) η ≤ 1.    (29)
  • Argyros [12]
    h_4 = \tilde L_4 η ≤ 1, \quad \tilde L_4 = L_5(T), \; δ = δ(T).    (30)
If H = K = L_0 = L = T, then Equations (27)-(30) coincide with Equation (26). If L_0 < T (note that L ≤ T, since D_0 ⊆ D), then
h_K ≤ 1 ⟹ h_1 ≤ 1 ⟹ h_2 ≤ 1 ⟹ h_3 ≤ 1 ⟹ h_4 ≤ 1 ⟹ h_5 ≤ 1,
but not vice versa. We also have that, for L_0/T → 0,
\frac{h_1}{h_K} → \frac{1}{2}, \quad \frac{h_2}{h_K} → \frac{1}{4}, \quad \frac{h_2}{h_1} → \frac{1}{2}, \quad \frac{h_3}{h_K} → 0, \quad \frac{h_3}{h_1} → 0, \quad \frac{h_3}{h_2} → 0.    (31)
The limits in Equation (31) show by how many times (at most) the better condition improves the weaker one.
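These limits are easy to observe numerically; the sketch below (ours) evaluates the criteria (26)-(29) for a fixed T and a decreasing ratio L_0/T and prints the ratios, which approach 1/2, 1/4 and 0 as stated:

```python
import math

h_K = lambda T, L0, eta: 2 * T * eta
h_1 = lambda T, L0, eta: (L0 + T) * eta
h_2 = lambda T, L0, eta: (T + 4 * L0 + math.sqrt(T**2 + 8 * L0 * T)) * eta / 4
h_3 = lambda T, L0, eta: (4 * L0 + math.sqrt(L0 * T + 8 * L0**2)
                          + math.sqrt(L0 * T)) * eta / 4

T, eta = 1.0, 1.0                       # eta cancels in the ratios below
for ratio in (1e-2, 1e-4, 1e-6):
    L0 = ratio * T
    print(h_1(T, L0, eta) / h_K(T, L0, eta),   # -> 1/2
          h_2(T, L0, eta) / h_K(T, L0, eta),   # -> 1/4
          h_3(T, L0, eta) / h_K(T, L0, eta))   # -> 0
```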
Remark 1.
(a) 
The majorizing sequence {\bar t_n}, with limit \bar t^* and upper bound \bar t^{**}, given in [12] under the conditions (A) and Equation (29), is defined by
\bar t_0 = 0, \quad \bar t_1 = η, \quad \bar t_2 = \bar t_1 + \frac{L_0 (\bar t_1 - \bar t_0)^2}{2 (1 - L_0 \bar t_1)}, \quad \bar t_{n+2} = \bar t_{n+1} + \frac{T (\bar t_{n+1} - \bar t_n)^2}{2 (1 - L_0 \bar t_{n+1})}, \; n = 1, 2, \ldots, \quad \bar t^* = \lim_{n \to \infty} \bar t_n, \quad \bar t^{**} = η + \frac{L_0 η^2}{2 (1 - δ)(1 - L_0 η)}.    (32)
Using a simple inductive argument and Equation (32), we get for L < T that
t_n < \bar t_n, \quad n = 3, 4, \ldots,
t_{n+1} - t_n < \bar t_{n+1} - \bar t_n, \quad n = 2, 3, \ldots,
and
t^* ≤ \bar t^{**}.
These estimates, together with (5)-(7), show that the new error bounds are more precise than the old ones and that the information on the location of the solution z^* is at least as precise as before, as already claimed in the abstract of this study (see also the numerical examples). Clearly, the new majorizing sequence {t_n} is more precise than the corresponding sequences associated with the other conditions.
(b) 
The condition \bar S(z_0, t^*) ⊆ D can be replaced by S(z_0, \frac{1}{L_0}) ⊆ D (or D_0 ⊆ D). In this case, condition (A3) holds for all x, y ∈ S(z_0, \frac{1}{L_0}) (or D_0).
(c) 
If 2 L_0 η ≤ 1, then we have that z_0 ∈ \bar S(z_1, \frac{1}{L_0} - \|G'(z_0)^{-1} G(z_0)\|), since \bar S(z_1, \frac{1}{L_0} - \|G'(z_0)^{-1} G(z_0)\|) ⊆ S(z_0, \frac{1}{L_0}).

3. Numerical Examples

Example 1.
Returning to the motivational example, we have L_0 = 3 - p.
Conditions (27)-(29) are satisfied, respectively, for
p ∈ I_1 := [0.494816242, 0.5),
p ∈ I_2 := [0.450339002, 0.5)
and
p ∈ I_3 := [0.4271907643, 0.5).
We now consider initial points for which the previous conditions are not satisfied but our new criteria are, in order to exhibit the improvement obtained with the new, weaker criteria.
We get that
H = \frac{5 + p}{3},
K = 2,
L = \frac{2 (-2 p^2 + 5 p + 6)}{3 (3 - p)}.
Using these values, we obtain that condition (4) is satisfied for p ∈ [0.0984119, 0.5). However, we must also have L_0 η < 1, which holds for all p ∈ (0, 0.5). Hence, the new criteria are satisfied for p ∈ I_4 := [0.0984119, 0.5), so there exist numerous values of p for which the previous conditions cannot guarantee the convergence of Newton's method but our new ones can. Notice that we have
I_K ⊆ I_1 ⊆ I_2 ⊆ I_3 ⊆ I_4.
Hence, the interval of convergence cannot be improved further under these conditions. Notice that the new convergence criterion is even weaker than the corresponding one for the modified Newton's method given in [11], namely L_0 η < 0.5.
As an illustration, we choose different values of p; the corresponding Newton iterates are reported in Table 1.
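Table 1 can be reproduced with a few lines of Python (our own sketch):

```python
# Newton's method for G(x) = x**3 - p with z0 = 1, reproducing Table 1.
for p in (0.41, 0.43, 0.45):
    z = 1.0
    print(f"p = {p}")
    for n in range(1, 6):
        z = z - (z**3 - p) / (3 * z**2)
        print(f"  z_{n} = {z:.6f}")
```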
Example 2.
Consider E_1 = E_2 = C[0, 1]. Let D^* = \{x ∈ C[0, 1] : \|x\| ≤ R\}, with R > 0, and define G on D^* by
G(x)(u_1) = x(u_1) - f(u_1) - λ \int_0^1 μ(u_1, u_2)\, x(u_2)^3 \, du_2, \quad x ∈ C[0, 1], \; u_1 ∈ [0, 1],
where f ∈ C[0, 1] is a given function, λ is a real constant and the kernel μ is the Green function. In this case, for all x ∈ D^*, G'(x) is a linear operator defined on D^* by
[G'(x)(v)](u_1) = v(u_1) - 3 λ \int_0^1 μ(u_1, u_2)\, x(u_2)^2\, v(u_2)\, du_2, \quad v ∈ C[0, 1], \; u_1 ∈ [0, 1].
If we choose z_0(u_1) = f(u_1) = 1, it follows that
\|I - G'(z_0)\| ≤ 3 |λ| / 8.
Hence, if
|λ| < 8/3,
then G'(z_0)^{-1} is defined and
\|G'(z_0)^{-1}\| ≤ \frac{8}{8 - 3 |λ|},
\|G(z_0)\| ≤ \frac{|λ|}{8},
η = \|G'(z_0)^{-1} G(z_0)\| ≤ \frac{|λ|}{8 - 3 |λ|}.
Choosing λ = 1.00, we get
η = 0.2, \quad T = 3.8, \quad L_0 = 2.6, \quad K = 2.28, \quad H = 1.28 \quad \text{and} \quad L = 1.38154.
With these values we conclude that conditions (26)-(29) are not satisfied, since
h_K = 1.52 > 1,
h_1 = 1.28 > 1,
h_2 = 1.19343 > 1,
h_3 = 1.07704 > 1,
but conditions (30) and (4) are satisfied, since
h_4 = 0.985779 < 1
and
h_5 = 0.97017 < 1.
Hence, Newton’s method converges by Theorem 1.
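The quoted values of the classical criteria follow directly from the constants of this example. The sketch below (ours) reproduces them; the values of h_4 and h_5 are not recomputed here because they require the piecewise constant of Lemma 1:

```python
import math

T, L0, eta = 3.8, 2.6, 0.2
print("h_K =", 2 * T * eta)                                              # 1.52
print("h_1 =", (L0 + T) * eta)                                           # 1.28
print("h_2 =", (T + 4 * L0 + math.sqrt(T**2 + 8 * L0 * T)) * eta / 4)    # about 1.19343
print("h_3 =", (4 * L0 + math.sqrt(L0 * T + 8 * L0**2)
                + math.sqrt(L0 * T)) * eta / 4)                          # about 1.07704
```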

4. Application: Planck’s Radiation Law Problem

We consider the following problem [15]:
φ(λ) = \frac{8 π c P λ^{-5}}{e^{c P / (λ B T)} - 1},    (36)
which calculates the energy density within an isothermal blackbody. The maximum of φ occurs when φ'(λ) = 0. From Equation (36), we get
φ'(λ) = \frac{8 π c P λ^{-6}}{e^{c P / (λ B T)} - 1}\left(\frac{(c P / (λ B T))\, e^{c P / (λ B T)}}{e^{c P / (λ B T)} - 1} - 5\right) = 0,    (37)
that is, when
\frac{(c P / (λ B T))\, e^{c P / (λ B T)}}{e^{c P / (λ B T)} - 1} = 5.    (38)
After the change of variable x = c P / (λ B T) and reordering terms, we obtain
f(x) = e^{-x} - 1 + \frac{x}{5} = 0.    (39)
As a consequence, we need to find the roots of Equation (39).
We consider Ω = \bar S(5, 1) ⊂ \mathbb{R} and the initial point z_0 = 5, and we obtain
η = 0.0348643, \quad L_0 = 0.0599067, \quad K = 0.0354792, \quad H = 0.0354792 \quad \text{and} \quad L = 0.094771.
So the conditions (A) are satisfied. Moreover, as b = 0.000906015 > 0, we obtain
L_5 = 10.0672,
which satisfies
L_5 η = 0.350988 < 1,
and that means that the conditions of Lemma 1 are also satisfied. Finally, we obtain that
t^* = 0.0348859.
Hence, Newton's method converges to the solution x^* = 4.965114231744276 by Theorem 1.
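For completeness, a short sketch (ours) of Newton's method applied to Equation (39) from the initial point z_0 = 5 (the center of Ω), which also reproduces the value of η = |f(z_0)/f'(z_0)| used above:

```python
import math

f = lambda x: math.exp(-x) - 1 + x / 5
df = lambda x: -math.exp(-x) + 1 / 5

z = 5.0
print("eta =", abs(f(z) / df(z)))   # about 0.0348643
for n in range(5):
    z = z - f(z) / df(z)
    print(n + 1, z)                 # converges to about 4.965114231744276
```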

Author Contributions

All authors have contributed in a similar way.

Funding

This research was supported in part by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 19374/PI/14, by the project MTM2014-52016-C2-1-P of the Spanish Ministry of Science and Innovation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amat, S.; Busquier, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205.
  2. Amat, S.; Busquier, S. Third-order iterative methods under Kantorovich conditions. J. Math. Anal. Appl. 2007, 336, 243–261.
  3. Argyros, I.K. A semi-local convergence analysis for directional Newton methods. Math. Comput. 2011, 80, 327–343.
  4. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  5. Farhane, N.; Boumhidi, I.; Boumhidi, J. Smart Algorithms to Control a Variable Speed Wind Turbine. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 88–95.
  6. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
  7. Kaur, R.; Arora, S. Nature Inspired Range Based Wireless Sensor Node Localization Algorithms. Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 7–17.
  8. Amat, S.; Busquier, S.; Negra, M. Adaptive approximation of nonlinear operators. Numer. Funct. Anal. Optim. 2004, 25, 397–405.
  9. Argyros, I.K. On the Newton–Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332.
  10. Argyros, I.K.; González, D. Extending the applicability of Newton's method for k-Fréchet differentiable operators in Banach spaces. Appl. Math. Comput. 2014, 234, 167–178.
  11. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton's method. J. Complex. 2012, 28, 364–387.
  12. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton's method. Appl. Math. Comput. 2013, 225, 372–386.
  13. Ezquerro, J.A.; Hernández, M.A. How to improve the domain of parameters for Newton's method. Appl. Math. Lett. 2015, 48, 91–101.
  14. Gutiérrez, J.M.; Magreñán, Á.A.; Romero, N. On the semi-local convergence of Newton–Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88.
  15. Divya, J. Families of Newton-like methods with fourth-order convergence. Int. J. Comput. Math. 2013, 90, 1072–1082.
Table 1. Convergence of Newton's method choosing z_0 = 1, for different values of p.

p        0.41        0.43        0.45
z_1      0.803333    0.810000    0.816667
z_2      0.747329    0.758463    0.769351
z_3      0.742922    0.754802    0.766321
z_4      0.742896    0.754784    0.766309
z_5      0.742896    0.754784    0.766309
