
Extending the Convergence Domain of Methods of Linear Interpolation for the Solution of Nonlinear Equations

1 Department of Mathematics, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universitetska Str. 1, 79000 Lviv, Ukraine
3 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universitetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Symmetry 2020, 12(7), 1093; https://doi.org/10.3390/sym12071093
Submission received: 26 May 2020 / Revised: 27 June 2020 / Accepted: 29 June 2020 / Published: 1 July 2020
(This article belongs to the Special Issue Iterative Numerical Functional Analysis with Applications)

Abstract

Solving equations in abstract spaces is important since many problems from diverse disciplines require it. The solutions of these equations cannot be obtained in closed form. That difficulty forces us to develop ever-improving iterative methods. In this paper we improve the applicability of such methods. Our technique is very general and can be used to expand the applicability of other methods. We use two methods of linear interpolation, namely the Secant method and the Kurchatov method. Previous investigations of Kurchatov's method were carried out under rather strict conditions. In this work, using the majorant principle of Kantorovich and our new idea of the restricted convergence domain, we present an improved semilocal convergence analysis of these methods. We establish the quadratic order of convergence of the Kurchatov method and the order $\frac{1+\sqrt{5}}{2}$ for the Secant method. We also obtain improved a priori and a posteriori estimates of the methods' error.

1. Introduction

We consider solving the equation
$F(x) = 0$   (1)
using iterative methods. Here $F : \Omega \subseteq B_1 \to B_2$, where $B_1, B_2$ are Banach spaces and $\Omega$ is an open region of $B_1$.
The Secant method
$x_{n+1} = x_n - [x_n, x_{n-1}; F]^{-1} F(x_n), \quad n = 0, 1, \ldots,$   (2)
is a popular device for solving nonlinear equations. Its popularity is due to the simplicity of the method, the small amount of computation per iteration, and the fact that its iterative formula uses the value of the operator from only the two previous iterations. Many works are dedicated to this method [1,2,3]. In [4] the Secant method is used for solving the nonlinear least squares problem. The Kurchatov method of linear interpolation
$x_{n+1} = x_n - [2x_n - x_{n-1}, x_{n-1}; F]^{-1} F(x_n), \quad n = 0, 1, \ldots,$   (3)
is less known. This method has the same order of convergence as Newton's method but does not require the calculation of derivatives. In (2) and (3), $[u, v; F]$ denotes a divided difference of the first order of the operator $F$ at the points $u$ and $v$ [5,6].
In this work we investigate the Secant method and Kurchatov's method using Kantorovich's principle of majorants. This principle was first used by L.V. Kantorovich to study the convergence of the classical and modified Newton's method, by constructing a real quadratic function that majorizes the nonlinear operator [7]. Accordingly, the iterative sequence for the nonlinear operator is majorized by a convergent sequence for a nonlinear equation in one variable. Later, nonlinear majorants were built for investigating other methods of solving nonlinear functional equations. In [8], with the help of the majorant principle, a method with order of convergence $1.839\ldots$, whose iterative formula uses the value of the operator from the three previous iterations, is investigated. Specifically, a real cubic polynomial that majorizes the given nonlinear operator is built, and Lipschitz conditions are imposed on the divided-difference operator of the second order [8,9]. We investigate the Secant method under different conditions imposed on the nonlinear operator. In particular, if the Lipschitz condition for divided differences of the first order is fulfilled, a quadratic majorizing function of one variable is built, and if the Lipschitz condition for the divided-difference operator of the second order is fulfilled, a cubic majorizing function is built. A cubic majorizing function for Kurchatov's method is also built. The methods of linear interpolation applied to these functions produce numerical sequences that majorize, in norm, the iterative sequences produced by applying the methods to the nonlinear operator. In all cases, a priori and a posteriori error estimates of the linear interpolation methods are also provided. A minimal computational sketch of iterations (2) and (3) is given below.
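To make the two iterations concrete, the following minimal sketch implements (2) and (3) for a scalar equation, where the divided difference $[u, v; F]$ reduces to $(F(u) - F(v))/(u - v)$. The test equation and the starting points are those of Example 1 in Section 6; the tolerance and iteration cap are illustrative choices, not prescribed by the analysis.

```python
def divided_difference(F, u, v):
    # First-order divided difference [u, v; F] for scalar F
    return (F(u) - F(v)) / (u - v)

def secant(F, x_prev, x, tol=1e-12, max_iter=50):
    # Secant method (2): x_{n+1} = x_n - [x_n, x_{n-1}; F]^{-1} F(x_n)
    for _ in range(max_iter):
        x_prev, x = x, x - F(x) / divided_difference(F, x, x_prev)
        if abs(x - x_prev) < tol:
            break
    return x

def kurchatov(F, x_prev, x, tol=1e-12, max_iter=50):
    # Kurchatov's method (3): x_{n+1} = x_n - [2x_n - x_{n-1}, x_{n-1}; F]^{-1} F(x_n)
    for _ in range(max_iter):
        x_prev, x = x, x - F(x) / divided_difference(F, 2 * x - x_prev, x_prev)
        if abs(x - x_prev) < tol:
            break
    return x

F = lambda x: x**3 - 9 * x + 3      # the function of Example 1 below
print(secant(F, 0.5001, 0.5))       # -> approx 0.33761
print(kurchatov(F, 0.5001, 0.5))    # -> approx 0.33761
```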

2. Divided Differences and Their Properties

Let us assume that x , y and z are three points in region Ω .
Definition 1
([6]). Let $F$ be a nonlinear operator defined on a subset $\Omega$ of a Banach space $B_1$ with values in a Banach space $B_2$, and let $x, y$ be two points of $\Omega$. A linear operator from $B_1$ to $B_2$, denoted by $[x, y; F]$, which satisfies the conditions:
(1) for all fixed points $x, y \in \Omega$
$[x, y; F](x - y) = F(x) - F(y),$   (4)
(2) if the Fréchet derivative $F'(x)$ exists, then
$[x, x; F] = F'(x),$   (5)
is called a divided difference of $F$ at the points $x$ and $y$.
Note that (4) and (5) do not uniquely determine the divided difference with the exception of the case when B 1 is one-dimensional. For specific spaces, the differences are defined in Section 6.
Definition 2
([8]). The operator $[x, y, z; F]$ is called a divided difference of the second order of $F$ at the points $x$, $y$ and $z$ if
$[x, y, z; F](y - z) = [x, y; F] - [x, z; F].$   (6)
We assume that $[x, y; F]$ and $[x, y, z; F]$ satisfy Lipschitz-type conditions of the following form:
$\|[x, y; F] - [x, z; F]\| \le p\,\|y - z\|, \quad x, y, z \in \Omega,$   (7)
$\|[y, x; F] - [z, x; F]\| \le \bar{p}\,\|y - z\|, \quad x, y, z \in \Omega,$   (8)
$\|[x, y, z; F] - [u, y, z; F]\| \le q\,\|x - u\|, \quad u, x, y, z \in \Omega.$   (9)
If the divided difference $[x, y; F]$ of $F$ satisfies (7) or (8), then $F$ is Fréchet differentiable on $\Omega$. Moreover, if both (7) and (8) are fulfilled, then the Fréchet derivative is Lipschitz continuous on $\Omega$ with Lipschitz constant $L = p + \bar{p}$ [8].
Let us denote $U(x_0, \tau) = \{x \in \Omega : \|x - x_0\| < \tau\}$ and $\bar{U}(x_0, \tau) = \{x \in \Omega : \|x - x_0\| \le \tau\}$. The semilocal convergence of the Secant method uses the conditions $(C)$:
($c_1$) $F : \Omega \subseteq B_1 \to B_2$ is a nonlinear operator, and $[\cdot, \cdot; F] : \Omega \times \Omega \to L(B_1, B_2)$ denotes a first-order divided difference on $\Omega$.
($c_2$) Let $x_0, x_{-1} \in \Omega$. Suppose that the linear operator $A_0 = [x_0, x_{-1}; F]$ is invertible, and let $a, c$ be nonnegative numbers such that
$\|x_0 - x_{-1}\| \le a, \quad \|A_0^{-1} F(x_0)\| \le c.$
($c_3$) Assume that the following conditions hold on $\Omega$:
$\|A_0^{-1}(A_0 - [x_0, x_0; F])\| \le \tilde{p}\,\|x_0 - x_{-1}\|$ for some $\tilde{p} > 0$,
or
$\|A_0^{-1}(A_0 - [x_0, x_0; F])\| \le b_0.$
Moreover, assume that the following Lipschitz conditions hold for all $u, v, z \in \Omega$ and some $\tilde{p}_0 > 0$, $\tilde{\tilde{p}}_0 > 0$:
$\|A_0^{-1}([x_0, x_0; F] - [x_0, v; F])\| \le \tilde{p}_0\,\|v - x_0\|$
and
$\|A_0^{-1}([x_0, u; F] - [z, u; F])\| \le \tilde{\tilde{p}}_0\,\|z - x_0\|.$
Set $b = \min\{b_0, \tilde{p} a\}$. Define
$\Omega_0 = \Omega \cap U(x_0, \tilde{r}), \quad \tilde{r} = \frac{1 - b}{\tilde{p}_0 + \tilde{\tilde{p}}_0},$
provided $b < 1$.
($c_4$) The following Lipschitz conditions hold on $\Omega_0$ for some $\bar{p}_0 > 0$ and $\bar{\bar{p}}_0 > 0$:
$\|A_0^{-1}([z, u; F] - [z, v; F])\| \le \bar{p}_0\,\|u - v\|$
and
$\|A_0^{-1}([u, z; F] - [v, z; F])\| \le \bar{\bar{p}}_0\,\|u - v\|.$
Set $p = \max\{\bar{p}_0, \bar{\bar{p}}_0\}$.
($c_5$) Suppose $pa < 1$, and define $\bar{r} = \frac{1 - pa}{2p}$ and
$\bar{h}(t) = -pt^2 + (1 - pa)t.$
Moreover, suppose
$c \le \bar{h}(\bar{r}) = \frac{(1 - pa)^2}{4p}.$   (10)
($c_6$) $U_0 = \bar{U}(x_0, \bar{r}_0) \subseteq \Omega$, where $\bar{r}_0$ is the unique root in $(0, \bar{r}]$ of the equation $\bar{h}(t) - c = 0$.
Remark 1.
The following Lipschitz condition is used in the literature for the study of iterative methods using divided differences [1,2,3,4,8,9,10,11,12,13,14,15,16]: for some $p_0 > 0$,
$\|A_0^{-1}([x, y; F] - [u, v; F])\| \le p_0(\|x - u\| + \|y - v\|),$
although it is stronger than what is actually needed, since the tighter conditions $(C)$ suffice (see the proofs that follow).
By these definitions, we have
$\tilde{p}_0 \le p_0, \quad \tilde{\tilde{p}}_0 \le p_0, \quad p \le p_0, \quad \Omega_0 \subseteq \Omega.$
The sufficient semilocal convergence criterion in the literature, arrived at in a different way and corresponding to $(c_5)$ [1], is
$c \le h(r) = \frac{(1 - p_0 a)^2}{4 p_0},$   (11)
where
$h(t) = -p_0 t^2 + (1 - p_0 a)t, \quad r = \frac{1 - p_0 a}{2 p_0},$
provided that $p_0 a < 1$ (a condition stronger than $pa < 1$).
Then, we have
$c \le h(r) \implies c \le \bar{h}(\bar{r}),$   (12)
but not necessarily vice versa, since $r \le \bar{r}$ and $\bar{h}(t) \ge h(t)$ for each $t \in [0, r]$.
Hence, the applicability of the Secant method is extended, and under no additional conditions, since all new Lipschitz conditions are specializations of the old one. In practice, the computation of $p_0$ requires the computation of the other $p$ constants as special cases. Some more advantages are reported after Proposition 1. It is also worth noticing that $(c_3)$ and $(c_4)$ help define $\Omega_0$, through which $\bar{p}_0$, $\bar{\bar{p}}_0$ and $p$ are defined too. With the old approach, $p$ depends only on $\Omega$, which contains $\Omega_0$. In our approach, the iterates $x_n$ remain in $\Omega_0$ (not in $\Omega$, as used in [1]). That is why our new $p$ constants are at least as tight as $p_0$. Therein lies the novelty of our paper: this new idea helps us extend the applicability of these methods. Since the new constants are specializations of the old ones, no additional conditions are needed to obtain these extensions.
It is worth noting from the proof of Theorem 1 that $\Omega_0$ can also be defined as $\Omega_0 = \Omega \cap U\!\left(x_0, \frac{1 - b}{\tilde{p}_0}\right)$ or $\Omega_0 = \Omega \cap U\!\left(x_0, \frac{1 - b}{\tilde{\tilde{p}}_0}\right)$.
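As a computational illustration, the sketch below evaluates conditions $(c_3)$, $(c_5)$ and $(c_6)$ for given constants. The constants $a$, $c$, $b_0$, $\tilde{p}$, $\tilde{p}_0$, $\tilde{\tilde{p}}_0$ and $p$ are assumed to have been estimated beforehand for a concrete operator $F$, as is done in Section 6.

```python
import math

def secant_semilocal_radii(a, c, p, p_t, p0_t, p00_t, b0):
    """Check (c3)-(c6) and return (r_tilde, r_bar, r_bar_0).
    p_t, p0_t, p00_t stand for the constants p~, p~0, p~~0 of (c3)."""
    b = min(b0, p_t * a)
    if b >= 1:
        raise ValueError("condition (c3) fails: b >= 1")
    r_tilde = (1 - b) / (p0_t + p00_t)      # Omega_0 = Omega ∩ U(x0, r_tilde)
    if p * a >= 1:
        raise ValueError("condition (c5) fails: p*a >= 1")
    r_bar = (1 - p * a) / (2 * p)
    if c > (1 - p * a) ** 2 / (4 * p):      # criterion (10)
        raise ValueError("criterion (10) fails: c > h_bar(r_bar)")
    # r_bar_0 is the unique root of h_bar(t) = c in (0, r_bar], i.e. the
    # smaller root of p t^2 - (1 - p a) t + c = 0
    r_bar_0 = ((1 - p * a) - math.sqrt((1 - p * a) ** 2 - 4 * p * c)) / (2 * p)
    return r_tilde, r_bar, r_bar_0
```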
Theorem 1.
Suppose that the conditions $(C)$ hold. Then, the iterative procedure (2) is well defined and the sequence generated by it converges to a root $x^*$ of the equation $F(x) = 0$. Moreover, the following error estimate holds:
$\|x_n - x^*\| \le t_n, \quad n = -1, 0, 1, \ldots,$   (13)
where
$t_{-1} = \bar{r}_0 + a, \quad t_0 = \bar{r}_0,$   (14)
$t_{n+1} = \frac{p\, t_n t_{n-1}}{1 - pa - 2p\bar{r}_0 + p(t_n + t_{n-1})}, \quad n = 0, 1, \ldots$   (15)
The semilocal convergence of the discussed methods has so far been based on the verification of criterion (11). If this criterion is not satisfied, there is no guarantee that the methods converge. We have now replaced (11) by (10), which is weaker (see (12)).
Proof. 
Notice that the sequence $\{t_n\}_{n \ge 0}$ is generated by applying the iterative method (2) to the real polynomial
$f(t) = pt^2 + (1 - pa - 2p\bar{r}_0)t.$   (16)
It is easy to see that the sequence monotonically converges to zero. In addition, we have
$t_{n+1} - t_{n+2} = \left(\frac{f(t_n) - f(t_{n+1})}{t_n - t_{n+1}}\right)^{-1} f(t_{n+1}) = \frac{p(t_{n-1} - t_{n+1})}{1 - pa - p(2t_0 - t_n - t_{n+1})}\,(t_n - t_{n+1}).$
We prove by induction that the iterative method is well defined and that
$\|x_n - x_{n+1}\| \le t_n - t_{n+1}.$   (17)
Using $(c_2)$, $(c_5)$, (13), (14) and
$t_0 - t_1 = \frac{t_{-1} - t_0}{f(t_{-1}) - f(t_0)}\, f(t_0) = \frac{(t_{-1} - t_0)\, f(t_0)}{p(t_{-1}^2 - t_0^2) + (1 - pa - 2p\bar{r}_0)(t_{-1} - t_0)} = f(t_0) = f(\bar{r}_0) = \bar{h}(\bar{r}_0) = c,$
it follows that (17) holds for $n = -1, 0$. Let $k$ be a nonnegative integer and suppose that (17) is fulfilled for all $n \le k$. Setting $A_{k+1} = [x_{k+1}, x_k; F]$, by $(c_3)$ we have
$\|I - A_0^{-1} A_{k+1}\| = \|A_0^{-1}(A_0 - A_{k+1})\| = \|A_0^{-1}([x_0, x_{-1}; F] - [x_0, x_0; F] + [x_0, x_0; F] - [x_0, x_k; F] + [x_0, x_k; F] - [x_{k+1}, x_k; F])\| \le b + \tilde{p}_0\|x_0 - x_k\| + \tilde{\tilde{p}}_0\|x_0 - x_{k+1}\| \le b + \tilde{p}_0(t_0 - t_k) + \tilde{\tilde{p}}_0(t_0 - t_{k+1}) < b + (\tilde{p}_0 + \tilde{\tilde{p}}_0)t_0 \le b + (\tilde{p}_0 + \tilde{\tilde{p}}_0)\bar{r} \le b + (\tilde{p}_0 + \tilde{\tilde{p}}_0)\frac{1 - b}{\tilde{p}_0 + \tilde{\tilde{p}}_0} = 1.$
In view of the Banach lemma [7], $A_{k+1}$ is invertible, and
$\|A_{k+1}^{-1} A_0\| \le \left(1 - pa - p(\|x_0 - x_{k+1}\| + \|x_0 - x_k\|)\right)^{-1}.$   (20)
Next, we prove that the iterative method is well defined for $n = k + 1$. We get
$\|x_{k+1} - x_{k+2}\| = \|A_{k+1}^{-1} F(x_{k+1})\| = \|A_{k+1}^{-1}(F(x_{k+1}) - F(x_k) - A_k(x_{k+1} - x_k))\| \le \|A_{k+1}^{-1} A_0\|\,\|A_0^{-1}([x_{k+1}, x_k; F] - A_k)\|\,\|x_k - x_{k+1}\|.$   (21)
By condition $(c_4)$, we have
$\|A_0^{-1}([x_{k+1}, x_k; F] - A_k)\| = \|A_0^{-1}([x_{k+1}, x_k; F] - [x_k, x_k; F] + [x_k, x_k; F] - [x_k, x_{k-1}; F])\| \le p(\|x_k - x_{k+1}\| + \|x_{k-1} - x_k\|).$   (22)
Then, it follows from (20)–(22) that
$\|x_{k+1} - x_{k+2}\| \le \frac{p(\|x_k - x_{k+1}\| + \|x_{k-1} - x_k\|)\,\|x_k - x_{k+1}\|}{1 - pa - p(\|x_0 - x_{k+1}\| + \|x_0 - x_k\|)}.$
In view of (16) and (17), we obtain
$\|x_{k+1} - x_{k+2}\| \le t_{k+1} - t_{k+2}.$
Hence, the iterative method is well defined for each $n$, and it follows that
$\|x_n - x_k\| \le t_n - t_k, \quad -1 \le n \le k.$   (23)
Estimate (23) shows that $\{x_n\}_{n \ge 0}$ is a Cauchy sequence in the space $B_1$, so it converges. Letting $k$ tend to infinity in (23), we get (13). It is easy to see that $x^*$ is a root of the equation $F(x) = 0$ because, according to (22), we can write
$\|A_0^{-1} F(x_{k+1})\| = \|A_0^{-1}([x_{k+1}, x_k; F] - A_k)(x_{k+1} - x_k)\| \le p(\|x_k - x_{k+1}\| + \|x_k - x_{k-1}\|)\,\|x_k - x_{k+1}\|.$   (24)
 □
Corollary 1.
The convergence order of the iterative Secant method (2) is equal to $\frac{1 + \sqrt{5}}{2}$.
Proof. 
From equality (15) it follows that the order of convergence of the real sequence $\{t_n\}_{n \ge 0}$ is the unique positive root of the equation $s^2 - s - 1 = 0$, i.e., $s^* = \frac{1 + \sqrt{5}}{2} \approx 1.618$. Given inequality (13), according to Kantorovich's majorant principle, we obtain that the sequence $\{x_n\}_{n \ge 0}$ also has order of convergence $\frac{1 + \sqrt{5}}{2}$. □
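The a priori bounds of Theorem 1 are straightforward to tabulate once $\bar{r}_0$ is known. The sketch below generates the majorizing sequence (14)–(15), so that, by (13), $\|x_n - x^*\| \le t_n$; the constants are assumed to have been obtained as in the previous sketch.

```python
def secant_majorizing_sequence(a, p, r_bar_0, n_steps=10):
    # t_{-1} and t_0 from (14); recurrence (15) for the remaining terms
    t_prev, t = r_bar_0 + a, r_bar_0
    seq = [t_prev, t]
    for _ in range(n_steps):
        t_next = p * t * t_prev / (1 - p * a - 2 * p * r_bar_0 + p * (t + t_prev))
        seq.append(t_next)
        t_prev, t = t, t_next
    return seq                               # [t_{-1}, t_0, t_1, ...]
```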
Concerning the uniqueness of the solution, we have the following result.
Proposition 1.
Under the conditions $(C)$, suppose further that for some $d > 0$
$\|A_0^{-1}(A_0 - [x^*, v; F])\| \le d(\|x_0 - x^*\| + \|x_{-1} - v\|)$
holds for all $v \in \Omega_1$, where $\Omega_1 = \Omega \cap U(x_0, \bar{R})$, $\bar{R} = \frac{1 - d(a + \bar{r}_0)}{d}$, provided $d(a + \bar{r}_0) < 1$, and where $x^*$ is a solution of the equation $F(x) = 0$. Then, $x^*$ is the only solution of the equation $F(x) = 0$ in the set $\Omega_1$.
Proof. 
Let $M = [x^*, y^*; F]$, where $y^* \in \Omega_1$ and $F(y^*) = 0$. Then, we get
$\|A_0^{-1}(A_0 - [x^*, y^*; F])\| \le d(\|x_0 - x^*\| + \|x_{-1} - y^*\|) < d(\bar{r}_0 + a + \bar{R}) = 1,$
so, by the Banach lemma, $M$ is invertible, and $x^* = y^*$ follows from $0 = F(x^*) - F(y^*) = [x^*, y^*; F](x^* - y^*)$.  □
Remark 2.
The result in Proposition 1 improves the corresponding one in the literature using the old condition, since $R = \frac{1 - p_0 a}{2 p_0} \le \bar{R}$. Hence, we obtain a larger ball inside which the uniqueness of the solution $x^*$ is guaranteed.
If, additionally, the second divided difference of the operator $F$ exists and satisfies the Lipschitz condition with constant $q$, then the majorizing function for $F$ is a cubic polynomial, and the following theorem holds.
Theorem 2.
Under the conditions $(C)$ (except $(c_5)$), further suppose:
($h_1$) $pa + qa^2 < 1$; denote
$s = \left[(p + qa)^2 + 3q(1 - pa - qa^2)\right]^{1/2}, \quad \bar{r} = \frac{1 - pa - qa^2}{p + qa + s}.$
Let $\bar{h}$ be the real polynomial
$\bar{h}(t) = -qt^3 - (p + qa)t^2 + (1 - pa - qa^2)t.$
Suppose that the following inequality is satisfied
$c \le \frac{\bar{h}(\bar{r})}{1 - qa^2} = \frac{1}{3} \cdot \frac{p + qa + 2s}{1 - qa^2}\left(\frac{1 - pa - qa^2}{p + qa + s}\right)^2$
and that the closed ball $U_0 = \bar{U}(x_0, \bar{r}_0) \subseteq \Omega$, where $\bar{r}_0 \in (0, \bar{r}]$ is a root of the equation $\bar{h}(t) = c(1 - qa^2)$.
($h_2$) For all $x, y, z, u \in \Omega_0$,
$\|A_0^{-1}([x, y, z; F] - [u, y, z; F])\| \le q\,\|x - u\|.$
($h_3$) The conditions of Proposition 1 hold on $\Omega_0$.
Then, the iterative method (2) is well defined and the sequence generated by it converges to the solution $x^*$ of the equation $F(x) = 0$. Moreover, the following estimate is satisfied:
$\|x_n - x^*\| \le t_n, \quad n = -1, 0, 1, 2, \ldots,$   (27)
where
$t_0 = \bar{r}_0, \quad t_{-1} = \bar{r}_0 + a,$
$a_0 = p + 3q\bar{r}_0 + qa, \quad b_0 = 3q\bar{r}_0^2 - 2a_0\bar{r}_0 - qa^2 - pa + 1,$
$t_{n+1} = t_n t_{n-1} \cdot \frac{a_0 - q(t_n + t_{n-1})}{b_0 + a_0(t_n + t_{n-1}) - q(t_n^2 + t_{n-1}t_n + t_{n-1}^2)}, \quad n = 0, 1, 2, \ldots$
The proof is analogous to the proof of Theorem 1.
Remark 3. 
($a$) The majorizing sequences $\{t_n\}$ are more precise than the ones in [1] (which use $(p_0, q_0)$ instead of $(p, q)$, respectively).
($b$) We report similar advantages in the case of Theorem 2; see, e.g., [1], where instead of $(h_2)$ the following condition is used on $\Omega$:
$\|A_0^{-1}([x, y, z; F] - [u, y, z; F])\| \le q_0\,\|x - u\|.$
We have
$q \le q_0$
(since $\Omega_0 \subseteq \Omega$).

3. A Posteriori Estimation of Error of the Secant Method

If the constants $a, c, p, q$ are known, then we can compute the sequence $\{t_n\}_{n \ge 0}$ before generating the sequence $\{x_n\}_{n \ge 0}$ by the iterative Secant algorithm. Inequalities (13) and (27) give a priori estimates of the error of the Secant method. We now obtain a posteriori estimates of the method's error, which are sharper than the a priori ones.
Theorem 3.
Let the conditions of Theorem 1 hold. Denote
$e_n = p(\|x_n - x_{n-1}\| + \|x_{n-1} - x_{n-2}\|)\,\|x_n - x_{n-1}\|,$
$g_n = 1 - pa - 2p\|x_n - x_0\|.$
Then, the following estimate holds for $n = 1, 2, 3, \ldots$:
$\|x_n - x^*\| \le \frac{2 e_n}{g_n + (g_n^2 - 4 p e_n)^{1/2}} \le t_n.$
Proof. 
By condition $(c_4)$, we have
$\|I - A_0^{-1}[x_n, x^*; F]\| = \|A_0^{-1}([x_0, x_{-1}; F] - [x_0, x_0; F] + [x_0, x_0; F] - [x_n, x_0; F] + [x_n, x_0; F] - [x_n, x^*; F])\| \le p(\|x_0 - x_{-1}\| + \|x_n - x_0\| + \|x_0 - x^*\|) \le pa + p(2\|x_n - x_0\| + \|x_n - x^*\|) \le pa + p(2t_0 - 2t_n + t_n) = pa + 2pt_0 - pt_n < pa + 2pt_0.$
It is easy to see that $pa + 2pt_0 \le 1$. Then, according to the Banach lemma, $[x_n, x^*; F]$ is invertible, and
$\|[x_n, x^*; F]^{-1} A_0\| \le \left(1 - p(\|x_0 - x_{-1}\| + \|x_n - x_0\| + \|x_0 - x^*\|)\right)^{-1} \le (g_n - p\|x_n - x^*\|)^{-1}.$   (31)
From (4) we can write
$x_n - x^* = [x_n, x^*; F]^{-1}(F(x_n) - F(x^*)) = ([x_n, x^*; F]^{-1} A_0)\, A_0^{-1} F(x_n).$
Using (24) and (31), we obtain the inequality
$\|x_n - x^*\| \le (g_n - p\|x_n - x^*\|)^{-1} e_n,$
from which it follows that
$\|x_n - x^*\| \le 2\left\{g_n + (g_n^2 - 4 p e_n)^{1/2}\right\}^{-1} e_n,$
and
$t_n = \frac{p t_n^2 + (1 - pa - 2p\bar{r}_0)t_n}{p t_n + (1 - pa - 2p\bar{r}_0)} = \frac{p(t_{n-2} - t_n)}{1 - pa - 2p(t_0 - t_n) - p t_n}\,(t_{n-1} - t_n) \ge \frac{e_n}{g_n - p\|x_n - x^*\|} \ge \|x_n - x^*\|.$   (32)
 □
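In practice, the bound of Theorem 3 can be evaluated along the computed iterates. The sketch below does this for iterates stored as NumPy arrays; the list xs and the constants $p$, $a$ are assumptions supplied by the user.

```python
import math
import numpy as np

def secant_a_posteriori(xs, p, a, norm=np.linalg.norm):
    """xs = [x_{-1}, x_0, x_1, ...]; returns {n: bound} for n >= 1, where
    bound = 2 e_n / (g_n + sqrt(g_n^2 - 4 p e_n)) as in Theorem 3."""
    x = lambda n: xs[n + 1]                  # x(n) is the iterate x_n
    bounds = {}
    for n in range(1, len(xs) - 1):
        step = norm(x(n) - x(n - 1))
        e_n = p * (step + norm(x(n - 1) - x(n - 2))) * step
        g_n = 1 - p * a - 2 * p * norm(x(n) - x(0))
        bounds[n] = 2 * e_n / (g_n + math.sqrt(g_n ** 2 - 4 * p * e_n))
    return bounds
```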
If the second divided difference of the operator $F$ exists and satisfies the Lipschitz condition with constant $q$, then the following theorem holds.
Theorem 4.
Let the conditions of Theorem 2 hold. Denote
$e_n = p(\|x_n - x_{n-1}\| + \|x_{n-2} - x_{n-1}\|) + q\|x_{n-1} - x_{n-2}\|^2,$
$g_n = 1 - pa - qa^2 - 2p\|x_n - x_0\|.$
Then, the following estimate holds for $n = 1, 2, 3, \ldots$:
$\|x_n - x^*\| \le \frac{2 e_n}{g_n + (g_n^2 - 4 p e_n)^{1/2}} \le t_n.$
Proof. 
The proof of this theorem is similar to that of the previous theorem, but instead of inequalities (32), the following majorizing inequalities are used:
$t_n = \frac{-q t_n^3 + a_0 t_n^2 + b_0 t_n}{-q t_n^2 + a_0 t_n + b_0} \ge \frac{p(t_{n-2} - t_n) + q(t_{n-2} - t_{n-1})^2}{1 - pa - qa^2 - 2p(t_0 - t_n) - p t_n} \ge \frac{e_n}{g_n - p\|x_n - x^*\|} \ge \|x_n - x^*\|.$
 □

4. Semilocal Convergence of the Kurchatov’s Method

Sufficient conditions for the semilocal convergence and the speed of convergence of Kurchatov's method (3) are given by the following theorem.
Theorem 5.
Suppose that conditions $(c_1)$–$(c_4)$, $(h_2)$ and $(h_3)$ hold, but with $A_0 = [2x_0 - x_{-1}, x_{-1}; F]$ and $\Omega_0 = \Omega \cap U(x_0, \tilde{r})$, where $\tilde{r} = \frac{1 - \tilde{q}a^2}{3\tilde{p}_0 + \tilde{\tilde{p}}_0}$, provided that $\tilde{q}a^2 < 1$ and
$\|A_0^{-1}([2x_0 - x_{-1}, x_{-1}, x_0; F] - [x_0, x_{-1}, x_0; F])\| \le \tilde{q}\,\|x_0 - x_{-1}\|.$
Let us assume that $2qa^2 < 1$ and denote
$s = \left[(p + qa)^2 + 3q(1 - qa^2)\right]^{1/2}, \quad \bar{r} = \frac{1 - qa^2}{p + qa + s}.$
Let $\bar{h}$ be the real polynomial
$\bar{h}(t) = -qt^3 - (p + qa)t^2 + (1 - qa^2)t.$
Assume
$c(1 - 2qa^2) \le \bar{h}(\bar{r}) = \frac{1}{3}(p + qa + 2s)\left(\frac{1 - qa^2}{p + qa + s}\right)^2,$
that $x_0, x_{-1} \in U_0$, and that $V_0 = \bar{U}(x_0, 3\bar{r}_0) \subseteq \Omega$, where $\bar{r}_0 \in (0, \bar{r}]$ is the unique root of the equation $\bar{h}(t) - c(1 - 2qa^2) = 0$. Then, the iterative Kurchatov method (3) is well defined and the sequence generated by it converges to the solution $x^*$ of Equation (1). Moreover, the following inequality is satisfied:
$\|x_n - x^*\| \le t_n, \quad n = -1, 0, 1, 2, \ldots,$   (33)
where
$t_0 = \bar{r}_0, \quad t_{-1} = \bar{r}_0 + a,$
$a_0 = p + 3q\bar{r}_0 + qa, \quad b_0 = 3q\bar{r}_0^2 - 2a_0\bar{r}_0 - qa^2 + 1,$
$t_{n+1} = t_n \cdot \frac{a_0 t_n - q(t_n - t_{n-1})^2 - 2qt_n^2}{b_0 + 2a_0 t_n - q(t_n - t_{n-1})^2 - 3qt_n^2}, \quad n = 0, 1, 2, \ldots$   (34)
Proof. 
The proof of the theorem is carried out with the help of Kantorovich majorants, as in Theorem 1, but we also use the crucial estimate
$\|I - A_0^{-1} A_{k+1}\| = \|A_0^{-1}(A_0 - A_{k+1})\| = \|A_0^{-1}([2x_0 - x_{-1}, x_{-1}; F] - [x_0, x_{-1}; F] + [x_0, x_{-1}; F] - [x_0, x_0; F] + [x_0, x_0; F] - [x_{k+1}, x_0; F] + [x_{k+1}, x_0; F] - [x_{k+1}, x_k; F] + [x_{k+1}, x_k; F] - [2x_{k+1} - x_k, x_k; F])\|$
$= \|A_0^{-1}(([2x_0 - x_{-1}, x_{-1}, x_0; F] - [x_0, x_{-1}, x_0; F])(x_0 - x_{-1}) + [x_0, x_0; F] - [x_{k+1}, x_0; F] + [x_{k+1}, x_0; F] - [x_{k+1}, x_k; F] + [x_{k+1}, x_k; F] - [2x_{k+1} - x_k, x_k; F])\|$
$\le \tilde{q}a^2 + (\tilde{p}_0 + \tilde{\tilde{p}}_0)\|x_0 - x_{k+1}\| + 2\tilde{p}_0\|x_0 - x_k\|.$
 □
Corollary 2.
The convergence order of the iterative Kurchatov procedure (3) is quadratic.
Proof. 
Since, according to (34), the sequence $\{t_n\}_{n \ge 0}$ converges to zero with order not higher than quadratic, there exist $C \ge 0$ and $N > 0$ such that for all $n \ge N$ the following inequality holds:
$(t_n - t_{n-1})^2 \le t_{n-1}^2 \le C t_n.$
Given this inequality, the quadratic order of convergence of the sequence $\{t_n\}_{n \ge 0}$ follows from (34), and by (33) the quadratic convergence order of the sequence $\{x_n\}_{n \ge 0}$ of Kurchatov's method (3) follows. □
Thus, Kurchatov's method has quadratic convergence order, like Newton's method, but does not require the calculation of derivatives.
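A sketch of the a priori bounds of Theorem 5 follows; it generates the scalar sequence (34), so that, by (33), $\|x_n - x^*\| \le t_n$. The constants $a$, $p$, $q$ and the radius $\bar{r}_0$ are assumed to satisfy the hypotheses of the theorem.

```python
def kurchatov_majorizing_sequence(a, p, q, r_bar_0, n_steps=10):
    a0 = p + 3 * q * r_bar_0 + q * a
    b0 = 3 * q * r_bar_0 ** 2 - 2 * a0 * r_bar_0 - q * a ** 2 + 1
    t_prev, t = r_bar_0 + a, r_bar_0         # t_{-1}, t_0
    seq = [t_prev, t]
    for _ in range(n_steps):
        num = a0 * t - q * (t - t_prev) ** 2 - 2 * q * t ** 2
        den = b0 + 2 * a0 * t - q * (t - t_prev) ** 2 - 3 * q * t ** 2
        t_prev, t = t, t * num / den         # recurrence (34)
        seq.append(t)
    return seq                               # [t_{-1}, t_0, t_1, ...]
```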
Remark 4.
We obtain similar advantages as the ones reported earlier for Theorem 2.

5. A Posteriori Estimation of Error of the Kurchatov’s Method

If the constants $a, c, p, q$ are known, then we can compute the sequence $\{t_n\}_{n \ge 0}$ before generating the sequence $\{x_n\}_{n \ge 0}$ by the iterative algorithm (3). Inequality (33) gives an a priori estimate of the error of Kurchatov's method. We now obtain an a posteriori estimate of the method's error, which is sharper than the a priori one.
Theorem 6.
Let the conditions of Theorem 5 be fulfilled. Denote
$e_n = p\|x_n - x_{n-1}\|^2 + q\|x_{n-1} - x_{n-2}\|^2\,\|x_{n-1} - x_n\|, \quad g_n = 1 - 2p\|x_n - x_0\| - qa^2.$
Then, for $n = 1, 2, 3, \ldots$ the following estimate holds:
$\|x_n - x^*\| \le \frac{2 e_n}{g_n + (g_n^2 - 4 p e_n)^{1/2}} \le t_n.$
Proof. 
The proof of the theorem is similar to the corresponding one in [8]. From conditions (7) and (9), we get
$\|I - A_0^{-1}[x_n, x^*; F]\| = \|A_0^{-1}(A_0 - [x_0, x_0; F] + [x_0, x_0; F] - [x_n, x^*; F])\| \le qa^2 + p(\|x_0 - x_n\| + \|x_0 - x^*\|) \le qa^2 + p(t_0 - t_n + t_0) < qa^2 + 2pt_0.$   (35)
It is easy to see that $qa^2 + 2pt_0 \le 1$. Then, by the Banach lemma, $[x_n, x^*; F]$ is invertible and
$\|[x_n, x^*; F]^{-1} A_0\| \le (g_n - p\|x_n - x^*\|)^{-1}.$   (36)
From (4) we can write
$x_n - x^* = [x_n, x^*; F]^{-1}(F(x_n) - F(x^*)) = ([x_n, x^*; F]^{-1} A_0)\, A_0^{-1} F(x_n).$
Using (35) and (36), we get the inequality
$\|x_n - x^*\| \le (g_n - p\|x_n - x^*\|)^{-1} e_n.$
It follows that
$\|x_n - x^*\| \le 2\left\{g_n + (g_n^2 - 4 p e_n)^{1/2}\right\}^{-1} e_n,$
and
$t_n = \frac{-q t_n^3 + a_0 t_n^2 + b_0 t_n}{-q t_n^2 + a_0 t_n + b_0} = \frac{\{[a_0 - q(2t_{n-1} + t_n)](t_{n-1} - t_n) - q(t_{n-2} - t_{n-1})^2\}(t_{n-1} - t_n)}{1 - qa^2 - 2p(t_0 - t_n) - p t_n - (3q\bar{r}_0 + qa)(t_0 - t_n) - q t_n^2 - qa t_0} \ge \frac{p(t_{n-1} - t_n)^2 + q(t_{n-1} - t_n)(t_{n-1} - t_{n-2})^2}{1 - qa^2 - 2p(t_0 - t_n) - p t_n} \ge \frac{e_n}{g_n - p\|x_n - x^*\|} \ge \|x_n - x^*\|.$
 □
Proposition 2.
Under the conditions of Theorem 6, suppose further that for some $\mu > 0$
$\|A_0^{-1}(A_0 - [x^*, v; F])\| \le \mu(\|2x_0 - x_{-1} - x^*\| + \|x_{-1} - v\|)$
holds for all $v \in \Omega_2$, where $\Omega_2 = \Omega \cap U(x_0, R_1)$, $R_1 = \frac{1 - \mu(2a + \bar{r}_0)}{\mu}$, provided $\mu(2a + \bar{r}_0) < 1$, and where $x^*$ is a solution of Equation (1). Then, $x^*$ is the only solution of Equation (1) in the set $\Omega_2$.
Proof. 
This time, we have
$\|A_0^{-1}(A_0 - [x^*, y^*; F])\| \le \mu(\|2x_0 - x_{-1} - x^*\| + \|x_{-1} - y^*\|) < \mu(\bar{r}_0 + 2a + R_1) = 1.$
The rest follows as in Proposition 1. □
Remark 5.
The results reported here can immediately be extended further if we work with $\bar{\Omega}_0$ instead of the set $\Omega_0$, where $\bar{\Omega}_0 = \Omega \cap U(x_{-1}, \tilde{r} - a)$, if $a < \tilde{r}$. The new $p$ constants will be at least as tight as the ones presented previously in our paper, since $\bar{\Omega}_0 \subseteq \Omega_0$.

6. Numerical Experiments

In this section, we verify the conditions of the convergence theorems for the considered methods on some nonlinear operators, and we also compare the old and new radii of the convergence domains and the error estimates. We first consider the representation of first-order divided differences for specific nonlinear operators [5,6].
Let $B_1 = B_2 = \mathbb{R}^m$. We have a nonlinear system of $m$ algebraic and transcendental equations in $m$ variables
$F_i(x_1, x_2, \ldots, x_m) = 0, \quad i = 1, \ldots, m.$
In this case, $[x, y; F]$ is the matrix with entries
$[x, y; F]_{i,j} = \frac{F_i(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m) - F_i(x_1, \ldots, x_{j-1}, y_j, \ldots, y_m)}{x_j - y_j}, \quad i, j = 1, \ldots, m.$
If $x_j = y_j$, then $[x, y; F]_{i,j} = \frac{\partial F_i}{\partial x_j}(x_1, \ldots, x_j, y_{j+1}, \ldots, y_m)$.
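A sketch of this divided-difference matrix for a mapping $F : \mathbb{R}^m \to \mathbb{R}^m$ given as a NumPy function is shown below; the case of coinciding components $x_j = y_j$, which requires the partial derivative, is not handled here.

```python
import numpy as np

def divided_difference_matrix(F, x, y):
    # Entry (i, j) is [x, y; F]_{i,j} from the formula above; each column
    # mixes the leading components of x with the trailing components of y.
    m = x.size
    A = np.empty((m, m))
    for j in range(m):
        u = np.concatenate((x[: j + 1], y[j + 1:]))   # (x_1..x_j, y_{j+1}..y_m)
        v = np.concatenate((x[:j], y[j:]))            # (x_1..x_{j-1}, y_j..y_m)
        A[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return A
```

With this matrix, one step of the Secant method (2) reads `x_new = x - np.linalg.solve(divided_difference_matrix(F, x, x_prev), F(x))`.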
Let us consider a nonlinear integral equation
$F(x) = x(s) - \int_0^1 K(s, t, x(t))\, dt = 0,$
where $K(s, t, x)$ is a continuous function of its arguments and continuously differentiable with respect to $x$. In this case, $[x, y; F]$ is defined by the formula
$[x, y; F]\, h = h(s) - \int_0^1 \frac{K(s, t, x(t)) - K(s, t, y(t))}{x(t) - y(t)}\, h(t)\, dt.$
If $x(t) - y(t) = 0$ holds for some $t = t_j$, then $\lim_{t \to t_j} \frac{K(s, t, x(t)) - K(s, t, y(t))}{x(t) - y(t)} = K'_x(s, t_j, x(t_j))$.
Example 1.
Let $B_1 = B_2 = \mathbb{R}$, $\Omega = (-0.5, 2)$ and $F(x) = x^3 - 9x + 3$. The solution of the equation $F(x) = 0$ is $x^* \approx 0.33761$.
In view of $F$, we can write $F'(x) = 3x^2 - 9$, $[x, y; F] = x^2 + xy + y^2 - 9$, $[x, y, z; F] = x + y + z$.
Let us choose $x_0 = 0.5$ and $x_{-1} = 0.5001$. Then we get $\tilde{p} = |(2x_0 + x_{-1})/A_0|$, $b_0 = |(2x_0 + x_{-1})(x_{-1} - x_0)/A_0|$ for the Secant method, and $\tilde{p} = |(x_0 - x_{-1})/A_0|$, $b_0 = |(x_0 - x_{-1})^2/A_0|$ for Kurchatov's method; moreover, $\tilde{p}_0 = \max_{v \in \Omega} |(2x_0 + v)/A_0|$, $\tilde{\tilde{p}}_0 = \max_{u, z \in \Omega} |(x_0 + u + z)/A_0|$, $p = \bar{p}_0 = \bar{\bar{p}}_0 = \max_{u, v, z \in \Omega_0} |(u + v + z)/A_0|$, $q = |1/A_0|$. For the corresponding theorems in [1], $p_0 = \max_{u, v, z \in \Omega} |(u + v + z)/A_0|$ and $q_0 = |1/A_0|$.
Table 1 gives the radii and convergence domains of the considered methods. They are solutions of the corresponding equations and satisfy the condition $\bar{r}_0 \le \bar{r}$. We see that $\bar{U}(x_0, \bar{r}_0) \subset \Omega$ holds. Moreover, for Kurchatov's method $U(x_0, 3\bar{r}_0) = (-0.04166, 1.04166)$ and $V_0 \subset \Omega$. So, the assumptions of the theorems are fulfilled. Next, we show that the error estimates hold, i.e., $\|x_n - x^*\| \le t_n$, and compare them with the corresponding ones in [1]. Table 2 and Table 3 give results for the Secant method (2), and Table 4 for Kurchatov's method (3).
Table 2, Table 3 and Table 4 show the superiority of our results over the earlier ones; i.e., the obtained error estimates are tighter in all cases. This means that fewer iterates than before are needed to reach a predetermined error tolerance.
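The constants of Example 1 for the Secant method can be reproduced with the short sketch below. The maxima over the intervals are taken at the endpoints, since the expressions involved are affine in each variable; all formulas follow the ones listed above. Together with the criterion sketch given after Remark 1, this reproduces $\bar{r} \approx 0.85933$ and $\bar{r}_0 \approx 0.18703$ from Table 1.

```python
x0, xm1 = 0.5, 0.5001
lo, hi = -0.5, 2.0                                   # Omega = (-0.5, 2)
A0 = x0**2 + x0 * xm1 + xm1**2 - 9                   # [x0, x_{-1}; F]

a = abs(x0 - xm1)
c = abs((x0**3 - 9 * x0 + 3) / A0)
p_t = abs((2 * x0 + xm1) / A0)                       # p~
b0 = abs((2 * x0 + xm1) * (xm1 - x0) / A0)           # b_0
p0_t = max(abs(2 * x0 + v) for v in (lo, hi)) / abs(A0)                      # p~0
p00_t = max(abs(x0 + u + z) for u in (lo, hi) for z in (lo, hi)) / abs(A0)   # p~~0

b = min(b0, p_t * a)
r_tilde = (1 - b) / (p0_t + p00_t)                   # defines Omega_0
lo0, hi0 = max(lo, x0 - r_tilde), min(hi, x0 + r_tilde)
p = max(abs(u + v + z)
        for u in (lo0, hi0) for v in (lo0, hi0) for z in (lo0, hi0)) / abs(A0)
q = abs(1 / A0)
print(a, c, p, q)
```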
Example 2.
Let $B_1 = B_2 = \mathbb{R}^3$, $\Omega = (-0.1, 0.5) \times (-0.1, 0.5) \times (-0.1, 0.5)$ and
$F(x) = \left(e^{x_1} - 1,\; x_2^3 + x_2,\; x_3\right)^T.$
The solution of the equation $F(x) = 0$ is $x^* = (0, 0, 0)^T$.
For $x, y \in \mathbb{R}^3$, we have
$F'(x) = \mathrm{diag}\left(e^{x_1},\; 3x_2^2 + 1,\; 1\right) \quad \text{and} \quad [x, y; F] = \mathrm{diag}\left(\frac{e^{x_1} - e^{y_1}}{x_1 - y_1},\; x_2^2 + x_2 y_2 + y_2^2 + 1,\; 1\right).$
For this problem we verify conditions $(C)$ and the corresponding ones from [1]. Let us choose $x_0 = (0.1, 0.1, 0.1)^T$ and $x_{-1} = (0.1001, 0.1001, 0.1001)^T$. Having made the calculations, we get $a = 0.0001$, $c \approx 0.1$, $b_0 \approx 4.99992 \times 10^{-5}$, $p_0 = \tilde{p} = \tilde{p}_0 = \tilde{\tilde{p}}_0 \approx 1.45627$, $b \approx 4.99992 \times 10^{-5} < 1$ and $\tilde{r} \approx 0.34332$. Then $\Omega_0 \approx (-0.1, 0.44333) \times (-0.1, 0.44333) \times (-0.1, 0.44333)$. Next, $p = \bar{p}_0 = \bar{\bar{p}}_0 \approx 1.22974$, $pa \approx 1.22974 \times 10^{-4} < 1$ and $\bar{r} \approx 0.40654$. The equation $\bar{h}(t) = c$ has two solutions, $\bar{\tau}_1 \approx 0.69630$ and $\bar{\tau}_2 \approx 0.11679$; only $\bar{\tau}_2 \in (0, \bar{r}]$. Therefore, $\bar{r}_0 \approx 0.11679$, $U(x_0, \bar{r}_0) \approx (-0.01679, 0.21679) \times (-0.01679, 0.21679) \times (-0.01679, 0.21679)$ and $U_0 \subset \Omega$.
Analogously, the equation $h(t) = c$ has two solutions, $\tau_1 \approx 0.56506$ and $\tau_2 \approx 0.12152$, and $r \approx 0.34329$. Therefore, $r_0 \approx 0.12152$, $U(x_0, r_0) \approx (-0.02152, 0.22152) \times (-0.02152, 0.22152) \times (-0.02152, 0.22152)$ and $\bar{U}(x_0, r_0) \subset \Omega$.
In view of (14), (15), $\bar{r}_0 < r_0$ and Remark 1, we get
$t_n^{NEW} \le t_n^{OLD}.$
So, estimates (13) are tighter than the corresponding ones in [1].
The Secant and Kurchatov methods solve this system in 5 iterations for $\varepsilon = 10^{-10}$ and the specified initial approximations.
Example 3.
Let $B_1 = B_2 = C[a, b]$ and
$F(x) = x(s) - \int_0^1 \left[3 + 0.6625\, s + 0.05\, s\, t\, x^2(t)\right] dt = 0.$
The solution of this equation is $x^*(s) = s + 3$. In view of $F$, we can write
$[x, y; F]\, h = h(s) - \int_0^1 0.05\, s\, t\, [x(t) + y(t)]\, h(t)\, dt.$
Let us choose $x_0(s) = 5$ and $x_{-1}(s) = 6$. Both methods give an approximate solution of the integral equation in 13 iterations for $\varepsilon = 10^{-10}$. To solve the linear integral equation at each iteration, the Nystrom method was applied, using a trapezoidal quadrature formula with 101 nodes. In the graphs, $P_n$ denotes $\|x_n - x_{n-1}\|$ and $E_n$ denotes $\|x_n - x^*\|$ (see Figure 1). We can see that $E_n = O(h^2)$, where $h = 0.01$. This corresponds to the error estimate of the trapezoidal quadrature formula.
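A sketch of Example 3, discretized by the Nystrom method with the trapezoidal rule on 101 nodes, is given below; the Secant iteration solves a linear system with the discretized divided difference at each step. The stopping rule mirrors the tolerance $\varepsilon = 10^{-10}$ used above; other details are illustrative.

```python
import numpy as np

m = 101
t = np.linspace(0.0, 1.0, m)                   # nodes (s and t share the grid), h = 0.01
w = np.full(m, 1.0 / (m - 1))                  # trapezoidal weights
w[0] = w[-1] = 0.5 / (m - 1)

def F(x):
    # F(x)_i = x(s_i) - ∫ [3 + 0.6625 s_i + 0.05 s_i t x^2(t)] dt
    return x - 3.0 - 0.6625 * t - 0.05 * t * np.sum(w * t * x**2)

def dd(x, y):
    # Discretization of [x, y; F] h = h(s) - ∫ 0.05 s t (x(t) + y(t)) h(t) dt
    return np.eye(m) - 0.05 * np.outer(t, w * t * (x + y))

x_prev, x = np.full(m, 6.0), np.full(m, 5.0)   # x_{-1}(s) = 6, x_0(s) = 5
for n in range(100):
    step = np.linalg.solve(dd(x, x_prev), F(x))
    x_prev, x = x, x - step
    if np.linalg.norm(step, np.inf) < 1e-10:
        break

print(n + 1, np.linalg.norm(x - (t + 3.0), np.inf))   # E_n = O(h^2)
```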

7. Conclusions

The investigations conducted show the effectiveness of applying the Kantorovich majorant principle for determining the convergence and the order of convergence of iterative difference methods. The convergence of the Secant method (2) with order $\frac{1 + \sqrt{5}}{2}$ and the quadratic convergence order of Kurchatov's method (3) are established. According to this technique, nonlinear majorants for a nonlinear operator are constructed, taking into account the conditions imposed on it. By using our idea of restricted convergence regions, we find tighter Lipschitz constants, leading to a finer semilocal convergence analysis of these methods than in [1]. Our new technique can be used to extend the applicability of other methods along the same lines. More details on the extensions were given in Remark 1.

Author Contributions

Editing, I.K.A.; conceptualization S.S.; investigation I.K.A., S.S. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shakhno, S.M. Nonlinear majorants for investigation of methods of linear interpolation for the solution of nonlinear equations. In Proceedings of the ECCOMAS 2004—European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä, Finland, 24–28 July 2004. Available online: http://www.mit.jyu.fi/eccomas2004/proceedings/pdf/424.pdf (accessed on 26 May 2020).
  2. Amat, S. On the local convergence of secant-type methods. Int. J. Comput. Math. 2004, 81, 1153–1161.
  3. Hernandez, M.A.; Rubio, M.J. The Secant method for nondifferentiable operators. Appl. Math. Lett. 2002, 15, 395–399.
  4. Shakhno, S.; Gnatyshyn, O. Iterative-Difference Methods for Solving Nonlinear Least-Squares Problem. In Progress in Industrial Mathematics at ECMI 98; Arkeryd, L., Bergh, J., Brenner, P., Pettersson, R., Eds.; Verlag B. G. Teubner GMBH: Stuttgart, Germany, 1999; pp. 287–294.
  5. Ul'm, S. Algorithms of the generalized Steffensen method. Izv. Akad. Nauk ESSR Ser. Fiz.-Mat. 1965, 14, 433–443. (In Russian)
  6. Ul'm, S. On generalized divided differences I, II. Izv. Akad. Nauk ESSR Ser. Fiz.-Mat. 1967, 16, 13–26. (In Russian)
  7. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982.
  8. Potra, F.A. On an iterative algorithm of order 1.839... for solving nonlinear operator equations. Numer. Funct. Anal. Optim. 1985, 7, 75–106.
  9. Ul'm, S. Iteration methods with divided differences of the second order. Doklady Akademii Nauk SSSR 1964, 158, 55–58.
  10. Schwetlick, H. Numerische Lösung Nichtlinearer Gleichungen; VEB Deutscher Verlag der Wissenschaften: Berlin, Germany, 1979.
  11. Argyros, I.K. A Kantorovich-type analysis for a fast iterative method for solving nonlinear equations. J. Math. Anal. Appl. 2007, 332, 97–108.
  12. Argyros, I.K.; George, S. On a two-step Kurchatov-type method in Banach space. Mediterr. J. Math. 2019, 16.
  13. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  14. Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Akad. Nauk SSSR 1971, 198, 524–526.
  15. Shakhno, S.M. Kurchatov method of linear interpolation under generalized Lipschitz conditions for divided differences of first and second order. Visnyk Lviv. Univ. Ser. Mech. Math. 2012, 77, 235–242. (In Ukrainian)
  16. Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Matematychni Studii 2006, 26, 105–110. (In Ukrainian)
Figure 1. Values of (a) $\|x_n - x_{n-1}\|$ and (b) $\|x_n - x^*\|$ at each iteration.
Table 1. Radii and convergence domains.

                      Secant (Theorem 1)   Secant (Theorem 2)   Kurchatov (Theorem 5)
$\bar{r}$             0.85933              0.70430              0.88501
$\bar{r}_0$           0.18703              0.18807              0.18055
$U(x_0, \bar{r}_0)$   (0.31297, 0.68703)   (0.31193, 0.68807)   (0.31945, 0.68055)
Table 2. New and old error estimates (13).

n   $\|x_n - x^*\|$      $t_n^{NEW}$          $t_n^{OLD}$
0   1.62391 × 10^{-1}    1.87033 × 10^{-1}    1.94079 × 10^{-1}
1   4.27865 × 10^{-3}    2.03636 × 10^{-2}    2.74082 × 10^{-2}
2   9.60298 × 10^{-5}    2.45405 × 10^{-3}    4.40268 × 10^{-3}
3   4.78431 × 10^{-8}    3.65459 × 10^{-5}    1.18474 × 10^{-4}
4   5.37514 × 10^{-13}   6.65774 × 10^{-8}    5.26214 × 10^{-7}
5   0                    1.80951 × 10^{-12}   6.31743 × 10^{-11}
Table 3. New and old error estimates (27).

n   $\|x_n - x^*\|$      $t_n^{NEW}$          $t_n^{OLD}$
0   1.62391 × 10^{-1}    1.88065 × 10^{-1}    1.95339 × 10^{-1}
1   4.27865 × 10^{-3}    2.13956 × 10^{-2}    2.86694 × 10^{-2}
2   9.60298 × 10^{-5}    2.79473 × 10^{-3}    4.93386 × 10^{-3}
3   4.78431 × 10^{-8}    4.93737 × 10^{-5}    1.54194 × 10^{-4}
4   5.37514 × 10^{-13}   1.16446 × 10^{-7}    8.59640 × 10^{-7}
5   0                    4.86585 × 10^{-12}   1.50730 × 10^{-10}
Table 4. New and old error estimates (33).

n   $\|x_n - x^*\|$      $t_n^{NEW}$          $t_n^{OLD}$
0   1.62391 × 10^{-1}    1.80552 × 10^{-1}    1.95314 × 10^{-1}
1   4.27562 × 10^{-3}    1.38863 × 10^{-2}    2.86498 × 10^{-2}
2   1.16228 × 10^{-5}    3.06832 × 10^{-4}    1.30247 × 10^{-3}
3   4.04780 × 10^{-11}   6.34947 × 10^{-7}    7.68893 × 10^{-6}
4   0                    2.81890 × 10^{-11}   1.78179 × 10^{-9}

