Article

Extending the Applicability of Two-Step Solvers for Solving Equations

by Ioannis K. Argyros 1 and Stepan Shakhno 2,*
1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Numerical Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 62; https://doi.org/10.3390/math7010062
Submission received: 23 November 2018 / Revised: 19 December 2018 / Accepted: 30 December 2018 / Published: 8 January 2019
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract: We present a local convergence analysis of two-step solvers for solving nonlinear operator equations under generalized Lipschitz conditions for the first- and second-order derivatives and for the first-order divided differences. In contrast to earlier works, we use our new idea of center average Lipschitz conditions, through which we define a subset of the original domain that also contains the iterates. Then, the remaining average Lipschitz conditions are at least as tight as the corresponding ones in earlier works. This way, we obtain weaker sufficient convergence criteria, a larger radius of convergence, tighter error estimates, and better information on the solution. These extensions require the same effort, since the new Lipschitz functions are special cases of the ones in earlier works. Finally, we give a numerical example that confirms the theoretical results and compares favorably to the results from previous works.

1. Introduction

Let $X$, $Y$ stand for Banach spaces and $D \subset X$ denote an open and convex set. Consider the nonlinear equation:
$$H(x) \equiv F(x) + G(x) = 0, \tag{1}$$
where $F: D \to Y$, $G: D \to Y$; $F$ is a differentiable operator in the sense of Fréchet, and $G$ is a continuous operator.
The difference one-step methods (the Secant method, the Kurchatov method) [1,2,3,4,5,6,7] and the two-step method [8,9,10]:
$$x_{k+1} = x_k - \left[H(x_k, y_k)\right]^{-1}H(x_k), \qquad y_{k+1} = x_{k+1} - \left[H(x_k, y_k)\right]^{-1}H(x_{k+1}), \quad k = 0, 1, 2, \ldots, \tag{2}$$
where $H(x_k, y_k)$ is a first-order divided difference, can be applied for solving nonlinear equations with a nondifferentiable operator (Equation (1)). However, it is desirable to build iterative methods that take into account the properties of the problem (Equation (1)). In particular, we can use only the derivative $F'(x_k)$ of the differentiable part of the operator instead of the full derivative $H'(x_k)$, which, in fact, does not exist. The method [11,12]:
$$x_{k+1} = x_k - F'(x_k)^{-1}\left(F(x_k) + G(x_k)\right), \quad k = 0, 1, 2, \ldots, \tag{3}$$
obtained using this approach, converges slowly. More efficient are methods that use the sum of the derivative of the differentiable part and the divided difference of the nondifferentiable part of the operator instead of the full derivative $H'(x_k)$. In particular, the one-step Newton–Secant method [13,14,15]:
$$x_{k+1} = x_k - \left(F'(x_k) + G(x_k, x_{k-1})\right)^{-1}\left(F(x_k) + G(x_k)\right), \quad k = 0, 1, 2, \ldots, \tag{4}$$
and the Newton–Kurchatov method [16,17]:
$$x_{k+1} = x_k - \left(F'(x_k) + G(2x_k - x_{k-1}, x_{k-1})\right)^{-1}\left(F(x_k) + G(x_k)\right), \quad k = 0, 1, 2, \ldots. \tag{5}$$
The method (3) has a linear convergence order, the Newton–Secant method (4) has an order of convergence not higher than $(1 + \sqrt{5})/2$, and the Newton–Kurchatov method (5) has a quadratic convergence order.
In works [18,19], we first proposed two-step solvers for solving Equation (1), defined as:
$$x_{k+1} = x_k - \left[F'\left(\frac{x_k + y_k}{2}\right) + G(x_k, y_k)\right]^{-1}\left(F(x_k) + G(x_k)\right),$$
$$y_{k+1} = x_{k+1} - \left[F'\left(\frac{x_k + y_k}{2}\right) + G(x_k, y_k)\right]^{-1}\left(F(x_{k+1}) + G(x_{k+1})\right), \quad k = 0, 1, 2, \ldots, \tag{6}$$
where $x_0, y_0$ are given.
Just like the Newton–Secant method, which has been studied by many authors, the new method requires at each iteration the computation of one operator $A_k$ and its inverse. The operator $A_k = F'\left(\frac{x_k + y_k}{2}\right) + G(x_k, y_k)$ consists of a combination of the Fréchet derivative of one part of the operator (the differentiable part) and the divided difference of the second part (which is, generally speaking, nondifferentiable). The number of function evaluations increases by one at each step. Therefore, the computational cost of one iteration is almost identical for both methods. However, the convergence order of the new method is higher ($1 + \sqrt{2} \approx 2.41$ vs. $\approx 1.62$).
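To make the structure of method (6) concrete, the following minimal Python sketch implements it for systems in $\mathbb{R}^n$, together with the standard component-wise first-order divided difference (cf. Definition 1 below). The finite-dimensional setting, the function names, and the tolerances are our illustrative assumptions, not part of the Banach-space analysis; the stopping rule mimics the criteria used in Section 5.

```python
import numpy as np

def divided_difference(G, x, y, eps=1e-12):
    # Standard component-wise first-order divided difference [x, y; G].
    # By telescoping, it satisfies [x, y; G](x - y) = G(x) - G(y) exactly
    # whenever x_j != y_j for all j (cf. Definition 1 below).
    n = x.size
    A = np.empty((n, n))
    z = np.asarray(x, dtype=float).copy()
    for j in range(n):
        z_prev = z.copy()            # (y_1, ..., y_{j-1}, x_j, ..., x_n)
        z[j] = y[j]                  # (y_1, ..., y_j, x_{j+1}, ..., x_n)
        h = z_prev[j] - z[j]
        if abs(h) < eps:             # guard: fall back to a tiny forward step
            h = eps
            z_prev[j] = z[j] + eps
        A[:, j] = (G(z_prev) - G(z)) / h
    return A

def two_step_solver(F, dF, G, x0, y0, tol=1e-10, kmax=100):
    # Method (6): both substeps reuse the same operator
    # A_k = F'((x_k + y_k)/2) + G(x_k, y_k).
    x = np.asarray(x0, dtype=float).copy()
    y = np.asarray(y0, dtype=float).copy()
    for k in range(kmax):
        A = dF(0.5 * (x + y)) + divided_difference(G, x, y)
        x_new = x - np.linalg.solve(A, F(x) + G(x))
        y_new = x_new - np.linalg.solve(A, F(x_new) + G(x_new))
        if (np.linalg.norm(x_new - x) <= tol
                and np.linalg.norm(F(x_new) + G(x_new)) <= tol):
            return x_new, k + 1
        x, y = x_new, y_new
    return x, kmax
```

Note that a production implementation would factor $A_k$ once and reuse the factorization for both substeps instead of calling `solve` twice; the sketch keeps the two solves explicit to mirror Formula (6).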
Method (6) has been investigated under Hölder conditions for the second-order derivative of $F$ and the usual Lipschitz conditions for the first-order divided differences of $G$ in Equation (1) [18], and under generalized Lipschitz conditions for the second-order derivatives and the first-order divided differences [19].
There is a plethora of problems from the computational sciences that, through mathematical modelling, take the form of Equation (1) with a nonzero $G$. For example, boundary value problems of the first and second order appear in all branches of the experimental sciences, e.g., in Newton's second law, in population dynamics models, in evaluating concentrations of different reagents during a reaction, etc. [20,21,22]. Let us consider a nonlinear integral equation of Hammerstein type defined as:
$$x(s) = v(s) + \int_a^b Q(s, t)\left(\lambda x(t)^{1+p} + \mu x(t)^q\right)dt, \tag{7}$$
where $v(s) \in C[0, 1]$, $s \in [a, b]$, $p, q \in [0, 1]$, and $Q$ is a continuous kernel. Clearly, this equation can be written in two parts, with the second part involving the term $x(t)^q$ and related to a nontrivial $G$. In particular, we can define $F: C[0, 1] \to C[0, 1]$ and $G: C[0, 1] \to C[0, 1]$ as:
$$F(x)(s) = x(s) - v(s) - \lambda\int_a^b Q(s, t)x(t)^{1+p}\,dt, \qquad G(x)(s) = -\mu\int_a^b Q(s, t)x(t)^q\,dt, \quad \mu \ne 0,$$
and
$$H(x)(s) = F(x)(s) + G(x)(s).$$
The discretized form of the equation $H(x)(s) = 0$ can also be written in a similar form, where the discretized $G$ is also nontrivial and nondifferentiable [22]. Therefore, there is strong motivation for studying the two-step solver (6) for a nonzero $G$.
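To illustrate how a discretized version of Equation (7) splits into a differentiable part $F$ and a nondifferentiable part $G$, here is a minimal sketch using the trapezoidal rule on $[a, b] = [0, 1]$. The kernel $Q$, the function $v$, the parameter values, and the use of $|x|^q$ (with its kink at $x = 0$) for the nondifferentiable term are all placeholder assumptions, with signs chosen so that $F + G = 0$ matches the discretized Equation (7):

```python
import numpy as np

m = 21
t, h = np.linspace(0.0, 1.0, m, retstep=True)
w = np.full(m, h); w[0] = w[-1] = 0.5 * h        # trapezoidal weights
lam, mu, p, q = 0.5, 0.5, 0.5, 0.5               # assumed parameters in [0, 1]
Q = np.exp(-np.abs(t[:, None] - t[None, :]))     # assumed continuous kernel Q(s, t)
v = np.ones(m)                                   # assumed v(s)

def F(x):
    # differentiable part: x(s) - v(s) - lam * integral of Q(s, t) x(t)^{1+p}
    return x - v - lam * (Q @ (w * np.sign(x) * np.abs(x) ** (1.0 + p)))

def dF(x):
    # Frechet derivative of F: identity minus lam * Q * diag(w * (1+p) * |x|^p)
    return np.eye(m) - lam * Q @ np.diag(w * (1.0 + p) * np.abs(x) ** p)

def G(x):
    # nondifferentiable part: -mu * integral of Q(s, t) |x(t)|^q (kink at x = 0)
    return -mu * (Q @ (w * np.abs(x) ** q))

# the discretized H = F + G can now be solved, e.g., with the two_step_solver
# sketch above: x, iters = two_step_solver(F, dF, G, v.copy(), v + 1e-4)
```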
We also like to bring to the attention of the motivated reader the works in References [23,24,25], which constitute subdivision-based solvers seeking all roots in the domain of interest using Newton's method, provided the domain is sufficiently small. These solvers can be extended to include a nondifferentiable part. Moreover, in References [26,27], the continuation method has also been used as another powerful tool for finding roots. However, these methods involve systems of equations on $\mathbb{R}^i$ and cannot provide information about solutions of Equation (1) in a Banach space setting. In practice, one may use our results for the original equation, and the aforementioned references for the discretized versions, which are systems in $\mathbb{R}^i$.
In this work, we continue to study the combined method (6) for solving the nonlinear Equation (1). Using more precise estimates on the distances involved, under weaker hypotheses and the same computational cost, we provide an analysis of method (6) with the following advantages over the corresponding results in Reference [19]: a larger convergence domain, finer error estimates on the distances involved, and at least as precise information on the location of the solution.
The rest of the paper is structured as follows: Section 2 contains the definitions and auxiliary lemmas; in Section 3 and Section 4, we present the local convergence analysis of method (6) and the uniqueness ball for the solution of the equation, respectively. In Section 5, we provide the numerical examples. The article ends with some conclusions.

2. Definitions and Auxiliary Lemmas

Definition 1.
Let $G: D \to Y$ be a nonlinear operator and $x, y \in D$. If a linear operator $G(x, y)$ satisfies the condition:
$$G(x, y)(x - y) = G(x) - G(y),$$
then it is called a divided difference of $G$ at the points $x$ and $y$.
Define $B(x_0, r) = \{x : \|x - x_0\| < r\}$ and denote by $\bar{B}(x_0, r)$ its closure.
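As a quick numerical sanity check of Definition 1, one can verify the identity $G(x, y)(x - y) = G(x) - G(y)$ for the component-wise divided difference sketched in the Introduction; the test function below is an arbitrary choice:

```python
import numpy as np
# assumes divided_difference from the sketch in the Introduction

G = lambda z: np.array([z[0] ** 2 + np.abs(z[1]), np.sin(z[0]) - z[1] ** 3])
x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
A = divided_difference(G, x, y)
assert np.allclose(A @ (x - y), G(x) - G(y))   # the identity of Definition 1
```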
Lemma 1
[28]. Let $\varphi(t) = \frac{1}{t}\int_0^t \lambda(u)\,du$, $0 \le t \le r$, where $\lambda(u)$ is a positive integrable function, nondecreasing on $[0, r]$. Then $\varphi(t)$ is nondecreasing monotonically.
Lemma 2
[19]. Let $\psi(t) = \frac{1}{t^3}\int_0^t \mu(u)(t - u)^2\,du$, $0 \le t \le r$, where $\mu(u)$ is a positive integrable function, nondecreasing monotonically on $[0, r]$. Then $\psi(t)$ is nondecreasing monotonically.
Lemma 3
[28]. Let $p(t) = \frac{1}{t}\int_0^t (t - u)M(u)\,du$, $0 \le t \le r$, where $M$ is a positive integrable function, nondecreasing monotonically on $[0, r]$. Then $p(t)$ is increasing monotonically.

3. Convergence

Let $B(x^*, r) \subset D$, and let $r_0$, $r_1$ be the solutions of the system of equations:
$$J\tau_0^2 + C\tau_1 = 1, \qquad \left[J\tau_0^2 + E\left(2\tau_0 + \tau_1\right)\right]\tau_0 = \tau_1, \tag{8}$$
where the parameters $J$, $E$, and $C$ are given in Theorem 1.
Definition 2.
We say that the divided difference $G(\cdot, \cdot)$ satisfies the center-Lipschitz condition on $\Omega$ with $M_0$ average if:
$$\left\|H'(x^*)^{-1}\left(G(x, y) - G(x^*, x^*)\right)\right\| \le \int_0^{\|x - x^*\| + \|y - x^*\|} M_0(z)\,dz \tag{9}$$
for each $x, y \in S\left(x^*, \frac{R}{2}\right) \cap \Omega$ and some $R > 0$.
Definition 3.
We say that the operator $F'$ satisfies the center-Lipschitz condition on $\Omega$ with $L_0$ average if:
$$\left\|H'(x^*)^{-1}\left(F'(x) - F'(x^*)\right)\right\| \le \int_0^{\rho(x)} L_0(u)\,du \quad \text{for each } x \in \Omega, \tag{10}$$
where $\rho(x) = \|x - x^*\|$.
Define the function $\zeta$ on the interval $[0, \infty)$ as:
$$\zeta(t) = \int_0^t L_0(u)\,du + \int_0^{2t} M_0(z)\,dz. \tag{11}$$
Suppose that the equation
$$\zeta(t) = 1 \tag{12}$$
has positive solutions. Denote by $s$ the smallest such solution. Set $\Omega_0 = \Omega \cap S(x^*, s)$.
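Since $\zeta$ in (11) is nondecreasing with $\zeta(0) = 0$, the smallest positive solution $s$ of Equation (12) can be located by bisection once a point with $\zeta(t) > 1$ is bracketed. A small sketch, where the quadrature rule, the bracket, and the example averages are our assumptions:

```python
import numpy as np

def integral(f, a, b, n=4000):
    # composite midpoint rule, adequate for the monotone integrands used here
    u = np.linspace(a, b, n + 1)
    return np.sum(f(0.5 * (u[:-1] + u[1:]))) * (b - a) / n

def zeta(t, L0, M0):
    # the function of Equation (11)
    return integral(L0, 0.0, t) + integral(M0, 0.0, 2.0 * t)

def smallest_root(L0, M0, t_hi=10.0, tol=1e-12):
    lo, hi = 0.0, t_hi               # assumes zeta(t_hi) > 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if zeta(mid, L0, M0) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

# e.g., constant averages L0 = e - 1 and M0 = 0 give s = 1/(e - 1),
# consistent with Example 1 below
s = smallest_root(lambda u: np.full_like(u, np.e - 1.0),
                  lambda u: np.zeros_like(u))
```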
Definition 4.
We say that the operator $F'$ satisfies the restricted Lipschitz condition on $\Omega_0$ with $L$ average if:
$$\left\|H'(x^*)^{-1}\left(F'(y) - F'(x)\right)\right\| \le \int_0^{\|y - x\|} L(u)\,du \quad \text{for each } x, y \in \Omega_0. \tag{13}$$
Definition 5.
We say that the divided difference $G(\cdot, \cdot)$ satisfies the restricted Lipschitz condition on $\Omega_0 \times \Omega_0$ with $M$ average if:
$$\left\|H'(x^*)^{-1}\left(G(x, y) - G(u, v)\right)\right\| \le \int_0^{\|x - u\| + \|y - v\|} M(z)\,dz \tag{14}$$
for each $x, y, u, v \in S\left(x^*, \frac{R}{4}\right) \cap \Omega_0$ and some $R > 0$.
Definition 6.
We say that the operator $F''$ satisfies the restricted Lipschitz condition on $\Omega_0$ with $N$ average if:
$$\left\|H'(x^*)^{-1}\left(F''(y) - F''(x)\right)\right\| \le \int_0^{\|y - x\|} N(u)\,du \quad \text{for each } x, y \in \Omega_0. \tag{15}$$
Definition 7
[19]. We say that the operator $F'$ satisfies the Lipschitz condition on $\Omega$ with $L_1$ average if:
$$\left\|H'(x^*)^{-1}\left(F'(y) - F'(x)\right)\right\| \le \int_0^{\|y - x\|} L_1(u)\,du \quad \text{for each } x, y \in \Omega. \tag{16}$$
Definition 8
[19]. We say that the divided difference $G(\cdot, \cdot)$ satisfies the Lipschitz condition on $\Omega \times \Omega$ with $M_1$ average if:
$$\left\|H'(x^*)^{-1}\left(G(x, y) - G(u, v)\right)\right\| \le \int_0^{\|x - u\| + \|y - v\|} M_1(z)\,dz \tag{17}$$
for each $x, y, u, v \in S\left(x^*, \frac{R}{4}\right) \cap \Omega$ and some $R > 0$.
Definition 9.
We say that the operator $F''$ satisfies the Lipschitz condition on $\Omega$ with $N_1$ average if:
$$\left\|H'(x^*)^{-1}\left(F''(y) - F''(x)\right)\right\| \le \int_0^{\|y - x\|} N_1(u)\,du \quad \text{for each } x, y \in \Omega. \tag{18}$$
Remark 1.
We have that:
$$\Omega_0 \subseteq \Omega. \tag{19}$$
Hence, it follows from these definitions that, for each $t \in [0, \lambda]$ for some $\lambda > 0$:
$$L_0(t) \le L_1(t), \tag{20}$$
$$L(t) \le L_1(t), \tag{21}$$
and
$$M(t) \le M_1(t). \tag{22}$$
Then, $(L_0, L)$ and $M$ can replace $L_1$ and $M_1$ in Reference [19] to obtain the improved results already noted in the introduction of this study.
The radius of the convergence ball and the order of convergence of method (6) are given by the following theorem.
Theorem 1.
Let $H(x) \equiv F(x) + G(x)$ be a nonlinear operator defined in an open convex subset $D$ of the space $X$ with values in the space $Y$. Suppose that:
(i) $H(x^*) = 0$ for some $x^* \in D$ for which $H'(x^*)^{-1}$ exists;
(ii) Conditions (9), (10), and (13)–(15) are satisfied, and $s$ exists and is given by Equation (12);
(iii) $r > 0$ satisfies the inequality
$$\frac{\frac{1}{8}\int_0^r N(u)(r - u)^2\,du + r\int_0^{3r/2} L(u)\,du + r\int_0^{3r} M(u)\,du}{r\left(1 - \int_0^r L_0(u)\,du - \int_0^{2r} M_0(u)\,du\right)} \le 1. \tag{23}$$
Then, for all $x_0 \in B(x^*, r_0)$ and $y_0 \in B(x^*, r_1)$, the sequences $\{x_k\}_{k \in \mathbb{N}}$ and $\{y_k\}_{k \in \mathbb{N}}$, as defined according to Formula (6), converge to the solution $x^*$, with $x_k \in B(x^*, r_0)$ and $y_k \in B(x^*, r_1)$ for $k = 0, 1, 2, \ldots$, and the following estimates are fulfilled:
$$\rho(x_{k+1}) = \|x_{k+1} - x^*\| \le J\rho(x_k)^3 + C\rho(x_k)\rho(y_k), \quad k = 0, 1, 2, \ldots, \tag{24}$$
$$\rho(y_{k+1}) = \|y_{k+1} - x^*\| \le J\rho(x_{k+1})^3 + E\left(\rho(x_k) + \rho(x_{k+1}) + \rho(y_k)\right)\rho(x_{k+1}), \quad k = 0, 1, 2, \ldots, \tag{25}$$
where
$$J = \frac{Q_0}{8\rho(x_0)^3}\int_0^{\rho(x_0)} N(u)\left(\rho(x_0) - u\right)^2 du; \qquad E = \frac{Q_0}{z_0}\left(\int_0^{z_0/2} L(u)\,du + \int_0^{z_0} M(u)\,du\right);$$
$$C = \frac{Q_0}{\rho(y_0)}\left(\int_0^{\rho(y_0)/2} L(u)\,du + \int_0^{\rho(y_0)} M(u)\,du\right);$$
$$Q_0 = \left(1 - \int_0^{\rho\left(\frac{x_0 + y_0}{2}\right)} L_0(u)\,du - \int_0^{\rho(x_0) + \rho(y_0)} M_0(u)\,du\right)^{-1}; \qquad z_0 = \rho(x_0) + \rho(y_0) + \rho(x_1). \tag{26}$$
The sequences $\{x_k\}_{k \in \mathbb{N}}$ and $\{y_k\}_{k \in \mathbb{N}}$ converge to $x^*$ with order $1 + \sqrt{2}$.
Proof. 
Pick $x_0 \in B(x^*, r_0)$ and $y_0 \in B(x^*, r_1)$ arbitrarily.
By Lemmas 1 and 2, we get that $\frac{1}{t}\int_0^t L_0(u)\,du$, $\frac{1}{t}\int_0^t L(u)\,du$, $\frac{1}{t}\int_0^t M(u)\,du$, and $\frac{1}{t^3}\int_0^t N(u)(t - u)^2\,du$ are nondecreasing with respect to $t$.
Set $A = A(x, y)$, the linear operator $A = F'\left(\frac{x + y}{2}\right) + G(x, y)$ with $x, y \in B(x^*, r)$. Then, by Inequalities (9) and (10), we get:
$$\left\|I - H'(x^*)^{-1}A\right\| = \left\|H'(x^*)^{-1}\left(H'(x^*) - A\right)\right\| = \left\|H'(x^*)^{-1}\left[F'(x^*) + G(x^*, x^*) - F'\left(\frac{x + y}{2}\right) - G(x, y)\right]\right\| \le \int_0^{\rho\left(\frac{x + y}{2}\right)} L_0(u)\,du + \int_0^{\rho(x) + \rho(y)} M_0(u)\,du.$$
It follows from the definition of $r$ that:
$$\int_0^r L_0(u)\,du + \int_0^{2r} M_0(u)\,du \le 1 - \frac{1}{8r}\int_0^r N(u)(r - u)^2\,du - \int_0^{3r/2} L(u)\,du - \int_0^{3r} M(u)\,du < 1.$$
In view of the identity
$$A^{-1}H'(x^*) = \left(I - \left(I - H'(x^*)^{-1}A\right)\right)^{-1}$$
and the Banach perturbation lemma [13], $A^{-1}$ exists, and:
$$\left\|A^{-1}H'(x^*)\right\| \le \left(1 - \int_0^{\rho\left(\frac{x + y}{2}\right)} L_0(u)\,du - \int_0^{\rho(x) + \rho(y)} M_0(u)\,du\right)^{-1}.$$
Let us now suppose that $x_k \in B(x^*, r_0)$ and $y_k \in B(x^*, r_1)$. Set $A_k = A(x_k, y_k)$. Then $A_k$ is invertible and, according to (6), we can write:
$$x_{k+1} - x^* = x_k - x^* - A_k^{-1}\left(F(x_k) + G(x_k) - F(x^*) - G(x^*)\right) = A_k^{-1}\left[\left(F'\left(\tfrac{x_k + y_k}{2}\right) + G(x_k, y_k)\right)(x_k - x^*) - \left(F(x_k) + G(x_k) - F(x^*) - G(x^*)\right)\right]$$
$$= A_k^{-1}H'(x^*)\left\{H'(x^*)^{-1}\left[F'\left(\tfrac{x_k + y_k}{2}\right)(x_k - x^*) - F(x_k) + F(x^*)\right] + H'(x^*)^{-1}\left[G(x_k, y_k) - G(x_k, x^*)\right](x_k - x^*)\right\}$$
$$= A_k^{-1}H'(x^*)\Big\{H'(x^*)^{-1}\left[F'\left(\tfrac{x_k + x^*}{2}\right)(x_k - x^*) - F(x_k) + F(x^*)\right] + H'(x^*)^{-1}\left[F'\left(\tfrac{x_k + y_k}{2}\right) - F'\left(\tfrac{x_k + x^*}{2}\right)\right](x_k - x^*) + H'(x^*)^{-1}\left[G(x_k, y_k) - G(x_k, x^*)\right](x_k - x^*)\Big\}.$$
Let us write down the identity from Reference [29] (p. 336, Lemma 1) for the value $\omega = 1/2$:
$$F(x) - F(y) - F'\left(\frac{x + y}{2}\right)(x - y) = \frac{1}{4}\int_0^1 (1 - t)\left[F''\left(\frac{x + y}{2} + \frac{t}{2}(x - y)\right) - F''\left(\frac{x + y}{2} + \frac{t}{2}(y - x)\right)\right](x - y)(x - y)\,dt.$$
Setting $x = x^*$ and $y = x_k$ in this equality, we have in turn that:
$$\left\|H'(x^*)^{-1}\left[F(x^*) - F(x_k) - F'\left(\tfrac{x_k + x^*}{2}\right)(x^* - x_k)\right]\right\| = \left\|\frac{1}{4}\int_0^1 (1 - t)H'(x^*)^{-1}\left[F''\left(\tfrac{x_k + x^*}{2} + \tfrac{t}{2}(x^* - x_k)\right) - F''\left(\tfrac{x_k + x^*}{2} + \tfrac{t}{2}(x_k - x^*)\right)\right](x^* - x_k)(x^* - x_k)\,dt\right\|$$
$$\le \frac{1}{4}\int_0^1 (1 - t)\int_0^{t\|x_k - x^*\|} N(u)\,du\,\|x_k - x^*\|^2\,dt = \frac{1}{8}\int_0^{\|x_k - x^*\|}\left(1 - \frac{u}{\|x_k - x^*\|}\right)^2 N(u)\,du\,\|x_k - x^*\|^2 = \frac{1}{8}\int_0^{\rho(x_k)} N(u)\left(\rho(x_k) - u\right)^2 du.$$
Moreover, we have:
$$\left\|H'(x^*)^{-1}\left[F'\left(\tfrac{x_k + y_k}{2}\right) - F'\left(\tfrac{x_k + x^*}{2}\right)\right]\right\| \le \int_0^{\rho(y_k)/2} L(u)\,du, \qquad \left\|H'(x^*)^{-1}\left[G(x_k, x^*) - G(x_k, y_k)\right](x_k - x^*)\right\| \le \int_0^{\rho(y_k)} M(u)\,du\,\|x_k - x^*\|.$$
Then, according to Lemmas 1–3 and taking into account the latter inequalities, we obtain:
$$\|x_{k+1} - x^*\| \le \left\|A_k^{-1}H'(x^*)\right\|\left\{\frac{1}{8}\int_0^{\rho(x_k)} N(u)(\rho(x_k) - u)^2\,du + \int_0^{\rho(y_k)/2} L(u)\,du\,\rho(x_k) + \int_0^{\rho(y_k)} M(u)\,du\,\rho(x_k)\right\}.$$
Therefore:
$$\|x_{k+1} - x^*\| \le Q_k\left\{\frac{1}{8\rho(x_k)^3}\int_0^{\rho(x_k)} N(u)(\rho(x_k) - u)^2\,du\,\rho(x_k)^3 + \frac{2}{\rho(y_k)}\int_0^{\rho(y_k)/2} L(u)\,du\,\frac{\rho(x_k)\rho(y_k)}{2} + \frac{1}{\rho(y_k)}\int_0^{\rho(y_k)} M(u)\,du\,\rho(x_k)\rho(y_k)\right\}$$
$$\le Q_0\left\{\frac{1}{8\rho(x_0)^3}\int_0^{\rho(x_0)} N(u)(\rho(x_0) - u)^2\,du\,\rho(x_k)^3 + \frac{2}{\rho(y_0)}\int_0^{\rho(y_0)/2} L(u)\,du\,\frac{\rho(x_k)\rho(y_k)}{2} + \frac{1}{\rho(y_0)}\int_0^{\rho(y_0)} M(u)\,du\,\rho(x_k)\rho(y_k)\right\}$$
$$= J\rho(x_k)^3 + C\rho(x_k)\rho(y_k) < \left(Jr_0^2 + Cr_1\right)\rho(x_k) = \rho(x_k) < r_0, \tag{27}$$
where $Q_k = \left(1 - \int_0^{\rho\left(\frac{x_k + y_k}{2}\right)} L_0(u)\,du - \int_0^{\rho(x_k) + \rho(y_k)} M_0(u)\,du\right)^{-1}$.
Analogously to [19]:
$$\|y_{k+1} - x^*\| \le Q_k\left\{\frac{1}{8}\int_0^{\rho(x_{k+1})} N(u)(\rho(x_{k+1}) - u)^2\,du + \int_0^{z_k/2} L(u)\,du\,\rho(x_{k+1}) + \int_0^{z_k} M(u)\,du\,\rho(x_{k+1})\right\}$$
$$\le J\rho(x_{k+1})^3 + E\left(\rho(x_k) + \rho(y_k) + \rho(x_{k+1})\right)\rho(x_{k+1}) < \left[Jr_0^2 + E(2r_0 + r_1)\right]\rho(x_{k+1}) = \frac{r_1}{r_0}\rho(x_{k+1}) < r_1, \tag{28}$$
where $z_k = \rho(x_k) + \rho(y_k) + \rho(x_{k+1})$.
By Inequality (27), the sequence $\{\|x_k - x^*\|\}$ converges monotonically to some $a$, $0 \le a < r_0$. From Inequality (27), it follows that $a \le Ja^3 + Car_1$. For $a \ne 0$, we arrive at a contradiction:
$$1 \le Ja^2 + Cr_1 < Jr_0^2 + Cr_1 = 1.$$
Hence, we get $\lim_{k \to \infty} x_k = x^* = \lim_{k \to \infty} y_k$.
We set $a_k = \rho(x_k)$, $b_k = \rho(y_k)$, $k = 0, 1, 2, \ldots$. From Inequalities (27) and (28), we get:
$$a_{k+1} \le Ja_k^3 + Ca_kb_k, \quad k = 0, 1, 2, \ldots, \tag{29}$$
$$b_{k+1} \le a_{k+1}\min\left\{\frac{r_1}{r_0},\ Ja_{k+1}^2 + E(a_k + a_{k+1} + b_k)\right\} \le a_{k+1}\min\left\{\frac{r_1}{r_0},\ (2E + Ja_k)a_k + Eb_k\right\} \le a_{k+1}\min\left\{\frac{r_1}{r_0},\ \left(2E + Jr_0 + E\frac{r_1}{r_0}\right)a_k\right\} \le \left(Jr_0 + E\left(2 + \frac{r_1}{r_0}\right)\right)a_{k+1}a_k. \tag{30}$$
From Inequalities (29) and (30), for large enough $k$ and some positive constant $C_1$, it follows that $a_{k+1} \le C_1a_k^2a_{k-1}$, which leads to the defining equation $\rho^2 - 2\rho - 1 = 0$ for the order of convergence, with positive root $\rho^* = 1 + \sqrt{2}$. □
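The order $1 + \sqrt{2}$ can also be observed empirically: if $a_k = \|x_k - x^*\|$ and $a_{k+1} \approx Ca_k^{\rho^*}$ with a moderate constant $C$, the ratios $\ln a_{k+1}/\ln a_k$ approach $\rho^*$ as the errors decrease. A small sketch, where the synthetic error sequence is purely illustrative:

```python
import numpy as np

def observed_orders(errors):
    # successive estimates ln(a_{k+1}) / ln(a_k) of the convergence order;
    # use errors well above machine precision for meaningful ratios
    a = np.asarray(errors, dtype=float)
    return np.log(a[1:]) / np.log(a[:-1])

# synthetic sequence with a_{k+1} = a_k^(1 + sqrt(2)):
a = [1e-1]
for _ in range(4):
    a.append(a[-1] ** (1.0 + np.sqrt(2.0)))
print(observed_orders(a))  # each entry is close to 1 + sqrt(2) ~ 2.414
```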

4. The Uniqueness Ball for the Solution of Equations

The uniqueness ball for the solution is established in the following theorem.
Theorem 2.
Suppose that $H(x^*) \equiv F(x^*) + G(x^*) = 0$, $F'$ is continuous in $B(x^*, r)$, $H'(x^*)^{-1}$ exists, and $F'$ satisfies the center-Lipschitz condition with $L_0$ average:
$$\left\|H'(x^*)^{-1}\left(F'(x) - F'(x^*)\right)\right\| \le \int_0^{\|x - x^*\|} L_0(u)\,du, \quad x \in B(x^*, r), \tag{31}$$
and the divided difference $G(x, y)$ satisfies the center-Lipschitz condition with $M_0$ average:
$$\left\|H'(x^*)^{-1}\left(G(x, y) - G(x^*, x^*)\right)\right\| \le \int_0^{\|x - x^*\| + \|y - x^*\|} M_0(u)\,du, \quad x, y \in B(x^*, r). \tag{32}$$
Suppose that $r$ satisfies
$$\frac{1}{r}\int_0^r (r - u)L_0(u)\,du + \int_0^r M_0(u)\,du \le 1. \tag{33}$$
Then the equation $H(x) = 0$ has a unique solution $x^*$ in $B(x^*, r)$.
Proof. 
Pick $x_0 \in B(x^*, r)$ arbitrarily, and consider the iteration:
$$x_{k+1} = x_k - H'(x^*)^{-1}\left(F(x_k) + G(x_k)\right), \quad k = 0, 1, 2, \ldots. \tag{34}$$
We can write:
$$x_1 - x^* = x_0 - x^* - H'(x^*)^{-1}\left(F(x_0) + G(x_0)\right) = H'(x^*)^{-1}\left[H'(x^*)(x_0 - x^*) - F(x_0) + F(x^*) - G(x_0) + G(x^*)\right]$$
$$= H'(x^*)^{-1}\left[F'(x^*)(x_0 - x^*) - F(x_0) + F(x^*) + G(x^*, x^*)(x_0 - x^*) - G(x_0) + G(x^*)\right]$$
$$= H'(x^*)^{-1}\left\{F'(x^*)(x_0 - x^*) - F(x_0) + F(x^*) + \left[G(x^*, x^*) - G(x_0, x^*)\right](x_0 - x^*)\right\}.$$
Let us estimate the norm of the expression $H'(x^*)^{-1}\left\{F'(x^*)(x_0 - x^*) - F(x_0) + F(x^*)\right\}$:
$$\left\|H'(x^*)^{-1}\left\{F'(x^*)(x_0 - x^*) - F(x_0) + F(x^*)\right\}\right\| = \left\|\int_0^1 H'(x^*)^{-1}\left\{F'\left(x^* + t(x_0 - x^*)\right) - F'(x^*)\right\}(x_0 - x^*)\,dt\right\|$$
$$\le \int_0^1 \int_0^{t\|x_0 - x^*\|} L_0(u)\,du\,\|x_0 - x^*\|\,dt = \int_0^{\|x_0 - x^*\|}\left(1 - \frac{u}{\|x_0 - x^*\|}\right)L_0(u)\,du\,\|x_0 - x^*\| = \int_0^{\rho(x_0)}\left(\rho(x_0) - u\right)L_0(u)\,du.$$
Then:
$$\|x_1 - x^*\| \le \left(\frac{1}{\rho(x_0)}\int_0^{\rho(x_0)}\left(\rho(x_0) - u\right)L_0(u)\,du + \int_0^{\rho(x_0)} M_0(u)\,du\right)\|x_0 - x^*\| = q_0\|x_0 - x^*\|, \tag{35}$$
where:
$$q_0 = \frac{1}{\rho(x_0)}\int_0^{\rho(x_0)}\left(\rho(x_0) - u\right)L_0(u)\,du + \int_0^{\rho(x_0)} M_0(u)\,du < \frac{1}{r}\int_0^r (r - u)L_0(u)\,du + \int_0^r M_0(u)\,du \le 1.$$
According to Inequality (35), $\|x_1 - x^*\| \le q_0\|x_0 - x^*\| < r$. Thus, the iteration (34) can be continued indefinitely, and:
$$\|x_k - x^*\| \le q_0^k\|x_0 - x^*\|, \quad k = 1, 2, \ldots.$$
Therefore, $\lim_{k \to \infty} x_k = x^*$. However, if $H(x_0) = 0$, then from Equality (34) we have $x_k = x_0$ for all $k$. Therefore, it follows that $x_0 = x^*$. □
Let us note that, having set $G(x) \equiv 0$ in Equation (1), from Theorem 1 we obtain Theorem 1 from Reference [29], and from Theorem 2 we obtain Theorem 4.1 from Reference [28] for Newton's method; having set $F(x) \equiv 0$ in Equation (1), we obtain from Theorem 1 the corresponding Theorem 1 from Reference [10] for method (2), but with more precise and obvious estimates. In the case of Lipschitz constants, we obtain the corresponding theorems from References [18,29,30,31].
Remark 2.
If $L_0 = L = L_1$, $M_0 = M = M_1$, and $N = N_1$, then our results reduce to the ones in Reference [19]. Otherwise, i.e., if the inequalities in (20), (21), or (22) are strict, then the following advantages are available: a larger convergence ball, tighter estimates on $\|x_k - x^*\|$, and better information regarding the location of the solution, since all new constants and functions are more precise (see also the numerical example). Let us compare with the corresponding Inequality (33) given in Reference [19] (Inequality (23) there):
$$\frac{1}{r}\int_0^r (r - u)L_1(u)\,du + \int_0^r M_1(u)\,du \le 1. \tag{36}$$
Then, it follows from Inequalities (33) and (36) that the new radius $r$ is larger than the one in Reference [19], provided (20) or (22) hold as strict inequalities. The advantages are obtained under the same effort as in Reference [19], because finding $L_1$, $M_1$, and $N_1$ requires finding $L_0$, $L$, $M$, and $N$ as special cases.
Remark 3.
In our earlier works [17,18], we reported the results of numerical experiments for solving Equation (1).

5. Numerical Examples

In the first example, we test the local convergence hypotheses and show that the new radius of convergence is larger than the one in Reference [19]. The second example is used to show the superiority of our method (6) over other methods that use similar information.
Example 1.
Let $X = Y = \mathbb{R}^3$, $\Omega = S(x^*, 1)$, $x^* = (0, 0, 0)^T$, and $G = 0$ on $\Omega$.
Define $F$ on $\Omega$ by $F(v) = \left(e^{v_1} - 1,\ \frac{e - 1}{2}v_2^2 + v_2,\ v_3\right)^T$ for $v = (v_1, v_2, v_3)^T$. Then we have:
$$F'(v) = \begin{pmatrix} e^{v_1} & 0 & 0 \\ 0 & (e - 1)v_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
so $L_0(u) = e - 1$, $s = \frac{1}{e - 1}$, $M_0(u) = M(u) = M_1(u) = 0$, $N(u) = L(u) = e^{\frac{1}{e - 1}}$, and $N_1(u) = L_1(u) = e$. Then, using Inequality (23), we get $r_{new} = 0.614$; compared to $r_{old} = 0.475$, obtained using Inequality (23) again but with $N_1$, $L_1$, $M_1$, $L_1$, and $M_1$ in place of $N$, $L$, $M$, $L_0$, and $M_0$, respectively, we conclude that:
$$r_{old} < r_{new}.$$
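Radii of this kind can be computed numerically by bisection on the fraction form of Inequality (23). The following sketch is generic: the Lipschitz functions are supplied by the user, the quadrature rule and the bracket $(0, r_{hi}]$ are our assumptions, and the sketch is not tuned to reproduce the exact values above:

```python
import numpy as np

def integral(f, a, b, n=4000):
    # composite midpoint rule
    u = np.linspace(a, b, n + 1)
    return np.sum(f(0.5 * (u[:-1] + u[1:]))) * (b - a) / n

def lhs23(r, N, L, M, L0, M0):
    # left-hand side of Inequality (23) in fraction form;
    # admissible radii satisfy lhs23(r, ...) <= 1
    num = (0.125 * integral(lambda u: N(u) * (r - u) ** 2, 0.0, r)
           + r * integral(L, 0.0, 1.5 * r) + r * integral(M, 0.0, 3.0 * r))
    den = r * (1.0 - integral(L0, 0.0, r) - integral(M0, 0.0, 2.0 * r))
    return num / den if den > 0.0 else np.inf

def radius(N, L, M, L0, M0, r_hi=1.0, tol=1e-10):
    lo, hi = tol, r_hi               # assumes the crossing lies in (0, r_hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs23(mid, N, L, M, L0, M0) <= 1.0 else (lo, mid)
    return 0.5 * (lo + hi)
```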
Example 2.
Let $X = Y = \mathbb{R}^3$ and consider the system:
$$H_1(v) = v_3^2(1 - v_2)v_1v_2 + |v_2v_3^2| = 0,$$
$$H_2(v) = v_3^2(v_1^3 - v_1) - v_2^2 + |3v_2^2 - v_3^2 + 1| = 0, \tag{37}$$
$$H_3(v) = 6 - v_1 - v_2^3 + v_2^2v_3 - 2v_1v_2^2 - v_3 + |v_1 + v_3 - v_2| = 0,$$
with the solution $v^* = (1; 2; 3)^T$.
Define:
$$G_1(v) = |v_2v_3^2|, \quad G_2(v) = |3v_2^2 - v_3^2 + 1|, \quad G_3(v) = |v_1 + v_3 - v_2|.$$
Let us denote $x = v$. We choose the initial approximation $x_0 = (2; 3; 5)\,p$. An additional approximation is $x_{-1} = y_0 = x_0 + 0.0001$. We performed the calculations until the conditions $\|x_{k+1} - x_k\| \le \varepsilon$ and $\|H(x_{k+1})\| \le \varepsilon$ were fulfilled.
To solve System (37), we compared methods (3), (4), (6), and the Secant method:
$$x_{k+1} = x_k - \left(F(x_k, x_{k-1}) + G(x_k, x_{k-1})\right)^{-1}\left(F(x_k) + G(x_k)\right), \quad k = 0, 1, 2, \ldots. \tag{38}$$
The computations were performed using Matlab R2010a. The results of the calculations are shown in Table 1.
As can be seen from the column for method (6) in Table 1, the number of iterations for $p = 1, 10, 100$ and $\varepsilon = 10^{-5}, 10^{-15}$ is smaller than the corresponding numbers for the other methods. The presented results show the high efficiency of the proposed method (6).
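For a flavor of how method (6) behaves on a small nondifferentiable problem, the sketch below applies the `two_step_solver` from the Introduction to a toy system of the same general form as System (37); the system, the weight 0.1 on the absolute-value terms, and the starting points are our own illustrative choices, not the paper's test problem:

```python
import numpy as np
# assumes two_step_solver and divided_difference from the Introduction sketch

F = lambda v: np.array([v[0] ** 2 - v[1] - 1.0, v[0] + v[1] ** 2 - 7.0])
dF = lambda v: np.array([[2.0 * v[0], -1.0], [1.0, 2.0 * v[1]]])
G = lambda v: 0.1 * np.array([abs(v[0] - 2.0), abs(v[1] - 1.0)])  # kinks

x0 = np.array([3.0, 2.0])          # initial approximation (assumed)
y0 = x0 + 1e-4                     # additional approximation, as in Example 2
x, iters = two_step_solver(F, dF, G, x0, y0)
print(x, iters, np.linalg.norm(F(x) + G(x)))  # x should approximate a root of F + G
```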

6. Conclusions

We conducted an improved local convergence analysis of two-step solvers for solving nonlinear equations. To study the method, we used center and restricted-radius Lipschitz conditions to obtain a larger radius of convergence and tighter error estimates under the same computational effort. The uniqueness ball of the solution was also established. Our results have the following advantages over earlier works: (1) a larger convergence region, allowing for more initial points; (2) tighter upper bounds on $\|x_{k+1} - x^*\|$, as well as on $\|x_k - x^*\|$, which means that fewer iterations are needed to arrive at a desired error tolerance; and (3) at least as precise information on the location of the solution. In particular cases, the obtained local convergence results contain the results obtained in the works of other authors. The conducted numerical calculations confirm the theoretical results.

Author Contributions

Conceptualization, Formal analysis, I.K.A.; Investigation, Data Curation, S.S.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018.
2. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
3. Hernandez, M.A.; Rubio, M.J. The Secant method for nondifferentiable operators. Appl. Math. Lett. 2002, 15, 395–399.
4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
5. Ren, H.; Argyros, I.K. A new semilocal convergence theorem for a fast iterative method with nondifferentiable operators. J. Appl. Math. Comput. 2010, 34, 39–46.
6. Shakhno, S.M. About the difference method with quadratic convergence for solving nonlinear operator equations. Matematychni Studii 2006, 26, 105–110. (In Ukrainian)
7. Shakhno, S.M. On the secant method under generalized Lipschitz conditions for the divided difference operator. Proc. Appl. Math. Mech. 2007, 7, 2060083–2060084.
8. Bartish, M.Y. About one iterative method of solving functional equations. Dopov. Acad. Nauk Ukr. RSR Ser. A 1968, 5, 387–391. (In Ukrainian)
9. Bartish, M.Y.; Shcherbyna, Y.M. About one difference method of solving operator equations. Dopov. Acad. Nauk Ukr. RSR Ser. A 1972, 7, 579–582. (In Ukrainian)
10. Shakhno, S.M. On a two-step iterative process under generalized Lipschitz conditions for first-order divided differences. J. Math. Sci. 2010, 168, 576–584.
11. Argyros, I.K.; Hilout, S. Newton–Kantorovich approximations under weak continuity conditions. J. Appl. Math. Comput. 2011, 37, 361–375.
12. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684.
13. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
14. Cătinaş, E. On some iterative methods for solving nonlinear equations. Revue d'Analyse Numérique et de Théorie de l'Approximation 1994, 23, 47–53.
15. Shakhno, S.M.; Mel'nyk, I.V.; Yarmola, H.P. Analysis of the convergence of a combined method for the solution of nonlinear equations. J. Math. Sci. 2014, 201, 32–43.
16. Shakhno, S.M. Combined Newton–Kurchatov method under the generalized Lipschitz conditions for the derivatives and divided differences. J. Comput. Appl. Math. (Kiev) 2015, 2, 78–89.
17. Shakhno, S.M.; Yarmola, H.P. Two-point method for solving nonlinear equation with nondifferentiable operator. Matematychni Studii 2011, 36, 213–222. (In Ukrainian)
18. Shakhno, S.; Yarmola, H. Two-step method for solving nonlinear equations with nondifferentiable operator. J. Comput. Appl. Math. (Kiev) 2012, 3, 105–115.
19. Shakhno, S.M. Convergence of the two-step combined method and the uniqueness of solution of the nonlinear operator equations. J. Comput. Appl. Math. 2014, 261, 378–386.
20. Chen, X.; Nashed, Z.; Qi, L. Convergence of Newton's method for singular smooth and nonsmooth equations using adaptive outer inverses. SIAM J. Optim. 1997, 7, 445–462.
21. Chen, X.; Yamamoto, T. Convergence domains of certain iterative methods for solving nonlinear equations. Numer. Funct. Anal. Optim. 1989, 10, 37–48.
22. Ezquerro, J.A.; Hernández, M.A. Newton's Method: An Updated Approach of Kantorovich's Theory; Frontiers in Mathematics; Birkhäuser: Basel, Switzerland, 2017.
23. Elber, G.; Kim, M.-S. Geometric constraint solver using multivariate rational spline functions. In Proceedings of the 6th ACM Symposium on Solid Modeling and Applications, Ann Arbor, MI, USA, 4–8 June 2001; pp. 1–10.
24. Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Des. 2012, 29, 265–279.
25. Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput.-Aided Des. 2011, 43, 1870–1878.
26. Morgan, A.P. A homotopy for solving polynomial systems. Appl. Math. Comput. 1986, 18, 87–92.
27. Morgan, A.; Sommese, A. Computing all solutions to polynomial systems using homotopy continuation. Appl. Math. Comput. 1987, 24, 115–138.
28. Wang, X. Convergence of Newton's method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20, 123–134.
29. Werner, W. Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung. Numer. Math. 1979, 32, 333–342.
30. Shakhno, S.M. Convergence of the two-step Newton-type method for solving nonlinear equations under the generalized Lipschitz conditions. Phys.-Math. Model. Inf. Technol. 2012, 16, 163–172. (In Ukrainian)
31. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comput. Appl. Math. 2009, 231, 222–235.
Table 1. Numbers of iterations for solving System (37).

p    | ε          | (3) | (4) | (38) | (6)
1    | $10^{-5}$  | 85  | 7   | 10   | 7
1    | $10^{-15}$ | 266 | 10  | 12   | 8
10   | $10^{-5}$  | 102 | 11  | 25   | 14
10   | $10^{-15}$ | 284 | 20  | 27   | 16
100  | $10^{-5}$  | 110 | 28  | 39   | 23
100  | $10^{-15}$ | 292 | 30  | 41   | 24
