Article

On the Semi-Local Convergence of an Ostrowski-Type Method for Solving Equations

by Christopher I. Argyros, Ioannis K. Argyros, Janak Joshi, Samundra Regmi and Santhosh George
1. Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
2. Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3. Learning Commons, University of North Texas at Dallas, Dallas, TX 75201, USA
4. Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575025, India
* Author to whom correspondence should be addressed.
Symmetry 2021, 13(12), 2281; https://doi.org/10.3390/sym13122281
Submission received: 13 October 2021 / Revised: 12 November 2021 / Accepted: 18 November 2021 / Published: 1 December 2021
(This article belongs to the Section Mathematics)

Abstract
Symmetries play a crucial role in the dynamics of physical systems. As an example, microworld and quantum physics problems are modeled on principles of symmetry. These problems are then formulated as equations defined on suitable abstract spaces. Then, these equations can be solved using iterative methods. In this article, an Ostrowski-type method for solving equations in Banach space is extended. This is achieved by finding a stricter set than before containing the iterates. The convergence analysis becomes finer. Due to the general nature of our technique, it can be utilized to enlarge the utilization of other methods. Examples finish the paper.

1. Introduction

We are concerned with finding $x_*$ solving

$$F(x) = 0, \tag{1}$$

where $F: D \subset E \longrightarrow E_1$ is an operator acting between Banach spaces $E$ and $E_1$, with $D \neq \emptyset$.
The famous Ostrowski-type method is defined for $x_0 \in D$ and each $k = 0, 1, 2, \ldots$ by

$$y_k = x_k - F'(x_k)^{-1}F(x_k), \qquad x_{k+1} = y_k - A_kF(y_k), \tag{2}$$

where $A_k = 2[y_k, x_k; F]^{-1} - F'(x_k)^{-1}$ and $[\cdot\,,\cdot\,;F]: D \times D \longrightarrow L(E, E_1)$ is a divided difference of order one. There are numerous results on the convergence of iterative methods utilizing the information $(D, x_0, F, F')$ and higher order derivatives [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39]. However, higher order derivatives do not appear in method (2). Moreover, these results provide no uniqueness ball or estimates on $\|x_k - x_*\|$ or $\|x_{k+1} - x_k\|$. That is why we are motivated to write this paper, where only hypotheses on the derivative and on divided differences of order one are used; notice that only these operators appear in method (2).
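For concreteness, the two substeps can be sketched for a scalar equation. This is an illustrative reading of method (2), assuming the second substep applies $A_k$ to $F(y_k)$ and realizing the divided difference $[y, x; F]$ as the secant quotient $(F(y)-F(x))/(y-x)$; it is not code from the paper.

```python
# Sketch of method (2) for a scalar equation F(x) = 0.
# Example instance (assumed): F(x) = x^2 - 2, so x* = sqrt(2).

def ostrowski_type(F, dF, x0, tol=1e-14, max_iter=20):
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        y = x - fx / dF(x)            # first substep (Newton step)
        dd = (F(y) - F(x)) / (y - x)  # divided difference [y, x; F]
        A = 2.0 / dd - 1.0 / dF(x)    # A_k = 2[y_k, x_k; F]^{-1} - F'(x_k)^{-1}
        x = y - A * F(y)              # second substep
    return x

root = ostrowski_type(lambda t: t * t - 2.0, lambda t: 2.0 * t, x0=1.0)
```

A few iterations suffice for machine precision, in line with the fourth order of convergence claimed for this class of methods.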
The method (2) is shown to be of order four using Taylor expansion and assumptions on the fifth order derivative of $F$, which does not appear in the method [5]. Thus, these assumptions reduce its applicability.
For example, let $E = E_1 = \mathbb{R}$, $D = [-0.5, 1.5]$. Define $\lambda$ on $D$ by
$$\lambda(t) = \begin{cases} t^3\log t^2 + t^5 - t^4 & \text{if } t \neq 0\\ 0 & \text{if } t = 0.\end{cases}$$
Then, we get $t_* = 1$, and
$$\lambda'''(t) = 6\log t^2 + 60t^2 - 24t + 22.$$
Obviously, $\lambda'''(t)$ is not bounded on $D$. So, the convergence of method (2) is not guaranteed by the previous analyses in [5].
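The unboundedness of the third derivative near $t = 0$ can be checked numerically; the snippet below is a sketch that also verifies the closed-form $\lambda'''$ against a central finite difference at an interior point.

```python
# lambda(t) = t^3*log(t^2) + t^5 - t^4 and its third derivative
# lambda'''(t) = 6*log(t^2) + 60*t^2 - 24*t + 22, which blows up as t -> 0.
import math

def lam(t):
    return t ** 3 * math.log(t ** 2) + t ** 5 - t ** 4 if t != 0 else 0.0

def lam3(t):
    return 6.0 * math.log(t ** 2) + 60.0 * t ** 2 - 24.0 * t + 22.0

def fd3(f, t, h=1e-3):
    # central finite difference for the third derivative
    return (f(t + 2 * h) - 2 * f(t + h) + 2 * f(t - h) - f(t - 2 * h)) / (2 * h ** 3)

check = abs(lam3(0.5) - fd3(lam, 0.5))               # formula vs. finite difference
growth = [abs(lam3(10.0 ** -k)) for k in (1, 3, 6)]  # |lambda'''| grows as t -> 0
```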
The rest of the study is organized as follows: Section 2 contains results on majorizing sequences. In Section 3, we develop the semi-local convergence analysis based on majorizing sequences. The local convergence analysis can be found in Section 4. Numerical examples can be found in Section 5. The paper ends with some concluding remarks in Section 6.

2. Majorizing Sequences

We recall the definition of a majorizing sequence.
Definition 1.
Let $\{v_k\}$ be a sequence in a complete normed space. Then, a non-decreasing scalar sequence $\{d_k\}$ is called majorizing for $\{v_k\}$ if
$$\|v_{k+1} - v_k\| \leq d_{k+1} - d_k \quad \text{for each } k = 0, 1, 2, \ldots.$$
Then, the convergence of sequence { v k } reduces to studying that of { d k } [40].
Let $\eta \geq 0$ and $L, L_i$, $i = 0, 1, 2, 3, 4$, be positive parameters. Set $M_0 = \frac{L_0L_2}{2}$, $M = \frac{L}{2}$, $M_1 = \frac{LL_2}{2}$ and $M_2 = \frac{LL_3}{2}$. Define the sequences $\{t_k\}, \{s_k\}, \{\alpha_k\}$ and $\{\beta_k\}$ for each $k = 0, 1, 2, \ldots$ by $t_0 = 0$, $s_0 = \eta$,
$$t_1 = s_0 + \frac{M_0s_0^2}{1 - L_1s_0} + \frac{M_2s_0^3}{(1 - L_1s_0)(1 - L_0s_0)} + \frac{Ms_0^2}{1 - L_0s_0}, \qquad s_1 = t_1 + \frac{M(t_1 - t_0)^2 + (t_1 - s_0)}{1 - L_0t_1},$$
$$t_{k+1} = s_k + \alpha_k(s_k - t_k), \qquad s_{k+1} = t_{k+1} + \frac{M(t_{k+1} - t_k)^2 + L_4t_k(t_{k+1} - s_k)}{1 - L_0t_{k+1}},$$
$$\alpha_k = \frac{M_1(s_k - t_k)}{(1 - L_1(s_k + t_k))(1 - L_0t_k)} + \frac{M_2(s_k - t_k)}{(1 - L_1(s_k + t_k))(1 - L_0s_k)} + \frac{M(s_k - t_k)}{1 - L_0s_k},$$
$$\beta_k = \frac{M(t_{k+1} - t_k) + L_4t_k}{1 - L_0t_{k+1}}. \tag{3}$$
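The recurrence above is easy to iterate numerically. The sketch below does so for illustrative parameter values (the constants $L_0, \ldots, L_4$ and $\eta$ are assumptions, not taken from the paper) and checks the interleaving $t_k \leq s_k \leq t_{k+1}$ and boundedness by $1/L_0$.

```python
# Computing the majorizing sequences {t_k}, {s_k} from the recurrence.

def majorizing(L0, L, L1, L2, L3, L4, eta, n=25):
    M0, M, M1, M2 = L0 * L2 / 2, L / 2, L * L2 / 2, L * L3 / 2
    t, s = [0.0], [eta]
    t1 = (s[0] + M0 * s[0] ** 2 / (1 - L1 * s[0])
          + M2 * s[0] ** 3 / ((1 - L1 * s[0]) * (1 - L0 * s[0]))
          + M * s[0] ** 2 / (1 - L0 * s[0]))
    t.append(t1)
    s.append(t1 + (M * (t1 - t[0]) ** 2 + (t1 - s[0])) / (1 - L0 * t1))
    for k in range(1, n):
        d = s[k] - t[k]
        alpha = (M1 * d / ((1 - L1 * (s[k] + t[k])) * (1 - L0 * t[k]))
                 + M2 * d / ((1 - L1 * (s[k] + t[k])) * (1 - L0 * s[k]))
                 + M * d / (1 - L0 * s[k]))
        t.append(s[k] + alpha * d)
        s.append(t[k + 1] + (M * (t[k + 1] - t[k]) ** 2
                 + L4 * t[k] * (t[k + 1] - s[k])) / (1 - L0 * t[k + 1]))
    return t, s

t, s = majorizing(L0=0.5, L=1.0, L1=0.4, L2=0.6, L3=0.6, L4=0.5, eta=0.1)
```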
Moreover, define quadratic polynomials and functions on the interval $[0, 1]$, for some $b > 1$, by
$$p_1(t) = t^2 - (1 - L_0t_1)t + L_4t_1,$$
$$p_2(t) = t^2 + t - \Big(1 - \frac{2bL_1t_1}{b - 1}\Big),$$
$$p_3(t) = t^2 + t - (1 - L_0t_1),$$
$$g_1(t) = (M_1b + M_2b + M)t$$
and
$$g_2(t) = Mt\big((1 + t)t - 1\big)(1 + t) + (1 + t)^2t^2\big(L_4 + L_0t^2(1 + t)\big).$$
Denote by $\gamma_0 \leq \gamma_1$ the non-negative zeros of $p_1$, by $\gamma_2$ the non-negative zero of $p_2$ and by $\gamma_3 \leq \gamma_4$ the non-negative zeros of $p_3$, if they exist. Furthermore, define sequences of functions on the interval $[0, 1]$, for $\delta = \delta(t) = (1 + t)t$, by
$$f_n^{(1)}(t) = M_1bt\delta^{n-1}t_1 + M_2bt\delta^{n-1}t_1 + Mt\delta^{n-1}t_1 + L_0t(1 + \delta + \cdots + \delta^n)t_1 - t$$
and
$$f_n^{(2)}(t) = M(t^2\delta^{n-1} + t\delta^{n-1})t_1 + L_4(1 + \delta + \cdots + \delta^n)t_1 + L_0t(1 + \delta + \cdots + \delta^{n+1})t_1 - t.$$
Next, we present two results on the majorizing sequence for method (2).
Lemma 1.
Suppose that, for each $k = 0, 1, 2, \ldots$,
$$s_k \leq t_{k+1} < \frac{1}{L_0} \tag{4}$$
and
$$s_k + t_k < \frac{1}{L_1} \tag{5}$$
hold. Then, the sequences $\{s_k\}$ and $\{t_k\}$ are increasing, bounded from above by $\frac{1}{L_0}$ and converge to their unique least upper bound $s_* \in [0, \frac{1}{L_0}]$.
Proof. 
It follows from (3)–(5) that the sequences $\{s_k\}, \{t_k\}$ are increasing and bounded from above by $\frac{1}{L_0}$, and as such they converge to $s_*$. □
Remark 1.
Conditions (4) and (5) hold only in some special cases. This is why we present stronger conditions that can be verified more easily.
We shall use the following set of conditions denoted by (A) in our second result on majorizing sequences for method (2).
Suppose: there exist $\gamma \in S := \big(0, \frac{\sqrt{5} - 1}{2}\big)$, $b > 1$ and $\eta > 0$ satisfying
$$0 \leq \alpha_0 \leq \gamma, \quad 0 \leq \beta_0 \leq \gamma, \quad L_0\eta < 1, \quad L_1\eta < 1, \quad L_1t_1 < \frac{b - 1}{2b},$$
$$\gamma_0 \leq \gamma \leq \gamma_1 \ \text{ if } \ g_2(t) \geq 0 \ \text{ for each } t \in S,$$
or
$$f_1^{(2)}(\gamma) \leq 0 \ \text{ if } \ g_2(t) \leq 0 \ \text{ for each } t \in S, \quad \gamma \leq \gamma_2 \ \text{ and } \ \gamma_3 \leq \gamma \leq \gamma_4.$$
Then, under the preceding notation and conditions (A), we can show:
Lemma 2.
Under conditions (A), the conclusions of Lemma 1 hold for the sequences $\{s_k\}, \{t_k\}$. Moreover, the following assertions hold for each $k = 0, 1, 2, \ldots$:
$$0 \leq s_k - t_k \leq \gamma^k(1 + \gamma)^{k-1}(t_1 - t_0), \tag{6}$$
$$0 \leq t_{k+1} - s_k \leq \gamma^{k+1}(1 + \gamma)^{k-1}(t_1 - t_0), \tag{7}$$
$$0 \leq s_k \leq \frac{1 - \delta^{k+1}}{1 - \delta}t_1 \tag{8}$$
and
$$0 \leq t_{k+1} \leq \frac{1 - \delta^{k+2}}{1 - \delta}t_1, \tag{9}$$
where $\delta := \delta(\gamma) = \gamma(1 + \gamma)$.
Proof. 
We shall show using induction on $n$ that the following hold:
$$0 \leq \alpha_n \leq \gamma, \tag{10}$$
$$0 \leq \beta_n \leq \gamma, \tag{11}$$
$$L_0t_n < 1, \quad L_0s_n < 1, \quad L_1(s_n + t_n) < 1 \tag{12}$$
and
$$t_n \leq s_n \leq t_{n+1}. \tag{13}$$
Estimates (10)–(13) hold for $n = 0$ by the initial conditions and conditions (A). We also have
$$0 \leq s_0 - t_0 \leq \eta, \quad 0 \leq t_1 - s_0 \leq \gamma\eta, \quad 0 \leq s_1 - t_1 \leq \gamma(t_1 - t_0), \quad 0 \leq t_2 - s_1 \leq \gamma^2(t_1 - t_0),$$
$$0 \leq s_2 - t_2 \leq \gamma^2(1 + \gamma)(t_1 - t_0), \quad 0 \leq t_3 - s_2 \leq \gamma^3(1 + \gamma)(t_1 - t_0), \ \ldots,$$
$$0 \leq s_n - t_n \leq \gamma^n(1 + \gamma)^{n-1}(t_1 - t_0), \tag{14}$$
$$0 \leq t_{n+1} - s_n \leq \gamma^{n+1}(1 + \gamma)^{n-1}(t_1 - t_0), \tag{15}$$
$$t_{n+1} \leq s_n + \gamma^{n+1}(1 + \gamma)^{n-1}(t_1 - t_0) \leq t_n + \gamma^n(1 + \gamma)^{n-1}(t_1 - t_0) + \gamma^{n+1}(1 + \gamma)^{n-1}(t_1 - t_0) \leq \cdots \leq (1 + \delta + \cdots + \delta^{n+1})t_1 = \frac{1 - \delta^{n+2}}{1 - \delta}t_1 \tag{16}$$
and
$$s_n \leq t_n + \gamma^n(1 + \gamma)^{n-1}t_1 \leq \frac{1 - \delta^{n+1}}{1 - \delta}t_1. \tag{17}$$
Suppose these estimates hold for all integers smaller than or equal to $n$. Then, evidently, (10) holds (since $\frac{1}{1 - L_1(s_n + t_n)} \leq b$) if we show instead, using (14)–(17), that
$$\frac{M_1b(s_n - t_n)}{1 - L_0t_n} + \frac{M_2b(s_n - t_n)}{1 - L_0s_n} + \frac{M(s_n - t_n)}{1 - L_0s_n} \leq \gamma \tag{18}$$
or
$$M_1b\gamma\delta^{n-1}t_1 + M_2b\gamma\delta^{n-1}t_1 + M\gamma\delta^{n-1}t_1 + \gamma L_0(1 + \delta + \cdots + \delta^n)t_1 - \gamma \leq 0. \tag{19}$$
Notice that expression (19) is obtained if we replace $s_n - t_n$, $t_n$, $s_n$ by the right hand sides of (14), (15) and (17), respectively, in (18), remove the denominators and collect all terms on the left hand side of the inequality.
Estimate (19) motivates us to define the functions $f_n^{(1)}$ on the interval $[0, 1]$ and show instead of (19) that
$$f_n^{(1)}(t) \leq 0 \quad \text{at } t = \gamma. \tag{20}$$
We shall find a relationship between two consecutive functions $f_n^{(1)}$. We can write in turn that
$$f_{n+1}^{(1)}(t) = f_n^{(1)}(t) + \big(M_1bt\delta^n - M_1bt\delta^{n-1}\big)t_1 + \big(M_2bt\delta^n - M_2bt\delta^{n-1}\big)t_1 + \big(Mt\delta^n - Mt\delta^{n-1}\big)t_1 + tL_0\delta^{n+1}t_1 = f_n^{(1)}(t) + (\delta - 1)g_1(t)\delta^{n-1}t_1 \leq f_n^{(1)}(t),$$
since $\delta \leq 1$ for $t \in [0, \frac{\sqrt{5} - 1}{2}]$, so
$$f_{n+1}^{(1)}(t) \leq f_n^{(1)}(t). \tag{21}$$
Define the function
$$f^{(1)}(t) = \lim_{n \to +\infty} f_n^{(1)}(t). \tag{22}$$
By the definition of the functions $f_n^{(1)}$ and $f^{(1)}$, we get
$$f^{(1)}(t) = \frac{L_0t\,t_1}{1 - \delta} - t. \tag{23}$$
Then, we can show instead of (20) that
$$f^{(1)}(t) \leq 0 \quad \text{at } t = \gamma, \tag{24}$$
which is true by the definition of $p_3$ and $\gamma_3 \leq \gamma \leq \gamma_4$. Similarly, (11) holds if
$$M(\gamma^2\delta^{n-1} + \gamma\delta^{n-1})t_1 + L_4(1 + \delta + \cdots + \delta^n)t_1 + L_0\gamma(1 + \delta + \cdots + \delta^{n+1})t_1 - \gamma \leq 0 \tag{25}$$
or
$$f_n^{(2)}(t) \leq 0 \quad \text{at } t = \gamma. \tag{26}$$
This time, we have
$$f_{n+1}^{(2)}(t) = f_n^{(2)}(t) + M\big(t^2\delta^n + t\delta^n - t^2\delta^{n-1} - t\delta^{n-1}\big)t_1 + L_4\delta^{n+1}t_1 + L_0t\delta^{n+2}t_1,$$
so
$$f_{n+1}^{(2)}(t) = f_n^{(2)}(t) + g_2(t)\delta^{n-1}t_1. \tag{27}$$
Define the function
$$f^{(2)}(t) = \lim_{n \to +\infty} f_n^{(2)}(t). \tag{28}$$
Then, we get
$$f^{(2)}(t) = \frac{L_4t_1}{1 - t} + \frac{L_0t\,t_1}{1 - t} - t. \tag{29}$$
If $\gamma_0 \leq \gamma \leq \gamma_1$, then $g_2(t) \geq 0$ for each $t \in S$ and $f^{(2)}(t) \leq 0$ holds at $t = \gamma$. However, if $g_2(t) \leq 0$ for each $t \in S$, then
$$f_{n+1}^{(2)}(t) \leq f_n^{(2)}(t). \tag{30}$$
In this case, (26) holds if $f_1^{(2)}(t) \leq 0$ at $t = \gamma$, which is true by conditions (A). Therefore, the induction for (10)–(13) is completed. Hence, the sequences $\{s_k\}, \{t_k\}$ are non-decreasing, bounded from above by $\frac{t_1}{1 - \delta}$, and as such they converge to $s_*$ satisfying $s_* \in [\eta, \frac{t_1}{1 - \delta}]$. □

3. Semi-Local Convergence

We shall use the conditions (H). Suppose:
(H1)
There exist $x_0 \in D$, $\eta \geq 0$ such that $F'(x_0)$ is invertible and
$$\|F'(x_0)^{-1}F(x_0)\| \leq \eta.$$
(H2)
For each $u \in D$,
$$\|F'(x_0)^{-1}(F'(u) - F'(x_0))\| \leq L_0\|u - x_0\|.$$
Set $D_0 = U(x_0, \frac{1}{L_0}) \cap D$.
(H3)
For each $v, w \in D_0$,
$$\|F'(x_0)^{-1}(F'(w) - F'(v))\| \leq L\|w - v\|,$$
$$\|F'(x_0)^{-1}([v, w; F] - F'(x_0))\| \leq L_1(\|v - x_0\| + \|w - x_0\|),$$
$$\|F'(x_0)^{-1}([v, w; F] - F'(w))\| \leq L_2\|v - w\|,$$
$$\|F'(x_0)^{-1}([v, w; F] - F'(v))\| \leq L_3\|v - w\|$$
and
$$\|F'(x_0)^{-1}F'(w)\| \leq L_4\|w - x_0\|.$$
(H4)
$U[x_0, s_*] \subseteq D$
and
(H5)
the conditions of Lemma 1 or Lemma 2 hold.
Then, based on conditions (H), we present the semi-local convergence analysis of method (2).
Theorem 1.
Suppose hypotheses (H) hold. Then, the sequences $\{x_k\}, \{y_k\}$ generated by method (2) with starter $x_0$ are well defined in $U[x_0, s_*]$, remain in $U[x_0, s_*]$ for each $k = 0, 1, 2, \ldots$ and converge to a solution $x_* \in U[x_0, s_*]$ of the equation $F(x) = 0$. Moreover, the following error estimates hold:
$$\|x_k - x_*\| \leq s_* - t_k. \tag{31}$$
Proof. 
Mathematical induction is employed to show
$$\|x_{k+1} - y_k\| \leq t_{k+1} - s_k \tag{32}$$
and
$$\|y_k - x_k\| \leq s_k - t_k. \tag{33}$$
Iterate $y_0$ is well defined by the first substep of method (2) and (H1). We can write
$$\|y_0 - x_0\| = \|F'(x_0)^{-1}F(x_0)\| \leq \eta = s_0 - t_0 \leq s_*, \tag{34}$$
so $y_0 \in U[x_0, s_*]$. Using (H3), we get in turn, for $v, w \in U(x_0, s_*)$,
$$\|F'(x_0)^{-1}([v, w; F] - F'(x_0))\| \leq L_1(\|v - x_0\| + \|w - x_0\|) \leq 2L_1s_* < 1,$$
so, by the lemma on invertible operators due to Banach [41,42],
$$\|[v, w; F]^{-1}F'(x_0)\| \leq \frac{1}{1 - L_1(\|v - x_0\| + \|w - x_0\|)}. \tag{35}$$
Similarly, iterate $x_1$ is well defined by the second substep of method (2). We also have by (H2), for $w \in U(x_0, s_*)$,
$$\|F'(x_0)^{-1}(F'(w) - F'(x_0))\| \leq L_0\|w - x_0\| \leq L_0s_* < 1,$$
so $F'(w)^{-1} \in L(E_1, E)$ and
$$\|F'(w)^{-1}F'(x_0)\| \leq \frac{1}{1 - L_0\|w - x_0\|}. \tag{36}$$
Hence, by (35) for $v = y_0$, $w = x_0$ and (36) for $w = x_0$, we have
$$x_1 - y_0 = -[y_0, x_0; F]^{-1}\big(F'(x_0) - [y_0, x_0; F]\big)F'(x_0)^{-1}F(y_0) - [y_0, x_0; F]^{-1}\big(F'(y_0) - [y_0, x_0; F]\big)F'(y_0)^{-1}F(y_0) - F'(y_0)^{-1}F(y_0). \tag{37}$$
In view of (H3), (35), (36) (for $v = y_0$, $w = x_0$), (37) and the triangle inequality, we get in turn
$$\|x_1 - y_0\| \leq \frac{LL_2\|y_0 - x_0\|^3}{2(1 - L_1(\|y_0 - x_0\| + \|x_0 - x_0\|))(1 - L_0\|x_0 - x_0\|)} + \frac{LL_3\|y_0 - x_0\|^3}{2(1 - L_1(\|y_0 - x_0\| + \|x_0 - x_0\|))(1 - L_0\|y_0 - x_0\|)} + \frac{L\|y_0 - x_0\|^2}{2(1 - L_0\|y_0 - x_0\|)} \leq \alpha_0(s_0 - t_0) = t_1 - s_0 \tag{38}$$
and
$$\|x_1 - x_0\| \leq \|x_1 - y_0\| + \|y_0 - x_0\| \leq t_1 - s_0 + s_0 - t_0 = t_1 \leq s_*,$$
so $x_1 \in U[x_0, s_*]$. Thus, estimates (32) and (33) hold for $n = 0$, where we also used
$$\|F'(x_0)^{-1}F(y_0)\| = \left\|\int_0^1 F'(x_0)^{-1}\big(F'(x_0 + \theta(y_0 - x_0)) - F'(x_0)\big)(y_0 - x_0)\,d\theta\right\| \leq \frac{L_0}{2}\|y_0 - x_0\|^2 \leq \frac{L}{2}(s_0 - t_0)^2. \tag{39}$$
We know that (36) holds for $w = x_1$, so iterate $y_1$ is well defined by the first substep of method (2) for $k = 1$, and we can write
$$F(x_1) = F(x_1) - F(x_0) - F'(x_0)(x_1 - x_0) + F'(x_0)(x_1 - y_0) = \int_0^1\big(F'(x_0 + \theta(x_1 - x_0)) - F'(x_0)\big)(x_1 - x_0)\,d\theta + F'(x_0)(x_1 - y_0). \tag{40}$$
Then, we obtain by method (2), (36) (for $w = x_1$), (40) and the triangle inequality
$$\|y_1 - x_1\| \leq \frac{\frac{L}{2}\|x_1 - x_0\|^2 + \|x_1 - y_0\|}{1 - L_0\|x_1 - x_0\|} \leq \frac{M(t_1 - t_0)^2 + (t_1 - s_0)}{1 - L_0t_1} = s_1 - t_1. \tag{41}$$
Then, we have
$$\|y_1 - x_0\| \leq \|y_1 - x_1\| + \|x_1 - x_0\| \leq s_1 - t_1 + t_1 = s_1 \leq s_*,$$
so $y_1 \in U[x_0, s_*]$. Suppose estimates (32) and (33) hold for all integers smaller than or equal to $n - 1$. Then, simply repeat the preceding calculations with $x_0, y_0, x_1$ replaced by $x_m, y_m, x_{m+1}$, respectively, and use the induction hypotheses to terminate the proof of (32) and (33). By the Lemma, $\{t_k\}$ converges, so $\{x_k\}$ is a Cauchy sequence in the Banach space $E$, and as such it converges to some $x_* \in U[x_0, s_*]$, since this is a closed set. Finally, using (40), we get
$$\|F'(x_0)^{-1}F(x_{k+1})\| \leq \frac{L}{2}\|x_{k+1} - x_k\|^2 + L_4\|x_k - x_0\|\,\|x_{k+1} - y_k\| \leq \frac{L}{2}(t_{k+1} - t_k)^2 + L_4t_k(t_{k+1} - s_k) \longrightarrow 0$$
as $k \longrightarrow +\infty$, implying $F(x_*) = 0$ (by the continuity of $F$). □
The point $s_*$ can be replaced by $\frac{1}{L_0}$ or $\frac{t_1}{1 - \delta}$, respectively, which are given in closed form.
Next, a uniqueness result for the solution $x_*$ of the equation $F(x) = 0$ is presented.
Proposition 1.
Suppose:
(a) 
There exists a solution x * D of equation F ( x ) = 0 ;
(b) 
There exists $s \geq s_*$ such that
$$\frac{L_0}{2}(s + s_*) < 1. \tag{42}$$
Set $D_1 = U[x_0, s] \cap D$. Then, the only solution of the equation $F(x) = 0$ in the region $D_1$ is $x_*$.
Proof. 
Let $x_{**} \in D_1$ with $F(x_{**}) = 0$. Set $M = \int_0^1 F'(x_{**} + \theta(x_* - x_{**}))\,d\theta$. Using (H2) and (42), we obtain in turn that
$$\|F'(x_0)^{-1}(M - F'(x_0))\| \leq L_0\int_0^1\big((1 - \theta)\|x_{**} - x_0\| + \theta\|x_* - x_0\|\big)\,d\theta \leq L_0\int_0^1(1 - \theta)s\,d\theta + L_0\int_0^1\theta s_*\,d\theta = \frac{L_0}{2}(s + s_*) < 1, \tag{43}$$
so $x_{**} = x_*$ follows from the invertibility of the linear operator $M$ and the identity $M(x_* - x_{**}) = F(x_*) - F(x_{**}) = 0 - 0 = 0$. □

4. Local Convergence

Let $\ell, \ell_j$, $j = 0, 1, 2, 3, 4$, be positive parameters. Define the function $\psi_1: [0, \frac{1}{\ell_0}) \longrightarrow [0, +\infty)$ by
$$\psi_1(t) = \frac{\ell t}{2(1 - \ell_0t)}$$
and set
$$\rho_A = \frac{2}{2\ell_0 + \ell}.$$
Define the function $q: [0, \frac{1}{\ell_0}) \longrightarrow (-\infty, +\infty)$ by
$$q(t) = \ell_4(1 + \psi_1(t))t - 1.$$
By this definition, we have $q(0) = -1$ and $q(t) \longrightarrow +\infty$ as $t \longrightarrow \frac{1}{\ell_0}^-$. It then follows from the intermediate value theorem that the function $q$ has zeros in $(0, \frac{1}{\ell_0})$. Denote by $\rho_q$ the smallest such zero. Similarly, denote by $\rho_p$ the smallest zero of the function $p: [0, \frac{1}{\ell_0}) \longrightarrow (-\infty, +\infty)$ defined by $p(t) = \ell_0\psi_1(t)t - 1$. Set $\bar\rho = \min\{\rho_q, \rho_p\}$. Moreover, define the function $\psi_2: [0, \bar\rho) \longrightarrow [0, +\infty)$ by
$$\psi_2(t) = \frac{\ell\psi_1(t)^2t}{2(1 - \ell_0\psi_1(t)t)} + \frac{\ell_1\ell_3(1 + \psi_1(t))\psi_1(t)t}{(1 - \ell_0\psi_1(t)t)(1 - \ell_4(1 + \psi_1(t))t)} + \frac{\ell_1\ell_2(1 + \psi_1(t))\psi_1(t)t}{(1 - \ell_4(1 + \psi_1(t))t)^2}. \tag{44}$$
Set
$$\mu(t) = \psi_2(t) - 1.$$
We have again $\mu(0) = -1$ and $\mu(t) \longrightarrow +\infty$ as $t \longrightarrow \bar\rho^-$. Denote by $\rho_\mu$ the smallest zero of the function $\mu$ in $(0, \bar\rho)$. We shall show that
$$\rho_* = \min\{\rho_A, \rho_\mu\} \tag{45}$$
is a convergence radius for method (2). Set $T = [0, \rho_*)$. Then, it follows from these definitions that for each $t \in T$,
$$0 \leq \ell_0t < 1, \tag{46}$$
$$q(t) < 0, \tag{47}$$
$$p(t) < 0 \tag{48}$$
and
$$0 \leq \psi_i(t) < 1, \quad i = 1, 2. \tag{49}$$
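The role of $\rho_A$ can be checked numerically: from the definitions, $\psi_1$ is increasing on $[0, \frac{1}{\ell_0})$ and $\psi_1(\rho_A) = 1$, so $\psi_1(t) < 1$ for $t < \rho_A$. The snippet below is a sketch with assumed sample values of $\ell_0$ and $\ell$.

```python
# psi_1(t) = l*t / (2*(1 - l0*t)) and rho_A = 2 / (2*l0 + l).

def psi1(t, l0, l):
    return l * t / (2.0 * (1.0 - l0 * t))

l0, l = 1.5, 2.0  # assumed sample parameters
rho_A = 2.0 / (2.0 * l0 + l)
vals = [psi1(0.5 * rho_A, l0, l), psi1(0.9 * rho_A, l0, l), psi1(rho_A, l0, l)]
```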
The conditions (C) shall be used, together with the preceding notation, provided that $x_*$ is a simple solution of the equation $F(x) = 0$. Suppose:
(C1)
For each $u \in D$,
$$\|F'(x_*)^{-1}(F'(u) - F'(x_*))\| \leq \ell_0\|u - x_*\|.$$
Set $D_2 = U(x_*, \frac{1}{\ell_0}) \cap D$.
(C2)
For each $v, w \in D_2$,
$$\|F'(x_*)^{-1}(F'(w) - F'(v))\| \leq \ell\|w - v\|,$$
$$\|F'(x_*)^{-1}F(v)\| \leq \ell_1\|v - x_*\|,$$
$$\|F'(x_*)^{-1}([w, v; F] - F'(v))\| \leq \ell_2\|w - v\|,$$
$$\|F'(x_*)^{-1}([w, v; F] - F'(w))\| \leq \ell_3\|w - v\|$$
and
$$\|F'(x_*)^{-1}([w, v; F] - F'(x_*))\| \leq \ell_4(\|w - x_*\| + \|v - x_*\|).$$
(C3)
$U[x_*, \rho_*] \subseteq D$.
Next, we present the local convergence analysis of method (2).
Theorem 2.
Under the conditions (C), further suppose that $x_0 \in U(x_*, \rho_*) \setminus \{x_*\}$. Then, we have $\lim_{k \to +\infty} x_k = x_*$.
Proof. 
We shall use mathematical induction to show
$$\|y_n - x_*\| \leq \psi_1(\|x_n - x_*\|)\|x_n - x_*\| \leq \|x_n - x_*\| < \rho_* \tag{50}$$
and
$$\|x_{n+1} - x_*\| \leq \psi_2(\|x_n - x_*\|)\|x_n - x_*\| \leq \|x_n - x_*\|, \tag{51}$$
where the functions $\psi_i$ are given previously and the radius $\rho_*$ is defined by (45). Let $z \in U(x_*, \rho_*) \setminus \{x_*\}$. Then, using (C1), (45) and (46), we obtain
$$\|F'(x_*)^{-1}(F'(z) - F'(x_*))\| \leq \ell_0\|z - x_*\| \leq \ell_0\rho_* < 1, \tag{52}$$
so $F'(z)$ is invertible with
$$\|F'(z)^{-1}F'(x_*)\| \leq \frac{1}{1 - \ell_0\|z - x_*\|}, \tag{53}$$
and iterate $y_0$ exists by (52) for $z = x_0$. Then, we can write
$$y_0 - x_* = \int_0^1 F'(x_0)^{-1}\big(F'(x_* + \theta(x_0 - x_*)) - F'(x_0)\big)\,d\theta\,(x_0 - x_*), \tag{54}$$
so, by (C1), (C2) and (52) (for $z = x_0$), we get
$$\|y_0 - x_*\| \leq \frac{\ell\|x_0 - x_*\|^2}{2(1 - \ell_0\|x_0 - x_*\|)} = \psi_1(\|x_0 - x_*\|)\|x_0 - x_*\| \leq \|x_0 - x_*\| < \rho_*, \tag{55}$$
so $y_0 \in U(x_*, \rho_*)$ and (50) holds for $n = 0$. As in (52), we also show
$$\|F'(y_0)^{-1}F'(x_*)\| \leq \frac{1}{1 - \ell_0\|y_0 - x_*\|} \tag{56}$$
and
$$\|[y_0, x_0; F]^{-1}F'(x_*)\| \leq \frac{1}{1 - \ell_4(\|y_0 - x_*\| + \|x_0 - x_*\|)}, \tag{57}$$
so iterate $x_1$ exists. Then, we can write in turn, by the second substep of method (2), that
$$x_1 - x_* = y_0 - x_* - F'(y_0)^{-1}F(y_0) + \big(F'(y_0)^{-1} - [y_0, x_0; F]^{-1}\big)F(y_0) - \big([y_0, x_0; F]^{-1} - F'(x_0)^{-1}\big)F(y_0) = y_0 - x_* - F'(y_0)^{-1}F(y_0) + F'(y_0)^{-1}\big([y_0, x_0; F] - F'(y_0)\big)[y_0, x_0; F]^{-1}F(y_0) - [y_0, x_0; F]^{-1}\big(F'(x_0) - [y_0, x_0; F]\big)F'(x_0)^{-1}F(y_0). \tag{58}$$
Then, in view of (45), (49), (C2), (52) (for $z = y_0$) and (54)–(57), we get in turn that
$$\|x_1 - x_*\| \leq \frac{\ell\|y_0 - x_*\|^2}{2(1 - \ell_0\|y_0 - x_*\|)} + \frac{\ell_3\|y_0 - x_0\|\,\ell_1\|y_0 - x_*\|}{(1 - \ell_0\|y_0 - x_*\|)(1 - \ell_4(\|y_0 - x_*\| + \|x_0 - x_*\|))} + \frac{\ell_2\|y_0 - x_0\|\,\ell_1\|y_0 - x_*\|}{(1 - \ell_4(\|y_0 - x_*\| + \|x_0 - x_*\|))^2} \leq \psi_2(\|x_0 - x_*\|)\|x_0 - x_*\|, \tag{59}$$
showing (51) for $n = 0$ and $x_1 \in U(x_*, \rho_*)$, where we also used (53) and
$$\|y_0 - x_0\| \leq \|y_0 - x_*\| + \|x_* - x_0\| \leq \psi_1(\|x_0 - x_*\|)\|x_0 - x_*\| + \|x_0 - x_*\| = (1 + \psi_1(\|x_0 - x_*\|))\|x_0 - x_*\|.$$
If we exchange $x_0, y_0, x_1$ with $x_m, y_m, x_{m+1}$, respectively, in the previous calculations, we complete the induction for (50) and (51). Then, from the estimate
$$\|x_{m+1} - x_*\| \leq \lambda_1\|x_m - x_*\|,$$
where $\lambda_1 = \psi_2(\|x_0 - x_*\|) \in [0, 1)$, we conclude $\lim_{m \to +\infty} x_m = x_*$. We also have
$$\|y_m - x_*\| \leq \lambda_2\|x_m - x_*\| < \rho_*,$$
where $\lambda_2 = \psi_1(\|x_0 - x_*\|) \in [0, 1)$, so $\lim_{m \to +\infty} y_m = x_*$. □
Next, we present a uniqueness of the solution result.
Proposition 2.
Suppose:
(a) 
x * D is a simple solution of equation F ( x ) = 0 .
(b) 
There exists $\tilde s > 0$ such that
$$\frac{\ell_0}{2}\tilde s < 1. \tag{60}$$
Set $D_4 = D \cap U[x_*, \tilde s]$. Then, the only solution of the equation $F(x) = 0$ in the region $D_4$ is $x_*$.
Proof. 
Let $x_{**} \in D_4$ with $F(x_{**}) = 0$. Set $Q = \int_0^1 F'(x_* + \theta(x_{**} - x_*))\,d\theta$. Then, using (C1) and (60), we obtain
$$\|F'(x_*)^{-1}(Q - F'(x_*))\| \leq \frac{\ell_0}{2}\|x_{**} - x_*\| \leq \frac{\ell_0}{2}\tilde s < 1, \tag{61}$$
so $x_{**} = x_*$, since $Q^{-1} \in L(E_1, E)$ and $Q(x_{**} - x_*) = F(x_{**}) - F(x_*) = 0 - 0 = 0$. □

5. Numerical Experiments

We provide some examples showing that the old convergence criteria are not verified, but ours are. The divided difference is chosen as
$$[u, v; F] = \int_0^1 F'(v + \theta(u - v))\,d\theta.$$
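This choice of divided difference satisfies the secant property $[u, v; F](u - v) = F(u) - F(v)$. The sketch below checks it numerically for the assumed example $F(x) = x^3$, approximating the integral with composite Simpson's rule (which is exact here, since the integrand $F'$ is quadratic).

```python
# [u, v; F] = integral_0^1 F'(v + theta*(u - v)) d(theta), via Simpson's rule.

def divided_difference(dF, u, v, n=10):
    # composite Simpson's rule on [0, 1] with 2n subintervals
    h = 1.0 / (2 * n)
    total = dF(v) + dF(u)
    for i in range(1, 2 * n):
        theta = i * h
        total += (4 if i % 2 else 2) * dF(v + theta * (u - v))
    return total * h / 3.0

u, v = 2.0, 0.5
dd = divided_difference(lambda x: 3.0 * x * x, u, v)  # F(x) = x^3, F'(x) = 3x^2
secant = (u ** 3 - v ** 3) / (u - v)                  # exact secant quotient
```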
Example 1.
Define the function
$$h(t) = c_0t + c_1 + c_2\sin c_3t, \quad t_0 = 0,$$
where $c_j$, $j = 0, 1, 2, 3$, are parameters. Then, clearly, for $c_3$ large and $c_2$ small, $\frac{L_0}{L}$ can be made arbitrarily small.
Example 2.
Let $E = E_1 = C([0, 1])$, the space of continuous functions defined on $[0, 1]$, equipped with the max-norm. Choose $D = U(0, d)$, $d > 1$. Define $G$ on $D$ by
$$G(x)(s) = x(s) - w(s) - \epsilon\int_0^1 P(s, t)x^3(t)\,dt,$$
$x \in E$, $s \in [0, 1]$, where $w \in E$ is given, $\epsilon$ is a parameter and $P$ is the Green's kernel given by
$$P(s, t) = \begin{cases}(1 - s)t, & t \leq s\\ s(1 - t), & s \leq t.\end{cases}$$
Then, the derivative of $G$ is given by
$$(G'(x)z)(s) = z(s) - 3\epsilon\int_0^1 P(s, t)x^2(t)z(t)\,dt,$$
$z \in E$, $s \in [0, 1]$. Consider $x_0(s) = w(s) = 1$ and $|\epsilon| < \frac{8}{3}$. We get
$$\|I - G'(x_0)\| < \frac{3}{8}|\epsilon|, \quad G'(x_0)^{-1} \in L(E_1, E),$$
$$\|G'(x_0)^{-1}\| \leq \frac{8}{8 - 3|\epsilon|}, \quad \eta = \frac{|\epsilon|}{8 - 3|\epsilon|}, \quad L_0 = \frac{12|\epsilon|}{8 - 3|\epsilon|}$$
and $L = \frac{6\eta|\epsilon|}{8 - 3|\epsilon|}$.
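An assumed concrete instance (taking $\epsilon = 1$, which is not a value singled out by the paper) makes the constants tangible; the snippet also checks the condition $L_0\eta < 1$ from conditions (A).

```python
# Example 2 constants for an assumed eps with |eps| < 8/3.
eps = 1.0
assert abs(eps) < 8.0 / 3.0
eta = abs(eps) / (8.0 - 3.0 * abs(eps))    # eta = |eps| / (8 - 3|eps|)
L0 = 12.0 * abs(eps) / (8.0 - 3.0 * abs(eps))
product = L0 * eta                          # must satisfy L0 * eta < 1
```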
Example 3.
Let $E$, $E_1$ and $D$ be as in Example 2. It is well known that the boundary value problem [4]
$$\xi(0) = 0, \quad \xi(1) = 1,$$
$$\xi'' = -\xi^3 - \lambda\xi^2$$
can be given as a Hammerstein-like nonlinear integral equation
$$\xi(s) = s + \int_0^1 K(s, t)\big(\xi^3(t) + \lambda\xi^2(t)\big)\,dt,$$
where $\lambda$ is a parameter and $K$ is the Green's kernel. Then, define $F: D \longrightarrow E_1$ by
$$[F(x)](s) = x(s) - s - \int_0^1 K(s, t)\big(x^3(t) + \lambda x^2(t)\big)\,dt.$$
Choose $\xi_0(s) = s$ and $D = U(\xi_0, \rho_0)$. Then, clearly, $U(\xi_0, \rho_0) \subset U(0, \rho_0 + 1)$, since $\|\xi_0\| = 1$. Suppose $2\lambda < 5$. Then, conditions (A) are satisfied for
$$L_0 = \frac{2\lambda + 3\rho_0 + 6}{8}, \quad L = \frac{\lambda + 6\rho_0 + 3}{4}$$
and $\eta = \frac{1 + \lambda}{5 - 2\lambda}$. Notice that $L_0 < L$.
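The constants of Example 3 can be evaluated directly; the values $\lambda = 1$ and $\rho_0 = 1$ below are assumed for illustration, and the check confirms $L_0 < L$.

```python
# Example 3 constants for assumed lambda and rho_0 with 2*lambda < 5.
lam_, rho0 = 1.0, 1.0
assert 2.0 * lam_ < 5.0
L0 = (2.0 * lam_ + 3.0 * rho0 + 6.0) / 8.0
L = (lam_ + 6.0 * rho0 + 3.0) / 4.0
eta = (1.0 + lam_) / (5.0 - 2.0 * lam_)
```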
Example 4.
Consider the motion system
$$T_1'(x) = e^x, \quad T_2'(y) = (e - 1)y + 1, \quad T_3'(z) = 1$$
with $T_1(0) = T_2(0) = T_3(0) = 0$. Let $T = (T_1, T_2, T_3)$. Let $E = E_1 = \mathbb{R}^3$, $D = B[0, 1]$, $x_* = (0, 0, 0)^T$. Define the function $T$ on $D$, for $w = (x, y, z)^T$, by
$$T(w) = \Big(e^x - 1, \frac{e - 1}{2}y^2 + y, z\Big)^T.$$
Then, we get
$$T'(w) = \begin{pmatrix} e^x & 0 & 0\\ 0 & (e - 1)y + 1 & 0\\ 0 & 0 & 1\end{pmatrix},$$
so $\ell_0 = e - 1$, $\ell = e^{\frac{1}{e - 1}}$, $\ell_1 = 1$, $\ell_2 = \ell_3 = \frac{\ell}{2}$, $\ell_4 = \frac{\ell_0}{2}$. Then, the radii are
$$\rho_A = 0.3827\ldots = \rho_*, \quad \rho_\mu = 1.7156\ldots.$$
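The radius $\rho_A$ of Example 4 follows directly from the closed form $\rho_A = \frac{2}{2\ell_0 + \ell}$ with $\ell_0 = e - 1$ and $\ell = e^{1/(e-1)}$, and can be reproduced in a few lines:

```python
# Reproducing rho_A = 2 / (2*l0 + l) for Example 4.
import math

l0 = math.e - 1.0                  # l0 = e - 1
l = math.exp(1.0 / (math.e - 1.0)) # l = e^(1/(e-1))
rho_A = 2.0 / (2.0 * l0 + l)       # approx 0.3827
```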

6. Conclusions

A finer convergence analysis is presented for method (2) utilizing generalized conditions. This analysis includes weaker criteria of convergence and computable error bounds not given in earlier papers.

Author Contributions

Conceptualization, C.I.A., I.K.A., J.J., S.R. and S.G.; methodology, C.I.A., I.K.A., J.J., S.R. and S.G.; software, C.I.A., I.K.A., J.J., S.R. and S.G.; validation, C.I.A., I.K.A., J.J., S.R. and S.G.; formal analysis, C.I.A., I.K.A., J.J., S.R. and S.G.; investigation, C.I.A., I.K.A., J.J., S.R. and S.G.; resources, C.I.A., I.K.A., J.J., S.R. and S.G.; data curation, C.I.A., I.K.A., J.J., S.R. and S.G.; writing—original draft preparation, C.I.A., I.K.A., J.J., S.R. and S.G.; writing—review and editing, C.I.A., I.K.A., J.J., S.R. and S.G.; visualization, C.I.A., I.K.A., J.J., S.R. and S.G.; supervision, C.I.A., I.K.A., J.J., S.R. and S.G.; project administration, C.I.A., I.K.A., J.J., S.R. and S.G.; funding acquisition, C.I.A., I.K.A., J.J., S.R. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, X.; Yamamoto, T. Convergence domains of certain iterative methods for solving nonlinear equations. Numer. Funct. Anal. Optim. 1989, 10, 37–48. [Google Scholar] [CrossRef]
  2. Ðukić, D.; Paunović, L.; Radenović, S. Convergence of iterates with errors of uniformly quasi-Lipschitzian mappings in cone metric spaces. Kragujev. J. Math. 2011, 35, 399–410. [Google Scholar]
  3. Ezquerro, J.A.; Gutiérrez, J.M.; Hernández, M.A.; Romero, N.; Rubio, M.J. The Newton method: From Newton to Kantorovich (Spanish). Gac. R. Soc. Mat. Esp. 2010, 13, 53–76. [Google Scholar]
  4. Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Birkhäuser: Cham, Switzerland, 2018. [Google Scholar]
  5. Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  6. Magréñan, A.A.; Gutiérrez, J.M. Real dynamics for damped Newton’s method applied to cubic polynomials. J. Comput. Appl. Math. 2015, 275, 527–538. [Google Scholar] [CrossRef]
  7. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257. [Google Scholar] [CrossRef]
  8. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
  9. Todorcević, V. Harmonic Quasiconformal Mappings and Hyperbolic Type Metrics; Springer Nature AG: Cham, Switzerland, 2019. [Google Scholar]
  10. Yamamoto, T. A convergence theorem for Newton-like methods in Banach spaces. Numer. Math. 1987, 51, 545–557. [Google Scholar] [CrossRef] [Green Version]
  11. Argyros, I.K. On the Newton-Kantorovich hypothesis for solving equations. J. Comput. Appl. Math. 2004, 169, 315–332. [Google Scholar] [CrossRef] [Green Version]
  12. Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007; Volume 15. [Google Scholar]
  13. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: Berlin, Germany, 2008. [Google Scholar]
  14. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef] [Green Version]
  15. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton’s method. Appl. Math. Comput. 2013, 225, 372–386. [Google Scholar] [CrossRef]
  16. Argyros, I.K.; Magréñan, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  17. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
  18. Behl, R.; Maroju, P.; Martinez, E.; Singh, S. A study of the local convergence of a fifth order iterative method. Indian J. Pure Appl. Math. 2020, 51, 439–455. [Google Scholar]
  19. Cătinaş, E. The inexact, inexact perturbed, and quasi-Newton methods are equivalent models. Math. Comp. 2005, 74, 291–301. [Google Scholar] [CrossRef]
  20. Dennis, J.E., Jr. On Newton-like methods. Numer. Math. 1968, 11, 324–330. [Google Scholar] [CrossRef]
  21. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1996. [Google Scholar]
  22. Deuflhard, P.; Heindl, G. Affine invariant convergence theorems for Newton’s method and extensions to related methods. SIAM J. Numer. Anal. 1979, 16, 1–10. [Google Scholar] [CrossRef]
  23. Deuflhard, P. Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms; Springer Series in Computational Mathematics; Springer: Berlin, Germany, 2004; Volume 35. [Google Scholar]
  24. Gutiérrez, J.M.; Magreñán, Á.A.; Romero, N. On the semilocal convergence of Newton-Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar] [CrossRef]
  25. Hernandez, M.A.; Romero, N. On a characterization of some Newton-like methods of R-order at least three. J. Comput. Appl. Math. 2005, 183, 53–66. [Google Scholar] [CrossRef] [Green Version]
  26. Magréñan, A.A.; Argyros, I.K.; Rainer, J.J.; Sicilia, J.A. Ball convergence of a sixth-order Newton-like method based on means under weak conditions. J. Math. Chem. 2018, 56, 2117–2131. [Google Scholar] [CrossRef]
  27. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM Publ.: Philadelphia, PA, USA, 2000. [Google Scholar]
  28. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef] [Green Version]
  29. Rheinboldt, W.C. An Adaptive Continuation Process of Solving Systems of Nonlinear Equations; Polish Academy of Science, Banach Center Publisher: Warsaw, Poland, 1978; Volume 3, pp. 129–142. [Google Scholar]
  30. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative algorithm of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  31. Shakhno, S.M.; Iakymchuk, R.P.; Yarmola, H.P. Convergence analysis of a two step method for the nonlinear squares problem with decomposition of operator. J. Numer. Appl. Math. 2018, 128, 82–95. [Google Scholar]
  32. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  33. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
  34. Steffensen, J.F. Remarks on iteration. Skand. Aktuar. Tidskr. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  35. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  36. Traub, J.F.; Werschulz, A.G. Complexity and Information; Lincei Lectures; Cambridge University Press: Cambridge, UK, 1998; xii+139pp.; ISBN 0-521-48506-1. [Google Scholar]
  37. Traub, J.F.; Wozniakowski, H. Path integration on a quantum computer. Quantum Inf. Process 2002, 1, 356–388. [Google Scholar] [CrossRef]
  38. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
  39. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton-Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684. [Google Scholar] [CrossRef]
  40. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  41. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  42. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]