Article

Ball Comparison for Some Efficient Fourth Order Iterative Methods Under Weak Conditions

by Ioannis K. Argyros 1,† and Ramandeep Behl 2,*,†

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(1), 89; https://doi.org/10.3390/math7010089
Submission received: 17 December 2018 / Revised: 5 January 2019 / Accepted: 9 January 2019 / Published: 16 January 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract:

We provide a ball comparison between some fourth-order methods for solving nonlinear equations involving Banach space valued operators. We use only hypotheses on the first derivative, whereas earlier works imposed conditions on derivatives up to order five, even though these derivatives do not appear in the methods. Hence, we expand their applicability. Numerical experiments are used to compare the radii of convergence of these methods.

1. Introduction

Let $E_1, E_2$ be Banach spaces and let $D \subseteq E_1$ be a nonempty open set. Set $LB(E_1, E_2) = \{M : E_1 \to E_2,\ M \text{ bounded and linear}\}$. A plethora of problems from numerous disciplines can be phrased, via mathematical modelling, in the following way:
$$\lambda(x) = 0, \tag{1}$$
where $\lambda : D \to E_2$ is a continuously differentiable operator in the Fréchet sense. Devising better iterative methods for approximating a solution $s^*$ of expression (1) is, in general, a very challenging and difficult task. Notice that this task is extremely important, since exact solutions of Equation (1) are available only on rare occasions.
We are motivated by the following four iterative methods, defined for each $j = 0, 1, 2, \dots$ by
$$y_j = x_j - \tfrac{2}{3}\lambda'(x_j)^{-1}\lambda(x_j), \qquad x_{j+1} = x_j - \tfrac{1}{2}\big(3\lambda'(y_j) - \lambda'(x_j)\big)^{-1}\big(3\lambda'(y_j) + \lambda'(x_j)\big)\lambda'(x_j)^{-1}\lambda(x_j), \tag{2}$$
$$y_j = x_j - \tfrac{2}{3}\lambda'(x_j)^{-1}\lambda(x_j), \qquad x_{j+1} = x_j - \big({-\tfrac{1}{2}}I + \tfrac{9}{8}B_j + \tfrac{3}{8}A_j\big)\lambda'(x_j)^{-1}\lambda(x_j), \tag{3}$$
$$y_j = x_j - \tfrac{2}{3}\lambda'(x_j)^{-1}\lambda(x_j), \qquad x_{j+1} = x_j - \big(I + \tfrac{1}{4}(A_j - I) + \tfrac{3}{8}(A_j - I)^2\big)\lambda'(y_j)^{-1}\lambda(x_j), \tag{4}$$
and
$$y_j = x_j - H_j\lambda'(x_j)^{-1}\lambda(x_j), \qquad x_{j+1} = z_j - \big(3I - H_j\lambda'(x_j)^{-1}[x_j, z_j; \lambda]\big)\lambda'(x_j)^{-1}\lambda(z_j), \tag{5}$$
where $H_j^0 = H^0(x_j)$, $x_0, y_0 \in D$ are initial points, $H(x) = 2I + H^0(x)$, $H_j = H(x_j) \in LB(E_1, E_1)$, $A_j = \lambda'(x_j)^{-1}\lambda'(y_j)$, $z_j = \frac{x_j + y_j}{2}$, $B_j = \lambda'(y_j)^{-1}\lambda'(x_j)$, and $[\cdot, \cdot; \lambda] : D \times D \to LB(E_1, E_1)$ is a divided difference of order one. These methods specialize to the corresponding ones (when $E_1 = E_2 = \mathbb{R}^i$, $i$ a natural number) studied by Nedzhibov [1], Hueso et al. [2], Junjua et al. [3], and Behl et al. [4], respectively. Their fourth-order convergence was established by Taylor series under conditions on the derivatives up to order five, even though these higher-order derivatives do not appear in the methods (2)–(5). Hence, the usage of methods (2)–(5) is very restricted. Let us start with a simple problem. Set $E_1 = E_2 = \mathbb{R}$ and $D = [-\tfrac{1}{2}, \tfrac{3}{2}]$. Define a function $\lambda : D \to \mathbb{R}$ by
$$\lambda(t) = \begin{cases} 0, & t = 0,\\ t^5 + t^3\ln t^2 - t^4, & t \neq 0. \end{cases}$$
Then, s * = 1 is a zero of the above function and we have
$$\lambda'(t) = 5t^4 + 3t^2\ln t^2 - 4t^3 + 2t^2,$$
$$\lambda''(t) = 20t^3 + 6t\ln t^2 - 12t^2 + 10t,$$
and
$$\lambda'''(t) = 60t^2 + 6\ln t^2 - 24t + 22.$$
Clearly, the third-order derivative of $\lambda$ is not bounded on $D$, since the term $6\ln t^2 \to -\infty$ as $t \to 0$. Therefore, results requiring hypotheses on the third- or higher-order derivatives of $\lambda$ cannot be applied to such problems or their special cases. Moreover, the earlier works give no radius of convergence, no estimates on $\|x_j - s^*\|$, and no information about the location of $s^*$. The novelty of our work is that we provide this information while requiring only the derivative of order one. This expands the scope of utilization of these and similar methods. It is vital to note that local convergence results are very fruitful, since they give insight into the difficult operational task of choosing the starting points/guesses.
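The blow-up of the third derivative near the origin is easy to check numerically. The sketch below (Python) evaluates the formulas displayed above at points approaching zero:

```python
import math

def lam(t):
    # λ(t) = t^5 + t^3 ln(t^2) - t^4 for t != 0, and λ(0) = 0
    return 0.0 if t == 0 else t**5 + t**3 * math.log(t**2) - t**4

def lam3(t):
    # third derivative: λ'''(t) = 60 t^2 + 6 ln(t^2) - 24 t + 22 (t != 0)
    return 60 * t**2 + 6 * math.log(t**2) - 24 * t + 22

# λ(1) = 0, but λ''' -> -infinity as t -> 0
vals = [lam3(10.0**(-k)) for k in (2, 6, 12)]
```

So $s^* = 1$ is indeed a zero, while $|\lambda'''(t)|$ grows without bound as $t \to 0$ — which is exactly why hypotheses on the third derivative fail here.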
In contrast, the earlier approaches: (i) use Taylor series and high-order derivatives; (ii) give no clue about how to choose the starting point $x_0$; (iii) provide no advance estimate of the number of iterations needed to reach a predetermined accuracy; and (iv) offer no uniqueness result for the solution.
The work is laid out as follows: Section 2 gives the convergence analysis of the iterative schemes (2)–(5), with the main theorems. Numerical problems are discussed in Section 3. Final conclusions are summarized in Section 4.
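For intuition, method (2) is straightforward to run in the scalar case. The sketch below (Python; the smooth test equation $t^3 - 2 = 0$ is our own illustrative choice, not taken from the paper) implements both substeps:

```python
def method2(lam, dlam, x, iters=5):
    """Scalar version of method (2): a two-step scheme.

    y     = x - (2/3) * λ(x)/λ'(x)
    x_new = x - (1/2) * (3λ'(y) + λ'(x)) / (3λ'(y) - λ'(x)) * λ(x)/λ'(x)
    """
    for _ in range(iters):
        fx, dfx = lam(x), dlam(x)
        y = x - (2.0 / 3.0) * fx / dfx
        dfy = dlam(y)
        x = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
    return x

# solve t^3 - 2 = 0 starting from x0 = 1
root = method2(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, x=1.0)
```

Starting from $x_0 = 1$, a handful of steps already reproduce $\sqrt[3]{2}$ to machine precision.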

2. Local Convergence Analysis

Let $I = [0, \infty)$ and let $\varphi_0 : I \to I$ be a non-decreasing and continuous function with $\varphi_0(0) = 0$.
Assume that the equation
$$\varphi_0(t) = 1 \tag{6}$$
has a minimal positive solution $\rho_0$. Let $I_0 = [0, \rho_0)$. Let $\varphi : I_0 \to I$ and $\varphi_1 : I_0 \to I$ be continuous and non-decreasing functions with $\varphi(0) = 0$. Define functions on the interval $I_0$ by
$$\psi_1(t) = \frac{\int_0^1 \varphi((1-\tau)t)\,d\tau + \frac{1}{3}\int_0^1 \varphi_1(\tau t)\,d\tau}{1 - \varphi_0(t)}$$
and
$$\bar\psi_1(t) = \psi_1(t) - 1.$$
Suppose that
$$\varphi_1(0) < 3. \tag{7}$$
Then, by (7), $\bar\psi_1(0) < 0$ and $\bar\psi_1(t) \to \infty$ as $t \to \rho_0^-$. By the classical intermediate value theorem, the equation $\bar\psi_1(t) = 0$ has a minimal solution $R_1$ in $(0, \rho_0)$. In addition, assume that the equation
$$q(t) = 1 \tag{8}$$
has a minimal positive solution $\rho_q$, where
$$q(t) = \frac{1}{2}\big(3\varphi_0(\psi_1(t)t) + \varphi_0(t)\big).$$
Set $\rho = \min\{\rho_0, \rho_q\}$.
Moreover, define two functions $\psi_2$ and $\bar\psi_2$ on $I_1 = [0, \rho)$ by
$$\psi_2(t) = \frac{\int_0^1 \varphi((1-\tau)t)\,d\tau}{1 - \varphi_0(t)} + \frac{\frac{3}{2}\big(\varphi_0(\psi_1(t)t) + \varphi_0(t)\big)\int_0^1 \varphi_1(\tau t)\,d\tau}{(1 - q(t))(1 - \varphi_0(t))}$$
and
$$\bar\psi_2(t) = \psi_2(t) - 1.$$
Then, $\bar\psi_2(0) = -1$ and $\bar\psi_2(t) \to \infty$ as $t \to \rho^-$. Denote by $R_2$ the minimal solution of $\bar\psi_2(t) = 0$ in $(0, \rho)$. Set
$$R = \min\{R_1, R_2\}. \tag{9}$$
It follows from (9) that for every $t \in [0, R)$
$$0 \le \varphi_0(t) < 1, \tag{10}$$
$$0 \le \psi_1(t) < 1, \tag{11}$$
$$0 \le q(t) < 1, \tag{12}$$
and
$$0 \le \psi_2(t) < 1. \tag{13}$$
Define $S(s^*, r) = \{y \in E_1 : \|s^* - y\| < r\}$, and denote by $\bar S(s^*, r)$ the closure of $S(s^*, r)$.
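The radii $\rho_0$, $R_1$, $R_2$ are defined as minimal positive roots of scalar equations, so they can be computed by a scan-plus-bisection routine. The sketch below (Python) is a minimal illustration assuming the linear choices $\varphi_0(t) = \varphi(t) = Lt$ and $\varphi_1(t) = 1 + Lt$ of the form used later in the numerical section, here with a hypothetical constant $L = 1$; for these choices the integrals in $\psi_1$ have closed forms, and $\psi_1(t) = 1$ at $t = 2/(5L)$:

```python
def min_positive_root(f, hi, n=20000):
    """Minimal positive root of f on (0, hi): coarse scan, then bisection."""
    prev_t, prev_f = 1e-12, f(1e-12)
    for k in range(1, n + 1):
        t = hi * k / n
        ft = f(t)
        if prev_f < 0.0 <= ft:  # first sign change: refine by bisection
            lo, up = prev_t, t
            for _ in range(80):
                mid = 0.5 * (lo + up)
                if f(mid) < 0.0:
                    lo = mid
                else:
                    up = mid
            return 0.5 * (lo + up)
        prev_t, prev_f = t, ft
    return None

L = 1.0  # hypothetical Lipschitz-type constant
phi0 = lambda t: L * t
# closed-form integrals: ∫0^1 φ((1-τ)t)dτ = Lt/2 and ∫0^1 φ1(τt)dτ = 1 + Lt/2
psi1 = lambda t: (L * t / 2 + (1 + L * t / 2) / 3) / (1 - phi0(t))

rho0 = min_positive_root(lambda t: phi0(t) - 1.0, hi=2.0 / L)
R1 = min_positive_root(lambda t: psi1(t) - 1.0, hi=0.9 / L)
```

For this $L$, the routine recovers $\rho_0 = 1/L$ and $R_1 = 2/(5L) = 0.4$, in agreement with the closed-form roots.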
The local convergence of method (2) uses the following conditions $(A)$:
(a1)
$\lambda : D \to E_2$ is a continuously differentiable operator in the Fréchet sense, and there exists $s^* \in D$ with $\lambda(s^*) = 0$ such that $\lambda'(s^*)^{-1} \in LB(E_2, E_1)$.
(a2)
There exists a function $\varphi_0 : I \to I$, non-decreasing and continuous, with $\varphi_0(0) = 0$, such that for all $x \in D$
$$\|\lambda'(s^*)^{-1}\big(\lambda'(x) - \lambda'(s^*)\big)\| \le \varphi_0(\|x - s^*\|).$$
Set $D_0 = D \cap S(s^*, \rho_0)$, where $\rho_0$ is given in (6).
(a3)
There exist functions $\varphi : I_0 \to I$ and $\varphi_1 : I_0 \to I$, non-decreasing and continuous, with $\varphi(0) = 0$, such that for all $x, y \in D_0$
$$\|\lambda'(s^*)^{-1}\big(\lambda'(y) - \lambda'(x)\big)\| \le \varphi(\|y - x\|)$$
and
$$\|\lambda'(s^*)^{-1}\lambda'(x)\| \le \varphi_1(\|x - s^*\|).$$
(a4)
$\bar S(s^*, R) \subseteq D$; the radii $\rho_0$ and $\rho_q$, given respectively by (6) and (8), exist; and condition (7) holds, where $R$ is defined in (9).
(a5)
There exists $R^* \ge R$ such that
$$\int_0^1 \varphi_0(\tau R^*)\,d\tau < 1.$$
Set $D_1 = D \cap \bar S(s^*, R^*)$.
We can now proceed with the local convergence analysis of method (2), adopting the preceding notation and the conditions $(A)$.
Theorem 1.
Under the conditions $(A)$, the sequence $\{x_j\}$ generated by method (2) and starting at any $x_0 \in S(s^*, R) \setminus \{s^*\}$ remains in $S(s^*, R)$ and converges to $s^*$, so that
$$\|y_j - s^*\| \le \psi_1(\|x_j - s^*\|)\,\|x_j - s^*\| \le \|x_j - s^*\| < R \tag{14}$$
and
$$\|x_{j+1} - s^*\| \le \psi_2(\|x_j - s^*\|)\,\|x_j - s^*\| \le \|x_j - s^*\|, \tag{15}$$
where the functions $\psi_1$ and $\psi_2$ are as defined previously and $R$ is given in (9). Moreover, $s^*$ is the only solution of Equation (1) in the set $D_1$ given in (a5).
Proof. 
We prove the estimates (14) and (15) by mathematical induction. Let $x_0 \in S(s^*, R) \setminus \{s^*\}$. By $(a1)$, $(a2)$, (9), and (10), we have
$$\|\lambda'(s^*)^{-1}\big(\lambda'(x_0) - \lambda'(s^*)\big)\| \le \varphi_0(\|x_0 - s^*\|) < \varphi_0(R) < 1, \tag{16}$$
hence, by the Banach lemma on invertible operators, $\lambda'(x_0)^{-1} \in LB(E_2, E_1)$ and
$$\|\lambda'(x_0)^{-1}\lambda'(s^*)\| \le \frac{1}{1 - \varphi_0(\|x_0 - s^*\|)}. \tag{17}$$
In particular, the point $y_0$ exists by (17) and the first substep of method (2) for $n = 0$. Using $(a1)$, we can write
$$\lambda(x_0) = \lambda(x_0) - \lambda(s^*) = \int_0^1 \lambda'\big(s^* + \tau(x_0 - s^*)\big)\,d\tau\,(x_0 - s^*). \tag{18}$$
From $(a3)$ and (18), we get
$$\|\lambda'(s^*)^{-1}\lambda(x_0)\| \le \int_0^1 \varphi_1(\tau\|x_0 - s^*\|)\,d\tau\,\|x_0 - s^*\|. \tag{19}$$
By the first substep of method (2) for $n = 0$, we can also write
$$y_0 - s^* = x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \frac{1}{3}\lambda'(x_0)^{-1}\lambda(x_0). \tag{20}$$
By expressions (9), (11), (17), (19), and (20), we obtain in turn that
$$\begin{aligned}
\|y_0 - s^*\| &\le \|\lambda'(x_0)^{-1}\lambda'(s^*)\|\left\|\int_0^1 \lambda'(s^*)^{-1}\big[\lambda'\big(s^* + \tau(x_0 - s^*)\big) - \lambda'(x_0)\big]\,d\tau\,(x_0 - s^*)\right\| + \frac{1}{3}\|\lambda'(x_0)^{-1}\lambda'(s^*)\|\,\|\lambda'(s^*)^{-1}\lambda(x_0)\|\\
&\le \frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau + \frac{1}{3}\int_0^1 \varphi_1(\tau\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)}\,\|x_0 - s^*\|\\
&= \psi_1(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\| < R, \tag{21}
\end{aligned}$$
which confirms $y_0 \in S(s^*, R)$ and (14) for $n = 0$. Next, we show that $\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1} \in LB(E_2, E_1)$.
In view of $(a2)$, (12), and (21), we have
$$\begin{aligned}
\big\|(2\lambda'(s^*))^{-1}\big[3\lambda'(y_0) - \lambda'(x_0) - 2\lambda'(s^*)\big]\big\| &\le \frac{1}{2}\Big[3\|\lambda'(s^*)^{-1}(\lambda'(y_0) - \lambda'(s^*))\| + \|\lambda'(s^*)^{-1}(\lambda'(x_0) - \lambda'(s^*))\|\Big]\\
&\le \frac{1}{2}\Big[3\varphi_0(\|y_0 - s^*\|) + \varphi_0(\|x_0 - s^*\|)\Big]\\
&\le \frac{1}{2}\Big[3\varphi_0\big(\psi_1(\|x_0 - s^*\|)\|x_0 - s^*\|\big) + \varphi_0(\|x_0 - s^*\|)\Big] = q(\|x_0 - s^*\|) < 1, \tag{22}
\end{aligned}$$
so
$$\big\|\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\lambda'(s^*)\big\| \le \frac{1}{1 - q(\|x_0 - s^*\|)}. \tag{23}$$
Using (9), (13), (17), $(a3)$, (21), (23), and the second substep of method (2) (note that $x_1$ exists by (23)), we can first write
$$x_1 - s^* = x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \Big[I - \frac{1}{2}\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\big(3\lambda'(y_0) + \lambda'(x_0)\big)\Big]\lambda'(x_0)^{-1}\lambda(x_0), \tag{24}$$
so
$$\begin{aligned}
\|x_1 - s^*\| &\le \|x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0)\| + \frac{3}{2}\big\|\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\lambda'(s^*)\big\|\\
&\quad\times\Big[\|\lambda'(s^*)^{-1}(\lambda'(y_0) - \lambda'(s^*))\| + \|\lambda'(s^*)^{-1}(\lambda'(x_0) - \lambda'(s^*))\|\Big]\,\|\lambda'(x_0)^{-1}\lambda'(s^*)\|\,\|\lambda'(s^*)^{-1}\lambda(x_0)\|\\
&\le \left[\frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)} + \frac{\frac{3}{2}\big(\varphi_0(\|y_0 - s^*\|) + \varphi_0(\|x_0 - s^*\|)\big)\int_0^1 \varphi_1(\tau\|x_0 - s^*\|)\,d\tau}{(1 - q(\|x_0 - s^*\|))(1 - \varphi_0(\|x_0 - s^*\|))}\right]\|x_0 - s^*\|\\
&\le \psi_2(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\|. \tag{25}
\end{aligned}$$
So, (15) holds for $n = 0$ and $x_1 \in S(s^*, R)$.
To obtain the estimate (25), we also used the identity
$$I - \frac{1}{2}\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\big(3\lambda'(y_0) + \lambda'(x_0)\big) = \frac{1}{2}\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\big[2\big(3\lambda'(y_0) - \lambda'(x_0)\big) - \big(3\lambda'(y_0) + \lambda'(x_0)\big)\big] = \frac{3}{2}\big(3\lambda'(y_0) - \lambda'(x_0)\big)^{-1}\big[\big(\lambda'(y_0) - \lambda'(s^*)\big) + \big(\lambda'(s^*) - \lambda'(x_0)\big)\big].$$
The induction for (14) and (15) is completed by replacing $x_0, y_0, x_1$ with $x_m, y_m, x_{m+1}$ in the preceding estimates. Then, from the estimate
$$\|x_{m+1} - s^*\| \le \mu\|x_m - s^*\| < R, \qquad \mu = \psi_2(\|x_0 - s^*\|) \in [0, 1),$$
we conclude that $\lim_{m \to \infty} x_m = s^*$ and $x_{m+1} \in S(s^*, R)$. It remains to show uniqueness. Let $y^* \in D_1$ with $\lambda(y^*) = 0$, and set $K = \int_0^1 \lambda'\big(y^* + \tau(s^* - y^*)\big)\,d\tau$. From $(a2)$ and $(a5)$, we obtain
$$\|\lambda'(s^*)^{-1}\big(\lambda'(s^*) - K\big)\| \le \int_0^1 \varphi_0(\tau\|s^* - y^*\|)\,d\tau \le \int_0^1 \varphi_0(\tau R^*)\,d\tau < 1.$$
So, $K^{-1} \in LB(E_2, E_1)$, and $s^* = y^*$ follows from the identity
$$0 = \lambda(s^*) - \lambda(y^*) = K(s^* - y^*).$$
 □
Proof. 
Next, we deal with method (3) in an analogous way, using the same notation as before. Let $\varphi_0$, $\varphi$, $\varphi_1$, $\rho_0$, $\psi_1$, $R_1$, and $\bar\psi_1$ be as previously.
We assume that the equation
$$\varphi_0(\psi_1(t)t) = 1$$
has a minimal positive solution $\rho_1$. Set $\rho = \min\{\rho_0, \rho_1\}$, and define functions $\psi_2$ and $\bar\psi_2$ on the interval $I_2 = [0, \rho)$ by
$$\psi_2(t) = \frac{\int_0^1 \varphi((1-\tau)t)\,d\tau}{1 - \varphi_0(t)} + \left[\frac{3\big(\varphi_0(\psi_1(t)t) + \varphi_0(t)\big)}{8(1 - \varphi_0(t))} + \frac{9\big(\varphi_0(\psi_1(t)t) + \varphi_0(t)\big)}{8(1 - \varphi_0(\psi_1(t)t))}\right]\frac{\int_0^1 \varphi_1(\tau t)\,d\tau}{1 - \varphi_0(t)}$$
and
$$\bar\psi_2(t) = \psi_2(t) - 1.$$
Then, $\bar\psi_2(0) = -1$ and $\bar\psi_2(t) \to \infty$ as $t \to \rho^-$. Denote by $R_2$ the minimal solution of $\bar\psi_2(t) = 0$ in $(0, \rho)$, and set
$$R = \min\{R_1, R_2\}.$$
Replace $\rho_q$ by $\rho_1$ in the conditions $(A)$, and call the resulting conditions $(A')$.
Moreover, we use the following estimate, obtained from the second substep of method (3):
$$\begin{aligned}
x_1 - s^* &= x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \Big[\frac{3}{2}I - \frac{9}{8}B_0 - \frac{3}{8}A_0\Big]\lambda'(x_0)^{-1}\lambda(x_0)\\
&= x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \Big[\frac{3}{8}(I - A_0) + \frac{9}{8}(I - B_0)\Big]\lambda'(x_0)^{-1}\lambda(x_0)\\
&= x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \Big[\frac{3}{8}\lambda'(x_0)^{-1}\big(\lambda'(x_0) - \lambda'(y_0)\big) + \frac{9}{8}\lambda'(y_0)^{-1}\big(\lambda'(y_0) - \lambda'(x_0)\big)\Big]\lambda'(x_0)^{-1}\lambda(x_0). \tag{32}
\end{aligned}$$
Then, replacing (24) by (32) in the proof of Theorem 1, we obtain, instead of (25),
$$\begin{aligned}
\|x_1 - s^*\| &\le \left[\frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)} + \left\{\frac{3\big(\varphi_0(\|y_0 - s^*\|) + \varphi_0(\|x_0 - s^*\|)\big)}{8(1 - \varphi_0(\|x_0 - s^*\|))} + \frac{9\big(\varphi_0(\|y_0 - s^*\|) + \varphi_0(\|x_0 - s^*\|)\big)}{8(1 - \varphi_0(\|y_0 - s^*\|))}\right\}\frac{\int_0^1 \varphi_1(\tau\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)}\right]\|x_0 - s^*\|\\
&\le \psi_2(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\|.
\end{aligned}$$
The rest follows as in the proof of Theorem 1. □
Hence, we arrive at the following theorem.
Theorem 2.
Under the conditions $(A')$, the conclusions of Theorem 1 hold for method (3).
Proof. 
Next, we deal with method (4) in a similar way. Let $\varphi_0$, $\varphi$, $\varphi_1$, $\rho_0$, $\rho_1$, $\rho$, $\psi_1$, $R_1$, and $\bar\psi_1$ be as in the case of method (3). Define functions $\psi_2$ and $\bar\psi_2$ on $I_1$ by
$$\psi_2(t) = \frac{\int_0^1 \varphi((1-\tau)t)\,d\tau}{1 - \varphi_0(t)} + \frac{\varphi_0(\psi_1(t)t) + \varphi_0(t)}{\big(1 - \varphi_0(t)\big)\big(1 - \varphi_0(\psi_1(t)t)\big)} + \frac{1}{4}\,\frac{\varphi_0(\psi_1(t)t) + \varphi_0(t)}{1 - \varphi_0(t)} + \frac{3}{8}\left(\frac{\varphi_0(\psi_1(t)t) + \varphi_0(t)}{1 - \varphi_0(t)}\right)^2$$
and
$$\bar\psi_2(t) = \psi_2(t) - 1.$$
Denote by $R_2$ the minimal zero of $\bar\psi_2(t) = 0$ in $(0, \rho)$, and set
$$R = \min\{R_1, R_2\}.$$
Notice again that, from the second substep of method (4), we have
$$\begin{aligned}
x_1 - s^* &= x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + \lambda'(x_0)^{-1}\big[\big(\lambda'(y_0) - \lambda'(s^*)\big) + \big(\lambda'(s^*) - \lambda'(x_0)\big)\big]\lambda'(y_0)^{-1}\lambda(x_0)\\
&\quad - \Big[\frac{1}{4}(A_0 - I) + \frac{3}{8}(A_0 - I)^2\Big]\lambda'(y_0)^{-1}\lambda(x_0),
\end{aligned}$$
where $A_0 - I = \lambda'(x_0)^{-1}\big[\big(\lambda'(y_0) - \lambda'(s^*)\big) + \big(\lambda'(s^*) - \lambda'(x_0)\big)\big]$, so
$$\begin{aligned}
\|x_1 - s^*\| &\le \Bigg[\frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)} + \frac{\varphi_0\big(\psi_1(\|x_0 - s^*\|)\|x_0 - s^*\|\big) + \varphi_0(\|x_0 - s^*\|)}{\big(1 - \varphi_0(\|x_0 - s^*\|)\big)\big(1 - \varphi_0(\psi_1(\|x_0 - s^*\|)\|x_0 - s^*\|)\big)}\\
&\quad + \frac{1}{4}\,\frac{\varphi_0\big(\psi_1(\|x_0 - s^*\|)\|x_0 - s^*\|\big) + \varphi_0(\|x_0 - s^*\|)}{1 - \varphi_0(\|x_0 - s^*\|)} + \frac{3}{8}\left(\frac{\varphi_0\big(\psi_1(\|x_0 - s^*\|)\|x_0 - s^*\|\big) + \varphi_0(\|x_0 - s^*\|)}{1 - \varphi_0(\|x_0 - s^*\|)}\right)^2\Bigg]\|x_0 - s^*\|\\
&\le \psi_2(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\|.
\end{aligned}$$
The rest follows as in the proof of Theorem 1. □
Hence, we arrive at the following theorem.
Theorem 3.
Under the conditions $(A')$, the conclusions of Theorem 1 hold for scheme (4).
Proof. 
Finally, we deal with method (5). Let $\varphi_0$, $\varphi$, $\varphi_1$, $\rho_0$, $I_0$ be as for method (2). Let also $\varphi_2 : I_0 \to I$, $\varphi_3 : I_0 \to I$, $\varphi_4 : I_0 \to I$, and $\varphi_5 : I_0 \times I_0 \to I$ be continuous and non-decreasing functions with $\varphi_3(0) = 0$. Define functions $\psi_1$ and $\bar\psi_1$ on $I_0$ by
$$\psi_1(t) = \frac{\int_0^1 \varphi((1-\tau)t)\,d\tau + \varphi_2(t)\int_0^1 \varphi_1(\tau t)\,d\tau}{1 - \varphi_0(t)}$$
and
$$\bar\psi_1(t) = \psi_1(t) - 1.$$
Suppose that
$$\varphi_1(0)\,\varphi_2(0) < 1. \tag{37}$$
Then, by (6) and (37), $\bar\psi_1(0) < 0$ and $\bar\psi_1(t) \to \infty$ as $t \to \rho_0^-$. Denote by $R_1$ the minimal zero of $\bar\psi_1(t) = 0$ in $(0, \rho_0)$. We assume that the equation
$$\varphi_0(g(t)t) = 1, \tag{38}$$
where $g(t) = \frac{1}{2}\big(1 + \psi_1(t)\big)$, has a minimal positive solution $\rho_1$. Set $I_1 = [0, \rho)$, where $\rho = \min\{\rho_0, \rho_1\}$. Define functions $\psi_2$ and $\bar\psi_2$ on $I_1$ by
$$\psi_2(t) = \Bigg[\frac{\int_0^1 \varphi((1-\tau)g(t)t)\,d\tau}{1 - \varphi_0(g(t)t)} + \frac{\big(\varphi_0(g(t)t) + \varphi_0(t)\big)\int_0^1 \varphi_1(\tau g(t)t)\,d\tau}{\big(1 - \varphi_0(g(t)t)\big)\big(1 - \varphi_0(t)\big)} + \frac{2\varphi_3\big(\frac{t}{2}(1 + \psi_1(t))\big)\int_0^1 \varphi_1(\tau g(t)t)\,d\tau}{\big(1 - \varphi_0(t)\big)^2} + \frac{\varphi_4(t)\,\varphi_5\big(t, \psi_1(t)t\big)\int_0^1 \varphi_1(\tau g(t)t)\,d\tau}{\big(1 - \varphi_0(t)\big)^2}\Bigg]g(t)$$
and
$$\bar\psi_2(t) = \psi_2(t) - 1.$$
Suppose that
$$\big(2\varphi_3(0) + \varphi_4(0)\,\varphi_5(0, 0)\big)\varphi_1(0) < 1. \tag{39}$$
Then, by (39) and the definition of $I_1$, $\bar\psi_2(0) < 0$ and $\bar\psi_2(t) \to \infty$ as $t \to \rho^-$. Denote by $R_2$ the minimal solution of $\bar\psi_2(t) = 0$. Set
$$R = \min\{R_1, R_2\}. \tag{40}$$
The study of the local convergence of scheme (5) is based on the conditions $(C)$:
(c1)
= ( a 1 ) .
(c2)
= ( a 2 ) .
(c3)
There exist functions $\varphi : I_0 \to I$, $\varphi_1 : I_0 \to I$, $\varphi_2 : I_0 \to I$, $\varphi_3 : I_0 \to I$, $\varphi_4 : I_0 \to I$, and $\varphi_5 : I_0 \times I_0 \to I$, continuous and non-decreasing, with $\varphi(0) = \varphi_3(0) = 0$, such that for all $x, y \in D_0$
$$\|\lambda'(s^*)^{-1}\big(\lambda'(y) - \lambda'(x)\big)\| \le \varphi(\|y - x\|), \qquad \|\lambda'(s^*)^{-1}\lambda'(x)\| \le \varphi_1(\|x - s^*\|), \qquad \|I - H(x)\| \le \varphi_2(\|x - s^*\|),$$
$$\|\lambda'(s^*)^{-1}\big([x, y; \lambda] - \lambda'(x)\big)\| \le \varphi_3(\|y - x\|), \qquad \|H^0(x)\| \le \varphi_4(\|x - s^*\|),$$
and
$$\|\lambda'(s^*)^{-1}[x, y; \lambda]\| \le \varphi_5(\|x - s^*\|, \|y - s^*\|).$$
(c4)
$\bar S(s^*, R) \subseteq D$; the radii $\rho_0$ and $\rho_1$, given respectively by (6) and (38), exist; conditions (37) and (39) hold; and $R$ is defined in (40).
(c5)
= ( a 5 ) .
Then, the conclusions follow as before, using the estimates
$$\begin{aligned}
\|y_0 - s^*\| &= \big\|x_0 - s^* - \lambda'(x_0)^{-1}\lambda(x_0) + (I - H_0)\lambda'(x_0)^{-1}\lambda(x_0)\big\|\\
&\le \frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau\,\|x_0 - s^*\|}{1 - \varphi_0(\|x_0 - s^*\|)} + \|I - H_0\|\,\|\lambda'(x_0)^{-1}\lambda'(s^*)\|\,\|\lambda'(s^*)^{-1}\lambda(x_0)\|\\
&\le \frac{\int_0^1 \varphi((1-\tau)\|x_0 - s^*\|)\,d\tau + \varphi_2(\|x_0 - s^*\|)\int_0^1 \varphi_1(\tau\|x_0 - s^*\|)\,d\tau}{1 - \varphi_0(\|x_0 - s^*\|)}\,\|x_0 - s^*\|\\
&= \psi_1(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\| < R \tag{41}
\end{aligned}$$
and
$$\begin{aligned}
x_1 - s^* &= z_0 - s^* - \lambda'(z_0)^{-1}\lambda(z_0) + \lambda'(z_0)^{-1}\big(\lambda'(x_0) - \lambda'(z_0)\big)\lambda'(x_0)^{-1}\lambda(z_0)\\
&\quad + 2\lambda'(x_0)^{-1}\big([x_0, z_0; \lambda] - \lambda'(x_0)\big)\lambda'(x_0)^{-1}\lambda(z_0) + H_0^0\lambda'(x_0)^{-1}[x_0, z_0; \lambda]\,\lambda'(x_0)^{-1}\lambda(z_0),
\end{aligned}$$
so that
$$\begin{aligned}
\|x_1 - s^*\| &\le \Bigg[\frac{\int_0^1 \varphi\big((1-\tau)g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\,d\tau}{1 - \varphi_0\big(g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)} + \frac{\Big(\varphi_0(\|x_0 - s^*\|) + \varphi_0\big(g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\Big)\int_0^1 \varphi_1\big(\tau g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\,d\tau}{\Big(1 - \varphi_0\big(g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\Big)\big(1 - \varphi_0(\|x_0 - s^*\|)\big)}\\
&\quad + \frac{2\varphi_3\Big(\frac{1}{2}\big(1 + \psi_1(\|x_0 - s^*\|)\big)\|x_0 - s^*\|\Big)\int_0^1 \varphi_1\big(\tau g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\,d\tau}{\big(1 - \varphi_0(\|x_0 - s^*\|)\big)^2}\\
&\quad + \frac{\varphi_4(\|x_0 - s^*\|)\,\varphi_5\big(\|x_0 - s^*\|, \|y_0 - s^*\|\big)\int_0^1 \varphi_1\big(\tau g(\|x_0 - s^*\|)\|x_0 - s^*\|\big)\,d\tau}{\big(1 - \varphi_0(\|x_0 - s^*\|)\big)^2}\Bigg]\|z_0 - s^*\|\\
&\le \psi_2(\|x_0 - s^*\|)\,\|x_0 - s^*\| \le \|x_0 - s^*\|. \tag{42}
\end{aligned}$$
Here, recalling that $z_0 = \frac{x_0 + y_0}{2}$, we also used the estimates
$$\|z_0 - s^*\| = \Big\|\frac{x_0 + y_0}{2} - s^*\Big\| \le \frac{1}{2}\big(\|x_0 - s^*\| + \|y_0 - s^*\|\big) \le \frac{1}{2}\big(1 + \psi_1(\|x_0 - s^*\|)\big)\|x_0 - s^*\|,$$
$$\alpha = \lambda'(z_0)^{-1} - \lambda'(x_0)^{-1} = \lambda'(z_0)^{-1}\big[\big(\lambda'(x_0) - \lambda'(s^*)\big) + \big(\lambda'(s^*) - \lambda'(z_0)\big)\big]\lambda'(x_0)^{-1},$$
$$\beta = \big({-2I} + (2I + H_0^0)\lambda'(x_0)^{-1}[x_0, z_0; \lambda]\big)\lambda'(x_0)^{-1},$$
and
$$\gamma = -2I + (2I + H_0^0)\lambda'(x_0)^{-1}[x_0, z_0; \lambda] = 2\lambda'(x_0)^{-1}\big([x_0, z_0; \lambda] - \lambda'(x_0)\big) + H_0^0\lambda'(x_0)^{-1}[x_0, z_0; \lambda]$$
to obtain (41) and (42). □
Hence, we arrive at the following theorem.
Theorem 4.
Under the conditions ( C ) , the conclusions of Theorem 1 hold for method (5).

3. Numerical Applications

We test the theoretical results on five examples. The first is a counterexample where the earlier results are not applicable; the next three are real-life problems, namely a chemical engineering problem, the trajectory of an electron in the air gap between two parallel plates, and a Hammerstein-type integral equation, displayed in Examples 1–4. The last one, Example 5, compares method (5) favorably with the other three methods. The solution of each problem is also listed in the corresponding example, correct to the digits shown. The desired roots are in fact available to a large number of significant digits (a minimum of one thousand), but due to space restrictions at most 30 significant digits are displayed.
We compare the four methods (2)–(5), denoted by $NM$, $HM$, $JM$, and $BM$, respectively, on the basis of the radii of their convergence balls and of the approximated computational order of convergence (ACOC; for details, see Cordero and Torregrosa [5]),
$$\rho \approx \frac{\ln\big(\|x^{(j+1)} - x^{(j)}\| / \|x^{(j)} - x^{(j-1)}\|\big)}{\ln\big(\|x^{(j)} - x^{(j-1)}\| / \|x^{(j-1)} - x^{(j-2)}\|\big)}, \qquad j = 2, 3, 4, \dots$$
The radii of the convergence balls are reported in Tables 1, 2, 3, and 5; Table 4 lists the abscissas $t_j$ and weights $w_j$ used in Example 4; and Table 6 reports the convergence behavior for Example 5. All computations were carried out in the Mathematica 9 programming package with multiple-precision arithmetic.
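The ACOC formula can be applied directly to a list of iterates. The sketch below (Python; Newton's method on $x^2 - 2 = 0$ is our own illustrative choice, so the estimated order comes out near 2 rather than 4):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence from scalar iterates."""
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

xs = [1.0]  # Newton iterates for x^2 - 2 = 0
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

rho = acoc(xs)[-1]  # close to 2 for a quadratically convergent method
```

For a genuinely fourth-order method, the same routine applied to its iterates would return values near 4, as in the tables below.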
In all examples we choose $H^0(x) = 0$ and $H(x) = 2I$, so that $\varphi_2(t) = 1$ and $\varphi_4(t) = 0$. The divided difference is $[x, y; \lambda] = \int_0^1 \lambda'\big(y + \theta(x - y)\big)\,d\theta$. In addition, we use the stopping criteria (i) $\|x_{j+1} - x_j\| < \epsilon$ and (ii) $\|\lambda(x_j)\| < \epsilon$, where $\epsilon = 10^{-250}$.
Example 1.
Set $E_1 = E_2 = \mathbb{R}$. Consider the function $\lambda$ on $D = \big[-\frac{1}{\pi}, \frac{2}{\pi}\big]$ defined by
$$\lambda(x) = \begin{cases} 0, & x = 0,\\ x^5\sin\frac{1}{x} + x^3\log(\pi^2 x^2), & x \neq 0. \end{cases}$$
Note that $\lambda'''(x)$ is unbounded on $D$ at $x = 0$. The solution of this problem is $s^* = \frac{1}{\pi}$. The results in Nedzhibov [1], Hueso et al. [2], Junjua et al. [3], and Behl et al. [4] cannot be utilized, since they require conditions on the fifth (or even higher-order) derivative of $\lambda$ to obtain the convergence of these methods. By our results, conditions on $\lambda'$ suffice. In addition, we can choose
$$H = \frac{80 + 16\pi + (\pi + 12\log 2)\pi^2}{2\pi + 1}, \qquad \varphi_1(t) = 1 + Ht, \qquad \varphi_0(t) = \varphi(t) = Ht,$$
$$\varphi_5(s, t) = \frac{1}{2}\big(\varphi_1(s) + \varphi_1(t)\big) \quad \text{and} \quad \varphi_3(t) = \frac{1}{2}\varphi_2(t).$$
The radii of convergence, the number of iterations $n$, and the ACOC ($\rho$) for each method are given in Table 1.
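The radius $R_1$ in Table 1 can be reproduced from the constants above. With the linear choices $\varphi_0(t) = \varphi(t) = Ht$ and $\varphi_1(t) = 1 + Ht$, the integrals in $\psi_1$ evaluate to $Ht/2$ and $1 + Ht/2$, so $\psi_1(t) = \big(\frac{1}{3} + \frac{2}{3}Ht\big)/(1 - Ht)$, and $\psi_1(t) = 1$ gives the closed form $R_1 = 2/(5H)$. A quick check (Python):

```python
import math

# H as given in Example 1
H = (80 + 16 * math.pi + (math.pi + 12 * math.log(2)) * math.pi**2) / (2 * math.pi + 1)

# minimal positive solution of ψ1(t) = 1 for these linear φ choices
R1 = 2.0 / (5.0 * H)
```

which matches the value $R_1 = 0.011971$ reported in Table 1.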
Example 2.
The function
$$\lambda_2(x) = x^4 - 7.79075x^3 + 14.7445x^2 + 2.511x - 1.674$$
appears in the study of the fractional conversion of a hydrogen–nitrogen feed to ammonia [6,7]. The function $\lambda_2$ has four zeros; we choose $s^* \approx 3.9485424455620457727 + 0.3161235708970163733\,i$. Moreover, we have
$$\varphi_0(t) = \varphi(t) = 40.6469t, \qquad \varphi_1(t) = 1 + 40.6469t, \qquad \varphi_3(t) = \frac{1}{2}\varphi_2(t), \quad \text{and} \quad \varphi_5(s, t) = \frac{1}{2}\big(\varphi_1(s) + \varphi_1(t)\big).$$
The radii of convergence, the number of iterations $n$, and the ACOC ($\rho$) for each method are given in Table 2.
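As a sanity check on the chosen zero $s^*$, plain Newton iteration (our own quick check, not one of the compared methods), started near $s^*$, converges to the quoted complex value:

```python
def f(z):
    # the ammonia-conversion quartic from Example 2
    return z**4 - 7.79075 * z**3 + 14.7445 * z**2 + 2.511 * z - 1.674

def df(z):
    # its derivative (3*7.79075 = 23.37225, 2*14.7445 = 29.489)
    return 4 * z**3 - 23.37225 * z**2 + 29.489 * z + 2.511

z = 3.95 + 0.32j  # start close to the complex zero
for _ in range(50):
    z = z - f(z) / df(z)
```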
Example 3.
The trajectory of an electron in the air gap between two parallel plates is given by
$$x(t) = x_0 + \Big(v_0 + \frac{eE_0}{m\omega}\sin(\omega t_0 + \alpha)\Big)(t - t_0) + \frac{eE_0}{m\omega^2}\big(\cos(\omega t + \alpha) + \sin(\omega + \alpha)\big),$$
where $e$, $m$, $x_0$, $v_0$, and $E_0\sin(\omega t + \alpha)$ are the charge and the rest mass of the electron, the position and velocity of the electron at time $t_0$, and the RF electric field between the two plates, respectively. For particular values of these parameters, the following simpler expression is obtained:
$$f_3(x) = x + \frac{\pi}{4} - \frac{1}{2}\cos(x).$$
The solution of $f_3(x) = 0$ is $s^* \approx -0.309093271541794952741986808924$. Moreover, we have
$$\varphi(t) = \varphi_0(t) = 0.5869t, \qquad \varphi_1(t) = 1 + 0.5869t, \qquad \varphi_3(t) = \frac{1}{2}\varphi_2(t) \quad \text{and} \quad \varphi_5(s, t) = \frac{1}{2}\big(\varphi_1(s) + \varphi_1(t)\big).$$
The radii of convergence, the number of iterations $n$, and the ACOC ($\rho$) for each method are given in Table 3.
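Again as a quick check (plain Newton, our own illustration), the quoted solution can be reproduced; the assertion below is written on $|s^*|$ so that it holds regardless of the sign convention used for the simplified equation:

```python
import math

def f3(x):
    # f3(x) = x + pi/4 - (1/2) cos(x)
    return x + math.pi / 4 - 0.5 * math.cos(x)

def df3(x):
    return 1 + 0.5 * math.sin(x)

x = 0.0
for _ in range(20):
    x = x - f3(x) / df3(x)
```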
Example 4.
Consider the mixed Hammerstein integral equation (see Ortega and Rheinboldt [8])
$$x(s) = 1 + \frac{1}{5}\int_0^1 U(s, t)\,x(t)^3\,dt, \qquad x \in C[0, 1],\ s, t \in [0, 1], \tag{47}$$
where the kernel $U$ is
$$U(s, t) = \begin{cases} s(1 - t), & s \le t,\\ (1 - s)t, & t \le s. \end{cases}$$
We discretize (47) by the Gauss–Legendre quadrature formula $\int_0^1 \phi(t)\,dt \approx \sum_{k=1}^{10} w_k\phi(t_k)$, where $t_k$ and $w_k$ are the abscissas and weights, respectively. Denoting the approximation of $x(t_i)$ by $x_i$ $(i = 1, 2, \dots, 10)$, we obtain the following $10 \times 10$ system of nonlinear equations:
$$5x_i - 5 - \sum_{k=1}^{10} a_{ik}x_k^3 = 0, \qquad i = 1, 2, \dots, 10,$$
$$a_{ik} = \begin{cases} w_k t_k(1 - t_i), & k \le i,\\ w_k t_i(1 - t_k), & i < k. \end{cases}$$
The values of $t_k$ and $w_k$ for $k = 1, \dots, 10$, obtained from the Gauss–Legendre quadrature formula, are given in Table 4.
The required approximate root is $s^* \approx (1.001377\dots, 1.006756\dots, 1.014515\dots, 1.021982\dots, 1.026530\dots, 1.026530\dots, 1.021982\dots, 1.014515\dots, 1.006756\dots, 1.001377\dots)^T$. Moreover, we have
$$\varphi_0(t) = \varphi(t) = \frac{3}{20}t, \qquad \varphi_1(t) = 1 + \frac{3}{20}t, \qquad \varphi_3(t) = \frac{1}{2}\varphi_2(t) \quad \text{and} \quad \varphi_5(s, t) = \frac{1}{2}\big(\varphi_1(s) + \varphi_1(t)\big).$$
The radii of convergence, the number of iterations $n$, and the ACOC ($\rho$) for each method are given in Table 5.
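The discretized system is easy to assemble and solve. The sketch below (Python with NumPy; Newton's method here is our own solver choice, used only to verify the quoted solution) builds the $10 \times 10$ system from Gauss–Legendre nodes shifted to $[0, 1]$:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(10)
t = 0.5 * (nodes + 1.0)   # abscissas shifted from [-1, 1] to [0, 1]
w = 0.5 * weights         # weights rescaled for the shifted interval

i, k = np.meshgrid(np.arange(10), np.arange(10), indexing="ij")
a = np.where(k <= i, w[k] * t[k] * (1 - t[i]), w[k] * t[i] * (1 - t[k]))

def F(x):
    # residual of 5 x_i - 5 - sum_k a_ik x_k^3 = 0
    return 5.0 * x - 5.0 - a @ x**3

def J(x):
    # Jacobian: 5 I - 3 a_ik x_k^2
    return 5.0 * np.eye(10) - 3.0 * a * (x**2)[None, :]

x = np.ones(10)  # start from x_i = 1
for _ in range(20):
    x = x - np.linalg.solve(J(x), F(x))
```

The first component agrees with the quoted $s^*$ ($x_1 \approx 1.001377\dots$), and the solution is symmetric about $s = \frac{1}{2}$, as the kernel suggests.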
Example 5.
We consider a boundary value problem from [8], given by
$$t'' = \frac{1}{2}t^3 + 3t' - \frac{3}{2}x + \frac{1}{2}, \qquad t(0) = 0,\ t(1) = 1. \tag{48}$$
We assume the following partition of $[0, 1]$:
$$x_0 = 0 < x_1 < x_2 < \dots < x_j = 1, \qquad \text{where } x_{i+1} = x_i + h, \quad h = \frac{1}{j}.$$
We discretize BVP (48) by means of
$$t_i' \approx \frac{t_{i+1} - t_{i-1}}{2h}, \qquad t_i'' \approx \frac{t_{i-1} - 2t_i + t_{i+1}}{h^2}, \qquad i = 1, 2, \dots, j - 1.$$
Then we obtain a $(j - 1) \times (j - 1)$ nonlinear system
$$t_{i+1} - 2t_i + t_{i-1} - h^2\Big(\frac{1}{2}t_i^3 - \frac{3}{2}x_i + \frac{1}{2}\Big) - \frac{3h}{2}\big(t_{i+1} - t_{i-1}\big) = 0, \qquad i = 1, 2, \dots, j - 1,$$
where $t_0 = t(x_0) = 0$, $t_1 = t(x_1), \dots, t_{j-1} = t(x_{j-1})$, $t_j = t(x_j) = 1$, and the initial approximation is $t^{(0)} = \big(\frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}, \frac{1}{2}\big)^T$. In particular, we choose $j = 6$, so that we obtain a $5 \times 5$ nonlinear system. The required solution of this problem is
$$\bar x \approx \big(0.09029825\dots, 0.1987214\dots, 0.3314239\dots, 0.4977132\dots, 0.7123306\dots\big)^T.$$
The convergence behavior of the four methods, together with the ACOC ($\rho$), is reported in Table 6.

4. Conclusions

The convergence order of iterative methods is usually established using Taylor series, which requires the existence of high-order derivatives. Consequently, upper error bounds on $\|x_j - s^*\|$ and uniqueness results are not reported with this technique, and the applicability of these methods is limited to operators with high-order derivatives. To address these problems, we present local convergence results based only on the first derivative, and we compare methods (2)–(5). Notice that our convergence criteria are sufficient but not necessary. Therefore, if, e.g., the radius of convergence for method (5) is zero, this does not necessarily imply that the method fails to converge for a particular numerical example. Our technique can be adopted to expand the applicability of other methods in an analogous way.

Author Contributions

Both authors contributed equally to this work.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Nedzhibov, G.H. A family of multi-point iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2008, 222, 244–250.
2. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
3. Junjua, M.; Akram, S.; Yasmin, N.; Zafar, F. A New Jarratt-Type Fourth-Order Method for Solving System of Nonlinear Equations and Applications. J. Appl. Math. 2015, 2015, 805278.
4. Behl, R.; Cordero, A.; Torregrosa, J.R.; Alshomrani, A.S. New iterative methods for solving nonlinear problems with one and several unknowns. Mathematics 2018, 6, 296.
5. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
6. Balaji, G.V.; Seader, J.D. Application of interval Newton's method to chemical engineering problems. Reliab. Comput. 1995, 1, 215–223.
7. Shacham, M. An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 1989, 44, 1495–1501.
8. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
Table 1. Comparison on the basis of different radii of convergence for Example 1.

Schemes | R1       | R2       | R        | x0      | n | ρ
NM      | 0.011971 | 0.010253 | 0.010253 | 0.30831 | 4 | 4.0000
HM      | 0.011971 | 0.01329  | 0.011971 | 0.32321 | 4 | 4.0000
JM      | 0.011971 | 0.025483 | 0.011971 | 0.32521 | 4 | 4.0000
BM      | 0        | 0        | 0        | -       | - | -

Equation (39) is violated with these choices of φ_i; this is why R is zero for method BM. Therefore, for BM our results hold only if x0 = s*.
Table 2. Comparison on the basis of different radii of convergence for Example 2.

Schemes | R1        | R2        | R         | x0               | n | ρ
NM      | 0.0098841 | 0.0048774 | 0.0048774 | 3.953 + 0.3197i  | 4 | 4.0000
HM      | 0.0098841 | 0.016473  | 0.016473  | 3.9524 + 0.32i   | 4 | 4.0000
JM      | 0.0098841 | 0.0059094 | 0.0059094 | 3.9436 + 0.3112i | 4 | 4.0000
BM      | 0         | 0         | 0         | -                | - | -

Equation (39) is violated with these choices of φ_i; this is why R is zero for method BM. Therefore, for BM our results hold only if x0 = s*.
Table 3. Comparison on the basis of different radii of convergence for Example 3.

Schemes | R1       | R2             | R        | x0     | n | ρ
NM      | 0.678323 | 0.33473        | 0.33473  | 0.001  | 4 | 4.0000
HM      | 0.678323 | 1.13054        | 0.678323 | -0.579 | 4 | 4.0000
JM      | 0.678323 | 0.40555        | 0.40555  | 0.091  | 5 | 4.0000
BM      | 0        | 7.6006 × 10⁻¹⁸ | 0        | -      | - | -

Equation (39) is violated with these choices of φ_i; this is why R is zero for method BM. Therefore, for BM our results hold only if x0 = s*.
Table 4. Values of abscissas t_j and weights w_j.

j  | t_j                        | w_j
1  | 0.01304673574141413996101799 | 0.03333567215434406879678440
2  | 0.06746831665550774463395165 | 0.07472567457529029657288816
3  | 0.16029521585048779688283632 | 0.10954318125799102199776746
4  | 0.28330230293537640460036703 | 0.13463335965499817754561346
5  | 0.42556283050918439455758700 | 0.14776211235737643508694649
6  | 0.57443716949081560544241300 | 0.14776211235737643508694649
7  | 0.71669769706462359539963297 | 0.13463335965499817754561346
8  | 0.83970478414951220311716368 | 0.10954318125799102199776746
9  | 0.93253168334449225536604834 | 0.07472567457529029657288816
10 | 0.98695326425858586003898201 | 0.03333567215434406879678440
Table 5. Comparison on the basis of different radii of convergence for Example 4.

Schemes | R1     | R2     | R      | x0              | n | ρ
NM      | 2.6667 | 1.3159 | 1.3159 | (1, 1, …, 1)       | 4 | 4.0000
HM      | 2.6667 | 4.4444 | 2.6667 | (1.9, 1.9, …, 1.9) | 5 | 4.0000
JM      | 2.6667 | 1.5943 | 1.5943 | (2.1, 2.1, …, 2.1) | 5 | 4.0000
BM      | 0      | 0      | 0      | -               | - | -

Equation (39) is violated with these choices of φ_i; this is why R is zero for method BM. Therefore, for BM our results hold only if x0 = s*.
Table 6. Convergence behavior of distinct fourth-order methods for Example 5.

Methods | j | ‖F(x^(j))‖    | ‖x^(j+1) − x^(j)‖ | ρ
NM      | 1 | 8.1 × 10⁻⁶    | 2.0 × 10⁻⁴        |
        | 2 | 1.0 × 10⁻²³   | 3.1 × 10⁻²³       |
        | 3 | 9.1 × 10⁻⁹⁵   | 2.4 × 10⁻⁹⁴       |
        | 4 | 3.7 × 10⁻³⁷⁹  | 9.0 × 10⁻³⁷⁹      | 3.9996
HM      | 1 | 7.8 × 10⁻⁶    | 1.9 × 10⁻⁵        |
        | 2 | 7.6 × 10⁻²⁴   | 2.4 × 10⁻²³       |
        | 3 | 2.7 × 10⁻⁹⁵   | 7.2 × 10⁻⁹⁵       |
        | 4 | 2.6 × 10⁻³⁸¹  | 6.3 × 10⁻³⁸¹      | 3.9997
JM      | 1 | 7.8 × 10⁻⁶    | 1.9 × 10⁻⁵        |
        | 2 | 7.6 × 10⁻²⁴   | 2.4 × 10⁻²³       |
        | 3 | 2.7 × 10⁻⁹⁵   | 7.2 × 10⁻⁹⁵       |
        | 4 | 2.6 × 10⁻³⁸¹  | 6.3 × 10⁻³⁸¹      | 3.9997
BM      | 1 | 7.2 × 10⁻⁶    | 1.7 × 10⁻⁵        |
        | 2 | 4.2 × 10⁻²⁴   | 1.3 × 10⁻²³       |
        | 3 | 1.9 × 10⁻⁹⁶   | 5.2 × 10⁻⁹⁶       |
        | 4 | 5.6 × 10⁻³⁸⁶  | 1.4 × 10⁻³⁸⁵      | 3.9997
