Article

Derivative Free Fourth Order Solvers of Equations with Applications in Applied Disciplines

by Ramandeep Behl 1, Ioannis K. Argyros 2, Fouad Othman Mallawi 1 and J. A. Tenreiro Machado 3,*

1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 ISEP-Institute of Engineering, Polytechnic of Porto, Department of Electrical Engineering, 431, 4249-015 Porto, Portugal
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(4), 586; https://doi.org/10.3390/sym11040586
Submission received: 27 March 2019 / Revised: 16 April 2019 / Accepted: 17 April 2019 / Published: 23 April 2019
(This article belongs to the Special Issue Symmetry in Complex Systems)

Abstract: This paper develops efficient solvers of equations for real- and complex-valued functions. An earlier study by Lee and Kim used Taylor-type expansions and hypotheses on derivatives of order higher than one, even though no such derivatives appear in the suggested method. However, there are many cases where the calculation of the fourth derivative is expensive, or where that derivative is unbounded or does not even exist. The convergence analysis proposed here uses only the first order derivative of the function Ω. Hence, we expand the applicability of the earlier scheme, and we provide computable radii of convergence and error bounds based on Lipschitz constants. Furthermore, the range of starting points is also explored, in order to know how close the initial guess must be for convergence to be assured. Several numerical examples where the earlier results cannot be applied illustrate the new technique.

1. Introduction

We look for a unique root p* of the equation:

Ω(υ) = 0,   (1)

where Ω is a continuous operator defined on a convex subset P of S with values in S, and S = ℝ or S = ℂ. This is a relevant issue, since several problems from mathematics, physics, chemistry, and engineering can be reduced to Equation (1).
In general, either the lack or the intractability of analytic solutions forces researchers to adopt iterative techniques. However, when using that type of approach, we find problems such as slow convergence, convergence to an undesired root, divergence, computational inefficiency, or failure (see Traub [1] and Petković et al. [2]). The study of the convergence of iterative algorithms can be classified into two categories, namely the semi-local and the local convergence analysis. The first case is based on the information in a neighborhood of the starting point and gives criteria for guaranteeing the convergence of iteration algorithms. Therefore, a relevant issue is the convergence domain, as well as the radii of convergence of the algorithm.
Herein, we deal with the second case, that is, the local convergence analysis. Let us consider a fourth order algorithm defined for s = 0, 1, 2, …, as:

δ_s = λ_s + β Ω(λ_s)^k, with 0 ≠ β ∈ ℝ,
μ_s = δ_s − Ω(δ_s) / [λ_s, δ_s; Ω],
λ_{s+1} = μ_s − H(v_s, w_s) Ω(μ_s) / [δ_s, μ_s; Ω],   (2)

where λ₀ ∈ P is an initial point, k ∈ ℕ is an arbitrary natural number, the divided difference [·, ·; Ω] : P × P → L(S, S) satisfies [x, y; Ω] = (Ω(x) − Ω(y)) / (x − y) for x ≠ y, v_s = Ω(μ_s)/Ω(δ_s), w_s = Ω(μ_s)/Ω(λ_s), and H : S × S → S is a continuous function. The fourth order convergence of Method (2) was studied by Lee and Kim [3] using Taylor series, hypotheses up to the fourth order derivative of the function Ω, and hypotheses on the first and second partial derivatives of the function H. However, only the divided difference of first order appears in (2). Favorable computations were also reported for the related Kung–Traub method [1] of the form:

δ_s = λ_s + β Ω(λ_s)^4, with 0 ≠ β ∈ ℝ,
μ_s = δ_s − Ω(δ_s) / [λ_s, δ_s; Ω],
λ_{s+1} = μ_s − (Ω(λ_s) / (Ω(λ_s) − 2Ω(μ_s))) Ω(μ_s) / [δ_s, μ_s; Ω].   (3)
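A minimal numerical sketch may help fix ideas. The snippet below is our own illustration, not the authors' code: it implements one step of the derivative-free scheme with the Kung–Traub weight H(v, w) = 1/(1 − 2w), and it uses k = 1 in the shift δ = λ + βΩ(λ)^k, since with k = 4 the perturbation βΩ(λ)⁴ underflows in double precision long before convergence.

```python
def dd(f, x, y):
    """First order divided difference [x, y; f] = (f(x) - f(y)) / (x - y)."""
    return (f(x) - f(y)) / (x - y)

def kt_step(f, lam, beta=0.01, k=1):
    """One step of the derivative-free scheme with H(v, w) = 1/(1 - 2w),
    as in the Kung-Traub variant; k = 1 keeps beta * f(lam)**k from
    underflowing near the root in float64 arithmetic."""
    fl = f(lam)
    delta = lam + beta * fl**k              # shifted auxiliary point
    if delta == lam:                        # shift vanished: lam is numerically a root
        return lam
    mu = delta - f(delta) / dd(f, lam, delta)   # derivative-free Newton-type substep
    fm = f(mu)
    if fm == 0.0 or mu == delta:
        return mu
    weight = fl / (fl - 2.0 * fm)           # H(v, w) = 1/(1 - 2w), w = f(mu)/f(lam)
    return mu - weight * fm / dd(f, delta, mu)

def solve(f, x0, tol=1e-12, max_iter=100):
    """Iterate kt_step until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x
        x = kt_step(f, x)
    return x

# Example: the positive root of x^2 - 2 = 0
root = solve(lambda x: x * x - 2.0, 1.5)
```

In a typical run, a handful of steps suffice, consistent with fourth order behavior; the guards on vanishing differences are only there to keep the sketch robust at machine precision.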
Notice that (3) is obtained from (2) if we define the function H as H(v, w) = 1/(1 − 2w). The assumptions on the derivatives of Ω and H restrict the suitability of Algorithms (2) and (3). For instance, let us consider Ω on P = S = ℝ, with P₁ = [−1/π, 2/π], as:

Ω(υ) = υ³ log(π²υ²) + υ⁵ sin(1/υ), if υ ≠ 0,
Ω(υ) = 0, if υ = 0.
From this expression, we obtain:

Ω′(υ) = 2υ² − υ³ cos(1/υ) + 3υ² log(π²υ²) + 5υ⁴ sin(1/υ),

Ω″(υ) = −8υ² cos(1/υ) + 2υ(5 + 3 log(π²υ²)) + υ(20υ² − 1) sin(1/υ),

Ω‴(υ) = (1/υ)(1 − 36υ²) cos(1/υ) + 22 + 6 log(π²υ²) + (60υ² − 9) sin(1/υ).
We find that Ω‴(υ) is unbounded on P₁ at the point υ = 0. Therefore, the results in [3] cannot be applied to analyze the convergence of Methods (2) or (3). Notice that there are numerous algorithms and convergence results available in the literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Nonetheless, practice shows that the initial guess must lie in a neighborhood of the root for convergence to be achieved. However, how close to the root must the starting point be? Indeed, results based on Taylor expansions do not give any information about the radii of the convergence balls.
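The unboundedness can be seen concretely by evaluating the third derivative Ω‴ given above at points υ = 10⁻ⁿ/π approaching 0. The following check is our own illustration (the function names are ours, not from the paper):

```python
import math

def omega(u):
    # the function from the Introduction; Omega(0) = 0 by definition
    if u == 0.0:
        return 0.0
    return u**3 * math.log(math.pi**2 * u**2) + u**5 * math.sin(1.0 / u)

def omega_ppp(u):
    # third derivative of omega for u != 0, as given above
    return ((1.0 / u) * (1.0 - 36.0 * u**2) * math.cos(1.0 / u)
            + 22.0 + 6.0 * math.log(math.pi**2 * u**2)
            + (60.0 * u**2 - 9.0) * math.sin(1.0 / u))

# the (1/u) cos(1/u) and log(pi^2 u^2) terms blow up as u -> 0
vals = [abs(omega_ppp(10.0**-n / math.pi)) for n in (2, 4, 6)]
assert vals[0] < vals[1] < vals[2]
```

The magnitudes grow without bound as υ → 0, while Ω itself stays continuous, which is exactly why hypotheses on the third or fourth derivative fail here.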
We broaden the suitability of Methods (2) and (3) by using only assumptions on the first derivative of the function Ω. Moreover, we estimate the computable radii of convergence and the error bounds from Lipschitz constants. Additionally, we discuss the range of initial guesses around the solution p*, which tells us how close the starting point must be to p* in order to guarantee the convergence of (2). This problem was not addressed in [3], but it is of capital importance in practical applications.
In what follows, Section 2 addresses the local convergence of (2) and (3). Section 3 contains three numerical examples that illustrate the theoretical formulation. Finally, Section 4 gives the concluding remarks.

2. Convergence Analysis

Let b > 0, α > 0, γ > 0, β ∈ S, k ∈ ℕ, and M ≥ 1 be given constants. Furthermore, we consider that H : S × S → S and h : [0, ∞) → [0, ∞) are continuous functions such that:

|H(υ, η)| ≤ |H(|υ|, |η|)| ≤ h(υ),   (4)

for each υ, η ∈ S with |η| ≤ υ, and that |H| and h are nondecreasing functions on the interval [0, 1/(2γ)). For the local convergence analysis of (2), we need to introduce a few functions and parameters. Let us define the parameters R₀ and R₁ given by:

R₀ = 1 / ((1 + α)γ),  R₁ = 1 / ((1 + α)γ + γα(b|β|M + α)),   (5)

and the function g₁ on the interval [0, R₁) by:

g₁(υ) = γα(b|β|M + α)υ / (1 − (1 + α)γυ).   (6)

From the above definitions, it is easy to see that R₁ < R₀ < 1/γ, g₁(R₁) = 1, and 0 ≤ g₁(υ) < 1 for υ ∈ [0, R₁). Moreover, we consider the functions q and q̄ on [0, R₁) given by:

q(υ) = γ(α + g₁(υ))υ and q̄(υ) = q(υ) − 1.   (7)
It is straightforward to find that q̄(0) = −1 < 0 and that q̄(υ) → +∞ as υ → R₁⁻. By the intermediate value theorem, q̄ has zeros in the interval (0, R₁). Let R_q be the smallest zero of the function q̄ on (0, R₁), and set:

r̄ = min{R₁, R_q}.

Furthermore, let us define the functions g₂ and ḡ₂ on [0, r̄) by:

g₂(υ) = (1 + M h(υ) / (1 − q(υ))) g₁(υ)   (8)

and:

ḡ₂(υ) = g₂(υ) − 1.   (9)

Suppose that:

ḡ₂(υ) → a positive number or +∞, as υ → r̄⁻.   (10)

From (8), we have that ḡ₂(0) = −1 < 0, and from (10), that ḡ₂(υ) > 0 as υ → r̄⁻. Further, let R be the smallest zero of the function ḡ₂ on (0, r̄). Therefore, we have that, for each υ ∈ [0, R):

0 ≤ g₁(υ) < 1,   (11)

0 ≤ g₂(υ) < 1,   (12)

0 ≤ q(υ) < 1.   (13)
Let us denote by U ( μ , r ) and U ¯ ( μ , r ) the open and closed balls in S with center μ S and of radius r > 0 , respectively.
Theorem 1.
Let us assume that Ω : P ⊆ S → S is a differentiable function and that [·, ·; Ω] : P × P → L(S, S) is a divided difference of first order of Ω. Furthermore, we consider that h and H are functions satisfying (4), (9), that p* ∈ P, b > 0, α > 0, γ > 0, M ≥ 1, k ∈ ℕ, β ∈ S, and that, for each x, y ∈ P, we have:

Ω(p*) = 0, Ω′(p*) ≠ 0, |Ω′(p*)| ≤ b,   (14)

|Ω′(p*)⁻¹([x, y; Ω] − Ω′(p*))| ≤ γ(|x − p*| + |y − p*|),   (15)

h(υ) = H( Mγ(|β|Mb + α)υ / ((1 − γαυ)(1 − γ(1 + α)υ)), M g₁(υ) / (1 − γυ) ),   (16)

|1 + β[x, p*; Ω]^k (x − p*)^{k−1}| ≤ α,   (17)

|Ω′(p*)⁻¹[x, p*; Ω]| ≤ M,   (18)

Ū(p*, αR) ⊆ P.   (19)
Then, the sequence {λ_s} generated by (2) for λ₀ ∈ U(p*, R) ∖ {p*} is well defined, remains in U(p*, R) for each s = 0, 1, 2, …, and converges to p*, so that:

|δ_s − p*| ≤ α|λ_s − p*| < αR,   (20)

|μ_s − p*| ≤ g₁(|λ_s − p*|)|λ_s − p*| ≤ |λ_s − p*| < R,   (21)

|λ_{s+1} − p*| ≤ g₂(|λ_s − p*|)|λ_s − p*| < |λ_s − p*|.   (22)

Moreover, for any G ∈ [R, 1/γ), the limit point p* is the unique root of the equation Ω(x) = 0 in Ū(p*, G) ∩ P.
Proof. 
By the hypotheses λ₀ ∈ U(p*, R) ∖ {p*}, (14), (17) and (19), we obtain:

δ₀ − p* = λ₀ − p* + β(Ω(λ₀) − Ω(p*))^k = (1 + β[λ₀, p*; Ω]^k (λ₀ − p*)^{k−1})(λ₀ − p*),   (23)

so that:

|δ₀ − p*| = |1 + β[λ₀, p*; Ω]^k (λ₀ − p*)^{k−1}| |λ₀ − p*| ≤ α|λ₀ − p*| < αR,   (24)

which leads to (20) for s = 0 and to δ₀ ∈ U(p*, αR). We need to show that [λ₀, δ₀; Ω] ≠ 0. Using (15) and the definition of R, we obtain:

|Ω′(p*)⁻¹([λ₀, δ₀; Ω] − Ω′(p*))| ≤ γ(|λ₀ − p*| + |δ₀ − p*|) ≤ γ(1 + α)|λ₀ − p*| < γ(1 + α)R < 1.

From the Banach lemma on invertible functions [7,14], it follows that [λ₀, δ₀; Ω] ≠ 0 and:

|[λ₀, δ₀; Ω]⁻¹ Ω′(p*)| ≤ 1 / (1 − γ(1 + α)|λ₀ − p*|).   (25)
In view of (14) and (18), we have:

|Ω′(p*)⁻¹Ω(λ₀)| = |Ω′(p*)⁻¹(Ω(λ₀) − Ω(p*))| = |Ω′(p*)⁻¹[λ₀, p*; Ω](λ₀ − p*)| ≤ M|λ₀ − p*|,   (26)

and similarly:

|Ω′(p*)⁻¹Ω(δ₀)| ≤ M|δ₀ − p*|,   (27)

since δ₀ ∈ P. Then, using the second substep of Method (2), (11), (14), (16), (25) and (27), we obtain:

|μ₀ − p*| = |δ₀ − p* − [λ₀, δ₀; Ω]⁻¹Ω(δ₀)|
≤ |[λ₀, δ₀; Ω]⁻¹Ω′(p*)| |Ω′(p*)⁻¹([λ₀, δ₀; Ω](δ₀ − p*) − (Ω(δ₀) − Ω(p*)))|
= |[λ₀, δ₀; Ω]⁻¹Ω′(p*)| |Ω′(p*)⁻¹([λ₀, δ₀; Ω] − [δ₀, p*; Ω])(δ₀ − p*)|
≤ γ(|λ₀ − δ₀| + |δ₀ − p*|)|δ₀ − p*| / (1 − γ(1 + α)|λ₀ − p*|)
≤ γ(|β|bM|λ₀ − p*| + α|λ₀ − p*|) α|λ₀ − p*| / (1 − γ(1 + α)|λ₀ − p*|)
= γα(|β|bM + α)|λ₀ − p*|² / (1 − γ(1 + α)|λ₀ − p*|)
= g₁(|λ₀ − p*|)|λ₀ − p*| < |λ₀ − p*| < R,   (28)
and so, (21) is true for s = 0 and μ₀ ∈ U(p*, R). Next, we need to show that Ω(λ₀) ≠ 0 and Ω(δ₀) ≠ 0 for λ₀, δ₀ ≠ p*. Using (14) and (15) and the definition of R, we obtain:

|(λ₀ − p*)⁻¹Ω′(p*)⁻¹[Ω(λ₀) − Ω(p*) − Ω′(p*)(λ₀ − p*)]| ≤ |λ₀ − p*|⁻¹ |Ω′(p*)⁻¹([λ₀, p*; Ω] − Ω′(p*))(λ₀ − p*)| ≤ γ|λ₀ − p*|⁻¹ |λ₀ − p*|² = γ|λ₀ − p*| < γR < 1.   (29)

Hence, Ω(λ₀) ≠ 0 and:

|Ω(λ₀)⁻¹Ω′(p*)| ≤ 1 / (|λ₀ − p*|(1 − γ|λ₀ − p*|)).   (30)

Similarly, we have that:

|Ω(δ₀)⁻¹Ω′(p*)| ≤ 1 / (|δ₀ − p*|(1 − γ|δ₀ − p*|)) ≤ 1 / (|δ₀ − p*|(1 − αγ|λ₀ − p*|)).   (31)
Then, by using (4), (12) (with δ₀ replaced by μ₀), (16), (27), (28), (30) and (31), we have:

|H(v₀, w₀)| ≤ |H(|v₀|, |w₀|)|
≤ H( M|μ₀ − p*| / (|δ₀ − p*|(1 − γ|δ₀ − p*|)), M|μ₀ − p*| / (|λ₀ − p*|(1 − γ|λ₀ − p*|)) )
≤ H( Mγ(|β|Mb + α)|λ₀ − p*||δ₀ − p*| / (|δ₀ − p*|(1 − αγ|λ₀ − p*|)(1 − γ(1 + α)|λ₀ − p*|)), M g₁(|λ₀ − p*|)|λ₀ − p*| / (|λ₀ − p*|(1 − γ|λ₀ − p*|)) )
= H( Mγ(|β|Mb + α)|λ₀ − p*| / ((1 − αγ|λ₀ − p*|)(1 − γ(1 + α)|λ₀ − p*|)), M g₁(|λ₀ − p*|) / (1 − γ|λ₀ − p*|) )
= h(|λ₀ − p*|).   (32)
Adopting (15) and (13), we get:

|Ω′(p*)⁻¹([δ₀, μ₀; Ω] − Ω′(p*))| ≤ γ(|δ₀ − p*| + |μ₀ − p*|) ≤ γ(α + g₁(|λ₀ − p*|))|λ₀ − p*| = q(|λ₀ − p*|) < q(R) < 1.   (33)

Hence, we have:

|[δ₀, μ₀; Ω]⁻¹Ω′(p*)| ≤ 1 / (1 − q(|λ₀ − p*|)).   (34)
Furthermore, λ₁ is well defined by (24), (32) and (34). Using the third substep of (2), (12), (27) (with δ₀ replaced by μ₀), (28), (32) and (34), we get:

|λ₁ − p*| ≤ |μ₀ − p*| + |H(v₀, w₀)| |[δ₀, μ₀; Ω]⁻¹Ω′(p*)| |Ω′(p*)⁻¹Ω(μ₀)|
≤ (1 + M h(|λ₀ − p*|) / (1 − q(|λ₀ − p*|))) |μ₀ − p*|
≤ (1 + M h(|λ₀ − p*|) / (1 − q(|λ₀ − p*|))) g₁(|λ₀ − p*|)|λ₀ − p*|
= g₂(|λ₀ − p*|)|λ₀ − p*| < |λ₀ − p*| < R,   (35)

showing that (22) is true for s = 0 and that λ₁ ∈ U(p*, R). Replacing λ₀, δ₀ and μ₀ by λ_s, δ_s and μ_s, respectively, in the preceding estimates, we arrive at (20)–(22). From the estimate |λ_{s+1} − p*| < |λ_s − p*| < R, we conclude that lim_{s→∞} λ_s = p* and that λ_{s+1} ∈ U(p*, R). Finally, to show uniqueness, let p** ∈ Ū(p*, G) be such that Ω(p**) = 0, and set Q = [p*, p**; Ω]. Adopting (15), we get:

|Ω′(p*)⁻¹(Q − Ω′(p*))| ≤ γ(|p* − p*| + |p** − p*|) ≤ γG < 1.   (36)

Therefore, Q ≠ 0, and in view of the identity Ω(p*) − Ω(p**) = Q(p* − p**), we conclude that p* = p**. □
Remark 1.
(a) 
It follows from condition (15) and the estimate:

|Ω′(p*)⁻¹[x, p*; Ω]| = |Ω′(p*)⁻¹([x, p*; Ω] − Ω′(p*)) + 1| ≤ 1 + |Ω′(p*)⁻¹([x, p*; Ω] − Ω′(p*))| ≤ 1 + γ|x − p*|

that Condition (18) can be discarded and M substituted by:

M = M(υ) = 1 + γυ,

or simply by M = 2, since υ ∈ [0, 1/γ).
(b) 
We note that (2) does not change if we adopt the conditions of Theorem 1 instead of the stronger ones given in [3]. In practice, for the error bounds, we can consider the computational order of convergence (COC) [10]:

ξ = ln( |λ_{s+2} − p*| / |λ_{s+1} − p*| ) / ln( |λ_{s+1} − p*| / |λ_s − p*| ), for each s = 0, 1, 2, …,   (37)

or the approximated computational order of convergence (ACOC) [10]:

ξ* = ln( |λ_{s+2} − λ_{s+1}| / |λ_{s+1} − λ_s| ) / ln( |λ_{s+1} − λ_s| / |λ_s − λ_{s−1}| ), for each s = 1, 2, …   (38)

In this way, we obtain the order of convergence without resorting to bounds that involve derivatives higher than the first.
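The two estimators above translate directly into code. The following helpers are our own sketch (names are ours): `coc` implements (37) from the last three errors |λ_s − p*|, and `acoc` implements (38) from the iterates alone, which is what one uses when p* is unknown.

```python
import math

def coc(errors):
    """Computational order of convergence (37) from errors |x_s - p*|."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """Approximated COC (38) from the iterates themselves (no root needed)."""
    d = [abs(b - a) for a, b in zip(xs, xs[1:])]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])
```

For instance, an error sequence obeying e_{s+1} ≈ e_s⁴ yields a value close to 4, matching the theoretical order of Method (2).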

3. Numerical Examples

We consider several choices of the weight function H to solve the variety of univariate problems depicted in Examples 1–3.
Table 1, Table 2 and Table 3 display the minimum number of iterations s necessary to obtain the required accuracy for the zeros of the functions Ω(x) in Examples 1–3. Moreover, we also include the initial guess λ₀, the radii of convergence for the corresponding function, and the theoretical order of convergence. Additionally, we calculate the COC (ξ), approximated by means of (37) and (38).
All computations were performed with the package Mathematica 9 using multiple precision arithmetic, adopting ε = 10⁻⁵⁰ as the error tolerance and the stopping criteria:

(i) |λ_{s+1} − λ_s| < ε and (ii) |Ω(λ_s)| < ε.
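The two criteria can be expressed as a single predicate; the sketch below is ours (the paper's experiments run in Mathematica 9, not Python). Note that the quoted tolerance 10⁻⁵⁰ requires multiple-precision arithmetic; with standard float64, a tolerance around 10⁻¹⁵ is the practical limit.

```python
def stopped(x_prev, x_curr, f, eps=1e-50):
    """Stopping criteria (i) and (ii): both the step size and the residual
    must fall below the tolerance eps."""
    return abs(x_curr - x_prev) < eps and abs(f(x_curr)) < eps
```

A driver loop would call `stopped(x_prev, x_curr, omega)` after each iteration of Method (2) and report s as the first index at which it returns True.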
Example 1.
Let S = ℝ, P = [−π, π], and p* = 0. Let us define the function Ω on P by:

Ω(x) = cos x − x − 1.

Consequently, it results that α = 1 + |β| M^k |Ω′(p*)|^k / γ^{k−1}, γ = 1/2, b = |Ω′(p*)| = 1, and M = 2. We obtain a different radius of convergence for each type of weight function (for details, please see [3]); the corresponding COC (ξ) and iteration count s are presented in Table 1.
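The constants quoted for this example can be checked directly; this is a quick sanity check of ours, not part of the original experiments.

```python
import math

f = lambda x: math.cos(x) - x - 1.0   # Omega of Example 1
assert f(0.0) == 0.0                  # p* = 0 is an exact root

d = -math.sin(0.0) - 1.0              # Omega'(x) = -sin(x) - 1 evaluated at p*
assert abs(abs(d) - 1.0) < 1e-15      # b = |Omega'(p*)| = 1
```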
Example 2.
Let S = ℝ, P = [−1, 1], and p* ≈ 0.714806 (approximated root), and let us define the function Ω on P by:

Ω(x) = eˣ − 4x².

As a consequence, we get α = 1 + |β| M^k |Ω′(p*)|^k / γ^{k−1}, γ = 2, b = |Ω′(p*)| = |e^{p*} − 8p*| ≈ 3.67466, and M = 2. The distinct radii of convergence obtained with the several weight functions (for details, please see [3]), together with the COC (ξ) and s, are listed in Table 2.
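The approximate root and the constant b quoted above can be verified by refining p* with a few Newton steps; this check is our own illustration.

```python
import math

f = lambda x: math.exp(x) - 4.0 * x**2    # Omega of Example 2
df = lambda x: math.exp(x) - 8.0 * x      # Omega'

x = 0.714806                              # quoted approximate root
for _ in range(6):                        # Newton refinement
    x -= f(x) / df(x)

assert abs(f(x)) < 1e-12                  # x is now the root to machine precision
assert abs(x - 0.714806) < 1e-5           # consistent with the quoted value
assert abs(abs(df(x)) - 3.67466) < 1e-4   # b = |e^{p*} - 8 p*|
```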
Example 3.
Using the example of the Introduction, we have α = 1 + |β| M^k |Ω′(p*)|^k / γ^{k−1}, γ = 2, b = |Ω′(p*)| = 2/π² + 1/π³ ≈ 0.23489, M = 2, and the required zero is p* = 1/π ≈ 0.318309886. The different radii of convergence obtained by adopting distinct types of weight functions (for details, please see [3]), together with the COC (ξ) and s, are given in Table 3.
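At p* = 1/π, both the logarithmic term (log(π²p*²) = log 1 = 0) and the oscillatory term (sin(1/p*) = sin π = 0) vanish, so Ω(p*) = 0, and only 2υ² and the cos term survive in Ω′. The following check (ours) confirms the root and the quoted value of b.

```python
import math

def omega(u):
    # the function from the Introduction; Omega(0) = 0 by definition
    if u == 0.0:
        return 0.0
    return u**3 * math.log(math.pi**2 * u**2) + u**5 * math.sin(1.0 / u)

p_star = 1.0 / math.pi
assert abs(omega(p_star)) < 1e-12         # p* = 1/pi is a root

# b = |Omega'(p*)| = 2/pi^2 + 1/pi^3  (log and sin terms vanish, cos(pi) = -1)
b = 2.0 / math.pi**2 + 1.0 / math.pi**3
assert abs(b - 0.23489) < 1e-5
```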

4. Conclusions

Locating the range or interval of starting points that guarantees the convergence of an iterative method to the required root is one of the difficult problems in computational analysis. This paper addressed this problem and expanded the applicability of Methods (2) and (3) using hypotheses only on the functions appearing in these techniques. Further, we provided the radii of ball convergence and error bounds using Lipschitz conditions. This type of study was not addressed in the earlier work. With the help of the radius of convergence, we can find the range of initial guesses around the solution p*, which tells us how close the starting point must be in order to guarantee the convergence of Methods (2) and (3). Finally, the applicability of the new approach was illustrated with several numerical examples.

Author Contributions

All co-authors contributed to the conceptualization, methodology, validation, formal analysis, writing the original draft preparation, and editing.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  2. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013.
  3. Lee, M.Y.; Kim, Y.I. A family of fast derivative-free fourth order multipoint optimal methods for nonlinear equations. Int. J. Comput. Math. 2012, 89, 2081–2093.
  4. Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223.
  5. Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32.
  6. Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev's iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174.
  7. Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008.
  8. Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: River Edge, NJ, USA, 2013.
  9. Behl, R.; Motsa, S.S. Geometric construction of eighth-order optimal families of Ostrowski's method. Sci. World J. 2015, 2015, 614612.
  10. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
  11. Kanwar, V.; Behl, R.; Sharma, K.K. Simply constructed family of a Ostrowski's method with optimal order of convergence. Comput. Math. Appl. 2011, 62, 4021–4027.
  12. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
  13. Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224.
  14. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142.
  15. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Table 1. Radii of convergence according to the adopted weight function. Different values of the parameters that satisfy Theorem 1.

Cases | β | k | H(υ, η) | R₁ | R_q | R | λ₀ | s | ξ
1 | 1 | 1 | 1 + υ/(1 − η) | 0.10526 | 0.27008 | 0.02535 | 0.024 | 4 | 4
2 | 3 | 2 | 1 + 2υ | 0.00250 | 0.03749 | 0.00082 | 0.0007 | 3 | 4
3 | 3 | 3 | 1 + 2η | 0.00020 | 0.01013 | 0.00004 | 0.00003 | 3 | 4
4 | 0.1 | 4 | 1/(1 − 2η) | 0.00962 | 0.07090 | 0.00160 | 0.0005 | 3 | 4
Table 2. Radii of convergence according to the adopted weight function. Different values of the parameters that satisfy Theorem 1.

Cases | β | k | H(υ, η) | R₁ | R_q | R | λ₀ | s | ξ
1 | 1 | 1 | 1 + υ/(1 − η) | 0.01427 | 0.05498 | 0.00318 | 0.713 | 4 | 4
2 | 3 | 2 | 1 + 2υ | 0.00047 | 0.00896 | 0.00015 | 0.7417 | 4 | 4
3 | 3 | 3 | 1 + 2η | 0.00006 | 0.00286 | 0.00001 | 0.7418 | 3 | 4
4 | 0.1 | 4 | 1/(1 − 2η) | 0.00359 | 0.02201 | 0.00060 | 0.7413 | 4 | 4
Table 3. Radii of convergence according to the adopted weight function. Different values of the parameters that satisfy Theorem 1.

Cases | β | k | H(υ, η) | R₁ | R_q | R | λ₀ | s | ξ
1 | 1 | 1 | 1 + υ/(1 − η) | 0.03470 | 0.07391 | 0.00884 | 0.325 | 4 | 4
2 | 3 | 2 | 1 + 2υ | 0.03965 | 0.08356 | 0.01225 | 0.329 | 4 | 4
3 | 3 | 3 | 1 + 2η | 0.08363 | 0.13140 | 0.02437 | 0.298 | 5 | 4
4 | 0.1 | 4 | 1/(1 − 2η) | 0.16367 | 0.18912 | 0.05268 | 0.358 | 5 | 4
