Article

An Iteration Function Having Optimal Eighth-Order of Convergence for Multiple Roots and Local Convergence

Ramandeep Behl, Ioannis K. Argyros, Michael Argyros, Mehdi Salimi and Arwa Jeza Alsolami
1 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
4 Center for Dynamics and Institute for Analysis, Department of Mathematics, Technische Universität Dresden, 01062 Dresden, Germany
5 Department of Mathematics, Simon Fraser University, 8888 University Dr., Burnaby, BC V5A 1S6, Canada
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(9), 1419; https://doi.org/10.3390/math8091419
Submission received: 15 July 2020 / Revised: 19 August 2020 / Accepted: 21 August 2020 / Published: 24 August 2020
(This article belongs to the Section Computational and Applied Mathematics)

Abstract

Symmetry principles play an important role in the study of the dynamics of physical systems, for example in classical and quantum physics, turbulence and similar theoretical models. We often end up having to deal with an equation whose solution we would like to have in closed form, but a solution in such form can be obtained only in special cases. Hence, we resort to iterative schemes. This is where the novelty of our study lies, as well as our motivation for writing it. The literature on eighth-order convergent iteration functions that can handle multiple zeros m ≥ 1 is very limited. Therefore, we suggest an eighth-order scheme for multiple zeros having optimal convergence, fast convergence and an uncomplicated structure. We develop an extensive convergence study in the main theorem, which establishes the eighth-order convergence of our scheme. Finally, the applicability of the scheme and its comparison with existing schemes are illustrated on real-life problems, e.g., Van der Waals' equation of state, a chemical reactor with fractional conversion, a continuous stirred tank reactor and a multi-factor problem. These examples further show the superiority of our schemes over the earlier ones.

1. Introduction

One of the problems of great significance and difficulty in computational mathematics is finding the multiple zeros of f(x), where f: D ⊆ ℝ → ℝ is a sufficiently differentiable function on D. It is difficult to obtain the exact solution of such problems in analytic form; in fact, this is possible only in special cases. That is why, in practice, we obtain an approximate solution, efficient up to any specified degree of accuracy, by means of an iterative procedure.
This is one of the main reasons that researchers have been making great efforts to develop iteration functions over the past few decades. The accuracy obtained also depends on other factors, such as the chosen iterative function, the structure of the problem under consideration, the initial guess, and the software used, such as Maple, Fortran, MATLAB, Mathematica, etc. Furthermore, practitioners and researchers using these iterative schemes face many problems, such as the choice of the initial guess/approximation, slow convergence, a derivative that vanishes near the root (in the case of derivative-free multipoint schemes), divergence, oscillation, difficulty near the initial point, failure of the iterative method, etc. (for more details please see [1,2,3,4,5]).
In addition, no single iteration function known to date is applicable to every problem. This is the main reason that there is such an extensive literature on iteration functions for scalar equations. In this study we are concerned with the multiple zeros of the involved function. Unfortunately, only a small portion of the literature on higher-order iteration functions for real equations can handle multiple roots; the lengthy calculations and the extra computational effort required are the main reasons for this. Moreover, it is more challenging to construct an iterative procedure for multiple zeros than for simple ones.
Eighth-order multi-point schemes are faster and have a better efficiency index [6,7,8,9,10,11] than fourth-order [12,13,14,15,16,17,18,19,20,21] and sixth-order [22,23] iteration functions. In other words, by using them we can save computational time and cost and obtain the estimated root within a smaller number of iterations. However, there are only a few articles [24,25,26,27] discussing eighth-order convergence for multiple roots, and there is always scope in research for better approximation techniques with a simple and compact structure.
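As a rough illustration of the efficiency argument above, the following Python sketch compares the classical (Ostrowski) efficiency index EI = p^(1/n), where p is the convergence order and n the number of function evaluations per step. The evaluation counts used here (three for an optimal fourth-order method, four for the sixth- and eighth-order methods compared later) are our assumptions, not values quoted from the paper.

```python
# Efficiency index EI = p**(1/n); evaluation counts are assumptions matching
# the usual optimal fourth-order methods and the four-evaluation sixth- and
# eighth-order schemes compared in this paper.
schemes = {
    "fourth order, 3 evaluations": (4, 3),
    "sixth order,  4 evaluations": (6, 4),
    "eighth order, 4 evaluations": (8, 4),
}
for name, (p, n) in schemes.items():
    print(f"{name}: EI = {p ** (1.0 / n):.4f}")
# prints approximately 1.5874, 1.5651 and 1.6818, respectively
```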
Keeping all these things in mind, we present an eighth-order iteration scheme having optimal convergence for obtaining the multiple solutions of scalar equations. Our schemes achieve smaller errors between two consecutive iterations, smaller residual errors, and a more balanced computational order of convergence when compared with existing schemes of the same order of convergence. Moreover, we present a main theorem which demonstrates the eighth-order convergence provided the multiplicity of the root is known. A practical application of our proposed schemes to real-life problems is also given.
We usually categorize schemes by local, semi-local and global convergence. In local convergence, information about the solution is used to determine a ball containing suitable (for convergence) initial points. In semi-local convergence, convergence criteria are obtained using the initial point and the function involved. Finally, in global convergence all solutions are sought and the ball of convergence usually coincides with the domain of the function. We are interested in local convergence, since in this case schemes are faster and the initial point is picked from the convergence ball, close to the solution. However, we should mention that there is a plethora of global results, such as [28,29], to mention a few. Global results are more expensive, but return all roots in a given domain. The conditions (2) in our main Theorem 1 may seem restrictive, but they are very general and include many well studied schemes for special choices of the free parameters involved. In fact, in Table 1 we present numerous such cases which satisfy the conditions (2) of Theorem 1. Our scheme applies to finding roots of functions that are not necessarily of polynomial nature (see Examples 2, 3, 6 and 8).
In the rest of the examples (used to test the convergence criteria), polynomial clipping schemes may do better. However, we did not investigate this, since the main focus of our paper is scheme (1). Another benefit of our local results is that we obtain estimates on |x_n − ξ| not given in the aforementioned papers, so we know in advance the number of iterations needed to obtain a desired error tolerance.

2. Construction of Higher-Order Scheme

We develop an eighth-order scheme for multiple zeros with simple and compact body design. Therefore, we consider the new scheme in the following way:
y_σ = x_σ − m f(x_σ)/f′(x_σ),
w_σ = y_σ − μ H(ν) f(x_σ)/f′(x_σ),
x_{σ+1} = w_σ − κ μ [G(μ) + m κ/(1 − 4μ)] f(x_σ)/f′(x_σ),
where α, β are real numbers. In addition, the two functions H: ℂ → ℂ and G: ℂ → ℂ are analytic in neighborhoods of 1 and 0, respectively, where ν = (1 + αμ)/(1 + βμ), μ = (f(y_σ)/f(x_σ))^{1/m} and κ = (f(w_σ)/f(y_σ))^{1/m}, with μ and κ multi-valued functions. We take their principal analytic branches (see [30,31]); that is, μ is taken as the principal root μ = exp[(1/m) log(f(y_σ)/f(x_σ))], with log(f(y_σ)/f(x_σ)) = log|f(y_σ)/f(x_σ)| + i Arg(f(y_σ)/f(x_σ)) for −π < Arg(f(y_σ)/f(x_σ)) ≤ π. This choice of Arg(z) for z ∈ ℂ agrees with that of log z to be employed later in the numerical experiments. In an analogous way we have μ = |f(y_σ)/f(x_σ)|^{1/m} · exp[(i/m) Arg(f(y_σ)/f(x_σ))] = O(e_σ) and κ = |f(w_σ)/f(y_σ)|^{1/m} · exp[(i/m) Arg(f(w_σ)/f(y_σ))] = O(e_σ).
In Theorem 1, we illustrate that the constructed scheme (1) attains the maximum eighth-order of convergence for all α, β ∈ ℝ with α ≠ β, without adopting any supplementary evaluation of the function or its derivatives. Notice that the weight functions H and G play significant roles in the development of the scheme (details can be found in Theorem 1).
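The principal branch described above is exactly what a complex logarithm routine with Arg(z) ∈ (−π, π] provides. The following minimal Python sketch (our illustration, not code from the paper) computes the principal m-th root used for μ and κ:

```python
import cmath

def principal_mth_root(z, m):
    """Principal m-th root z**(1/m) = exp(log(z)/m), where cmath.log uses
    Arg(z) in (-pi, pi], matching the branch chosen for mu and kappa."""
    return cmath.exp(cmath.log(z) / m)

# e.g. the principal cube root of -8 is 1 + sqrt(3)*i, not -2
print(principal_mth_root(-8, 3))  # (1.0000...+1.7320...j)
```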
Theorem 1.
Suppose ξ is a zero of multiplicity m ≥ 1 of f. Consider that the function f: D ⊆ ℝ → ℝ is analytic in D surrounding the required zero ξ. Then, the scheme given by (1) is of eighth-order convergence, provided
H(1) = m,  H′(1) = 2m/(α − β)  (α ≠ β),  G(0) = m,  G′(0) = 2m,
G″(0) = H″(1)(α − β)^2 + (2 − 4β)m,
G‴(0) = (α − β)^2 [H‴(1)(α − β) − 6(β − 1)H″(1)] + 12m(β^2 − 2β − 2).
Proof. 
Let us consider that e_σ = x_σ − ξ and c_k = (m!/(m + k)!) · f^{(m+k)}(ξ)/f^{(m)}(ξ), k = 1, 2, …, 8, are the error in the σ-th iteration and the asymptotic error constants, respectively. Now, we adopt Taylor's series expansions of the functions f(x_σ) and f′(x_σ) around x = ξ, which are given by
f(x_σ) = (f^{(m)}(ξ)/m!) e_σ^m [1 + c_1 e_σ + c_2 e_σ^2 + c_3 e_σ^3 + c_4 e_σ^4 + c_5 e_σ^5 + c_6 e_σ^6 + c_7 e_σ^7 + c_8 e_σ^8 + O(e_σ^9)]
and
f′(x_σ) = (f^{(m)}(ξ)/m!) e_σ^{m−1} [m + (m+1)c_1 e_σ + (m+2)c_2 e_σ^2 + (m+3)c_3 e_σ^3 + (m+4)c_4 e_σ^4 + (m+5)c_5 e_σ^5 + (m+6)c_6 e_σ^6 + (m+7)c_7 e_σ^7 + (m+8)c_8 e_σ^8 + O(e_σ^9)],
respectively.
We have the following expression in view of expressions (3) and (4) from the scheme (1)
y_σ − ξ = (c_1/m) e_σ^2 + (1/m^2)(2m c_2 − (m + 1)c_1^2) e_σ^3 + Σ_{i=0}^{4} θ_i e_σ^{i+4} + O(e_σ^9),
where θ_i = θ_i(m, c_1, c_2, …, c_8); for example, θ_0 = (1/m^3)[3m^2 c_3 − m(3m + 4)c_1 c_2 + (m + 1)^2 c_1^3] and θ_1 = (1/m^4)[2 c_2 c_1^2 m(2m^2 + 5m + 3) − 2 c_3 c_1 m^2 (2m + 3) − 2m^2 (c_2^2 (m + 2) − 2 c_4 m) − c_1^4 (m + 1)^3], etc.
Expression (5) and Taylor Series expansion leads us to
f ( y σ ) = f ( m ) ( ξ ) e σ 2 m [ c 1 m m m ! + ( 2 m c 2 ( m + 1 ) c 1 2 ) c 1 m m e σ m ! c 1 + c 1 m 1 + m 1 2 m ! c 1 3 { ( 3 + 3 m + 3 m 2 + m 3 ) c 1 4 2 m ( 2 + 3 m + 2 m 2 ) c 1 2 c 2 + 4 ( m 1 ) m 2 c 2 2 + 6 m 2 c 1 c 3 } e σ 2 + i = 0 4 θ ¯ i e σ i + 3 + O ( e σ 8 ) ] ,
where θ ¯ i = θ ¯ i ( θ 0 , θ 1 , θ 2 , θ 3 , θ 4 ) .
We obtain the following expression from the expressions (3) and (6)
μ = (c_1/m) e_σ + ((2m c_2 − (m + 2)c_1^2)/m^2) e_σ^2 + Σ_{i=0}^{4} θ̿_i e_σ^{i+3} + O(e_σ^8),
which in turn leads us to
ν = (1 + αμ)/(1 + βμ) = 1 + (α − β) Σ_{k=1}^{8} γ_k e_σ^k + O(e_σ^9),
where θ̿_i = θ̿_i(θ̄_0, θ̄_1, θ̄_2, θ̄_3, θ̄_4) and γ_k = γ_k(m, α, β, c_1, c_2, …, c_8); for example, γ_1 = c_1/m, γ_2 = (1/m^2)[2 c_2 m − c_1^2(β + m + 2)], γ_3 = (1/(2m^3))[(2β^2 + 8β + 2m^2 + (4β + 7)m + 7)c_1^3 + 6 c_3 m^2 − 2 c_2 c_1 m(4β + 3m + 7)], etc.
Next, we set ν = 1 + Ω . Then, we expand the weight function H ( ν ) as:
H(ν) = H(1) + H′(1) Ω + (1/2!) H″(1) Ω^2 + (1/3!) H‴(1) Ω^3.
Adopting expressions (3)–(9) and the second substep of (1), we obtain
w_σ − ξ = (c_1(m − H(1))/m^2) e_σ^2 + Σ_{i=0}^{5} A_i e_σ^{i+3} + O(e_σ^9),
where A_i = A_i(m, c_1, c_2, …, c_8, α, β, H(1), H′(1), H″(1), H‴(1)). For example, the first coefficient is explicitly written as A_0 = (1/m^3)[2 c_2 m(m − H(1)) − c_1^2(m^2 + m − H(1)(m + 3) + (α − β)H′(1))], and the other ones can be written in a similar way.
By (10), we deduce at least third-order convergence, provided
H ( 1 ) = m .
By using expression (11) and A 0 = 0 , we obtain
c_1^2 (H′(1)(β − α) + 2m)/m^3 = 0,
which further yields
H′(1) = 2m/(α − β),  α ≠ β.
Hence, our scheme attains optimal fourth-order convergence.
Next, by using (11) and (13) in (10), we have
w σ ξ = m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m c 1 3 2 c 1 c 2 m 2 2 m 4 e σ 4 + i = 2 5 A i e σ i + 3 + O ( e σ 9 ) .
We obtain the following expression by adopting the Taylor series and (14)
f ( w σ ) = f ( m ) ( ξ ) e σ 4 m [ 2 m c 1 3 H ( 1 ) ( α β ) 2 + m 2 + ( 4 β + 9 ) m 2 c 1 c 2 m 2 m 4 m m ! + i = 1 5 A ¯ i e σ i + O ( e σ 6 ) ] .
From the expressions (6) and (15), we further have
κ = c 1 2 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m 2 c 2 m 2 2 m 3 e σ 2 + i = 1 5 A ¯ ¯ i e σ i + 2 + O ( e σ 8 ) .
By (16), κ is of order e_σ^2. Hence, we expand G(μ) about the origin (0) up to third-order terms in the following way:
G(μ) = G(0) + G′(0) μ + (1/2!) G″(0) μ^2 + (1/3!) G‴(0) μ^3.
Inserting (3)–(17) into (1), we obtain
e σ + 1 = c 1 G ( 0 ) m c 1 2 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m 2 c 2 m 2 2 m 5 e σ 4 + i = 1 4 L i e σ i + 4 + O ( e σ 9 ) ,
where L i = L i ( α , β , m , c 1 , c 2 , , c 8 , H ( 1 ) , H ( 1 ) , G ( 0 ) , G ( 0 ) , G ( 0 ) ) .
Notice, we attain convergence order at least fifth, provided
G ( 0 ) = m .
We have the following expression by choosing G ( 0 ) = m and L 1 = 0
c 1 2 G ( 0 ) 2 m c 1 2 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m 2 c 2 m 2 2 m 6 = 0 ,
which further yields
G′(0) = 2m.
Again, inserting the values of G(0) and G′(0) into L_2 = 0, we obtain
c 1 3 c 1 2 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m 2 c 2 m 2 G ( 0 ) H ( 1 ) ( α β ) 2 + ( 4 β 2 ) m 4 m 7 = 0 ,
which further gives
G″(0) = H″(1)(α − β)^2 + (2 − 4β)m.
By using the expressions (19), (21) and (23) with L 3 = 0 , we get
c 1 4 c 1 2 H ( 1 ) ( α β ) 2 + m 2 + ( 4 β + 9 ) m 2 c 2 m 2 12 m 8 × G ( 0 ) + ( α β ) 2 ( 6 ( β 1 ) H ( 1 ) + H ( 1 ) ( β α ) ) 12 m ( β 2 2 β 2 ) = 0 ,
which further provides
G‴(0) = (α − β)^2 [H‴(1)(α − β) − 6(β − 1)H″(1)] + 12m(β^2 − 2β − 2).
The asymptotic error constant term is obtained if we insert (19), (21), (23) and (25) in (18). Then, we have
e σ + 1 = c 1 c 1 2 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 9 ) m 2 c 2 m 2 24 m 9 [ c 1 4 { ( α β ) 2 ( 3 ( 6 β 2 8 β + 15 ) H ( 1 ) 2 ( 3 β 2 ) ( α β ) H ( 1 ) ) m ( 24 β 3 48 β 2 + 180 β + 3 H ( 1 ) ( α β ) 2 + 433 ) + 6 ( 2 β + 1 ) m 2 + 7 m 3 } 6 c 2 c 1 2 m 4 m 2 H ( 1 ) ( α β ) 2 + ( 4 β + 2 ) m + 12 c 3 c 1 m 3 + 12 c 2 2 m 3 ] e σ 8 + O ( e σ 9 ) .
Next, we want to demonstrate that our scheme (1) has optimal eighth-order convergence. According to the Kung–Traub conjecture [2], any iterative method without memory using n functional evaluations can attain convergence order at most 2^{n−1}. A method attaining this maximum order is known as an optimal method. Hence, our scheme (1) is optimal in the sense of the Kung–Traub conjecture for all α, β (provided α ≠ β), since it uses only four functional evaluations, namely f(x_n), f′(x_n), f(y_n) and f(w_n), and attains the maximum convergence order 2^{4−1} = 8. □
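When the exact zero ξ is known, as in the test problems later, the theoretical order can also be checked numerically from three consecutive errors. The sketch below uses a standard order estimate and is our addition, not a formula taken from the paper; for an eighth-order scheme the returned value should approach 8 as the iterates near ξ (high-precision arithmetic is needed, since the errors shrink extremely fast).

```python
import math

def order_estimate(iterates, xi):
    """Estimate the convergence order p from the last three iterates via
    p ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}), where e_k = |x_k - xi|."""
    e0, e1, e2 = (abs(x - xi) for x in iterates[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)
```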

3. Local Convergence

In order to study the local convergence of scheme (1), we first simplify it as
x_{σ+1} = x_σ − (μ_σ + γ) f(x_σ)/f′(x_σ),
where
μ σ = μ γ m μ + H ( v ) + k ϵ ( μ ) + m k 1 4 m f ( x σ ) f ( x σ ) .
Other choices of γ and μ_σ lead to Newton's scheme (γ = 1, μ_σ = 0) and to the modified Newton scheme (γ = m, μ_σ = 0). That is why we study the convergence of (27) instead of (1) in this section.
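For reference, the modified Newton scheme mentioned above (γ = m, μ_σ = 0), often attributed to Schröder, is easy to state in code. This is a minimal sketch of that classical special case, not of scheme (1) itself:

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    """Modified Newton iteration x_{k+1} = x_k - m*f(x_k)/df(x_k), which
    restores quadratic convergence at a zero of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        step = m * f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# double zero at x = 1 of f(x) = (x - 1)**2 * (x + 2)
print(modified_newton(lambda x: (x - 1) ** 2 * (x + 2),
                      lambda x: 2 * (x - 1) * (x + 2) + (x - 1) ** 2,
                      x0=1.5, m=2))
```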
The following standard auxiliary results on divided differences help the local convergence analysis of (27), see ([32] Section 2) for the next five lemmas.
Lemma 1.
Consider σ + 1 distinct arguments w 0 , w 1 , , w σ of a function f. Then, the divided differences f [ w 0 , , w σ ] are
f[w_0] = f(w_0),  f[w_0, w_1] = (f[w_0] − f[w_1])/(w_0 − w_1),  …,  f[w_0, w_1, …, w_σ] = (f[w_0, w_1, …, w_{σ−1}] − f[w_1, w_2, …, w_σ])/(w_0 − w_σ).
Moreover, provided that f is σ-times differentiable, we have
f[w_0, w_1, …, w_σ] = f^{(σ)}(w_0)/σ!,
which allows some of the w_i to coincide.
Furthermore, f[w_0, …, w_σ] is symmetric with respect to w_0, …, w_σ.
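The recursion of Lemma 1 translates directly into a divided-difference table. The following short Python sketch (our illustration) computes f[w_0, …, w_σ] for distinct nodes:

```python
def divided_difference(nodes, values):
    """Return f[w_0, ..., w_sigma] from the recursion of Lemma 1,
    given distinct nodes w_i and the values f(w_i)."""
    table = list(values)
    n = len(nodes)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            table[i] = (table[i] - table[i - 1]) / (nodes[i] - nodes[i - j])
    return table[-1]

# f(x) = x**2: f[0, 1, 2] = f''(x)/2! = 1
print(divided_difference([0.0, 1.0, 2.0], [0.0, 1.0, 4.0]))  # 1.0
```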
Lemma 2.
Let α be a zero of f with multiplicity m, and let f have a (σ + 1)-th derivative. Then,
f(x) = f[w_0] + Σ_{i=1}^{σ} f[w_0, w_1, …, w_i] Π_{j=0}^{i−1} (x − w_j) + f[w_0, w_1, …, w_σ, x] Π_{i=0}^{σ} (x − w_i),
holds for all x.
Lemma 3.
Assume the function f has an (m + 1)-th derivative and α is a zero of f with multiplicity m. Then,
f(x) = f[α, …(m times)…, α, x] (x − α)^m
and
f′(x) = f[α, …(m times)…, α, x, x] (x − α)^m + m f[α, …(m times)…, α, x] (x − α)^{m−1}.
The next result is due to Genocchi.
Lemma 4.
Assume f has a continuous σ-th derivative; then
f[w_0, w_1, …, w_σ] = ∫_0^1 ⋯ ∫_0^1 f^{(σ)}( w_0 + Σ_{i=1}^{σ} (w_i − w_{i−1}) Π_{j=1}^{i} τ_j ) Π_{i=1}^{σ} (τ_i^{σ−i} dτ_i).
Taylor’s representation follows.
Lemma 5.
Assume f is σ-times differentiable on S(w_0, ϱ), ϱ > 0, and f^{(σ)} is integrable from ξ to x ∈ S(ξ, ϱ). Then, we have
f(x) = f(ξ) + f′(ξ)(x − ξ) + (1/2) f″(ξ)(x − ξ)^2 + ⋯ + (1/σ!) f^{(σ)}(ξ)(x − ξ)^σ + (1/(σ − 1)!) ∫_0^1 [f^{(σ)}(ξ + τ(x − ξ)) − f^{(σ)}(ξ)] (x − ξ)^σ (1 − τ)^{σ−1} dτ,
f′(x) = f′(ξ) + f″(ξ)(x − ξ) + (1/2) f‴(ξ)(x − ξ)^2 + ⋯ + (1/(σ − 1)!) f^{(σ)}(ξ)(x − ξ)^{σ−1} + (1/(σ − 2)!) ∫_0^1 [f^{(σ)}(ξ + τ(x − ξ)) − f^{(σ)}(ξ)] (x − ξ)^{σ−1} (1 − τ)^{σ−2} dτ,
hold.
Set A = [0, ∞) and B = (−∞, ∞). Consider Ψ_0: A → B to be a non-decreasing, continuous function with Ψ_0(0) = 0. Consider also functions b_0, b: A → B given as
b 0 ( t ) = ( m 1 ) ! ( m 1 ) 0 1 0 1 Ψ 0 t i = 1 m τ i i = 1 m τ i m i d τ i ,
b ( t ) = ( m 1 ) ! 0 1 0 1 Ψ 0 t i = 1 m 1 τ m i = 1 m 1 τ i m i d τ i + b 0 ( t ) .
Clearly b_0, b are non-decreasing and continuous with b_0(0) = b(0) = 0. Assume
b(t) → a positive real number or ∞ as t → ∞.
Then the equation b(t) = 1 has a minimal solution in (0, ∞), say ϱ_0. Let λ_1(t) = 1 − b(t). Consider Ψ: [0, ϱ_0) → B to be non-decreasing and continuous with Ψ(0) = 0. Define functions a, λ_0 and λ on [0, ϱ_0) as
a(t) = (m − 1)! ∫_0^1 ⋯ ∫_0^1 Ψ( t (1 − τ_m) Σ_{i=1}^{m−1} τ_i ) Π_{i=1}^{m} (τ_i^{m−i} dτ_i),
λ_0(t) = m^{−1} a(t) t + b_0(t) + m^{−1} a(t) c t^{c_0} + b_0(t) c t^{c_0 − 1},
λ(t) = λ_0(t) λ_1(t)^{−1},  for c ≥ 0 and c_0 ≥ 1.
By these definitions λ(t) → ∞ as t → ϱ_0^−. Then, let ϱ be the minimal solution of λ(t) = 1 in (0, ϱ_0). We get
0 ≤ b(t) < 1
and
0 ≤ λ(t) < 1
for all t ∈ [0, ϱ).
The conditions (H) shall be used:
(H1) f: Ω ⊆ ℝ → ℝ is m-times differentiable.
(H2) f has a zero α of known multiplicity m.
(H3) Ψ_0: A → B is non-decreasing, continuous, with Ψ_0(0) = 0, such that each x ∈ Ω satisfies
|f^{(m)}(α)^{−1}(f^{(m)}(α) − f^{(m)}(x))| ≤ Ψ_0(|α − x|).
Consider Ω_0 = Ω ∩ S(α, ϱ_0) with ϱ_0 given earlier.
(H4) Ψ: [0, ϱ_0) → B is non-decreasing, continuous, with Ψ(0) = 0, such that each x, y ∈ Ω satisfies
|f^{(m)}(α)^{−1}(f^{(m)}(y) − f^{(m)}(x))| ≤ Ψ(|y − x|).
(H5) Implication (36) holds.
(H6) S̄(α, ϱ) ⊆ Ω.
(H7) |μ_σ| ≤ c |x_σ − α|^{c_0}.
Theorem 2.
Assume conditions (H) hold, and choose x_0 ∈ S(α, ϱ) \ {α}. Then, the sequence {x_σ} ⊂ S(α, ϱ) for all σ ≥ 0, and lim_{σ→∞} x_σ = α.
Proof. 
We shall show that sequence
δ σ = x σ α
is non-increasing and converges to zero. Using δ σ = x σ α , scheme (1) for σ = 0 , Lemma 3 and the following formulas:
h ( x ) = f [ α , α , m t i m e s , α , x ] , h 0 ( x ) = f [ α , α , m t i m e s α , x , x ] ,
f ( x 0 ) = h ( x 0 ) δ 0 m ,
and
f ( x 0 ) = [ h 0 ( x 0 ) δ 0 + m h ( x 0 ) ] δ 0 m 1 .
We can write
δ 1 = h ( α ) 1 N h ( α ) 1 D ,
where
N = h 0 ( x 0 ) δ 2 + [ | m γ | h ( x 0 ) h 0 ( x 0 ) μ 0 ] δ 0 m h ( x 0 ) μ 0
and
D = h 0 ( x 0 ) δ 0 + m h ( x 0 ) .
In view of the definition of divided differences, we have
h 0 ( x 0 ) δ 0 = f [ α , α , ( m 1 ) t i m e s , α , x 0 , x 0 ] h ( x 0 ) .
Then, we obtain from (29) and (45) that
1 ( m h ( α ) ) 1 [ h 0 ( x 0 ) δ 0 + m h ( x 0 ) ] = ( m h ( α ) ) 1 [ h 0 ( x 0 ) δ 0 + m h ( x 0 ) m g ( α ) ] = ( m 1 ) f ( m ) ( α ) 1 f [ α , α , ( m 1 ) t i m e s , α , x 0 , x 0 ] h ( α ) + ( m 1 ) [ h ( x 0 ) h ( α ) ] .
We have, by Lemma 3
f [ α , α , ( m 1 ) t i m e s , α , x 0 , x 0 ] = 0 1 0 1 f ( m ) α + δ 0 i = 1 m 1 τ i i = 1 m ( τ i m 1 d τ i ) ,
h ( x 0 ) = 0 1 0 1 f ( m ) α + δ 0 i = 1 m 1 τ i i = 1 m ( τ i m 1 d τ i ) ,
h ( α ) = 0 1 0 1 f ( m ) ( α ) i = 1 m ( τ i m 1 d τ i ) .
Substituting (46)–(49) using condition ( H 3 ) , x 0 S ( α , ϱ ) , and the definition of ϱ , we get
1 ( m h ( α ) ) 1 [ h 0 ( x 0 ) δ 0 + m h ( x 0 ) ] = ( m 1 ) ! 0 1 0 1 f ( m ) ( α ) 1 ( f ( m ) ( α + δ 0 i = 1 m 1 τ i ) f ( m ) ( α ) ) i = 1 m ( τ i m i d τ i ) + ( m 1 ) f ( m ) ( α ) 1 ( f ( m ) ( α + δ 0 i = 1 m 1 τ i ) f ( m ) ( α ) ) i = 1 m ( τ i m i d τ i ) ( m 1 ) ! ( 0 1 0 1 f ( m ) ( α ) 1 ( f ( m ) ( α + δ 0 i = 1 m 1 τ i ) f ( m ) ( α ) ) i = 1 m ( τ i m i d τ i ) + ( m 1 ) 0 1 0 1 f ( m ) ( α ) 1 ( f ( m ) ( α + δ 0 i = 1 m 1 τ i ) f ( m ) ( α ) ) i = 1 m ( τ i m i d τ i ) ( m 1 ) ! ( 0 1 0 1 Ψ ( δ 0 i = 1 m 1 τ i ) i = 1 m ( τ i m i d τ i ) + ( m 1 ) 0 1 0 1 Ψ ( δ 0 i = 1 m 1 τ i ) i = 1 m ( τ i m i d τ i ) ) b ( δ 0 ) < b ( ϱ ) < 1 .
By a Banach result [33] and (50) then h 0 ( x 0 ) δ 0 + m h ( x 0 ) 0 and
( m h ( α ) 1 h 0 ( x 0 ) δ 0 + m h ( x 0 ) ) 1 1 1 β ( δ 0 ) < 1 1 β ( ϱ ) .
Moreover, using (45), (47), (48) and ( H 4 ), we have in turn that
( m h ( α ) ) 1 h 0 ( x 0 ) δ 0 = ( m 1 ) ! 0 1 0 1 f ( m ) ( α ) 1 [ f ( m ) α + δ 0 i = 1 m τ i f ( m ) α + δ 0 i = 1 m τ i ] i = 1 m ( τ i m i d τ i ) = ( m 1 ) ! 0 1 0 1 f ( m ) ( α ) 1 [ f ( m ) α + δ 0 i = 1 m 1 τ i f ( m ) α + δ 0 i = 1 m τ i ] i = 1 m ( τ i m i d τ i ) ( m 1 ) ! 0 1 0 1 Ψ 0 δ 0 i = 1 m 1 τ i ( 1 τ i ) i = 1 m ( τ i m i d τ i d τ m ) = a ( δ 0 ) < a ( ϱ ) < 1 .
Furthermore, we have
h ( α ) 1 h ( x 0 ) = h ( α ) 1 h ( x 0 ) g ( α ) = ( m 1 ) ! f ( m ) ( α ) 1 ( m 1 ) h ( x 0 ) h ( α ) = ( m 1 ) ( m 1 ) ! 0 1 0 1 f ( m ) ( α ) 1 f ( m ) α + δ 0 i = 1 m τ i f ( m ) ( α ) i = 1 m τ i m i d τ i ( m 1 ) ( m 1 ) ! 0 1 0 1 Ψ 0 | δ 0 | i = 1 m 1 τ i i = 1 m τ i m i d τ i .
Using (50)–(53), we obtain that
δ 1 d δ 0 < δ 0 < ϱ ,
where d = λ ( | δ 0 | ) [ 0 , 1 ) , so x 1 S ( α , ϱ ) . By simply replacing x 0 , x 1 by x σ , x σ + 1 , we get
x σ + 1 α d x σ α < ϱ ,
so lim n x σ = α and x σ + 1 S ( α , ϱ ) . □
Concerning the uniqueness of the solution α , we have
Proposition 1.
Suppose that conditions ( H ) and
m s 2 s 1 m s 1 s 2 Ψ 0 t s 1 s 2 t m 1 d t < 1
for all s_1, t, s_2 with 0 ≤ s_1 ≤ t ≤ s_2 ≤ ϱ̄, for some ϱ̄ ≥ ϱ, hold. Then, the zero α is unique in Ω_1 = Ω ∩ S̄(α, ϱ̄).
Proof. 
Assume that α* ∈ Ω_0 solves the equation f(x) = 0 with α ≠ α*. Without loss of generality, assume α < α*. We have
f(α*) − f(α) = (1/(m − 1)!) ∫_α^{α*} f^{(m)}(t) (α* − t)^{m−1} dt.
Using ( H 3 ) and (55), we get in turn that
1 ( α * α ) m m f ( m ) ( α ) 1 α α * f ( m ) ( t ) ( α * t ) m 1 d t = ( α * α ) m m f ( m ) ( α ) 1 α α * f ( m ) ( t ) f ( m ) ( α ) ( α * t ) m 1 d t m ( α * α ) m α α * Ψ 0 ( t α ) α * t m 1 d t < 1 ,
so ( α * α ) m m f ( m ) ( α ) 1 α α * f ( m ) ( t ) ( α * t ) m 1 d t is invertible, i.e., α α * f ( m ) ( t ) ( α * t ) m 1 d t exists. □

4. Numerical Examples

Two numerical experiments demonstrating the local convergence results are given below.
Example 1.
Consider Ω = [1/2, 3/2] and a function f [32] on Ω given by
f(x) = (x^{5/2} − 1)^2.
We consider the case α = 1 and m = 2. By using (58), we obtain
f(α) = 0,  f′(x) = 5x^4 − 5x^{3/2},  f′(α) = 0,  f″(x) = 20x^3 − (15/2)x^{1/2}
and
f″(α) = 25/2.
We are looking for L such that Ψ(|x − y|) = L|x − y|. By (58), we get
|f″(x) − f″(y)| = |20x^3 − (15/2)√x − 20y^3 + (15/2)√y| ≤ 20|x − y||x^2 + xy + y^2| + (15/2)|√x − √y| = [20|x^2 + xy + y^2| + (15/2) · 1/(√x + √y)] |x − y|.
We obtain for each x, y ∈ Ω
√(1/2) + √(1/2) ≤ √x + √y ≤ √(3/2) + √(3/2), i.e., √2 ≤ √x + √y ≤ √6, so that 1/√6 ≤ 1/(√x + √y) ≤ 1/√2,
and
|x^2 + xy + y^2| ≤ 27/4.
By using (62) and (63) in (61), we get
|f″(x) − f″(y)| ≤ (135 + 15/(2√2)) |x − y|.
We obtain by adopting (60)–(64) in (H4)
|f″(α)^{−1}(f″(x) − f″(y))| ≤ (2/25)(135 + 15/(2√2)) |x − y| = L|x − y|,
where L = 11.224264 and
|f″(α)^{−1}(f″(x) − f″(y))| ≤ L|x − y|,  x, y ∈ Ω.
Similarly, we find an upper bound of the form Ψ_0(|x − α|) = L_0|x − α| for |f″(x) − f″(α)|. In view of (58), we have
|f″(x) − f″(α)| = |20x^3 − (15/2)√x − 20α^3 + (15/2)√α| ≤ 20|x − α||x^2 + xα + α^2| + (15/2)|√x − √α| = [20|x^2 + xα + α^2| + (15/2) · 1/(√x + √α)] |x − α|.
Then, we get for all x ∈ Ω
√(1/2) + 1 ≤ √x + √α ≤ √(3/2) + 1, so that 1/(√x + √α) ≤ 1/(1/√2 + 1) = √2/(√2 + 1),
and
|x^2 + xα + α^2| ≤ 19/4.
Furthermore, by using (68) and (69) in (67), we obtain
|f″(x) − f″(α)| ≤ (95 + 15/(2 + √2)) |x − α|.
We have by using (60) and (68) in (H3)
|f″(α)^{−1}(f″(x) − f″(α))| ≤ (2/25)(95 + 15/(2 + √2)) |x − α| = L_0|x − α|,
where L_0 = 7.951471 and
|f″(α)^{−1}(f″(x) − f″(α))| ≤ L_0|x − α|,  x ∈ Ω.
Therefore, we get b_0(t) = (7/12)L_0 t, b(t) = (13/12)L_0 t, λ_1(t) = 1 − b(t) = 1 − (13/12)L_0 t and a(t) = (1/6)L t. For c_0 = c = 1, we obtain
λ_0(t) = (1/6)L t^2 + (7/6)L_0 t
and
λ(t) − 1 = 2(L t^2 + 7 L_0 t)/(12 − 13 L_0 t) − 1 = 0.
The values of parameters are
ϱ_0 = 0.116089  and  ϱ = 0.0555717.
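For m = 2 the radii above reduce to two elementary computations: ϱ_0 solves (13/12)L_0 t = 1 and ϱ solves 2(L t^2 + 7 L_0 t)/(12 − 13 L_0 t) = 1, i.e., 2L t^2 + 27 L_0 t − 12 = 0. The following Python sketch (our addition) reproduces the reported values; with L_0 = L = 1 the same formulas give 12/13 ≈ 0.923077 and (−27 + √825)/4 ≈ 0.430703, which are exactly the radii reported for Example 2 below.

```python
from math import sqrt

def radii(L0, L):
    """Convergence radii for the setup of Example 1 (m = 2, Psi0(t) = L0*t,
    Psi(t) = L*t): rho0 solves (13/12)*L0*t = 1 and rho solves
    2*L*t**2 + 27*L0*t - 12 = 0 (positive root)."""
    rho0 = 12.0 / (13.0 * L0)
    a, b, c = 2.0 * L, 27.0 * L0, -12.0
    rho = (-b + sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return rho0, rho

print(radii(7.951471, 11.224264))  # approximately (0.116089, 0.0555717)
print(radii(1.0, 1.0))             # approximately (0.923077, 0.430703)
```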
Example 2.
Consider function f on Ω = R as follows:
f(x) = ∫_0^x G(t) dt,
with
G(x) = ∫_0^x (1 + t sin(π/t)) dt.
We show that α = 0 is a zero of f with m = 2. By (73) and (74), we have f(α) = 0, f′(x) = G(x),
f″(x) = 1 + x sin(π/x) for x ≠ 0,  and  f″(0) = 1.
Hence, we get f′(α) = 0 and f″(α) = 1, and we conclude that m = 2. For all x, y ∈ Ω, we obtain
|f″(α)^{−1}(f″(x) − f″(α))| = |x sin(π/x)| ≤ |x − α|
and
|f″(α)^{−1}(f″(x) − f″(y))| = |x sin(π/x) − y sin(π/y)| ≤ |x − y|.
Then, we have Ψ_0(t) = L_0 t and Ψ(t) = L t, where L_0 = L = 1. The values of the parameters are
ϱ_0 = 0.923077  and  ϱ = 0.430703.

Some Special Studies

Next, we specialize the functions H and G. The resulting choices satisfy the conditions of Theorem 1. The parameters α and β are arbitrary, but α ≠ β.

5. Numerical Experimentation

We specialize α and β to conduct specific numerical calculations. More precisely, in scheme (1) we use Case-1 with α = 1/2, β = 3/2, Case-2 with α = 0, β = 2 and Case-7 with α = 0, β = 2, denoted by PM1, PM2 and PM3, respectively. We choose four real-life problems having multiple and simple zeros and two standard academic problems with multiple zeros, which can be found in Examples 3–8.
We consider several existing schemes of order six and eight (optimal). Firstly, we compare our schemes with the sixth-order iteration functions given by Geum et al. [23]; in particular, we choose the scheme 5YD, defined as
y σ = x σ m f ( x σ ) f ( x σ ) , m 1 , w σ = x σ m u σ 2 2 u σ 1 u σ 1 5 u σ 2 f ( x σ ) f ( x σ ) , x σ + 1 = x σ m u σ 2 2 u σ 1 5 u σ 2 u σ + v σ 1 f ( x σ ) f ( x σ ) ,
where u_σ = (f(y_σ)/f(x_σ))^{1/m} and v_σ = (f(w_σ)/f(x_σ))^{1/m}. We denote this scheme by (GM) in the computational work.
In addition, we demonstrate the same with an optimal eighth-order iteration function developed by Behl et al. [26], which is given by
y σ = x σ m f ( x σ ) f ( x σ ) , w σ = y σ m u σ f ( x σ ) f ( x σ ) 1 + β u σ ( β 2 ) u σ + 1 , x σ + 1 = w σ u σ v σ f ( x σ ) f ( x σ ) 1 2 m ( 2 v σ + 1 ) 4 ( β 2 6 β + 6 ) u σ 3 + ( 10 4 β ) u σ 2 + 4 u σ + 1 + 1
where u_σ = (f(y_σ)/f(x_σ))^{1/m} and v_σ = (f(w_σ)/f(y_σ))^{1/m}. We shall call this scheme (BM). It corresponds to expression (78) in [26] and is claimed to be the best scheme among all the other family members.
Moreover, a comparison is given with optimal eighth-order iterative schemes constructed in [27]. Consider the specializations
y σ = x σ m f ( x σ ) f ( x σ ) , w σ = y σ m u σ 6 u σ 3 u σ 2 + 2 u σ + 1 f ( x σ ) f ( x σ ) , x σ + 1 = w σ m u σ v σ ( 1 + 2 u σ ) ( 1 + v σ ) 2 w σ + 1 A 2 P 0 f ( x σ ) f ( x σ )
and
y σ = x σ m f ( x σ ) f ( x σ ) , w σ = y σ m u σ 1 5 u σ 2 + 8 u σ 3 1 2 u σ f ( x σ ) f ( x σ ) , x σ + 1 = w σ m u σ v σ ( 1 + 2 u σ ) ( 1 + v σ ) 3 w σ + 1 A 2 P 0 ( 1 + w σ ) f ( x σ ) f ( x σ ) ,
with u_σ = (f(y_σ)/f(x_σ))^{1/m}, v_σ = (f(w_σ)/f(y_σ))^{1/m}, w_σ = (f(w_σ)/f(x_σ))^{1/m} and A_2 = P_0 = 1. The schemes (79) and (80) are denoted by (FM1) and (FM2), respectively.
We also compare with another family of eighth-order schemes presented by Behl et al. [24]. We choose the following expressions:
y σ = x σ m f ( x σ ) f ( x σ ) , w σ = x σ m u σ 1 + 2 u σ f ( x σ ) f ( x σ ) , x σ + 1 = w σ u σ w σ 1 w σ m u σ 8 v σ + 6 + 9 u σ 2 + 2 v σ + 1 4 u σ + 1 f ( x σ ) f ( x σ )
and
y σ = x σ m f ( x σ ) f ( x σ ) , w σ = y σ m u σ 1 + 2 u σ f ( x σ ) f ( x σ ) , x σ + 1 = w σ u σ w σ 1 w σ 4 u 4 3 u 4 2 2 u 4 2 v 4 1 f ( x σ ) f ( x σ ) ,
where u_σ = (f(y_σ)/f(x_σ))^{1/m}, v_σ = (f(w_σ)/f(y_σ))^{1/m}, w_σ = (f(w_σ)/f(x_σ))^{1/m}. We denote the schemes (81) and (82) by (RM1) and (RM2), respectively.
In Table 2 and Table 3, we report our findings using many significant digits (a minimum of 5000 significant digits) in order to minimize rounding errors. Due to limited space, we display the values up to a specific number of significant digits. We adopted Mathematica 11 with multiple-precision arithmetic for calculating the required values. In Table 2 and Table 3, a (±b) stands for a × 10^{±b}.
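The ρ reported in Table 2 is a computational order of convergence estimated from successive iterates. The paper does not spell out the exact formula used, so the sketch below uses the common approximation based on the last four iterates; mpmath stands in here for Mathematica's multiple-precision arithmetic.

```python
from mpmath import mp, log

mp.dps = 100  # raise as needed; the paper works with at least 5000 significant digits

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates:
    rho ~ ln(|x_{n+1}-x_n|/|x_n-x_{n-1}|) / ln(|x_n-x_{n-1}|/|x_{n-1}-x_{n-2}|)."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return log(d1 / d2) / log(d2 / d3)
```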
Example 3.
Chemical reactor with fractional conversion
We consider the expression (see [34]) given by
f_1(x) = x/(1 − x) − 5 log[0.4(1 − x)/(0.4 − 0.5x)] + 4.45977.
Here, x serves as the fractional conversion of a particular species B in the chemical reactor. Values x < 0 or x > 1 have no physical meaning, so x is bounded in [0, 1], and the required zero of (83) is ξ = 0.7573962462537538794596413. In addition, the function f_1 is not defined for x ∈ [0.8, 1], which is very near the required zero. Moreover, some other properties of f_1 discussed in detail in [34] make finding the solution more difficult. We have to be very careful while choosing the initial approximation for this function, because the derivative tends to zero for x ∈ [0, 0.5] and there is an infeasible zero for x = 1.098. Keeping all these problems in mind, we assume x_0 = 0.76 as the starting point for f_1.
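The quoted zero of f_1 is easy to confirm numerically: f_1 changes sign on [0.7, 0.79] and is undefined from x = 0.8 onward, so even a plain bisection brackets ξ. The following sketch (our check, with log taken as the natural logarithm) is independent of the iterative schemes compared below.

```python
from math import log

def f1(x):
    # fractional-conversion function (83); log is the natural logarithm
    return x / (1 - x) - 5 * log(0.4 * (1 - x) / (0.4 - 0.5 * x)) + 4.45977

a, b = 0.7, 0.79        # f1(a) > 0 > f1(b), and f1 is undefined for x >= 0.8
for _ in range(60):     # simple bisection on the bracket
    c = 0.5 * (a + b)
    if f1(a) * f1(c) <= 0:
        b = c
    else:
        a = c
print(0.5 * (a + b))    # ~0.75739624625..., matching the quoted zero
```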
On the basis of obtained results in Table 2 and Table 3, we conclude that our scheme ( P M 2 ) has the minimum error difference between two iterations and residual error among all the other mentioned schemes in the case of Example 3.
Example 4.
Continuous stirred tank reactor (CSTR)
Here, we consider an isothermal continuous stirred tank reactor (CSTR) problem. Components M_1 and M_2 are fed to the reactor at rates A_1 and A_2 − A_1, respectively. Then, we obtain the following reaction scheme in the reactor (for more details see [35]):
M_1 + M_2 → B_1,  B_1 + M_2 → C_1,  C_1 + M_2 → D_1,  C_1 + M_2 → E_1.
Douglas [36] studied the above model while designing a simple model for feedback control systems and converted it into the following mathematical expression:
R_{C1} · 2.98(x + 2.25) / [(x + 1.45)(x + 2.85)^2(x + 4.35)] = −1,
with R_{C1} the gain of the proportional controller. The expression (84) is balanced for negative real values of R_{C1}. In particular, choosing R_{C1} = 0, we obtain
f 2 ( x ) = x 4 + 11.50 x 3 + 47.49 x 2 + 83.06325 x + 51.23266875 .
The zeros of the function f_2 are the poles of the open-loop transfer function. The function f_2 has four zeros, ξ = −1.45, −2.85, −2.85, −4.35. However, our desired zero is ξ = −2.85, with multiplicity m = 2. We assume x_0 = −2.7 as the starting point for f_2.
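Since f_2 is the quartic (x + 1.45)(x + 2.85)^2(x + 4.35), its zeros (the open-loop poles) can be double-checked with any polynomial root finder; a small numpy sketch (our check) is:

```python
import numpy as np

# coefficients of f2, highest degree first
coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]
print(np.roots(coeffs))
# approximately [-4.35, -2.85, -2.85, -1.45]; the double pole -2.85 may be
# reported with a tiny spurious imaginary part due to rounding
```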
The results in Table 2 and Table 3 show that all the schemes behave similarly to each other in terms of the difference between two iterations, residual error and computational order of convergence for Example 4.
Example 5.
Van der Waals equation of state
(P + a_1 n^2/V^2)(V − n a_2) = n R T
describes the behaviour of a real gas, where a_1 and a_2 are the Van der Waals parameters of the gas that correct the ideal gas equation. For calculating the volume V of the gas, we need to solve the preceding expression in terms of the remaining constants:
P V^3 − (n a_2 P + n R T) V^2 + a_1 n^2 V − a_1 a_2 n^3 = 0.
By choosing particular values of the parameters a_1 and a_2, and of n, P and T, we obtain
f_3(x) = x^3 − 5.22 x^2 + 9.0825 x − 5.2675.
The function f_3 has three zeros; among them, ξ = 1.75 is a multiple zero of multiplicity m = 2 and ξ = 1.72 is a simple zero. We choose the starting guess x_0 = 0.76 for the required zero ξ = 1.75 of f_3.
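The double zero at 1.75 can be seen directly from the factorization f_3(x) = (x − 1.75)^2 (x − 1.72); expanding the factors reproduces the coefficients above, as this short check (our addition) shows:

```python
import numpy as np

# (x - 1.75)**2 * (x - 1.72) expanded back to power form
p = np.polymul(np.polymul([1.0, -1.75], [1.0, -1.75]), [1.0, -1.72])
print(p)  # [ 1.     -5.22    9.0825 -5.2675], i.e. the coefficients of f3
```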
We conclude on the basis of obtained results in Table 2 and Table 3 that our scheme’s P M 1 and P M 2 have the minimum error difference between two iterations and residual error among all the other mentioned schemes for the Example 5.
Example 6.
Multi-factor problem
The unwanted RF disruption that occurs in high-power microwave equipment operating under vacuum conditions is called the multi-factor effect [37]. For instance, it can be found inside a parallel-plate waveguide. An electric field exists with an electric potential difference that originates from the movement of electrons between two sheets or plates. An interesting case arises when we study the trajectories of the electrons that reach the plate, which corresponds to a zero of multiplicity m = 2. The mathematical formulation of the trajectory of an electron between two parallel sheets separated by an air gap is given by
y(t) = y_0 + (v_0 + (e E_0/(m ω)) sin(ω t_0 + α))(t − t_0) + (e E_0/(m ω^2))(cos(ω t + α) − cos(ω t_0 + α)),
where m and e are the mass and charge of the electron at rest, E 0 sin ( ω t + α ) is the RF electric field between plates and y 0 and v 0 are the position and velocity of the electron at time t 0 . By choosing some particular values in (86), we have:
f_4(x) = x + cos(x) − π/2,
with the zero ξ = π 2 of multiplicity 3. For the function f 4 , we assume the initial guess as x 0 = 1.6 .
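The multiplicity m = 3 of ξ = π/2 can be verified symbolically: f_4 and its first two derivatives vanish there, while the third derivative equals 1. A short sympy check (our addition):

```python
import sympy as sp

x = sp.symbols('x')
f4 = x + sp.cos(x) - sp.pi / 2

# f4 = f4' = f4'' = 0 and f4''' = 1 at x = pi/2, hence multiplicity m = 3
print([sp.simplify(sp.diff(f4, x, k).subs(x, sp.pi / 2)) for k in range(4)])
# [0, 0, 0, 1]
```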
On the basis of the results obtained in Table 2 and Table 3, we conclude that our methods R M 1 and R M 2 have the minimum error difference between two iterations and residual error among all the other mentioned schemes in the case of Example 6.
Example 7.
Now, we study a polynomial equation [3], described as follows:
f_5(x) = ((x − 1)^3 − 1)^{100}.
The function f_5 has a multiple zero ξ = 2 of multiplicity m = 100. We choose the starting point x_0 = 2.1 for f_5.
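Because f_5 raises (x − 1)^3 − 1 to the 100th power, its values underflow double precision long before the iterates are accurate, so multiple-precision arithmetic is essential here. A minimal mpmath sketch of the modified Newton step with m = 100 (a baseline illustration, not scheme (1)):

```python
from mpmath import mp, mpf

mp.dps = 300  # plenty of digits; f5 values are astronomically small near the zero

def f5(x):
    return ((x - 1) ** 3 - 1) ** 100

def df5(x):
    return 100 * ((x - 1) ** 3 - 1) ** 99 * 3 * (x - 1) ** 2

m, x = 100, mpf("2.1")
for _ in range(7):
    x -= m * f5(x) / df5(x)   # reduces to Newton on (x - 1)**3 - 1
print(x)                      # converges to the multiple zero 2
```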
From Table 2 and Table 3, we deduced that the minimum error difference between two iterations and residual errors among all the other mentioned schemes belongs to our scheme P M 3 in the case of Example 7.
Example 8.
Finally, we introduce the function
f 6 ( x ) = 1 1 x 2 + x + cos π x 2 3 .
The function f_6 has a multiple zero ξ = 0.7285840464448267167123331 of multiplicity m = 3. We assume x_0 = 0.6 as the starting guess for f_6.
We conclude on the basis of the results in Table 2 and Table 3 that our scheme PM2 has the minimum error difference between two iterations and the minimum residual error among all the other mentioned schemes in the case of Example 8.

6. Conclusions

We developed a new iteration function having optimal eighth-order convergence for multiple zeros of a univariate function, with fast convergence and a simple and compact structure. The present scheme is based on weight functions that play a fruitful role in establishing the eighth-order convergence. In addition, we presented a local convergence analysis. Each member of our scheme is optimal in the sense of the Kung–Traub conjecture. Moreover, we can obtain several new special cases by adopting different weight functions in the suggested scheme (1). Minimum residual errors, minimum errors between two consecutive iterations and a balanced computational order of convergence ρ were observed for our schemes when comparing them with the existing ones on real-life problems such as the continuous stirred tank reactor, chemical conversion, the multi-factor problem, Van der Waals' equation of state, etc. Based on the obtained results, we deduce that our schemes are more efficient and useful than the earlier ones.

Author Contributions

R.B. and I.K.A.: Conceptualization; Methodology; Validation; Writing—Original Draft Preparation; Writing—Review & Editing. M.S., M.A. and A.J.A.: Validation; Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1964.
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  4. Petkovic, M.; Neta, B.; Petkovic, L.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012.
  5. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001.
  6. Behl, R.; Salimi, M.; Ferrara, M.; Sharifi, S.; Samaher, K.A. Some real life applications of a newly constructed derivative free iterative scheme. Symmetry 2019, 11, 239.
  7. Salimi, M.; Lotfi, T.; Sharifi, S.; Siegmund, S. Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. 2017, 94, 1759–1777.
  8. Salimi, M.; Long, N.M.A.N.; Sharifi, S.; Pansera, B.A. A multi-point iterative method for solving nonlinear equations with optimal order of convergence. Jpn. J. Ind. Appl. Math. 2018, 35, 497–509.
  9. Matthies, G.; Salimi, M.; Sharifi, S.; Varona, J.L. An optimal eighth-order iterative method with its dynamics. Jpn. J. Ind. Appl. Math. 2016, 33, 751–766.
  10. Sharifi, S.; Ferrara, M.; Salimi, M.; Siegmund, S. New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Open Math (Former. Cent. Eur. J. Math.) 2016, 14, 443–451.
  11. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2016, 68, 261–288.
  12. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algor. 2016, 71, 775–796.
  13. Hueso, J.L.; Martinez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880–892.
  14. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
  15. Neta, B. Extension of Murakami’s high-order non-linear solver to multiple roots. Int. J. Comput. Math. 2010, 87, 1023–1031.
  16. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
  17. Soleymani, F.; Babajee, D.K.R. Computing multiple zeros using a class of quartically convergent methods. Alex. Eng. J. 2013, 52, 531–541.
  18. Soleymani, F.; Babajee, D.K.R.; Lofti, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
  19. Thukral, R. Introduction to higher-order iterative methods for finding multiple roots of nonlinear equations. J. Math. 2013.
  20. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Math. Appl. 2011, 235, 4199–4206.
  21. Zhou, X.; Chen, X.; Song, Y. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comput. 2013, 219, 6030–6038.
  22. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
  23. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified Newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140.
  24. Behl, R.; Alshomrani, A.S.; Motsa, S.S. An optimal scheme for multiple roots of nonlinear equations with eighth-order convergence. J. Math. Chem. 2018, 56, 2069–2084.
  25. Behl, R.; Cordero, R.A.; Motsa, S.S.; Torregrosa, J.R. An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algor. 2018, 77, 1249–1272.
  26. Behl, R.; Zafar, F.; Alshomrani, A.S.; Junjuaz, M.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Methods 2018, 16, 1843002.
  27. Zafar, F.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Optimal iterative methods for finding multiple roots of nonlinear equations using free parameters. J. Math. Chem. 2017, 56, 1884–1901.
  28. Liu, L.; Zhang, L.; Lin, B.; Wang, G. Fast approach for computing roots of polynomials using cubic clipping. Comput. Aided Geom. Des. 2009, 26, 547–559.
  29. Bartoň, M.; Jüttler, B. Computing roots of polynomials by quadratic clipping. Comput. Aided Geom. Des. 2007, 24, 125–141.
  30. Ahlfors, L.V. Complex Analysis; McGraw-Hill Book, Inc.: New York, NY, USA, 1979.
  31. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comput. Appl. Math. 2018, 333, 131–156.
  32. Ren, H.; Argyros, I.K. Convergence radius of the Newton method for multiple zeros under Hölder continuous derivative. Appl. Math. Comput. 2010, 217, 612–621.
  33. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008.
  34. Shacham, M. Numerical solution of constrained nonlinear algebraic equations. Int. J. Numer. Method Eng. 1986, 23, 1455–1481.
  35. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Englewood Cliffs, NJ, USA, 1999.
  36. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2.
  37. Anza, S.; Vicente, C.; Gimeno, B.; Boria, V.E.; Armendáriz, J. Long-term multi-factor discharge in multicarrier systems. Phys. Plasmas 2007, 14, 82–112.
Table 1. Some special cases of the proposed scheme (1).
Cases H ( ν ) G ( μ )
Case-1 m ( α β + 2 ν 2 ) α β m 1 + 2 μ + ( 1 2 β ) μ 2 + 2 ( β 2 2 β 2 ) μ 3 .
Case-2 m ( α β + 2 ν 2 ) α β m 2 β 2 μ + β 2 4 μ 2 ( 3 μ + 1 ) 2 2 β 2 μ + β ( 2 4 μ ) 4 μ 1
Case-3 a 1 + a 2 ν , m 1 + 2 μ + ( 1 2 α ) μ 2 + 2 ( α 2 2 α 2 ) μ 3
where , a 1 = 2 m α β , a 2 = m ( α β + 2 ) α β
Case-4 a 1 + a 2 ν , m 2 α 2 μ + α ( 2 4 μ 2 ) ( 3 μ + 1 ) 2 2 α 2 μ + α ( 2 4 μ ) 4 μ 1
where , a 1 = 2 m α β , a 2 = m ( α β + 2 ) α β
Case-5 b 1 ν + b 2 1 + ν , m 4 4 + 8 μ 2 b 3 μ 2 + b 4 μ 3
where , b 1 = m ( α + β 4 ) α β , b 2 = 4 m ( α β + 2 ) α β b 3 = α 2 2 α ( β 3 ) + β 2 2 β 2 ,
b 4 = 3 α 3 5 α 2 ( β 2 ) + α ( β 2 + 4 β 24 ) + β 3
6 β 2 + 8 β 16
Case-6 m ν 2 ( 3 α 3 β + 14 ) + ν ( 3 α 3 β 16 ) + 2 3 ν ( ν + 1 ) ( α β ) m μ 3 2 α ( 4 β 7 ) + 4 β 2 28 β 9 + μ 2 ( 4 α 8 β + 27 ) + 21 μ + 6 3 ( μ + 1 ) ( μ + 2 )
Case-7 m ν 2 ( α β + 6 ) + ν ( α β 8 ) + 2 ν ( ν + 1 ) ( α β ) m μ 3 2 α 2 + 4 α β + 2 β 2 14 β 3 + ( 9 4 β ) μ 2 + 7 μ + 2 ( μ + 1 ) ( μ + 2 )
In all the above cases α β .
Table 2. Errors between iterations (|x_{σ+1} − x_σ|) among different iteration functions.

f(x) | σ | GM | BM | FM1 | FM2 | RM1 | RM2 | PM1 | PM2 | PM3
f1(x) | 1 | 1.8 (−10) | 5.1 (−12) | 5.1 (−11) | 7.7 (−11) | 8.0 (−12) | 1.4 (−11) | 9.4 (−13) | 1.3 (−14) | 8.4 (−13)
 | 2 | 1.7 (−53) | 1.2 (−81) | 1.6 (−72) | 5.9 (−71) | 1.4 (−79) | 9.4 (−78) | 5.8 (−88) | 4.3 (−105) | 7.8 (−89)
 | 3 | 1.3 (−311) | 1.5 (−638) | 1.5 (−564) | 7.3 (−552) | 1.2 (−621) | 4.7 (−607) | 1.3 (−689) | 7.4 (−829) | 4.0 (−697)
 | ρ | 6.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000
f2(x) | 1 | 9.5 (−3) | 2.0 (−2) | 2.0 (−2) | 2.0 (−2) | 2.7 (−4) | 2.7 (−4) | 2.0 (−2) | 2.0 (−2) | 2.0 (−2)
 | 2 | 8.1 (−16) | 4.2 (−18) | 5.2 (−18) | 5.2 (−18) | 9.1 (−14) | 9.1 (−14) | 4.2 (−18) | 4.2 (−18) | 4.2 (−18)
 | 3 | 3.9 (−94) | 3.1 (−143) | 1.9 (−142) | 1.7 (−142) | 3.4 (−42) | 3.4 (−42) | 3.0 (−143) | 3.0 (−143) | 3.0 (−143)
 | ρ | 5.9929 | 7.9858 | 7.9846 | 7.9847 | 3.0005 | 3.0005 | 7.9861 | 7.9862 | 7.9862
f3(x) | 1 | 3.9 (−4) | 2.6 (−4) | 3.9 (−4) | 4.1 (−4) | 2.6 (−4) | 2.7 (−4) | 2.9 (−5) | 3.3 (−5) | 4.3 (−5)
 | 2 | 1.0 (−14) | 3.6 (−19) | 5.2 (−17) | 9.8 (−17) | 1.4 (−19) | 1.1 (−18) | 1.1 (−27) | 2.4 (−27) | 1.2 (−25)
 | 3 | 3.9 (−78) | 6.1 (−138) | 5.9 (−120) | 1.2 (−117) | 1.0 (−141) | 6.1 (−134) | 3.3 (−207) | 7.5 (−207) | 5.3 (−190)
 | ρ | 5.9975 | 7.9977 | 7.9945 | 7.9941 | 8.0026 | 7.9971 | 7.9996 | 7.9995 | 7.9996
f4(x) | 1 | 2.5 (−6) | 4.3 (−6) | 4.3 (−6) | 4.3 (−6) | 1.4 (−10) | 1.4 (−10) | 4.3 (−6) | 4.3 (−6) | 4.3 (−6)
 | 2 | 1.5 (−18) | 1.4 (−30) | 1.4 (−30) | 1.4 (−30) | 3.8 (−52) | 3.8 (−52) | 1.4 (−30) | 1.4 (−30) | 1.4 (−30)
 | 3 | 3.7 (−55) | 5.9 (−153) | 5.9 (−153) | 5.9 (−153) | 5.3 (−260) | 5.3 (−260) | 5.9 (−153) | 5.9 (−153) | 5.9 (−153)
 | ρ | 3.0000 | 5.0000 | 5.0000 | 5.000 | 5.0000 | 5.0000 | 5.0000 | 5.0000 | 5.0000
f5(x) | 1 | 2.0 (−7) | 9.5 (−8) | 4.8 (−7) | 6.5 (−7) | 6.3 (−8) | 1.9 (−7) | 2.3 (−8) | 1.5 (−8) | 2.9 (−8)
 | 2 | 1.8 (−41) | 1.6 (−55) | 5.7 (−49) | 8.4 (−48) | 4.2 (−57) | 8.0 (−53) | 2.6 (−59) | 1.7 (−15) | 7.0 (−60)
 | 3 | 1.0 (−245) | 1.3 (−437) | 2.2 (−384) | 6.6 (−375) | 5.9 (−169) | 9.6 (−416) | 3.2 (−454) | 1.9 (−118) | 7.5 (−473)
 | ρ | 6.0000 | 8.0000 | 8.0000 | 8.0000 | 2.2745 | 8.0000 | 8.0000 | 14.862 | 8.0000
f6(x) | 1 | 3.5 (−6) | 1.7 (−7) | 2.4 (−7) | 2.4 (−7) | 9.3 (−8) | 9.7 (−8) | 1.2 (−7) | 1.1 (−7) | 1.2 (−7)
 | 2 | 1.2 (−32) | 4.4 (−53) | 2.0 (−51) | 2.5 (−51) | 3.0 (−55) | 5.8 (−55) | 1.2 (−54) | 2.6 (−55) | 1.0 (−54)
 | 3 | 1.8 (−191) | 9.4 (−418) | 5.3 (−404) | 3.6 (−403) | 3.1 (−435) | 1.0 (−432) | 8.7 (−431) | 2.8 (−436) | 4.0 (−431)
 | ρ | 6.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000 | 8.0000
Table 3. Contrast on the ground of residual errors (i.e., |f(x_σ)|).

f(x) | σ | GM | BM | FM1 | FM2 | RM1 | RM2 | PM1 | PM2 | PM3
f1(x) | 1 | 1.4 (−8) | 4.1 (−10) | 4.1 (−9) | 6.1 (−9) | 6.4 (−10) | 1.1 (−9) | 7.5 (−11) | 1.0 (−12) | 6.7 (−11)
 | 2 | 1.4 (−51) | 9.9 (−80) | 1.3 (−70) | 4.7 (−69) | 1.1 (−77) | 7.5 (−76) | 4.7 (−86) | 3.4 (−103) | 6.2 (−87)
 | 3 | 1.0 (−309) | 1.2 (−636) | 1.2 (−562) | 5.8 (−550) | 9.7 (−620) | 3.8 (−605) | 1.0 (−687) | 5.9 (−827) | 3.5 (−695)
f2(x) | 1 | 1.9 (−4) | 8.0 (−4) | 8.5 (−4) | 8.5 (−4) | 1.5 (−7) | 1.5 (−7) | 8.0 (−4) | 8.0 (−4) | 8.0 (−4)
 | 2 | 1.4 (−30) | 3.7 (−35) | 5.7 (−35) | 5.6 (−35) | 1.7 (−26) | 1.7 (−26) | 3.7 (−35) | 3.7 (−35) | 3.7 (−35)
 | 3 | 3.2 (−187) | 2.0 (−285) | 7.3 (−284) | 6.3 (−284) | 2.5 (−83) | 2.5 (−83) | 1.9 (−285) | 1.9 (−285) | 1.9 (−285)
f3(x) | 1 | 4.6 (−9) | 2.0 (−9) | 4.6 (−9) | 5.1 (−9) | 2.0 (−9) | 2.3 (−9) | 2.5 (−11) | 3.2 (−11) | 5.6 (−11)
 | 2 | 3.2 (−30) | 4.0 (−39) | 8.0 (−35) | 2.9 (−34) | 5.9 (−40) | 3.4 (−38) | 3.3 (−56) | 1.7 (−55) | 4.5 (−52)
 | 3 | 4.6 (−157) | 1.1 (−276) | 1.1 (−240) | 4.3 (−236) | 3.1 (−284) | 1.2 (−268) | 3.3 (−415) | 1.7 (−410) | 8.4 (−381)
f4(x) | 1 | 2.6 (−18) | 1.3 (−17) | 1.3 (−17) | 1.3 (−17) | 4.7 (−31) | 4.7 (−31) | 1.3 (−17) | 1.3 (−17) | 1.3 (−17)
 | 2 | 6.2 (−55) | 5.0 (−91) | 5.0 (−91) | 5.0 (−91) | 9.1 (−156) | 9.1 (−156) | 5.0 (−91) | 5.0 (−91) | 5.0 (−91)
 | 3 | 8.4 (−165) | 3.5 (−458) | 3.5 (−458) | 3.5 (−458) | 2.4 (−779) | 2.4 (−779) | 3.5 (−458) | 3.5 (−458) | 3.5 (−458)
f5(x) | 1 | 1.1 (−622) | 2.2 (−655) | 4.4 (−585) | 5.3 (−572) | 3.9 (−673) | 3.1 (−626) | 1.3 (−709) | 3.7 (−736) | 1.6 (−706)
 | 2 | 9.7 (−4027) | 1.4 (−5431) | 1.2 (−4777) | 8.7 (−4661) | 11. (−5590) | 9.1 (−5163) | 6.7 (−5376) | 5.3 (−1429) | 1.2 (−5868)
 | 3 | 5.4 (−24,451) | 4.1 (−43,641) | 2.7 (−38,318) | 5.1 (−37,371) | 1.1 (−16,775) | 7.8 (−41,455) | 6.1 (−41,287) | 5.9 (−11,726) | 1.1 (−47,165)
f6(x) | 1 | 1.1 (−6) | 1.3 (−20) | 3.5 (−20) | 3.7 (−20) | 2.1 (−21) | 2.3 (−21) | 4.8 (−21) | 3.5 (−21) | 4.2 (−21)
 | 2 | 4.3 (−96) | 2.2 (−157) | 2.1 (−152) | 4.2 (−152) | 6.7 (−164) | 5.1 (−163) | 4.3 (−162) | 4.7 (−164) | 2.9 (−162)
 | 3 | 1.5 (−572) | 2.1 (−1251) | 3.8 (−1210) | 1.2 (−1207) | 7.5 (−1304) | 2.5 (−1296) | 1.7 (−1290) | 5.4 (−1307) | 1.6 (−1291)
