
Extended High Order Algorithms for Equations under the Same Set of Conditions

by Ioannis K. Argyros 1,*,†, Debasis Sharma 2,*,†, Christopher I. Argyros 3,†, Sanjaya Kumar Parhi 4,†, Shanta Kumari Sunanda 2,† and Michael I. Argyros 5,†

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Mathematics, IIIT Bhubaneswar, Bhubaneswar 751003, Odisha, India
3 Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
4 Department of Mathematics, Fakir Mohan University, Balasore 756020, Odisha, India
5 Department of Computer Science, University of Oklahoma, Norman, OK 73071, USA
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Algorithms 2021, 14(7), 207; https://doi.org/10.3390/a14070207
Submission received: 31 May 2021 / Revised: 2 July 2021 / Accepted: 9 July 2021 / Published: 12 July 2021

Abstract:
A variety of strategies are used to construct algorithms for solving equations. However, higher order derivatives are usually assumed in order to establish the convergence order. More importantly, bounds on the error and uniqueness regions for the solution are also not derived. Therefore, the benefits of these algorithms are limited. We use only the first derivative to tackle all these issues and study the ball analysis for two sixth order algorithms under the same set of conditions. In addition, we present a calculable ball comparison between these algorithms. In this manner, we enhance the utility of these algorithms. Our idea is very general, so it can also be used to extend other algorithms in the same way.

1. Introduction

We consider two Banach spaces $Y_1$ and $Y_2$ with an open and convex subset $Z$ ($\neq \emptyset$) of $Y_1$. Let us denote the set $\{B : Y_1 \to Y_2 \text{ linear and bounded}\}$ by $L(Y_1, Y_2)$. Suppose $X: Z \subseteq Y_1 \to Y_2$ is Fréchet differentiable. Equations of the kind
$$X(v) = 0 \tag{1}$$
are often utilized in science and other applied areas to solve several highly challenging problems. We should not ignore the fact that solving these equations is a difficult process, as the solution can be found analytically only in rare instances. This is why iterative processes are generally used for solving these equations. However, it is an arduous task to develop an effective iterative approach for addressing (1). The classical Newton iterative strategy is most typically employed for this issue. In addition, many studies on higher order modifications of conventional processes, like Newton's, Jarratt's, Chebyshev's, etc., are presented in [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Wang et al. [16] presented a sixth order variant of Jarratt's algorithm, which adds the evaluation of the function at an additional point in the iteration procedure of Jarratt's method [12]. Grau-Sánchez and Gutiérrez [10] described two families of zero-finding iterative approaches by applying Obreshkov-like techniques. An efficient family of algorithms for nonlinear systems was suggested by Cordero et al. [8] using a reduced composition technique on Newton's and Jarratt's algorithms. Sharma et al. [17] composed two weighted-Newton steps to construct an efficient fourth order weighted-Newton method for solving nonlinear systems. Sharma and Arora [18] constructed iterative algorithms of fourth and sixth convergence order for solving nonlinear systems. Two bi-parametric fourth order families of predictor-corrector iterative algorithms are discussed in [9]. Newton-like iterative approaches of fifth and eighth order of convergence were also designed by Sharma and Arora [19]. Additional studies on other algorithms, with their convergence and dynamics, are available in [20,21,22,23,24,25].
Notice that higher convergence order algorithms whose order is established using Taylor expansions suffer from the following problems:
(1’)
Higher order derivatives (not appearing in the algorithms) must exist, although convergence may be possible without these conditions.
(2’)
We do not know in advance how many iterations should be performed to reach a certain error tolerance.
(3’)
The choice of initial points is limited, since we do not know a convergence ball.
(4’)
No information is provided on the uniqueness of $v^*$.
(5’)
Results are limited to the multidimensional Euclidean space.
Hence, there is a need to address these problems. The novelty of our article lies in the fact that we handle (1’)–(5’) as follows.
(1”)
We only use the derivative that actually appears in these algorithms. The convergence order is recovered, since we bypass Taylor series (which require the higher order derivatives) and use instead the computational order of convergence (COC) given by
$$\mathrm{COC} = \frac{\ln\left(\dfrac{\|v_{n+1} - v^*\|}{\|v_n - v^*\|}\right)}{\ln\left(\dfrac{\|v_n - v^*\|}{\|v_{n-1} - v^*\|}\right)}$$
and the approximate computational order of convergence (ACOC) given by
$$\mathrm{ACOC} = \frac{\ln\left(\dfrac{\|v_{n+1} - v_n\|}{\|v_n - v_{n-1}\|}\right)}{\ln\left(\dfrac{\|v_n - v_{n-1}\|}{\|v_{n-1} - v_{n-2}\|}\right)}.$$
These formulae use only the algorithms themselves (which depend on the first derivative). In the case of ACOC, no knowledge of $v^*$ is needed (a computational sketch is given after this list).
(2”)
We use generalized Lipschitz-type conditions which allow us to provide upper bounds on $\|v_n - v^*\|$, which in turn can be used to determine the smallest number of iterations needed to reach the error tolerance.
(3”)
Under our local convergence analysis a convergence ball is determined. Hence, we know from where to pick the starting point $v_0$ so that convergence to the solution $v^*$ can be achieved.
(4”)
A uniqueness ball is provided.
(5”)
The results are presented in the more general setting of Banach space valued operators.
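As anticipated in (1''), COC and ACOC are computable directly from stored iterates. The following is a minimal sketch (the function names are our own illustration, not part of the algorithms themselves); for vector iterates, `abs` should be replaced by a norm.

```python
import math

def coc(iterates, v_star):
    # Computational order of convergence: requires the solution v_star and
    # at least three consecutive iterates.
    e = [abs(v - v_star) for v in iterates]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(iterates):
    # Approximate computational order of convergence: needs no knowledge of
    # v_star, only four consecutive iterates.
    d = [abs(iterates[n + 1] - iterates[n]) for n in range(len(iterates) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]
```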
In this article, to demonstrate our technique, we selected the following sixth convergence order algorithms in order to expand their utility. However, our technique is so general that it can be applied to other algorithms [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. We also compare their convergence balls and dynamical properties. Define the algorithms, for all $n = 0, 1, 2, \ldots$, by
SM1:
$$\begin{aligned}
y_n &= v_n - \alpha X'(v_n)^{-1} X(v_n),\\
t_n &= v_n - \left(-\frac{1}{2} I + \frac{9}{8} X'(y_n)^{-1} X'(v_n) + \frac{3}{8} X'(v_n)^{-1} X'(y_n)\right) X'(v_n)^{-1} X(v_n),\\
v_{n+1} &= t_n - \left(-\frac{9}{4} I + \frac{15}{8} X'(y_n)^{-1} X'(v_n) + \frac{11}{8} X'(v_n)^{-1} X'(y_n)\right) X'(y_n)^{-1} X(t_n).
\end{aligned}$$
SM2:
$$\begin{aligned}
y_n &= v_n - \alpha X'(v_n)^{-1} X(v_n),\\
t_n &= v_n - \left(\frac{5}{8} I + \frac{3}{8} \left(X'(y_n)^{-1} X'(v_n)\right)^2\right) X'(v_n)^{-1} X(v_n),\\
v_{n+1} &= t_n - \left(-\frac{9}{4} I + \frac{15}{8} X'(y_n)^{-1} X'(v_n) + \frac{11}{8} X'(v_n)^{-1} X'(y_n)\right) X'(y_n)^{-1} X(t_n).
\end{aligned}$$
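Before turning to their origin, a minimal NumPy sketch of one SM1 step may clarify the structure of these schemes; it assumes user-supplied callables `X` (the operator) and `dX` (its Jacobian), follows the sign convention of the displays above, and solves linear systems rather than forming inverses.

```python
import numpy as np

def sm1_step(X, dX, v, alpha=2/3):
    # One SM1 iteration: only the operator and its first derivative appear.
    Fv, Jv = X(v), dX(v)
    newton = np.linalg.solve(Jv, Fv)       # X'(v_n)^{-1} X(v_n)
    y = v - alpha * newton
    Jy = dX(y)
    A = np.linalg.solve(Jy, Jv)            # X'(y_n)^{-1} X'(v_n)
    B = np.linalg.solve(Jv, Jy)            # X'(v_n)^{-1} X'(y_n)
    I = np.eye(len(v))
    t = v - (-0.5 * I + 9/8 * A + 3/8 * B) @ newton
    return t - (-9/4 * I + 15/8 * A + 11/8 * B) @ np.linalg.solve(Jy, X(t))
```

The SM2 step differs only in the middle substep, whose weight operator is $\frac{5}{8}I + \frac{3}{8}\left(X'(y_n)^{-1}X'(v_n)\right)^2$, i.e. `(5/8 * I + 3/8 * A @ A) @ newton` in the sketch above.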
For $\alpha = \frac{2}{3}$, these algorithms are described in [26] (see also [11]), where their benefits over other algorithms are well explained. Conditions on derivatives up to the seventh order and Taylor series expansions have been employed in [11,26] to determine their convergence rate. Because such results need derivatives of higher order, these algorithms are difficult to apply and their utility is reduced, although they may converge. In order to justify this, we have the following function
$$X(v) = \begin{cases} v^3 \ln(v^2) + v^5 - v^4, & \text{if } v \neq 0,\\ 0, & \text{if } v = 0, \end{cases}$$
where $Y_1 = Y_2 = \mathbb{R}$ and $X$ is defined on $Z = \left[-\frac{1}{2}, \frac{3}{2}\right]$. Then, it is crucial to highlight that $X'''$ is not bounded on $Z$. Hence, the existing convergence results for methods SM1 and SM2 based on $X^{(vii)}$ do not work in this scenario, although these algorithms may still converge with convergence order six. Clearly this is the case, since the conditions in the aforementioned studies are only sufficient.
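To see the obstruction concretely, a direct differentiation on $Z \setminus \{0\}$ (a short computation we add for illustration) gives
$$X'''(v) = 6\ln(v^2) + 60v^2 - 24v + 22,$$
so $X'''(v) \to -\infty$ as $v \to 0$. Hence $X'''$, let alone $X^{(vii)}$, is unbounded on $Z$, even though the solution $v^* = 1$ lies well inside $Z$.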
The rest of this work is summarized as follows: In Section 2, the main theorems on the ball convergence of SM1 and SM2 are discussed. Section 3 deals with the comparison of the attraction basins for these procedures. Numerical applications are placed in Section 4. This study is concluded with final comments in Section 5.

2. Ball Convergence

Our ball convergence analysis requires the development of some scalar parameters and functions. Set $M = [0, \infty)$.
Suppose that:
(i)
the equation
$$\lambda_0(s) - 1 = 0$$
has a smallest root $r_0$ in $M \setminus \{0\}$ for some function $\lambda_0: M \to M$ which is non-decreasing and continuous. Set $M_0 = [0, r_0)$.
(ii)
the equation
$$U_1(s) - 1 = 0$$
has a smallest root $\rho_1$ in $M_0 \setminus \{0\}$ for some functions $\lambda: [0, 2r_0) \to M$, $\lambda_1: M_0 \to M$ which are non-decreasing and continuous, with $U_1: M_0 \to M$ defined by
$$U_1(s) = \frac{\int_0^1 \lambda((1-\theta)s)\,d\theta + |1-\alpha| \int_0^1 \lambda_1(\theta s)\,d\theta}{1 - \lambda_0(s)}.$$
(iii)
the equation
$$\lambda_0(U_1(s)s) - 1 = 0$$
has a smallest root $r_1$ in $M_0 \setminus \{0\}$. Set $r_2 = \min\{r_0, r_1\}$ and $M_1 = [0, r_2)$.
(iv)
the equation
$$U_2(s) - 1 = 0$$
has a smallest root $\rho_2$ in $M_1 \setminus \{0\}$, where
$$U_2(s) = \frac{\int_0^1 \lambda((1-\theta)s)\,d\theta}{1 - \lambda_0(s)} + \frac{3}{8}\left(\frac{3\left(\lambda_0(s) + \lambda_0(U_1(s)s)\right)}{1 - \lambda_0(U_1(s)s)} + \frac{\lambda_0(s) + \lambda_0(U_1(s)s)}{1 - \lambda_0(s)}\right)\frac{1}{1 - \lambda_0(s)}.$$
(v)
the equation
$$\lambda_0(U_2(s)s) - 1 = 0$$
has a smallest root $r_3$ in $M_1 \setminus \{0\}$. Set $r = \min\{r_2, r_3\}$ and $M_2 = [0, r)$.
(vi)
the equation
$$U_3(s) - 1 = 0$$
has a smallest root $\rho_3$ in $M_2 \setminus \{0\}$, where
$$\begin{aligned}
U_3(s) = \Biggl[&\frac{\int_0^1 \lambda((1-\theta)U_2(s)s)\,d\theta}{1 - \lambda_0(U_2(s)s)} + \frac{\left(\lambda_0(U_1(s)s) + \lambda_0(U_2(s)s)\right)\int_0^1 \lambda_1(\theta U_2(s)s)\,d\theta}{\left(1 - \lambda_0(U_1(s)s)\right)\left(1 - \lambda_0(U_2(s)s)\right)}\\
&+ \frac{1}{8}\left(\frac{15\left(\lambda_0(s) + \lambda_0(U_1(s)s)\right)}{1 - \lambda_0(U_1(s)s)} + \frac{11\left(\lambda_0(s) + \lambda_0(U_1(s)s)\right)}{1 - \lambda_0(s)} + 16\right)\frac{\int_0^1 \lambda_1(\theta U_2(s)s)\,d\theta}{1 - \lambda_0(U_1(s)s)}\Biggr] U_2(s).
\end{aligned}$$
The scalar $\rho$ given as
$$\rho = \min\{\rho_k\}, \quad k = 1, 2, 3 \tag{5}$$
shall be shown to be a convergence radius for SM1. Set $T = [0, \rho)$. It is implied by (5) that
$$0 \le \lambda_0(s) < 1, \tag{6}$$
$$0 \le \lambda_0(U_1(s)s) < 1, \tag{7}$$
$$0 \le \lambda_0(U_2(s)s) < 1 \tag{8}$$
and
$$0 \le U_k(s) < 1 \tag{9}$$
hold for each $s \in T$.
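Since each $U_k$ is composed of elementary integrals of the $\lambda$ functions, the roots $\rho_k$ are most easily located numerically. A minimal sketch of such a root-finder follows (a helper of our own, assuming SciPy is available): it scans for the first sign change of $g(s) = U_k(s) - 1$, which is negative near $0$ and blows up at the relevant pole, and then refines the bracket by Brent's method. It is reused for the examples of Section 4.

```python
from scipy.optimize import brentq

def smallest_root(g, s_max, step=1e-4):
    # Scan (0, s_max) for the first sign change of g, then refine the
    # bracket with Brent's method; returns the smallest positive root.
    s = step
    while s < s_max and g(s) < 0:
        s += step
    return brentq(g, s - step, s) if s < s_max else None
```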
The notation $\bar{U}(v^*, r)$ stands for the closure of the ball of radius $r > 0$ and center $v^* \in Y_1$.
We suppose from now on that $v^*$ is a simple root of $X$, the scalar functions $\lambda$ are as given previously and $X: Z \to Y_2$ is differentiable. Further, the conditions ($C$) hold:
($C_1$)
For each $v \in Z$,
$$\|X'(v^*)^{-1}(X'(v) - X'(v^*))\| \le \lambda_0(\|v - v^*\|).$$
Set $Z_0 = Z \cap U(v^*, r_0)$.
($C_2$)
For each $u, v \in Z_0$,
$$\|X'(v^*)^{-1}(X'(u) - X'(v))\| \le \lambda(\|u - v\|)$$
and
$$\|X'(v^*)^{-1}X'(v)\| \le \lambda_1(\|v - v^*\|).$$
($C_3$)
$\bar{U}(v^*, \tilde{\rho}) \subseteq Z$ for some $\tilde{\rho}$ to be defined later.
($C_4$)
There exists $\rho_* \ge \tilde{\rho}$ satisfying
$$\int_0^1 \lambda_0(\theta \rho_*)\,d\theta < 1.$$
Set $Z_1 = Z \cap \bar{U}(v^*, \rho_*)$.
Next, the main convergence result for SM1 is developed utilizing the conditions ($C$).
Theorem 1.
Suppose that the conditions ($C$) hold for $\tilde{\rho} = \rho$. Then, the iteration $\{v_n\}$ given by SM1 exists in $U(v^*, \rho)$, stays in $U(v^*, \rho)$ and converges to $v^*$, provided that the initial guess $v_0$ is in $U(v^*, \rho) \setminus \{v^*\}$. Moreover, we have
$$\|y_n - v^*\| \le U_1(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\| < \rho, \tag{10}$$
$$\|t_n - v^*\| \le U_2(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\| \tag{11}$$
and
$$\|v_{n+1} - v^*\| \le U_3(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\|, \tag{12}$$
where the radius $\rho$ and the functions $U_k$ are as given previously. Furthermore, the only zero of $X$ in the set $Z_1$ given in ($C_4$) is $v^*$.
Proof. 
Mathematical induction shall be used to show the existence of the iteration $\{v_n\}$, so that items (10)–(12) hold. Let $z \in U(v^*, \rho) \setminus \{v^*\}$. Using ($C_1$), (5) and (6), we get
$$\|X'(v^*)^{-1}(X'(z) - X'(v^*))\| \le \lambda_0(\|z - v^*\|) \le \lambda_0(\rho) < 1, \tag{13}$$
leading to $X'(z)^{-1} \in L(Y_2, Y_1)$ by the Banach lemma on invertible operators [5], and
$$\|X'(z)^{-1}X'(v^*)\| \le \frac{1}{1 - \lambda_0(\|z - v^*\|)}. \tag{14}$$
The iterate $y_0$ exists by (14) for $z = v_0$, and we can write
$$\begin{aligned}
y_0 - v^* &= v_0 - v^* - X'(v_0)^{-1}X(v_0) + (1 - \alpha)X'(v_0)^{-1}X(v_0)\\
&= -\left(X'(v_0)^{-1}X'(v^*)\right)\int_0^1 X'(v^*)^{-1}\left(X'(v^* + \theta(v_0 - v^*)) - X'(v_0)\right)d\theta\,(v_0 - v^*)\\
&\quad + (1 - \alpha)\left(X'(v_0)^{-1}X'(v^*)\right)\left(X'(v^*)^{-1}X(v_0)\right). \tag{15}
\end{aligned}$$
Using (5), (9) (for $k = 1$), (14) (for $z = v_0$), ($C_2$) and (15), we have in turn
$$\begin{aligned}
\|y_0 - v^*\| &\le \frac{\int_0^1 \lambda((1-\theta)\|v_0 - v^*\|)\,d\theta + |1-\alpha|\int_0^1 \lambda_1(\theta\|v_0 - v^*\|)\,d\theta}{1 - \lambda_0(\|v_0 - v^*\|)}\,\|v_0 - v^*\|\\
&\le U_1(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\| < \rho, \tag{16}
\end{aligned}$$
showing $y_0 \in U(v^*, \rho)$ and that (10) holds if $n = 0$. Notice also that $X'(y_0)^{-1} \in L(Y_2, Y_1)$ and $t_0$ exists, so we can write
$$\begin{aligned}
t_0 - v^* &= v_0 - v^* - X'(v_0)^{-1}X(v_0) + \left(\frac{3}{2}I - \frac{9}{8}X'(y_0)^{-1}X'(v_0) - \frac{3}{8}X'(v_0)^{-1}X'(y_0)\right)X'(v_0)^{-1}X(v_0)\\
&= v_0 - v^* - X'(v_0)^{-1}X(v_0) + \frac{3}{8}\left(4I - 3X'(y_0)^{-1}X'(v_0) - X'(v_0)^{-1}X'(y_0)\right)X'(v_0)^{-1}X(v_0)\\
&= v_0 - v^* - X'(v_0)^{-1}X(v_0)\\
&\quad + \frac{3}{8}\left(3X'(y_0)^{-1}(X'(y_0) - X'(v_0)) + X'(v_0)^{-1}(X'(v_0) - X'(y_0))\right)X'(v_0)^{-1}X(v_0). \tag{17}
\end{aligned}$$
By (5), (9) (for $k = 2$), (14) (for $z = v_0, y_0$), (16) and (17), we obtain in turn
$$\begin{aligned}
\|t_0 - v^*\| \le \Biggl[&\frac{\int_0^1 \lambda((1-\theta)\|v_0 - v^*\|)\,d\theta}{1 - \lambda_0(\|v_0 - v^*\|)}\\
&+ \frac{3}{8}\left(\frac{3\left(\lambda_0(\|v_0 - v^*\|) + \lambda_0(\|y_0 - v^*\|)\right)}{1 - \lambda_0(\|y_0 - v^*\|)} + \frac{\lambda_0(\|v_0 - v^*\|) + \lambda_0(\|y_0 - v^*\|)}{1 - \lambda_0(\|v_0 - v^*\|)}\right)\frac{1}{1 - \lambda_0(\|v_0 - v^*\|)}\Biggr]\|v_0 - v^*\|\\
\le\; & U_2(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\|, \tag{18}
\end{aligned}$$
showing $t_0 \in U(v^*, \rho)$ and (11) if $n = 0$. Notice also that $X'(t_0)^{-1} \in L(Y_2, Y_1)$ and $v_1$ exists, so we can write
$$\begin{aligned}
v_1 - v^* &= t_0 - v^* - X'(t_0)^{-1}X(t_0) + \left(X'(t_0)^{-1} - X'(y_0)^{-1}\right)X(t_0)\\
&\quad + \left(\frac{5}{4}I - \frac{15}{8}X'(y_0)^{-1}X'(v_0) - \frac{11}{8}X'(v_0)^{-1}X'(y_0)\right)X'(y_0)^{-1}X(t_0)\\
&= t_0 - v^* - X'(t_0)^{-1}X(t_0) + X'(t_0)^{-1}\left(X'(y_0) - X'(t_0)\right)X'(y_0)^{-1}X(t_0)\\
&\quad + \frac{1}{8}\left(15X'(y_0)^{-1}(X'(y_0) - X'(v_0)) + 11X'(v_0)^{-1}(X'(v_0) - X'(y_0)) - 16I\right)X'(y_0)^{-1}X(t_0). \tag{19}
\end{aligned}$$
In view of (5), (9) (for $k = 3$), (14) (for $z = y_0, t_0$), (16), (18) and (19), we have
$$\begin{aligned}
\|v_1 - v^*\| \le \Biggl[&\frac{\int_0^1 \lambda((1-\theta)\|t_0 - v^*\|)\,d\theta}{1 - \lambda_0(\|t_0 - v^*\|)} + \frac{\left(\lambda_0(\|y_0 - v^*\|) + \lambda_0(\|t_0 - v^*\|)\right)\int_0^1 \lambda_1(\theta\|t_0 - v^*\|)\,d\theta}{\left(1 - \lambda_0(\|y_0 - v^*\|)\right)\left(1 - \lambda_0(\|t_0 - v^*\|)\right)}\\
&+ \frac{1}{8}\left(\frac{15\left(\lambda_0(\|v_0 - v^*\|) + \lambda_0(\|y_0 - v^*\|)\right)}{1 - \lambda_0(\|y_0 - v^*\|)} + \frac{11\left(\lambda_0(\|v_0 - v^*\|) + \lambda_0(\|y_0 - v^*\|)\right)}{1 - \lambda_0(\|v_0 - v^*\|)} + 16\right)\\
&\quad\times\frac{\int_0^1 \lambda_1(\theta\|t_0 - v^*\|)\,d\theta}{1 - \lambda_0(\|y_0 - v^*\|)}\Biggr]\|t_0 - v^*\| \le U_3(\|v_0 - v^*\|)\,\|v_0 - v^*\| \le \|v_0 - v^*\|, \tag{20}
\end{aligned}$$
showing $v_1 \in U(v^*, \rho)$ and (12) if $n = 0$.
Simply switch $v_0, y_0, t_0, v_1$ with $v_m, y_m, t_m, v_{m+1}$ in the previous calculations to finish the induction for items (10)–(12). Then, the estimate
$$\|v_{m+1} - v^*\| \le b\,\|v_m - v^*\| < \rho,$$
where $b = U_3(\|v_0 - v^*\|) \in [0, 1)$, implies $v_{m+1} \in U(v^*, \rho)$ and $\lim_{m \to \infty} v_m = v^*$.
Finally, set $G = \int_0^1 X'(v^* + \theta(v^{**} - v^*))\,d\theta$ for some $v^{**} \in Z_1$ with $X(v^{**}) = 0$.
By ($C_1$) and ($C_4$), we obtain
$$\|X'(v^*)^{-1}(G - X'(v^*))\| \le \int_0^1 \lambda_0(\theta\|v^{**} - v^*\|)\,d\theta \le \int_0^1 \lambda_0(\theta\rho_*)\,d\theta < 1,$$
leading to $v^{**} = v^*$, since $G^{-1} \in L(Y_2, Y_1)$ and $0 = X(v^{**}) - X(v^*) = G(v^{**} - v^*)$. □
Next, the convergence analysis of algorithm SM2 is developed in an analogous fashion, but this time with the functions:
$$\bar{U}_1(s) = U_1(s),$$
$$\bar{U}_2(s) = \frac{\int_0^1 \lambda((1-\theta)s)\,d\theta}{1 - \lambda_0(s)} + \frac{3}{8}\left(\left(\frac{\lambda_0(s) + \lambda_0(\bar{U}_1(s)s)}{1 - \lambda_0(\bar{U}_1(s)s)}\right)^2 + 2\,\frac{\lambda_0(s) + \lambda_0(\bar{U}_1(s)s)}{1 - \lambda_0(\bar{U}_1(s)s)}\right)\frac{\int_0^1 \lambda_1(\theta s)\,d\theta}{1 - \lambda_0(s)},$$
$$\begin{aligned}
\bar{U}_3(s) = \Biggl[&\frac{\int_0^1 \lambda((1-\theta)\bar{U}_2(s)s)\,d\theta}{1 - \lambda_0(\bar{U}_2(s)s)} + \frac{\left(\lambda_0(\bar{U}_1(s)s) + \lambda_0(\bar{U}_2(s)s)\right)\int_0^1 \lambda_1(\theta\bar{U}_2(s)s)\,d\theta}{\left(1 - \lambda_0(\bar{U}_1(s)s)\right)\left(1 - \lambda_0(\bar{U}_2(s)s)\right)}\\
&+ \frac{1}{8}\left(\frac{15\left(\lambda_0(s) + \lambda_0(\bar{U}_1(s)s)\right)}{1 - \lambda_0(\bar{U}_1(s)s)} + \frac{11\left(\lambda_0(s) + \lambda_0(\bar{U}_1(s)s)\right)}{1 - \lambda_0(s)} + 16\right)\frac{\int_0^1 \lambda_1(\theta\bar{U}_2(s)s)\,d\theta}{1 - \lambda_0(\bar{U}_1(s)s)}\Biggr]\bar{U}_2(s).
\end{aligned}$$
Suppose that the functions $\bar{U}_k(s) - 1$ have smallest positive roots $\bar{\rho}_k$, respectively, as before, and set
$$\bar{\rho} = \min\{\bar{\rho}_k\}.$$
Then, under the conditions ($C$) for $\tilde{\rho} = \bar{\rho}$, the choice of the $\bar{U}_k$ functions is justified by the estimates
$$\|y_n - v^*\| \le U_1(\|v_n - v^*\|)\,\|v_n - v^*\| = \bar{U}_1(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\| < \bar{\rho},$$
$$\begin{aligned}
t_n - v^* &= v_n - v^* - X'(v_n)^{-1}X(v_n) - \frac{3}{8}\left(\left(X'(y_n)^{-1}X'(v_n)\right)^2 - I\right)X'(v_n)^{-1}X(v_n)\\
&= v_n - v^* - X'(v_n)^{-1}X(v_n)\\
&\quad - \frac{3}{8}\left[\left(X'(y_n)^{-1}(X'(v_n) - X'(y_n))\right)^2 + 2X'(y_n)^{-1}(X'(v_n) - X'(y_n))\right]X'(v_n)^{-1}X(v_n),
\end{aligned}$$
so
$$\begin{aligned}
\|t_n - v^*\| \le \Biggl[&\frac{\int_0^1 \lambda((1-\theta)\|v_n - v^*\|)\,d\theta}{1 - \lambda_0(\|v_n - v^*\|)}\\
&+ \frac{3}{8}\left(\left(\frac{\lambda_0(\|v_n - v^*\|) + \lambda_0(\|y_n - v^*\|)}{1 - \lambda_0(\|y_n - v^*\|)}\right)^2 + 2\,\frac{\lambda_0(\|v_n - v^*\|) + \lambda_0(\|y_n - v^*\|)}{1 - \lambda_0(\|y_n - v^*\|)}\right)\frac{\int_0^1 \lambda_1(\theta\|v_n - v^*\|)\,d\theta}{1 - \lambda_0(\|v_n - v^*\|)}\Biggr]\|v_n - v^*\|\\
\le\; &\bar{U}_2(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\|,
\end{aligned}$$
$$\begin{aligned}
v_{n+1} - v^* &= t_n - v^* - X'(t_n)^{-1}X(t_n) + \left(X'(t_n)^{-1} - X'(y_n)^{-1}\right)X(t_n)\\
&\quad + \left(\frac{5}{4}I - \frac{15}{8}X'(y_n)^{-1}X'(v_n) - \frac{11}{8}X'(v_n)^{-1}X'(y_n)\right)X'(y_n)^{-1}X(t_n)\\
&= t_n - v^* - X'(t_n)^{-1}X(t_n) + X'(t_n)^{-1}\left(X'(y_n) - X'(t_n)\right)X'(y_n)^{-1}X(t_n)\\
&\quad + \frac{1}{8}\left(15X'(y_n)^{-1}(X'(y_n) - X'(v_n)) + 11X'(v_n)^{-1}(X'(v_n) - X'(y_n)) - 16I\right)X'(y_n)^{-1}X(t_n),
\end{aligned}$$
so
$$\begin{aligned}
\|v_{n+1} - v^*\| \le \Biggl[&\frac{\int_0^1 \lambda((1-\theta)\|t_n - v^*\|)\,d\theta}{1 - \lambda_0(\|t_n - v^*\|)} + \frac{\left(\lambda_0(\|y_n - v^*\|) + \lambda_0(\|t_n - v^*\|)\right)\int_0^1 \lambda_1(\theta\|t_n - v^*\|)\,d\theta}{\left(1 - \lambda_0(\|y_n - v^*\|)\right)\left(1 - \lambda_0(\|t_n - v^*\|)\right)}\\
&+ \frac{1}{8}\left(\frac{15\left(\lambda_0(\|v_n - v^*\|) + \lambda_0(\|y_n - v^*\|)\right)}{1 - \lambda_0(\|y_n - v^*\|)} + \frac{11\left(\lambda_0(\|v_n - v^*\|) + \lambda_0(\|y_n - v^*\|)\right)}{1 - \lambda_0(\|v_n - v^*\|)} + 16\right)\\
&\quad\times\frac{\int_0^1 \lambda_1(\theta\|t_n - v^*\|)\,d\theta}{1 - \lambda_0(\|y_n - v^*\|)}\Biggr]\|t_n - v^*\| \le \bar{U}_3(\|v_n - v^*\|)\,\|v_n - v^*\| \le \|v_n - v^*\|.
\end{aligned}$$
Hence, we arrive at the ball convergence result for SM2.
Theorem 2.
Suppose that the conditions ($C$) hold with $\tilde{\rho} = \bar{\rho}$. Then, the conclusions of Theorem 1 hold for SM2 with $\bar{\rho}$ and $\bar{U}_k$ replacing $\rho$ and $U_k$, respectively.
Remark 1.
The continuity assumption
$$\|X'(v^*)^{-1}(X'(u) - X'(v))\| \le \bar{\lambda}(\|u - v\|), \quad \text{for all } u, v \in Z,$$
on $X'$ is employed in existing studies. But then, since $Z_0 \subseteq Z$, we have
$$\lambda(s) \le \bar{\lambda}(s), \quad \text{for all } s \in [0, 2r_0).$$
This is a significant achievement. All results obtained earlier can be presented in terms of $\lambda$, since the iterates remain in $Z_0$, which is a more precise location of the iterates $v_n$ than $Z$. This improves the convergence radii, tightens the upper bounds on the error $\|v_n - v^*\|$ and produces better knowledge about the location of $v^*$. To demonstrate this, let us take the example $X(v) = e^v - 1$ for $Z = U(0, 1)$. Then, we have
$$\lambda_0(s) = (e-1)s < \lambda(s) = e^{\frac{1}{e-1}}s < \bar{\lambda}(s) = es,$$
and using Rheinboldt or Traub [14,15] (for $\lambda_0 = \lambda = \bar{\lambda}$) we get $R_{TR} = 0.081751$; using previous studies by Argyros [5] (for $\lambda = \bar{\lambda}$), $R_E = 0.108316$; and with this study, $\rho_1 = \bar{\rho}_1 = 0.127564$, so
$$\rho_1 = \bar{\rho}_1 > R_E > R_{TR}.$$
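For $\alpha = \frac{2}{3}$ and $\lambda_1(s) = 2$ (the values used in Example 2 of Section 4, which shares this setting), $\rho_1$ even admits a closed form, since the integrals of linear functions are elementary; this short verification is our own addition:
$$U_1(s) = \frac{\frac{s}{2}e^{\frac{1}{e-1}} + \frac{2}{3}}{1 - (e-1)s} = 1 \;\Longrightarrow\; \rho_1 = \frac{1}{3\left(\frac{1}{2}e^{\frac{1}{e-1}} + e - 1\right)} \approx 0.127564.$$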

3. Comparison of Attraction Basins

Comparisons of the dynamical qualities of SM1 and SM2 are provided in this section by employing attraction basins. Suppose $M(z)$ denotes a complex polynomial of degree two or higher. Then, the set $\{z_0 \in \mathbb{C} : z_j \to z^* \text{ as } j \to \infty\}$ represents the attraction basin corresponding to a zero $z^*$ of $M$, where $\{z_j\}_{j=0}^{\infty}$ is generated by an iterative algorithm with the starting choice $z_0 \in \mathbb{C}$. Let us select the region $E = [-4, 4] \times [-4, 4]$ in $\mathbb{C}$ with a grid of $400 \times 400$ points. To prepare attraction basins, we apply SM1 and SM2 to a variety of complex polynomials, selecting every point $z_0 \in E$ as a starting point. The point $z_0$ belongs to the basin of a zero $z^*$ of the considered polynomial if $\lim_{j \to \infty} z_j = z^*$. We then display $z_0$ with a fixed color corresponding to $z^*$, shaded from light to dark according to the number of iterations. Black indicates non-convergence zones. The termination condition of the iteration is $\|z_j - z^*\| < 10^{-6}$, with a maximum of 300 iterations. We used MATLAB 2019a to design the fractal pictures.
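The original pictures were generated in MATLAB; the following Python/NumPy sketch (our own reimplementation, using the scalar form of SM1 with the sign convention of Section 1) illustrates the same procedure for $M_1(z) = z^2 - 1$. Grid points whose orbits diverge or hit a singular derivative simply remain black.

```python
import numpy as np
import matplotlib.pyplot as plt

def sm1_scalar(f, df, z, alpha=2/3):
    # Scalar complex form of SM1: operator products become derivative ratios.
    y = z - alpha * f(z) / df(z)
    A, B = df(z) / df(y), df(y) / df(z)
    t = z - (-0.5 + 9/8 * A + 3/8 * B) * f(z) / df(z)
    return t - (-9/4 + 15/8 * A + 11/8 * B) * f(t) / df(y)

f, df = lambda z: z**2 - 1, lambda z: 2 * z      # M_1 and its derivative
roots = np.array([1.0, -1.0])

N = 400                                          # reduce for a quick preview
xs = np.linspace(-4, 4, N)
grid = xs[None, :] + 1j * xs[:, None]            # the region E
basin = np.full((N, N), -1)                      # -1: non-convergence (black)
for i in range(N):
    for j in range(N):
        z = grid[i, j]
        for _ in range(300):                     # iteration cap
            z = sm1_scalar(f, df, z)
            d = np.abs(roots - z)
            if d.min() < 1e-6:                   # stopping criterion
                basin[i, j] = d.argmin()
                break
plt.imshow(basin, extent=[-4, 4, -4, 4])
plt.show()
```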
This numerical experiment begins with the degree-two polynomials $M_1(z) = z^2 - 1$ and $M_2(z) = z^2 - z - 1$. These polynomials are used to compare the attraction basins for SM1 and SM2. The results of the comparison are displayed in Figure 1 and Figure 2. In Figure 1, green and pink areas indicate the attraction basins corresponding to the zeros $-1$ and $1$, respectively, of $M_1(z)$. The basins of the solutions $\frac{1+\sqrt{5}}{2}$ and $\frac{1-\sqrt{5}}{2}$ of $M_2(z) = 0$ are shown in Figure 2 using pink and green, respectively. Figure 3 and Figure 4 offer the attraction basins for SM1 and SM2 associated with the zeros of $M_3(z) = z^3 - 1$ and $M_4(z) = z^3 + (-0.7250 + 1.6500i)z - 0.275 - 1.65i$. In Figure 3, the basins of the solutions $-\frac{1}{2} - 0.866025i$, $1$ and $-\frac{1}{2} + 0.866025i$ of $M_3(z) = 0$ are painted in blue, green and pink, respectively. The basins for SM1 and SM2 associated with the zeros $1$, $-1.401440 + 0.915201i$ and $0.401440 - 0.915201i$ of $M_4(z)$ are given in Figure 4 by means of green, pink and blue regions, respectively. Next, we use the degree-four polynomials $M_5(z) = z^4 - 1$ and $M_6(z) = z^4 - 10z^2 + 9$ to compare the attraction basins for SM1 and SM2. Figure 5 provides the comparison of basins for these algorithms associated with the solutions $1$, $i$, $-i$ and $-1$ of $M_5(z) = 0$, which are denoted by yellow, green, pink and blue regions. The basins for SM1 and SM2 corresponding to the zeros $-1$, $3$, $-3$ and $1$ of $M_6(z)$ are demonstrated in Figure 6 using yellow, pink, green and blue, respectively. Moreover, we select the degree-five polynomials $M_7(z) = z^5 - 5z^3 + 4z$ and $M_8(z) = z^5 + z$ to design and compare the attraction basins for SM1 and SM2. Figure 7 gives the basins of the zeros $0$, $2$, $-1$, $-2$ and $1$ of $M_7(z)$ in yellow, magenta, red, green and cyan, respectively. In Figure 8, green, cyan, red, pink and yellow regions illustrate the attraction basins of the solutions $-0.707106 - 0.707106i$, $-0.707106 + 0.707106i$, $0.707106 + 0.707106i$, $0.707106 - 0.707106i$ and $0$, respectively, of $M_8(z) = 0$. Lastly, the sixth-degree complex polynomials $M_9(z) = z^6 - 0.5z^5 + \frac{11}{4}(1+i)z^4 - \frac{1}{4}(19+3i)z^3 + \frac{1}{4}(11+i)z^2 - \frac{1}{4}(19+3i)z + \frac{3}{2} - 3i$ and $M_{10}(z) = z^6 + z - 1$ are considered. In Figure 9, the attraction basins for SM1 and SM2 corresponding to the zeros $1 - i$, $-\frac{1}{2} - \frac{i}{2}$, $-\frac{3}{2}i$, $1$, $i$ and $-1 + 2i$ of $M_9(z)$ are given in blue, yellow, green, magenta, cyan and red, respectively. In Figure 10, green, pink, red, yellow, cyan and blue are applied to illustrate the basins related to the solutions $-1.134724$, $0.629372 - 0.735755i$, $0.7780895$, $-0.451055 - 1.002364i$, $0.629372 + 0.735755i$ and $-0.451055 + 1.002364i$ of $M_{10}(z) = 0$, respectively. In Figures 1–10, the roots of the considered polynomials are displayed using a white *.

4. Numerical Examples

We apply the proposed techniques to estimate the convergence radii for SM1 and SM2 when $\alpha = \frac{2}{3}$.
Example 1.
Let us consider $Y_1 = Y_2 = C[0, 1]$ and $Z = \bar{U}(0, 1)$. Define $X$ on $Z$ by
$$X(v)(a) = v(a) - 5\int_0^1 a\,x\,v(x)^3\,dx,$$
where $v(a) \in C[0, 1]$. We have $v^* = 0$. In addition, $\lambda_0(s) = 7.5s$, $\lambda(s) = 15s$ and $\lambda_1(s) = 2$. The values of $\rho$ and $\bar{\rho}$ are produced by applying the proposed theorems and are summarized in Table 1.
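For instance, with these functions and $\alpha = \frac{2}{3}$, the integrals in $U_1$ are elementary and $\rho_1$ can be checked against Table 1 with the `smallest_root` helper sketched in Section 2 (a verification of our own):

```python
# U_1(s) - 1 for Example 1: U_1(s) = (15*s/2 + (1/3)*2) / (1 - 7.5*s).
g = lambda s: (15 * s / 2 + 2 / 3) / (1 - 7.5 * s) - 1
print(smallest_root(g, s_max=1 / 7.5))           # approx. 0.022222 = rho_1
```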
Example 2.
Let $Y_1 = Y_2 = \mathbb{R}^3$ and $Z = \bar{U}(0, 1)$. Consider $X$ on $Z$ for $v = (v_1, v_2, v_3)^T$ as
$$X(v) = \left(e^{v_1} - 1,\; \frac{e-1}{2}v_2^2 + v_2,\; v_3\right)^T.$$
We have $v^* = (0, 0, 0)^T$. In addition, $\lambda_0(s) = (e-1)s$, $\lambda(s) = e^{\frac{1}{e-1}}s$ and $\lambda_1(s) = 2$. Using the newly proposed theorems, the values of $\rho$ and $\bar{\rho}$ are calculated and displayed in Table 2.
Example 3.
Finally, the motivational problem described in the first section is addressed, with $v^* = 1$, $\lambda_0(s) = \lambda(s) = 96.662907s$ and $\lambda_1(s) = 2$. We apply the suggested theorems to compute $\rho$ and $\bar{\rho}$. These values are shown in Table 3.
It is worth noticing that if we stop at the first substep in both algorithms (i.e., restrict ourselves to the first substep of Jarratt's algorithm), then the radius is largest (see $\rho_1$ and $\bar{\rho}_1$). Moreover, if we increase the convergence order to four (i.e., consider only the first two substeps of the algorithms), then the radii get smaller (see $\rho_2$ and $\bar{\rho}_2$). Furthermore, if we increase the order to six (i.e., use all the substeps of these algorithms), then we obtain the smallest radii (see $\rho$ and $\bar{\rho}$). This is expected when the order increases. Concerning the corresponding error estimates $\|v_n - v^*\|$, we see clearly that fewer iterates are needed to reach $v^*$ as the order increases. We solved Example 3 with $v_0 = 0.9993$ using these algorithms, and the results are presented in Table 4 and Table 5. In addition, we executed four iterations of these schemes ten times in MATLAB 2019a. We then obtained the average elapsed time and average CPU time (in seconds) for SM1 and SM2; these values are presented in Table 6.
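Errors as small as $10^{-1608}$ in Table 4 and Table 5 require extended precision. A sketch of how the SM1 column of Table 4 can be reproduced with mpmath follows (the precision, loop count and the sign convention of Section 1 are our own choices, so individual digits may differ):

```python
from mpmath import mp, mpf, log, fabs

mp.dps = 2000                                  # enough digits for ~10^{-1608}
X  = lambda v: v**3 * log(v**2) + v**5 - v**4
dX = lambda v: 3*v**2 * log(v**2) + 2*v**2 + 5*v**4 - 4*v**3

def sm1(v, alpha=mpf(2)/3):
    y = v - alpha * X(v) / dX(v)
    A, B = dX(v) / dX(y), dX(y) / dX(v)
    t = v - (mpf(-1)/2 + mpf(9)/8 * A + mpf(3)/8 * B) * X(v) / dX(v)
    return t - (mpf(-9)/4 + mpf(15)/8 * A + mpf(11)/8 * B) * X(t) / dX(y)

v = mpf('0.9993')
for n in range(1, 5):
    v = sm1(v)
    print(n, fabs(v - 1))                      # compare with Table 4
```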

5. Conclusions

Major problems appear when studying high convergence order algorithms for solving equations. One of them is that the order is shown assuming the existence of higher order derivatives that do not appear in the algorithms. In particular, in the case of SM1 and SM2, derivatives up to order seven have been utilized. Hence (see also our example in the introduction), these derivatives restrict the utilization of these algorithms. We also do not know how many iterates are needed to arrive at a prearranged accuracy. Moreover, no uniqueness ball for the solution is known. This is the case not only for the algorithms we studied, but for all high convergence order algorithms whose convergence order is shown using Taylor series. That is why we addressed all these concerns in the more general setting of a Banach space, under generalized continuity conditions and using only the derivative appearing in these algorithms. Our technique is so general that it can be applied to extend the utilization of other algorithms. We also presented the convergence ball and a dynamical comparison between these schemes.

Author Contributions

Conceptualization, I.K.A. and D.S.; methodology, I.K.A., D.S., C.I.A., S.K.P., S.K.S. and M.I.A.; software, I.K.A., C.I.A. and D.S.; validation, I.K.A., D.S., C.I.A., S.K.P. and S.K.S.; formal analysis, I.K.A., D.S., C.I.A. and M.I.A.; investigation, I.K.A., D.S., C.I.A., S.K.P. and S.K.S.; resources, I.K.A., D.S., C.I.A., S.K.P. and S.K.S.; data curation, C.I.A., S.K.P., S.K.S. and M.I.A.; writing—original draft preparation, I.K.A., D.S., C.I.A., S.K.P., S.K.S. and M.I.A.; writing—review and editing, I.K.A. and D.S.; visualization, I.K.A., D.S., C.I.A., S.K.P., S.K.S. and M.I.A.; supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

This study did not report any data.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abro, H.A.; Shaikh, M.M. A new time-efficient and convergent nonlinear solver. Appl. Math. Comput. 2019, 355, 516–536. [Google Scholar] [CrossRef]
  2. Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Design, Analysis, and Applications of Iterative Methods for Solving Nonlinear Systems. In Nonlinear Systems-Design, Analysis, Estimation and Control; IntechOpen: Rijeka, Croatia, 2016. [Google Scholar]
  3. Waseem, M.; Noor, M.A.; Noor, K.I. Efficient method for solving a system of nonlinear equations. Appl. Math. Comput. 2016, 275, 134–146. [Google Scholar] [CrossRef]
  4. Said Solaiman, O.; Hashim, I. Efficacy of optimal methods for nonlinear equations with chemical engineering applications. Math. Probl. Eng. 2019, 2019, 1728965. [Google Scholar] [CrossRef] [Green Version]
  5. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis; World Scientific Publishing House: New Jersey, NJ, USA, 2013. [Google Scholar]
  6. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  7. Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods; Elsevier: New York, NY, USA, 2018. [Google Scholar]
  8. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  9. Cordero, A.; García-Maimó, J.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski-Chun type parametric families. J. Math. Chem. 2015, 53, 430–449. [Google Scholar] [CrossRef] [Green Version]
  10. Grau-Sánchez, M.; Gutiérrez, J.M. Zero-finder methods derived from Obreshkov's techniques. Appl. Math. Comput. 2009, 215, 2992–3001. [Google Scholar]
  11. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
  12. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comp. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  13. Petković, M.S.; Neta, B.; Petković, L.; Džunić, D. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  14. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Math. Model. Numer. Methods 1978, 3, 129–142. [Google Scholar] [CrossRef]
  15. Traub, J.F. Iterative Methods for Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  16. Wang, X.; Kou, J.; Li, Y. A variant of Jarratt method with sixth-order convergence. Appl. Math. Comput. 2008, 204, 14–19. [Google Scholar] [CrossRef]
  17. Sharma, J.R.; Guna, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323. [Google Scholar] [CrossRef]
  18. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  19. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2016, 74, 1–7. [Google Scholar] [CrossRef]
  20. Argyros, I.K.; George, S. Local convergence of Jarratt-type methods with less computation of inversion under weak conditions. Math. Model. Anal. 2017, 22, 228–236. [Google Scholar] [CrossRef]
  21. Argyros, I.K.; Sharma, D.; Argyros, C.I.; Parhi, S.K.; Sunanda, S.K. A Family of Fifth and Sixth Convergence Order Methods for Nonlinear Models. Symmetry 2021, 13, 715. [Google Scholar] [CrossRef]
  22. Kou, J.; Li, Y. An improvement of the Jarratt method. Appl. Math. Comput. 2007, 189, 1816–1821. [Google Scholar] [CrossRef]
  23. Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
  24. Sharma, D.; Parhi, S.K.; Sunanda, S.K. Extending the convergence domain of deformed Halley method under ω condition in Banach spaces. Bol. Soc. Mat. Mex. 2021, 27, 32. [Google Scholar] [CrossRef]
  25. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015. [Google Scholar] [CrossRef]
  26. Chun, C.; Neta, B. Developing high order methods for the solution of systems of nonlinear equations. Appl. Math. Comput. 2019, 342, 178–190. [Google Scholar] [CrossRef]
Figure 1. Attraction basins comparison between SM1 and SM2 related to $M_1(z)$.
Figure 2. Attraction basins comparison between SM1 and SM2 corresponding to $M_2(z)$.
Figure 3. Attraction basins comparison between SM1 and SM2 related to $M_3(z)$.
Figure 4. Attraction basins comparison between SM1 and SM2 corresponding to $M_4(z)$.
Figure 5. Attraction basins comparison between SM1 and SM2 related to $M_5(z)$.
Figure 6. Attraction basins comparison between SM1 and SM2 corresponding to $M_6(z)$.
Figure 7. Attraction basins comparison between SM1 and SM2 related to $M_7(z)$.
Figure 8. Attraction basins comparison between SM1 and SM2 corresponding to $M_8(z)$.
Figure 9. Attraction basins comparison between SM1 and SM2 related to $M_9(z)$.
Figure 10. Attraction basins comparison between SM1 and SM2 corresponding to $M_{10}(z)$.
Table 1. Comparison of convergence radii for Example 1.

SM1                  SM2
ρ_1 = 0.022222       ρ̄_1 = 0.022222
ρ_2 = 0.023424       ρ̄_2 = 0.021456
ρ_3 = 0.006858       ρ̄_3 = 0.006703
ρ   = 0.006858       ρ̄   = 0.006703
Table 2. Comparison of convergence radii for Example 2.

SM1                  SM2
ρ_1 = 0.127564       ρ̄_1 = 0.127564
ρ_2 = 0.113416       ρ̄_2 = 0.102805
ρ_3 = 0.033574       ρ̄_3 = 0.032642
ρ   = 0.033574       ρ̄   = 0.032642
Table 3. Comparison of convergence radii for Example 3.

SM1                  SM2
ρ_1 = 0.002299       ρ̄_1 = 0.002299
ρ_2 = 0.002026       ρ̄_2 = 0.001835
ρ_3 = 0.000600       ρ̄_3 = 0.000583
ρ   = 0.000600       ρ̄   = 0.000583
Table 4. $\|v_n - v^*\|$ for algorithm SM1.

n    1 Substep          2 Substeps          3 Substeps (SM1)
1    2.32 × 10^{-4}     1.29 × 10^{-11}     3.65 × 10^{-14}
2    7.73 × 10^{-5}     1.50 × 10^{-42}     1.38 × 10^{-65}
3    2.58 × 10^{-5}     2.74 × 10^{-166}    1.06 × 10^{-322}
4    8.59 × 10^{-6}     3.00 × 10^{-661}    2.87 × 10^{-1608}
Table 5. $\|v_n - v^*\|$ for algorithm SM2.

n    1 Substep          2 Substeps          3 Substeps (SM2)
1    2.32 × 10^{-4}     7.12 × 10^{-12}     2.01 × 10^{-14}
2    7.73 × 10^{-5}     7.59 × 10^{-44}     3.83 × 10^{-67}
3    2.58 × 10^{-5}     9.81 × 10^{-172}    9.67 × 10^{-331}
4    8.59 × 10^{-6}     2.73 × 10^{-683}    9.96 × 10^{-1649}
Table 6. Elapsed time and CPU time (in seconds) comparison between SM1 and SM2.

Algorithm    Elapsed Time    CPU Time
SM1          3.2925          3.4875
SM2          3.4524          3.6187
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
