Article

On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction

1 School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China
2 School of Advanced Science and Languages, VIT Bhopal University, Kothri-Kalan, Sehore 466114, India
3 Department of Mathematics, Thapar Institute of Engineering and Technology, Patiala 147004, India
4 School of Engineering, Universidade Federal do Rio Grande, Rio Grande 96201-900, Brazil
5 Institute of Geophysics and Geomatics, China University of Geosciences, Wuhan 321004, China
* Authors to whom correspondence should be addressed.
Fractal Fract. 2024, 8(12), 698; https://doi.org/10.3390/fractalfract8120698
Submission received: 6 October 2024 / Revised: 22 November 2024 / Accepted: 23 November 2024 / Published: 26 November 2024

Abstract:
This paper investigates the design and stability of Traub–Steffensen-type iteration schemes with and without memory for solving nonlinear equations. Steffensen's method overcomes the drawback of the derivative evaluation in Newton's scheme, but it has, in general, smaller sets of initial guesses that converge to the desired root. Despite this drawback, several researchers have developed higher-order iterative methods based on Steffensen's scheme. Traub introduced a free parameter in Steffensen's scheme to obtain the first parametric iteration method, which provides larger basins of attraction for specific values of the parameter. In this paper, we introduce a two-step derivative-free fourth-order optimal iteration scheme based on Traub's method by employing three free parameters and a weight function. We further extend it into a two-step eighth-order iteration scheme by means of memory, with the help of suitable approximations of the involved parameters using Newton's interpolation. The convergence analysis demonstrates that the proposed iteration scheme without memory has an order of convergence of 4, while its memory-based extension achieves an order of convergence of at least 7.993, attaining the efficiency index $7.993^{1/3} \approx 2$. Two special cases of the proposed iteration scheme are also presented. Notably, the proposed methods compete with any optimal $j$-point method without memory. We affirm the superiority of the proposed iteration schemes in terms of efficiency index, absolute error, computational order of convergence, basins of attraction, and CPU time using comparisons with several existing iterative methods of similar kinds across diverse nonlinear equations. In general, for the comparison of iterative schemes, the basins of attraction are investigated on simple polynomials of the form $z^n - 1$ in the complex plane.
However, we investigate the stability and regions of convergence of the proposed iteration methods in comparison with some existing methods on a variety of nonlinear equations in terms of fractals of basins of attraction. The proposed iteration schemes generate the basins of attraction in less time with simple fractals and wider regions of convergence, confirming their stability and superiority in comparison with the existing methods.

1. Introduction

Several real-life problems in engineering and applied sciences involve nonlinear equations of the form $\phi(\omega) = 0$, where $\phi : I \subseteq \mathbb{R} \to \mathbb{R}$ and $I$ is an open interval. The basic aim of this research is the solution of such nonlinear equations, assumed here to possess a simple zero, say $\alpha$. Since the roots of a nonlinear equation cannot always be determined in closed form, we have to find a numerical solution by using numerical methods. For this purpose, iteration methods, like Newton's method, are frequently used [1,2,3]. Traub [3] classified these iterative methods into two categories: one-point (one-step) iterative methods and multi-point (multi-step) iterative methods. Newton's method [1] and Steffensen's method [4] are famous examples of one-step, one-point iterative methods, given by (1) and (2), respectively.
$$\omega_{j+1} = \omega_j - \frac{\phi(\omega_j)}{\phi'(\omega_j)}, \quad j \geq 0, \qquad (1)$$
$$\chi_j = \omega_j + \phi(\omega_j), \quad j \geq 0, \qquad \omega_{j+1} = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j]}, \qquad (2)$$
where $\phi[\omega_j, \chi_j] = \dfrac{\phi(\omega_j) - \phi(\chi_j)}{\omega_j - \chi_j}$.
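As a concrete illustration (not taken from the paper), the two schemes can be sketched in a few lines; the test function $\phi(x) = x^2 - 2$, with simple zero $\sqrt{2}$, is an assumption chosen for the example:

```python
def newton(phi, dphi, w, iters=8):
    # Newton's scheme (1): w_{j+1} = w_j - phi(w_j)/phi'(w_j)
    for _ in range(iters):
        w = w - phi(w) / dphi(w)
    return w

def steffensen(phi, w, iters=8):
    # Steffensen's scheme (2): chi_j = w_j + phi(w_j),
    # w_{j+1} = w_j - phi(w_j)/phi[w_j, chi_j]
    for _ in range(iters):
        fw = phi(w)
        if fw == 0.0:                      # already at a root; divided
            break                          # difference would be undefined
        chi = w + fw
        dd = (fw - phi(chi)) / (w - chi)   # first-order divided difference
        w = w - fw / dd
    return w

phi = lambda x: x * x - 2                  # simple zero at sqrt(2)
dphi = lambda x: 2 * x
print(newton(phi, dphi, 1.5), steffensen(phi, 1.5))
```

Both runs converge to $\sqrt{2} \approx 1.41421356$; the Steffensen variant uses no derivative but, as discussed below, is more sensitive to the initial guess.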
The investigation of the dynamical behavior of iterative methods using basins of attraction provides information about the regions of convergence and the selection of initial guesses for which a method converges or fails to converge. To investigate the regions of convergence of an iteration scheme for solving a nonlinear equation $\phi(z) = 0$, we plot its basins of attraction, i.e., the set of initial guesses for which the iteration scheme converges to the roots [5,6], as follows. We choose an initial guess $z_0$ from a grid of $500 \times 500$ points within the rectangle $D \subset \mathbb{C}$, which contains all of the roots of $\phi(z) = 0$, each root being allocated a unique color. Starting with an initial point in $D$, an iteration method may either converge to one of the roots or diverge after a specified number of iterations (here, 20); divergent points are usually marked with the color black. For more details regarding basins of attraction, one should refer to [6,7]. For instance, we plot the basins of attraction of $\phi(z) = z^3 - 1$ for Steffensen's method (2), which has three roots, $1$, $-0.5 - 0.866025i$, and $-0.5 + 0.866025i$, contained in $D = [-3, 3] \times [-3, 3]$ and represented by cyan, magenta, and yellow, respectively. Figure 1 represents the basins of attraction of $\phi(z) = z^3 - 1$ using Newton's method (1) and Steffensen's method (2) with 1335 and 226,616 black points, respectively, from a total of 251,001 points in the region.
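The procedure just described is easy to reproduce. The following sketch (an illustration, on a coarser $50 \times 50$ grid than the paper's) classifies initial points for Newton's method (1) applied to $\phi(z) = z^3 - 1$; points not within $10^{-6}$ of a root after 20 iterations play the role of black points:

```python
roots = [1, complex(-0.5, 0.8660254037844386), complex(-0.5, -0.8660254037844386)]

def basin_label(z0, max_iter=20, tol=1e-6):
    # Newton iteration z <- z - (z^3 - 1)/(3 z^2); return the index of the
    # root reached, or -1 (a "black point") if none is reached in max_iter steps
    z = z0
    for _ in range(max_iter):
        if abs(z) < 1e-12:          # derivative ~ 0: treat as divergent
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

n = 50                              # grid on D = [-3, 3] x [-3, 3]
labels = [basin_label(complex(-3 + 6 * i / (n - 1), -3 + 6 * j / (n - 1)))
          for i in range(n) for j in range(n)]
print(labels.count(-1), "black points out of", n * n)
```

Coloring each label and plotting the grid reproduces pictures such as Figure 1; replacing the Newton update with (2) or (3) yields the corresponding Steffensen and Traub basins.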
Steffensen's method overcomes the difficulty of derivative calculation in Newton's scheme but, in general, it has smaller sets of initial guesses that converge to the desired roots (basins of attraction), as shown in Figure 1. In recent years, several researchers have developed higher-order variants of Steffensen's scheme despite this drawback. Traub [3] introduced a free parameter in Steffensen's scheme (2) to obtain the first parametric derivative-free iteration method, which provides larger basins of attraction for specific values of the parameter. Traub [3] further presented an iteration method with memory by using a suitable approximation of the free parameter $\beta_j$ as follows:
$$\chi_j = \omega_j + \beta_j\,\phi(\omega_j), \quad \beta_j \neq 0, \qquad \omega_{j+1} = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j]}, \quad j \geq 0, \qquad (3)$$
where $\omega_0, \beta_0$ are given, $\phi[\omega, \chi] = \frac{\phi(\omega) - \phi(\chi)}{\omega - \chi}$ denotes the divided difference of first order, and $\beta_j = -\frac{1}{N_1'(\omega_j)}$, $j \geq 1$, where $N_1(\omega) = \phi(\omega_j) + (\omega - \omega_j)\,\phi[\omega_j, \chi_{j-1}]$ is the first-degree Newton interpolation polynomial built from the saved points. The iterative scheme with memory given by (3) has a convergence order of 2.41. Figure 2 represents the basins of attraction of $\phi(z) = z^3 - 1$ using Traub's method (3) for $\beta_0 = 0.01$ and $\beta_0 = 0.001$ with 5825 and 2177 black points, respectively.
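A minimal sketch of the with-memory scheme (3) follows; the update $\beta \leftarrow -1/\phi[\omega_j, \chi_j]$, i.e., the reciprocal of the last computed divided difference (our reading of the $N_1$ construction), and the test function $x^2 - 2$ are assumptions for illustration:

```python
def traub_memory(phi, w, beta=0.01, iters=8):
    # Traub's scheme (3): chi_j = w_j + beta_j*phi(w_j),
    # w_{j+1} = w_j - phi(w_j)/phi[w_j, chi_j], then refresh beta from the
    # saved divided difference (the self-accelerating step)
    for _ in range(iters):
        fw = phi(w)
        if fw == 0.0:
            break
        chi = w + beta * fw
        if chi == w:              # beta*phi(w) below machine resolution
            break
        dd = (fw - phi(chi)) / (w - chi)
        w = w - fw / dd
        beta = -1.0 / dd          # memory: approximates -1/phi'(alpha)
    return w

phi = lambda x: x * x - 2
print(traub_memory(phi, 1.5))     # converges to ~1.41421356
```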
The concept of an optimal root-finding method was stated by Kung and Traub [8]: a multi-step iterative method without memory using $j + 1$ function evaluations per iteration has an order of convergence of at most $2^j$ (optimal method). Ostrowski [2] defined the efficiency index, i.e., $EI = \rho^{1/j}$, to compute the efficiency of a root-finding iteration scheme, where $\rho$ and $j$ denote the order of convergence and the total number of function evaluations used by an iterative scheme per iterative step, respectively. For an optimal $j$-step iterative method (based on $j + 1$ function evaluations) without memory, the efficiency index is $EI = 2^{j/(j+1)}$, and $\lim_{j \to \infty} 2^{j/(j+1)} = 2$. For instance, the efficiency index of the two-step optimal fourth-order King's method [9] (which requires three functional evaluations) is $4^{1/3} \approx 1.587$.
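The efficiency indices quoted throughout the paper are one-line computations; for instance:

```python
def efficiency_index(rho, evals):
    # Ostrowski: EI = rho**(1/j), rho = order, j = function evaluations/step
    return rho ** (1.0 / evals)

print(efficiency_index(2, 2))      # Newton: 2**(1/2)  ~ 1.414
print(efficiency_index(4, 3))      # optimal 4th order: 4**(1/3) ~ 1.587
print(efficiency_index(7.993, 3))  # proposed with-memory scheme: ~ 1.999
```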
Since multi-step (multi-point) methods have advantages over the one-step (one-point) iteration methods in terms of the efficiency index and convergence order, several optimal multi-step (multi-point) iteration methods without memory for solving nonlinear equations have been derived in recent years (see, for example, [10,11,12,13,14,15,16,17,18]).
Traub [3] pointed out that, in some cases, the convergence order and efficiency index $EI$ of an iteration scheme can be improved without using additional functional evaluations, based on the approximation of an accelerating parameter that appears in its error term by an interpolating polynomial passing through the available points of the current and previous iterations. Such iteration methods are defined as methods with memory [3]. Inspired by this idea, in recent years, several two- and three-step iterative methods with memory have been developed by employing free parameters [19,20,21,22,23,24,25,26,27,28,29,30]. Recently, Abdullah et al. [30] developed a two-point iterative method with memory by using Hermite interpolation polynomials in an existing sixth-order method without memory. They improved the R-order of convergence of the sixth-order method to 7.2749 and the efficiency index from 1.37 to 1.64. For more details regarding the improvement of convergence order by means of memory, one should see, e.g., [14,24].
In this paper, we present a family of two-step iterative root-finding methods with memory with a convergence order of $7.993 \approx 8$ and an efficiency index of $7.993^{1/3} \approx 2$, which equals the efficiency index of a $j$-point optimal method without memory of order $2^j$. In addition, the proposed methods possess wider regions of convergence, illustrated in terms of basins of attraction. The remaining contents of the paper proceed as follows. In Section 2, based on Traub's scheme (3) and the second step of King's method [9], and by using a parametric approximation of a derivative along with a weight function, we obtain a new optimal fourth-order derivative-free iteration scheme. In Section 3, we employ three self-accelerating parameters in the new optimal fourth-order scheme such that the convergence order is improved from 4 to 8 without using additional functional evaluations (i.e., using only three functional evaluations). It is necessary to remark that the efficiency index of the fourth-order method is thereby improved from 1.587 to 2. Section 4 is devoted to presenting some particular cases of the proposed family and weight functions. In Section 5, some numerical examples and real-life applications are reported to test the efficiency and performance of the proposed methods and to justify the theoretical results. Section 6 presents an extensive analysis and comparison of the proposed methods with existing ones in terms of fractals of basins of attraction in the complex plane on a variety of nonlinear functions. Finally, some concluding remarks are given in Section 7.

2. Two-Step Traub–Steffensen-Type Iterative Scheme

In this section, we design a derivative-free two-step fourth-order optimal iteration scheme without memory. We introduce a free parameter q in Traub’s method without memory and combine it with the second step of King’s scheme [9] as follows:
$$\chi_j = \omega_j + p\,\phi(\omega_j), \quad p \neq 0, \; j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q\,\phi(\omega_j)},$$
$$\omega_{j+1} = z_j - \frac{\phi(z_j)}{\phi'(\omega_j)} \cdot \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}, \quad \lambda \in \mathbb{R}. \qquad (4)$$
By approximating $\phi'(\omega_j)$ with $\phi[z_j, \chi_j] + q\,\phi(\chi_j) + s\,(z_j - \chi_j)(z_j - \omega_j)$ in the second step of the scheme (4), the following derivative-free two-step iteration scheme is obtained:
$$\chi_j = \omega_j + p\,\phi(\omega_j), \quad j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q\,\phi(\chi_j)},$$
$$\omega_{j+1} = z_j - \frac{\phi(z_j)}{\phi[z_j, \chi_j] + q\,\phi(\chi_j) + s\,(z_j - \chi_j)(z_j - \omega_j)} \cdot \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}, \qquad (5)$$
where the scalars $p \neq 0$, $q$, and $s$ are free parameters. With the help of Taylor series expansions, one can obtain the following error equation for the iteration scheme (5):
$$e_{j+1} = (c_2 + q)^2 (1 + p\,c_1)^2\, e_j^3,$$
where $e_j = \omega_j - \alpha$ ($\omega_j$ and $\alpha$ being the approximate and exact roots, respectively) is the error at the $j$th iteration, $c_1 = \phi'(\alpha)$, and $c_k = \frac{\phi^{(k)}(\alpha)}{k!\,\phi'(\alpha)}$, $k \geq 2$. Note that the scheme (5) is not optimal, as it provides convergence order 3 by using three functional evaluations. To make it optimal, we use a real-valued weight function $G(t_j)$ (where $t_j = \frac{\phi(z_j)}{\phi(\omega_j)}$) in (5) and achieve the following family of optimal fourth-order methods:
$$\chi_j = \omega_j + p\,\phi(\omega_j), \quad j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q\,\phi(\chi_j)},$$
$$\omega_{j+1} = z_j - G(t_j)\,\frac{\phi(z_j)}{\phi[z_j, \chi_j] + q\,\phi(\chi_j) + s\,(z_j - \chi_j)(z_j - \omega_j)} \cdot \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}. \qquad (6)$$
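For a numerical sanity check of scheme (6), the following sketch fixes the weight $G(t) = 1 - t$ (the choice used later for $SM_1$) and small constant parameters $p = q = s = 0.01$, $\lambda = 2$, applied to $\phi(x) = x^2 - 2$ — all illustrative assumptions:

```python
def scheme6(phi, w, p=0.01, q=0.01, s=0.01, lam=2.0, G=lambda t: 1 - t, iters=6):
    # two-step derivative-free fourth-order family (6) with fixed parameters
    for _ in range(iters):
        fw = phi(w)
        if fw == 0.0:
            break
        chi = w + p * fw
        if chi == w:                       # step below machine resolution
            break
        fchi = phi(chi)
        z = w - fw / ((fw - fchi) / (w - chi) + q * fchi)
        fz = phi(z)
        denom = (fz - fchi) / (z - chi) + q * fchi + s * (z - chi) * (z - w)
        w = z - G(fz / fw) * (fz / denom) * (fw + lam * fz) / (fw + (lam - 2) * fz)
    return w

phi = lambda x: x * x - 2
print(scheme6(phi, 1.5))                   # ~1.41421356
```

With fourth-order convergence, the error drops from about $10^{-5}$ to machine precision within two to three iterations from $\omega_0 = 1.5$.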
The subsequent theorem demonstrates the conditions on the weight function G ( t j ) to obtain optimal fourth-order convergence of the scheme (6).
Theorem 1.
Let $\alpha \in I$ be a simple root of a sufficiently differentiable nonlinear function $\phi : I \subseteq \mathbb{R} \to \mathbb{R}$, where $I \subseteq \mathbb{R}$ is an open set. Let an initial approximation $\omega_0$ be close enough to $\alpha$. If $G(0) = 1$, $G'(0) = -1$, and $|G''(0)| < \infty$, then the iteration scheme (6) has convergence order 4 with the error equation as follows:
$$e_{j+1} = \frac{1}{2c_1}(1 + p\,c_1)^2 (c_2 + q)\Big(-4\lambda q^2 p c_1^2 + G''(0)\,q^2 p c_1^2 + 2 q^2 p c_1^2 - 8\lambda q c_2 p c_1^2 + 4 q c_1^2 c_2 p + 2 G''(0)\,q c_2 p c_1^2 - 4\lambda c_2^2 p c_1^2 + G''(0)\,c_2^2 p c_1^2 + 2 c_2^2 p c_1^2 - 4 c_1 \lambda q^2 + c_1 G''(0)\,q^2 + 2 c_1 q^2 - 8 c_1 \lambda q c_2 + 2 c_1 G''(0)\,q c_2 - 4 c_1 \lambda c_2^2 + 2 c_1 c_3 - 2 c_1 c_2^2 + c_1 G''(0)\,c_2^2 - 2 s\Big)\, e_j^4, \qquad (7)$$
where $\lambda \in \mathbb{R}$; $p \neq 0$, $q$, and $s$ are free parameters; $c_1 = \phi'(\alpha)$; and $c_k = \frac{\phi^{(k)}(\alpha)}{k!\,\phi'(\alpha)}$, $k \geq 2$.
Proof. 
Let the error at the $j$th iteration be $e_j = \omega_j - \alpha$. By using Taylor series expansions of the function $\phi$ at the $j$th iteration, the proof is similar to those given in [14,19,21]. Hence, it is omitted. □
Remark 1.
Theorem 1 demonstrates that the convergence order of the iteration scheme (6) is 4 and its efficiency index is $4^{1/3} \approx 1.587$.
Remark 2.
If we choose $p = -\frac{1}{c_1}$ and $q = -c_2$, then the error Equation (7) becomes
$$e_{j+1} = \frac{-c_1^2 c_2^2 c_3^2 + s\,c_1 c_2^2 c_3}{c_1^2}\, e_j^7 + O(e_j^8). \qquad (8)$$
Further, by choosing $s = c_1 c_3$, the obtained method has a convergence order of 8. Therefore, it is concluded from the error analysis that the free parameters $p$, $q$, and $s$ in (7) play a significant role in converting the method without memory (6) into one with memory. These parameters are called self-accelerating parameters. Hence, the scheme (6) is extendable to a novel method with memory with an accelerated convergence order of 8 and a very high efficiency index of 2.

3. Two-Step Tri-Parametric Iterative Scheme With-Memory

In this section, without requiring any additional functional evaluations, we extend the Traub–Steffensen type fourth-order tri-parametric iteration scheme (6) to an eighth-order iteration scheme with memory. To achieve this goal, we employ Newton’s interpolation polynomials of an appropriate degree to recursively determine the self-accelerating parameters p , q , and s utilizing the already saved points at the current and previous iterations. We select the associated parameters p, q, and s in a manner that increases the fourth order of convergence of the scheme (6), as previously discussed.
If we choose $p = -\frac{1}{c_1}$, $q = -c_2$, and $s = c_1 c_3$, the order of the scheme (6) increases up to eight. Since the root $\alpha$, and consequently the values of $\phi'(\alpha)$, $\phi''(\alpha)$, and $\phi'''(\alpha)$, are not known, we approximate the self-accelerators $p$, $q$, and $s$ in (6) recursively by using Newton's interpolation polynomials of an appropriate degree at each iterative step as:
$$p = p_j = -\frac{1}{N_9'(\omega_j)} \approx -\frac{1}{\phi'(\alpha)}, \quad q = q_j = -\frac{N_{10}''(\chi_j)}{2 N_{10}'(\chi_j)} \approx -c_2, \quad s = s_j = \frac{N_{11}'''(z_j)}{6} \approx c_1 c_3, \qquad (9)$$
where, for any $j \geq 3$, $N_9(\xi) = N_9(\xi;\, \omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3})$ is the ninth-degree Newton interpolation polynomial passing through the ten points $x_0 = \omega_j,\ x_1 = z_{j-1},\ \ldots,\ x_9 = \omega_{j-3}$, given in divided-difference form by
$$N_9(\xi) = \phi(x_0) + \sum_{k=1}^{9} \phi[x_0, x_1, \ldots, x_k] \prod_{i=0}^{k-1} (\xi - x_i). \qquad (10)$$
Similarly, $N_{10}(\xi) = N_{10}(\xi;\, \chi_j, \omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3})$ is the tenth-degree Newton interpolation polynomial passing through the eleven points $x_0 = \chi_j,\ x_1 = \omega_j,\ \ldots,\ x_{10} = \omega_{j-3}$, given by
$$N_{10}(\xi) = \phi(x_0) + \sum_{k=1}^{10} \phi[x_0, x_1, \ldots, x_k] \prod_{i=0}^{k-1} (\xi - x_i), \qquad (11)$$
and $N_{11}(\xi) = N_{11}(\xi;\, z_j, \chi_j, \omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3})$ is the eleventh-degree Newton interpolation polynomial passing through the twelve points $x_0 = z_j,\ x_1 = \chi_j,\ \ldots,\ x_{11} = \omega_{j-3}$, given by
$$N_{11}(\xi) = \phi(x_0) + \sum_{k=1}^{11} \phi[x_0, x_1, \ldots, x_k] \prod_{i=0}^{k-1} (\xi - x_i). \qquad (12)$$
Finally, we present the following two-step tri-parametric family of iterative methods with-memory, i.e., by replacing the parameters p , q , and s in the scheme (6) with self-accelerators p j , q j , and s j , given in (9):
$$\chi_j = \omega_j + p_j\,\phi(\omega_j), \quad p_j = -\frac{1}{N_9'(\omega_j)}, \quad j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q_j\,\phi(\chi_j)}, \quad q_j = -\frac{N_{10}''(\chi_j)}{2 N_{10}'(\chi_j)},$$
$$\omega_{j+1} = z_j - G(t_j)\,\frac{\phi(z_j)}{\phi[z_j, \chi_j] + q_j\,\phi(\chi_j) + s_j(z_j - \chi_j)(z_j - \omega_j)} \cdot \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}, \quad s_j = \frac{N_{11}'''(z_j)}{6}. \qquad (13)$$
It is worth mentioning that the initial values $p_0$, $q_0$, and $s_0$ could be taken as very small positive values. Additionally, within each iteration, the self-accelerator $p_j$ is computed first, before the start of the step; $q_j$ is computed after $\chi_j$; and $s_j$ is computed after the computation of $z_j$.
The following theorem demonstrates that the newly presented iterative scheme with-memory (13) has a convergence order of 7.993 with a computational efficiency index of $7.993^{1/3} \approx 1.999$.
Theorem 2.
Let $\omega_0$ be an initial guess near enough to the simple zero $\alpha$ of a sufficiently differentiable function $\phi$. If the self-accelerators $p_j$, $q_j$, and $s_j$ are iteratively computed by using the formulae given in (9), then the R-order of convergence of the proposed iterative scheme with memory (13) is at least 7.993, with an efficiency index of $7.993^{1/3} \approx 1.999$.
Proof. 
The R-order of convergence of the iteration method (13) is ascertained using Herzberger's matrix method [31]. The lower bound of the convergence order of a one-step $m$-point method with memory,
$$\omega_j = \psi(\omega_{j-1}, \omega_{j-2}, \ldots, \omega_{j-m}),$$
is the spectral radius of its associated matrix $Q_m = (l_{i,k})$, $1 \leq i, k \leq m$, with the following elements:
$l_{1,k} = $ number of functional evaluations at the point $\omega_{j-k}$, $k = 1, 2, \ldots, m$;
$l_{i,i-1} = 1$, for $i = 2, 3, \ldots, m$;
$l_{i,k} = 0$, otherwise.
Likewise, the lower bound of the order of an $m$-step iterative method $\psi = \psi_1 \circ \psi_2 \circ \cdots \circ \psi_m$ is the spectral radius of the product $Q_1 \cdot Q_2 \cdots Q_m$, where the matrices $Q_t$ correspond to the iteration steps $\psi_t$, $1 \leq t \leq m$.
According to the scheme (13), the three steps depend on the saved points as follows:
$$\omega_{j+1} = \psi_1(z_j, \chi_j, \omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3}),$$
$$z_j = \psi_2(\chi_j, \omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3}, z_{j-4}),$$
$$\chi_j = \psi_3(\omega_j, z_{j-1}, \chi_{j-1}, \omega_{j-1}, z_{j-2}, \chi_{j-2}, \omega_{j-2}, z_{j-3}, \chi_{j-3}, \omega_{j-3}, z_{j-4}, \chi_{j-4}),$$
so the associated $12 \times 12$ matrices $Q_1$, $Q_2$, and $Q_3$ have first rows $(1,1,1,1,1,1,1,1,1,1,1,1)$, $(1,1,1,1,1,1,1,1,1,1,1,0)$, and $(1,1,1,1,1,1,1,1,1,1,0,0)$, respectively, unit entries on the subdiagonal, and zeros elsewhere.
Hence, we obtain the product matrix
$$Q = Q_1 \cdot Q_2 \cdot Q_3,$$
whose first three rows are $(4,4,4,4,4,4,4,4,4,4,0,0)$, $(2,2,2,2,2,2,2,2,2,2,0,0)$, and $(1,1,1,1,1,1,1,1,1,1,0,0)$, and whose rows $i = 4, \ldots, 12$ each contain a single unit entry, at position $(i, i-3)$.
The matrix $Q$ has the eigenvalues $0$ (with multiplicity eight), $7.993145621$, $-0.687271071$, and $-0.152937275 \pm 0.8394933233i$. As a result, the spectral radius of $Q$ is $\rho(Q) = 7.993145621$. Hence, we conclude that the order of convergence of the proposed two-step iterative scheme with memory (13) is at least 7.993, with an efficiency index of $7.993^{1/3} \approx 1.999$. □
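The spectral radius claim can be checked numerically. The sketch below assembles a $12 \times 12$ matrix following the structure of the product $Q_1 \cdot Q_2 \cdot Q_3$ described in the proof (our reconstruction, an assumption) and estimates its dominant eigenvalue by plain power iteration:

```python
# first three rows, then rows 4..12 with a single unit entry at (i, i-3)
rows = [[4] * 10 + [0, 0], [2] * 10 + [0, 0], [1] * 10 + [0, 0]]
for i in range(9):
    rows.append([1 if c == i else 0 for c in range(12)])

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

v = [1.0] * 12
rho = 0.0
for _ in range(200):               # power iteration: for this nonnegative
    w = matvec(rows, v)            # matrix it converges to the spectral radius
    rho = max(abs(x) for x in w)
    v = [x / rho for x in w]
print(rho)                         # ~7.9931456, matching the stated R-order
```

Equivalently, the nonzero eigenvalues of this matrix satisfy $\lambda^4 = 7\lambda^3 + 7\lambda^2 + 7\lambda + 4$, whose dominant root is $7.993145621\ldots$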

4. Special Cases

One can obtain several special cases of the iteration scheme with memory (13) by choosing weight functions $G(t_j)$ such that the conditions of Theorem 1, i.e., $G(0) = 1$, $G'(0) = -1$, $|G''(0)| < \infty$, are satisfied. Here, we present two simple special cases of our iteration scheme (13) as follows.
Case 1: By choosing $G(t_j) = 1 - t_j$ (where $t_j = \frac{\phi(z_j)}{\phi(\omega_j)}$) in the scheme (13), we obtain the following specific method with memory, denoted by $SM_1$:
$$\chi_j = \omega_j + p_j\,\phi(\omega_j), \quad p_j = -\frac{1}{N_9'(\omega_j)}, \quad j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q_j\,\phi(\chi_j)}, \quad q_j = -\frac{N_{10}''(\chi_j)}{2 N_{10}'(\chi_j)},$$
$$\omega_{j+1} = z_j - \left(1 - \frac{\phi(z_j)}{\phi(\omega_j)}\right)\frac{\phi(z_j)}{\phi[z_j, \chi_j] + q_j\,\phi(\chi_j) + s_j(z_j - \chi_j)(z_j - \omega_j)} \times \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}, \quad s_j = \frac{N_{11}'''(z_j)}{6}. \qquad (14)$$
Case 2: By taking $G(t_j) = \frac{1}{1 + t_j}$ (where $t_j = \frac{\phi(z_j)}{\phi(\omega_j)}$) in the scheme (13), we obtain another method with-memory, denoted by $SM_2$, given as follows:
$$\chi_j = \omega_j + p_j\,\phi(\omega_j), \quad p_j = -\frac{1}{N_9'(\omega_j)}, \quad j \geq 0,$$
$$z_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + q_j\,\phi(\chi_j)}, \quad q_j = -\frac{N_{10}''(\chi_j)}{2 N_{10}'(\chi_j)},$$
$$\omega_{j+1} = z_j - \frac{\phi(\omega_j)}{\phi(\omega_j) + \phi(z_j)}\cdot\frac{\phi(z_j)}{\phi[z_j, \chi_j] + q_j\,\phi(\chi_j) + s_j(z_j - \chi_j)(z_j - \omega_j)} \times \frac{\phi(\omega_j) + \lambda\,\phi(z_j)}{\phi(\omega_j) + (\lambda - 2)\,\phi(z_j)}, \quad s_j = \frac{N_{11}'''(z_j)}{6}. \qquad (15)$$

5. Numerical Experiments and Applications

In this section, we test our two-step tri-parametric methods with-memory (14) and (15), denoted by $SM_1$ and $SM_2$, respectively, with the help of the different nonlinear functions given in Examples 1–7. To avoid the loss of significant digits and to achieve high accuracy, we have used arbitrary-precision arithmetic with 1000 significant digits in the programming package Maple 18. The formula to compute the computational order of convergence ($COC$) is given as follows [32]:
$$COC \approx \frac{\ln\left|\phi(\omega_{j+1})\,/\,\phi(\omega_j)\right|}{\ln\left|\phi(\omega_j)\,/\,\phi(\omega_{j-1})\right|}.$$
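The COC formula is straightforward to evaluate; because the residuals shrink below double precision after a few steps, the sketch below (an illustration using Newton's method on $\phi(\omega) = \omega^2 - 2$, not one of the compared schemes) mimics the paper's multi-precision setting with Python's `decimal` module:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60                   # 60 significant digits

phi = lambda x: x * x - 2
dphi = lambda x: 2 * x

w, residuals = Decimal("1.5"), []
for _ in range(5):
    w = w - phi(w) / dphi(w)             # Newton iteration
    residuals.append(abs(phi(w)))

f0, f1, f2 = residuals[-3:]              # last three residuals
coc = (f2 / f1).ln() / (f1 / f0).ln()
print(coc)                               # ~2, Newton's theoretical order
```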
For all the comparisons, we have chosen the initial parameter values $\sigma_{1,0} = \sigma_{2,0} = \sigma_{3,0} = \delta_{1,0} = \delta_{2,0} = \delta_{3,0} = p_0 = q_0 = s_0 = 0.01$ to start the iterations. We compare the accuracy and efficiency of our proposed iteration schemes $SM_1$ (for $\lambda = 2$) and $SM_2$ (for $\lambda = 1$) with the existing two-step methods with-memory of Abdullah et al. [30], denoted by $SH$; Zafar et al. [27], denoted by $FZ$; Zaka Ullah et al. [28], denoted by $ZK$; Wang et al. [33], denoted by $XW$; Choubey et al. [19], denoted by $NC$; and Choubey et al. [20], denoted by $JN$; described as follows:
Method SH:
$$w_j = \omega_j - \frac{\phi(\omega_j)}{\phi'(\omega_j) - L_j\,\phi(\omega_j)}, \quad j \geq 0,$$
$$\omega_{j+1} = w_j - \frac{2\,\phi(w_j)\,\phi'(w_j)}{2\,\phi'^2(w_j) - \phi(w_j)\,T_{\phi''}(w_j)},$$
where
$$T_{\phi''}(w_j) = \frac{2}{w_j - \omega_j}\left(\phi'(\omega_j) + 2\,\phi'(w_j) - 3\,\phi[\omega_j, w_j]\right)$$
and
$$L_j = \frac{2\,\phi[\omega_j, \omega_j, w_{j-1}] + 2\,\phi[\omega_j, \omega_j, w_{j-1}, \omega_{j-1}](\omega_j - w_{j-1}) + 2\,\phi[\omega_j, \omega_j, w_{j-1}, \omega_{j-1}, \omega_{j-1}](\omega_j - w_{j-1})(\omega_j - \omega_{j-1})}{2\,\phi'(\omega_j)}.$$
Method FZ:
$$\chi_j = \omega_j + \sigma_{1,j}\,\phi(\omega_j), \quad \sigma_{1,j} = -\frac{1}{N_3'(\omega_j)}, \quad j \geq 0,$$
$$r_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, \chi_j] + \sigma_{2,j}\,\phi(\chi_j)}, \quad \sigma_{2,j} = -\frac{N_4''(\chi_j)}{2 N_4'(\chi_j)},$$
$$\omega_{j+1} = r_j - \frac{1}{1 + k_j}\left[1 - \left(\frac{\phi(r_j)}{\phi(\omega_j)}\right)^2\right]^{-1}\frac{\phi(r_j)}{\phi[\chi_j, r_j] + \sigma_{2,j}\,\phi(\chi_j) + \sigma_{3,j}(r_j - \chi_j)(r_j - \omega_j)}, \quad k_j = \frac{\phi(r_j)}{\phi(\omega_j)}, \quad \sigma_{3,j} = \frac{1}{6} N_5'''(r_j),$$
where, for $j \geq 1$,
$$N_3(\xi) = \phi(\omega_j) + (\xi - \omega_j)\,\phi[\omega_j, r_{j-1}] + (\xi - \omega_j)(\xi - r_{j-1})\,\phi[\omega_j, r_{j-1}, \chi_{j-1}] + (\xi - \omega_j)(\xi - r_{j-1})(\xi - \chi_{j-1})\,\phi[\omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}],$$
$$N_4(\xi) = \phi(\chi_j) + (\xi - \chi_j)\,\phi[\chi_j, \omega_j] + (\xi - \chi_j)(\xi - \omega_j)\,\phi[\chi_j, \omega_j, r_{j-1}] + (\xi - \chi_j)(\xi - \omega_j)(\xi - r_{j-1})\,\phi[\chi_j, \omega_j, r_{j-1}, \chi_{j-1}] + (\xi - \chi_j)(\xi - \omega_j)(\xi - r_{j-1})(\xi - \chi_{j-1})\,\phi[\chi_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}],$$
and
$$N_5(\xi) = \phi(r_j) + (\xi - r_j)\,\phi[r_j, \chi_j] + (\xi - r_j)(\xi - \chi_j)\,\phi[r_j, \chi_j, \omega_j] + (\xi - r_j)(\xi - \chi_j)(\xi - \omega_j)\,\phi[r_j, \chi_j, \omega_j, r_{j-1}] + (\xi - r_j)(\xi - \chi_j)(\xi - \omega_j)(\xi - r_{j-1})\,\phi[r_j, \chi_j, \omega_j, r_{j-1}, \chi_{j-1}] + (\xi - r_j)(\xi - \chi_j)(\xi - \omega_j)(\xi - r_{j-1})(\xi - \chi_{j-1})\,\phi[r_j, \chi_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}].$$
Method ZK:
$$s_j = \omega_j + \delta_{1,j}\,\phi(\omega_j), \quad \delta_{1,j} = -\frac{1}{N_6'(\omega_j)}, \quad j \geq 0,$$
$$d_j = \omega_j - \frac{\phi(\omega_j)}{\phi[\omega_j, s_j] + \delta_{2,j}\,\phi(s_j)}, \quad \delta_{2,j} = -\frac{N_7''(s_j)}{2 N_7'(s_j)},$$
$$\omega_{j+1} = d_j - \frac{\phi(d_j)}{\phi[\omega_j, d_j] + \phi[s_j, \omega_j, d_j](d_j - \omega_j) + \delta_{3,j}(d_j - \omega_j)(d_j - s_j)}, \quad \delta_{3,j} = \frac{N_8'''(d_j)}{6},$$
where, for $j \geq 2$, $N_6(\xi)$ is the sixth-degree interpolation polynomial passing through the points $\omega_j, d_{j-1}, \omega_{j-1}, s_{j-1}, d_{j-2}, s_{j-2}, \omega_{j-2}$; $N_7(\xi)$ is the seventh-degree interpolation polynomial passing through the points $s_j, \omega_j, d_{j-1}, \omega_{j-1}, s_{j-1}, d_{j-2}, s_{j-2}, \omega_{j-2}$; and $N_8(\xi)$ is the eighth-degree interpolation polynomial passing through the points $d_j, s_j, \omega_j, d_{j-1}, \omega_{j-1}, s_{j-1}, d_{j-2}, s_{j-2}, \omega_{j-2}$.
Method NC:
$$\chi_j = \omega_j - \frac{f(\omega_j)}{f'(\omega_j) - L_j\,f(\omega_j)},$$
$$\omega_{j+1} = \chi_j - \frac{2\,f(\chi_j)\,p_1(\omega_j, \chi_j)}{2\,p_1^2(\omega_j, \chi_j) - f(\chi_j)\,p_2(\omega_j, \chi_j)},$$
where $p_1(\omega_j, \chi_j) = 2\,\frac{f(\chi_j) - f(\omega_j)}{\chi_j - \omega_j} - f'(\omega_j)$ and $p_2(\omega_j, \chi_j) = \frac{2}{\chi_j - \omega_j}\left(\frac{f(\chi_j) - f(\omega_j)}{\chi_j - \omega_j} - f'(\omega_j)\right)$.
Method JN:
$$\chi_j = \omega_j - \frac{f(\omega_j)}{f'(\omega_j) - L_j\,f(\omega_j)},$$
$$\omega_{j+1} = \chi_j - \frac{2\,f(\omega_j)\,f(\chi_j)\,f'(\chi_j)}{2\,f(\omega_j)\,f'^2(\chi_j) - f(\omega_j)^2\,f''(\chi_j) + f(\omega_j)\,f'(\chi_j)\,f(\chi_j)}.$$
For the methods with-memory $SH$, $NC$, and $JN$, the values of the parameter $L_j$ have been computed by the following formulas:
Formula 1:
$$L_j = \frac{H_2''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_2(\omega) = \phi(\omega_j) + \phi[\omega_j, \omega_j](\omega - \omega_j) + \phi[\omega_j, \omega_j, \chi_{j-1}](\omega - \omega_j)^2$ and $H_2''(\omega_j) = 2\,\phi[\omega_j, \omega_j, \chi_{j-1}]$.
Formula 2:
$$L_j = \frac{H_3''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_3(\omega) = H_2(\omega) + \phi[\omega_j, \omega_j, \chi_{j-1}, \omega_{j-1}](\omega - \omega_j)^2(\omega - \chi_{j-1})$ and $H_3''(\omega_j) = 2\,\phi[\omega_j, \omega_j, \chi_{j-1}] + 2\,\phi[\omega_j, \omega_j, \chi_{j-1}, \omega_{j-1}](\omega_j - \chi_{j-1})$.
Formula 3:
$$L_j = \frac{H_4''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_4(\omega) = H_3(\omega) + \phi[\omega_j, \omega_j, \chi_{j-1}, \omega_{j-1}, \omega_{j-1}](\omega - \omega_j)^2(\omega - \chi_{j-1})(\omega - \omega_{j-1})$ and $H_4''(\omega_j) = H_3''(\omega_j) + 2\,\phi[\omega_j, \omega_j, \chi_{j-1}, \omega_{j-1}, \omega_{j-1}](\omega_j - \chi_{j-1})(\omega_j - \omega_{j-1})$.
Method XW:
$$\chi_j = \omega_j - \frac{\phi(\omega_j)}{\phi'(\omega_j) + L_j\,\phi(\omega_j)},$$
$$r_j = \chi_j - \frac{\phi(\chi_j)}{2\,\phi[\omega_j, \chi_j] - \phi'(\omega_j) + L_j\,\phi(\chi_j)},$$
$$\omega_{j+1} = r_j - \left[1 + \frac{3}{2}\left(a_j - b_j^3\right)\right]\frac{(\beta + \xi)\,\phi(r_j)}{2\,\xi\,\phi[\chi_j, r_j] + (\beta - \xi)\left(\phi'(\omega_j) + T\,\phi(r_j)\right)},$$
where $a_j = \frac{\phi(r_j)}{\phi(\omega_j)}$, $b_j = \frac{\phi(\chi_j)}{\phi(\omega_j)}$, $\beta = \chi_j - \omega_j$, $\xi = r_j - \omega_j$, and $T \in \mathbb{R}$.
For the method with-memory $XW$, the value of the parameter $L_j$ has been computed by one of the following formulas:
Formula 1:
$$L_j = \frac{H_2''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_2(\omega) = \phi(\omega_j) + \phi[\omega_j, \omega_j](\omega - \omega_j) + \phi[\omega_j, \omega_j, r_{j-1}](\omega - \omega_j)^2$ and $H_2''(\omega_j) = 2\,\phi[\omega_j, \omega_j, r_{j-1}]$.
Formula 2:
$$L_j = \frac{H_3''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_3(\omega) = H_2(\omega) + \phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}](\omega - \omega_j)^2(\omega - r_{j-1})$ and $H_3''(\omega_j) = 2\,\phi[\omega_j, \omega_j, r_{j-1}] + 2\,\phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}](\omega_j - r_{j-1})$.
Formula 3:
$$L_j = \frac{H_4''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_4(\omega) = H_3(\omega) + \phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}](\omega - \omega_j)^2(\omega - r_{j-1})(\omega - \chi_{j-1})$ and $H_4''(\omega_j) = H_3''(\omega_j) + 2\,\phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}](\omega_j - r_{j-1})(\omega_j - \chi_{j-1})$.
Formula 4:
$$L_j = \frac{H_5''(\omega_j)}{2\,\phi'(\omega_j)},$$
where $H_5(\omega) = H_4(\omega) + \phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}, \omega_{j-1}](\omega - \omega_j)^2(\omega - r_{j-1})(\omega - \chi_{j-1})(\omega - \omega_{j-1})$ and $H_5''(\omega_j) = H_4''(\omega_j) + 2\,\phi[\omega_j, \omega_j, r_{j-1}, \chi_{j-1}, \omega_{j-1}, \omega_{j-1}](\omega_j - r_{j-1})(\omega_j - \chi_{j-1})(\omega_j - \omega_{j-1})$.
Example 1.
Location of maximum energy distribution:
Planck’s radiation law is given by
$$\alpha = \frac{8\pi k u}{\delta^5\left(e^{ku/(\delta p\phi)} - 1\right)},$$
where $\alpha$ is the energy density, $\delta$ is the wavelength of radiation, $\phi$ is the absolute temperature, $u$ is Planck's constant, $p$ is Boltzmann's constant, and $k$ is the speed of light. To maximize the energy density and determine the wavelength, we first evaluate
$$\frac{d\alpha}{d\delta} = \frac{8\pi k u}{\delta^6\left(e^{ku/(\delta p\phi)} - 1\right)}\left(\frac{\left(ku/(\delta p\phi)\right)e^{ku/(\delta p\phi)}}{e^{ku/(\delta p\phi)} - 1} - 5\right).$$
The factor outside the parentheses vanishes in the limits $\delta \to 0$ and $\delta \to \infty$, although the energy density attains minima in both cases. The maximum we are seeking arises when the term inside the parentheses is zero. This happens when
1 − ku/(5 δ_max p ϕ) = e^{−ku/(δ_max p ϕ)},
where δ_max is the wavelength that maximizes the energy density. Setting ω = ku/(δ_max p ϕ), the above equation reduces to
1 − ω/5 = e^{−ω}.
Now we can define the following nonlinear function:
ϕ₁(ω) = e^{−ω} − 1 + ω/5.
The problem is to solve the nonlinear Equation (36), which has two roots, 4.965114232 and 0. We take the exact root α = 0 and the initial approximation ω₀ = 2.5. The computational results are depicted in Table 1, where 4.35(−1) denotes 4.35 × 10⁻¹. It is observed that the accuracy, computational order of convergence (COC), and efficiency index (EI) of our proposed schemes SM1 and SM2 are better than those of the others for the test problem ϕ₁(ω).
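Table 1 reports the computational order of convergence (COC), which is typically estimated from consecutive errors as ρ ≈ ln|e_{k+1}/e_k| / ln|e_k/e_{k−1}|. The sketch below is a hypothetical illustration rather than one of the compared schemes: it uses plain Newton iteration on ϕ₁ only to generate a convergent sequence and then estimates its order. From ω₀ = 2.5, Newton happens to converge to the nonzero root, the constant appearing in Wien's displacement law.

```python
import math

def phi1(w):
    """Planck-law test function phi_1(w) = exp(-w) - 1 + w/5; roots 0 and ~4.965114232."""
    return math.exp(-w) - 1.0 + w / 5.0

def dphi1(w):
    return -math.exp(-w) + 0.2

def coc(iterates, alpha):
    """Computational order of convergence from the last four iterates:
    rho ~ ln|e_{k+1}/e_k| / ln|e_k/e_{k-1}|."""
    e = [abs(x - alpha) for x in iterates[-4:]]
    return math.log(e[3] / e[2]) / math.log(e[2] / e[1])

# Newton's method, used here only to generate a test sequence from w0 = 2.5
xs = [2.5]
for _ in range(4):
    xs.append(xs[-1] - phi1(xs[-1]) / dphi1(xs[-1]))
```

The estimate fluctuates before the asymptotic regime is reached, which is why the tables report it only after a few iterations.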
Example 2.
Vertical stress:
Boussinesq's formula computes the vertical stress σ_s induced within an elastic material at a specific point beneath the edge of a rectangular strip footing subjected to a uniform pressure q, as follows:
σ_s = (q/π)(ω + cos(ω) sin(ω)).
To determine the value of ω at which the vertical stress equals 25 percent of the applied footing stress q, we have to solve the following equation:
ϕ₂(ω) = (ω + cos(ω) sin(ω))/π − 1/4.
The exact root of Equation (38) is 0.415856. We take the initial guess ω₀ = 1.1 for this root to obtain the numerical results shown in Table 2.
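As a quick, method-independent check of the quoted root, one can bracket it and apply plain bisection; bisection is used here only as a verification device, not as one of the compared schemes.

```python
import math

def phi2(w):
    """Boussinesq example: phi_2(w) = (w + cos(w)*sin(w))/pi - 1/4."""
    return (w + math.cos(w) * math.sin(w)) / math.pi - 0.25

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:   # sign change in [a, m]
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisect(phi2, 0.0, 1.0)  # phi2 changes sign on [0, 1]
```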
Example 3.
We take the standard nonlinear test equation as follows:
ϕ₃(ω) = e^{ω² − 3ω} sin(ω) + ln(ω² + 1).
For the above nonlinear function, we take α = 0 as the exact root and ω₀ = 0.5 as an initial guess. The computational results are shown in Table 3, which illustrates that our proposed schemes SM1 and SM2 perform better in terms of convergence speed and efficiency.
Example 4.
We consider another standard nonlinear test equation as follows:
ϕ₄(ω) = 1/ω⁴ − ω² − 1/ω + 1.
Here, we take the exact root α = 1 and the initial approximation ω₀ = 0.2. The numerical results for comparison are illustrated in Table 4, which show that the computational order of convergence (COC) and efficiency index (EI) of the proposed schemes SM1 and SM2 are better than those of the earlier known schemes SH, NC, JN, XW, FZ, and ZK.
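Clearing denominators shows that ϕ₄(ω) = 0 is equivalent to the sextic ω⁶ − ω⁴ + ω³ − 1 = 0, which accounts for the six roots displayed in the basin plots of Section 6. A quick numerical check of the two quoted real roots:

```python
def phi4(w):
    """phi_4(w) = 1/w^4 - w^2 - 1/w + 1 (w != 0)."""
    return 1.0 / w ** 4 - w ** 2 - 1.0 / w + 1.0

def sextic(w):
    """Equivalent polynomial form after multiplying phi_4 through by -w^4."""
    return w ** 6 - w ** 4 + w ** 3 - 1.0
```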
Example 5.
In addition, we pick another standard nonlinear test problem involving a trigonometric function:
ϕ₅(ω) = ω² tan(ω) − ω³/8.
The above equation has 3 real roots: 0, 2, and 4.274782271. We take α = 0 as the exact root and ω₀ = 1.5 as an initial guess for this problem. The computational results for the function ϕ₅(ω) are shown in Table 5, from which it is seen that the proposed iterative schemes SM1 and SM2 have faster convergence and better efficiency than the iterative schemes SH, NC, JN, XW, FZ, and ZK.
Example 6.
We take one more standard nonlinear equation, as follows:
ϕ₆(ω) = ω³ + ω² − 3ω − 3.
Here, we take α = 1.732050807 as an exact root. The comparison results by taking the initial guess ω 0 = 3.5 are shown in Table 6. It is observed from Table 6 that the schemes S M 1 and S M 2 perform better than the existing schemes F Z and Z K in terms of convergence and efficiency.
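The cubic factors as ω³ + ω² − 3ω − 3 = (ω + 1)(ω² − 3), so its roots are −1 and ±√3 ≈ ±1.732050807, the last of which is the root targeted above. A quick check of the factorization:

```python
def phi6(w):
    return w ** 3 + w ** 2 - 3.0 * w - 3.0

def factored(w):
    """(w + 1)(w^2 - 3) expands exactly to phi_6."""
    return (w + 1.0) * (w ** 2 - 3.0)
```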
Example 7.
Blood rheology model:
Blood rheology is the study of how blood flows and behaves in the circulatory system; modeling it is important for understanding various physiological and pathological conditions related to blood flow, and numerical iterative methods are commonly used to solve the governing equations. Since blood is a non-Newtonian fluid, it is modeled as a Casson fluid. According to this concept, flow in a tube behaves like a plug with little deformation, and a velocity gradient develops close to the wall. We take into account the following nonlinear equation:
ϕ₇(ω) = ω⁸/441 − (8/63)ω⁵ − 0.05714285714 ω⁴ + (16/9)ω² − 3.624489796 ω + 0.36.
Here, ω represents the plug flow of the Casson fluid. One of the solutions of ϕ₇(ω) = 0 is 0.1046986515. We choose ω₀ = 2.5 as an initial approximation to solve ϕ₇(ω) = 0. Table 7 displays the calculated results.
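The coefficients of ϕ₇ are rational numbers; evaluating the polynomial at the quoted root gives a residual near zero and confirms the reading of the displayed coefficients (a small sketch: exact rational coefficients, floating-point evaluation).

```python
from fractions import Fraction

# Coefficients of phi_7 read off from the displayed rational form
COEFFS = {
    8: Fraction(1, 441),
    5: Fraction(-8, 63),
    4: Fraction(-5714285714, 100000000000),
    2: Fraction(16, 9),
    1: Fraction(-3624489796, 1000000000),
    0: Fraction(36, 100),
}

def phi7(w):
    """Evaluate phi_7 at a floating-point argument."""
    return sum(float(c) * w ** d for d, c in COEFFS.items())

residual = abs(phi7(0.1046986515))  # should be close to zero at the quoted root
```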
It is obvious from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 that the special cases SM1 and SM2 of our proposed iterative scheme are more reliable and efficient than the earlier iterative methods SH, NC, JN, XW, FZ, and ZK in terms of accuracy, computational order of convergence (COC), and efficiency index (EI) for the different test problems.
Furthermore, Figure 3 demonstrates the graphical comparison of the proposed iterative techniques SM1 and SM2 with the other methods in terms of the absolute error |ω_k − α|, while Figure 4 shows the comparison in terms of computational order of convergence (COC), efficiency index (EI), and CPU time over the first three iterations for solving ϕ₁(ω)–ϕ₇(ω). From Figure 3 and Figure 4, it is observed that the proposed schemes SM1 and SM2 are more robust than the others.

6. Fractals of Basins of Attraction

In this section, we compare the fractal behavior of the proposed iteration method (14) for different values of λ with that of the iterative schemes SH (17), FZ (20), ZK (21), and NC (22) discussed in Section 5. We compare their fractal behavior in terms of basins of attraction in the complex plane, which helps us to better understand their stability and convergence. Let ϕ be a nonlinear function to be solved by an iterative algorithm; in general, the boundary between the basins of attraction for distinct zeros of ϕ is a complicated fractal set. By assigning a specific color to each basin, we generally obtain very beautiful fractals, which illustrate the performance of iterative methods. Stewart [5] and Varona [6] presented graphical comparisons between some classical iterative methods in 2001 and 2002, respectively, and it has since become common practice to compare iteration methods graphically with the help of fractal images of basins of attraction. The book of Kalantari [34] provides several artistic fractal pictures of different polynomials, and more recently this kind of comparison has been studied in the papers [7,25,35,36,37]. All of these papers compare methods by plotting basins of attraction for simple polynomials of the form zⁿ − 1 in the complex plane. We investigate the convergence regions of the different methods by representing their basins of attraction for a variety of nonlinear equations, including the real-life problems discussed in Section 5.
To plot fractals of basins of attraction, we choose initial guesses z₀ from a grid of 500 × 500 points within a square D ⊂ ℂ that contains all of the roots of ϕ(z) = 0, each root being allocated a unique color. For a given initial point in D, an iteration scheme either converges to one of the roots within 25 iterations, in which case the point is painted with the color assigned to that root, or diverges, in which case it is usually marked with the color black. A brighter color within a basin indicates that fewer iterations are required for the method to converge to the root.
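The plotting procedure just described can be sketched in a few lines. Purely for illustration, Newton's method on ϕ(z) = z³ − 1 (the example of Figure 1) stands in for the compared schemes; each grid point is labeled with the index of the root its orbit reaches within 25 iterations, or −1 for divergence (black).

```python
import cmath

def basins(f, df, roots, span=2.0, n=500, max_iter=25, tol=1e-6):
    """Label each point of an n x n grid on [-span, span]^2 by the index of the
    root its Newton orbit reaches within max_iter steps, or -1 (divergence)."""
    grid = []
    for i in range(n):
        y = -span + 2.0 * span * i / (n - 1)
        row = []
        for j in range(n):
            x = -span + 2.0 * span * j / (n - 1)
            z = complex(x, y)
            label = -1
            for _ in range(max_iter):
                d = df(z)
                if d == 0:
                    break
                z -= f(z) / d  # Newton step
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    label = hits[0]
                    break
            row.append(label)
        grid.append(row)
    return grid

# cube roots of unity, as in Figure 1
ROOTS = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
```

Mapping labels to colors, with brightness scaled by the iteration count at which the orbit landed, reproduces pictures like Figures 5–11.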
Basins of attraction of ϕ₁(z) = 0 are shown in Figure 5; this equation has two roots, 0 and 4.96511, contained in D = [−5, 5] × [−5, 5] and represented by the colors cyan and magenta, respectively. Due to limited space, we display the roots with a reduced number of significant digits. Figure 5 illustrates that the methods ZK and SM (λ = 0.1) show wide basins of attraction compared with those of SH (L₀ = 0.1), FZ, NC (L₀ = 0.1), and SM (λ = 2, 1), while the fastest convergence is obtained by SH and SM (λ = 0.1).
For the nonlinear function ϕ₂(z), which has the root 0.415856, we take D = [−2, 2] × [−2, 2] and assign the color cyan to each initial point in D for which the method converges to 0.415856. Fractals of basins for this problem are represented in Figure 6, which illustrates that all the methods possess similar regions of convergence except the method NC (L₀ = 0.1), which has several black regions. The method SM (λ = 0.1) converges fast for initial points near the root, since its basins are brighter than those of ZK and NC (L₀ = 0.1).
Similarly, we take D = [−2, 2] × [−2, 2] for ϕ₃(z) = 0, which has the root 0, and assign the color cyan to each initial point in D for which an iteration method converges to 0. Figure 7 represents the fractals of basins for this problem, which illustrate that the proposed methods SM (λ = 1), SM (λ = 0.5), and SM (λ = 0.1) provide wider basins of attraction and faster convergence for initial points near the root than FZ, ZK, and NC (L₀ = 0.1).
We take D = [−2, 2] × [−2, 2] for the nonlinear function ϕ₄(z), which has six roots, 1, −1.40360, −0.454979 − 0.649504i, −0.454979 + 0.649504i, 0.656780 − 0.837592i, and 0.656780 + 0.837592i, represented by green, cyan, yellow, orange, red, and magenta, respectively. Fractals of basins for ϕ₄(z) = 0 are represented in Figure 8, which illustrates that the methods SM (λ = 0.1) and SH (L₀ = 0.1) are the best, since they produce simple and wide regions of convergence compared with the other methods.
Fractal images of basins of attraction of ϕ₅(z) = 0 are shown in Figure 9; this equation has three real roots, 0, 2, and 4.27478, contained in D = [−5, 5] × [−5, 5] and represented by cyan, magenta, and yellow, respectively. Figure 9 illustrates that all the methods produce wide regions of divergence (black regions); however, the methods SM (λ = 0.1, 0.5) and SH (L₀ = 0.1) perform comparatively better in terms of speed and regions of convergence.
For the nonlinear function ϕ₆(z), which has the roots −1, −1.73205, and 1.73205, we take D = [−5, 5] × [−5, 5] and assign the colors magenta, cyan, and yellow to each initial point in D for which the method converges to −1, −1.73205, and 1.73205, respectively. Fractals of basins for ϕ₆(z) = 0 are represented in Figure 10, which illustrates that the proposed method SM (λ = 0.1) provides fast convergence with simple fractals and wide regions of convergence compared with all others except the methods SH (L₀ = 0.1) and NC (L₀ = 0.1).
Basins of attraction of ϕ₇(z) = 0 are shown in Figure 11; this equation has eight roots contained in D = [−5, 5] × [−5, 5], namely 0.104698, 3.82238, −2.27869 − 1.98747i, −2.27869 + 1.98747i, −1.23876 − 3.40852i, −1.23876 + 3.40852i, 1.55391 − 0.940414i, and 1.55391 + 0.940414i, represented by the colors cyan, green, orange, yellow, red, magenta, pink, and brown, respectively. Figure 11 illustrates that the proposed methods SM (λ = 0.1, 0.5) are the best among all, yielding fast convergence, simple fractals, and wide regions of convergence. However, none of the methods converge to the roots 1.55391 − 0.940414i and 1.55391 + 0.940414i.
It is observed that, for all of the problems, the proposed iteration scheme SM with λ = 0.1 provides wider and brighter basins of attraction with simple fractals, which indicates its stability and robustness. Furthermore, smaller values of the parameter λ result in wider basins of attraction for the proposed iteration schemes.

7. Conclusions

In this manuscript, we have introduced derivative-free two-step iteration methods of optimal orders four and eight, without memory and with memory, respectively, for solving nonlinear equations. The suggested techniques are higher-order two-step variants of the one-step Traub method of optimal order two. It is to be remarked that the eighth-order convergence of the proposed iteration technique with memory is achieved using only three functional evaluations. The proposed two-step technique's efficiency index is 7.993^{1/3} ≈ 2, making it the highest in the literature and better than the efficiency of several multi-step iteration schemes with memory. The proposed two-step iteration methods with memory compete with any j-point optimal method without memory, since their efficiency index equals 2. To evaluate the effectiveness of the suggested iterative techniques and to support the theoretical findings, several numerical examples and real-world applications are given. The numerical outcomes of the proposed methods are presented in terms of absolute error, computational order of convergence (COC), and CPU time (s). Further, we have investigated and compared the fractal behavior of the different iteration methods using fractals of basins of attraction for several nonlinear equations, including real-life problems. The fractals of basins of attraction illustrate the robustness and superiority of the proposed iteration methods without memory; their stability is affirmed by the simple fractals and wider basins of attraction in comparison with the existing iteration methods. Additionally, the numerical tests illustrate that the proposed two-step Traub–Steffensen-type iteration schemes with memory outperform existing multi-step iteration schemes with and without memory in many situations. Further research can be conducted to explore general criteria for the selection of the free parameters.
The current study focuses on the solution of univariate nonlinear equations, while its extension to multivariate equations is left for future research.

Author Contributions

Conceptualization, M.-u.-D.J.; Methodology, M.-u.-D.J.; Validation, M.-u.-D.J., S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad); Investigation, M.-u.-D.J., S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad); Writing—original draft, M.-u.-D.J.; Writing—review & editing, S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad). All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding authors.

Acknowledgments

The authors are thankful to the editor and anonymous reviewers for their valuable suggestions and comments.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  2. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  3. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  4. Steffensen, I.F. Remarks on iteration. Scand. Actuar. J. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  5. Stewart, B.D. Attractor Basins of Various Root-Finding Methods. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2001. [Google Scholar]
  6. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–47. [Google Scholar] [CrossRef]
  7. Varona, J.L. An Optimal Thirty-Second-Order Iterative Method for Solving Nonlinear Equations and a Conjecture. Qual. Theory Dyn. Syst. 2022, 21, 39. [Google Scholar] [CrossRef]
  8. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. Assoc. Comput. Math. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  9. King, R.F. A family of fourth-order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  10. Behl, R.; Alshomrani, A.S.; Chun, C. A general class of optimal eighth-order derivative free methods for nonlinear equations. J. Math. Chem. 2020, 58, 854–867. [Google Scholar] [CrossRef]
  11. Cordero, A.; Reyes, J.A.; Torregrosa, J.R.; Vassileva, M.P. Stability analysis of a new fourth-order optimal iterative scheme for nonlinear equations. Axioms 2024, 13, 34. [Google Scholar] [CrossRef]
  12. Moscoso-Martinez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Urena-Callay, G. Achieving Optimal Order in a Novel Family of Numerical Methods: Insights from Convergence and Dynamical Analysis Results. Axioms 2024, 13, 458. [Google Scholar] [CrossRef]
  13. Petković, L.D.; Petković, M.S.; Džunić, J. A class of three-point root-solvers of optimal order of convergence. Appl. Math. Comput. 2010, 216, 671–676. [Google Scholar] [CrossRef]
  14. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  15. Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algor. 2010, 54, 445–458. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Kumar, S.; Singh, H. A new class of derivative-free root solvers with increasing optimal convergence order and their complex dynamics. SEMA J. 2023, 8, 333–352. [Google Scholar] [CrossRef]
  17. Wang, X.; Liu, L. New eighth-order iterative methods for solving nonlinear equations. Comput. Appl. Math. 2010, 234, 1611–1620. [Google Scholar] [CrossRef]
  18. Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597. [Google Scholar] [CrossRef]
  19. Choubey, N.; Panday, B.; Jaiswal, J.P. Several two-point with memory iterative methods for solving non-linear equations. Afr. Mat. 2018, 29, 435–449. [Google Scholar] [CrossRef]
  20. Choubey, N.; Jaiswal, J.; Choubey, A. Family of multipoint with memory iterative schemes for solving nonlinear equations. Int. J. Appl. Comput. Math. 2022, 8, 83. [Google Scholar] [CrossRef]
  21. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Three-step iterative weight function scheme with memory for solving nonlinear problems. Math. Method. Appl. Sci. 2022. early view. [Google Scholar] [CrossRef]
  22. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Memory in the iterative processes for nonlinear problems. Math. Method. Appl. Sci. 2023, 46, 4145–4158. [Google Scholar] [CrossRef]
  23. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Method. Appl. Sci. 2023, 46, 12361–12377. [Google Scholar] [CrossRef]
  24. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef]
  25. Sharma, S.; Kansal, M. A modified Chebyshev–Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Method. Appl. Sci. 2023, 46, 12549–12569. [Google Scholar] [CrossRef]
  26. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Khaksar Haghani, F. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458. [Google Scholar]
  27. Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of Tri-parametric derivative free fourth order with and without memory iterative method. J. Nonli. Sci. Appl. 2016, 9, 1410–1423. [Google Scholar] [CrossRef]
  28. Ullah, M.Z.; Kosari, S.; Soleymani, F.; Haghani, F.K.; Al-Fhaid, A.S. A super-fast tri-parametric iterative method with memory. Appl. Math. Comput. 2016, 289, 486–491. [Google Scholar]
  29. Akram, S.; Khalid, M.; Junjua, M.-U.; Altaf, S.; Kumar, S. Extension of King’s Iterative Scheme by Means of Memory for Nonlinear Equations. Symmetry 2023, 15, 1116. [Google Scholar] [CrossRef]
  30. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2023, 70, 285–315. [Google Scholar] [CrossRef]
  31. Herzberger, J. Über matrixdarstellungen fur iterationverfahren bei nichtlinearen gleichungen. Computing 1974, 12, 215–222. [Google Scholar] [CrossRef]
  32. Jay, L. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429. [Google Scholar] [CrossRef]
  33. Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078. [Google Scholar] [CrossRef]
  34. Kalantari, B. Polynomial Root-Finding and Polynomiography; World Scientific: Singapore, 2009. [Google Scholar]
  35. Cordero, A.; Guasp, L.; Torregrosa, J.R. Choosing the most stable members of Kou’s family of iterative methods. J. Comput. Appl. Math. 2018, 330, 759–769. [Google Scholar] [CrossRef]
  36. Herceg, D.; Herceg, D. Eighth order family of iterative methods for nonlinear equations and their basins of attraction. J. Comput. Appl. Math. 2018, 343, 458–480. [Google Scholar] [CrossRef]
  37. Herceg, D.; Petković, I. Computer visualization and dynamic study of new families of root-solvers. J. Comput. Appl. Math. 2022, 401, 16. [Google Scholar] [CrossRef]
Figure 1. Basins of attraction of ϕ(z) = z³ − 1 using Newton's method (1) and Steffensen's method (2).
Figure 2. Basins of attraction of ϕ(z) = z³ − 1 using Traub's method (3).
Figure 3. Comparisons of various iterative methods with memory in terms of the absolute error |ω_j − α| for ϕ₁(ω)–ϕ₇(ω) in the first three iterations.
Figure 4. Comparisons of various iterative methods with memory in terms of COC, EI, and CPU time for ϕ₁(ω)–ϕ₇(ω).
Figure 5. Basins of attraction of ϕ 1 ( z ) using several iteration methods without memory.
Figure 6. Basins of attraction of ϕ 2 ( z ) using several iteration methods without memory.
Figure 7. Basins of attraction of ϕ 3 ( z ) using several iteration methods without memory.
Figure 8. Basins of attraction of ϕ 4 ( z ) using several iteration methods without memory.
Figure 9. Basins of attraction of ϕ 5 ( z ) using several iteration methods without memory.
Figure 10. Basins of attraction of ϕ 6 ( z ) using several iteration methods without memory.
Figure 11. Basins of attraction of ϕ 7 ( z ) using several iteration methods without memory.
Table 1. Numerical comparison of several iteration schemes with memory for ϕ 1 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.1 4.35 ( 1 ) 1.64 ( 5 ) 1.54 ( 34 ) 1.23 ( 34 ) 6.38 1.58 0.828
S H (17)–(25), L = 0.1 4.35 ( 1 ) 5.11 ( 5 ) 1.69 ( 33 ) 1.35 ( 33 ) 7.03 1.62 0.938
S H (17)–(26), L = 0.1 4.35 ( 1 ) 1.85 ( 6 ) 2.05 ( 45 ) 1.64 ( 45 ) 7.09 1.64 0.828
N C (22)–(24), L = 0.1 DDDD---
N C (22)–(25), L = 0.1 DDDD---
N C (22)–(26), L = 0.1 DDDD---
J N (23)–(24), L = 0.1 1.59 ( 1 ) 3.99 ( 5 ) 8.11 ( 28 ) 8.11 ( 28 ) 6.37 1.58 0.781
J N (23)–(25), L = 0.1 1.59 ( 1 ) 1.91 ( 5 ) 4.49 ( 32 ) 3.59 ( 32 ) 6.86 1.61 0.718
J N (23)–(26), L = 0.1 1.59 ( 3 ) 2.34 ( 5 ) 2.78 ( 34 ) 2.23 ( 34 ) 7.63 1.66 0.750
X W (27)–(28), L = 0.1 , T = 2 8.68 ( 1 ) 1.02 ( 3 ) 1.71 ( 23 ) 1.37 ( 23 ) 6.23 1.57 0.812
X W (27)–(29), L = 0.1 , T = 2 8.68 ( 1 ) 1.24 ( 3 ) 6.84 ( 23 ) 5.47 ( 23 ) 6.24 1.58 0.859
X W (27)–(30), L = 0.1 , T = 2 8.68 ( 1 ) 6.65 ( 4 ) 4.70 ( 25 ) 3.76 ( 25 ) 6.30 1.59 0.844
X W (27)–(31), L = 0.1 , T = 2 8.68 ( 1 ) 8.95 ( 4 ) 5.04 ( 24 ) 4.03 ( 24 ) 6.27 1.58 0.86
F Z (20) 3.27 ( 4 ) 3.28 ( 29 ) 4.61 ( 220 ) 3.69 ( 220 ) 7.63 1.96 0.446
Z K (21) 2.79 ( 4 ) 2.38 ( 30 ) 4.64 ( 226 ) 3.45 ( 226 ) 7.64 1.97 0.45
S M 1 (14) 2.68 ( 5 ) 3.34 ( 26 ) 2.68 ( 206 ) 2.15 ( 206 ) 8.61 2.03 0.467
S M 2 (15) 2.78 ( 6 ) 5.75 ( 33 ) 2.03 ( 260 ) 1.62 ( 260 ) 8.52 2.03 0.456
D stands for fails to converge.
Table 2. Numerical comparison of several iteration schemes with memory for ϕ 2 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.5 1.90 ( 3 ) 2.96 ( 20 ) 5.22 ( 133 ) 2.78 ( 133 ) 6.70 1.60 0.969
S H (17)–(25), L = 0.5 1.90 ( 3 ) 4.64 ( 21 ) 1.10 ( 144 ) 5.89 ( 145 ) 7.01 1.62 0.875
S H (17)–(26), L = 0.5 1.90 ( 3 ) 1.17 ( 22 ) 3.18 ( 162 ) 1.64 ( 162 ) 7.26 1.64 0.89
N C (22)–(24), L = 0.5 1.05 ( 2 ) 2.61 ( 10 ) 2.87 ( 45 ) 1.53 ( 45 ) 4.59 1.66 0.766
N C (22)–(25), L = 0.5 1.05 ( 2 ) 1.06 ( 10 ) 1.83 ( 49 ) 9.78 ( 50 ) 4.84 1.69 0.813
N C (22)–(26), L = 0.5 1.05 ( 2 ) 5.55 ( 12 ) 3.73 ( 58 ) 1.99 ( 58 ) 4.97 1.70 0.844
J N (23)–(24), L = 0.5 2.16 ( 3 ) 6.44 ( 17 ) 2.04 ( 96 ) 1.09 ( 96 ) 5.87 1.55 0.890
J N (23)–(25), L = 0.5 2.16 ( 3 ) 8.38 ( 18 ) 1.52 ( 107 ) 8.10 ( 108 ) 6.22 1.57 0.891
J N (23)–(26), L = 0.5 2.16 ( 3 ) 4.75 ( 20 ) 3.36 ( 127 ) 1.79 ( 127 ) 6.43 1.60 0.829
X W (27)–(28), L = 0.5 , T = 2 2.69 ( 3 ) 4.44 ( 21 ) 2.37 ( 163 ) 1.26 ( 163 ) 8.00 1.68 0.876
X W (27)–(29), L = 0.5 , T = 2 2.69 ( 3 ) 4.53 ( 21 ) 2.77 ( 163 ) 1.47 ( 163 ) 8.00 1.68 0.859
X W (27)–(30), L = 0.5 , T = 2 2.69 ( 3 ) 4.52 ( 21 ) 2.71 ( 163 ) 1.44 ( 163 ) 8.00 1.68 0.907
X W (27)–(31), L = 0.5 , T = 2 2.69 ( 3 ) 4.52 ( 21 ) 2.71 ( 163 ) 4.03 ( 24 ) 8.00 1.68 0.875
F Z (20) 7.23 ( 3 ) 2.65 ( 17 ) 3.05 ( 129 ) 1.62 ( 129 ) 7.75 1.97 0.41
Z K (21) 8.43 ( 3 ) 8.80 ( 20 ) 6.64 ( 158 ) 5.59 ( 158 ) 7.90 1.99 0.610
S M 1 (14) 3.30 ( 5 ) 8.82 ( 26 ) 1.57 ( 144 ) 8.38 ( 145 ) 7.77 2.00 0.608
S M 2 (15) 7.80 ( 10 ) 1.93 ( 51 ) 3.65 ( 247 ) 1.94 ( 247 ) 8.00 2.00 0.588
Table 3. Numerical comparison of several iteration schemes with memory for ϕ 3 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.5 9.40 ( 2 ) 8.94 ( 5 ) 5.36 ( 26 ) 5.36 ( 26 ) 6.81 1.61 0.859
S H (17)–(25), L = 0.5 9.40 ( 2 ) 7.74 ( 6 ) 1.77 ( 34 ) 1.77 ( 34 ) 6.85 1.61 0.859
S H (17)–(26), L = 0.5 9.40 ( 2 ) 9.65 ( 6 ) 3.04 ( 36 ) 3.04 ( 36 ) 7.47 1.65 0.844
N C (22)–(24), L = 0.5 D DD
N C (22)–(25), L = 0.5 D DD
N C (22)–(26), L = 0.5 D DD
J N (23)–(24), L = 0.5 7.27 ( 2 ) 2.00 ( 4 ) 2.43 ( 20 ) 2.43 ( 20 ) 6.35 1.55 0.750
J N (23)–(25), L = 0.5 7.27 ( 2 ) 2.38 ( 4 ) 5.49 ( 21 ) 5.49 ( 21 ) 6.84 1.61 0.906
J N (23)–(26), L = 0.5 7.27 ( 2 ) 2.57 ( 4 ) 8.75 ( 25 ) 8.75 ( 25 ) 8.54 1.70 0.953
X W (27)–(28), L = 0.5 , T = 2 1.38 ( 1 ) 7.04 ( 4 ) 2.28 ( 21 ) 2.28 ( 21 ) 7.15 1.63 0.922
X W (27)–(29), L = 0.5 , T = 2 1.38 ( 1 ) 8.47 ( 4 ) 8.88 ( 21 ) 8.88 ( 21 ) 7.18 1.63 0.923
X W (27)–(30), L = 0.5 , T = 2 1.38 ( 1 ) 7.96 ( 4 ) 5.48 ( 21 ) 5.48 ( 21 ) 7.18 1.63 0.875
X W (27)–(31), L = 0.5 , T = 2 1.38 ( 1 ) 8.22 ( 4 ) 7.06 ( 21 ) 7.06 ( 21 ) 7.18 1.63 0.975
F Z (20) 1.90 ( 3 ) 6.44 ( 20 ) 1.17 ( 142 ) 1.17 ( 142 ) 7.45 1.95 0.496
Z K (21) 2.02 ( 4 ) 4.93 ( 22 ) 3.15 ( 166 ) 1.32 ( 166 ) 7.74 1.97 0.50
S M 1 (14) 9.56 ( 5 ) 1.48 ( 24 ) 6.98 ( 189 ) 6.98 ( 189 ) 8.73 2.01 0.696
S M 2 (15) 3.74 ( 7 ) 4.32 ( 34 ) 3.61 ( 265 ) 3.61 ( 265 ) 8.57 2.04 0.738
D stands for fails to converge.
Table 4. Numerical comparison of several iteration schemes with memory for ϕ 4 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.1 DDDD
S H (17)–(25), L = 0.1 DDDD
S H (17)–(26), L = 0.1 DDDD
N C (22)–(24), L = 0.1 DDDD
N C (22)–(25), L = 0.1 DDDD
N C (22)–(26), L = 0.1 DDDD
J N (23)–(24), L = 0.1 DDDD
J N (23)–(25), L = 0.1 DDDD
J N (23)–(26), L = 0.1 DDDD
X W (27)–(28), L = 0.1 , T = 2 DDDD
X W (27)–(29), L = 0.1 , T = 2 DDDD
X W (27)–(30), L = 0.1 , T = 2 DDDD
X W (27)–(31), L = 0.1 , T = 2 DDDD
F Z (20) 7.50 ( 1 ) 1.92 ( 2 ) 1.31 ( 5 ) 6.57 ( 5 ) 0.93 0.97 0.360
Z K (21) 2.30 ( 4 ) 2.80 ( 18 ) 4.25 ( 144 ) 3.53 ( 144 ) 7.94 1.99 0.341
S M 1 (14) 3.72 ( 4 ) 7.09 ( 20 ) 3.27 ( 152 ) 1.63 ( 151 ) 8.25 2.01 0.452
S M 2 (15) 3.65 ( 5 ) 1.33 ( 27 ) 5.13 ( 214 ) 2.56 ( 213 ) 8.30 2.09 0.488
D stands for fails to converge.
Table 5. Numerical comparison of several iteration schemes with memory for ϕ 5 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.2 DDDD
S H (17)–(25), L = 0.2 DDDD
S H (17)–(26), L = 0.2 DDDD
N C (22)–(24), L = 0.2 DDDD
N C (22)–(25), L = 0.2 DDDD
N C (22)–(26), L = 0.2 DDDD
J N (23)–(24), L = 0.2 DDDD
J N (23)–(25), L = 0.2 DDDD
J N (23)–(26), L = 0.2 DDDD
X W (27)–(28), L = 0.2 , T = 5 8.79 ( 1 ) 7.95 ( 2 ) 2.53 ( 11 ) 2.02 ( 10 ) 7.19 1.63 0.735
X W (27)–(29), L = 0.2 , T = 5 8.79 ( 1 ) 8.62 ( 2 ) 5.15 ( 12 ) 4.12 ( 11 ) 7.96 1.68 0.829
X W (27)–(30), L = 0.2 , T = 5 8.79 ( 1 ) 2.66 ( 2 ) 1.34 ( 15 ) 1.07 ( 14 ) 7.40 1.64 0.733
X W (27)–(31), L = 0.2 , T = 5 8.79 ( 1 ) 3.06 ( 1 ) 4.66 ( 6 ) 3.73 ( 5 ) 6.86 1.61 0.72
F Z (20) 1.23 ( 1 ) 4.60 ( 9 ) 9.47 ( 67 ) 7.57 ( 66 ) 7.76 1.98 0.52
Z K (21) 3.17 ( 4 ) 3.32 ( 16 ) 2.54 ( 168 ) 2.15 ( 167 ) 7.92 1.99 0.601
S M 1 (14) 2.54 ( 4 ) 7.28 ( 19 ) 1.56 ( 183 ) 1.25 ( 182 ) 10.12 2.16 0.52
S M 2 (15) 4.08 ( 5 ) 1.73 ( 26 ) 9.31 ( 260 ) 7.44 ( 259 ) 10.91 2.21 0.606
D stands for fails to converge.
Table 6. Numerical comparison of several iteration schemes with memory for ϕ 6 ( ω ) .
Methods   |ω₁ − α|   |ω₂ − α|   |ω₃ − α|   f(ω₃)   COC   EI   CPU
S H (17)–(24), L = 0.1 3.86 ( 2 ) 1.17 ( 13 ) 3.56 ( 94 ) 3.37 ( 93 ) 6.98 1.62 0.593
S H (17)–(25), L = 0.1 3.86 ( 2 ) 1.80 ( 15 ) 2.24 ( 135 ) 2.12 ( 134 ) 8.98 1.62 0.562
S H (17)–(26), L = 0.1 3.86 ( 2 ) 1.80 ( 15 ) 2.24 ( 135 ) 2.12 ( 134 ) 8.98 1.73 0.609
N C (22)–(24), L = 0.1 9.66 ( 2 ) 1.12 ( 6 ) 2.03 ( 29 ) 1.92 ( 28 ) 4.63 1.66 0.672
N C (22)–(25), L = 0.1 9.66 ( 2 ) 3.28 ( 7 ) 1.29 ( 34 ) 1.22 ( 33 ) 5.03 1.71 0.672
N C (22)–(26), L = 0.1 9.66 ( 2 ) 3.28 ( 7 ) 1.29 ( 34 ) 1.22 ( 33 ) 5.03 1.71 0.578
J N (23)–(24), L = 0.1 2.00 ( 2 ) 1.24 ( 17 ) 1.87 ( 66 ) 1.77 ( 65 ) 5.95 1.56 0.626
J N (23)–(25), L = 0.1 2.00 ( 2 ) 8.62 ( 14 ) 2.17 ( 93 ) 2.05 ( 92 ) 7.00 1.62 0.625
J N (23)–(26), L = 0.1 2.00 ( 2 ) 8.62 ( 14 ) 2.17 ( 93 ) 2.05 ( 92 ) 7.00 1.62 0.673
X W (27)–(28), L = 0.1 , T = 2 3.85 ( 2 ) 1.73 ( 10 ) 1.50 ( 77 ) 1.42 ( 76 ) 8.04 1.68 0.625
X W (27)–(29), L = 0.1 , T = 2 3.85 ( 2 ) 1.46 ( 10 ) 3.86 ( 78 ) 3.65 ( 77 ) 8.03 1.68 0.594
X W (27)–(30), L = 0.1 , T = 2 3.85 ( 2 ) 1.46 ( 10 ) 3.86 ( 78 ) 3.65 ( 77 ) 8.03 1.68 0.578
X W (27)–(31), L = 0.1 , T = 2 3.85 ( 2 ) 1.46 ( 10 ) 3.86 ( 78 ) 3.65 ( 77 ) 8.03 1.68 0.687
F Z (20) 9.96 ( 8 ) 3.46 ( 40 ) 1.31 ( 219 ) 2.16 ( 218 ) 7.55 1.96 0.401
Z K (21) 6.94 ( 8 ) 4.22 ( 45 ) 1.56 ( 238 ) 2.58 ( 237 ) 7.90 1.99 0.410
S M 1 (14) 2.46 ( 9 ) 1.12 ( 51 ) 5.48 ( 247 ) 5.19 ( 246 ) 8.10 2.00 0.404
S M 2 (15) 4.56 ( 11 ) 4.19 ( 63 ) 1.04 ( 287 ) 9.91 ( 287 ) 8.10 2.00 0.401
D stands for fails to converge.
Table 7. Numerical comparison of several iteration schemes with memory for ϕ 7 ( ω ) .
| Methods | \|ω1 − α\| | \|ω2 − α\| | \|ω3 − α\| | \|f(ω3)\| | COC | EI | CPU |
|---|---|---|---|---|---|---|---|
| SH (17)–(24), L = 0.7 | 4.30(−1) | 1.47(−7) | 3.56(−51) | 1.25(−50) | 6.64 | 1.60 | 0.610 |
| SH (17)–(25), L = 0.7 | 4.30(−1) | 1.80(−5) | 1.47(−37) | 4.80(−37) | 7.17 | 1.63 | 0.719 |
| SH (17)–(26), L = 0.7 | 4.30(−1) | 6.26(−6) | 8.05(−41) | 2.61(−40) | 7.07 | 1.63 | 0.720 |
| NC (22)–(24), L = 0.7 | D | D | D | D | – | – | – |
| NC (22)–(25), L = 0.7 | D | D | D | D | – | – | – |
| NC (22)–(26), L = 0.7 | D | D | D | D | – | – | – |
| JN (23)–(24), L = 0.7 | 1.43 | 1.01(−1) | 6.08(−16) | 1.97(−15) | 5.46 | 1.52 | 0.609 |
| JN (23)–(25), L = 0.7 | 1.43 | 5.50(−2) | 5.29(−11) | 1.72(−10) | 5.30 | 1.51 | 0.688 |
| JN (23)–(26), L = 0.7 | 1.43 | 2.94(−3) | 4.46(−18) | 1.45(−17) | 5.01 | 1.49 | 0.656 |
| XW (27)–(28), L = 0.7, T = 2 | 5.08(−1) | 5.88(−5) | 7.55(−34) | 2.45(−33) | 7.14 | 1.63 | 0.641 |
| XW (27)–(29), L = 0.7, T = 2 | 5.08(−1) | 1.82(−5) | 6.50(−38) | 2.11(−37) | 7.12 | 1.63 | 0.718 |
| XW (27)–(30), L = 0.7, T = 2 | 5.08(−1) | 6.45(−5) | 1.60(−33) | 5.20(−33) | 7.14 | 1.63 | 0.781 |
| XW (27)–(31), L = 0.7, T = 2 | 5.08(−1) | 2.40(−5) | 5.84(−37) | 1.89(−36) | 7.13 | 1.68 | 0.797 |
| FZ (20) | 3.35(−2) | 5.16(−16) | 6.68(−116) | 2.17(−115) | 7.58 | 1.96 | 0.47 |
| ZK (21) | 2.51(−6) | 5.22(−38) | 3.55(−190) | 2.17(−189) | 7.92 | 1.99 | 0.442 |
| SM1 (14) | 2.51(−7) | 2.69(−40) | 5.15(−199) | 1.67(−198) | 8.00 | 2.00 | 0.434 |
| SM2 (15) | 2.41(−8) | 6.17(−47) | 1.41(−225) | 4.61(−225) | 8.00 | 2.00 | 0.457 |
D stands for fails to converge.
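The EI column is the Ostrowski efficiency index EI = p^(1/θ), where p is the order of convergence and θ the number of function evaluations per iteration; the abstract's claim that the proposed scheme attains 7.993^(1/3) ≈ 2 follows directly from this formula with three evaluations per step. A small sketch (the function name is ours):

```python
def efficiency_index(order, evals_per_step):
    """Ostrowski efficiency index EI = p**(1/theta), where p is the
    convergence order and theta the function evaluations per iteration."""
    return order ** (1.0 / evals_per_step)

# Proposed scheme with memory: order ~7.993 with 3 evaluations per step.
print(efficiency_index(7.993, 3))  # close to 2, since 2**3 = 8
```

This explains why SM1 and SM2 dominate the EI column: a method of order ≈8 with only three evaluations beats, for example, an eighth-order scheme needing four evaluations (8^(1/4) ≈ 1.68, matching the XW rows).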

Share and Cite

MDPI and ACS Style

Junjua, M.-u.-D.; Abdullah, S.; Kansal, M.; Ahmad, S. On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction. Fractal Fract. 2024, 8, 698. https://doi.org/10.3390/fractalfract8120698
