This paper investigates the design and stability of Traub–Steffensen-type iteration schemes with and without memory for solving nonlinear equations. Steffensen’s method avoids the derivative evaluation of Newton’s scheme, but it generally has smaller sets of initial guesses that converge to the desired root. Despite this drawback, several researchers have developed higher-order iterative methods based on Steffensen’s scheme. Traub introduced a free parameter in Steffensen’s scheme to obtain the first parametric iteration method, which provides larger basins of attraction for specific values of the parameter. In this paper, we introduce a two-step derivative-free fourth-order optimal iteration scheme based on Traub’s method by employing three free parameters and a weight function. We further extend it into a two-step eighth-order iteration scheme by means of memory with the help of suitable approximations of the involved parameters using Newton’s interpolation. The convergence analysis demonstrates that the proposed iteration scheme without memory has an order of convergence of 4, while its memory-based extension achieves an order of convergence of at least 8, attaining the efficiency index $8^{1/3} = 2$. Two special cases of the proposed iteration scheme are also presented. Notably, the proposed methods compete with any optimal j-point method without memory. We affirm the superiority of the proposed iteration schemes in terms of efficiency index, absolute error, computational order of convergence, basins of attraction, and CPU time using comparisons with several existing iterative methods of similar kinds across diverse nonlinear equations. In general, iterative schemes are compared by investigating their basins of attraction on simple polynomials in the complex plane.
However, we investigate the stability and regions of convergence of the proposed iteration methods in comparison with some existing methods on a variety of nonlinear equations in terms of fractals of basins of attraction. The proposed iteration schemes generate the basins of attraction in less time with simple fractals and wider regions of convergence, confirming their stability and superiority in comparison with the existing methods.
Several real-life problems in engineering and the applied sciences involve nonlinear equations of the form $\phi(x) = 0$, where $\phi: I \to \mathbb{R}$ and $I$ is an open interval. The solution of such nonlinear equations, which are assumed to have a simple zero, say $\alpha$, is the basic aim of this research. Since the roots of a nonlinear equation cannot always be determined analytically, we have to approximate them by numerical methods. For this purpose, iteration methods, like Newton’s method, are frequently used [1,2,3]. Traub [3] classified these iterative methods into two categories: one-point (one-step) iterative methods and multi-point (multi-step) iterative methods. Newton’s method [1] and Steffensen’s method [4] are famous examples of one-step, one-point iterative methods, given by (1) and (2), respectively:
$x_{j+1} = x_j - \phi(x_j)/\phi'(x_j)$, (1)
$x_{j+1} = x_j - \phi(x_j)^2/(\phi(w_j) - \phi(x_j))$, (2)
where $w_j = x_j + \phi(x_j)$.
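As an illustration, these two classical one-point schemes can be sketched in a few lines of Python (a minimal sketch; the stopping rule and the test function $\phi(x) = x^2 - 2$ are our own illustrative choices, not taken from the paper):

```python
def newton(phi, dphi, x, tol=1e-12, max_iter=50):
    """Newton's method (1): x_{j+1} = x_j - phi(x_j)/phi'(x_j)."""
    for _ in range(max_iter):
        step = phi(x) / dphi(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def steffensen(phi, x, tol=1e-12, max_iter=50):
    """Steffensen's method (2): the derivative is replaced by the
    divided difference phi[x_j, w_j] with w_j = x_j + phi(x_j)."""
    for _ in range(max_iter):
        fx = phi(x)
        denom = phi(x + fx) - fx
        if denom == 0:
            break
        step = fx * fx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# Both converge to sqrt(2) for phi(x) = x^2 - 2 from x0 = 1.5
r_newton = newton(lambda t: t * t - 2, lambda t: 2 * t, 1.5)
r_steff = steffensen(lambda t: t * t - 2, 1.5)
```

Both schemes are quadratically convergent, but Steffensen's method uses only function values, at the price of a generally smaller set of convergent starting points.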
The investigation of the dynamical behavior of iterative methods using basins of attraction provides information about the regions of convergence and the selection of initial guesses for which a method converges or fails to converge. To investigate the regions of convergence of an iteration scheme for solving a nonlinear equation, we plot its basins of attraction, i.e., the set of initial guesses for which the iteration scheme converges to the roots [5,6], as follows. We choose an initial guess from a grid of points within a rectangle D, which contains all of the roots, each root allocated a unique color. Starting with an initial point in D, an iteration method may either converge to one of the roots or diverge after a specified number of iterations (here, 20); diverging points are usually marked with the color black. For more details regarding basins of attraction, one should refer to [6,7]. For instance, we plot the basins of attraction of a test function with three roots for Steffensen’s method (2); the roots, contained in D, are represented by cyan, magenta, and yellow, respectively. Figure 1 represents the corresponding basins of attraction using Newton’s method (1) and Steffensen’s method (2), with 1335 black points for Newton’s method.
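The basin-computation procedure just described can be sketched as follows. Since the test function of Figure 1 is not reproduced here, we use Newton's method for the cubic $z^3 - 1$ (an illustrative assumption) on a coarse grid:

```python
import cmath

def basin_index(z, roots, max_iter=20, tol=1e-6):
    """Iterate Newton's method for phi(z) = z^3 - 1 from the initial
    point z; return the index of the root reached, or -1 ("black")
    if no root is approached within max_iter iterations."""
    for _ in range(max_iter):
        if z == 0:          # derivative vanishes; treat as divergent
            return -1
        z = z - (z**3 - 1) / (3 * z**2)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

# The three cube roots of unity, one color per root.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

# A coarse grid over the square [-2, 2] x [-2, 2] (published basin
# plots typically use a much finer grid, e.g. 400 x 400 points).
n = 40
grid = [basin_index(complex(-2 + 4 * i / n, -2 + 4 * j / n), roots)
        for i in range(n) for j in range(n)]
black = sum(1 for c in grid if c == -1)
```

Mapping each index to a color (and `-1` to black) produces exactly the kind of fractal image discussed in Section 6.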
Steffensen’s method overcomes the difficulty of derivative calculation in Newton’s scheme, but, in general, it has smaller sets of initial guesses that converge to the desired roots (basins of attraction), as shown in Figure 2. In recent years, several researchers have developed higher-order variants of Steffensen’s scheme despite this drawback. Traub [3] introduced a free parameter in Steffensen’s scheme (2) to obtain the first parametric derivative-free iteration method, which provides larger basins of attraction for specific values of the parameter. Traub [3] further presented an iteration method with memory by using a suitable approximation of the free parameter as follows:
where the initial values are given, denotes the divided difference of first order, and is the first-degree Newton interpolation polynomial. The iterative scheme with memory given by (3) has a convergence order of $1+\sqrt{2} \approx 2.414$. Figure 2 represents the basins of attraction for Traub’s method (3) with two values of the parameter, yielding 5825 and 2177 black points, respectively.
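Traub's with-memory idea can be sketched in code. The update $q_{j+1} = -1/\phi[x_{j+1}, x_j]$ used below is the commonly cited choice derived from the first-degree Newton interpolation polynomial through the last two points; the starting value of q and the test function are our own illustrative choices:

```python
def traub_with_memory(phi, x, q=-0.01, max_iter=25, tol=1e-13):
    """Traub's derivative-free method with memory (3):
        w_j = x_j + q_j * phi(x_j)
        x_{j+1} = x_j - phi(x_j) / phi[x_j, w_j]
    with the self-accelerating update q_{j+1} = -1/phi[x_{j+1}, x_j]
    taken from the first-degree Newton interpolation polynomial."""
    for _ in range(max_iter):
        fx = phi(x)
        if abs(fx) < tol:
            break
        w = x + q * fx
        dd = (phi(w) - fx) / (w - x)   # divided difference phi[x_j, w_j]
        x_new = x - fx / dd
        fx_new = phi(x_new)
        if fx_new != fx:               # memory update for the next step
            q = -(x_new - x) / (fx_new - fx)
        x = x_new
    return x

# Root of phi(x) = x^3 + 4x^2 - 10 near 1.365 (classical test problem)
root = traub_with_memory(lambda t: t**3 + 4 * t**2 - 10, 1.5)
```

Reusing the already computed value $\phi(x_{j+1})$ for the parameter update is what raises the order above 2 at no extra evaluation cost.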
The concept of an optimal root-finding method was stated by Kung and Traub [8]: a multi-step iterative method without memory using $j$ function evaluations per iteration has an order of convergence of at most $2^{j-1}$ (an optimal method attains this bound). Ostrowski [2] defined the efficiency index $E = \rho^{1/j}$ to measure the efficiency of a root-finding iteration scheme, where $\rho$ and $j$ denote the order of convergence and the total number of function evaluations used by an iterative scheme per iterative step, respectively. For an optimal $j$-step iterative method (based on $j+1$ function evaluations) without memory, the efficiency index is $2^{j/(j+1)}$. For instance, the two-step optimal fourth-order King’s method [9] requires three functional evaluations and has the efficiency index $4^{1/3} \approx 1.587$.
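A small worked computation of Ostrowski's index illustrates these figures; the values follow directly from $E = \rho^{1/j}$:

```python
def efficiency_index(order, evals):
    """Ostrowski's efficiency index E = rho**(1/j) [2]."""
    return order ** (1.0 / evals)

newton_ei = efficiency_index(2, 2)   # Newton: order 2, two evaluations
king_ei   = efficiency_index(4, 3)   # King [9]: optimal two-step, three evaluations
memory_ei = efficiency_index(8, 3)   # order 8 with three evaluations (with memory)
```

This gives roughly 1.414 for Newton, 1.587 for King, and exactly 2 for an eighth-order scheme using three evaluations, which is the target of the with-memory construction in this paper.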
Since multi-step (multi-point) methods have advantages over the one-step (one-point) iteration methods in terms of the efficiency index and convergence order, several optimal multi-step (multi-point) iteration methods without memory for solving nonlinear equations have been derived in recent years (see, for example, [10,11,12,13,14,15,16,17,18]).
Traub [3] pointed out that, in some cases, the convergence order and efficiency index of an iteration scheme can be improved without additional functional evaluations by approximating an accelerating parameter, which appears in its error term, with an interpolating polynomial that passes through the available points at the current and previous iterations. Such iteration methods are called methods with memory [3]. Inspired by this idea, several two- and three-step iterative methods with memory have been developed in recent years by employing free parameters [19,20,21,22,23,24,25,26,27,28,29,30]. Recently, Abdullah et al. [30] developed a two-point iterative method with memory by using Hermite interpolation polynomials in an existing sixth-order method without memory, improving both its R-order of convergence and its efficiency index. For more details regarding the improvement of convergence order by means of memory, see, e.g., [14,24].
In this paper, we present a family of two-step iterative root-finding methods with memory with a convergence order of 8 and an efficiency index of $8^{1/3} = 2$, which matches or exceeds the efficiency index of any $j$-point optimal method without memory. In addition, the proposed methods possess wider regions of convergence, illustrated in terms of basins of attraction. The remaining contents of the paper proceed as follows. In Section 2, based on Traub’s scheme (3) and the second step of King’s method [9], and by using a parametric approximation of a derivative along with a weight function, we obtain a new optimal fourth-order derivative-free iteration scheme. In Section 3, we employ three self-accelerating parameters in the new optimal fourth-order scheme such that the convergence order is improved from 4 to 8 without using additional functional evaluations (i.e., using only three functional evaluations). It is worth remarking that the efficiency index of the fourth-order method is thus improved from $4^{1/3} \approx 1.587$ to 2. Section 4 is devoted to presenting some particular cases of the proposed family and weight functions. In Section 5, some numerical examples and real-life applications are reported to test the efficiency and performance of the proposed methods and to justify the theoretical results. Section 6 presents an extensive analysis and comparison of the proposed methods with existing ones in terms of fractals of basins of attraction in the complex plane on a variety of nonlinear functions. Finally, some concluding remarks are given in Section 7.
2. Two-Step Traub–Steffensen-Type Iterative Scheme
In this section, we design a derivative-free two-step fourth-order optimal iteration scheme without memory. We introduce a free parameter q in Traub’s method without memory and combine it with the second step of King’s scheme [9] as follows:
By approximating with in the second step of the scheme (4), the following derivative-free two-step iteration scheme is obtained:
where the scalars and s are free parameters. With the help of Taylor series expansions, one can obtain the following error equation for the iteration scheme (5):
where ( and being the approximate and exact roots, respectively) is the error at the jth iteration and . Note that the scheme (5) is not optimal, as it provides convergence order 3 using three functional evaluations. To make it optimal, we use a real-valued weight function (where ) in (5) and achieve the following family of optimal fourth-order methods:
The subsequent theorem demonstrates the conditions on the weight function to obtain optimal fourth-order convergence of the scheme (6).
Theorem 1.
Let be a simple root of a sufficiently differentiable nonlinear function ϕ such that , where is an open set. If an initial approximation is close enough to and , then the iteration scheme (6) has convergence order 4 with the error equation as follows:
where and s are free parameters, and
Proof.
Let the error at the jth iteration be . By using Taylor series expansions of the function at the jth iteration, the proof proceeds similarly to those given in [14,19,21]; hence, it is omitted. □
Remark 1.
Theorem 1 demonstrates that the convergence order of the iteration scheme (6) is 4 and that its efficiency index is $4^{1/3} \approx 1.587$.
Further, by choosing the free parameters suitably, the obtained method has a convergence order of 8. Therefore, it is concluded from the error analysis that the free parameters and s in (7) play a significant role in turning the method without memory (6) into a method with memory. These parameters are called self-accelerating parameters. Hence, the scheme (6) is extendable to a novel method with memory with an accelerated order of convergence of 8 and a very high efficiency index of 2.
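Since the weight function and parameter formulas of scheme (6) are given in the paper's equations rather than reproduced here, the following sketch implements a comparable known optimal fourth-order derivative-free two-step scheme of the Steffensen type, in the spirit of [17]. It is an illustration of this class of methods, not the proposed scheme (6); the test function and starting point are our own choices:

```python
import math

def two_step_df(phi, x, q=0.01, max_iter=20, tol=1e-13):
    """An optimal fourth-order derivative-free two-step scheme of
    Steffensen type (Zheng-Li-Huang form, cf. [17]); three function
    evaluations per step:
        w = x + q*phi(x)
        y = x - phi(x)/phi[x, w]
        x_new = y - phi(y)/(phi[y, x] + phi[y, w] - phi[x, w])"""
    def dd(a, fa, b, fb):            # first-order divided difference
        return (fb - fa) / (b - a)
    for _ in range(max_iter):
        fx = phi(x)
        if abs(fx) < tol:
            break
        w = x + q * fx
        fw = phi(w)
        y = x - fx / dd(x, fx, w, fw)
        fy = phi(y)
        x = y - fy / (dd(y, fy, x, fx) + dd(y, fy, w, fw) - dd(x, fx, w, fw))
    return x

# Fixed point of cos, i.e. root of phi(x) = cos(x) - x, near 0.739
root = two_step_df(lambda t: math.cos(t) - t, 1.0)
```

Like scheme (6), this uses only the three values $\phi(x_j)$, $\phi(w_j)$, $\phi(y_j)$ per step, which is what makes fourth order optimal in the Kung–Traub sense.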
3. Two-Step Iterative Scheme with Memory
In this section, without requiring any additional functional evaluations, we extend the Traub–Steffensen-type fourth-order tri-parametric iteration scheme (6) to an eighth-order iteration scheme with memory. To achieve this goal, we employ Newton’s interpolation polynomials of an appropriate degree to recursively determine the self-accelerating parameters p, q, and s, utilizing the points already saved at the current and previous iterations. We select the associated parameters p, q, and s in a manner that increases the fourth order of convergence of the scheme (6), as previously discussed.
If we choose the parameters in this way, the order of the scheme (6) increases up to eight. Since the root, and consequently the values of and , are not known, we approximate the self-accelerators p, q, and s in (6) recursively by using Newton’s interpolation polynomials of an appropriate degree at each iterative step as:
where is a ninth-degree Newton interpolation polynomial that passes through the points for any , given by:
is a tenth-degree Newton interpolation polynomial that passes through the points for any , given by:
is an eleventh-degree Newton interpolation polynomial that passes through the points for any , given by:
Finally, we present the following two-step tri-parametric family of iterative methods with memory, obtained by replacing the parameters and s in the scheme (6) with the self-accelerators and given in (9):
It is worth mentioning that the initial values and could be taken as very small positive values. Additionally, the self-accelerator is to be computed exactly before the start of each iteration, is computed after , and is computed after the computation of .
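The Newton interpolation polynomials used for these parameter approximations can be built from a standard divided-difference table. The following generic sketch shows the construction and evaluation; the nodes and degree here are illustrative, not those of the specific polynomials above:

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients c[0..n-1] of the Newton form
    N(t) = c0 + c1*(t - x0) + c2*(t - x0)*(t - x1) + ..."""
    c = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, t):
    """Evaluate the Newton form at t with a Horner-like nested scheme."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (t - xs[i]) + c[i]
    return p

# Interpolating y = t^2 at three nodes reproduces it exactly.
xs = [0.0, 1.0, 2.0]
coeffs = newton_coeffs(xs, [0.0, 1.0, 4.0])
```

In the with-memory schemes, polynomials of this kind are built from the saved points of the current and previous iterations and then differentiated to update the self-accelerators.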
The following theorem demonstrates that the newly presented iterative scheme with memory (13) has a convergence order of at least 8 with a computational efficiency index of $8^{1/3} = 2$.
Theorem 2.
Let be an initial guess near enough to the simple zero α of a sufficiently differentiable function ϕ. If the self-accelerators , and are iteratively computed by using the formulae given in (9), then the R-order of convergence of the proposed iterative scheme with memory (13) is at least 8, with an efficiency index of $8^{1/3} = 2$.
Proof.
The R-order of convergence of the iteration method (13) is ascertained using Herzberger’s matrix method [31]. The lower bound of the convergence order of a one-step method with memory
is the spectral radius of its associated matrix with the following elements:
For a multi-step iterative method, the lower bound of the order is the spectral radius of the product of the matrices corresponding to the individual iteration steps.
According to the scheme (13), we obtain the corresponding matrices as follows:
Hence, we obtain:
The above matrix has the eigenvalues , . As a result, the spectral radius of the matrix is 8. Hence, we conclude that the order of convergence of the proposed two-step iterative scheme with memory (13) is at least 8, with an efficiency index of $8^{1/3} = 2$. □
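Herzberger's technique can be illustrated on Traub's method with memory (3). Its commonly cited associated matrix is [[2, 1], [1, 0]] (an assumption here, as the paper's matrices are not reproduced), whose spectral radius $1+\sqrt{2} \approx 2.414$ matches the order quoted in Section 1:

```python
import math

def spectral_radius_2x2(m):
    """Spectral radius of a 2x2 matrix via its characteristic
    polynomial lambda^2 - tr*lambda + det = 0."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        s = math.sqrt(disc)
        return max(abs(tr + s), abs(tr - s)) / 2
    return math.sqrt(det)   # complex conjugate pair: |lambda|^2 = det

# Matrix commonly associated with Traub's method with memory (3):
# the new iterate uses two points of the current step and one of the
# previous step.
rho = spectral_radius_2x2([[2, 1], [1, 0]])   # 1 + sqrt(2)
```

The same bookkeeping, applied to the matrices of the two steps of scheme (13), yields the spectral radius 8 claimed in the proof.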
4. Special Cases
One can obtain several special cases of iteration scheme with memory (13) by choosing the weight functions such that the conditions of Theorem 1, i.e., , are satisfied. Here, we present two simple special cases of our iteration scheme (13) as follows.
Case 1: By choosing (where ) in the scheme (13), we obtain the following specific method with memory, denoted by :
Case 2: By taking (where ) in the scheme (13), we obtain another method with memory, denoted by , given as follows:
5. Numerical Experiments and Applications
In this section, we test our two-step tri-parametric methods with memory (14) and (15), denoted by and , respectively, on the different nonlinear functions given in Examples 1–7. To avoid the loss of significant digits and to achieve high accuracy, we have used arbitrary-precision arithmetic with 1000 significant digits in the programming package Maple 18. The formula to compute the computational order of convergence is given as follows [32]:
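The COC formula above can be evaluated from the last few iterates. A sketch using successive differences of iterates as error estimates follows; Newton's method on $x^2 - 2$ serves as the test sequence (an illustrative choice, since it should report an order near 2):

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates,
    using successive differences |x_k - x_{k-1}| as error estimates [32]."""
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Newton's method on phi(x) = x^2 - 2 as a test sequence (order 2):
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
order = coc(xs)
```

In double precision only the first few iterates are usable before the differences hit machine accuracy, which is why the paper computes COC in 1000-digit arithmetic.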
For all the comparisons, we have chosen to start the iterations. We compare the accuracy and efficiency of our proposed iteration schemes for and for with the existing two-step methods with memory of Abdullah et al. [30] denoted by , Zafar et al. [27] denoted by , Zaka Ullah et al. [28] denoted by , Wang et al. [33] denoted by , Choubey et al. [19] denoted by , and Choubey et al. [20] denoted by , described as follows:
Method SH:
where
and
Method FZ:
where, for ,
and
Method ZK:
where, for , is a sixth degree interpolation polynomial passing through is a seventh degree interpolation polynomial passing through , and is an eighth degree interpolation polynomial passing through .
Method NC:
where, , .
Method JN:
For methods with-memory , and , the values associated with the parameter have been recorded as:
Formula 1:
where and .
Formula 2:
where and , .
Formula 3:
where and .
Method XW:
where , and .
For the method with-memory , the parameter value is recorded as:
Formula 1:
where and , .
Formula 2:
where and , , .
Formula 3:
where , and .
Formula 4:
where and .
Example 1.
Location of maximum energy distribution:
Planck’s radiation law is given by
where α is the energy density, δ is the wavelength of radiation, ϕ is the absolute temperature, u is Planck’s constant, p is the Boltzmann constant, and k is the speed of light. To maximize the energy density and determine the corresponding wavelength, we first evaluate
The terms on the left side of the parentheses are zero in the limits as and , although the energy density attains minima in both cases. The maximum we are seeking arises when the term inside the parentheses is zero. This happens when
where is the wavelength that maximizes the energy density. For , the above equation reduces to
Now we can define the following non-linear expression,
The problem is to solve the nonlinear Equation (36), which has two roots, and 0. We take the exact root and the initial approximation . The computational results are depicted in Table 1, where denotes . It is observed that the accuracy, computational order of convergence (), and efficiency index () of our proposed schemes and are better than those of the others for this test problem.
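For readers who wish to reproduce this example: the Planck maximization problem is commonly reduced to $e^{-x} + x/5 - 1 = 0$, with nonzero root $x \approx 4.9651$ (assumed here, since Equation (36) is not reproduced). A derivative-free secant iteration finds it quickly:

```python
import math

# Reduced form of the Planck maximisation problem, as commonly stated
# in the literature (an assumption for this sketch):
#     f(x) = exp(-x) + x/5 - 1 = 0,  roots 0 and x* ~ 4.9651
f = lambda x: math.exp(-x) + x / 5.0 - 1.0

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Derivative-free secant iteration for f(x) = 0."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol or f1 == f0:
            break
        x0, x1, f0 = x1, x1 - f1 * (x1 - x0) / (f1 - f0), f1
        f1 = f(x1)
    return x1

x_star = secant(f, 4.0, 5.0)   # the nonzero root, approx 4.965114
```

The root yields the constant in Wien's displacement law; the higher-order schemes compared in Table 1 reach the same root in fewer iterations and more digits.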
Example 2.
Vertical stress:
Boussinesq’s formula computes the vertical stress (s) within an elastic material induced at a specific point beneath the edge of a rectangular strip footing subjected to a uniform pressure q given as follows:
To determine the value of ω at which the vertical stress (s) equals 25 percent of the applied footing stress q, we have to solve the following equation:
The exact root of Equation (38) is . We take as an initial guess for this root to obtain the numerical results shown in Table 2.
Example 3.
We take the standard nonlinear test equation as follows:
For the above non-linear function, we take as the exact root and as an initial guess. The computational results are shown in Table 3, which illustrate that our proposed schemes and perform better in terms of convergence speed and efficiency.
Example 4.
We consider another standard nonlinear test equation as follows:
Here, we take the exact root and the initial approximation . The numerical results for comparison are illustrated in Table 4, which show that the computational order of convergence () and efficiency index () of the proposed schemes and are better than those of the earlier known schemes , , , , and .
Example 5.
In addition, we pick another standard non-linear test problem, including trigonometric function:
The above equation has three real roots and . We take as the exact root and as an initial guess for this problem. The computational results for the function are shown in Table 5, from which it is seen that the proposed iterative schemes and have a faster convergence speed and better efficiency than the iterative schemes , , , , and .
Example 6.
We take one more standard nonlinear equation, as follows:
Here, we take as an exact root. The comparison results by taking the initial guess are shown in Table 6. It is observed from Table 6 that the schemes and perform better than the existing schemes and in terms of convergence and efficiency.
Example 7.
Blood rheology model:
Blood rheology is the study of how blood flows and behaves in the circulatory system, and modeling it is important for understanding various physiological and pathological conditions related to blood flow. Numerical iterative methods are commonly used to solve the mathematical equations governing blood rheology. Since blood is a non-Newtonian fluid, it is categorized as a Casson fluid. According to this concept, flow in a tube behaves like a plug with little deformation, and a velocity gradient develops close to the wall. In order to examine the plug flow of a Casson fluid, we take into account the following nonlinear equation.
Here, ω denotes the plug flow of the Casson fluid. One of the solutions of is . We choose as an initial approximation to solve . Table 7 displays the calculated results.
It is obvious from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 that the special cases and of our proposed iterative scheme are more reliable and efficient than the earlier iterative methods , , , , and in terms of accuracy, computational order of convergence, and efficiency index for the different test problems.
Furthermore, Figure 3 demonstrates the graphical comparison of the proposed iterative techniques and with other methods in terms of absolute error, while Figure 4 shows the comparison in terms of computational order of convergence (COC), efficiency index (EI), and CPU time in the first three iterations for solving –. From Figure 3 and Figure 4, it is observed that the proposed schemes and are more robust than the others.
6. Fractals of Basins of Attraction
In this section, we investigate the fractal behavior of the proposed iteration method (14) for different values of in comparison with the iterative schemes , , , and discussed in Section 5. We compare their fractal behavior in terms of basins of attraction in the complex plane, which helps us to better understand their stability and convergence. Let be a nonlinear function to be solved by an iterative algorithm; in general, the boundary between the basins of attraction for distinct zeros of forms a complex fractal. By assigning a specific color to each basin, we generally obtain very beautiful fractals, which illustrate the performance of iterative methods. Stewart [5] and Varona [6] first presented graphical comparisons between some classical iterative methods in 2001 and 2002, respectively. Since then, it has become common practice to compare iteration methods graphically with the help of fractal images of basins of attraction. The book of Kalantari [34] provides several artistic fractal pictures of different polynomials. More recently, this kind of comparison has been studied in the papers [7,25,35,36,37]. All of these papers compare the methods by plotting basins of attraction for simple polynomials in the complex plane. Here, we investigate the convergence regions of different methods by representing their basins of attraction on a variety of nonlinear equations, including the real-life problems discussed in Section 5.
To plot fractals of basins of attraction, we choose an initial guess from a grid of points within a square D that contains all of the roots of the equation, each root allocated a unique color. For a given initial point in D, an iteration scheme within 25 iterations either converges to one of the roots, in which case the point is painted with the color assigned to that root, or diverges, in which case it is usually marked with the color black. The brighter the color of a basin, the fewer iterations are required for the method to converge to the corresponding root.
Basins of attraction of are shown in Figure 5; the two roots, contained in , are represented by cyan and magenta, respectively. Due to limited space, the roots are written with a reduced number of significant digits. Figure 5 illustrates that the methods and show wide basins of attraction compared to those of and , while fast convergence is obtained by and .
For the nonlinear function , which has the root , we take and assign the color cyan to each initial point in D for which the method converges to . Fractals of basins for this problem are represented in Figure 6, which illustrate that all the methods possess similar regions of convergence except the method , which shows several black regions. The method converges quickly for initial points near the root, since its basins are brighter than those of and .
Similarly, we take , for , which has the root 0. We assign the color cyan to each initial point in D for which an iteration method converges to 0. Figure 7 represents the fractals of basins for this problem, which illustrate that the proposed methods , , provide wider basins of attraction and faster convergence for initial points near the root than those of and .
We take for the nonlinear function , which has six roots, , represented by green, cyan, yellow, orange, red, and magenta, respectively. Fractals of basins for are represented in Figure 8, which illustrates that the methods and are the best, since they produce simple and wide regions of convergence compared to the other methods.
Fractal images of basins of attraction of are shown in Figure 9; the three real roots, contained in , are represented by cyan, magenta, and yellow, respectively. Figure 9 illustrates that all the methods produce wide regions of divergence (black regions); however, the methods and have comparatively better performance in terms of speed and regions of convergence.
For the nonlinear function , which has roots , we take and assign the colors magenta, cyan, and yellow to each initial point in D for which the method converges to and , respectively. Fractals of basins for are represented in Figure 10, which illustrates that the proposed method provides fast convergence with simple fractals and wide regions of convergence compared to the others, except the methods and .
Basins of attraction of are shown in Figure 11; the eight roots, contained in , are represented by the colors cyan, green, orange, yellow, red, magenta, pink, and brown, respectively. Figure 11 illustrates that the proposed method is the best among all the others, yielding fast convergence, simple fractals, and wide regions of convergence. However, none of the methods converge to the roots and .
It is observed that, for all of the problems, the proposed iteration scheme for provides wider and brighter basins of attraction with simple fractals, which confirms its stability and robustness. Furthermore, smaller values of the parameter result in wider basins of attraction for the proposed iteration schemes.
7. Conclusions
In this manuscript, we have introduced derivative-free two-step iteration methods of optimal orders four and eight without memory and with memory, respectively, for solving nonlinear equations. The suggested techniques are higher-order two-step variants of the one-step Traub method of optimal order two. It is to be remarked that the eighth-order convergence of the proposed iteration technique with memory is achieved by using only three functional evaluations. The proposed two-step technique’s efficiency index is $8^{1/3} = 2$, which is among the highest in the literature and better than the efficiency of several multi-step iteration schemes with memory. The proposed two-step iteration methods with memory compete with any j-point optimal method without memory, since their efficiency index equals 2. To evaluate the effectiveness of the suggested iterative techniques and to support the theoretical findings, several numerical examples and real-world applications are given. The numerical outcomes of the proposed methods are presented in terms of absolute error, computational order of convergence (COC), and CPU time (s). Further, we have investigated the fractal behavior and comparison of different iteration methods using fractals of basins of attraction on several nonlinear equations, including real-life problems. The fractals of basins of attraction illustrate the robustness and superiority of the proposed iteration methods. The stability of the proposed iteration methods is affirmed by the simple fractals and wider basins of attraction obtained in comparison with existing iteration methods. Additionally, the numerical tests illustrate that the proposed two-step Traub–Steffensen-type iteration schemes with memory outperform existing multi-step iteration schemes with and without memory in many situations. Further research can be conducted to explore general criteria for the selection of free parameters.
The current study focuses on the solution of univariate nonlinear equations, while its extension to multivariate equations is left for future research.
Author Contributions
Conceptualization, M.-u.-D.J.; Methodology, M.-u.-D.J.; Validation, M.-u.-D.J., S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad); Investigation, M.-u.-D.J., S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad); Writing—original draft, M.-u.-D.J.; Writing—review & editing, S.A. (Shahid Abdullah), M.K. and S.A. (Shabbir Ahmad). All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding authors.
Acknowledgments
The authors are thankful to the editor and anonymous reviewers for their valuable suggestions and comments.
Conflicts of Interest
The authors declare that there are no conflicts of interest.
References
Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
Stewart, B.D. Attractor Basins of Various Root-Finding Methods. Ph.D. Thesis, Naval Postgraduate School, Monterey, CA, USA, 2001.
Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–47.
Varona, J.L. An Optimal Thirty-Second-Order Iterative Method for Solving Nonlinear Equations and a Conjecture. Qual. Theory Dyn. Syst. 2022, 21, 39.
Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
King, R.F. A family of fourth-order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
Behl, R.; Alshomrani, A.S.; Chun, C. A general class of optimal eighth-order derivative free methods for nonlinear equations. J. Math. Chem. 2020, 58, 854–867.
Cordero, A.; Reyes, J.A.; Torregrosa, J.R.; Vassileva, M.P. Stability analysis of a new fourth-order optimal iterative scheme for nonlinear equations. Axioms 2024, 13, 34.
Moscoso-Martinez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Urena-Callay, G. Achieving Optimal Order in a Novel Family of Numerical Methods: Insights from Convergence and Dynamical Analysis Results. Axioms 2024, 13, 458.
Petković, L.D.; Petković, M.S.; Džunić, J. A class of three-point root-solvers of optimal order of convergence. Appl. Math. Comput. 2010, 216, 671–676.
Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013.
Sharma, J.R.; Sharma, R. A new family of modified Ostrowski’s methods with accelerated eighth order convergence. Numer. Algorithms 2010, 54, 445–458.
Sharma, J.R.; Kumar, S.; Singh, H. A new class of derivative-free root solvers with increasing optimal convergence order and their complex dynamics. SeMA J. 2023, 8, 333–352.
Wang, X.; Liu, L. New eighth-order iterative methods for solving nonlinear equations. J. Comput. Appl. Math. 2010, 234, 1611–1620.
Zheng, Q.; Li, J.; Huang, F. An optimal Steffensen-type family for solving nonlinear equations. Appl. Math. Comput. 2011, 217, 9592–9597.
Choubey, N.; Panday, B.; Jaiswal, J.P. Several two-point with memory iterative methods for solving non-linear equations. Afr. Mat. 2018, 29, 435–449.
Choubey, N.; Jaiswal, J.; Choubey, A. Family of multipoint with memory iterative schemes for solving nonlinear equations. Int. J. Appl. Comput. Math. 2022, 8, 83.
Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Three-step iterative weight function scheme with memory for solving nonlinear problems. Math. Methods Appl. Sci. 2022, early view.
Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Memory in the iterative processes for nonlinear problems. Math. Methods Appl. Sci. 2023, 46, 4145–4158.
Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377.
Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660.
Sharma, S.; Kansal, M. A modified Chebyshev–Halley-type iterative family with memory for solving nonlinear equations and its stability analysis. Math. Methods Appl. Sci. 2023, 46, 12549–12569.
Soleymani, F.; Lotfi, T.; Tavakoli, E.; Haghani, F.K. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
Zafar, F.; Yasmin, N.; Kutbi, M.A.; Zeshan, M. Construction of tri-parametric derivative free fourth order with and without memory iterative method. J. Nonlinear Sci. Appl. 2016, 9, 1410–1423.
Akram, S.; Khalid, M.; Junjua, M.-U.; Altaf, S.; Kumar, S. Extension of King’s Iterative Scheme by Means of Memory for Nonlinear Equations. Symmetry 2023, 15, 1116.
Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2023, 70, 285–315.
Herzberger, J. Über Matrixdarstellungen für Iterationsverfahren bei nichtlinearen Gleichungen. Computing 1974, 12, 215–222.
Jay, L. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429.
Wang, X.; Zhang, T. Some Newton-type iterative methods with and without memory for solving nonlinear equations. Int. J. Comput. Methods 2014, 11, 1350078.
Kalantari, B. Polynomial Root-Finding and Polynomiography; World Scientific: Singapore, 2009.
Cordero, A.; Guasp, L.; Torregrosa, J.R. Choosing the most stable members of Kou’s family of iterative methods. J. Comput. Appl. Math. 2018, 330, 759–769.
Herceg, D.; Herceg, D. Eighth order family of iterative methods for nonlinear equations and their basins of attraction. J. Comput. Appl. Math. 2018, 343, 458–480.
Herceg, D.; Petković, I. Computer visualization and dynamic study of new families of root-solvers. J. Comput. Appl. Math. 2022, 401, 16.
Figure 1.
Basins of attraction of using Newton’s method (1) and Steffensen’s method (2).
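The basin plots in the figures are generated in the usual way: each point of a grid in the complex plane is taken as an initial guess, the iteration is run, and the point is colored by the root it converges to (or marked divergent). A minimal sketch of this procedure, not the authors’ code, is given below for Newton’s method applied to the illustrative polynomial z³ − 1; the grid extent, resolution, iteration cap, and tolerance are all assumed values:

```python
import numpy as np

def newton_basins(coeffs=(1, 0, 0, -1), span=2.0, res=200, max_iter=50, tol=1e-6):
    """Label each grid point in [-span, span]^2 of the complex plane by the
    index of the root Newton's method converges to from that point
    (-1 if it does not converge within max_iter iterations)."""
    p = np.poly1d(coeffs)          # default polynomial: z^3 - 1
    dp = p.deriv()
    roots = np.roots(coeffs)
    xs = np.linspace(-span, span, res)
    z = xs[None, :] + 1j * xs[:, None]          # grid of initial guesses
    for _ in range(max_iter):
        dz = dp(z)
        safe = np.abs(dz) > 1e-14               # freeze points where f'(z) ~ 0
        z = np.where(safe, z - p(z) / np.where(safe, dz, 1.0), z)
    # assign each point to its nearest root; mark unconverged points with -1
    dist = np.abs(z[..., None] - roots[None, None, :])
    labels = np.argmin(dist, axis=-1)
    labels[np.min(dist, axis=-1) > tol] = -1
    return labels, roots

labels, roots = newton_basins()
```

The resulting integer array can be passed directly to a color-mapped image routine (e.g., `matplotlib.pyplot.imshow`) to reproduce basin pictures of this kind; swapping the Newton update for a Steffensen- or Traub-type step changes only the loop body.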
Figure 2.
Basins of attraction of using Traub’s method (3).
Figure 3.
Comparisons of various iterative methods with memory in terms of absolute error for in the first three iterations.
Figure 4.
Comparisons of various iterative methods with memory in terms of COC, EI, and CPU time for , respectively.
Figure 5.
Basins of attraction of using several iteration methods without memory.
Figure 6.
Basins of attraction of using several iteration methods without memory.
Figure 7.
Basins of attraction of using several iteration methods without memory.
Figure 8.
Basins of attraction of using several iteration methods without memory.
Figure 9.
Basins of attraction of using several iteration methods without memory.
Figure 10.
Basins of attraction of using several iteration methods without memory.
Figure 11.
Basins of attraction of using several iteration methods without memory.
Table 1.
Numerical comparison of several iteration schemes with memory for .
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Junjua, M.-u.-D.; Abdullah, S.; Kansal, M.; Ahmad, S.
On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction. Fractal Fract. 2024, 8, 698.
https://doi.org/10.3390/fractalfract8120698
AMA Style
Junjua M-u-D, Abdullah S, Kansal M, Ahmad S.
On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction. Fractal and Fractional. 2024; 8(12):698.
https://doi.org/10.3390/fractalfract8120698
Chicago/Turabian Style
Junjua, Moin-ud-Din, Shahid Abdullah, Munish Kansal, and Shabbir Ahmad.
2024. "On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction" Fractal and Fractional 8, no. 12: 698.
https://doi.org/10.3390/fractalfract8120698
APA Style
Junjua, M. -u. -D., Abdullah, S., Kansal, M., & Ahmad, S.
(2024). On Traub–Steffensen-Type Iteration Schemes With and Without Memory: Fractal Analysis Using Basins of Attraction. Fractal and Fractional, 8(12), 698.
https://doi.org/10.3390/fractalfract8120698