1. Introduction
One of the commonly encountered tasks in computational mathematics is solving a nonlinear algebraic equation, which can be presented in the scalar case as $f(x) = 0$, or, more generally, as a system of nonlinear algebraic equations. Finding the solutions (if they exist) cannot, in general, be done analytically. In some cases, the analytic techniques only give the real results, while the complex zeros should also be found and reported. As such, numerical techniques are a viable choice for solving such nonlinear problems. Each of the existing computational procedures has its own domain of validity, with some pros and cons [1,2].
Two classes of methods, with and without the use of derivatives, are known to be useful depending on the application at hand [3]. Derivative-involved methods offer a larger attraction basin along with a simpler coding effort for higher-dimensional problems, whereas in derivative-free methods the area from which the initial approximations can be chosen is smaller, and the extension to higher-dimensional problems goes through a divided difference operator matrix, which is basically a dense matrix. However, the ease of not computing the derivative and, subsequently, the Jacobians, makes the application of derivative-free methods more practical in several problems [4,5,6,7].
Here, an attempt is made at developing a computational method that is efficient not only in terms of the computational efficiency index, but also in terms of larger domains for the choice of the initial guesses/approximations used to start the proposed numerical method.
Steffensen’s method [8] for solving nonlinear scalar equations has quadratic convergence for simple zeros and is given by:

$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad w_k = x_k + \beta f(x_k), \quad \beta \in \mathbb{R}\setminus\{0\}, \quad k = 0, 1, 2, \ldots, \quad (1)$$

where the two-point divided difference is defined by:

$$f[x_k, w_k] = \frac{f(w_k) - f(x_k)}{w_k - x_k}. \quad (2)$$
This scheme needs two function evaluations per cycle. Scheme (1) is an excellent basis for constructing efficient iterative methods for nonlinear equations, because it is derivative-free and contains a free parameter. This parameter can, first of all, enlarge the attraction basins of Equation (1) or of any of its descendant methods and, second, can directly improve the R-order of convergence and the efficiency index.
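To make the role of the free parameter concrete, the following Wolfram Language sketch runs the parametric scheme (1); the test function, the value of $\beta$, the starting point, and the stopping tolerance are illustrative assumptions rather than data from this paper.

```mathematica
(* Parametric Steffensen iteration, Equation (1): a minimal sketch.
   f, beta, x0, and the tolerance are illustrative assumptions. *)
f[x_] := x^3 + 4 x^2 - 10;
steffensen[x0_, beta_, maxIter_] := Module[{x = x0, w, k = 0},
  While[k < maxIter && Abs[f[x]] > 10^-50,
   w = x + beta f[x];                    (* auxiliary node w_k *)
   x = x - f[x] (w - x)/(f[w] - f[x]);   (* x_{k+1} = x_k - f(x_k)/f[x_k, w_k] *)
   k++];
  x]
steffensen[N[1, 100], 1/100, 10]  (* converges to the simple zero near 1.36523 *)
```

Varying beta in such an experiment is an easy way to observe its influence on the set of workable starting points.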
Recall that Kung and Traub conjectured that an iterative method without memory based on $m$ function evaluations per iteration attains at most the optimal convergence order $2^{m-1}$ [9,10].
The term “with memory” means that the values of the function associated with the computed approximations of the roots are reused in subsequent iterations. This is unlike the term “without memory”, in which the method only uses the current values to find the next estimate. As such, in a method with memory, the results calculated up to the desired number of iterations must be stored and then recalled as the method proceeds.
Before developing this idea to improve the speed of convergence, the efficiency index, and the attraction basins, we provide a short literature review of some existing methods with accelerated convergence order. Traub [11] proposed the following two-point method with memory of order 2.414:

$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad k = 0, 1, 2, \ldots,$$

where $w_k = x_k + \gamma_k f(x_k)$, and the self-accelerating parameter is updated as $\gamma_{k+1} = -1/f[x_k, x_{k+1}]$ or $\gamma_{k+1} = -1/f[w_k, x_{k+1}]$. This is one of the pioneering and fundamental methods with memory for solving nonlinear equations.
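Assuming the realization of the accelerator written above, namely $\gamma_{k+1} = -1/f[x_k, x_{k+1}]$, a hedged Wolfram Language sketch of Traub's method with memory reads as follows; the test function and the initial data are again illustrative.

```mathematica
(* Traub's method with memory: gamma is re-estimated from the last two
   iterates, so no extra function evaluations are required per cycle.
   f, x0, and gamma0 are illustrative assumptions; a careful implementation
   would cache f[xNew] for reuse in the next cycle. *)
f[x_] := Cos[x] - x;
traub[x0_, gamma0_, maxIter_] := Module[{x = x0, g = gamma0, w, xNew, k = 0},
  While[k < maxIter && Abs[f[x]] > 10^-50,
   w = x + g f[x];
   xNew = x - f[x] (w - x)/(f[w] - f[x]);
   g = -(xNew - x)/(f[xNew] - f[x]);  (* gamma_{k+1} = -1/f[x_k, x_{k+1}] *)
   x = xNew; k++];
  x]
traub[N[1, 200], 1/100, 8]  (* converges to the simple zero near 0.739085 *)
```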
Džunić [12] suggested an effective bi-parametric iterative method with memory, with R-order of convergence 3.56, as follows:

$$w_k = x_k + \gamma_k f(x_k), \quad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)}, \quad k = 0, 1, 2, \ldots \quad (3)$$
Moreover, Džunić and Petković [13] derived the following cubically convergent Steffensen-like method with memory:

$$w_k = x_k + \gamma_k f(x_k), \quad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k]}, \quad \gamma_k = -\frac{1}{N_2'(x_k)}, \quad (4)$$

where $N_2(t) = N_2(t; x_k, w_{k-1}, x_{k-1})$ is the second-order Newton interpolation polynomial through the current and previously saved points.
Various Steffensen-type methods have been proposed in [14,15,16,17].
In fact, it is possible to improve the performance of the aforementioned methods by considering several more sub-steps, i.e., by improving the computational efficiency index via multi-step iterative methods. However, this procedure is computationally more burdensome. Thus, the motivation here is to ask whether it is possible to improve the performance of numerical methods in terms of the computational efficiency index, the basins of attraction, and the rate of convergence without adding more sub-steps, that is, to propose a numerical method that remains a one-step solver.
Hence, the aim of this paper is to design a one-step method with memory which is quite fast and has an improved efficiency index, based on a modification of the one-step method of Steffensen (Equation (1)), increasing the convergence order to 3.90057 without any additional functional evaluations.
The rest of this paper is organized as follows. In Section 2, we develop the one-point Steffensen-type iterative scheme (Equation (1)) with memory, building on the variant proposed in [18]. We present the main goal in Section 3 by approximating the acceleration parameters involved in our contributed scheme with Newton's interpolating polynomials and, thus, improving the convergence R-order. Numerical reports are given in Section 4 to confirm the theoretical results. Some discussions are provided in Section 5.
2. An Iterative Method
The following iterative method without memory was proposed in [18]:

$$w_k = x_k + \gamma f(x_k), \quad x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, w_k] + p\, f(w_k)}, \quad k = 0, 1, 2, \ldots, \quad (5)$$

with the following error equation, which improves the performance of (1) in terms of having more free parameters:

$$e_{k+1} = \left(1 + \gamma f'(\alpha)\right)\left(c_2 + p\right) e_k^2 + O(e_k^3), \quad (6)$$

where $e_k = x_k - \alpha$ and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$. Using the error Equation (6), to derive Steffensen-type iterative methods with memory, we calculate the parameters $\gamma_k$ and $p_k$ by the formulas:

$$\gamma_k = -\frac{1}{N_4'(x_k)}, \quad p_k = -\frac{N_5''(w_k)}{2 N_5'(w_k)}, \quad (7)$$

for $k = 1, 2, 3, \ldots$, where $\gamma_k$ and $p_k$ are approximations to $-1/f'(\alpha)$ and $-c_2$, respectively, and $\alpha$ is a simple zero of $f$. In fact, Equation (7) shows a way to minimize the asymptotic error constant of Equation (6) by driving this coefficient closer and closer to zero as the iterative method converges to the true solution.
The initial estimates $\gamma_0$ and $p_0$ must be chosen before starting the iterative process. We state the Newton interpolating polynomials of fourth and fifth degree passing through the saved points as follows:

$$N_4(t) = N_4(t; x_k, w_{k-1}, x_{k-1}, w_{k-2}, x_{k-2}), \quad N_5(t) = N_5(t; w_k, x_k, w_{k-1}, x_{k-1}, w_{k-2}, x_{k-2}). \quad (8)$$

Recall that such an interpolation polynomial for a given set of data points is also known as Newton's divided-difference interpolation polynomial, because its coefficients are calculated using Newton's divided differences. For instance, here the set of data points for $N_4$ is $\{x_{k-2}, w_{k-2}, x_{k-1}, w_{k-1}, x_k\}$, together with the corresponding values of $f$.
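As a concrete illustration, such polynomials and the derivatives required in Equation (7) can be produced with the built-in InterpolatingPolynomial command; the nodes and the function below are illustrative stand-ins for the saved points.

```mathematica
(* Building a Newton-type interpolating polynomial through saved nodes and
   extracting the derivatives used by the accelerators in Equation (7).
   f and the node values are illustrative assumptions. *)
f[x_] := x^3 + 4 x^2 - 10;
nodes = {129/100, 131/100, 134/100, 136/100, 1365/1000}; (* stand-ins for x_{k-2}, ..., x_k *)
p4 = InterpolatingPolynomial[{#, f[#]} & /@ nodes, t];
{D[p4, t], D[p4, {t, 2}]} /. t -> Last[nodes]  (* N4'(x_k) and N4''(x_k) *)
```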
Now, applying this modification to Equation (5), we present the following scheme:

$$\begin{cases} \gamma_k = -\dfrac{1}{N_4'(x_k)}, \qquad p_k = -\dfrac{N_5''(w_k)}{2 N_5'(w_k)},\\[2mm] w_k = x_k + \gamma_k f(x_k),\\[2mm] x_{k+1} = x_k - \dfrac{f(x_k)}{f[x_k, w_k] + p_k f(w_k)}, \qquad k = 0, 1, 2, \ldots \end{cases} \quad (9)$$

Note that the accelerator parameters are updated and then used in the iterative method right after the second iteration, viz., for $k \geq 2$. This means that the third line of Equation (9) is imposed at the beginning, and after that the computed values are stored and used in the subsequent iterates. For $k = 1$, the degrees of the Newton interpolation polynomials would be two and three. However, for $k \geq 2$, interpolations of degrees four and five, as given in Equation (8), can be used to increase the convergence order.
This acceleration of convergence is attained without the use of any additional functional evaluations and without imposing more steps. Thus, the proposed scheme with memory (Equation (9)) is attractive for solving nonlinear equations.
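Under the reconstruction above, one possible organization of the whole process is sketched below in Wolfram Language; the test function, the choices $\gamma_0 = p_0 = 1/100$, and the starting point are illustrative assumptions, and for brevity the sketch re-evaluates $f$ at the saved nodes where a careful implementation would reuse the stored values.

```mathematica
(* A sketch of the with-memory scheme (9): gamma_k and p_k are re-estimated
   from the saved nodes via Newton interpolation, cf. Equations (7)-(8).
   f, x0, g0, and p0 are illustrative assumptions. *)
f[x_] := x^3 + 4 x^2 - 10;
m4[x0_, g0_, p0_, maxIter_] := Module[
  {x = x0, g = g0, p = p0, w, dd, mem = {}, nodes, n, t},
  Do[
   If[Length[mem] >= 4,   (* true right after the second iteration *)
    nodes = Append[Take[mem, -4], x];  (* x_{k-2}, w_{k-2}, x_{k-1}, w_{k-1}, x_k *)
    n = InterpolatingPolynomial[{#, f[#]} & /@ nodes, t];
    g = -1/(D[n, t] /. t -> x)];                  (* gamma_k = -1/N4'(x_k) *)
   w = x + g f[x];
   If[Length[mem] >= 4,
    n = InterpolatingPolynomial[{#, f[#]} & /@ Append[nodes, w], t];
    p = (-D[n, {t, 2}]/(2 D[n, t])) /. t -> w];   (* p_k = -N5''(w_k)/(2 N5'(w_k)) *)
   dd = (f[w] - f[x])/(w - x);                    (* f[x_k, w_k] *)
   mem = Join[mem, {x, w}];                       (* save the nodes *)
   x = x - f[x]/(dd + p f[w]),                    (* main step of (9) *)
   {maxIter}];
  x]
m4[N[1, 300], 1/100, 1/100, 7]
```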
3. Convergence Analysis
In this section, we derive the convergence order of Equation (9) using Taylor series expansions and extensive symbolic computations.
Theorem 1. Let the function $f$ be sufficiently differentiable in a neighborhood of its simple zero $\alpha$, and let the initial approximation $x_0$ be sufficiently close to $\alpha$. Then, the R-order of convergence of the one-step method with memory (Equation (9)) is 3.90057.
Proof. The proof is based on the definition of the error $e_k = x_k - \alpha$ of the $k$-th estimate with respect to the exact zero, together with symbolic computations. Let the sequences $\{x_k\}$ and $\{w_k\}$ have convergence R-orders $p$ and $q$, respectively. Namely,

$$e_{k+1} \sim D_{k,p}\, e_k^{\,p}, \quad (10)$$

and:

$$e_{k,w} \sim D_{k,q}\, e_k^{\,q}, \quad (11)$$

where $e_{k,w} = w_k - \alpha$ and the $D$'s stand for the asymptotic error constants. Therefore, using Equations (10) and (11), we have:

$$e_{k+1} \sim D_{k,p}\left(D_{k-1,p}\, e_{k-1}^{\,p}\right)^{p} \sim e_{k-1}^{\,p^2}, \quad (12)$$

and:

$$e_{k,w} \sim D_{k,q}\left(D_{k-1,p}\, e_{k-1}^{\,p}\right)^{q} \sim e_{k-1}^{\,pq}. \quad (13)$$
The error equations associated with the accelerating parameters $\gamma_k$ and $p_k$ of Equation (9), which follow from the interpolation errors of $N_4$ and $N_5$ at the saved nodes, can now be written down as Equations (14) and (15). On the other hand, by using a symbolic language and extensive computations, one can find the error terms (16) and (17) for the factors involved in the fundamental error Equation (6). Combining Equations (14)–(17), we arrive at the relations (18) and (19), which express $e_{k,w}$ and $e_{k+1}$ in terms of powers of $e_{k-1}$.
We now compare the exponents of $e_{k-1}$ on the left- and right-hand sides of Equations (12) and (19), and of Equations (13) and (18), respectively. Thus, we obtain the nonlinear system of Equation (20) for the final R-orders. The positive real solution of (20) is $p = 3.90057$, together with the corresponding value of $q$. Therefore, the convergence R-order of Equation (9) is 3.90057. □
Since improving the convergence R-order is useless if the whole computational method becomes expensive, researchers basically judge a new scheme by its computational efficiency index, a tool that provides a trade-off between the whole computational cost and the attained R-order. Assuming that the cost of each functional evaluation is one, we can use the definition of the efficiency index as $E = p^{1/m}$, where $p$ is the R-order and $m$ is the whole computational cost [19].
The computational efficiency index of Equation (9) is $3.90057^{1/2} \approx 1.97499$, which is clearly higher than the efficiency indices of Newton's and Steffensen's methods ($2^{1/2} \approx 1.41421$), of Equation (3) ($3.56^{1/2} \approx 1.88680$), and of Equation (4) ($3^{1/2} \approx 1.73205$). However, this improved computational efficiency is reported by ignoring the number of multiplications and divisions per computing cycle. Even when a slight weight is assigned to such calculations, one once again obtains an improved computational efficiency for (9) in contrast to the existing schemes of the same type.
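The comparison can be reproduced directly from the definition $E = p^{1/m}$, with $m = 2$ function evaluations per cycle for every scheme involved; the orders used below are the ones quoted in this paper, including the reconstructed value 3.56 for Equation (3).

```mathematica
(* Efficiency indices E = p^(1/m); every one-step scheme here uses m = 2
   function evaluations per cycle. The order 3.56 for Equation (3) is the
   value assumed in the text above. *)
eff[p_, m_] := p^(1/m);
N[{eff[2, 2],         (* Newton's and Steffensen's methods *)
   eff[3, 2],         (* Equation (4) *)
   eff[3.56, 2],      (* Equation (3) *)
   eff[3.90057, 2]}]  (* Equation (9): about 1.97499 *)
```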
4. Numerical Computations
In this section, we compare the convergence behavior of Equation (9) with three well-known iterative methods for solving four test problems, numerically carried out in Mathematica 11.1 [20]. We denote Equations (1), (3), (5) and (9) by SM, DZ, PM, and M4, respectively. We compare our method with the other methods using the same small initial values of the free parameters in all cases (see the end of this section). Here, the computational order of convergence (coc) has been computed by the following formula [21]:

$$\text{coc} \approx \frac{\ln\left(|x_{k+1} - x_k| \,/\, |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| \,/\, |x_{k-1} - x_{k-2}|\right)}.$$
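Given the last few iterates of any of the methods, this formula can be evaluated as in the sketch below; the sample sequence is artificial, constructed so that its errors shrink with an exponent close to 4.

```mathematica
(* coc from the last four iterates, following the formula above.
   The sample iterates are artificial illustrative data. *)
coc[xs_List] := Module[{d = Abs[Differences[xs]]},
  Log[d[[-1]]/d[[-2]]]/Log[d[[-2]]/d[[-3]]]]
N[coc[1 + {10^-1, 10^-4, 10^-16, 10^-62}]]  (* about 4.0 *)
```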
Recall that, by using a complex initial approximation, one is able to find the complex roots of nonlinear equations using (9).
Experiment 1. Let us consider the first nonlinear test function, together with its initial approximation and simple zero.

Experiment 2. We take into account the second nonlinear test function, together with its initial approximation and simple zero.

Experiment 3. We now consider the third test problem, together with its initial approximation and simple zero.

Experiment 4. The last test problem, together with its initial approximation and simple zero, is taken into consideration.

Table 1, Table 2, Table 3 and Table 4 show that the proposed Equation (9) is of order 3.90057 and is clearly more advantageous than the other methods listed, owing to its higher speed and better accuracy.
For better comparison, we present the absolute residual errors $|f(x_k)|$ for each test function, displayed in Table 1, Table 2, Table 3 and Table 4. Additionally, we compute the computational order of convergence. Note that we have used multiple-precision arithmetic with 2000 significant digits in order to observe the asymptotic error constants and the coc as clearly as possible.
The results obtained by our proposed method M4 are efficient and show better performance than the other existing methods.
A significant challenge in executing high-order nonlinear solvers is finding an initial approximation to start the iterations when high-accuracy computing is needed.
To discuss this further: mostly based on interval mathematics, one can find a close enough guess to start the process. There are also other ways to determine a real initial approximation. One idea for finding such initial guesses, given in [22], is based on the useful Mathematica 11.1 command NDSolve[] applied to the nonlinear function on the working interval $[a, b]$. Following this, a piece of Mathematica code along the lines below could give a list of initial approximations in the working interval for Experiment 4.
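A minimal sketch of such an NDSolve[]-based zero locator follows; the function f and the endpoints a and b are illustrative stand-ins for the actual data of Experiment 4.

```mathematica
(* A sketch of an NDSolve[]-based locator of initial approximations:
   integrate y' == f' with y(a) == f(a) and Sow every crossing y == 0.
   f, a, and b are illustrative stand-ins for the data of Experiment 4. *)
f[x_] := Sin[2 x] - x/3;
{a, b} = {-10, 10};
guesses = Reap[
    NDSolve[{y'[x] == f'[x], y[a] == f[a],
      WhenEvent[y[x] == 0, Sow[x]]}, y, {x, a, b}]][[2, 1]]
```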
To check the position of the zero and the graph of the function, we can use the following code to obtain
Figure 1.
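For instance, a plotting command along the following lines, reusing f, a, b, and guesses from the sketch above, would serve; the styling options are assumptions.

```mathematica
(* Visualizing the function and the located zeros (cf. Figure 1);
   relies on f, a, b, and guesses from the previous listing. *)
Plot[f[x], {x, a, b}, PlotRange -> All, AxesLabel -> {"x", "f(x)"},
 Epilog -> {Red, PointSize[Medium], Point[{#, 0}] & /@ guesses}]
```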
As a harder test problem, for another nonlinear function we can simply find a list of estimates serving as initial guesses using the above piece of code. The plot of the function in this case is shown in Figure 2.
We observe that the two self-accelerating parameters $\gamma_k$ and $p_k$ have to be selected before the iterative procedure is started; they are then calculated by using the information available from the current and previous iterations (see, e.g., [23]). The initial estimates $\gamma_0$ and $p_0$ should be chosen as small positive values, and we use such values whenever required.
After a number of iterations, the (nonzero) free parameters start converging to the particular values that make the leading coefficient of the error Equation (6) zero, which in turn makes the numerical scheme converge with a high R-order.
5. Ending Comments
In this paper, we have constructed a one-step method with memory for solving nonlinear equations. By using two self-accelerating parameters, our scheme, equipped with Newton's interpolation polynomials and requiring no additional functional evaluations, possesses the high computational efficiency index 1.97499, which is higher than those of many existing methods.
The efficacy of our scheme is confirmed by several numerical examples. The results in Table 1, Table 2, Table 3 and Table 4 show that our method M4 is valuable for finding an adequate estimate of the exact solution of nonlinear equations.