1. Introduction
Since exact solutions for nonlinear equations are rarely available, we usually resort to their numerical solutions. To locate the desired numerical roots, many authors [1,2,3,4,5,6,7,8,9] have developed high-order iterative methods, including optimal eighth-order ones [10,11,12,13,14,15].
This paper is devoted to devising a class of sixth-order iterative root-finders for nonlinear systems by employing the three-step weighted Jarratt-like method below:
where the parameter is to be determined later and the weight functions are analytic [16,17,18] in a neighborhood of 1. Note that Scheme (1) uses two functional values as well as two derivatives. We could certainly introduce generic weight functions using one derivative and three functional values to develop general optimal eighth-order methods that cover the existing ones for the zero of a given scalar function. However, extending such an approach to a nonlinear system requires different weight functions. For a unified analysis of both scalar and vector functions, we aim to develop a family of Jarratt-like sixth-order iterative methods that maintains the same form of weight functions, with two derivatives as well as two functional values. This extension to nonlinear systems is the main strength of this paper.
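Although the weight functions and the precise form of Scheme (1) are specified later, the construction it extends is the classical fourth-order Jarratt method; a minimal scalar sketch of that classical method (an illustration only, not the proposed scheme) is:

```python
def jarratt(f, fprime, x0, tol=1e-12, max_iter=50):
    """Classical fourth-order Jarratt method for a scalar equation f(x) = 0.

    Each iteration costs one function value and two derivative values,
    matching the evaluation pattern (two derivatives) discussed above.
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        y = x - (2.0 / 3.0) * fx / dfx            # two-thirds Newton step
        dfy = fprime(y)
        # weighted correction built from f'(x) and f'(y)
        x = x - 0.5 * (3.0 * dfy + dfx) / (3.0 * dfy - dfx) * fx / dfx
    return x

# Example: the real cube root of 2
root = jarratt(lambda t: t**3 - 2.0, lambda t: 3.0 * t**2, 1.5)
```

For a linear f the correction factor collapses to 1 and the step is exactly Newton's, which is one quick sanity check of the formula.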
The analysis presented here is robust enough to cover most existing studies on higher-order root-finders using two derivatives and two function values for both scalar and vector equations. The results of Theorem 1 give us not only fairly generic scalar-function solvers, but also the advantage of extending to a nonlinear system of any finite dimension. Such an extension is characterized by Theorem 2, to be studied in this analysis.
Our major aim is not only to design a class of sixth-order methods by fully specifying the algebraic structure of the generic weight functions, but also to investigate their basins of attraction behind the extraneous fixed points [19] when applied to polynomials. The last sub-step of (1), in the form of a weighted Newton's method, is clearly more convenient in dealing with extraneous fixed points, which are the roots of the weight function. The extraneous fixed points may lead us to attractive, indifferent, repulsive and chaotic orbits via the related basins of attraction.
Section 2 investigates the main theorem regarding the convergence behavior with the desired forms of weight functions, while Section 3 deals with special cases of weight functions that cover many of the existing studies using two derivatives and two functional evaluations. Section 4 discusses the computational and long-term orbit behavior of the proposed iterative methods for scalar functions. Section 5 presents numerical experiments in a d-dimensional Euclidean space by solving systems of nonlinear vector equations encountered in real-life applications. In addition, computational efficiency is addressed, with issues related to the accuracy and applicability of the proposed methods. Concluding remarks are stated in Section 6.
5. Extension to a Family of the Sixth-Order Methods for Nonlinear Systems of Equations
Let the vector function under consideration have a zero α and be holomorphic in a neighborhood of α. Its Taylor expansion about α easily gives an expansion whose matrix coefficients are defined order by order. For notational convenience, we drop the subscript n for the time being. We observe that the coefficient matrices are square, of the same size as the system. From (30), we find that the truncated expansion defines a polynomial in e with matrix coefficients (independent of x). Hence, the expansion of the derivative is easily seen, where I is the identity matrix of matching size. Its inverse can be found by identifying the unknown coefficient matrices from the defining relation. Consequently, we find the inverse expansion with the coefficients so determined.
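The order-by-order identification of the inverse described above is a truncated Neumann-type series. A small numerical check of the idea, with stand-in coefficient matrices of my own choosing (not the paper's notation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A2 = rng.standard_normal((n, n))      # stand-in coefficient matrices
A3 = rng.standard_normal((n, n))
e = 1e-3                              # small error-like scalar

I = np.eye(n)
B = A2 * e + A3 * e**2                # higher-order part of the expansion
M = I + B                             # truncated Jacobian-like expansion

# Order-by-order inverse: (I + B)^{-1} = I - B + B@B - B@B@B + O(B^4)
M_inv_approx = I - B + B @ B - B @ B @ B

# agreement with the exact inverse is of order ||B||^4
err = np.linalg.norm(M_inv_approx - np.linalg.inv(M))
```

With e = 1e-3 the residual `err` is far below single-precision noise, confirming that matching coefficients order by order reproduces the inverse up to the truncation order.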
Additional computations show the corresponding expansions of the remaining intermediate quantities in powers of e. Theorem 1 suggests that we take the weight functions as matrix polynomials of degree at most three. Equating the coefficients of the first- and second-order terms yields the stated relations among the weight-function coefficients. Now, we annihilate the coefficients of the terms up to the fifth order with the use of (34) by taking the set of coefficients below:
The discussions thus far lead us to the following theorem for nonlinear systems of equations.
Theorem 2. Let the vector function have a simple root α and be sufficiently Fréchet differentiable in Ω containing α. Let an initial guess be chosen close to α. Let the weight functions be matrix functions sufficiently Fréchet differentiable in a neighborhood of I, defined as stated. If the stated conditions on their coefficients are satisfied, then iterative scheme (1) reduces to a family of sixth-order methods satisfying the error equation below.
Equation (37) clearly reduces to (2) for a scalar function by identifying the matrix quantities with their scalar counterparts. In what follows, we employ several test examples for the zeros of vector-valued functions to verify the convergence behavior claimed here. In terms of the Euclidean norm, we display the error sizes, the residual errors, and the ACOC under the stated error criterion within 20 iterations.
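The ACOC reported in the tables can be computed from three consecutive iterate differences; a minimal helper of my own (not the paper's code), assuming the iterates are stored in a list:

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (ACOC) from a list of
    scalar or vector iterates:

        ACOC_k = ln(d_{k+1}/d_k) / ln(d_k/d_{k-1}),  d_k = ||x_{k+1} - x_k||.
    """
    d = [float(np.linalg.norm(np.atleast_1d(iterates[k + 1])
                              - np.atleast_1d(iterates[k])))
         for k in range(len(iterates) - 1)]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Synthetic quadratically convergent sequence (e_{k+1} = e_k^2) for checking
orders = acoc([1e-1, 1e-2, 1e-4, 1e-8, 1e-16])
```

On the synthetic sequence the last ACOC value is close to 2, as expected for quadratic convergence; the same helper applied to iterates of (1) should approach 6.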
Test Example 1
We consider a nonlinear algebraic vector equation defined as follows: The exact solution is known in closed form. We solve (38) with an initial guess vector by method (1), and find the results in Table 5. We observe that the ACOC approaches 6, which is the theoretical order of convergence.
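The vector form of the base Jarratt step underlying (1) can be sketched with NumPy. This is a hedged illustration of the classical fourth-order Jarratt method for systems, applied to an assumed toy system with root (1, 1); it is not the paper's sixth-order member:

```python
import numpy as np

def jarratt_system(F, J, x0, tol=1e-12, max_iter=50):
    """Fourth-order Jarratt iteration for F(x) = 0 in vector form:

        y_n     = x_n - (2/3) F'(x_n)^{-1} F(x_n)
        x_{n+1} = x_n - (1/2) [3F'(y_n) - F'(x_n)]^{-1}
                            (3F'(y_n) + F'(x_n)) F'(x_n)^{-1} F(x_n)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        u = np.linalg.solve(Jx, Fx)        # Newton direction
        y = x - (2.0 / 3.0) * u
        Jy = J(y)
        v = np.linalg.solve(3.0 * Jy - Jx, (3.0 * Jy + Jx) @ u)
        x = x - 0.5 * v
    return x

# Assumed toy system with root (1, 1): F(x) = (x0^2 - x1, x1^2 - x0)
F = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - x[0]])
J = lambda x: np.array([[2.0 * x[0], -1.0], [-1.0, 2.0 * x[1]]])
root = jarratt_system(F, J, [1.2, 1.1])
```

Note that each iteration needs one function value, two Jacobians, and two linear solves rather than an explicit matrix inverse, which is the cost pattern the efficiency discussion below relies on.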
Test Example 2
We consider a nonlinear ODE boundary-value problem given below: The exact solution is known in closed form. With the use of the central finite-difference method, the first and second derivatives are approximated by the usual divided differences, where N is the number of subdivisions of the interval. It can be shown that both approximations are second-order accurate in view of the Taylor expansion about x. This discretization yields the algebraic equations with 6 unknowns, together with the prescribed boundary conditions. Further computation after selecting the discretization parameters gives us a nonlinear algebraic vector equation of the form:
After solving (42) with an initial guess vector by the typical method LK1, we find the results in Table 6 and Figure 5. It is seen that the ACOC approaches 6, which is the theoretical order of convergence.
The errors at the internal nodes are, respectively, given by: As a remark, we should note that the numerical solution by the central finite-difference method is accurate only to within the discretization error of the scheme.
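The discretize-then-solve procedure of this example can be sketched on an assumed model BVP — u'' = 1.5u², u(0) = 4, u(1) = 1, with exact solution 4/(1+x)²; this is not the paper's problem — using central differences and a plain Newton iteration standing in for method (1):

```python
import numpy as np

# Assumed model BVP: u'' = 1.5*u^2, u(0) = 4, u(1) = 1, exact u = 4/(1+x)^2.
N = 20                       # number of subintervals
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
ua, ub = 4.0, 1.0            # boundary values

def residual(u):             # u holds the N-1 interior unknowns
    full = np.concatenate(([ua], u, [ub]))
    return (full[2:] - 2.0 * full[1:-1] + full[:-2]) / h**2 \
        - 1.5 * full[1:-1]**2

def jacobian(u):             # tridiagonal Jacobian of the discretized system
    n = len(u)
    Jm = np.diag(np.full(n, -2.0 / h**2) - 3.0 * u)
    Jm += np.diag(np.full(n - 1, 1.0 / h**2), 1)
    Jm += np.diag(np.full(n - 1, 1.0 / h**2), -1)
    return Jm

u = np.linspace(ua, ub, N + 1)[1:-1]      # linear initial guess
for _ in range(30):
    du = np.linalg.solve(jacobian(u), residual(u))
    u -= du
    if np.linalg.norm(du) < 1e-12:
        break

# discretization error versus the exact solution is O(h^2)
err = np.max(np.abs(u - 4.0 / (1.0 + x[1:-1])**2))
```

The algebraic root is resolved to machine precision, yet `err` remains at the O(h²) level of the central-difference scheme — exactly the accuracy limitation remarked on above.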
Test Example 3
A two-dimensional nonlinear reaction-diffusion equation for the concentration of the substance under consideration in a bounded domain with a continuous boundary is represented by an initial boundary-value problem: where the diffusion coefficient appears together with a positive constant a, a function g continuous on the boundary, and the Laplacian operator. For brevity of analysis, we take the unit square region. We are interested in steady-state solutions to (43), which lead us to elliptic partial differential equations with Dirichlet boundary conditions as follows:
By using central divided differences with a uniform step in each component of the space vector, we discretize (44) into a nonlinear system of equations with 25 nodes, 9 of which constitute interior nodal variables, while the remaining 16 are boundary nodes. As a result, we obtain a nonlinear algebraic vector equation defined by: where I is the identity matrix of matching size.
We solve the resulting system with an initial guess vector by the typical method LK1, and find the results in Table 7. It is evident that the ACOC reaches 6, the theoretical order of convergence. As can be seen in Table 7, some members of the family appear to converge more quickly than others. The 9 interior nodal values of the steady-state solution are illustrated, with adjacent nodal points connected by straight lines, in Figure 6.
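The discretization pattern of this example — a 5-point Laplacian on a 5×5 grid with 9 interior unknowns — can be sketched on an assumed model equation (Δu = u² with u = 1 on the boundary; the paper's exact coefficients are not reproduced here), with a plain Newton iteration standing in for method LK1:

```python
import numpy as np

# 5x5 grid on the unit square: h = 1/4, 3x3 = 9 interior unknowns.
m, h = 3, 0.25
T = np.diag(np.full(m, -2.0)) \
    + np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1)
I = np.eye(m)
L = (np.kron(I, T) + np.kron(T, I)) / h**2     # 9x9 discrete Laplacian

# Boundary contribution: interior nodes next to the boundary absorb u = 1
b = np.zeros(m * m)
for i in range(m):
    for j in range(m):
        k = i * m + j
        edges = (i == 0) + (i == m - 1) + (j == 0) + (j == m - 1)
        b[k] = edges / h**2

def residual(u):                 # discrete form of Laplacian(u) - u^2 = 0
    return L @ u + b - u**2

def jacobian(u):
    return L - 2.0 * np.diag(u)

u = np.ones(m * m)               # initial guess taken from the boundary data
for _ in range(30):
    du = np.linalg.solve(jacobian(u), residual(u))
    u -= du
    if np.linalg.norm(du) < 1e-12:
        break
```

The Kronecker-sum construction of `L` is the standard way to assemble the 5-point stencil, and the vector `b` carries exactly the 16 boundary-node contributions described above.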
Test Example 4
A d-dimensional nonlinear equation is given by: The above nonlinear system is described in [4]. Selecting the dimension, we solve (46) with an initial guess vector for the desired root, with the results in Table 8. It is evident that the ACOC reaches 6, the theoretical order of convergence. As can be seen in Table 8, some members of the family appear to converge more quickly than others.
Computational Efficiency
The computational efficiency of an iterative method is defined by the efficiency index [30], with the order of convergence and the number d of functional evaluations per iteration. We require n scalar function evaluations for each function and n² for each Jacobian. The concept of the efficiency index E applied to a nonlinear system of vector equations has been extended to the concept of computational efficiency [4] by also counting the number of operations associated with products and quotients. Suppose that n is the size of the matrix arising in the nonlinear system. Matrix inversion and the LU-decomposition technique for solving linear systems each require a number of product-quotient operations that grows cubically in n, including the n² operations related to multiplying a matrix by a vector. Note that each method treated here solves the three sets of linear systems in (1) and involves one matrix inverse. Consequently, the total count of functional evaluations plus product-quotient operations determines the computational efficiency for each listed method.
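The tally above can be turned into a small comparison routine. The per-iteration costs below are illustrative assumptions (two function evaluations, two Jacobians, one inversion, three LU solves), not the paper's exact count:

```python
def computational_efficiency(p, n):
    """Computational efficiency E = p**(1/C) for an order-p method on an
    n-dimensional system, under assumed per-iteration costs:
      - 2 function evaluations:  2*n scalar functions
      - 2 Jacobian evaluations:  2*n**2 scalar functions
      - 1 matrix inversion:      ~n**3 product-quotient operations
      - 3 linear solves (LU):    ~3*(n**3/3 + n**2) operations
    """
    evals = 2 * n + 2 * n**2
    ops = n**3 + 3 * (n**3 / 3 + n**2)
    return p ** (1.0 / (evals + ops))

def newton_efficiency(n):
    """Same index for Newton's method (p = 2): one function evaluation,
    one Jacobian, and one LU solve per iteration."""
    C = n + n**2 + (n**3 / 3 + n**2)
    return 2.0 ** (1.0 / C)
```

With these illustrative counts, already at n = 5 the Newton index exceeds the sixth-order one, consistent with the remark below that Newton-like methods can be cheaper per unit of work even though the high-order family is far more accurate per iteration.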
Many real-life application problems include ones related to interval-arithmetic benchmarks, neurophysiology, chemical equilibrium, kinematic applications, combustion applications, and economics modeling, whose studies are described in [31]. The methods used therein are based on a second-order Newton-like approach, which may be more efficient for real-life problems in terms of speed and computational cost. On the other hand, our proposed family of sixth-order methods (1) is much more accurate than Newton-like methods, but carries more complexity owing to the high-order formulation and requires more CPU time to reach the desired solution.
One certainly has to acknowledge that declaring one method better than another should not rest on solving a single function with a randomly chosen initial guess vector and comparing the numbers of iterations to convergence.