Article

Iterative Methods with Memory for Solving Systems of Nonlinear Equations Using a Second Order Approximation

by Alicia Cordero 1,†, Javier G. Maimó 2,*, Juan R. Torregrosa 1,† and María P. Vassileva 2,†
1 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain
2 Instituto Tecnológico de Santo Domingo (INTEC), Santo Domingo 10602, Dominican Republic
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(11), 1069; https://doi.org/10.3390/math7111069
Submission received: 11 October 2019 / Revised: 30 October 2019 / Accepted: 31 October 2019 / Published: 7 November 2019

Abstract: Iterative methods for solving nonlinear equations are said to have memory when the calculation of the next iterate requires the use of more than one previous iteration. Methods with memory usually have very stable behavior, in the sense that the set of convergent initial estimations is wide. With the right choice of parameters, iterative methods without memory can increase their order of convergence significantly, becoming schemes with memory. In this work, starting from a simple method without memory, we increase its order of convergence without adding new functional evaluations, by approximating the accelerating parameter with Newton interpolating polynomials of degree one and two. Using this technique in the multidimensional case, we extend the proposed method to systems of nonlinear equations. Numerical tests are presented to verify the theoretical results, and a dynamical study of the method applied to different problems shows its stability.

1. Introduction

This paper deals with iterative methods for approximating the solutions of a nonlinear system of n equations and n unknowns, $F(x) = 0$, where $F : D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ is a nonlinear vectorial function defined in a convex set D. The main aim of this work is to design iterative methods with memory for approximating the solutions $\xi$ of $F(x) = 0$. An iterative method is said to have memory when the fixed point function G depends on more than one previous iteration, that is, the iterative expression is $x_{k+1} = G(x_k, x_{k-1}, \ldots)$.
A classical iterative scheme with memory for solving scalar equations $f(x) = 0$ ($n = 1$) is the well-known secant method, whose iterative expression is
$$x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}, \quad k = 1, 2, \ldots,$$
with $x_0$ and $x_1$ as initial estimations. This method can be obtained from Newton's scheme by replacing the derivative by the first divided difference $f[x_k, x_{k-1}]$. For $n > 1$, the secant method has the expression
$$x^{(k+1)} = x^{(k)} - [x^{(k)}, x^{(k-1)}; F]^{-1} F(x^{(k)}), \quad k = 1, 2, \ldots,$$
where $[x^{(k)}, x^{(k-1)}; F]$ is the divided difference operator, defined in [1].
It is possible to design methods with memory starting from methods without memory by introducing accelerating parameters. The idea is to introduce parameters in the original scheme and study the error equation of the method to see whether particular values of these parameters increase the order of convergence; see, for example, [2,3,4] and the references therein. Several authors have constructed iterative methods with memory for solving nonlinear systems by approximating the accelerating parameters with Newton polynomial interpolation of first degree; see, for example, the results presented by Petković and Sharma in [5] and by Narang et al. in [6]. As far as we know, higher-degree Newton polynomials have never been used in the multivariate case. In this paper, we approximate the accelerating parameters with multidimensional Newton polynomial interpolation of second degree.
In [1], Ortega and Rheinboldt introduced a general definition of the order of convergence, called the R-order, defined as follows:
$$R_p\{x_k\} = \begin{cases} \displaystyle\limsup_{k \to \infty} |e_k|^{1/k}, & \text{if } p = 1, \\[2mm] \displaystyle\limsup_{k \to \infty} |e_k|^{1/p^k}, & \text{if } p > 1, \end{cases}$$
where $e_k = x_k - \xi$, $k = 0, 1, \ldots$, and $R_p$ is called the R-factor. The R-order of convergence of an iterative method (IM) at the point $\xi$ is
$$O_R((IM), \xi) = \begin{cases} +\infty, & \text{if } R_m((IM), \xi) = 0 \text{ for all } m \in [1, +\infty), \\ \inf\{m \in [1, +\infty) : R_m((IM), \xi) = 1\}, & \text{otherwise}. \end{cases}$$
Let (IM) be an iterative method with memory that generates a sequence $\{x_k\}$ of approximations to the root $\xi$, and let us also assume that this sequence converges to $\xi$. If there exist a nonzero constant $\eta$ and nonnegative numbers $t_i$, $0 \le i \le m$, such that the inequality
$$|e_{k+1}| \le \eta \prod_{i=0}^{m} |e_{k-i}|^{t_i}$$
holds, then the R-order of convergence of (IM) satisfies the inequality
$$O_R((IM), \xi) \ge s^*,$$
where $s^*$ is the unique positive root of the equation
$$s^{m+1} - \sum_{i=0}^{m} t_i\, s^{m-i} = 0.$$
We can find the proof of this result in [1].
Now, we introduce some notation that we use in this manuscript. Let r be the order of an iterative method; then
$$e_{k+1} \sim D_{k,r}\, e_k^{\,r},$$
where $D_{k,r}$ tends to the asymptotic error constant of the iterative method when $k \to \infty$. To avoid higher-order terms of the Taylor series that do not influence the convergence order, we use the notation introduced by Traub in [2]. If $\{f_k\}$ and $\{g_k\}$ are null sequences (that is, sequences convergent to zero) and
$$\frac{f_k}{g_k} \longrightarrow C,$$
where C is a nonzero constant, we can write
$$f_k = O(g_k) \quad \text{or} \quad f_k \sim C\, g_k.$$
Let $\{x^{(k)}\}_{k \ge 0}$ be a sequence of vectors generated by an iterative method, converging to a zero $\xi$ of F with R-order greater than or equal to r. Then, according to [4], we can write
$$e_{k+1} \sim D_{(k,r)}\, e_k^{\,r},$$
where $\{D_{(k,r)}\}$ is a sequence that tends to the asymptotic error constant $D_r$ of the iterative method when $k \to \infty$, so that $e_{k+1} \sim e_k^{\,r}$.

Accelerating Parameters

We illustrate the technique of the accelerating parameters, introduced by Traub in [2], starting from a very simple method with a real parameter $\alpha$:
$$x_{k+1} = x_k - \alpha f(x_k), \quad k = 1, 2, \ldots$$
This method has first order of convergence, with error equation
$$e_{k+1} = \left(1 - \alpha f'(\xi)\right) e_k + O(e_k^2),$$
where $e_k = x_k - \xi$, $k = 0, 1, \ldots$
As is easy to observe, the order of convergence can increase up to 2 if $\alpha = 1/f'(\xi)$. Since $\xi$ is unknown, we estimate $f'(\xi)$ by approximating the nonlinear function with a Newton interpolating polynomial of degree 1 at the points $(x_k, f(x_k))$ and $(x_{k-1}, f(x_{k-1}))$:
$$N_1(t) = f(x_{k-1}) + f[x_k, x_{k-1}](t - x_{k-1}) = f(x_{k-1}) + \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}(t - x_{k-1}).$$
To construct the polynomial $N_1(t)$, it is necessary to evaluate the nonlinear function f at two points, so two initial estimates are required, $x_0$ and $x_1$. The derivative of the nonlinear function is approximated by the derivative of the interpolating polynomial, that is, $\alpha = 1/f'(\xi) \approx 1/N_1'(x_k)$, and the resulting scheme is
$$\alpha_k = \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}, \qquad x_{k+1} = x_k - \alpha_k f(x_k).$$
Substituting $\alpha_k$ in the iterative expression,
$$x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})},$$
we get the secant method [7], with order of convergence $p = \frac{1 + \sqrt{5}}{2} \approx 1.6180$.
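For illustration, the following minimal Python sketch implements this accelerated scheme; the names (secant, f, x0, x1, tol) are our own choices and not notation from the text.

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    # x_{k+1} = x_k - alpha_k f(x_k), with the accelerating parameter
    # alpha_k = (x_k - x_{k-1}) / (f(x_k) - f(x_{k-1})) = 1 / N_1'(x_k).
    for _ in range(max_iter):
        alpha = (x1 - x0) / (f(x1) - f(x0))
        x0, x1 = x1, x1 - alpha * f(x1)
        if abs(x1 - x0) < tol:
            break
    return x1

# Example: the root of f(x) = x^3 - x near 1
print(secant(lambda x: x**3 - x, 2.0, 1.9))   # approximately 1.0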

2. Modified Secant Method

A similar process can now be carried out using a Newton interpolating polynomial of second degree to approximate the parameter $\alpha$. As this is a better approximation of the nonlinear function, the order of convergence of the resulting scheme is expected to be greater than the one obtained with a polynomial of degree 1. For the modified secant method, the approximation of the parameter is $\alpha_k \approx 1/N_2'(x_k)$, where
$$N_2(t) = f(x_{k-1}) + f[x_k, x_{k-1}](t - x_{k-1}) + f[x_k, x_{k-1}, x_{k-2}](t - x_k)(t - x_{k-1}),$$
being
$$f[x_k, x_{k-1}, x_{k-2}] = \frac{f[x_k, x_{k-1}] - f[x_{k-1}, x_{k-2}]}{x_k - x_{k-2}}.$$
So,
$$N_2'(t) = f[x_k, x_{k-1}] + f[x_k, x_{k-1}, x_{k-2}]\,(2t - x_k - x_{k-1}),$$
and, evaluating this expression at $t = x_k$ and replacing it in $\alpha_k = 1/N_2'(x_k)$, an iterative method with memory is obtained:
$$\alpha_k = \left[\frac{x_k\left(f(x_{k-1}) - f(x_{k-2})\right) + x_{k-1}\left(f(x_{k-2}) - f(x_k)\right) + x_{k-2}\left(f(x_k) - f(x_{k-1})\right)}{(x_k - x_{k-2})(x_{k-2} - x_{k-1})} + f[x_k, x_{k-1}]\right]^{-1},$$
$$x_{k+1} = x_k - \alpha_k f(x_k).$$
Now, three previous points, $x_k$, $x_{k-1}$ and $x_{k-2}$, are necessary to calculate the value of the parameter $\alpha_k$, so three initial estimations are needed. A sketch of an implementation is given below.
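A direct transcription of this scheme into Python might look as follows; this is a sketch with our own naming and stopping rule, in which each iteration reuses the two stored function values, so only one new evaluation of f is made per step.

import math

def modified_secant(f, x0, x1, x2, tol=1e-12, max_iter=100):
    # (x0, x1, x2) play the roles of (x_{k-2}, x_{k-1}, x_k).
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        dd12 = (f2 - f1) / (x2 - x1)                # f[x_k, x_{k-1}]
        dd01 = (f1 - f0) / (x1 - x0)                # f[x_{k-1}, x_{k-2}]
        dd012 = (dd12 - dd01) / (x2 - x0)           # f[x_k, x_{k-1}, x_{k-2}]
        alpha = 1.0 / (dd12 + dd012 * (x2 - x1))    # alpha_k = 1 / N_2'(x_k)
        x_new = x2 - alpha * f2
        if abs(x_new - x2) < tol:
            return x_new
        x0, x1, x2 = x1, x2, x_new
        f0, f1, f2 = f1, f2, f(x_new)               # one new functional evaluation
    return x2

# Example: the root of f_1(x) = sin(x) - x^2 + 1 near 1.4096
print(modified_secant(lambda x: math.sin(x) - x**2 + 1, 1.0, 1.1, 1.2))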
In the following result, we present the order of convergence of the iterative scheme with memory (19).
Theorem 1.
(Order of convergence of the modified secant method) Let $\xi$ be a simple zero of a sufficiently differentiable function $f : D \subseteq \mathbb{R} \longrightarrow \mathbb{R}$ in an open interval D. Let $x_0$, $x_1$ and $x_2$ be initial guesses sufficiently close to $\xi$. Then, the order of convergence of method (19) is at least 1.8393, with error equation $e_{k+1} \sim c_3\, e_{k-1} e_{k-2} e_k$, where $c_j = \frac{f^{(j)}(\xi)}{j!\, f'(\xi)}$, $j = 2, 3, \ldots$
Proof.
By substituting the different terms of $\alpha_k$ by their Taylor expansions, we have
$$1 - \alpha_k f'(\xi) \sim c_3\, e_{k-1} e_{k-2}.$$
Now, from the error Equation (13),
$$e_{k+1} \sim c_3\, e_{k-1} e_{k-2} e_k.$$
Writing $e_k$ and $e_{k-1}$ in terms of $e_{k-2}$, we obtain
$$e_{k+1} \sim e_{k-2}\, e_{k-2}^{\,p}\, e_{k-2}^{\,p^2} = e_{k-2}^{\,p^2 + p + 1},$$
where p denotes the order of convergence of method (19). But, if p is the order of (19), then $e_{k+1} \sim e_{k-2}^{\,p^3}$. Therefore, p must satisfy $p^3 = p^2 + p + 1$. The unique positive root of this cubic polynomial is $p \approx 1.8393$ and, by applying the result of Ortega and Rheinboldt, this is the order of convergence of scheme (19). □
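As a quick numerical check of this root, the following sketch (using numpy, with our own naming) confirms the value quoted in the theorem.

import numpy as np
# Roots of p^3 - p^2 - p - 1 = 0; the unique positive real one is the order.
r = np.roots([1, -1, -1, -1])
print(r[np.isreal(r)].real)   # [1.83928676]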
The efficiency index, defined by Ostrowski in [8], depends on the number of functional evaluations per iteration, d, and on the order of convergence p, in the form
$$I = p^{1/d};$$
so, as the modified secant method uses only one new functional evaluation per iteration, its efficiency index is 1.8393.

Nonlinear Systems

Method (12) can be extended for approximating the roots of a nonlinear system $F(x) = 0$, where $F : D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ is a vectorial function defined in a convex set D:
$$x^{(k+1)} = x^{(k)} - \alpha F(x^{(k)}).$$
It is easy to prove that this method has linear convergence, with error equation
$$e_{k+1} = \left(I - \alpha F'(\xi)\right) e_k + O(\|e_k\|^2),$$
being $e_k = x^{(k)} - \xi$, $k = 0, 1, \ldots$
In the multidimensional case, we can approximate the parameter $\alpha = [F'(\xi)]^{-1}$ with a multivariate Newton interpolating polynomial. If we use a polynomial of first degree,
$$N_1(t) = F(x^{(k-1)}) + [x^{(k)}, x^{(k-1)}; F](t - x^{(k-1)}),$$
then $N_1'(x^{(k)}) = [x^{(k)}, x^{(k-1)}; F]$ and $\alpha$ is approximated by $\alpha_k = [x^{(k)}, x^{(k-1)}; F]^{-1}$, so the resulting iterative method is
$$x^{(k+1)} = x^{(k)} - [x^{(k)}, x^{(k-1)}; F]^{-1} F(x^{(k)}),$$
which corresponds to the secant method in the multidimensional case.
To extend the modified secant method (19) to this context, we need a multidimensional Newton interpolating polynomial of second degree,
$$N_2(t) = F(x^{(k-1)}) + [x^{(k)}, x^{(k-1)}; F](t - x^{(k-1)}) + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F](t - x^{(k)})(t - x^{(k-1)}),$$
where
$$[\cdot\,, \cdot\,, \cdot\,; F] : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R}^n \longrightarrow \mathcal{B}(\mathbb{R}^n \times \mathbb{R}^n, \mathbb{R}^n)$$
is such that $[x, y, z; F](x - z) = [x, y; F] - [y, z; F]$, where $\mathcal{B}$ denotes the set of bilinear mappings. Moreover,
$$N_2'(t) = [x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F]\,\big((t - x^{(k)}) + (t - x^{(k-1)})\big);$$
evaluating the derivative at $x^{(k)}$,
$$N_2'(x^{(k)}) = [x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F]\,(x^{(k)} - x^{(k-1)}).$$
So, the iterative expression of the resulting method with memory is
$$x^{(k+1)} = x^{(k)} - \left[[x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F]\,(x^{(k)} - x^{(k-1)})\right]^{-1} F(x^{(k)}), \quad k = 2, 3, \ldots$$
The divided difference operator can be expressed in its integral form by using the Genocchi–Hermite formula for the divided difference of first order [9],
$$[x, x + h; F] = \int_0^1 F'(x + th)\, dt.$$
As
$$F'(x + th) = F'(x) + F''(x)\,th + \frac{1}{2}F'''(x)\,(th)^2 + \cdots,$$
then
$$[x, x + h; F] = F'(x) + \frac{h}{2}F''(x) + \frac{h^2}{6}F'''(x) + \cdots,$$
and, in general, the divided difference of order k can be calculated by [10]
$$[x_0, x_1, \ldots, x_k; F] = \int_0^1 \cdots \int_0^1 t_1^{k-1}\, t_2^{k-2} \cdots t_{k-1}\, F^{(k)}(\mu)\, dt_1\, dt_2 \cdots dt_k,$$
where
$$\mu = x_0 + t_1(x_1 - x_0) + t_1 t_2(x_2 - x_1) + \cdots + t_1 t_2 \cdots t_k(x_k - x_{k-1}).$$
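The following Python sketch assembles scheme (31) from one standard componentwise realization of the first-order divided difference matrix, which satisfies $[x, y; F](x - y) = F(x) - F(y)$. To evaluate the second-order term, we use the identity $[x^{(k)}, x^{(k-1)}, x^{(k-2)}; F](x^{(k)} - x^{(k-1)}) = [x^{(k)}, x^{(k-2)}; F] - [x^{(k-2)}, x^{(k-1)}; F]$, which assumes the divided difference operator is symmetric in its nodes; this computational route, together with all names and tolerances, is our own choice, not a prescription of the text.

import numpy as np

def divided_difference(F, x, y):
    # Componentwise first-order divided difference matrix [x, y; F]:
    # column j uses F at (x_1, ..., x_j, y_{j+1}, ..., y_n) and
    # (x_1, ..., x_{j-1}, y_j, ..., y_n), so [x, y; F](x - y) = F(x) - F(y)
    # by telescoping.
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        z_hi = np.concatenate((x[:j + 1], y[j + 1:]))
        z_lo = np.concatenate((x[:j], y[j:]))
        M[:, j] = (F(z_hi) - F(z_lo)) / (x[j] - y[j])
    return M

def modified_secant_system(F, x0, x1, x2, tol=1e-10, max_iter=100):
    # Scheme (31): x^{(k+1)} = x^{(k)} - M_k^{-1} F(x^{(k)}), with
    # M_k = [x2, x1; F] + [x2, x0; F] - [x0, x1; F]  (node-symmetry assumption).
    for _ in range(max_iter):
        M = (divided_difference(F, x2, x1)
             + divided_difference(F, x2, x0)
             - divided_difference(F, x0, x1))
        x_new = x2 - np.linalg.solve(M, F(x2))
        if np.linalg.norm(x_new - x2) < tol:
            return x_new
        x0, x1, x2 = x1, x2, x_new
    return x2

# Example on F_4(x1, x2) = (x1^2 - 1, x2^2 - 1):
F4 = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
print(modified_secant_system(F4, np.array([0.4, 0.4]),
                             np.array([0.45, 0.45]), np.array([0.5, 0.5])))
# -> approximately [1. 1.]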
Theorem 2.
(Order of convergence of the modified secant method in the multidimensional case) Let $\xi$ be a zero of a sufficiently differentiable function $F : D \subseteq \mathbb{R}^n \longrightarrow \mathbb{R}^n$ in a convex set D, such that $F'(\xi)$ is nonsingular. Let $x^{(0)}$, $x^{(1)}$ and $x^{(2)}$ be initial guesses sufficiently close to $\xi$. Then, the order of convergence of method (31) is at least 1.8393, with error equation
$$e_{k+1} \sim c_3\, e_{k-1} e_{k-2} e_k,$$
being $c_k = \frac{1}{k!}\,[F'(\xi)]^{-1} F^{(k)}(\xi)$, $k = 2, 3, \ldots$
Proof.
For the second-order divided difference, we write (35) for k = 2 in the following way:
$$[x, x + h, x + k; F] = \int_0^1 \int_0^1 t_1\, F''\big(x + t_1 h + t_1 t_2 (k - h)\big)\, dt_1\, dt_2 = \frac{1}{2}F''(x) + \frac{1}{3}F'''(x)\,\frac{h + k}{2} + \frac{1}{8}F^{(iv)}(x)\,\frac{h^2 + k^2 + hk}{3} + \cdots,$$
where $h = x^{(k)} - x^{(k-1)}$ and $k = x^{(k-2)} - x^{(k-1)}$. We can write h and k in terms of the errors as $h = e_k - e_{k-1}$ and $k = e_{k-2} - e_{k-1}$. Substituting the approximation $\alpha_k$ of $\alpha$ in the error Equation (24), we obtain
$$e_{k+1} \sim c_3\, e_{k-1} e_{k-2} e_k.$$
Following the same steps as in the unidimensional case, the order of convergence p of method (31) must satisfy the equation $p^3 = p^2 + p + 1$, which has the unique positive root $p \approx 1.8393$. □

3. Dynamical and Numerical Study

When a new method is designed, it is interesting to analyze the dynamical behavior of the rational function that appears when the scheme is applied to polynomials of low degree. For the secant method, this analysis has been carried out by different researchers, so we study here the behavior of the modified secant method. As this scheme is very accurate on quadratic polynomials, due to the second-degree approximation we make, we use third-degree polynomials in this analysis. In order to make a general study, we use four polynomials that, combined, can generate any third-degree polynomial, namely $p_1(x) = x^3 - x$, $p_2(x) = x^3 + x$, $p_3(x) = x^3$ and $p_4(x) = x^3 + \gamma x + 1$, where $\gamma$ is a free real parameter.
The rational operators obtained when our method (31) is applied to the mentioned polynomials are:
$$O_{p_1}(x_{k-2}, x_{k-1}, x_k) = \frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{6x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2 - 1},$$
$$O_{p_2}(x_{k-2}, x_{k-1}, x_k) = \frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{6x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2 + 1},$$
$$O_{p_3}(x_{k-2}, x_{k-1}, x_k) = \frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{6x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2},$$
$$O_{p_4}(x_{k-2}, x_{k-1}, x_k) = \frac{5x_k^3 - x_k\left(x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2\right) - 1}{6x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2 + \gamma}.$$
In order to analyze the fixed points, we introduce the auxiliary operators $G_i$, $i = 1, 2, 3, 4$, defined as follows:
$$G_i(x_{k-2}, x_{k-1}, x_k) = \left(x_{k-1},\, x_k,\, O_{p_i}(x_{k-2}, x_{k-1}, x_k)\right), \quad i = 1, 2, 3, 4.$$
So, a point $(x_{k-2}, x_{k-1}, x_k)$ is a fixed point of $G_i$ if $x_{k-2} = x_{k-1}$, $x_{k-1} = x_k$ and $x_k = O_{p_i}(x_{k-2}, x_{k-1}, x_k)$. We can prove that there are no fixed points different from the roots of the polynomials, so the method is very stable. On the other hand, critical points are also interesting, because a classical result of Fatou and Julia states that each basin of attraction of an attracting fixed point contains at least one critical point ([11,12]); therefore, it is important to determine the basin of attraction to which each critical point belongs. The critical points are determined by the calculation of the determinant of the Jacobian matrix of $G_i$, $i = 1, 2, 3, 4$ ([13]). For the polynomials $p_1$ to $p_4$, the critical points are the roots of the polynomials and the points $(x_{k-2}, x_{k-1}, x_k)$ such that $x_{k-1} = -2x_{k-2}$.
The bifurcation diagram ([13]) is a dynamical tool that shows the behavior of the sequence of iterates depending on a parameter; in this case, we use the parameter $\gamma$ of $p_4(x)$. Starting from an initial estimation close to zero, we use a mesh of 500 subintervals for $\gamma$ on the x axis and we draw the last 100 iterates. In Figure 1a, we plot the real roots of $p_4$ and, in Figure 1b, we draw the bifurcation diagram. We can see how the bifurcation diagram always matches the solutions. A sketch of this experiment is given below.
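A compact matplotlib sketch of this experiment follows; the range chosen for $\gamma$ and the three concrete starting values near zero are our own illustrative choices, since only "close to zero" is specified.

import numpy as np
import matplotlib.pyplot as plt

def step(f, x0, x1, x2):
    # One step of the modified secant method (19).
    dd12 = (f(x2) - f(x1)) / (x2 - x1)
    dd01 = (f(x1) - f(x0)) / (x1 - x0)
    dd012 = (dd12 - dd01) / (x2 - x0)
    return x2 - f(x2) / (dd12 + dd012 * (x2 - x1))

for g in np.linspace(-3.0, 1.0, 500):              # illustrative range for gamma
    f = lambda x, g=g: x**3 + g * x + 1.0
    x0, x1, x2 = 0.01, 0.02, 0.03                  # initial estimations close to zero
    tail = []
    for k in range(200):
        if min(abs(x2 - x1), abs(x1 - x0), abs(x2 - x0)) < 1e-14:
            break                                  # iterates collapsed: orbit has converged
        x0, x1, x2 = x1, x2, step(f, x0, x1, x2)
        if k >= 100:
            tail.append(x2)                        # keep the last 100 iterates
    if not tail:
        tail = [x2]                                # converged early: plot the limit
    plt.plot([g] * len(tail), tail, ',k')
plt.xlabel('gamma'); plt.ylabel('iterates')
plt.show()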
Now, we compare Newton's method with the secant and modified secant schemes on the following test functions, in terms of different numerical characteristics, as well as the dynamical planes associated with them.
(1) $f_1(x) = \sin(x) - x^2 + 1$,
(2) $f_2(x) = (x - 1)(x^3 + x^{10} + 1)\sin(x)$,
(3) $f_3(x) = \arctan(x)$,
(4) $F_4(x_1, x_2) = (x_1^2 - 1,\ x_2^2 - 1)$,
(5) $F_5(x_1, x_2) = (x_1^2 - x_1 - x_2^2 - 1,\ x_2 - \sin(x_1))$,
(6) $F_6(x_1, x_2, x_3) = (x_1 x_2 - 1,\ x_2 x_3 - 1,\ x_1 x_3 - 1)$.
Variable precision arithmetic has been used, with 100 digits of mantissa, under the stopping criterion $|x_{k+1} - x_k| < 10^{-25}$ or $|f(x_{k+1})| < 10^{-25}$, with a maximum of 100 iterations; that is, the iterative method stops when either of these conditions is met. These tests have been executed on a computer with 16 GB of RAM using MATLAB version 2014a. As a numerical approximation of the order of convergence of the method, we use the approximated computational order of convergence (ACOC), defined as [14]:
$$p \approx ACOC = \frac{\ln\left(|x_{k+1} - x_k| / |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| / |x_{k-1} - x_{k-2}|\right)}.$$
Let us remark that we use the same stopping criterion and calculation of the ACOC in the multidimensional case, only replacing the absolute value by a norm. As we only use one initial point, in order to be able to compare the methods with memory with Newton's method, we compute the additional initial points needed by the methods with memory from an initial estimation of $\alpha$. We take $\alpha = 0.01$ in the unidimensional case. In the multivariate case, we use the initial value $\alpha^{-1} = 5I$ for the secant method, and two different values for the first two iterations, $\alpha^{-1} = 5I$ and $\alpha^{-1} = 3I$, for the modified secant method, where I denotes the identity matrix of size $n \times n$. We have observed that taking two different approximations of $\alpha$ leads to a more stable behavior of the modified secant method. A small helper for the ACOC is sketched below.
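In code, the ACOC can be read off the last four iterates; the following small helper (ours) covers the scalar case.

import math

def acoc(xs):
    # ACOC from the last four iterates in the list xs;
    # for systems, replace abs by a vector norm.
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)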
On the other hand, for each test function, we determine the associated dynamical plane. This tool of dynamical analysis for methods with memory was introduced in [15]. The dynamical plane is a visual representation of the basins of attraction of a specific problem. It is constructed by defining a mesh of points, each of which is taken as an initial estimation for the iterative method; the complex plane is represented with the real part of the initial estimate on the x axis and the imaginary part on the y axis ([16]). In a similar way as before, we use an initial estimation of $\alpha$ to calculate the necessary initial points. This approach makes it possible to draw the performance of iterative schemes with memory in the complex plane, thus allowing us to compare the performance of methods with and without memory. To draw the dynamical planes, we have used a mesh of 400 × 400 initial estimations, a maximum of 40 iterations and a tolerance of $10^{-3}$, with $\alpha = 0.01$ to calculate the initial estimations. Each point used as an initial estimate is painted in a certain color, depending on the root to which the method converges; if it does not converge to any root, it is painted black. A sketch of this construction is given after this paragraph.
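The following matplotlib sketch reproduces this construction for the modified secant method on one concrete example, $f(z) = z^3 - z$; seeding the two extra initial points as $z_1 = z_0 - \alpha f(z_0)$ and $z_2 = z_1 - \alpha f(z_1)$ with $\alpha = 0.01$ is our reading of the procedure described above, and the pure-Python double loop is kept simple at the cost of speed.

import numpy as np
import matplotlib.pyplot as plt

f = lambda z: z**3 - z
roots = np.array([0.0, 1.0, -1.0])

def step(z0, z1, z2):
    # One modified-secant step in complex arithmetic.
    dd12 = (f(z2) - f(z1)) / (z2 - z1)
    dd01 = (f(z1) - f(z0)) / (z1 - z0)
    dd012 = (dd12 - dd01) / (z2 - z0)
    return z2 - f(z2) / (dd12 + dd012 * (z2 - z1))

re, im = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
basin = np.zeros(re.shape, dtype=int)              # 0 stays black: no convergence
for i in range(re.shape[0]):
    for j in range(re.shape[1]):
        z0 = complex(re[i, j], im[i, j])
        z1 = z0 - 0.01 * f(z0)                     # alpha = 0.01 seeds the memory
        z2 = z1 - 0.01 * f(z1)
        for _ in range(40):                        # at most 40 iterations
            try:
                z0, z1, z2 = z1, z2, step(z0, z1, z2)
            except ZeroDivisionError:
                break
            d = np.abs(z2 - roots)
            if d.min() < 1e-3:                     # tolerance 10^-3
                basin[i, j] = 1 + int(d.argmin())  # color indexed by the root reached
                break
plt.imshow(basin, extent=(-2, 2, -2, 2), origin='lower')
plt.xlabel('Re z'); plt.ylabel('Im z')
plt.show()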
In Table 1, we show the results obtained by Newton's method and the secant and modified secant schemes for the scalar functions $f_1(x)$, $f_2(x)$ and $f_3(x)$. These tests confirm the theoretical results, with a very stable ACOC whenever the method converges.
On the other hand, in Figure 2, we see that the three methods behave well on function $f_1(x)$, with no significant differences between their basins of attraction. In Figure 3, we see that, in the case of function $f_2(x)$, the dynamical plane of the modified secant method is better than that of the secant method and very similar to Newton's. The secant method presents black regions that, in this case, correspond to slow convergence. In Figure 4, we see the basins of attraction of the methods on $f_3(x)$; the widest basins of attraction correspond to the modified secant method. In this case, the black regions are points where the methods diverge.
In Table 2, the numerical results corresponding to the vectorial functions $F_4(x_1, x_2)$, $F_5(x_1, x_2)$ and $F_6(x_1, x_2, x_3)$ are presented. These results hold no surprises: they are those expected from the theoretical analysis. In this case, the definition of the ACOC is the same as before, replacing the absolute value by the norm. For the systems $F_4(x) = 0$ and $F_5(x) = 0$, we show in Figure 5 and Figure 6 the basins of attraction of each root. In the first case, the three methods behave similarly but, in the second one, Newton's scheme presents some black regions that do not appear in the modified secant method.

4. Conclusions

The technique of introducing accelerating parameters allows us to generate new methods with higher order of convergence than the original one, with the same number of functional evaluations. The increase is more significant if we start from a method with high order of convergence. As far as we know, this is the first time that an iterative method with memory for solving nonlinear systems has been designed by using second-degree Newton polynomial interpolation in several variables. In addition to obtaining the expression of what we have called the modified secant method, a dynamical study was carried out, adapting tools from the dynamics of methods without memory to methods with memory. Although numerically computing the divided differences is costly, methods with memory show a very stable dynamical behavior, even more so than other known methods without memory.

Author Contributions

Writing—original draft preparation, J.G.M.; writing—review and editing, A.C. and J.R.T.; validation, M.P.V.

Funding

This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), Generalitat Valenciana PROMETEO/2016/089, and FONDOCYT 2016–2017-212 República Dominicana.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
3. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Khaksar Haghani, F. Several iterative methods with memory using self-accelerators. Appl. Math. Comput. 2015, 254, 452–458.
4. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier Academic Press: New York, NY, USA, 2013.
5. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474.
6. Narang, M.; Bhatia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 2019, 352, 23–39.
7. Potra, F.A. An error analysis for the secant method. Numer. Math. 1982, 38, 427–445.
8. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
9. Micchelli, C.A. On a numerically efficient method for computing multivariate B-splines. In Multivariate Approximation Theory; Schempp, W., Zeller, K., Eds.; Birkhäuser: Basel, Switzerland, 1979; pp. 211–248.
10. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing Inc.: Boston, MA, USA, 1984.
11. Fatou, P. Sur les équations fonctionnelles. Bull. Soc. Math. Fr. 1919, 47, 161–271.
12. Julia, G. Mémoire sur l'itération des fonctions rationnelles. J. Math. Pures Appl. 1918, 8, 47–245.
13. Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
14. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
15. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715.
16. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013.
Figure 1. Comparison between the real roots and the iterative method.
Figure 2. Dynamical planes for $f_1(z) = \sin(z) - z^2 + 1$.
Figure 3. Dynamical planes for $f_2(z) = (z - 1)(z^{10} + z^3 + 1)\sin(z)$.
Figure 4. Dynamical planes for $f_3(z) = \arctan(z)$.
Figure 5. Dynamical planes for $F_4(x_1, x_2) = (x_1^2 - 1,\ x_2^2 - 1)$.
Figure 6. Dynamical planes for $F_5(x_1, x_2) = (x_1^2 - x_1 - x_2^2 - 1,\ x_2 - \sin x_1)$.
Table 1. Numerical results for the scalar functions $f_1(x)$, $f_2(x)$, $f_3(x)$.

$f_1(x) = \sin(x) - x^2 + 1$, $x_0 = 1$, $\xi \approx 1.4096$
Method  | ACOC | Iter | $|x_{k+1} - x_k|$     | $|f(x_{k+1})|$
Newton  | 2.00 | 6    | $1.6 \times 10^{-17}$ | $3.5 \times 10^{-34}$
Secant  | 1.62 | 9    | $2.4 \times 10^{-18}$ | $5.9 \times 10^{-29}$
SecantM | 1.84 | 8    | $1.5 \times 10^{-16}$ | $5.3 \times 10^{-30}$

$f_2(x) = (x - 1)(x^3 + x^{10} + 1)\sin x$, $x_0 = 0.75$, $\xi = 1$
Method  | ACOC | Iter | $|x_{k+1} - x_k|$     | $|f(x_{k+1})|$
Newton  | 2.00 | 12   | $2.7 \times 10^{-22}$ | $8.9 \times 10^{-43}$
Secant  | n.c. | –    | –                     | –
SecantM | 1.82 | 12   | $8.2 \times 10^{-17}$ | $2.0 \times 10^{-29}$

$f_3(x) = \arctan(x)$, $x_0 = 1.4$, $\xi = 0$
Method  | ACOC | Iter | $|x_{k+1} - x_k|$     | $|f(x_{k+1})|$
Newton  | n.c. | –    | –                     | –
Secant  | 1.06 | 7    | $7.8 \times 10^{-16}$ | $5.9 \times 10^{-34}$
SecantM | 1.82 | 11   | $7.6 \times 10^{-21}$ | $6.7 \times 10^{-38}$

Table 2. Numerical results for the vectorial functions $F_4(x)$, $F_5(x)$, $F_6(x)$.

$F_4(x_1, x_2) = (x_1^2 - 1,\ x_2^2 - 1)$, $x^{(0)} = (0.5, 0.5)$, $\xi = (1, 1)$
Method  | ACOC | Iter | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$
Newton  | 2.00 | 6    | $1.5 \times 10^{-15}$     | $1.7 \times 10^{-30}$
Secant  | 1.62 | 9    | $1.3 \times 10^{-20}$     | $6.9 \times 10^{-33}$
SecantM | 1.88 | 7    | $2.3 \times 10^{-17}$     | $3.9 \times 10^{-33}$

$F_5(x_1, x_2) = (x_1^2 - x_1 - x_2^2 - 1,\ x_2 - \sin x_1)$, $x^{(0)} = (1.5, 1)$, $\xi \approx (1.95, 0.93)$
Method  | ACOC | Iter | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$
Newton  | 2.02 | 6    | $8.4 \times 10^{-18}$     | $2.5 \times 10^{-35}$
Secant  | 1.62 | 8    | $1.0 \times 10^{-16}$     | $6.4 \times 10^{-27}$
SecantM | 1.87 | 7    | $2.5 \times 10^{-24}$     | $7.65 \times 10^{-41}$

$F_6(x_1, x_2, x_3) = (x_1 x_2 - 1,\ x_2 x_3 - 1,\ x_1 x_3 - 1)$, $x^{(0)} = (0.5, 0.5, 0.5)$, $\xi = (1, 1, 1)$
Method  | ACOC | Iter | $\|x^{(k+1)} - x^{(k)}\|$ | $\|F(x^{(k+1)})\|$
Newton  | 2.00 | 6    | $1.9 \times 10^{-15}$     | $2.0 \times 10^{-30}$
Secant  | 1.62 | 9    | $1.6 \times 10^{-20}$     | $8.4 \times 10^{-33}$
SecantM | 1.94 | 7    | $2.8 \times 10^{-17}$     | $1.5 \times 10^{-33}$
