Article

Two-Step Fifth-Order Efficient Jacobian-Free Iterative Method for Solving Nonlinear Systems

by Alicia Cordero 1,*, Javier G. Maimó 2, Antmel Rodríguez-Cabral 2 and Juan R. Torregrosa 1

1 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
2 Area de Ciencias Básicas y Ambientales, Instituto Tecnológico de Santo Domingo (INTEC), Av. Los Próceres, Gala, Santo Domingo 10602, Dominican Republic
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(21), 3341; https://doi.org/10.3390/math12213341
Submission received: 9 September 2024 / Revised: 21 October 2024 / Accepted: 22 October 2024 / Published: 24 October 2024
(This article belongs to the Special Issue New Trends and Developments in Numerical Analysis: 2nd Edition)

Abstract

This article introduces a novel two-step fifth-order Jacobian-free iterative method aimed at efficiently solving systems of nonlinear equations. The method leverages the benefits of Jacobian-free approaches, utilizing divided differences to circumvent the computationally intensive calculation of Jacobian matrices. This adaptation significantly reduces computational overhead and simplifies the implementation process while maintaining high convergence rates. We demonstrate that this method achieves fifth-order convergence under specific parameter settings, with broad applicability across various types of nonlinear systems. The effectiveness of the proposed method is validated through a series of numerical experiments that confirm its superior performance in terms of accuracy and computational efficiency compared to existing methods.

1. Introduction

Let $F(x) = 0$ be a nonlinear system of equations, with $F: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$, and let the functions $f_i$, $i = 1, 2, \ldots, n$, be the coordinate components of $F$, expressed as $F(x) = \left(f_1(x), f_2(x), \ldots, f_n(x)\right)^T$. Solving nonlinear systems is generally challenging, and solutions $\xi$ are typically found by linearizing the problem or by employing a fixed-point iteration function $G: D \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$, leading to an iterative fixed-point method. Among the various root-finding techniques for nonlinear systems, Newton's method is the best known; it follows the second-order iterative procedure
$$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1}F(x^{(k)}), \quad k = 0, 1, \ldots,$$
$F'(x^{(k)})$ being the Jacobian matrix of $F$ at the $k$-th iterate.
Recently, many researchers have focused on developing iterative methods that outperform Newton's method in terms of both efficiency and order of convergence. Numerous approaches require the computation of $F'$ at different points in each iteration. Nevertheless, calculating the Jacobian poses significant challenges, particularly in high-dimensional problems, where its computation can be costly or even impractical (see, for example, [1,2,3,4,5] and the references therein). In some instances, the Jacobian may not exist at all. To address this issue, alternative approaches have been proposed in the literature, such as replacing the Jacobian matrix with a divided difference operator. One of the simplest alternatives is the multidimensional version of Steffensen's method, attributed to Samanskii [6,7], which substitutes the Jacobian in Newton's procedure with a first-order divided difference operator:
$$x^{(k+1)} = x^{(k)} - [x^{(k)}, z^{(k)}; F]^{-1}F(x^{(k)}), \quad k = 0, 1, \ldots,$$
where $z^{(k)} = x^{(k)} + F(x^{(k)})$, and $[\cdot, \cdot\,; F]: \Omega \times \Omega \subset \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathcal{L}(\mathbb{R}^n)$ is the divided difference operator associated with $F$ [8],
$$[y, x; F](y - x) = F(y) - F(x) \quad \text{for any } x, y \in \Omega.$$
The implementation of this divided difference is carried out, for all $1 \le i, j \le n$, as
$$[y, x; F]_{i,j} = \frac{F_i(y_1, \ldots, y_{j-1}, y_j, x_{j+1}, \ldots, x_n) - F_i(y_1, \ldots, y_{j-1}, x_j, x_{j+1}, \ldots, x_n)}{y_j - x_j},$$
where $x = (x_1, x_2, \ldots, x_n)^T$ and $y = (y_1, y_2, \ldots, y_n)^T$, as defined by Ortega and Rheinboldt in [8]. This substitution retains the second order of convergence while bypassing the calculation of $F'$. However, this replacement dramatically reduces the set of converging initial estimations with respect to Newton's scheme: the initial estimation used in the iterative procedure must be much closer to the (unknown) solution in order to converge. For this reason, this Jacobian-free scheme is known as an unstable variant of Newton's procedure.
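To fix ideas, this component-wise definition translates directly into code. The following is a minimal sketch in Python/NumPy (not the authors' implementation; the name divided_difference is ours) of the operator $[y, x; F]$:

```python
# Minimal sketch (not the authors' code): first-order divided difference
# operator [y, x; F] built column by column from the definition above.
import numpy as np

def divided_difference(F, y, x):
    n = x.size
    DD = np.empty((n, n))
    for j in range(n):
        # arguments (y_1,...,y_j, x_{j+1},...,x_n) and (y_1,...,y_{j-1}, x_j,...,x_n)
        upper = np.concatenate((y[:j + 1], x[j + 1:]))
        lower = np.concatenate((y[:j], x[j:]))
        DD[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return DD
```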
Although both Samanskii's and Newton's methods exhibit quadratic convergence, it has been shown that the Jacobian-free scheme is less stable than Newton's method, its behavior depending more strongly on the initial guess. This has been thoroughly analyzed in [9,10,11,12,13], where it was found that, in the scalar case $f(x) = 0$, derivative-free iterative methods become more stable when selecting $z = x + \alpha f(x)$ for small real values of $\alpha$.
Our aim in this manuscript is to propose a very efficient and stable class of Jacobian-free iterative methods. The class is generically of fourth order, although it contains a fifth-order member. The main achievement of this construction is the combination of scalar accelerators, first appearing in [14], with the vectorial point-wise power weight function first introduced in [15]. The first technique allows faster convergence at low computational cost, while the second improves the stability of Jacobian-free methods, allowing convergence from initial estimations that need not be as close to the solution as in Samanskii's case. Both techniques are presented in the following state of the art, which reviews some of the advances in this specific area of research over the last few years.
However, substituting the Jacobian with divided differences can lower the convergence order of some iterative methods. For example, the multidimensional version of Ostrowski's fourth-order method (see [16]),
$$\begin{aligned} y^{(k)} &= x^{(k)} - [F'(x^{(k)})]^{-1}F(x^{(k)}),\\ x^{(k+1)} &= y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - F'(x^{(k)})\right)^{-1}F(y^{(k)}), \quad k = 0, 1, \ldots,\end{aligned}$$
achieves only cubic convergence if $F'(x)$ is replaced by $[y, x; F]$, as follows:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [x^{(k)} + F(x^{(k)}), x^{(k)}; F]^{-1}F(x^{(k)}), \quad k = 0, 1, \ldots,\\ x^{(k+1)} &= y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [x^{(k)} + F(x^{(k)}), x^{(k)}; F]\right)^{-1}F(y^{(k)}).\end{aligned}$$
Despite the reduction in performance observed in some Jacobian-free methods, it is important to highlight that some iterative methods can be successfully modified in their iterative expressions so as to preserve the order of convergence, even after fully transitioning to Jacobian-free formulations. This is the case for the combination of the Traub–Steffensen family of methods with a second step based on divided difference operators, proposed by Behl et al. in [17]:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1}F(x^{(k)}), \quad k = 0, 1, 2, \ldots\\ x^{(k+1)} &= y^{(k)} - [y^{(k)}, x^{(k)}; F]^{-1}[u^{(k)}, x^{(k)}; F][u^{(k)}, y^{(k)}; F]^{-1}F(y^{(k)}),\end{aligned}$$
where $u^{(k)} = x^{(k)} + \beta F(x^{(k)})$, $\beta \in \mathbb{R}$. These iterative schemes have fourth order of convergence for every $\beta \neq 0$; for our purposes, we choose $\beta = 1$, and the resulting method is denoted Traub-Ste. Let us remark that different approximations of the Jacobian matrix have been necessary to retain the original order of convergence, increasing the computational cost and, therefore, diminishing the efficiency.
Other fourth-order methods also lose their convergence order when the Jacobian is replaced with divided differences, such as Jarratt's scheme [18], Sharma's method [19], Montazeri's method [20], Ostrowski's vectorial extension [16], and Sharma–Arora's fifth-order scheme [21]. In all these cases, the Jacobian-free versions of the methods drop to lower orders of convergence. Nevertheless, Amiri et al. [15] demonstrated that using a specialized divided difference operator of the form $[x, x + G(x); F]$, where $G(x) = \left(f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m\right)^T$, $m \in \mathbb{N}$, as an approximation of the Jacobian matrix may preserve the convergence order. By selecting an appropriate parameter $m$, the original fourth-order convergence of these methods can be maintained. The appropriate value of $m$ for each optimal order of convergence was also conjectured.
For the sake of comparison with our proposed methods, we now consider several efficient vectorial iterative schemes existing in the literature, transformed into their Jacobian-free versions following the idea of Amiri et al. [15]. This technique allows us to design Jacobian-free partners of vectorial schemes with minimal computational cost. In the efficiency and numerical sections, we compare these Jacobian-free schemes with our proposed procedures. The first one is the vectorial extension of Ostrowski's scheme (see [16,22], for instance),
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\ x^{(k+1)} &= y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - F'(x^{(k)})\right)^{-1}F(y^{(k)}),\end{aligned}\tag{2}$$
whose Jacobian-free version, obtained by substituting the Jacobian matrix with the divided difference operator (following the approach of Amiri et al. [15], with $m = 2$), is
$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1}F(x^{(k)}),\\ x^{(k+1)} &= y^{(k)} - \left(2[y^{(k)}, x^{(k)}; F] - [u^{(k)}, x^{(k)}; F]\right)^{-1}F(y^{(k)}),\end{aligned}\tag{3}$$
where $[u^{(k)}, x^{(k)}; F] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$ (with $m = 2$), $\alpha \in \mathbb{R}$. We denote this method by Ostro01.
Another fourth-order method, proposed by Sharma in [19] using Jacobian matrices, is
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\ x^{(k+1)} &= y^{(k)} - \left(3I - 2F'(x^{(k)})^{-1}[y^{(k)}, x^{(k)}; F]\right)F'(x^{(k)})^{-1}F(y^{(k)}),\end{aligned}\tag{4}$$
to which we apply the same procedure of Amiri et al. performed for (2), obtaining its Jacobian-free partner
$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1}F(x^{(k)}),\\ x^{(k+1)} &= y^{(k)} - \left(3I - 2[u^{(k)}, x^{(k)}; F]^{-1}[y^{(k)}, x^{(k)}; F]\right)[u^{(k)}, x^{(k)}; F]^{-1}F(y^{(k)}),\end{aligned}\tag{5}$$
which we denote by M4,3, where $[u^{(k)}, x^{(k)}; F] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$, $m = 2$; it first appeared in [15].
We finish with a sixth-order scheme [19], which is obtained by adding a step to the previous method (4),
$$\begin{aligned} y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}), \quad k = 0, 1, \ldots,\\ z^{(k)} &= y^{(k)} - \left(3I - 2F'(x^{(k)})^{-1}[y^{(k)}, x^{(k)}; F]\right)F'(x^{(k)})^{-1}F(y^{(k)}),\\ x^{(k+1)} &= z^{(k)} - \left(3I - 2F'(x^{(k)})^{-1}[y^{(k)}, x^{(k)}; F]\right)F'(x^{(k)})^{-1}F(z^{(k)}).\end{aligned}\tag{6}$$
Similarly, its Jacobian-free version was constructed in [15] and denoted by M6,3:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [u^{(k)}, x^{(k)}; F]^{-1}F(x^{(k)}), \quad k = 0, 1, \ldots,\\ z^{(k)} &= y^{(k)} - \left(3I - 2[u^{(k)}, x^{(k)}; F]^{-1}[y^{(k)}, x^{(k)}; F]\right)[u^{(k)}, x^{(k)}; F]^{-1}F(y^{(k)}),\\ x^{(k+1)} &= z^{(k)} - \left(3I - 2[u^{(k)}, x^{(k)}; F]^{-1}[y^{(k)}, x^{(k)}; F]\right)[u^{(k)}, x^{(k)}; F]^{-1}F(z^{(k)}),\end{aligned}\tag{7}$$
where again $[u^{(k)}, x^{(k)}; F] \approx F'(x^{(k)})$ and $u^{(k)} = x^{(k)} + \alpha G(x^{(k)})$, $m = 2$. It should be noticed that in schemes (3), (5) and (7), a quadratic element-by-element power of $F(x^{(k)})$ is employed in the divided differences. This adjustment is essential for preserving the convergence order of the original methods (see [15]). In our proposal, however, the order of convergence of the original scheme is held while avoiding the computational cost of this element-by-element power.
Therefore, we aim to avoid the calculation of Jacobian matrices, which can be a bottleneck in terms of computational efficiency, especially for large systems. To achieve this, we present a two-step fifth-order efficient Jacobian-free iterative method that addresses these challenges by eliminating the need for direct Jacobian computation. Our approach is grounded in the use of divided differences and of scalar accelerators recently developed in some very efficient schemes (using Jacobian matrices) [14,23]. This not only reduces the computational cost, but also accelerates convergence with simpler iterative expressions. The proposed method's design and theoretical underpinnings are discussed, emphasizing its ability to achieve high-order convergence without the Jacobian calculations typically required.
Our proposal is a variant of the scheme proposed by Singh, Sharma and Kumar in [14],
$$\begin{aligned} w^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}), \quad k = 0, 1, 2, \ldots\\ x^{(k+1)} &= w^{(k)} - \left(p_1 + p_2\,\frac{F(w^{(k)})^TF(w^{(k)})}{F(x^{(k)})^TF(x^{(k)})}\right)F'(w^{(k)})^{-1}F(w^{(k)}),\end{aligned}\tag{8}$$
where $\frac{F(w^{(k)})^TF(w^{(k)})}{F(x^{(k)})^TF(x^{(k)})}$ is a scalar accelerator, since $F(w^{(k)})^TF(w^{(k)}) = \|F(w^{(k)})\|^2$ and $F(x^{(k)})^TF(x^{(k)}) = \|F(x^{(k)})\|^2$. The real parameters $p_1$ and $p_2$ make the order of convergence of the method five if $p_1 = p_2 = 1$, four if $p_1 = 1$ and $p_2 \neq 1$, and two if $p_1 \neq 1$ and $p_2$ is arbitrary. It is known that, in many practical applications, computing the Jacobian matrix can be very resource-intensive and time-consuming; therefore, Jacobian-free methods are often preferred.
Making a modification to scheme (8) by replacing the Jacobian matrices with specific divided differences, we obtain the following family of novel Jacobian-free methods:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [u_x^{(k)}, x^{(k)}; F]^{-1}F(x^{(k)}), \quad k = 0, 1, 2, \ldots\\ x^{(k+1)} &= y^{(k)} - \left(p_1 + p_2\,\frac{F(y^{(k)})^TF(y^{(k)})}{F(x^{(k)})^TF(x^{(k)})}\right)[u_y^{(k)}, y^{(k)}; F]^{-1}F(y^{(k)}),\end{aligned}\tag{9}$$
where $[u_x^{(k)}, x^{(k)}; F] \approx F'(x^{(k)})$, $[u_y^{(k)}, y^{(k)}; F] \approx F'(y^{(k)})$, $u_x^{(k)} = x^{(k)} + \alpha F(x^{(k)})$ and $u_y^{(k)} = y^{(k)} + \alpha F(y^{(k)})$. From now on, we refer to our modified scheme as MS($p_1, p_2$). This class of iterative schemes uses scalar accelerators in order to achieve high-order convergence with low computational effort, and it also obtains good stability properties by using vectorial point-wise weight functions in the first-order divided differences. As a direct consequence, these properties yield higher efficiency compared with the existing Jacobian-free methods previously mentioned, as well as good numerical results on nonlinear systems of different sizes.
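To illustrate how lightweight the resulting iteration is, the following minimal sketch implements one step of MS($p_1, p_2$) under the notation of (9), reusing the divided_difference routine sketched earlier; it is an illustration under our own naming, not the authors' code, and np.linalg.solve stands in for the inverse operators:

```python
# Minimal sketch of one step of the proposed family MS(p1, p2) in (9),
# assuming the divided_difference routine sketched above.
import numpy as np

def ms_step(F, x, p1=1.0, p2=1.0, alpha=1.0):
    Fx = F(x)
    ux = x + alpha * Fx                                   # u_x = x + alpha F(x)
    y = x - np.linalg.solve(divided_difference(F, ux, x), Fx)
    Fy = F(y)
    uy = y + alpha * Fy                                   # u_y = y + alpha F(y)
    s = (Fy @ Fy) / (Fx @ Fx)                             # scalar accelerator ||F(y)||^2 / ||F(x)||^2
    return y - (p1 + p2 * s) * np.linalg.solve(divided_difference(F, uy, y), Fy)
```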
In Section 2, we demonstrate the theoretical order of convergence of the new parametric class of Jacobian-free iterative methods with scalar accelerators, depending on the values of the parameters involved. Subsequently, in Section 3 we carry out an efficiency analysis in which we compare our proposed method with the Jacobian-free versions of others previously cited in the literature, with excellent results. Finally, Section 4 presents practical results of these iterative methods applied to different nonlinear systems of equations, confirming the theoretical results for both small and large academic and applied systems.

2. Convergence of New Jacobian-Free Iterative Method

The following result shows the error equations arising from method (9) for the possible parameter values, thereby demonstrating that the convergence results of family (8) are preserved.
Theorem 1.
Let $F: \Omega \subseteq \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a sufficiently differentiable function, defined in an open convex neighborhood $\Omega$ of $\xi$, a solution of $F(x) = 0$. Let us also consider an initial seed $x^{(0)}$ near enough to $\xi$, and let $F'(x)$ be continuous and invertible at $\xi$. Then, the parametric class of iterative schemes presented in (9) converges locally for all $\alpha \in \mathbb{R}$, with the order of convergence given as follows:
(a) 
Fifth-order convergence if $p_1 = p_2 = 1$, the corresponding error equation being
$$x^{(k+1)} - \xi = \left(M_4 - A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 - \left((M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2Q\right)M_1\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\tag{10}$$
(b) 
Fourth-order convergence if $p_1 = 1$, $p_2 \neq 1$, the corresponding error equation being
$$\begin{aligned} x^{(k+1)} - \xi ={}& \left(-A_2M_1^2 + C_1M_1 - p_2M_1^3\right)(e^{(k)})^4\\ &+ \left(M_4 - \left(A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2\right) - p_2\left((M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2Q\right)M_1\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\end{aligned}\tag{11}$$
(c) 
Second-order convergence if $p_1 \neq 1$, $p_2$ arbitrary, the corresponding error equation being
$$\begin{aligned} x^{(k+1)} - \xi ={}& M_1(1 - p_1)(e^{(k)})^2 + M_2(1 - p_1)(e^{(k)})^3 + \left(M_3(1 - p_1) - p_1A_2M_1^2 + p_1C_1M_1 - p_2M_1^3\right)(e^{(k)})^4\\ &+ \left(M_4 - p_1\left(A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2\right) - p_2\left((M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2Q\right)M_1\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right),\end{aligned}\tag{12}$$
where $A_j = \frac{1}{j!}\,[F'(\xi)]^{-1}F^{(j)}(\xi)$, $j = 2, 3, \ldots$, and $C_i$, $M_i$, $i = 1, 2, \ldots$, are combinations of the $A_j$, the error at iteration $k$ being denoted by $e^{(k)} = x^{(k)} - \xi$.
Proof. 
Let $e^{(k)} = x^{(k)} - \xi$ be the error at the $k$-th iteration, $\xi \in \mathbb{R}^n$ being a solution of $F(x) = 0$. Expanding $F(x^{(k)})$ and its successive derivatives in a neighborhood of $\xi$, we have
$$\begin{aligned} F(x^{(k)}) &= F'(\xi)\left[e^{(k)} + A_2(e^{(k)})^2 + A_3(e^{(k)})^3 + A_4(e^{(k)})^4 + A_5(e^{(k)})^5 + A_6(e^{(k)})^6\right] + O\!\left((e^{(k)})^7\right),\\ F'(x^{(k)}) &= F'(\xi)\left[I + 2A_2e^{(k)} + 3A_3(e^{(k)})^2 + 4A_4(e^{(k)})^3 + 5A_5(e^{(k)})^4 + 6A_6(e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),\\ F''(x^{(k)}) &= F'(\xi)\left[2A_2 + 6A_3e^{(k)} + 12A_4(e^{(k)})^2 + 20A_5(e^{(k)})^3 + 30A_6(e^{(k)})^4\right] + O\!\left((e^{(k)})^5\right),\\ F'''(x^{(k)}) &= F'(\xi)\left[6A_3 + 24A_4e^{(k)} + 60A_5(e^{(k)})^2 + 120A_6(e^{(k)})^3\right] + O\!\left((e^{(k)})^4\right),\\ F^{(iv)}(x^{(k)}) &= F'(\xi)\left[24A_4 + 120A_5e^{(k)} + 360A_6(e^{(k)})^2\right] + O\!\left((e^{(k)})^3\right),\\ F^{(v)}(x^{(k)}) &= F'(\xi)\left[120A_5 + 720A_6e^{(k)}\right] + O\!\left((e^{(k)})^2\right),\\ F^{(vi)}(x^{(k)}) &= F'(\xi)\,720A_6 + O\!\left(e^{(k)}\right).\end{aligned}\tag{13}$$
Then, based on the Genocchi–Hermite formula (see [8]), we have
$$\begin{aligned}\relax [u_x^{(k)}, x^{(k)}; F] ={}& F'(x^{(k)}) + \frac{1}{2!}F''(x^{(k)})(u_x^{(k)} - x^{(k)}) + \frac{1}{3!}F'''(x^{(k)})(u_x^{(k)} - x^{(k)})^2 + \frac{1}{4!}F^{(iv)}(x^{(k)})(u_x^{(k)} - x^{(k)})^3\\ &+ \frac{1}{5!}F^{(v)}(x^{(k)})(u_x^{(k)} - x^{(k)})^4 + \frac{1}{6!}F^{(vi)}(x^{(k)})(u_x^{(k)} - x^{(k)})^5 + O\!\left((u_x^{(k)} - x^{(k)})^6\right).\end{aligned}\tag{14}$$
Taking into account that $u_x^{(k)} - x^{(k)} = \alpha F(x^{(k)})$, and performing series expansions of its powers up to fifth order, we obtain
$$\begin{aligned} \alpha^2(F(x^{(k)}))^2 ={}& \alpha^2(F'(\xi))^2(e^{(k)})^2 + \alpha^2\left[(F'(\xi))^2A_2 + F'(\xi)A_2F'(\xi)\right](e^{(k)})^3\\ &+ \alpha^2\left[(F'(\xi))^2A_3 + F'(\xi)A_2F'(\xi)A_2 + F'(\xi)A_3F'(\xi)\right](e^{(k)})^4\\ &+ \alpha^2\left[(F'(\xi))^2A_4 + F'(\xi)A_2F'(\xi)A_3 + F'(\xi)A_3F'(\xi)A_2 + F'(\xi)A_4F'(\xi)\right](e^{(k)})^5 + O\!\left((e^{(k)})^6\right),\\[4pt] \alpha^3(F(x^{(k)}))^3 ={}& \alpha^3(F'(\xi))^3(e^{(k)})^3 + \alpha^3\left[(F'(\xi))^3A_2 + (F'(\xi))^2A_2F'(\xi) + F'(\xi)A_2(F'(\xi))^2\right](e^{(k)})^4\\ &+ \alpha^3\big[(F'(\xi))^3A_3 + (F'(\xi))^2A_2F'(\xi)A_2 + F'(\xi)A_2(F'(\xi))^2A_2 + (F'(\xi))^2A_3F'(\xi)\\ &\qquad + F'(\xi)A_2F'(\xi)A_2F'(\xi) + F'(\xi)A_3(F'(\xi))^2\big](e^{(k)})^5 + O\!\left((e^{(k)})^6\right),\\[4pt] \alpha^4(F(x^{(k)}))^4 ={}& \alpha^4(F'(\xi))^4(e^{(k)})^4 + \alpha^4\left[(F'(\xi))^4A_2 + (F'(\xi))^3A_2F'(\xi) + (F'(\xi))^2A_2(F'(\xi))^2 + F'(\xi)A_2(F'(\xi))^3\right](e^{(k)})^5 + O\!\left((e^{(k)})^6\right),\\[4pt] \alpha^5(F(x^{(k)}))^5 ={}& \alpha^5(F'(\xi))^5(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\end{aligned}\tag{15}$$
By combining Formulas (13) and (15) in the Taylor series expansion (14), we obtain the following:
$$[u_x^{(k)}, x^{(k)}; F] = F'(\xi)\left[I + B_1e^{(k)} + B_2(e^{(k)})^2 + B_3(e^{(k)})^3 + B_4(e^{(k)})^4 + B_5(e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),\tag{16}$$
where
$$\begin{aligned} B_1 ={}& 2A_2 + \alpha A_2F'(\xi),\\ B_2 ={}& 3A_3 + \alpha A_2F'(\xi)A_2 + 3\alpha A_3F'(\xi) + \alpha^2A_3(F'(\xi))^2,\\ B_3 ={}& 4A_4 + \alpha A_2F'(\xi)A_3 + 3\alpha A_3F'(\xi)A_2 + 6\alpha A_4F'(\xi) + \alpha^2A_3(F'(\xi))^2A_2 + \alpha^2A_3F'(\xi)A_2F'(\xi)\\ &+ 4\alpha^2A_4(F'(\xi))^2 + \alpha^3A_4(F'(\xi))^3,\\ B_4 ={}& 5A_5 + \alpha A_2F'(\xi)A_4 + 3\alpha A_3F'(\xi)A_3 + 6\alpha A_4F'(\xi)A_2 + 10\alpha A_5F'(\xi) + \alpha^2A_3(F'(\xi))^2A_3\\ &+ \alpha^2A_3F'(\xi)A_2F'(\xi)A_2 + \alpha^2A_3F'(\xi)A_3F'(\xi) + 4\alpha^2A_4(F'(\xi))^2A_2 + 4\alpha^2A_4F'(\xi)A_2F'(\xi)\\ &+ 10\alpha^2A_5(F'(\xi))^2 + \alpha^3A_4(F'(\xi))^3A_2 + \alpha^3A_4(F'(\xi))^2A_2F'(\xi) + \alpha^3A_4F'(\xi)A_2(F'(\xi))^2\\ &+ 5\alpha^3A_5(F'(\xi))^3 + \alpha^4A_5(F'(\xi))^4, \quad \text{and}\\ B_5 ={}& 6A_6 + \alpha A_2F'(\xi)A_5 + 3\alpha A_3F'(\xi)A_4 + 6\alpha A_4F'(\xi)A_3 + 10\alpha A_5F'(\xi)A_2 + 15\alpha A_6F'(\xi)\\ &+ \alpha^2A_3(F'(\xi))^2A_4 + \alpha^2A_3F'(\xi)A_2F'(\xi)A_3 + \alpha^2A_3F'(\xi)A_3F'(\xi)A_2 + \alpha^2A_3F'(\xi)A_4F'(\xi)\\ &+ 4\alpha^2A_4(F'(\xi))^2A_3 + 4\alpha^2A_4F'(\xi)A_2F'(\xi)A_2 + 4\alpha^2A_4F'(\xi)A_3F'(\xi) + 10\alpha^2A_5(F'(\xi))^2A_2\\ &+ 10\alpha^2A_5F'(\xi)A_2F'(\xi) + 20\alpha^2A_6(F'(\xi))^2 + \alpha^3A_4(F'(\xi))^3A_3 + \alpha^3A_4(F'(\xi))^2A_2F'(\xi)A_2\\ &+ \alpha^3A_4F'(\xi)A_2(F'(\xi))^2A_2 + \alpha^3A_4(F'(\xi))^2A_3F'(\xi) + \alpha^3A_4F'(\xi)A_2F'(\xi)A_2F'(\xi) + \alpha^3A_4F'(\xi)A_3(F'(\xi))^2\\ &+ 5\alpha^3A_5(F'(\xi))^3A_2 + 5\alpha^3A_5(F'(\xi))^2A_2F'(\xi) + 5\alpha^3A_5F'(\xi)A_2(F'(\xi))^2 + 15\alpha^3A_6(F'(\xi))^3\\ &+ \alpha^4A_5(F'(\xi))^4A_2 + \alpha^4A_5(F'(\xi))^3A_2F'(\xi) + \alpha^4A_5(F'(\xi))^2A_2(F'(\xi))^2 + \alpha^4A_5F'(\xi)A_2(F'(\xi))^3\\ &+ 6\alpha^4A_6(F'(\xi))^4 + \alpha^5A_6(F'(\xi))^5.\end{aligned}$$
Next, we expand the inverse of the divided difference operator $[u_x^{(k)}, x^{(k)}; F]$, forcing it to satisfy $[u_x^{(k)}, x^{(k)}; F]^{-1}[u_x^{(k)}, x^{(k)}; F] = I$:
$$[u_x^{(k)}, x^{(k)}; F]^{-1} = \left(I + X_2e^{(k)} + X_3(e^{(k)})^2 + X_4(e^{(k)})^3 + X_5(e^{(k)})^4 + O\!\left((e^{(k)})^5\right)\right)[F'(\xi)]^{-1},\tag{17}$$
where
$$\begin{aligned} X_2 &= -B_1,\\ X_3 &= B_1^2 - B_2,\\ X_4 &= B_1B_2 + B_2B_1 - B_1^3 - B_3,\\ X_5 &= B_1B_3 + B_3B_1 + B_1^4 - B_4 - B_1^2B_2 + B_2^2 - B_1B_2B_1 - B_2B_1^2.\end{aligned}\tag{18}$$
By using the Taylor expansion of $F(x^{(k)})$ defined in (13) and the expansion of $[u_x^{(k)}, x^{(k)}; F]^{-1}$ obtained in (17), we obtain the error equation for the first step:
$$y^{(k)} - \xi = M_1(e^{(k)})^2 + M_2(e^{(k)})^3 + M_3(e^{(k)})^4 + M_4(e^{(k)})^5 + O\!\left((e^{(k)})^6\right),\tag{19}$$
where
$$\begin{aligned} M_1 &= -(X_2 + A_2),\\ M_2 &= -(A_3 + X_2A_2 + X_3),\\ M_3 &= -(A_4 + X_2A_3 + X_3A_2 + X_4),\\ M_4 &= -(A_5 + X_2A_4 + X_3A_3 + X_4A_2 + X_5).\end{aligned}\tag{20}$$
Now, we find the error equation for the second step. Expanding around $\xi$ in terms of $e_y^{(k)} = y^{(k)} - \xi$,
$$\begin{aligned} F(y^{(k)}) &= F'(\xi)\left[e_y^{(k)} + A_2(e_y^{(k)})^2\right] + O\!\left((e_y^{(k)})^3\right),\\ F'(y^{(k)}) &= F'(\xi)\left[I + 2A_2e_y^{(k)} + 3A_3(e_y^{(k)})^2\right] + O\!\left((e_y^{(k)})^3\right),\\ F''(y^{(k)}) &= F'(\xi)\left[2A_2 + 6A_3e_y^{(k)} + 12A_4(e_y^{(k)})^2\right] + O\!\left((e_y^{(k)})^3\right),\\ F'''(y^{(k)}) &= F'(\xi)\,6A_3 + O\!\left(e_y^{(k)}\right),\end{aligned}\tag{21}$$
from which it follows that
$$\begin{aligned} F(y^{(k)}) &= F'(\xi)\left[M_1(e^{(k)})^2 + M_2(e^{(k)})^3 + (M_3 + A_2M_1^2)(e^{(k)})^4 + A_2(M_2M_1 + M_1M_2)(e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),\\ F'(y^{(k)}) &= F'(\xi)\left[I + 2A_2M_1(e^{(k)})^2 + 2A_2M_2(e^{(k)})^3 + (2A_2M_3 + 3A_3M_1^2)(e^{(k)})^4 + \left(2A_2M_4 + 3A_3(M_1M_2 + M_2M_1)\right)(e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),\\ F''(y^{(k)}) &= F'(\xi)\left[2A_2 + 6A_3M_1(e^{(k)})^2 + 6A_3M_2(e^{(k)})^3\right] + O\!\left((e^{(k)})^4\right),\\ F'''(y^{(k)}) &= F'(\xi)\,6A_3 + O\!\left(e^{(k)}\right).\end{aligned}\tag{22}$$
Then, following the process seen in (14), the expansion of the second divided difference operator is given by
$$\begin{aligned}\relax [u_y^{(k)}, y^{(k)}; F] = F'(\xi)\Big[I &+ \left(2A_2M_1 + \alpha A_2F'(\xi)M_1\right)(e^{(k)})^2 + \left(2A_2M_2 + \alpha A_2F'(\xi)M_2\right)(e^{(k)})^3\\ &+ \left(2A_2M_3 + 3A_3M_1^2 + \alpha A_2F'(\xi)(M_3 + A_2M_1^2) + 3\alpha A_3M_1F'(\xi)M_1 + \alpha^2A_3F'(\xi)M_1F'(\xi)M_1\right)(e^{(k)})^4\\ &+ \big(2A_2M_4 + 3A_3(M_1M_2 + M_2M_1) + \alpha A_2F'(\xi)A_2(M_2M_1 + M_1M_2)\\ &\qquad + 3\alpha A_3M_1F'(\xi)(M_2 + M_1) + \alpha^2A_3F'(\xi)(M_1F'(\xi)M_2 + M_2F'(\xi)M_1)\big)(e^{(k)})^5\Big] + O\!\left((e^{(k)})^6\right),\end{aligned}\tag{23}$$
which can be expressed as
$$[u_y^{(k)}, y^{(k)}; F] = F'(\xi)\left[I + C_1(e^{(k)})^2 + C_2(e^{(k)})^3 + C_3(e^{(k)})^4 + C_4(e^{(k)})^5\right] + O\!\left((e^{(k)})^6\right),\tag{24}$$
where
$$\begin{aligned} C_1 &= 2A_2M_1 + \alpha A_2F'(\xi)M_1,\\ C_2 &= 2A_2M_2 + \alpha A_2F'(\xi)M_2,\\ C_3 &= 2A_2M_3 + 3A_3M_1^2 + \alpha A_2F'(\xi)\left(M_3 + A_2M_1^2\right) + 3\alpha A_3M_1F'(\xi)M_1 + \alpha^2A_3F'(\xi)M_1F'(\xi)M_1,\\ C_4 &= 2A_2M_4 + 3A_3(M_1M_2 + M_2M_1) + \alpha A_2F'(\xi)A_2(M_2M_1 + M_1M_2)\\ &\quad + 3\alpha A_3M_1F'(\xi)(M_2 + M_1) + \alpha^2A_3F'(\xi)\left(M_1F'(\xi)M_2 + M_2F'(\xi)M_1\right).\end{aligned}\tag{25}$$
Again, in a similar way to how (17) was obtained, we get
$$[u_y^{(k)}, y^{(k)}; F]^{-1} = \left(I - C_1(e^{(k)})^2 - C_2(e^{(k)})^3 + O\!\left((e^{(k)})^4\right)\right)[F'(\xi)]^{-1}.\tag{26}$$
Now, we proceed to calculate the expansion of $[u_y^{(k)}, y^{(k)}; F]^{-1}F(y^{(k)})$, obtaining
$$[u_y^{(k)}, y^{(k)}; F]^{-1}F(y^{(k)}) = M_1(e^{(k)})^2 + M_2(e^{(k)})^3 + \left(M_3 + A_2M_1^2 - C_1M_1\right)(e^{(k)})^4 + \left(A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\tag{27}$$
According to Theorem 1 in [14], proven by Singh, Sharma and Kumar, we have
$$\frac{F(y^{(k)})^TF(y^{(k)})}{F(x^{(k)})^TF(x^{(k)})} = \frac{P\,(e_y^{(k)})^2 + O\!\left((e_y^{(k)})^3\right)}{P\,(e^{(k)})^2 + Q\,(e^{(k)})^3 + O\!\left((e^{(k)})^4\right)},\tag{28}$$
where
$$P = \sum_{i=1}^{n}m_iP_i, \quad \text{with } P_i = R_i^TR_i, \qquad Q = \sum_{i=1}^{n}m_iQ_i, \quad \text{with } Q_i = R_i^TH_i + H_i^TR_i,$$
and
$$R_i = \left(\frac{\partial f_i}{\partial x_1}, \frac{\partial f_i}{\partial x_2}, \ldots, \frac{\partial f_i}{\partial x_n}\right), \qquad H_i = \frac{1}{2}\left(\frac{\partial^2 f_i}{\partial x_j\,\partial x_r}\right)_{n \times n}.$$
Using (19), we obtain
$$(e_y^{(k)})^2 = M_1^2(e^{(k)})^4 + (M_1M_2 + M_2M_1)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\tag{29}$$
After substituting (29) into (28) and performing the quotient, we obtain the following:
$$\frac{F(y^{(k)})^TF(y^{(k)})}{F(x^{(k)})^TF(x^{(k)})} = M_1^2(e^{(k)})^2 + \left(M_1M_2 + M_2M_1 - P^{-1}M_1^2Q\right)(e^{(k)})^3 + O\!\left((e^{(k)})^4\right).\tag{30}$$
Finally, combining (19), (27) and the last result obtained in (30), we have
$$\begin{aligned} x^{(k+1)} - \xi ={}& M_1(1 - p_1)(e^{(k)})^2 + M_2(1 - p_1)(e^{(k)})^3 + \left(M_3(1 - p_1) - p_1A_2M_1^2 + p_1C_1M_1 - p_2M_1^3\right)(e^{(k)})^4\\ &+ \left(M_4 - p_1\left(A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2\right) - p_2\left((M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2Q\right)M_1\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\end{aligned}\tag{31}$$
From this last result, it is easy to observe that when $p_1 \neq 1$, the iterative method has order of convergence two, since the term in $(e^{(k)})^2$ does not cancel out. However, when $p_1 = p_2 = 1$, the terms in $(e^{(k)})^2$ and $(e^{(k)})^3$ both cancel out, while the coefficient of $(e^{(k)})^4$ is as follows:
$$-A_2M_1^2 + C_1M_1 - M_1^3.\tag{32}$$
Let us remember that $C_1 = 2A_2M_1 + \alpha A_2F'(\xi)M_1$, so that (32) becomes
$$A_2M_1^2 + \alpha A_2F'(\xi)M_1^2 - M_1^3;\tag{33}$$
but since $M_1 = -(X_2 + A_2)$ and $X_2 = -B_1$, then $M_1 = B_1 - A_2$, with $B_1 = 2A_2 + \alpha A_2F'(\xi)$. Hence $M_1 = A_2 + \alpha A_2F'(\xi)$, and replacing $M_1$ in Equation (32), we obtain
$$\left(A_2 + \alpha A_2F'(\xi)\right)M_1^2 - M_1^3 = M_1^3 - M_1^3 = 0,\tag{34}$$
resulting in the error equation
$$x^{(k+1)} - \xi = \left(M_4 - A_2(M_2M_1 + M_1M_2) - (C_1 + C_2)M_2 - \left((M_1^2 + M_1)M_2 + M_2M_1 - P^{-1}M_1^2Q\right)M_1\right)(e^{(k)})^5 + O\!\left((e^{(k)})^6\right).\tag{35}$$
From the above, it is clear that if $p_1 = 1$ but $p_2 \neq 1$, only order four is reached. □

3. Efficiency Analysis

We have demonstrated the order of convergence of the proposed class of iterative methods for the different values of the parameters $p_1$, $p_2$. In this section, we perform a computational effort study, considering the cost of solving the linear systems involved in each iteration together with the other computational costs (functional evaluations, number of products/quotients, ...), not only for the proposed class but also for the Jacobian-free schemes presented in the introductory section.
To this end, recall that the number of operations (products/quotients) needed to solve an $n \times n$ linear system is
$$\frac{1}{3}n^3 + n^2 - \frac{1}{3}n.$$
However, if additional linear systems with the same coefficient matrix are solved, the cost increases by only $n^2$ operations each. For each divided difference, we calculate $n^2$ quotients; each functional evaluation of $F$ at a point costs $n$ scalar evaluations; and each evaluation of a divided difference requires $n^2 - n$ additional scalar evaluations. A matrix–vector product needs $n^2$ products/quotients. Based on the above, the computational cost of each method appears in Table 1. From family (9), which we call MS($p_1, p_2$), we consider the fifth-order member $p_1 = p_2 = 1$ and its fourth-order partner $p_1 = 1$, $p_2 = -1$. They have the same computational cost per iteration, which is reflected, along with that of the other methods, in Table 1.
The results presented in Table 1 show that the method with the highest computational cost is Traub-Ste, while Ostro01 and MS($p_1, p_2$) have intermediate costs, with the latter being slightly better. The lowest costs correspond to M4,3 and M6,3, although the latter has sixth-order convergence, a factor to consider when computing the efficiency index.
In order to show more clearly how the computational cost influences the efficiency index $I = p^{1/C}$, where $p$ is the convergence order of the corresponding scheme and $C$ its computational cost per iteration (see [24]), we present Figure 1 and Figure 2 for different sizes of the nonlinear system to be solved.
In Figure 1, we observe that for systems of size $n = 2, 3, \ldots, 10$, the proposed class of vectorial iterative methods (9) shows better computational efficiency than the other schemes, for both members of the family. On the other hand, as the system grows to sizes $n = 10, 11, \ldots, 50$, methods M4,3 and M6,3 yield better results. Let us notice that for sizes $n < 11$, our method is more efficient than M6,3, and for sizes $n < 12$, it is more efficient than M4,3 (see Figure 2).
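The comparison in Figures 1 and 2 can be reproduced from Table 1 alone. The following sketch (ours, with the cost polynomials of Table 1 hard-coded) evaluates $I = p^{1/C}$ for each scheme at several sizes $n$:

```python
# Sketch reproducing the efficiency comparison: I = p**(1/C) with the
# per-iteration costs C(n) of Table 1. MS(1,1) uses p = 5; its fourth-order
# partner MS(1,-1) has the same cost with p = 4.
costs = {
    "MS(1,1)":   (5, lambda n: 2 * n**3 / 3 + 6 * n**2 - 2 * n / 3),
    "Traub-Ste": (4, lambda n: n**3 + 10 * n**2 - 2 * n),
    "Ostro01":   (4, lambda n: 2 * n**3 / 3 + 6 * n**2 + n / 3),
    "M4,3":      (4, lambda n: n**3 / 3 + 8 * n**2 + 2 * n / 3),
    "M6,3":      (6, lambda n: n**3 / 3 + 11 * n**2 + 5 * n / 3),
}
for name, (p, C) in costs.items():
    print(name, [p ** (1.0 / C(n)) for n in (5, 10, 25, 50)])
```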
In the next section, we check the theoretical convergence order of our proposed method and assess the efficiency of different schemes in nonlinear systems of various sizes.

4. Numerical Results

In this section, we test numerically whether or not the theoretical order holds for practical purposes in the proposed Jacobian-free class (9). Moreover, we compare it with the methods that appeared in the efficiency section to show their accuracy and computational performance.
Numerical results were obtained with Matlab R2022b, using variable precision arithmetic with 8000 digits, on a computer with an AMD A12-9720P Radeon R7 processor (12 compute cores, 4C+8G) at 2.70 GHz. The results are shown in Tables 2–13, including the following information, where all norms are Euclidean:
• $k$: number of iterations needed ("-" appears when the scheme does not converge or needs more iterations than the maximum allowed).
• $\bar{\xi}$: obtained solution.
• CPU time: average time in seconds required by the iterative method to reach the solution, over ten executions.
• $\rho$: approximated computational order of convergence (ACOC), first introduced in [25],
$$\rho = \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| \,/\, \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| \,/\, \|x^{(k-1)} - x^{(k-2)}\|\right)}, \quad k = 2, 3, \ldots$$
(if the values of $\rho$ are not stable, "-" appears in the table).
• $\epsilon_{aprox} = \|x^{(k+1)} - x^{(k)}\|$.
• $\epsilon_{f} = \|F(x^{(k+1)})\|$. If $\epsilon_f$ or $\epsilon_{aprox}$ are very far from zero, or infinity or NaN is reached, "-" appears in the table.
Regarding the stopping criterion, the iterative process ends when one of the following conditions is fulfilled:
(i) $\|F(x^{(k+1)})\| < 10^{-100}$;
(ii) $\|x^{(k+1)} - x^{(k)}\| < 10^{-100}$;
(iii) fifty iterations are reached.
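A sketch of an experimental driver combining the stopping rules (i)–(iii) with the ACOC estimate is shown below; note that it runs in double precision, whereas the reported experiments use 8000-digit variable precision arithmetic, and ms_step is the sketch given for (9):

```python
# Sketch of the experimental driver: stopping rules (i)-(iii) plus the ACOC
# estimate; tol is loosened to double precision (the paper uses 1e-100).
import numpy as np

def solve(F, x0, step, tol=1e-12, max_iter=50):
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):                                  # criterion (iii)
        xs.append(step(F, xs[-1]))
        if (np.linalg.norm(F(xs[-1])) < tol                    # criterion (i)
                or np.linalg.norm(xs[-1] - xs[-2]) < tol):     # criterion (ii)
            break
    return xs

def acoc(xs):
    d = [np.linalg.norm(b - a) for a, b in zip(xs, xs[1:])]    # ||x^(k+1) - x^(k)||
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])
```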

4.1. Academic Problems

We include a wide variety of nonlinear systems with different dimensions, ranging from 2 to 25. The diversity of expressions present in these systems allows us to show the versatility and efficiency of the methods applied.
Example 1.
The first case we present is the system $F_1(x) = \left(f_1^1(x), f_2^1(x), \ldots, f_{25}^1(x)\right)^T$, where
$$f_i^1(x) = x_i^2x_{i+1} - 1, \quad i = 1, 2, \ldots, 24, \qquad f_{25}^1(x) = x_{25}^2x_1 - 1,$$
whose solution is $\bar{\xi} = (1, 1, \ldots, 1)$.
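In the sketch setting introduced earlier, $F_1$ is one line of NumPy, and the fifth-order member MS(1,1) can be launched from the seed of Table 2 with the driver sketched above (double precision, so the attainable errors are far coarser than those reported):

```python
# Example 1 in the sketch setting: F1 with n = 25 and the seed of Table 2.
import numpy as np

def F1(x):
    return x**2 * np.roll(x, -1) - 1       # f_i = x_i^2 x_{i+1} - 1, cyclically

xs = solve(F1, np.full(25, 1.5), lambda F, x: ms_step(F, x, p1=1.0, p2=1.0))
print(len(xs) - 1, "iterations,  ||F(x)|| =", np.linalg.norm(F1(xs[-1])))
```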
Table 2. Results for function $F_1$, using as seed $x^{(0)} = (1.5, 1.5, \ldots, 1.5)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 7 | 4.00 | 1.108 × 10⁻⁵¹ | 4.624 × 10⁻²⁰⁴ | ξ̄₁ | 191.7242 |
| MS(1,1) | 5 | 4.97 | 4.904 × 10⁻²⁴ | 3.995 × 10⁻¹¹⁷ | ξ̄₁ | 143.1027 |
| Traub-Ste | 5 | 4.00 | 1.241 × 10⁻³⁵ | 1.517 × 10⁻¹⁴⁰ | ξ̄₁ | 227.8747 |
| Ostro01 | - | - | - | - | - | - |
| M4,3 | 6 | 4.00 | 7.338 × 10⁻⁴⁰ | 3.015 × 10⁻¹⁵⁸ | ξ̄₁ | 163.9808 |
| M6,3 | 5 | 5.96 | 1.336 × 10⁻⁴⁷ | 7.876 × 10⁻²⁸⁴ | ξ̄₁ | 142.6446 |
In Table 2, the Ostro01 method reached the maximum number of iterations without converging to the solution, while the most notable scheme is M6,3 in almost all aspects, such as errors, iterations and computational time. On the other hand, although MS(1,1) exhibits a lower computational time than M4,3, the latter, with one additional iteration and a relatively similar time, attains better errors, suggesting that it might even be superior in terms of efficiency.
Example 2.
The second case we present is the system $F_2(x) = \left(f_1^2(x), f_2^2(x), \ldots, f_8^2(x)\right)^T$, where
$$f_i^2(x) = x_i - \cos\!\left(2x_i - \sum_{k=1}^{8}x_k\right), \quad i = 1, 2, \ldots, 8,$$
with $\bar{\xi} \approx (0.5149, 0.5149, \ldots, 0.5149)$ being its solution.
Table 3. Numerical results for $F_2$, $x^{(0)} = (1, 1, \ldots, 1)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 8 | 3.99 | 2.794 × 10⁻²⁹ | 4.092 × 10⁻¹¹⁵ | ξ̄₁ | 73.3126 |
| MS(1,1) | 8 | 5.00 | 7.534 × 10⁻⁷⁴ | 2.162 × 10⁻³⁶⁶ | ξ̄₁ | 72.6143 |
| Traub-Ste | 16 | 4.00 | 9.389 × 10⁻⁶⁸ | 3.459 × 10⁻²⁶⁹ | ξ̄₁ | 317.9304 |
| Ostro01 | 15 | 4.00 | 6.341 × 10⁻⁷⁰ | 1.641 × 10⁻²⁷⁸ | ξ̄₁ | 139.3808 |
| M4,3 | 7 | 4.00 | 8.510 × 10⁻⁸⁷ | 3.949 × 10⁻³⁴⁶ | ξ̄₁ | 76.9114 |
| M6,3 | 6 | 6.02 | 4.223 × 10⁻⁴⁷ | 8.063 × 10⁻²⁸¹ | ξ̄₁ | 79.7898 |
In Table 3, we observe that our proposed schemes yield the best computational times, with the MS(1,1) method standing out for having the smallest residual norm among all schemes. The M4,3 method, while showing good overall performance in terms of errors, is computationally more expensive, as it takes one iteration longer to reach the solution and slightly more time than the MS($p_1, p_2$) methods, indicating that each iteration takes longer to generate.
Example 3.
The third case we present is the system $F_3(x) = \left(f_1^3(x), f_2^3(x), \ldots, f_5^3(x)\right)^T$, where
$$f_i^3(x) = \sum_{k=1}^{5}x_k - x_i - e^{-x_i}, \quad i = 1, 2, 3, 4, 5,$$
whose solution is $\bar{\xi} \approx (0.20389, 0.20389, \ldots, 0.20389)$.
The corresponding numerical results appear in Table 4, where the MS(1,1) method shows the best errors, while the shortest computational time is achieved by the M6,3 method.
Table 4. Numerical results for $F_3$, $x^{(0)} = (0.5, 0.5, 0.5, 0.5, 0.5)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 4 | 4.00 | 1.305 × 10⁻⁵⁵ | 2.980 × 10⁻²²¹ | ξ̄₁ | 12.8166 |
| MS(1,1) | 4 | 5.00 | 4.997 × 10⁻¹⁰¹ | 1.143 × 10⁻⁵⁰³ | ξ̄₁ | 12.6105 |
| Traub-Ste | 4 | 4.00 | 9.461 × 10⁻⁶⁸ | 1.372 × 10⁻²⁷⁰ | ξ̄₁ | 18.9457 |
| Ostro01 | 4 | 4.00 | 1.420 × 10⁻⁴⁷ | 2.882 × 10⁻¹⁸⁹ | ξ̄₁ | 12.4768 |
| M4,3 | 4 | 4.00 | 3.440 × 10⁻⁴⁴ | 1.007 × 10⁻¹⁷⁵ | ξ̄₁ | 12.8707 |
| M6,3 | 3 | 6.07 | 1.482 × 10⁻²¹ | 3.012 × 10⁻¹²⁷ | ξ̄₁ | 10.5536 |
Example 4.
The fourth case we present is the system $F_4(x) = \left(f_1^4(x), f_2^4(x), \ldots, f_5^4(x)\right)^T$, where
$$f_i^4(x) = \sum_{k=1}^{5}x_k - x_i - x_ie^{-x_i}, \quad i = 1, 2, 3, 4, 5,$$
whose solution is $\bar{\xi} = (0, 0, \ldots, 0)$.
Table 5. Numerical results for $F_4$, $x^{(0)} = (0.5, 0.5, 0.5, 0.5, 0.5)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 4 | 3.99 | 7.838 × 10⁻³¹ | 4.801 × 10⁻¹²¹ | ξ̄₁ | 14.6365 |
| MS(1,1) | 4 | 5.00 | 2.864 × 10⁻⁵² | 2.514 × 10⁻²⁵⁸ | ξ̄₁ | 14.6532 |
| Traub-Ste | 4 | 4.00 | 8.656 × 10⁻³⁴ | 3.124 × 10⁻¹³³ | ξ̄₁ | 21.7138 |
| Ostro01 | 4 | 4.00 | 2.268 × 10⁻³⁵ | 6.443 × 10⁻¹⁴⁰ | ξ̄₁ | 14.0140 |
| M4,3 | 4 | 4.00 | 2.594 × 10⁻⁴⁷ | 9.227 × 10⁻¹⁸⁸ | ξ̄₁ | 14.8093 |
| M6,3 | 3 | 5.32 | 6.593 × 10⁻²⁶ | 7.254 × 10⁻¹⁵³ | ξ̄₁ | 12.0667 |
The results in Table 5 are similar to those in Table 4. They are very balanced, with the MS(1,1) method presenting the best errors, while the shortest computational time is achieved by the M6,3 method. However, the latter requires more effort per iteration: despite needing one iteration fewer than the other iterative methods, its execution time is very similar.
Example 5.
The fifth case we present is the system $F_5(x) = \left(f_1^5(x), f_2^5(x), \ldots, f_{10}^5(x)\right)^T$, where
$$f_i^5(x) = x_i + 1 - 2\log\!\left(1 - x_i + \sum_{k=1}^{10}x_k\right), \quad i = 1, 2, 3, \ldots, 10,$$
its solution being $\bar{\xi} \approx (7.4370, 7.4370, \ldots, 7.4370)$.
Table 6. Numerical results for $F_5$, $x^{(0)} = (7, 7, 7, \ldots, 7)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 3 | 4.03 | 1.343 × 10⁻²⁴ | 1.076 × 10⁻¹⁰¹ | ξ̄₁ | 58.6649 |
| MS(1,1) | 3 | 5.03 | 5.715 × 10⁻³⁶ | 1.098 × 10⁻¹⁸³ | ξ̄₁ | 58.6010 |
| Traub-Ste | 4 | 4.00 | 2.252 × 10⁻⁹⁷ | 1.397 × 10⁻³⁹² | ξ̄₁ | 138.3238 |
| Ostro01 | 4 | 4.00 | 1.317 × 10⁻⁹⁹ | 1.705 × 10⁻⁴⁰¹ | ξ̄₁ | 76.6871 |
| M4,3 | 3 | 4.00 | 1.490 × 10⁻²⁴ | 2.175 × 10⁻¹⁰¹ | ξ̄₁ | 64.3101 |
| M6,3 | 3 | 6.01 | 1.565 × 10⁻⁵³ | 4.782 × 10⁻³²⁶ | ξ̄₁ | 69.4223 |
For Table 6, we note that the best computational times are obtained by the MS($p_1, p_2$) methods. On the other hand, the best errors are obtained by the Ostro01 scheme, which has a competitive computational time considering that it requires four iterations, as does Traub-Ste. The latter has been shown to be the one that requires the most time to converge.
Example 6.
The sixth case we present is the system $F_6(x) = \left(f_1^6(x), f_2^6(x), \ldots, f_5^6(x)\right)^T$, where
$$f_i^6(x) = x_i + 1.5\sin\!\left(\sum_{k=1}^{5}x_k - x_i\right), \quad i = 1, 2, 3, 4, 5,$$
and there exist two solutions, $\bar{\xi}_1 \approx (0.3004, 0.3004, \ldots, 0.3004)$ and $\bar{\xi}_2 \approx (0.4579, 0.4579, \ldots, 0.4579)$.
Table 7. Numerical results for $F_6$, $x^{(0)} = (0.75, 0.75, 0.75, 0.75, 0.75)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 6 | 4.00 | 5.425 × 10⁻⁸⁸ | 5.270 × 10⁻³⁴⁹ | ξ̄₁ | 20.2517 |
| MS(1,1) | 5 | 4.97 | 5.922 × 10⁻³⁹ | 1.965 × 10⁻¹⁹⁰ | ξ̄₁ | 16.9191 |
| Traub-Ste | 20 | 4.00 | 9.536 × 10⁻⁹⁹ | 4.618 × 10⁻³⁹¹ | ξ̄₂ | 154.5691 |
| Ostro01 | 5 | 4.00 | 1.050 × 10⁻³³ | 2.164 × 10⁻¹³² | ξ̄₁ | 16.5869 |
| M4,3 | 10 | 4.01 | 3.127 × 10⁻³⁵ | 1.500 × 10⁻¹³⁸ | ξ̄₁ | 51.2143 |
| M6,3 | 17 | 5.90 | 6.480 × 10⁻²³ | 2.076 × 10⁻¹³² | ξ̄₂ | 105.4212 |
In Table 7, the best errors were obtained by MS(1,−1) and Traub-Ste, which converged to different solutions, while in terms of performance, MS(1,1) and Ostro01 appeared to be better.
Example 7.
The seventh case we present is the system $F_7(x) = \left(f_1^7(x), f_2^7(x)\right)^T$, where
$$f_i^7(x) = \arctan(x_i) + 1 - 2\left(\sum_{k=1}^{2}x_k^2 - x_i^2\right) = 0, \quad i = 1, 2,$$
and $\bar{\xi} \approx (0.936, 0.936)$.
Table 8. Numerical results for $F_7$, $x^{(0)} = (0.25, 0.25)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 5 | 4.00 | 5.203 × 10⁻⁴¹ | 6.322 × 10⁻¹⁶¹ | ξ̄₁ | 3.6588 |
| MS(1,1) | 4 | 5.00 | 5.282 × 10⁻⁷³ | 6.419 × 10⁻³⁶² | ξ̄₁ | 3.0030 |
| Traub-Ste | 6 | 4.00 | 6.615 × 10⁻⁸¹ | 7.658 × 10⁻³²¹ | ξ̄₁ | 4.6891 |
| Ostro01 | - | - | - | - | - | - |
| M4,3 | 6 | 4.00 | 5.613 × 10⁻⁹⁸ | 3.787 × 10⁻³⁸⁹ | ξ̄₁ | 3.7011 |
| M6,3 | 7 | 5.92 | 2.032 × 10⁻³² | 5.723 × 10⁻¹⁹⁰ | ξ̄₁ | 4.4699 |
In Table 8, the Ostro01 method could not converge to the solution because it reached the maximum number of iterations, whereas the MS($p_1, p_2$) and M4,3 methods showed better overall behavior.

4.2. Special Problems

In the following cases, we address problems of interest in which we demonstrate the advantages of our method applied to three types of systems: a non-differentiable system, a high-dimensional system, and a system with real-world applications.
We present a non-differentiable system of size $n = 2$, which includes both transcendental and algebraic functions, as well as the absolute value. These characteristics justify the use of derivative-free numerical schemes, as proposed in this work, to obtain the desired results. To demonstrate the effectiveness and robustness of our method, we solve the problem using five different initial estimates, showcasing its generally good performance on such problems.
Example 8.
The eighth case we present is the system $F_8(x) = \left(f_1^8(x), f_2^8(x)\right)^T$, where
$$f_1^8(x) = \log(|x_1|) + |x_2|, \qquad f_2^8(x) = e^{-x_1} + x_2 - 1,$$
with solutions $\bar{\xi}_1 \approx (0.6275, 0.4661)$ and $\bar{\xi}_2 \approx (-0.5122, -0.669)$.
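Since divided differences require no derivatives, the absolute values in $F_8$ pose no obstacle to the sketched implementation; under the same assumptions as before, the system reads:

```python
# The non-differentiable system F8 in the sketch setting.
import numpy as np

def F8(x):
    return np.array([np.log(np.abs(x[0])) + np.abs(x[1]),
                     np.exp(-x[0]) + x[1] - 1.0])

xs = solve(F8, np.array([0.25, 0.25]), lambda F, x: ms_step(F, x))
```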
Table 9. Numerical results for $F_8$, $x^{(0)} = (0.25, 0.25)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 4 | 4.00 | 1.139 × 10⁻⁵⁴ | 9.420 × 10⁻²¹⁷ | ξ̄₁ | 2.9146 |
| MS(1,1) | 4 | 4.20 | 1.415 × 10⁻⁷² | 2.272 × 10⁻³⁰¹ | ξ̄₁ | 2.8907 |
| Traub-Ste | 4 | 4.11 | 4.417 × 10⁻⁴⁸ | 1.391 × 10⁻¹⁹¹ | ξ̄₁ | 3.1954 |
| Ostro01 | 6 | 4.00 | 2.520 × 10⁻⁷⁸ | 1.241 × 10⁻³¹⁰ | ξ̄₁ | 3.4855 |
| M4,3 | 7 | 4.02 | 1.401 × 10⁻⁷² | 3.093 × 10⁻²⁸⁸ | ξ̄₁ | 4.0501 |
| M6,3 | 6 | 6.27 | 4.634 × 10⁻⁵⁸ | 7.902 × 10⁻³⁴⁶ | ξ̄₁ | 3.6475 |
In Table 9, the MS(1,1) method stands out with the lowest computational time, being more efficient than the others despite requiring a similar number of iterations, and showing a convergence order of 4.20, close to that of the other methods. The Traub-Ste method is less efficient, with a computational time of 3.1954 s and larger errors. Ostro01 is competitive, falling only slightly behind in computational time. Methods M4,3 and M6,3 are the most expensive in terms of computational time, requiring seven and six iterations, respectively, with the highest time recorded for M4,3. Despite their higher orders of convergence, both methods show lower overall efficiency than the MS and Ostro01 methods.
Table 10. Numerical results for $F_8$, $x^{(0)} = (1.25, 1.25)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 6 | 4.00 | 1.019 × 10⁻⁶⁵ | 6.029 × 10⁻²⁶¹ | ξ̄₁ | 4.1403 |
| MS(1,1) | 5 | 4.05 | 2.474 × 10⁻⁴² | 4.929 × 10⁻¹⁷⁷ | ξ̄₁ | 3.3999 |
| Traub-Ste | 6 | 4.00 | 8.946 × 10⁻⁸⁶ | 2.341 × 10⁻³⁴² | ξ̄₁ | 4.4433 |
| Ostro01 | 8 | 4.00 | 2.050 × 10⁻⁵⁵ | 5.429 × 10⁻²¹⁹ | ξ̄₁ | 4.4246 |
| M4,3 | - | - | - | - | - | - |
| M6,3 | - | - | - | - | - | - |
In Table 10, method MS(1,1) stands out as the one with the lowest computational time, requiring fewer iterations than the other methods and showing a convergence order of 4.05. The Traub-Ste method, despite having the same convergence order as MS(1,−1), shows a slightly higher computational time. Ostro01, which requires more iterations, falls only slightly behind in computational time but remains competitive overall. Methods M4,3 and M6,3 do not converge to the solution, as a division by zero occurred when calculating the divided differences.
Table 11. Numerical results for $F_8$, $x^{(0)} = (2, 2)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 10 | 4.00 | 6.679 × 10⁻⁷¹ | 1.113 × 10⁻²⁸¹ | ξ̄₁ | 6.9568 |
| MS(1,1) | 8 | 4.28 | 4.387 × 10⁻⁷⁷ | 3.113 × 10⁻³²¹ | ξ̄₁ | 5.4126 |
| Traub-Ste | 15 | 4.00 | 4.978 × 10⁻¹⁰⁰ | 2.442 × 10⁻³⁹⁷ | ξ̄₂ | 10.6624 |
| Ostro01 | - | - | - | - | - | - |
| M4,3 | - | - | - | - | - | - |
| M6,3 | - | - | - | - | - | - |
Method MS(1,1) stands out as the one with the lowest computational time, requiring fewer iterations than the other methods and showing a convergence order of 4.28. The Traub-Ste scheme, despite having the same convergence order as MS(1,−1), shows a significantly higher computational time but better results in terms of errors. MS(1,−1), while requiring more iterations than MS(1,1), still maintains a competitive time compared to Traub-Ste. Division by zero is the reason why the other methods could not reach the solution.
Table 12. Numerical results for $F_8$, $x^{(0)} = (2.25, 2.25)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 16 | 4.00 | 2.821 × 10⁻⁹⁴ | 3.543 × 10⁻³⁷⁵ | ξ̄₁ | 10.5084 |
| MS(1,1) | 6 | 4.19 | 8.959 × 10⁻⁴⁵ | 1.550 × 10⁻¹⁸⁴ | ξ̄₂ | 4.2948 |
| Traub-Ste | - | - | - | - | - | - |
| Ostro01 | k > 50 | - | - | - | - | - |
| M4,3 | - | - | - | - | - | - |
| M6,3 | - | - | - | - | - | - |
In Table 12, it is observed that only the members of our proposed class converged, and they did so to different solutions. MS(1,1) stood out as the most efficient in terms of time and iterations, while MS(1,−1) yielded the best errors. The Ostro01 method did not converge because it exceeded the maximum number of iterations, while the other methods could not reach the solution due to division by zero during the computation of the divided differences.
Table 13. Numerical results for $F_8$, $x^{(0)} = (2.5, 2.5)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 10 | 4.00 | 2.063 × 10⁻⁸⁶ | 6.071 × 10⁻³⁴⁴ | ξ̄₂ | 6.8361 |
| MS(1,1) | 7 | 4.17 | 3.281 × 10⁻⁵² | 6.327 × 10⁻²¹⁸ | ξ̄₁ | 4.7535 |
| Traub-Ste | - | - | - | - | - | - |
| Ostro01 | - | - | - | - | - | - |
| M4,3 | - | - | - | - | - | - |
| M6,3 | - | - | - | - | - | - |
The observations for Table 13 mirror those for Table 12, except that now the Ostro01 method does not converge due to a division by zero, just like the other methods that fail to converge.
Example 9.
Now, we work with a high-dimensional problem ($n = 200$), where we use anonymous functions to obtain the results in a reasonable amount of time. We take two initial estimates for this problem. The ninth case we present is the system $F_9(x) = \left(f_1^9(x), f_2^9(x), \ldots, f_{200}^9(x)\right)^T$, where
$$\begin{aligned} f_i^9(x) &= x_i^3 + 2x_{i+1}^2x_{i+2} + 4x_{i+3} + 3x_{i+4}^2 - 10, \quad i = 1, 2, \ldots, 196,\\ f_{197}^9(x) &= x_{197}^3 + 2x_{198}^2x_{199} + 4x_{200} + 3x_1^2 - 10,\\ f_{198}^9(x) &= x_{198}^3 + 2x_{199}^2x_{200} + 4x_1 + 3x_2^2 - 10,\\ f_{199}^9(x) &= x_{199}^3 + 2x_{200}^2x_1 + 4x_2 + 3x_3^2 - 10,\\ f_{200}^9(x) &= x_{200}^3 + 2x_1^2x_2 + 4x_3 + 3x_4^2 - 10,\end{aligned}$$
whose solution is $\bar{\xi} = (1, 1, \ldots, 1)$.
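In the sketch setting, the cyclic structure of $F_9$ vectorizes naturally; index shifts play the role of the anonymous functions mentioned above:

```python
# Vectorized sketch of F9 (n = 200): cyclic shifts encode the wraparound rows.
import numpy as np

def F9(x):
    x1, x2, x3, x4 = (np.roll(x, -k) for k in (1, 2, 3, 4))
    return x**3 + 2 * x1**2 * x2 + 4 * x3 + 3 * x4**2 - 10
```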
The results presented in Table 14 and Table 15 show that, for both initial estimates, the best performance was obtained by the Traub-Ste method, followed closely by our proposed methods, especially MS(1,1). On the other hand, in both cases the Ostro01 method failed to converge to a solution, reaching the maximum number of iterations, while M4,3 and M6,3 displayed the same behavior in the experiment shown in Table 14.
Example 10.
In this problem, we focus on the comprehensive analysis of a nonlinear elliptic boundary value problem, previously addressed in [26]. The same idea from [27] is adopted, where introducing a new term $|u|$ makes the problem non-differentiable. The partial differential equation we study next can represent a wide range of nonlinear phenomena present in physical, chemical, and biological systems. The applied problem revolves around the diffusion of nutrients in a biological substrate. In this context, within the fields of agriculture and biotechnology, we consider a two-dimensional substrate that simulates a cultivation area or a growth medium for microorganisms. The tenth case is given by the partial differential equation
$$\begin{aligned} &\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = u(x,y)^3 + |u(x,y)|, \qquad \Omega = \{(x,y) \in \mathbb{R}^2 : x, y \in [0,1]\},\\ &u(x,0) = 2x^2 - x + 1, \quad u(x,1) = 2, \qquad 0 \le x \le 1,\\ &u(0,y) = 2y^2 - y + 1, \quad u(1,y) = 2, \qquad 0 \le y \le 1.\end{aligned}$$
In this scenario, $u(x,y)$ represents the concentration of nutrients at each point $(x,y)$ of the substrate. The equation reflects how nutrients diffuse within the substrate, while the term $u(x,y)^3 + |u(x,y)|$ incorporates the interaction between nutrient concentration and the biochemical processes present in the substrate. Discretizing with central differences, we obtain the expression
$$2(\lambda^2 + 1)u_{i,j} - \lambda^2\left(u_{i,j+1} + u_{i,j-1}\right) - \left(u_{i+1,j} + u_{i-1,j}\right) + h^2\left(u_{i,j}^3 + |u_{i,j}|\right) = 0,$$
where $h$ is the step size in the $x$ direction and $\lambda$ the ratio between the step sizes in the two directions.
Then, following the process seen in [27], we arrive at the nonlinear system
$$\Phi(u) = \tau u + h^2\,\nu(u) - W = 0,\tag{36}$$
where $\tau$ collects the coefficients of the discretization, $\nu(u)$ the nonlinear terms $u_{i,j}^3 + |u_{i,j}|$, and $W$ the boundary contributions.
To solve the nonlinear system (36), we employ the iterative method MS(1,1). We take as initial approximation $u^{(0)} = (1, 1, \ldots, 1)$ of the exact solution $u(x,y)$. The technical specifications of the computer used to solve this case are identical to those employed for the academic problems. In this instance, we select $n = m = 10$.
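A sketch of the discretized residual follows, assuming an $m \times m$ interior grid on the unit square (so $\lambda = 1$) and the five-point formula above; the name $\Phi$ matches (36), and solve/ms_step are the routines sketched earlier:

```python
# Sketch of the residual Phi(u) for the discretized diffusion problem,
# assuming an m-by-m interior grid on the unit square (lambda = 1, h = k).
import numpy as np

def make_residual(m=10):
    h = 1.0 / (m + 1)
    t = np.linspace(0.0, 1.0, m + 2)        # grid nodes in one direction

    def Phi(u_flat):
        U = np.empty((m + 2, m + 2))
        U[1:-1, 1:-1] = u_flat.reshape(m, m)
        U[:, 0] = 2 * t**2 - t + 1          # u(x, 0)
        U[:, -1] = 2.0                      # u(x, 1)
        U[0, :] = 2 * t**2 - t + 1          # u(0, y)
        U[-1, :] = 2.0                      # u(1, y)
        C = U[1:-1, 1:-1]
        lap = U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2] - 4 * C
        return (-lap + h**2 * (C**3 + np.abs(C))).ravel()

    return Phi
```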
By solving the system associated with this PDE, we obtain the solution presented in the following matrix:
$$u(x_i, y_j) = \begin{pmatrix} 1.0000 & 0.9256 & 0.8843 & 1.5207 & 1.7438 & 2.0000\\ 0.9256 & 0.8883 & 0.8575 & 0.9627 & 0.7187 & 2.0000\\ 0.8843 & 0.8575 & 0.8270 & 0.6335 & 0.4006 & 2.0000\\ 1.5207 & 0.9627 & 0.6335 & 0.0726 & 0.0364 & 2.0000\\ 1.7438 & 0.7187 & 0.4006 & 0.0364 & 0.0182 & 2.0000\\ 2.0000 & 2.0000 & 2.0000 & 2.0000 & 2.0000 & 2.0000 \end{pmatrix}$$
After five iterations, the obtained solution satisfies
$$\|u^{(5)} - u^{(4)}\| < 6.038 \times 10^{-74},$$
and the norm of the nonlinear operator $\Phi$ evaluated at the last iteration is
$$\|\Phi(u^{(5)})\| \approx 1.22 \times 10^{-299}.$$
The approximate solution in $\mathbb{R}^{100}$, resized to a $10 \times 10$ matrix for $i, j = 1, \ldots, 10$ and then embedded, through the boundary conditions, in the full solution matrix on the grid, is displayed in the matrix above.
We can assert that the values $u(x_i, y_j)$ obtained at the interior nodes fall within the range $0 < u(x_i, y_j) < 1$. In the context of the posed problem, which involves the diffusion of nutrients in a two-dimensional biological substrate, these observations take on key importance. The concentration of nutrients $u(x_i, y_j)$ follows a pattern in which interior values are constrained between 0 and 1. This limitation in concentration suggests a biological equilibrium in the system, where absorption, diffusion, and biochemical reactions are in harmony, implying that the system is stable and well regulated in terms of nutrient availability.

5. Conclusions

The fifth-order Jacobian-free iterative method with scalar accelerators developed in this study (along with its fourth-order partners) has proven to be an efficient tool for solving nonlinear systems of equations. It preserves the convergence order of the original scheme, despite eliminating the Jacobian matrix calculations, at low computational cost. The substitution of these matrices with divided differences not only reduces computational complexity but also facilitates implementation, maintaining high precision and efficiency. The numerical results highlight its good performance and efficiency in non-differentiable problems, high-dimensional problems, and practical application problems. In future work, higher-order Jacobian-free schemes might be developed, focusing on their stability and efficiency.

Author Contributions

Conceptualization, J.R.T. and A.C.; software, A.R.-C.; methodology, J.G.M.; validation, J.G.M., A.C. and J.R.T.; investigation, A.C.; formal analysis, J.R.T.; resources, A.R.-C.; writing—original draft preparation, A.R.-C.; visualization, J.G.M.; writing—review and editing, J.R.T. and A.C.; supervision, J.R.T. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

Funded with Ayuda a Primeros Proyectos de Investigación (PAID-06-23), Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV).

Data Availability Statement

All the necessary data are contained within this paper.

Acknowledgments

The authors thank the reviewers for their corrections and comments, which have helped to improve this document.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Jiang, Y.; Chai, T.; Chen, G. Output Feedback Based Adaptive Optimal Output Regulation for Continuous-Time Strict-Feedback Nonlinear Systems. IEEE Trans. Autom. Control 2024. [Google Scholar] [CrossRef]
  2. Xu, W.; Zheng, N.; Hayami, K. Jacobian-free implicit inner-iteration preconditioner for nonlinear least squares problems. J. Sci. Comput. 2016, 68, 1055–1081. [Google Scholar] [CrossRef]
  3. Fung, S.W.; Heaton, H.; Li, Q.; McKenzie, D.; Osher, S.; Yin, W. Jacobian-free backpropagation for implicit networks. AAAI Conf. Artif. Intell. 2022, 36, 6648–6656. [Google Scholar] [CrossRef]
  4. Ashrafizadeh, A.; Devaud, C.B.; Aydemir, N.U. A Jacobian-free Newton–Krylov method for thermalhydraulics simulations. Int. J. Numer. Methods Fluids 2015, 77, 590–615. [Google Scholar] [CrossRef]
  5. Eguillon, Y.; Lacabanne, B.; Tromeur-Dervout, D. IFOSMONDI co-simulation algorithm with Jacobian-free methods in PETSc. Eng. Comput. 2022, 38, 4423–4449. [Google Scholar] [CrossRef]
  6. Samanskii, V. On a modification of the Newton method. Ukr. Math. J. 1967, 19, 133–138. [Google Scholar]
  7. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 1, 64–72. [Google Scholar] [CrossRef]
  8. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970. [Google Scholar]
  9. Chicharro, F.I.; Cordero, A.; Gutiérrez, J.M.; Torregrosa, J.R. Complex dynamics of derivative-free methods for nonlinear equations. Appl. Math. Comput. 2013, 219, 7023–7035. [Google Scholar] [CrossRef]
  10. Wang, X.; Zhang, T.; Qin, Y. Efficient two-step derivative-free iterative methods with memory and their dynamics. Int. J. Comput. Math. 2016, 93, 1423–1446. [Google Scholar] [CrossRef]
  11. Chu, Y.; Rafiq, N.; Shams, M.; Akram, S.; Mir, N.A.; Kalsoom, H. Computer methodologies for the comparison of some efficient derivative free simultaneous iterative methods for finding roots of non-linear equations. Comput. Mater. Contin. 2020, 66, 275–290. [Google Scholar] [CrossRef]
  12. Bahgat, M.S. Three-point iterative algorithm in the absence of the derivative for solving nonlinear equations and their basins of attraction. J. Egypt. Math. Soc. 2021, 29, 23. [Google Scholar] [CrossRef]
  13. Zhanlav, T.; Otgondorj, K. Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory 2020, 49, 76–90. [Google Scholar] [CrossRef]
  14. Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2023, 93, 203–225. [Google Scholar] [CrossRef]
  15. Amiri, A.R.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Preserving the order of convergence: Low-complexity Jacobian-free iterative schemes for solving nonlinear systems. Comput. Appl. Math. 2018, 337, 87–97. [Google Scholar] [CrossRef]
  16. Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA; London, UK, 1966. [Google Scholar]
  17. Behl, R.; Cordero, A.; Torregrosa, J.R.; Bhalla, S. A New High-Order Jacobian-Free Iterative Method with Memory for Solving Nonlinear Systems. Mathematics 2021, 9, 2122. [Google Scholar] [CrossRef]
  18. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  19. Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506. [Google Scholar] [CrossRef]
  20. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef]
  21. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  22. Abad, M.F.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145. [Google Scholar]
  23. Cordero, A.; Rojas–Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904. [Google Scholar] [CrossRef]
  24. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  25. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  26. Villalba, E.G.; Hernandez, M.; Hueso, J.L.; Martínez, E. Using decomposition of the nonlinear operator for solving non-differentiable problems. Math. Methods Appl. Sci. 2022. [Google Scholar] [CrossRef]
  27. Cordero, A.; Leonardo-Sepúlveda, M.; Torregrosa, J.R.; Vassileva, M.P. Enhancing the convergence order from p to p+3 in iterative methods for solving nonlinear systems of equations without the use of Jacobian matrices. Mathematics 2023, 11, 4238. [Google Scholar] [CrossRef]
Figure 1. $I$ indices for MS($p_1, p_2$) and comparison methods.
Figure 2. $I$ indices for MS($p_1, p_2$) and comparison schemes.
Table 1. Computational effort of new and comparison schemes.

| Method | Complexity C |
|---|---|
| MS($p_1, p_2$) | $\frac{2}{3}n^3 + 6n^2 - \frac{2}{3}n$ |
| Traub-Ste | $n^3 + 10n^2 - 2n$ |
| Ostro01 | $\frac{2}{3}n^3 + 6n^2 + \frac{1}{3}n$ |
| M4,3 | $\frac{1}{3}n^3 + 8n^2 + \frac{2}{3}n$ |
| M6,3 | $\frac{1}{3}n^3 + 11n^2 + \frac{5}{3}n$ |
Table 14. Results for function $F_9$, using as seed $x^{(0)} = (1.5, 1.5, \ldots, 1.5)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 18 | 3.87 | 1.240 × 10⁻⁴ | 0.000 × 10⁰ | ξ̄₁ | 91.0024 |
| MS(1,1) | 9 | 4.61 | 3.092 × 10⁻⁴ | 0.000 × 10⁰ | ξ̄₁ | 45.1205 |
| Traub-Ste | 4 | 3.54 | 5.968 × 10⁻⁷ | 0.000 × 10⁰ | ξ̄₁ | 20.2254 |
| Ostro01 | k > 50 | - | - | - | - | - |
| M4,3 | k > 50 | - | - | - | - | - |
| M6,3 | k > 50 | - | - | - | - | - |
Table 15. Results for function $F_9$, using as seed $x^{(0)} = (0.68, 0.68, \ldots, 0.68)$.

| Iterative Method | k | ρ | ε_aprox | ε_f | ξ̄ | CPU Time (s) |
|---|---|---|---|---|---|---|
| MS(1,−1) | 6 | 3.93 | 6.001 × 10⁻⁴ | 0.000 × 10⁰ | ξ̄₁ | 32.9368 |
| MS(1,1) | 5 | 4.67 | 1.264 × 10⁻⁶ | 0.000 × 10⁰ | ξ̄₁ | 26.1185 |
| Traub-Ste | 4 | 3.81 | 1.662 × 10⁻¹⁰ | 0.000 × 10⁰ | ξ̄₁ | 20.4459 |
| Ostro01 | k > 50 | - | - | - | - | - |
| M4,3 | 15 | 4.24 | 1.561 × 10⁻⁶ | 0.000 × 10⁰ | ξ̄₁ | 77.8942 |
| M6,3 | 10 | 5.36 | 3.438 × 10⁻⁴ | 0.000 × 10⁰ | ξ̄₁ | 51.8874 |
