Article

Design and Applicability of Two-Step Fractional Newton–Raphson Method

Naseem Zulfiqar Ali, Awais Gul Khan, Muhammad Uzair Awan, Loredana Ciurdariu and Kamel Brahim
1 Department of Mathematics, Government College University Faisalabad, Faisalabad 38000, Pakistan
2 Department of Mathematics, Politehnica University of Timisoara, 300006 Timisoara, Romania
3 Department of Mathematics, College of Science, University of Bisha, P.O. Box 551, Bisha 61922, Saudi Arabia
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(10), 582; https://doi.org/10.3390/fractalfract8100582
Submission received: 29 August 2024 / Revised: 26 September 2024 / Accepted: 27 September 2024 / Published: 2 October 2024
(This article belongs to the Special Issue Fractional Systems, Integrals and Derivatives: Theory and Application)

Abstract

The main objective of this research article is to develop two-step fractional numerical methods for finding the solutions of nonlinear equations. In addition, we present a detailed convergence analysis of the proposed methods. Numerical comparisons show that the proposed methods significantly improve the convergence rate and accuracy. Additionally, we illustrate the dynamical behavior of the proposed methods through basins of attraction.

1. Introduction and Preliminaries

Fractional calculus plays a pivotal role in modern analysis. Many mathematical models based on fractional-order derivatives are significant because they are more precise and realistic than traditional models. Driven by progress in fractional calculus, many researchers are studying how to solve nonlinear equations involving fractional operators, and they are developing various analytical and numerical methods to find approximate solutions. For details, see [1,2,3,4,5,6,7,8,9,10].
One of the most famous and fascinating methods for solving nonlinear equations is the Newton–Raphson method, and it has attracted much attention from recent researchers seeking to formulate its fractional variants. In this regard, a fractional Newton method was developed in [11], but only by replacing the classical derivative with a fractional one. On the other hand, in [12], the researchers derived a fractional Newton method and proved its convergence of order 2α. Many other studies use fractional calculus to formulate iterative schemes for the solution of nonlinear equations; see [13,14,15] and the references therein for more details.
In this manuscript, our aim is to develop some two-step fractional iterative schemes by using fractional calculus. The paper is organized as follows: In Section 2, the new two-step fractional numerical methods are introduced and their convergence is discussed. In Section 3, we compare the developed numerical methods with the existing fractional Newton method (FNM) in terms of iteration counts and residual logarithms. Furthermore, the dynamical analysis is presented in Section 4 in the form of basins of attraction. Finally, we conclude our work in Section 5, along with future directions.
We now recall some basic concepts and results of the Caputo-fractional derivative and the generalized fractional Taylor series. Then, we discuss the convergence analysis of the fractional Newton method by proposing a new error equation.
Definition 1. 
One kind of fractional derivative, with $\alpha, a \in \mathbb{R}$, is the Caputo-fractional derivative of $f(x)$, which is defined as follows:
$$ {}^{C}D_{a}^{\alpha} f(x) = \begin{cases} \dfrac{1}{\Gamma(m-\alpha)} \displaystyle\int_a^x (x-t)^{m-\alpha-1}\, \dfrac{d^{m} f(t)}{dt^{m}}\, dt, & m-1 < \alpha < m \in \mathbb{N}, \\[2mm] \dfrac{d^{m} f(t)}{dt^{m}}, & \alpha = m \in \mathbb{N}. \end{cases} $$
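Definition 1 can also be evaluated numerically when only the classical derivative of $f$ is available in closed form. The following Python sketch is our own illustration (not part of the original paper): the helper name `caputo_derivative`, the default base point $a = 0$, and the use of SciPy's algebraic-weight quadrature are assumptions of this sketch, which covers the case $0 < \alpha \le 1$ (i.e., $m = 1$ in Definition 1).

```python
import math
from scipy.integrate import quad

def caputo_derivative(df, x, alpha, a=0.0):
    """Caputo fractional derivative of order 0 < alpha <= 1 at x > a,
    computed from the classical derivative df = f' via Definition 1 (m = 1):
        (1 / Gamma(1 - alpha)) * int_a^x (x - t)^(-alpha) * f'(t) dt."""
    if abs(alpha - 1.0) < 1e-12:
        return df(x)                      # alpha = 1 reduces to the ordinary derivative
    # QUADPACK's algebraic weight handles the endpoint singularity:
    # weight = (t - a)^p * (x - t)^q with p = 0 and q = -alpha.
    integral, _ = quad(df, a, x, weight='alg', wvar=(0.0, -alpha))
    return integral / math.gamma(1.0 - alpha)

# Check against the known closed form for f(x) = x^3 - 10 with a = 0:
# the constant drops out and D^alpha x^3 = Gamma(4)/Gamma(4 - alpha) * x^(3 - alpha).
if __name__ == "__main__":
    x, alpha = 2.0, 0.5
    numeric = caputo_derivative(lambda t: 3.0 * t**2, x, alpha)
    exact = math.gamma(4.0) / math.gamma(4.0 - alpha) * x**(3.0 - alpha)
    print(numeric, exact)   # the two values should agree to quadrature accuracy
```

For the polynomial and transcendental test functions used later, such a numerical helper (or a closed-form Caputo derivative) is all that is needed to run the iterative schemes below.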
Theorem 1 
([7]). Let us suppose that ${}^{C}D_{a}^{j\alpha} f(x) \in C[a,b]$ for $j = 1, 2, \ldots, n+1$, where $\alpha \in (0,1]$. Then, we have
$$ f(x) = \sum_{i=0}^{n} \frac{{}^{C}D_{a}^{i\alpha} f(a)}{\Gamma(i\alpha+1)}\,(x-a)^{i\alpha} + \frac{{}^{C}D_{a}^{(n+1)\alpha} f(\xi)}{\Gamma((n+1)\alpha+1)}\,(x-a)^{(n+1)\alpha}, $$
with $a \leq \xi \leq x$, for all $x \in (a,b]$, where ${}^{C}D_{a}^{n\alpha} f(a) = {}^{C}D_{a}^{\alpha}\,{}^{C}D_{a}^{\alpha} \cdots {}^{C}D_{a}^{\alpha} f(a)$ ($n$ times).
By considering the Taylor series expansion of a function $f(x)$ centered at $a = \bar{x}$, we can restate it in the fractional sense discussed in [7]:
$$ f(x) = \frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr). $$
The fractional derivative of $f(x)$ about $\bar{x}$ has a corresponding expansion, which is
$$ {}^{C}D_{\bar{x}}^{\alpha} f(x) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr). $$
We now present an overview of a Newton-type method incorporating Caputo-fractional derivatives, along with a comprehensive examination of its convergence properties.
Theorem 2. 
Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$, and suppose that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Furthermore, assume that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$. If the initial guess $x_0$ is sufficiently close to $\bar{x}$, then
$$ x_{k+1} = x_k - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)}, \qquad k = 0, 1, \ldots, $$
has a local convergence rate of at least 2 α and the error equation for this scheme is given by
$$ e_{k+1} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(2\alpha+1) - \Gamma(\alpha+1)^{2}\bigr]}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\,\Gamma(\alpha+1)\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
Proof. 
Suppose that $\{x_k\}_{k \geq 0}$ is the sequence of iterates. Expanding $f$ and its Caputo derivative, evaluated at $x_k$, around $\bar{x}$ yields
$$ f(x) = \frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr), $$
and
$$ {}^{C}D_{\bar{x}}^{\alpha} f(x) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr). $$
Thus, a Newton-type quotient can be derived and represented in terms of the approximation error $e_k = x_k - \bar{x}$ at the k-th iteration, as
$$ \frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{\dfrac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}. $$
So, to attain a convergence rate of 2 α , it is evident that a Caputo-fractional Newton’s method must incorporate Γ ( α + 1 ) as a damping factor, leading to the following resulting error equation:
$$ e_{k+1}^{\alpha} = x_{k+1} - \bar{x} = e_k^{\alpha} - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(2\alpha+1) - \Gamma(\alpha+1)^{2}\bigr]}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\,\Gamma(\alpha+1)\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
This completes the proof.    □
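To make the scheme of Theorem 2 concrete, the following sketch iterates (3) using the `caputo_derivative` helper introduced after Definition 1. Since the root $\bar{x}$ is unknown in practice, the fractional derivative is taken with a fixed base point $a$ (here $a = 0$); the function name `fnm`, the tolerance, and the iteration cap are our illustrative choices, not the authors' Maple settings.

```python
import math

def fnm(f, df, x0, alpha, a=0.0, tol=1e-10, max_iter=100):
    """Fractional Newton method (FNM) of Theorem 2:
        x_{k+1} = x_k - Gamma(alpha + 1) * f(x_k) / D^alpha f(x_k),
    with the Caputo derivative supplied by the caputo_derivative helper."""
    gam = math.gamma(alpha + 1.0)
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - gam * f(x) / caputo_derivative(df, x, alpha, a)
        # stop when both the step size and the residual are below tol
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# f1(x) = x^3 - 10 from Table 1, starting from x0 = 2.5:
root, iters = fnm(lambda x: x**3 - 10.0, lambda x: 3.0 * x**2, 2.5, alpha=0.9)
print(root, iters)   # should approach 10**(1/3) = 2.15443...
```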

2. New Fractional Numerical Techniques

In this section, we introduce two-step fractional Newton–Raphson methods (TSFNRM1 and TSFNRM2), which are modified versions of the Newton–Raphson technique.
Now, the following result demonstrates the convergence analysis of Algorithm 1.
Algorithm 1 Two-step Caputo-Fractional Newton–Raphson Method (TSFNRM1).
For an initial approximation x 0 sufficiently close to x ¯ , find x k + 1 by the following Caputo-type two-step scheme:
$$ y_k = x_k - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)}, $$
$$ x_{k+1} = y_k - \Gamma(\alpha+1)\,\frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)}, \qquad k = 0, 1, \ldots. $$
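A minimal sketch of Algorithm 1 in code, reusing `f`, `df`, and the `caputo_derivative` helper from the sketch after Definition 1; as before, the fixed base point $a$ and the stopping tolerances are our assumptions rather than the authors' implementation.

```python
import math

def tsfnrm1(f, df, x0, alpha, a=0.0, tol=1e-10, max_iter=100):
    """Two-step Caputo-fractional Newton-Raphson method (Algorithm 1, TSFNRM1):
    the fractional derivative at x_k is reused in both steps."""
    gam = math.gamma(alpha + 1.0)
    x = x0
    for k in range(1, max_iter + 1):
        d = caputo_derivative(df, x, alpha, a)   # D^alpha f(x_k)
        y = x - gam * f(x) / d                   # predictor (FNM step)
        x_new = y - gam * f(y) / d               # corrector with the same derivative
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```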
Theorem 3. 
Assume that $f : D \subseteq \mathbb{R} \to \mathbb{R}$ is a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$ and that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Also suppose that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$, and that $x_0$ is sufficiently close to $\bar{x}$. Under these circumstances, Algorithm 1 has a local convergence rate of at least $3\alpha$, and the error equation for this scheme is given by
$$ e_{k+1} = \frac{\bigl({}^{C}D_{\bar{x}}^{2\alpha} f(x_k)\bigr)^{2}\,\bigl[-\Gamma(\alpha+1)^{2}\,{}^{C}D_{\bar{x}}^{\alpha} f(x_k) + \Gamma(2\alpha+1)\bigr]}{\bigl({}^{C}D_{\bar{x}}^{\alpha} f(x_k)\bigr)^{4}\,\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)}\, e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr). $$
Proof. 
Let $\{x_k\}_{k \geq 0}$, obtained using (5), be the sequence of iterates approximating the root $\bar{x}$ of $f(x)$. Expanding $f(x)$ and its Caputo derivative, evaluated at $x_k$, around $\bar{x}$ yields
$$ f(x) = \frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr), $$
and
$$ {}^{C}D_{\bar{x}}^{\alpha} f(x) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr). $$
Thus, a Newton-type quotient can be derived and represented in terms of the approximation error $e_k = x_k - \bar{x}$ at the k-th iteration, as
$$ \frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{\dfrac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}. $$
So, from (10), we have
$$ y_k = e_k^{\alpha} - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(2\alpha+1) - \Gamma(\alpha+1)^{2}\bigr]}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\,\Gamma(\alpha+1)\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
From (13) and (16), we obtain
$$ f(y_k) = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(\alpha+1)^{2} - \Gamma(2\alpha+1)\bigr]}{\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
From (14) and (17), we obtain
$$ \frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[-\Gamma(\alpha+1)^{2} + \Gamma(2\alpha+1)\bigr]}{\bigl({}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\bigr)^{2}\,\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
So, to attain a convergence rate of 3 α , it is evident that the TSFNRM1 must incorporate Γ ( α + 1 ) as a damping factor, leading to the following resulting error equation
$$ e_{k+1} = y_k - \Gamma(\alpha+1)\,\frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{\bigl({}^{C}D_{\bar{x}}^{2\alpha} f(x_k)\bigr)^{2}\,\bigl[-\Gamma(\alpha+1)^{2}\,{}^{C}D_{\bar{x}}^{\alpha} f(x_k) + \Gamma(2\alpha+1)\bigr]}{\bigl({}^{C}D_{\bar{x}}^{\alpha} f(x_k)\bigr)^{4}\,\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)}\, e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr). $$
This completes the proof.    □
Now, the following result demonstrates the convergence analysis of Algorithm 2.
Algorithm 2 Two-step Fractional Newton–Raphson Method (TSFNRM2).
For an initial approximation x 0 sufficiently close to x ¯ , find x k + 1 by the following Caputo-type two-step scheme:
$$ y_k = x_k - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)}, $$
$$ x_{k+1} = y_k - \Gamma(\alpha+1)\,\frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(y_k)}, \qquad k = 0, 1, \ldots. $$
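Algorithm 2 differs from Algorithm 1 only in that the corrector re-evaluates the Caputo derivative at $y_k$. The sketch below makes that one-line difference explicit; it relies on the same assumptions as the previous sketches.

```python
import math

def tsfnrm2(f, df, x0, alpha, a=0.0, tol=1e-10, max_iter=100):
    """Algorithm 2 (TSFNRM2): as TSFNRM1, but the corrector uses
    D^alpha f(y_k) recomputed at the predictor point y_k."""
    gam = math.gamma(alpha + 1.0)
    x = x0
    for k in range(1, max_iter + 1):
        y = x - gam * f(x) / caputo_derivative(df, x, alpha, a)
        x_new = y - gam * f(y) / caputo_derivative(df, y, alpha, a)
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter
```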
Theorem 4. 
Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$, and suppose that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Furthermore, assume that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$. Under these circumstances, Algorithm 2 has a local convergence rate of at least $4\alpha$ if the starting estimate $x_0$ is sufficiently close to $\bar{x}$. The error equation for this algorithm is as follows:
$$ e_{k+1} = \frac{\bigl({}^{C}D_{\bar{x}}^{2\alpha} f(x_k)\bigr)^{3}\,\bigl[-\Gamma(\alpha+1)^{2}\,{}^{C}D_{\bar{x}}^{\alpha} f(x_k) + \Gamma(2\alpha+1)\bigr]^{3}}{\bigl({}^{C}D_{\bar{x}}^{\alpha} f(x_k)\bigr)^{6}\,\Gamma(\alpha+1)^{3}\,\Gamma(2\alpha+1)^{3}}\, e_k^{4\alpha} + O\bigl(e_k^{5\alpha}\bigr). $$
Proof. 
Let $\{x_k\}_{k \geq 0}$, obtained by using (5), be the sequence of iterates approximating the root $\bar{x}$ of $f(x)$. Expanding $f(x)$ and its Caputo derivative, evaluated at $x_k$, around $\bar{x}$ yields
$$ f(x) = \frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr), $$
and
$$ {}^{C}D_{\bar{x}}^{\alpha} f(x) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}(x-\bar{x})^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}(x-\bar{x})^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}(x-\bar{x})^{3\alpha} + O\bigl((x-\bar{x})^{4\alpha}\bigr). $$
Thus, a Newton-type quotient can be derived and represented in terms of the approximation error $e_k = x_k - \bar{x}$ at the k-th iteration, as
$$ \frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{\dfrac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_k^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_k^{2\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_k^{3\alpha} + O\bigl(e_k^{4\alpha}\bigr)}. $$
So, from (19), we have
$$ y_k = e_k^{\alpha} - \Gamma(\alpha+1)\,\frac{f(x_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(x_k)} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(2\alpha+1) - \Gamma(\alpha+1)^{2}\bigr]}{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\,\Gamma(\alpha+1)\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
From (22), (23) and (25), we obtain
$$ f(y_k) = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(\alpha+1)^{2} - \Gamma(2\alpha+1)\bigr]}{\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr), $$
$$ {}^{C}D_{\bar{x}}^{\alpha} f(y_k) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{\bigl({}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\bigr)^{2}\,\bigl[\Gamma(\alpha+1)^{2} - \Gamma(2\alpha+1)\bigr]}{\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)\,{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}\, e_k^{2\alpha} + O\bigl(e_k^{4\alpha}\bigr). $$
From (26) and (27), we obtain
$$ \frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(y_k)} = \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\,\bigl[\Gamma(\alpha+1)^{2} - \Gamma(2\alpha+1)\bigr]}{\Gamma(\alpha+1)\,\Gamma(2\alpha+1)\,{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}\, e_k^{2\alpha} + O\bigl(e_k^{3\alpha}\bigr). $$
So, to attain a convergence rate of 4 α , it is evident that the TSFNRM2 must incorporate Γ ( α + 1 ) as a damping factor, leading to the following resulting error equation
$$ e_{k+1} = y_k - \Gamma(\alpha+1)\,\frac{f(y_k)}{{}^{C}D_{\bar{x}}^{\alpha} f(y_k)} = \frac{\bigl({}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})\bigr)^{3}\,\bigl[\Gamma(\alpha+1)^{2} - \Gamma(2\alpha+1)\bigr]^{3}}{\bigl({}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})\bigr)^{3}\,\Gamma(\alpha+1)^{3}\,\Gamma(2\alpha+1)^{3}}\, e_k^{4\alpha} + O\bigl(e_k^{5\alpha}\bigr). $$
This completes the proof.    □
Assume that $f(x) = 0$ is a nonlinear equation with a simple root r, and that $\gamma$ is an initial estimate close enough to r. To demonstrate the procedure, consider an approximate solution $x_n$ of the equation $f(x) = 0$, so that
$$ f(x_n) \approx 0. $$
Let us consider an arbitrary function $g(x)$ and a parameter $\lambda$, referred to as the Lagrange multiplier, derived from the optimality criterion. This leads to the definition of an iterative scheme as
$$ x_{n+1} = x_n + \lambda\, g(x_n)\, f(x_n), $$
where x n is an approximate solution, and λ is a scalar that depends on the specific problem. Using the optimality condition from (29), we have
$$ \lambda = \frac{-1}{g'(x_n) f(x_n) + g(x_n) f'(x_n)}. $$
From (29) and (30), we obtain
$$ x_{n+1} = x_n - \frac{g(x_n) f(x_n)}{g'(x_n) f(x_n) + g(x_n) f'(x_n)}. $$
The main recurrence relation for iterative methods is given by (31); see [6] for more details. If we let $g(x_n) = e^{-\frac{1}{\beta} x_n}$, then $g'(x_n) = -\frac{1}{\beta} e^{-\frac{1}{\beta} x_n}$. Substituting these into (31) yields
$$ x_{n+1} = x_n - \frac{\beta f(x_n)}{\beta f'(x_n) - f(x_n)}, \qquad n = 0, 1, \ldots. $$
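For completeness, and assuming the auxiliary function $g(x_n) = e^{-\frac{1}{\beta} x_n}$ used above, the substitution into (31) can be spelled out. Since $g'(x_n) = -\frac{1}{\beta}\, g(x_n)$,
$$ g'(x_n) f(x_n) + g(x_n) f'(x_n) = e^{-\frac{1}{\beta} x_n}\Bigl(f'(x_n) - \frac{1}{\beta} f(x_n)\Bigr), $$
and the common factor $e^{-\frac{1}{\beta} x_n}$ cancels in (31), giving
$$ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - \frac{1}{\beta} f(x_n)} = x_n - \frac{\beta f(x_n)}{\beta f'(x_n) - f(x_n)}, $$
which is exactly (32).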
Now, we modify and extend the iterative scheme (32) using fractional calculus, replacing the ordinary derivative $f'(x_n)$ with the Caputo-fractional derivative ${}^{C}D_{\bar{x}}^{\alpha} f(x_n)$ of order $\alpha > 0$. This leads to new, fast-converging one-step and two-step iterative schemes for finding all real and complex roots of nonlinear equations, obtained by taking the fractional derivative of a fixed order of the nonlinear function $f(x)$.
Theorem 5. 
Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$, and suppose that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Furthermore, assume that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$. Under these circumstances, Algorithm 3 has a local convergence rate of at least $2\alpha$ if the starting estimate $x_0$ is sufficiently close to $\bar{x}$. The error equation for this algorithm is as follows:
$$ e_{n+1} = \frac{\bigl(\beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x}) - {}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)\Gamma(2\alpha+1) - \beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x})\,\Gamma(\alpha+1)^{2}}{\Gamma(2\alpha+1)\,\Gamma(\alpha+1)\,\beta\,{}^{C}D_{a}^{\alpha} f(\bar{x})}\, e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr). $$
Algorithm 3 Proposed Iterative Method 3 (PIM3).
For an initial approximation x 0 sufficiently close to x ¯ , find x k + 1 by the following Caputo-type two-step scheme:
$$ x_{n+1} = x_n - \Gamma(\alpha+1)\,\frac{\beta f(x_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n)}, \qquad n = 0, 1, \ldots. $$
Proof. 
Using the Taylor series expansion of $f(x)$ around $\bar{x}$, with the Caputo-fractional derivative ${}^{C}D_{a}^{\alpha} f(x_n)$, and expressed in terms of the error at the n-th iterate, $e_n = x_n - \bar{x}$, we have
$$ f(x) = \frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_n^{3\alpha} + O\bigl(e_n^{4\alpha}\bigr), $$
and
$$ {}^{C}D_{\bar{x}}^{\alpha} f(x) = {}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + \frac{{}^{C}D_{\bar{x}}^{4\alpha} f(\bar{x})}{\Gamma(3\alpha+1)}\,e_n^{3\alpha} + O\bigl(e_n^{4\alpha}\bigr). $$
$$ \beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n) = \beta\left[{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr)\right] - \left[\frac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \frac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr)\right]. $$
Using (35) and (37), we obtain
$$ \frac{\beta f(x_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n)} = \frac{\beta\left[\dfrac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr)\right]}{\beta\left[{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x}) + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{3\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha}\right] - \left[\dfrac{{}^{C}D_{\bar{x}}^{\alpha} f(\bar{x})}{\Gamma(\alpha+1)}\,e_n^{\alpha} + \dfrac{{}^{C}D_{\bar{x}}^{2\alpha} f(\bar{x})}{\Gamma(2\alpha+1)}\,e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr)\right]}. $$
Thus,
$$ e_{n+1} = e_n^{\alpha} - \Gamma(\alpha+1)\,\frac{\beta f(x_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n)} = \frac{\bigl(\beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x}) - {}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)\Gamma(2\alpha+1) - \beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x})\,\Gamma(\alpha+1)^{2}}{\Gamma(2\alpha+1)\,\Gamma(\alpha+1)\,\beta\,{}^{C}D_{a}^{\alpha} f(\bar{x})}\, e_n^{2\alpha} + O\bigl(e_n^{3\alpha}\bigr). $$
Hence, the Proposed Iterative Method 3, denoted by PIM3, has a 2 α order of convergence. This completes the proof.    □
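A sketch of Algorithm 3 (PIM3) in the same style as the earlier sketches; $\beta$ is the free parameter inherited from the auxiliary function, and the default $\beta = 1$, the base point $a$, and the stopping rule are our illustrative choices, not prescriptions from the paper.

```python
import math

def pim3(f, df, x0, alpha, beta=1.0, a=0.0, tol=1e-10, max_iter=100):
    """Algorithm 3 (PIM3):
        x_{n+1} = x_n - Gamma(alpha+1) * beta*f(x_n) / (beta*D^alpha f(x_n) - f(x_n)),
    using the caputo_derivative helper from the sketch after Definition 1."""
    gam = math.gamma(alpha + 1.0)
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        d = caputo_derivative(df, x, alpha, a)
        x_new = x - gam * beta * fx / (beta * d - fx)
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, n
        x = x_new
    return x, max_iter
```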
Now, we utilize the two-step technique to develop a fast-converging iterative scheme.
Theorem 6. 
Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$, and suppose that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Furthermore, assume that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$. Under these circumstances, Algorithm 4 has a local convergence rate of at least $3\alpha$ if the starting estimate $x_0$ is sufficiently close to $\bar{x}$. The error equation for this algorithm is as follows:
$$ e_{n+1} = \frac{{}^{C}D_{a}^{2\alpha} f(\bar{x})\,\Bigl[-\beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x})\,\Gamma(\alpha+1)^{2} + \bigl(\beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x}) - {}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)\Gamma(2\alpha+1)\Bigr]}{\beta\,\Gamma(\alpha+1)^{2}\,\Gamma(2\alpha+1)\,\bigl({}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)^{2}}\, e_n^{3\alpha} + O\bigl(e_n^{4\alpha}\bigr). $$
Algorithm 4 Proposed Iterative Method 4 (PIM4).
For an initial approximation x 0 sufficiently close to x ¯ , find x k + 1 by the following Caputo-type two-step scheme:
$$ y_n = x_n - \Gamma(\alpha+1)\,\frac{\beta f(x_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n)}, $$
$$ x_{n+1} = y_n - \Gamma(\alpha+1)\,\frac{\beta f(y_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(y_n)}, \qquad n = 0, 1, \ldots. $$
Theorem 7. 
Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be a continuous function on a domain D that contains the root $\bar{x}$ of $f(x)$, and suppose that, for every positive integer k and for $0 < \alpha \leq 1$, f possesses fractional derivatives of order $k\alpha$. Furthermore, assume that ${}^{C}D_{\bar{x}}^{\alpha} f(x)$ is a continuous, nonzero Caputo-fractional derivative at $\bar{x}$. Under these circumstances, Algorithm 5 has a local convergence rate of at least $4\alpha$ if the starting estimate $x_0$ is sufficiently close to $\bar{x}$. The error equation for this algorithm is as follows:
$$ e_{n+1} = \frac{\Bigl[-\beta\,{}^{C}D_{a}^{\alpha} f(\bar{x})\,\Gamma(\alpha+1)^{2} + \bigl(\beta\,{}^{C}D_{a}^{2\alpha} f(\bar{x}) - {}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)\Gamma(2\alpha+1)\Bigr]^{3}}{\Gamma(\alpha+1)^{3}\,\Gamma(2\alpha+1)^{3}\,\beta^{3}\,\bigl({}^{C}D_{a}^{\alpha} f(\bar{x})\bigr)^{3}}\, e_n^{4\alpha} + O\bigl(e_n^{5\alpha}\bigr). $$
Algorithm 5 Proposed Iterative Method 5 (PIM5).
For an initial approximation x 0 sufficiently close to x ¯ , find x k + 1 by the following Caputo-type two-step scheme:
$$ y_n = x_n - \Gamma(\alpha+1)\,\frac{\beta f(x_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(x_n) - f(x_n)}, $$
$$ x_{n+1} = y_n - \Gamma(\alpha+1)\,\frac{\beta f(y_n)}{\beta\,{}^{C}D_{a}^{\alpha} f(y_n) - f(y_n)}, \qquad n = 0, 1, \ldots. $$
One can easily prove the above theorems by the techniques of Theorems 3 and 4.
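The two-step schemes PIM4 and PIM5 can be sketched analogously to the earlier code. The version below implements Algorithm 5 (PIM5); a comment indicates the single change that turns it into Algorithm 4 (PIM4). As before, the defaults and names are illustrative assumptions.

```python
import math

def pim5(f, df, x0, alpha, beta=1.0, a=0.0, tol=1e-10, max_iter=100):
    """Algorithm 5 (PIM5): a two-step version of PIM3 whose corrector
    re-evaluates both f and D^alpha f at the predictor point y_n.
    Algorithm 4 (PIM4) is obtained by keeping dx (the derivative at x_n)
    in the corrector's denominator instead of dy."""
    gam = math.gamma(alpha + 1.0)
    x = x0
    for n in range(1, max_iter + 1):
        fx, dx = f(x), caputo_derivative(df, x, alpha, a)
        y = x - gam * beta * fx / (beta * dx - fx)
        fy, dy = f(y), caputo_derivative(df, y, alpha, a)
        x_new = y - gam * beta * fy / (beta * dy - fy)   # PIM4: use dx here
        if abs(x_new - x) < tol and abs(f(x_new)) < tol:
            return x_new, n
        x = x_new
    return x, max_iter
```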

3. Numerical Performance of Proposed Schemes

In this section, we present the numerical analysis of the proposed techniques. The system environment used to conduct the numerical tests is an Intel Core i5-8365U processor (1.60 GHz) with 16 GB of RAM. The numerical outcomes are computed in Maple 2020, and the visual analysis is carried out in Matlab 2021. To check how well the proposed fractional methods (TSFNRM1 and TSFNRM2) perform in comparison with FNM, we chose the examples given in Table 1. Table 2, Table 3, Table 4, Table 5 and Table 6 show the numerical outcomes for different values of α. We use x_n for the approximated root, Iter for the iteration count, |x_n − x_{n−1}| for the absolute error, and |f(x_n)| for the functional value. The graphical comparison in the form of residual logarithms is presented in Figure 1, Figure 3, Figure 5, Figure 7 and Figure 9, while Figure 2, Figure 4, Figure 6, Figure 8 and Figure 10 illustrate the iteration count comparison.
Furthermore, we use ϵ = 10 10 and obtain an approximated simple root, and for the computational work, we use the following convergence criteria:
$$ \text{(i)}\ |x_{n+1} - x_n| < \epsilon, \qquad \text{(ii)}\ |f(x_{n+1})| < \epsilon. $$
The function $f_1(x) = x^3 - 10$ has the exact root $2.15443469003188372175$. As we can see from Figure 1, the logarithmic residual drops faster for both proposed methods than for FNM.
In Figure 3, it can be observed that the drop of the residual logarithm is more rapid than that of the previously proposed fractional Newton method.
In Figure 5, the graphs show the superiority of our proposed methods in terms of the residual logarithm drop.
The rapid decrease in the log of residuals in Figure 7 indicates that our proposed methods are better than the fractional Newton method.
Figure 9 shows that the logarithmic residuals decrease more quickly for the proposed methods. Moreover, Figure 2, Figure 4, Figure 6, Figure 8 and Figure 10 show that the proposed TSFNRM1 and TSFNRM2 outperform FNM in terms of iteration count.
Now, we examine the numerical performance of the proposed schemes PIM4 and PIM5 by comparing these schemes with FNM in Table 7 and Table 8, respectively. The graphical representation in the form of residual logarithms is shown in Figure 11 and Figure 12. It can be seen from Figure 11 and Figure 12 that the log of residuals for our proposed fractional methods decreases more rapidly than FNM.

4. Numerical Stability of the Proposed Methods

To study the stability of the fractional Newton method, Akgül et al. [12] used a visual method based on Cayley's quadratic test. This method examines polynomiographs, which show the regions of attraction of the roots of polynomial functions for different values of the fractional order α. Examining these graphs helps us to understand the stability of the methods.
To visualize the basins of attraction, we selected a rectangular region D, which is a subset of $\mathbb{C}$. Specifically, we chose the rectangle $[-2, 2] \times [-2, 2]$, encompassing all roots of the nonlinear equation $P(z) = 0$. To enhance the visibility of the basins, we assigned a unique color to each root. If the algorithm, using the specified convergence parameters (tolerance $= 10^{-3}$ and a maximum of 20 iterations), is unable to find a solution, the corresponding area shows more chaotic behavior.
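Plots of this kind can be reproduced, in outline, by sweeping a grid over $[-2, 2] \times [-2, 2]$ and coloring each starting point by the root it converges to. The sketch below does this for FNM applied to $P_1(z) = z^2 - 1$, using the closed-form Caputo derivative of the polynomial with base point 0 (${}^{C}D_{0}^{\alpha} z^2 = \frac{\Gamma(3)}{\Gamma(3-\alpha)} z^{2-\alpha}$, principal branch); the grid resolution, the zero-safeguard, and the plotting call in the comment are our choices, not the authors' Matlab code.

```python
import math
import numpy as np

def basin_fnm_p1(alpha, n=400, box=2.0, tol=1e-3, max_iter=20):
    """Basin of attraction of FNM for P1(z) = z^2 - 1 on [-box, box]^2.
    Returns an integer grid: 1 -> converged to +1, 2 -> converged to -1,
    0 -> no convergence within max_iter iterations."""
    gam = math.gamma(alpha + 1.0)
    c2 = math.gamma(3.0) / math.gamma(3.0 - alpha)   # Gamma(3)/Gamma(3-alpha)
    xs = np.linspace(-box, box, n)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y
    for _ in range(max_iter):
        Z = np.where(np.abs(Z) < 1e-12, 1e-12, Z)    # avoid division by zero
        D = c2 * Z**(2.0 - alpha)                    # principal branch of z^(2-alpha)
        Z = Z - gam * (Z**2 - 1.0) / D
    out = np.zeros(Z.shape, dtype=int)
    out[np.abs(Z - 1.0) < tol] = 1
    out[np.abs(Z + 1.0) < tol] = 2
    return out

# e.g. plt.imshow(basin_fnm_p1(0.9), extent=[-2, 2, -2, 2]) with matplotlib
```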
Example 1. 
The first example we consider is P 1 ( z ) = z 2 1 , whose roots are 1 and −1.
As we can see in Figure 13 and Figure 14, FNM and the proposed TSFNRM2 perform less chaotically. On the other hand, Figure 15 shows that TSFNRM1 behaves chaotically for α = 0.2 and α = 0.5 but then settles down for α = 0.7 and α = 0.9 .
Example 2. 
Now, we examine $P_2(z) = z^3 - 1$, whose roots are $-0.5 - 0.866i$, $-0.5 + 0.866i$ and 1.
The visuals in Figure 16 and Figure 17 show that FNM and TSFNRM2 have the same behavior for all values of α, and Figure 18 shows that TSFNRM1 has slightly chaotic behavior.
Example 3. 
Our third example is $P_3(z) = z^5 - 1$, whose roots are $-0.81 - 0.59i$, $-0.81 + 0.59i$, $0.31 - 0.95i$, $0.31 + 0.95i$, and 1. The graphical illustration in Figure 19, Figure 20 and Figure 21 shows that TSFNRM1 and TSFNRM2 become less chaotic as the value of α increases.

5. Conclusions

This study presents two-step numerical methods for finding the solutions of nonlinear equations. We initially devised two-step methods utilizing the fractional Newton method (FNM), referred to as TSFNRM1 and TSFNRM2. We subsequently developed two additional new two-step methods, PIM4 and PIM5, employing the concept of auxiliary functions. Our proposed methods exhibited enhanced performance through a thorough numerical comparison, especially regarding iteration counts and residual logarithms. The effectiveness of these methods was additionally substantiated through a basin of attraction analysis, affirming their improved convergence characteristics. Our findings highlight the capability of these two-step schemes to enhance current methods, providing strong alternatives for resolving nonlinear equations. Subsequent research will concentrate on improving these methods by integrating fractional calculus, which is anticipated to augment their efficacy and yield more profound insights for scholars in the discipline.

Author Contributions

Conceptualization, N.Z.A., A.G.K., and M.U.A.; software, N.Z.A., A.G.K., M.U.A., and L.C.; validation, N.Z.A., A.G.K., M.U.A., L.C., and K.B.; formal analysis, N.Z.A., A.G.K., M.U.A., L.C., and K.B.; investigation, N.Z.A., A.G.K., M.U.A., L.C., and K.B.; writing—original draft preparation, N.Z.A.; writing—review and editing, N.Z.A., A.G.K., M.U.A., L.C., and K.B.; visualization, N.Z.A., A.G.K., L.C., and K.B.; supervision, A.G.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author/s.

Acknowledgments

The authors are thankful to the editor and the anonymous reviewers for their valuable comments and suggestions. Kamel Brahim is thankful to the Deanship of Graduate Studies and Scientific Research at University of Bisha for supporting this work through the Fast-Track Research Support Program.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Columbu, A.; Fuentes, R.D.; Frassu, S. Uniform-in-time boundedness in a class of local and nonlocal nonlinear attraction–repulsion chemotaxis models with logistics. Nonlinear Anal. Real World Appl. 2024, 79, 104135.
2. Jumarie, G. Modified Riemann-Liouville derivative and fractional Taylor series of non-differentiable functions further results. Comput. Math. Appl. 2006, 51, 1367–1376.
3. Mathews, J.H.; Fink, K.D. Numerical Methods Using Matlab, 4th ed.; Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 2004; ISBN 0-13-065248-2.
4. Nonlaopon, K.; Khan, A.G.; Ameen, F.; Awan, M.U.; Cesarano, C. Some new quantum numerical techniques for solving nonlinear equations. Symmetry 2022, 14, 1829.
5. Noor, M.A. Fifth-order convergent iterative method for solving nonlinear equations using quadrature formula. J. Math. Control Sci. Appl. 2018, 4, 95–104.
6. Noor, M.A. New classes of iterative methods for nonlinear equations. Appl. Math. Comput. 2007, 191, 128–131.
7. Odibat, Z.; Shawagfeh, N. Generalized Taylor's formula. Appl. Math. Comput. 2007, 186, 286–293.
8. Podlubny, I. Fractional Differential Equations; Academic Press: San Diego, CA, USA, 1999.
9. Sana, G.; Mohammed, P.O.; Shin, D.Y.; Noor, M.A.; Oudat, M.S. On iterative methods for solving nonlinear equations in quantum calculus. Fractal Fract. 2021, 5, 60.
10. Vivas-Cortez, M.; Ali, N.Z.; Khan, A.G.; Awan, M.U. Numerical Analysis of new hybrid algorithms for solving nonlinear equations. Axioms 2023, 12, 684.
11. Torres-Hernandez, A.; Brambila-Paz, F. Fractional Newton-Raphson method. Appl. Math. Sci. Int. J. 2021, 8, 1–13.
12. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2αth-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351.
13. Ali, N.; Waseem, M.; Safdar, M.; Akgül, A.; Tolasa, F.T. Iterative solutions for nonlinear equations via fractional derivatives: Adaptations and advances. Appl. Math. Sci. Eng. 2024, 32, 2333816.
14. Cordero, A.; Girona, I.; Torregrosa, J.R. A variant of Chebyshev's method with 3αth-order of convergence by using fractional derivatives. Symmetry 2019, 11, 1017.
15. Torres-Hernandez, A.; Brambila, F.; De-la-Vega, E. Fractional Newton-Raphson method and some variants for the solution of nonlinear systems. Appl. Math. Sci. Int. J. 2020, 7, 13–27.
Figure 1. Comparison of 5 standard problems according to residual logarithm per iteration of (a) FNM, (b) TSFNRM1, (c) TSFNRM2.
Figure 2. Iterations Comparison of $f_1(x) = x^3 - 10$ with respect to FNM, TSFNRM1, and TSFNRM2.
Figure 3. Comparison of 5 standard problems according to residual logarithm per iteration of (a) FNM, (b) TSFNRM1, (c) TSFNRM2.
Figure 4. Iterations Comparison of $f_2(x) = \sin^2 x - x^2 + 1$ with respect to FNM, TSFNRM1, and TSFNRM2.
Figure 5. Comparison of 5 problems according to residual logarithm per iteration of (a) FNM, (b) TSFNRM1, (c) TSFNRM2.
Figure 6. Iterations Comparison of $f_3(x) = \sqrt{x^2 + 2x + 5} - 2\sin x - x^2 + 3$ with respect to FNM, TSFNRM1, and TSFNRM2.
Figure 7. Comparison of 5 standard problems according to residual logarithm per iteration of (a) FNM, (b) TSFNRM1, (c) TSFNRM2.
Figure 8. Iterations Comparison of $f_4(x) = x^3 + 3x^2 + x - 2$ with respect to FNM, TSFNRM1, and TSFNRM2.
Figure 9. Comparison of 5 standard problems according to residual logarithm per iteration of (a) FNM, (b) TSFNRM1, (c) TSFNRM2.
Figure 10. Iterations Comparison of $f_5(x) = e^x + \cos x$ with respect to FNM, TSFNRM1, and TSFNRM2.
Figure 11. Comparison of $f_3(x)$ according to residual logarithm per iteration of (a) FNM, (b) PIM4, (c) PIM5.
Figure 12. Comparison of $f_5(x)$ according to residual logarithm per iteration of (a) FNM, (b) PIM4, (c) PIM5.
Figure 13. Basin of attraction for $P_1$ by using FNM at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 14. Basin of attraction for $P_1$ by using TSFNRM2 at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 15. Basin of attraction for $P_1$ by using TSFNRM1 at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 16. Basin of attraction for $P_2$ by using FNM at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 17. Basin of attraction for $P_2$ by using TSFNRM2 at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 18. Basin of attraction for $P_2$ by using TSFNRM1 at (a) $\alpha = 0.2$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 19. Basin of attraction for $P_3$ by using FNM at (a) $\alpha = 0.3$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 20. Basin of attraction for $P_3$ by using TSFNRM1 at (a) $\alpha = 0.3$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Figure 21. Basin of attraction for $P_3$ by using TSFNRM2 at (a) $\alpha = 0.3$, (b) $\alpha = 0.5$, (c) $\alpha = 0.7$, (d) $\alpha = 0.9$.
Table 1. Test functions and initial guesses.
Test Function | Initial Guess
$f_1(x) = x^3 - 10$ | $x_0 = 2.5$
$f_2(x) = \sin^2 x - x^2 + 1$ | $x_0 = 3.0$
$f_3(x) = \sqrt{x^2 + 2x + 5} - 2\sin x - x^2 + 3$ | $x_0 = 1.4$
$f_4(x) = x^3 + 3x^2 + x - 2$ | $x_0 = 2.6$
$f_5(x) = e^x + \cos x$ | $x_0 = -1.0$
Table 2. Numerical outcomes of FNM, TSFNRM1, and TSFNRM2 for $f_1(x) = x^3 - 10$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM1: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM2: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | 2.1544346899899885277 | 13, 2.94 × 10^-10, 5.83 × 10^-10 | 7, 1.06 × 10^-10, 4.18 × 10^-11 | 7, 7.45 × 10^-11, 2.95 × 10^-11
0.3 | 2.1544346900182979478 | 11, 1.59 × 10^-10, 1.89 × 10^-10 | 6, 5.26 × 10^-11, 6.48 × 10^-12 | 6, 1.55 × 10^-11, 1.91 × 10^-12
0.4 | 2.1544346900309005742 | 9, 2.53 × 10^-11, 1.37 × 10^-11 | 5, 2.43 × 10^-11, 5.55 × 10^-13 | 5, 1.07 × 10^-12, 2.45 × 10^-14
0.5 | 2.1544346900320136176 | 7, 3.97 × 10^-11, 1.81 × 10^-12 | 4, 3.68 × 10^-11, 5.54 × 10^-15 | 4, 1.34 × 10^-13, 2.01 × 10^-17
0.6 | 2.1544346900321080962 | 9, 1.09 × 10^-11, 3.12 × 10^-12 | 5, 4.57 × 10^-12, 2.59 × 10^-14 | 5, 3.29 × 10^-13, 1.88 × 10^-15
0.7 | 2.1544346900351804811 | 9, 1.01 × 10^-10, 4.59 × 10^-11 | 5, 4.32 × 10^-11, 6.02 × 10^-13 | 5, 5.11 × 10^-12, 7.11 × 10^-14
0.8 | 2.1544346900344414708 | 9, 7.78 × 10^-11, 3.56 × 10^-11 | 5, 3.71 × 10^-11, 5.24 × 10^-13 | 5, 4.58 × 10^-12, 6.46 × 10^-14
0.9 | 2.1544346900372260938 | 8, 2.46 × 10^-10, 7.44 × 10^-11 | 5, 3.08 × 10^-12, 1.95 × 10^-14 | 5, 2.69 × 10^-13, 1.70 × 10^-15
Table 3. Numerical outcomes of FNM, TSFNRM1, and TSFNRM2 for $f_2(x) = \sin^2 x - x^2 + 1$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM1: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM2: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | 1.4044916484635344066 | 124, 5.39 × 10^-10, 6.16 × 10^-10 | 50, 7.33 × 10^-10, 4.79 × 10^-9 | 55, 8.98 × 10^-10, 5.87 × 10^-9
0.3 | 1.4044916484506736654 | 46, 6.04 × 10^-10, 5.84 × 10^-10 | 21, 1.47 × 10^-10, 2.51 × 10^-10 | 24, 1.39 × 10^-10, 2.38 × 10^-10
0.4 | 1.4044916484368132925 | 28, 6.92 × 10^-10, 5.49 × 10^-10 | 15, 1.03 × 10^-10, 7.28 × 10^-11 | 15, 1.72 × 10^-10, 1.22 × 10^-10
0.5 | 1.4044916483553850204 | 20, 5.54 × 10^-10, 3.48 × 10^-10 | 12, 3.75 × 10^-11, 1.20 × 10^-11 | 11, 1.24 × 10^-10, 3.97 × 10^-11
0.6 | 1.4044916481675876948 | 15, 2.53 × 10^-10, 1.19 × 10^-10 | 10, 1.70 × 10^-11, 2.43 × 10^-12 | 8, 1.94 × 10^-10, 2.76 × 10^-11
0.7 | 1.4044916482390012247 | 13, 1.81 × 10^-10, 5.87 × 10^-11 | 8, 6.23 × 10^-11, 3.58 × 10^-12 | 7, 1.54 × 10^-10, 8.83 × 10^-12
0.8 | 1.4044916482306998902 | 11, 1.95 × 10^-10, 3.81 × 10^-11 | 7, 2.19 × 10^-11, 4.04 × 10^-13 | 6, 1.78 × 10^-10, 3.27 × 10^-12
0.9 | 1.4044916482243520430 | 9, 2.57 × 10^-10, 2.24 × 10^-11 | 6, 8.95 × 10^-12, 2.93 × 10^-14 | 5, 2.48 × 10^-10, 8.11 × 10^-13
Table 4. Numerical outcomes of FNM, TSFNRM1, and TSFNRM2 for $f_3(x) = \sqrt{x^2 + 2x + 5} - 2\sin x - x^2 + 3$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM1: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM2: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | 2.3319676553949870916 | 33, 4.25 × 10^-10, 1.19 × 10^-9 | 18, 1.14 × 10^-10, 1.11 × 10^-10 | 18, 1.87 × 10^-10, 1.82 × 10^-10
0.3 | 2.3319676554834483172 | 32, 3.69 × 10^-10, 9.72 × 10^-10 | 17, 1.91 × 10^-10, 1.72 × 10^-10 | 18, 7.90 × 10^-11, 7.11 × 10^-11
0.4 | 2.3319676557041555658 | 31, 1.83 × 10^-10, 4.36 × 10^-10 | 16, 1.88 × 10^-10, 1.48 × 10^-10 | 16, 2.74 × 10^-10, 2.16 × 10^-10
0.5 | 2.3319676554721333513 | 27, 4.85 × 10^-10, 9.99 × 10^-10 | 15, 1.05 × 10^-10, 6.81 × 10^-11 | 15, 1.49 × 10^-10, 9.66 × 10^-11
0.6 | 2.3319676557628349704 | 25, 1.74 × 10^-10, 2.94 × 10^-10 | 13, 1.76 × 10^-10, 8.63 × 10^-11 | 13, 2.46 × 10^-10, 1.20 × 10^-10
0.7 | 2.3319676557660056963 | 21, 2.24 × 10^-10, 2.86 × 10^-10 | 11, 2.18 × 10^-10, 7.11 × 10^-11 | 11, 3.01 × 10^-10, 9.86 × 10^-11
0.8 | 2.3319676558157857301 | 17, 1.95 × 10^-10, 1.65 × 10^-10 | 9, 1.76 × 10^-10, 3.07 × 10^-11 | 9, 2.46 × 10^-10, 4.29 × 10^-11
0.9 | 2.3319676558738401068 | 13, 5.87 × 10^-11, 2.46 × 10^-11 | 7, 4.53 × 10^-11, 2.43 × 10^-12 | 7, 6.74 × 10^-11, 3.61 × 10^-12
Table 5. Numerical outcomes of FNM, TSFNRM1, and TSFNRM2 for $f_4(x) = x^3 + 3x^2 + x - 2$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM1: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM2: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.3 | 0.61803398874997108602 | 134, 1.69 × 10^-13, 4.46 × 10^-13 | 50, 3.13 × 10^-11, 3.67 × 10^-10 | 48, 6.41 × 10^-11, 7.51 × 10^-10
0.4 | 0.61803398874998266479 | 48, 2.34 × 10^-13, 5.14 × 10^-13 | 22, 4.45 × 10^-11, 1.48 × 10^-10 | 22, 4.41 × 10^-11, 1.46 × 10^-10
0.5 | 0.61803398874994529004 | 33, 1.67 × 10^-13, 2.95 × 10^-13 | 14, 5.02 × 10^-11, 6.76 × 10^-11 | 15, 3.19 × 10^-11, 4.30 × 10^-11
0.6 | 0.61803398874990430751 | 25, 4.11 × 10^-14, 5.54 × 10^-14 | 11, 2.06 × 10^-11, 1.18 × 10^-11 | 11, 5.75 × 10^-11, 3.31 × 10^-11
0.7 | 0.61803398874990053996 | 19, 3.50 × 10^-14, 3.33 × 10^-14 | 9, 2.38 × 10^-11, 5.45 × 10^-12 | 9, 2.04 × 10^-11, 4.68 × 10^-12
0.8 | 0.61803398874997858864 | 13, 8.34 × 10^-13, 4.90 × 10^-13 | 9, 9.28 × 10^-14, 6.86 × 10^-15 | 7, 3.31 × 10^-11, 2.45 × 10^-12
0.9 | 0.61803398874994377549 | 10, 1.07 × 10^-12, 2.86 × 10^-13 | 8, 2.09 × 10^-14, 2.82 × 10^-16 | 7, 5.49 × 10^-14, 7.41 × 10^-16
Table 6. Numerical outcomes of FNM, TSFNRM1, and TSFNRM2 for $f_5(x) = e^x + \cos x$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM1: Iter, |x_n − x_{n−1}|, |f(x_n)| | TSFNRM2: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | -1.7461395302088608388 | 25, 2.43 × 10^-10, 2.31 × 10^-10 | 12, 3.95 × 10^-10, 1.17 × 10^-10 | 13, 3.52 × 10^-10, 1.04 × 10^-10
0.3 | -1.7461395300939919415 | 25, 3.87 × 10^-10, 3.64 × 10^-10 | 13, 1.91 × 10^-10, 5.55 × 10^-11 | 13, 5.60 × 10^-10, 1.63 × 10^-10
0.4 | -1.7461395302147649308 | 25, 2.51 × 10^-10, 2.24 × 10^-10 | 13, 1.53 × 10^-10, 4.13 × 10^-11 | 13, 3.61 × 10^-10, 9.74 × 10^-11
0.5 | -1.7461395300989291908 | 23, 4.45 × 10^-10, 3.58 × 10^-10 | 12, 3.05 × 10^-10, 7.12 × 10^-11 | 12, 6.28 × 10^-10, 1.47 × 10^-10
0.6 | -1.7461395301439399328 | 21, 4.47 × 10^-10, 3.06 × 10^-10 | 11, 3.25 × 10^-10, 6.03 × 10^-11 | 11, 6.13 × 10^-10, 1.14 × 10^-10
0.7 | -1.7461395303041399175 | 19, 2.24 × 10^-10, 1.20 × 10^-10 | 10, 1.67 × 10^-10, 2.17 × 10^-11 | 10, 2.95 × 10^-10, 3.82 × 10^-11
0.8 | -1.7461395302132728539 | 15, 6.08 × 10^-10, 2.26 × 10^-10 | 8, 4.62 × 10^-10, 3.35 × 10^-11 | 8, 7.56 × 10^-10, 5.47 × 10^-11
0.9 | -1.7461395303881323965 | 12, 1.22 × 10^-10, 2.31 × 10^-11 | 6, 7.14 × 10^-10, 1.67 × 10^-11 | 7, 1.95 × 10^-11, 4.55 × 10^-13
Table 7. Numerical outcomes of FNM, PIM4, and PIM5 for $f_3(x) = \sqrt{x^2 + 2x + 5} - 2\sin x - x^2 + 3$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | PIM4: Iter, |x_n − x_{n−1}|, |f(x_n)| | PIM5: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | 2.3319676553949870916 | 33, 4.25 × 10^-10, 1.19 × 10^-9 | 19, 1.36 × 10^-10, 1.32 × 10^-10 | 19, 1.64 × 10^-10, 1.59 × 10^-10
0.3 | 2.3319676554834483172 | 32, 3.69 × 10^-10, 9.72 × 10^-10 | 18, 2.13 × 10^-10, 1.92 × 10^-10 | 18, 2.52 × 10^-10, 2.26 × 10^-10
0.4 | 2.3319676557041555658 | 31, 1.83 × 10^-10, 4.36 × 10^-10 | 17, 1.99 × 10^-10, 1.57 × 10^-10 | 17, 2.32 × 10^-10, 1.87 × 10^-10
0.5 | 2.3319676554721333513 | 27, 4.85 × 10^-10, 9.99 × 10^-10 | 16, 1.08 × 10^-10, 6.99 × 10^-11 | 16, 1.24 × 10^-10, 8.03 × 10^-11
0.6 | 2.3319676557628349704 | 25, 1.74 × 10^-10, 2.94 × 10^-10 | 14, 1.75 × 10^-10, 8.59 × 10^-11 | 14, 1.99 × 10^-10, 9.79 × 10^-11
0.7 | 2.3319676557660056963 | 21, 2.24 × 10^-10, 2.86 × 10^-10 | 12, 2.12 × 10^-10, 6.92 × 10^-11 | 12, 2.39 × 10^-10, 7.82 × 10^-11
0.8 | 2.3319676558157857301 | 17, 1.95 × 10^-10, 1.65 × 10^-10 | 10, 1.72 × 10^-10, 2.99 × 10^-11 | 10, 1.93 × 10^-10, 3.37 × 10^-11
0.9 | 2.3319676558738401068 | 13, 5.87 × 10^-11, 2.46 × 10^-11 | 8, 5.13 × 10^-11, 2.75 × 10^-12 | 8, 5.85 × 10^-11, 3.14 × 10^-12
Table 8. Numerical outcomes of FNM, PIM4, and PIM5 for $f_5(x) = e^x + \cos x$.
α | x_n | FNM: Iter, |x_n − x_{n−1}|, |f(x_n)| | PIM4: Iter, |x_n − x_{n−1}|, |f(x_n)| | PIM5: Iter, |x_n − x_{n−1}|, |f(x_n)|
0.2 | -1.7461395302088608388 | 25, 2.43 × 10^-10, 2.31 × 10^-10 | 14, 4.69 × 10^-10, 1.38 × 10^-10 | 14, 6.84 × 10^-10, 2.02 × 10^-10
0.3 | -1.7461395300939919415 | 25, 3.87 × 10^-10, 3.64 × 10^-10 | 14, 4.92 × 10^-10, 1.43 × 10^-10 | 14, 6.70 × 10^-10, 1.95 × 10^-10
0.4 | -1.7461395302147649308 | 25, 2.51 × 10^-10, 2.24 × 10^-10 | 14, 2.72 × 10^-10, 7.36 × 10^-11 | 14, 3.55 × 10^-10, 9.59 × 10^-11
0.5 | -1.7461395300989291908 | 23, 4.45 × 10^-10, 3.58 × 10^-10 | 13, 4.38 × 10^-10, 1.02 × 10^-10 | 13, 5.52 × 10^-10, 1.29 × 10^-10
0.6 | -1.7461395301439399328 | 21, 4.47 × 10^-10, 3.06 × 10^-10 | 12, 4.05 × 10^-10, 7.52 × 10^-11 | 12, 4.98 × 10^-10, 9.23 × 10^-11
0.7 | -1.7461395303041399175 | 19, 2.24 × 10^-10, 1.20 × 10^-10 | 11, 1.87 × 10^-10, 2.43 × 10^-11 | 11, 2.24 × 10^-10, 2.90 × 10^-11
0.8 | -1.7461395302132728539 | 15, 6.08 × 10^-10, 2.26 × 10^-10 | 9, 4.78 × 10^-10, 3.46 × 10^-11 | 9, 9.52 × 10^-10, 5.00 × 10^-11
0.9 | -1.7461395303881323965 | 12, 1.22 × 10^-10, 2.31 × 10^-11 | 7, 7.31 × 10^-10, 1.70 × 10^-11 | 7, 7.38 × 10^-10, 1.82 × 10^-11