Article

A New High-Order Fractional Parallel Iterative Scheme for Solving Nonlinear Equations

by Mudassir Shams 1,2 and Bruno Carpentieri 1,*
1
Faculty of Engineering, Free University of Bozen-Bolzano (BZ), 39100 Bozen-Bolzano, Italy
2
Department of Mathematics and Statistics, Riphah International University I-14, Islamabad 44000, Pakistan
*
Author to whom correspondence should be addressed.
Symmetry 2024, 16(11), 1452; https://doi.org/10.3390/sym16111452
Submission received: 9 September 2024 / Revised: 15 October 2024 / Accepted: 24 October 2024 / Published: 1 November 2024
(This article belongs to the Special Issue Symmetry in Nonlinear Dynamics and Chaos II)

Abstract:
Solving fractional-order nonlinear equations is crucial in engineering, where precision and accuracy are essential. This study introduces a novel fractional parallel technique for solving nonlinear equations. To enhance convergence, we incorporate a simple root-finding method of order 3γ + 1 as a correction term in the parallel scheme. Theoretical analysis shows that the parallel scheme achieves a convergence order of 6γ + 3. Using a dynamical system approach, we identify optimal parameter values, and the symmetry in the dynamical planes for different fractional parameters demonstrates the method’s stability and consistency in handling nonlinear problems. These parameter values are applied to the parallel scheme, yielding highly consistent results. Several engineering problems are examined to assess the method’s efficiency, stability, and consistency compared to existing methods.

1. Introduction

In 1695, Leibniz and L’Hôpital introduced the concept of the semi-derivative [1,2,3], which laid the foundation for both classical and fractional calculus. Fractional calculus [4], an extension of classical calculus to non-integer orders, has gained attention for its effectiveness in modeling complex systems with non-local and memory-dependent behaviors. This makes it particularly valuable in engineering, where fractional-order nonlinear equations are used to simulate systems with memory effects and anomalous diffusion.
One form of the nonlinear problems we are particularly interested in is represented by the following equation:
$${}^{c}\!D_{\gamma_1}^{\gamma}\,y(x) + f\big(y(x)\big) = g(x), \qquad y(x_0) = y_0,\quad y'(x_0) = y_1,\quad \theta_0^{[*]} < \gamma \le \theta_1^{[*]}.$$
In Equation (1), $x_0$ is the initial point, and the fractional parameter $\gamma$ lies within the interval $\big(\theta_0^{[*]}, \theta_1^{[*]}\big]$. While obtaining exact solutions to these equations would be computationally attractive due to their precision and lack of approximation, the inherent complexities of fractional-order systems often make such solutions unattainable. In these cases, analytical methods, such as series expansions and integral transformations, play a critical role. These methods provide closed-form or semi-closed-form solutions, which not only offer valuable insights into the system’s behavior, but also serve as reference points for validating and guiding numerical solutions. By doing so, they help ensure that numerical approaches remain accurate and reliable when exact solutions are not feasible.
The Caputo derivative retains the benefits of classical calculus while enabling the development of more detailed models that can accurately capture the complex dynamics observed in various real-world applications, including finance [5], engineering [6,7], physics [8], and many more. Due to their inherent complexity, nonlinear equations remain one of the most challenging problems in science and engineering. Unlike the Caputo derivative, most other fractional derivatives do not satisfy the condition $D^{\gamma}(1) = 0$ when $\gamma$ is not a natural number. Because of this, we will explore the fundamental concepts of fractional calculus and discuss the fractional iterative approach for solving nonlinear equations using Caputo-type derivatives.
Definition 1.
The Gamma function, also known as the generalized factorial function [9], is defined as:
$$\Gamma(x) = \int_{0}^{+\infty} u^{\,x-1} e^{-u}\,du,$$
where $x > 0$. The Gamma function satisfies the properties $\Gamma(1) = 1$ and $\Gamma(n+1) = n!$ for any positive integer $n$.
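These properties are easy to check numerically; a minimal sketch using Python's standard-library `math.gamma` (the use of Python here is illustrative, not part of the original text):

```python
import math

# Gamma(n + 1) = n! for positive integers n
for n in range(1, 8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Gamma(1) = 1, and the classical value Gamma(1/2) = sqrt(pi)
assert math.isclose(math.gamma(1.0), 1.0)
assert math.isclose(math.gamma(0.5), math.sqrt(math.pi))
print("Gamma-function identities verified")
```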
Definition 2.
Let $f : \mathbb{R} \to \mathbb{R}$ with $f \in C^{m}[\gamma_1, x]$, where $-\infty < \gamma_1 < x < +\infty$, $\gamma_1 \ge 0$, and $m = \lfloor \gamma \rfloor + 1$.
The Caputo fractional derivative [10] of order $\gamma$ with base point $\gamma_1$ is defined as:
$${}^{c}\!D_{\gamma_1}^{\gamma} f(x) = \begin{cases} \dfrac{1}{\Gamma(m-\gamma)}\displaystyle\int_{\gamma_1}^{x} \dfrac{f^{(m)}(t)}{(x-t)^{\gamma-m+1}}\,dt, & \gamma \notin \mathbb{N}, \\[2mm] \dfrac{d^{\,m-1}}{dt^{\,m-1}}\,f(x), & \gamma = m-1 \in \mathbb{N}_0, \end{cases}$$
where $\Gamma(x)$ is the Gamma function for $x > 0$.
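For monomials with base point $\gamma_1 = 0$, the Caputo derivative reduces to the well-known power rule ${}^{c}\!D^{\gamma} x^{p} = \frac{\Gamma(p+1)}{\Gamma(p-\gamma+1)}\,x^{p-\gamma}$ for $p > 0$. A minimal sketch (helper name is illustrative):

```python
import math

def caputo_monomial(p: float, gamma: float, x: float) -> float:
    """Caputo derivative of x**p (p > 0, base point 0) of order gamma,
    via the power rule D^gamma x^p = Gamma(p+1)/Gamma(p-gamma+1) * x**(p-gamma)."""
    return math.gamma(p + 1) / math.gamma(p - gamma + 1) * x ** (p - gamma)

# For gamma = 1 the rule recovers the classical derivative p * x**(p-1):
print(caputo_monomial(3, 1.0, 2.0))  # 3 * 2**2 = 12.0
```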
Theorem 1.
Suppose that ${}^{c}\!D_{\gamma_1}^{m\gamma} f(x) \in C(\gamma_1, \gamma_2]$ for $m = 1, \ldots, n+1$, where $\gamma \in (0, 1]$. Then, the Generalized Taylor Formula [11] is given by:
$$f(x) = \sum_{i=0}^{n} \frac{{}^{c}\!D_{\gamma_1}^{i\gamma} f(\gamma_1)}{\Gamma(i\gamma+1)}\,(x - \gamma_1)^{i\gamma} + \frac{{}^{c}\!D_{\gamma_1}^{(n+1)\gamma} f(\xi)}{\Gamma\big((n+1)\gamma+1\big)}\,(x - \gamma_1)^{(n+1)\gamma},$$
where
$$\gamma_1 \le \xi \le x, \qquad x \in (\gamma_1, \gamma_2],$$
and
$${}^{c}\!D_{\gamma_1}^{n\gamma} = {}^{c}\!D_{\gamma_1}^{\gamma} \cdot {}^{c}\!D_{\gamma_1}^{\gamma} \cdots {}^{c}\!D_{\gamma_1}^{\gamma} \quad (n \ \text{times}).$$
Consider the Caputo-type Taylor expansion of $f(x)$ around $x = \xi$:
$$f(x) = \frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}\,(x-\xi)^{\gamma} + \frac{{}^{c}\!D_{\xi}^{2\gamma} f(\xi)}{\Gamma(2\gamma+1)}\,(x-\xi)^{2\gamma} + O\big((x-\xi)^{3\gamma}\big).$$
By factoring out $\frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}$, we obtain:
$$f(x) = \frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}\left[(x-\xi)^{\gamma} + \check{c}_2\,(x-\xi)^{2\gamma} + O\big((x-\xi)^{3\gamma}\big)\right],$$
where
$$\check{c}_m = \frac{\Gamma(\gamma+1)}{\Gamma(m\gamma+1)}\,\frac{{}^{c}\!D_{\xi}^{m\gamma} f(\xi)}{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}, \qquad m = 2, 3, \ldots$$
The corresponding Caputo-type derivative of $f(x)$ around $\xi$ is:
$${}^{c}\!D_{\xi}^{\gamma} f(x) = \frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}\left[\Gamma(\gamma+1) + \frac{\Gamma(2\gamma+1)}{\Gamma(\gamma+1)}\,\check{c}_2\,(x-\xi)^{\gamma} + O\big((x-\xi)^{2\gamma}\big)\right].$$
These expansions are used in the convergence analysis of the proposed method.
The rest of the paper is structured as follows: Section 2 discusses the development, convergence, and complex dynamical analysis of fractional-order schemes for solving Equation (1). Section 3 develops and analyzes the fractional parallel scheme, and Section 4 evaluates the effectiveness and stability of the proposed method through numerical examples and comparisons with other techniques. Finally, the conclusions are presented in Section 5.

2. Construction and Dynamic Analysis of Fractional Iterative Scheme

The Newton method [12,13] refines an initial guess for solving a nonlinear problem by using a sequence of linear approximations. This technique, known for its rapid convergence—often quadratic—has been widely used since its development in the late 17th century to solve nonlinear equations. The fractional Newton method is an extension of the traditional Newton method, designed for nonlinear equations involving fractional derivatives. It incorporates fractional calculus operators, such as fractional integrals and derivatives, into the iterative process. This approach has demonstrated strong convergence properties and has been effective in solving complex nonlinear equations where other methods may fail. Several studies, including those by Torres-Hernandez et al. [14], Akgül et al. [15], Gajori et al. [16], Kumar et al. [17], and Candelario et al. [18], describe a fractional version of the Newton technique with various fractional derivatives.
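As a point of reference for these fractional variants, the classical Newton iteration $x_{l+1} = x_l - f(x_l)/f'(x_l)$ can be sketched as follows (the test function and starting guess are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x <- x - f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Quadratic convergence toward sqrt(2) from a nearby starting value
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.4142135623...
```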
Based on both stability and efficiency, the method presented in [19] is preferable to other well-known root-finding methods for nonlinear equations, such as those in [20,21,22,23] and the references cited therein. Nonlinear equations, particularly those involving fractional approaches, have gained significant attention due to their wide applications in addressing complex engineering problems. The numerical algorithm discussed exhibits local convergence in approximating simple solutions to fractional-order nonlinear problems. In addition to the basic iterative strategies for solving (1) with integer or fractional convergence orders, there are parallel schemes with global convergence that can find all solutions to a nonlinear equation simultaneously. These schemes are more stable, consistent, and are particularly useful in parallel computing (see, e.g., [24,25,26] and the references cited therein). However, a comprehensive literature review reveals that little work has focused on fractional-order numerical techniques for solving fractional nonlinear problems.
The primary goal of the current research is to enhance the convergence rate, stability, and accuracy of these parallel schemes. Challenges remain, particularly in ensuring efficient computation in higher dimensions. A comprehensive review of these trends, along with their limitations—such as handling fractional-order nonlinearities—provides context for the new numerical scheme. The proposed fractional family of parallel numerical schemes addresses several limitations of traditional iterative methods in the following ways:
  • Parallel computation: Unlike simple root-finding methods, parallel approaches compute all solutions simultaneously.
  • Global convergence: Parallel schemes exhibit global convergence, as all roots contribute to solving nonlinear problems.
  • Robustness: Parallel schemes outperform other methods in terms of robustness and sensitivity to initial values for exact solutions.
  • Efficiency in parallel computing: Fractional-order parallel methods are highly suitable for multicore and distributed systems, as updates for each solution can be computed independently. In contrast to simpler schemes, which often require shared access to derivatives and global convergence checks, parallel updates can be handled with minimal communication overhead.
  • Accuracy and consistency: Compared to simple iterative methods for solving (1), fractional parallel approaches—relying on differences between approximated roots—tend to be more accurate and consistent, requiring fewer modifications to maintain global convergence.
  • Memory efficiency: Fractional-order parallel methods are more memory-efficient in practical implementations compared to basic schemes.
On the other hand, when applied to nonlinear fractional-order differential equations with movable conditions, the method may exhibit slower convergence rates and higher computational demands. These challenges can introduce complex dynamics that might affect the overall effectiveness of the approach. Therefore, while the method is generally reliable, a careful analysis of the problem’s specific nature and conditions is essential to optimize its performance. In most cases, the method performs well, but in certain instances, additional strategies may be necessary to improve convergence and reduce computational costs.
Building on the method described in [19], we consider the following fractional iterative scheme:
$$v_l = x_l - \frac{f(x_l)}{f'(x_l)}\left[1 + \frac{f(\varsigma_l)}{f(x_l)} + 2\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{2} + \beta^{[*]}\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{4}\right],$$
where $\varsigma_l = x_l - \frac{f(x_l)}{f'(x_l)}$. This method, abbreviated as M[γ], has a convergence order of 4 and satisfies the following error equation:
$$e_{l+1} = \left(\beta^{[*]}\,\check{c}_2^{3} + 5\,\check{c}_2^{3} - \check{c}_2\,\check{c}_3\right) e_l^{4} + O\big(e_l^{5}\big),$$
where $e_{l+1} = v_l - \xi$, $e_l = x_l - \xi$, and $\check{c}_m = \frac{1}{m!}\,\frac{f^{(m)}(\xi)}{f'(\xi)}$, $m = 2, 3, \ldots$ The fractional version of this method is given by:
$$v_l = x_l - \left(\Gamma(\gamma+1)\,\frac{f(x_l)}{{}^{c}\!D_{\gamma_1}^{\gamma} f(x_l)}\left[1 + \frac{f(\varsigma_l)}{f(x_l)} + 2\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{2} + \beta^{[*]}\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{4}\right]\right)^{1/\gamma},$$
where $\varsigma_l = x_l - \left(\Gamma(\gamma+1)\,\frac{f(x_l)}{{}^{c}\!D_{\gamma_1}^{\gamma} f(x_l)}\right)^{1/\gamma}$. We abbreviate this method as MM[γ].
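The fractional sub-step $\varsigma_l$ can be sketched for polynomials, whose Caputo derivative (base point 0) follows the term-by-term power rule; for $\gamma = 1$ the step reduces to the classical Newton step, which the snippet checks. The helper names and the test polynomial are illustrative assumptions, not from the paper:

```python
import math

def caputo_poly(coeffs, gamma, x):
    """Caputo derivative (base point 0) of sum(c_p * x**p), term by term.
    Constant terms (p = 0) have zero Caputo derivative for gamma > 0."""
    return sum(c * math.gamma(p + 1) / math.gamma(p - gamma + 1) * x ** (p - gamma)
               for p, c in coeffs.items() if p > 0)

def frac_newton_step(coeffs, gamma, x):
    """One fractional Newton step: x - (Gamma(gamma+1) * f(x) / D^gamma f(x)) ** (1/gamma)."""
    f = sum(c * x ** p for p, c in coeffs.items())
    d = caputo_poly(coeffs, gamma, x)
    return x - (math.gamma(gamma + 1) * f / d) ** (1.0 / gamma)

poly = {3: 1.0, 0: -8.0}          # f(x) = x**3 - 8, simple root at x = 2
x1 = frac_newton_step(poly, 1.0, 3.0)
print(x1)  # for gamma = 1 this equals the classical Newton step 3 - f(3)/f'(3)
```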

2.1. Convergence Analysis

For the iterative scheme given by Equation (14), we establish the following theorem to determine its order of convergence.
Theorem 2.
Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous function possessing Caputo derivatives ${}^{c}\!D_{\gamma_1}^{m\gamma} f(x)$ for $\gamma_1 \ge 0$ and $\gamma \in (0, 1]$, on an interval containing the exact root $\xi$ of $f(x)$. Furthermore, given a sufficiently close starting value $x_0$, the convergence order of the Caputo-type fractional iterative scheme
$$v_l = x_l - \left(\Gamma(\gamma+1)\,\frac{f(x_l)}{{}^{c}\!D_{\gamma_1}^{\gamma} f(x_l)}\left[1 + \frac{f(\varsigma_l)}{f(x_l)} + 2\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{2} + \beta^{[*]}\left(\frac{f(\varsigma_l)}{f(x_l)}\right)^{4}\right]\right)^{1/\gamma}$$
is at least $3\gamma + 1$, with the error equation given by:
$$e_{l+1} = \left(£_1^{[*]} + £_2^{[*]} + £_3^{[*]} + £_4^{[*]} + £_5^{[*]} + £_6^{[*]}\right) e_l^{3\gamma+1} + O\big(e_l^{4\gamma+1}\big),$$
where the constants $£_i^{[*]}$, which depend only on $\gamma$, $\check{c}_2$, and $\check{c}_3$, are listed at the end of the proof below, and
$$\check{c}_m = \frac{\Gamma(\gamma+1)}{\Gamma(m\gamma+1)}\,\frac{{}^{c}\!D_{\xi}^{m\gamma} f(\xi)}{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}, \qquad m = 2, 3, \ldots$$
Proof. 
Let $\xi$ be a root of $f$ and assume $x_l = \xi + e_l$. Using a Taylor series expansion of $f(x_l)$ and ${}^{c}\!D_{\gamma_1}^{\gamma} f(x_l)$ around $x = \xi$, and considering that $f(\xi) = 0$, we obtain:
$$f(x_l) = \frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}\left[e_l^{\gamma} + \check{c}_2\,e_l^{2\gamma} + \check{c}_3\,e_l^{3\gamma} + O\big(e_l^{4\gamma}\big)\right],$$
$${}^{c}\!D_{\gamma_1}^{\gamma} f(x_l) = \frac{{}^{c}\!D_{\xi}^{\gamma} f(\xi)}{\Gamma(\gamma+1)}\left[\Gamma(\gamma+1) + \frac{\Gamma(2\gamma+1)}{\Gamma(\gamma+1)}\,\check{c}_2\,e_l^{\gamma} + \frac{\Gamma(3\gamma+1)}{\Gamma(2\gamma+1)}\,\check{c}_3\,e_l^{2\gamma} + O\big(e_l^{3\gamma}\big)\right],$$
1 γ 1 γ c ˇ x l = 1 γ + 1 2 γ + 1 c 2 e l γ γ + 1 3 + Θ 1 [ ] + Θ 2 [ ] + Θ 3 [ ] +
where Θ 1 [ ] = e l 2 γ γ + 1 3 γ + 1 c 3 γ + 1 2 γ + 1 + 2 γ + 1 2 c 2 2 γ + 1 4 ,
Θ 2 [ ] = e l 3 γ γ + 1 c 2 3 γ + 1 c 3 γ + 1 3 2 γ + 1 3 c 2 2 3 γ + 1 c 3 γ + 1 3 c 2 γ + 1 6 ,
Θ 3 [ ] = e l 3 γ γ + 1 c 2 3 γ + 1 c 3 γ + 1 3 2 γ + 1 3 c 2 2 3 γ + 1 c 3 γ + 1 3 c 2 γ + 1 6 .
Thus, by multiplying Equations (18) and (20), we obtain
$$\frac{f(x_l)}{{}^{c}\!D_{\gamma_1}^{\gamma} f(x_l)} = \frac{e_l^{\gamma}}{\gamma\,\Gamma(\gamma)} + \Theta_4^{[*]}\,e_l^{2\gamma} + \Theta_5^{[*]}\,e_l^{3\gamma} + \cdots,$$
where Θ 4 [ ] = c 2 γ γ 2 γ 2 γ + 1 / 2 c 2 γ 2 π γ 2 ,
Θ 5 [ ] = c 3 γ γ 2 γ 2 γ + 1 / 2 c 2 2 γ 2 π γ 2 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ 2 π γ 2 2 γ 2 γ + 1 / 2 + 2 γ 4 γ + 1 / 2 2 c 2 2 γ 3 γ 3 π ,
ς l = c 2 + 2 γ 2 γ + 1 / 2 c 2 γ γ π e l γ + 1 + Θ 6 [ ] e l 2 γ + 1 +
and Θ 6 [ ] = c 3 + 2 γ 2 γ + 1 / 2 c 2 2 γ γ π + 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π 2 γ 2 γ + 1 / 2 2 γ 4 γ + 1 / 2 2 c 2 2 γ 2 γ 2 π . We have:
ς l = Λ { 2 } e l 4 γ + 1 + Λ { 3 } e l 3 γ + 1 + 1 / 2 Λ { 6 } e l 2 γ + 1 + Λ { 7 } e l γ + 1 + ,
where Λ { 1 } = 2 c 3 γ 2 γ 2 π 3 / 2 2 γ 2 γ + 1 / 2 2 2 γ 4 γ + 1 / 2 2 c 2 2 γ γ π c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π + 2 2 γ 6 γ + 1 / 2 3 c 2 2 π ,
Λ { 2 } = c 2 2 γ γ π 2 γ 2 γ + 1 / 2 Λ { 1 } γ 3 γ 3 π 2 2 γ 2 γ + 1 / 2 ,
Λ { 3 } = c 2 3 γ γ π 2 γ 2 γ + 1 / 2 2 γ 2 γ 2 π ,
Λ { 4 } = 2 c 3 γ 2 γ 2 π 3 / 2 2 γ 2 γ + 1 / 2 2 2 γ 4 γ + 1 / 2 2 c 2 2 γ γ π c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π + 2 2 γ 6 γ + 1 / 2 3 c 2 2 π , ,
Λ { 5 } = γ 2 γ 2 π 3 / 2 2 γ 2 γ + 1 / 2 ,
Λ { 6 } = Λ { 4 } Λ { 5 } , and
Λ { 7 } = c 2 γ γ π 2 γ 2 γ + 1 / 2 γ γ π . Thus,
1 x l = e n 1 c 2 + c 2 2 c 3 e n + c 2 c 3 c 4 + c 2 2 + c 3 c 2 e n 2 +
ς l x l = Λ { 8 } e l γ + Λ { 9 } e l γ + 1 + Λ { 10 } e l 2 γ + 1 + Λ { 11 } e l 3 γ + 1 +
where Λ { 8 } = c 2 + 2 γ 2 γ + 1 / 2 c 2 γ γ π ,
Λ { 9 } = 2 γ 4 γ + 1 / 2 2 c 2 2 γ 2 γ 2 π + 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π 2 γ 2 γ + 1 / 2 c 3 + c 2 2 ,
Λ { 10 } = 2 c 2 3 2 γ 2 γ + 1 / 2 γ γ π + 2 c 2 3 2 γ 4 γ + 1 / 2 2 γ 2 γ 2 π 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ γ π 2 γ 2 γ + 1 / 2 2 γ 2 γ + 1 / 2 c 2 c 3 γ γ π + 2 c 2 c 3 ,
Λ { 11 } = 2 c 2 4 2 γ 4 γ + 1 / 2 2 γ 2 γ 2 π c 2 2 c 3 2 c 2 4 2 γ 6 γ + 1 / 2 3 γ 3 γ 3 π 3 / 2 c 3 2 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π 2 γ 2 γ + 1 / 2 1 2 c 2 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π 2 γ 2 γ + 1 / 2 + c 2 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ 2 γ 2 π 2 γ 2 γ + 1 / 2 c 2 2 c 3 γ γ π + c 3 2 + 2 γ 4 γ + 1 / 2 2 c 2 2 c 3 γ 2 γ 2 π 2 γ 2 γ + 1 / 2 c 2 c 4 γ γ π + c 2 c 4 ,
( ς l ) ( x l ) + 2 ( ς l ) ( x l ) 2 + β [ ] ( ς l ) ( x l ) 4 = Λ 1 { } e l γ + Λ 2 { } e l γ + 1 + Λ 3 { } e l 2 γ + 1 +
where Λ 1 { } = c 2 + 2 γ 2 γ + 1 / 2 c 2 γ γ π
Λ 2 { } = 2 γ 4 γ + 1 / 2 2 c 2 2 γ 2 γ 2 π 4 4 2 γ 2 γ + 1 / 2 c 2 2 γ γ π + 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ γ π 2 γ 2 γ + 1 / 2 c 3 + 3 c 2 2
Λ 3 { } = 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ 2 γ 2 π 4 c 2 3 5 2 γ 2 γ + 1 / 2 c 2 c 3 γ γ π 5 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ γ π 2 γ 2 γ + 1 / 2 4 c 2 3 2 γ 6 γ + 1 / 2 3 γ 3 γ 3 π 3 / 2 + 2 c 2 3 2 γ 2 γ + 1 / 2 γ γ π + 6 c 2 c 3 + 6 c 2 3 2 γ 4 γ + 1 / 2 2 γ 2 γ 2 π . Therefore,
h 1 [ ] = ( x l ) γ 1 γ c ˇ x l ( ς l ) ( x l ) + 2 ( ς l ) ( x l ) 2 + β [ ] [ ] ( ς l ) ( x l ) 4 ,
h 1 [ ] = Λ { 12 } e l γ + 1 + Λ { 13 } e l 2 γ + 1 + Λ { 14 } e l 3 γ + 1 +
where Λ { 12 } = 2 γ 2 γ + 1 / 2 c 2 γ 2 π γ 2 c 2 γ γ ,
Λ { 13 } = 2 2 γ 2 γ + 1 / 2 c 2 2 γ 2 π γ 2 + 1 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 γ 2 π γ 2 2 γ 2 γ + 1 / 2 c 3 γ γ + 2 c 2 2 γ γ ,
Λ { 14 } = 9 c 2 3 2 γ 4 γ + 1 / 2 2 γ 3 γ 3 π 4 c 2 3 2 γ 6 γ + 1 / 2 3 γ 4 γ 4 π 3 / 2 + c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ 3 γ 3 π 3 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ 2 π γ 2 2 γ 2 γ + 1 / 2 + 4 c 2 c 3 γ γ c 2 3 γ γ 3 2 γ 2 γ + 1 / 2 c 2 c 3 γ 2 π γ 2 4 c 2 3 2 γ 2 γ + 1 / 2 γ 2 π γ 2 ,
γ + 1 h 1 [ ] 1 γ = Λ { 15 } e l 2 γ + 1 + Λ { 16 } e l 3 γ + 1 +
where Λ { 15 } = 3 2 γ 2 γ + 1 / 2 c 2 2 γ γ π 2 γ 4 γ + 1 / 2 2 c 2 2 γ 2 γ 2 π 2 c 2 2 ,
Λ { 16 } = 9 c 2 3 2 γ 4 γ + 1 / 2 2 γ 2 γ 2 π + 4 4 c 2 3 2 γ 6 γ + 1 / 2 3 γ 3 γ 3 π 3 / 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ 2 γ 2 π + 3 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ γ π 2 γ 2 γ + 1 / 2 4 c 2 c 3 + c 2 3 + 3 2 γ 2 γ + 1 / 2 c 2 c 3 γ γ π + 4 4 c 2 3 2 γ 2 γ + 1 / 2 γ γ π ,
Finally, substituting these expansions into the scheme for $v_l$ yields
$$e_{l+1} = \left(£_1^{[*]} + £_2^{[*]} + £_3^{[*]} + £_4^{[*]} + £_5^{[*]} + £_6^{[*]}\right) e_l^{3\gamma+1} + O\big(e_l^{4\gamma+1}\big),$$
where
where £ 1 [ ] = 9 9 c 2 3 2 γ 4 γ + 1 / 2 2 γ 2 γ 2 π , £ 2 [ ] = + 4 c 2 3 2 γ 6 γ + 1 / 2 3 γ 3 γ 3 π 3 / 2 , £ 3 [ ] = c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ 2 γ 2 π , £ 4 [ ] = 3 2 c 3 3 γ 3 3 γ + 1 / 3 γ + 2 / 3 c 2 γ γ π 2 γ 2 γ + 1 / 2 ,
£ 5 [ ] = 4 c 2 c 3 + c 2 3 , £ 6 [ ] = 3 2 γ 2 γ + 1 / 2 c 2 c 3 γ γ π + 4 c 2 3 2 γ 2 γ + 1 / 2 γ γ π .
Thus, the theorem is proven.    □

2.2. Dynamical Analysis of the Fractional Scheme MM[γ]

Ensuring the robustness and reliability of solving nonlinear equations requires a careful examination of the dynamical behavior of solution techniques [19]. Dynamical analysis in this context refers to a method’s ability to converge on the true root from an initial guess, even when there are small perturbations or errors in the calculations. Single root-finding algorithms generally have local convergence around the root, making them effective when the initial guess is close enough. However, their stability depends on the characteristics of the function and the initial approximation. If the function is poorly behaved or the initial guess is far from the root, these methods may diverge or converge to an incorrect solution.
The stability of a root-finding method can be analyzed using concepts from complex dynamical systems, which examine how sensitive the root is to variations in the input, as well as the convergence criterion, which evaluates how quickly the method approaches the root. The choice of parameters plays a crucial role in the method’s performance, particularly within the convergence zone (the red region) on the parametric plane. Stability and faster convergence are achieved when optimal parameters are selected from this region. However, choosing parameters near the boundary of this zone, or from the divergence zone, may lead to reduced stability or slower convergence rates. Selecting parameters from the divergence region can result in failure or divergence, as the method may become unstable. To mitigate the effects of computational errors and ensure reliable performance, stability is typically maintained by carefully balancing the method’s convergence properties with the appropriate selection of the initial guess, function parameters, and stopping criteria.
Applying the scheme to the quadratic polynomial $p(x) = (x - a)(x - b)$, the following rational map $R(x)$ is obtained:
R ( x ) = y = x 2 γ 1 γ + 1 x 2 1 x γ 1 2 γ 1 [ ] ,
where [ ] = γ + 1 x 2 1 3 x γ 1 11 [ i ] + 12 [ i ] + 13 [ i ] + 14 [ i ] + 15 [ i ] 16 [ i ] + 17 [ i ] + 18 [ i ] + 19 [ i ] + 20 [ i ] 21 [ i ] + 22 [ i ] + 23 [ i ] + 14 [ i ] + 15 [ i ] ,
11 [ i ] = 3 + 16 γ 1 γ γ + 1 x 2 1 x 4 γ 1 β [ ] [ ] x 4 42 γ 1 γ γ + 1 x 2 1 x γ 1 β [ ] [ ] x 7 + 122 γ 1 γ γ + 1 x 2 1 x γ 1 β [ ] [ ] x 5 124 γ 1 γ + 1 x 2 1 x 2 γ 1 β [ ] [ ] x 4 ,
12 [ i ] = 122 γ 1 γ γ + 1 x 2 1 x γ 1 β [ ] [ ] x 3 + 124 γ 1 γ + 1 x 2 1 x 2 γ 1 β [ ] x 2 + 42 γ 1 γ γ + 1 x 2 1 x γ 1 β [ ] x 48 γ 1 γ γ + 1 x 2 1 x 3 γ 1 β [ ] x 5 ,
+ 74 γ 1 γ γ + 1 x 2 1 x 2 γ 1 β [ ] [ ] x 6 + 48 γ 1 γ γ + 1 x 2 1 x 3 γ 1 β [ ] [ ] x 3 + 464 γ 1 γ + 1 x 2 1 x 6 γ 1 β [ ] [ ] x 2 ,
13 [ i ] = + 616 γ 1 γ + 1 x 2 1 x 4 γ 1 β [ ] x 4 124 γ 1 γ γ + 1 x 2 1 x 2 γ 1 β [ ] x 4 1216 γ 1 γ + 1 x 2 1 x 4 γ 1 β [ ] x 2 + 64 γ 1 γ γ + 1 x 2 1 x 2 γ 1 β [ ] x 2 ,
14 [ i ] = 4 γ 1 γ γ + 1 x 2 1 x 2 γ 1 4 γ 1 γ γ + 1 x 2 1 x 3 γ 1 2 γ 1 γ x 5 + 84 γ 1 γ + 1 x 2 1 x 3 γ 1 2 γ 1 γ x 3 ,
15 [ i ] = 4 γ 1 γ γ + 1 x 2 1 x 3 γ 1 2 γ 1 γ x 124 2 + γ γ γ + 1 x 2 1 x 4 γ 1 β [ ] x 2 + 124 2 + γ γ γ + 1 x 2 1 x 4 γ 1 β [ ] x 4 4 γ 1 γ γ + 1 x 2 1 x 2 γ 1 β [ ] ,
16 [ i ] = 416 γ 1 γ + 1 x 2 1 x 4 γ 1 x 2 + 24 γ 1 γ γ + 1 x 2 1 x 2 γ 1 x 2 + 616 γ 1 γ + 1 x 2 1 x 4 γ 1 β [ ] 4 2 γ 1 γ γ + 1 x 2 1 x 2 γ 1 x 4 ,
17 [ i ] = 464 γ 1 γ + 1 x 2 1 x 6 γ 1 β [ ] + 216 γ 1 γ + 1 x 2 1 x 4 γ 1 x 4 + 3 34 γ 1 γ γ + 1 x 2 1 x 2 γ 1 x 6 + 256 γ 1 γ + 1 x 2 1 x 8 γ 1 β [ ] ,
18 [ i ] = + 216 γ 1 γ + 1 x 2 1 x 4 γ 1 + 3 x 8 52 γ 1 γ γ + 1 x 2 1 x γ 1 x 7 + 4 γ 1 γ + 1 x 2 1 x 2 γ 1 x 6 + 122 γ 1 γ γ + 1 x 2 1 x γ 1 x 5 ,
19 [ i ] = + 62 γ 1 γ + 1 x 2 1 x γ 1 x 5 154 γ 1 γ + 1 x 2 1 x 2 γ 1 x 4 122 γ 1 γ γ + 1 x 2 1 x γ 1 x 3 62 γ 1 γ + 1 x 2 1 x γ 1 x 3 ,
20 [ i ] = + 154 γ 1 γ + 1 x 2 1 x 2 γ 1 x 2 + 5 52 γ 1 γ γ + 1 x 2 1 x γ 1 x + β [ ] x 8 4 β [ ] x 6 + 6 β [ ] x 4 4 β [ ] x 2 + 18 x 4 12 x 6 4 γ 1 γ + 1 x 2 1 x 2 γ 1 ,
21 [ i ] = 1216 γ 1 γ + 1 x 2 1 x 5 γ 1 2 γ 1 γ β [ ] x 3 124 γ 1 γ + 1 x 2 1 x 3 γ 1 2 γ 1 γ β [ ] x 5 + 1216 γ 1 γ + 1 x 2 1 x 5 γ 1 2 γ 1 γ β [ ] x + 244 γ 1 γ + 1 x 2 1 x 3 γ 1 ,
22 [ i ] = 2 γ 1 γ β [ ] x 3 124 γ 1 γ + 1 x 2 1 x 3 γ 1 2 γ 1 γ β [ ] x 4 γ 1 γ γ + 1 x 2 1 x 5 γ 1 8 γ 1 γ β [ ] x 3 + 616 γ 1 γ + 1 x 2 1 x 6 γ 1 4 γ 1 γ β [ ] x 2 ,
23 [ i ] = 464 γ 1 γ + 1 x 2 1 x 7 γ 1 2 γ 1 γ β [ ] x + β [ ] 12 x 2 .
For $\gamma \to 1$, we have
$$R(x) = \frac{\varphi_1^{[*]}\,x^{10} + \varphi_2^{[*]}\,x^{8} + \varphi_3^{[*]}\,x^{6} + \varphi_4^{[*]}\,x^{4} + \big(5x^{2} - 1\big)\,\beta^{[*]}}{512\,x^{9}},$$
where $\varphi_1^{[*]} = \beta^{[*]} - 160$, $\varphi_2^{[*]} = 5\beta^{[*]} - 480$, $\varphi_3^{[*]} = 10\beta^{[*]} + 160$, $\varphi_4^{[*]} = 10\beta^{[*]} - 32$, and $a, b$ belong to $\mathbb{C}$. Thus, $R(x)$ depends on $a$, $b$, and $x$. Using the Möbius transformation $M(x) = \frac{x-a}{x-b}$, we see that $R(x)$ is conjugate with the operator, and for $\gamma \to 1$, we have [19]
O x = x 4 x 6 8 x 5 27 x 4 48 x 3 + β [ ] x 47 x 2 24 x 5 β [ ] x 5 5 x 6 24 x 5 47 x 4 . 48 x 3 27 x 2 8 x 1 3 ,
which is independent of $a$ and $b$, as verified using Maple, with superattracting fixed points at $0$ and $\infty$. Thus, $R(x)$ fits precisely the form:
$$R^{[*]}(x) = \frac{\sum_{i=0}^{n} a_i\,x^{i}}{\sum_{i=0}^{n} a_{n-i}\,x^{i}}, \qquad a_i \in \mathbb{R}, \quad i = 0, \ldots, n,$$
which exhibits interesting properties [27].
Proposition 1.
The fixed points of $O(\tau)$ are as follows:
  • $\tau_0 = 0$ and $\tau_\infty = \infty$ are super-attracting points;
  • $\tau_1 = 1$ is a repelling point; and
  • The critical points 0 and 1 are super-attracting and repelling points, respectively, for γ 1 , and are given by:
    C 1 [ ] = 16.0 + β [ ] 16.0 β [ ] 5 β [ ] 288 16 5.0 β 1 [ ] 8 10.0 , C 2 [ ] = 16.0 + β [ ] 16.0 β [ ] 5 β [ ] 288 16 5.0 + β 1 [ ] 8 10.0 , C 3 [ ] = 16.0 + β [ ] 16.0 β [ ] 5 β [ ] 288 16 5.0 + β 2 [ ] 2 , C 4 [ ] = 16.0 + β [ ] 16.0 β [ ] 5 β [ ] 288 16 5.0 β 2 [ ] 2 ,
    where β 1 [ ] = 224 β [ ] + 5 β [ ] 2 4608 5 β [ ] 5 β [ ] 288 + 268 5 β [ ] 2 / 3 5 β [ ] 288 5 5 β [ ] 5 / 2 5 β [ ] 288 , β 2 [ ] = 2 + 16 β [ ] 2 32 30 + 2 β [ ] 5 + β 3 [ ] 5 β [ ] 288 , β 3 [ ] = 2 5 β [ ] 32 64 β [ ] 2 64 + 2 16 β [ ] 15 + β [ ] 5 .
The stability function for $\gamma \to 1$ in the single root-finding method is given by:
$$\mathcal{S}\big(1, \beta^{[*]}\big) = \frac{81{,}920 + 512\,\beta^{[*]}}{\big(160 + \beta^{[*]}\big)^{2}}.$$
For other values of the fractional parameter γ 0 , 1 , the stability regions are examined, as shown in Figure 1a–c.
The stability region of the fractional iterative method MM[γ] illustrates the initial guesses and parameter values for which the method converges to the correct root, as seen in Figure 1a–c. These figures show how the fractional parameter γ shapes the method’s resilience and effectiveness in solving nonlinear equations over a range of γ values, and they highlight its role in determining stability: the method is most stable when γ = 1, with stability decreasing as γ approaches zero.
By using critical points as starting values, parametric planes are generated (see Figure 2a–c), which depict the behavior of the method’s iterations, including patterns such as convergence, divergence, and chaotic behavior (Figure 3a–h and Figure 4a–c). These figures highlight stable and unstable behavior at various β [ ] values and γ = 1 . In Figure 2, Figure 3 and Figure 4, small circles represent fixed points, squares with stars indicate super-attracting fixed points, and squares denote critical points.
Parametric planes are essential for understanding the convergence behavior of iterative methods used to solve nonlinear equations. They provide insights into the stability and efficiency of iterations, helping to identify optimal parameters for fast and accurate convergence. These planes also reveal divergence zones, allowing for the avoidance of parameters that may cause instability or failure. By analyzing parametric planes, the optimal parameters for minimizing the number of iterations and improving solution reliability can be determined.
The figures also illustrate the method’s sensitivity to parameter changes, showing whether the approach is robust or requires careful tuning. Additionally, they display the nature of convergence—whether linear, super-linear, or quadratic—indicating how quickly the method reaches the solution. This information is crucial for selecting and optimizing parameters to ensure efficient and reliable problem-solving.
Understanding the dynamics of iterative methods is essential for solving nonlinear problems over time. Dynamic analysis helps evaluate how initial estimates affect outcomes, convergence rates, and overall stability. By analyzing these dynamics, one can ensure the reliability and efficiency of the approach, avoiding issues like slow convergence or divergence. In applied sciences and engineering, dynamic analysis plays a critical role in generating accurate solutions from iterative processes.
Dynamic planes offer insights into solution stability, convergence patterns, and behavior throughout the iterations. These planes reveal zones of attraction, where iterations converge, and regions of repulsion, where they diverge. They also highlight fixed points, cycles, and chaotic behavior, providing a deeper understanding of the methods’ sensitivity to initial conditions. This analysis is crucial for optimizing iterative methods and ensuring consistency in solving nonlinear equations. Similar to parametric planes, dynamic planes are generated to evaluate the stability of different schemes. Figure 3a–h shows the stable behavior of the schemes for various values of γ and β [ ] . Choosing β [ ] values from the divergence zone of the parametric planes leads to unstable behavior, as illustrated in Figure 4a,c for different γ values.
Using the dynamical and parametric plane notation, we find the best parameter value that accelerates the rate of convergence of the simple root-finding method. Then, using the newly developed stable fractional-order scheme MMγ as a correction factor, we propose a novel parallel fractional scheme for analyzing (1) in the following section.

3. Development and Analysis of Fractional Parallel Scheme

Among derivative-free simultaneous methods, the Weierstrass–Durand–Kerner (also known as Weierstrass–Dochev) method [28] is one of the most effective approaches. It is defined as:
$$x_i^{(l+1)} = x_i^{(l)} - w\big(x_i^{(l)}\big),$$
where
$$w\big(x_i^{(l)}\big) = \frac{f\big(x_i^{(l)}\big)}{\prod_{\substack{j=1 \\ j \ne i}}^{n}\big(x_i^{(l)} - x_j^{(l)}\big)}, \qquad i, j = 1, \ldots, n,$$
is known as Weierstrass’ correction. This method exhibits local quadratic convergence.
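A minimal sketch of the Weierstrass–Durand–Kerner iteration for a monic polynomial follows; the test polynomial and complex starting points are illustrative assumptions, not values from the paper:

```python
def wdk(coeffs, z, iters=100):
    """Weierstrass-Durand-Kerner: update every approximation z[i] by the
    Weierstrass correction p(z[i]) / prod_{j != i} (z[i] - z[j])."""
    def p(x):  # Horner evaluation; coeffs listed from highest degree, monic
        r = 0.0
        for c in coeffs:
            r = r * x + c
        return r
    z = list(z)
    for _ in range(iters):
        new = []
        for i, zi in enumerate(z):
            denom = 1.0
            for j, zj in enumerate(z):
                if j != i:
                    denom *= zi - zj
            new.append(zi - p(zi) / denom)
        z = new
    return z

# p(x) = x**3 - 6x**2 + 11x - 6 = (x-1)(x-2)(x-3); distinct complex starts
roots = wdk([1.0, -6.0, 11.0, -6.0], [(0.4 + 0.9j) ** k for k in range(1, 4)])
print(sorted(r.real for r in roots))  # ~[1.0, 2.0, 3.0]
```

All approximations are updated from the same previous sweep, so each root's correction can be computed independently—the property exploited by the parallel schemes discussed above.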
In 1977, Ehrlich [29] introduced the following third-order simultaneous method:
$$x_i^{(l+1)} = x_i^{(l)} - \cfrac{1}{\cfrac{1}{N_i\big(x_i^{(l)}\big)} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{n} \cfrac{1}{x_i^{(l)} - x_j^{(l)}}},$$
where $N_i(x) = \frac{f(x)}{f'(x)}$ denotes the Newton correction. Replacing $x_j^{(l)}$ with the correction $u_j^{(l)}$ in (15), Petkovic et al. [30] improved the convergence order of this method from three to six:
$$x_i^{(l+1)} = x_i^{(l)} - \cfrac{1}{\cfrac{1}{N_i\big(x_i^{(l)}\big)} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{n} \cfrac{1}{x_i^{(l)} - u_j^{(l)}}},$$
with the correction term: u j l = x i l ( s j l ) ( x i l ) 2 ( s j l ) ( x i l ) ( x i l ) ( x i l ) , where s j l = x j l ( x j l ) ( x j l ) .
Petkovic et al. [31] further accelerated the convergence order of this method from three to ten, giving the method (PM[∗]) as:
$$x_i^{(l+1)} = x_i^{(l)} - \cfrac{1}{\cfrac{1}{N_i\big(x_i^{(l)}\big)} - \sum\limits_{\substack{j=1 \\ j \ne i}}^{n} \cfrac{1}{x_i^{(l)} - T_j^{(l)}}},$$
where T j l = γ j l ς j l γ j l ) γ j l x j l x j l x j l γ j l 2 ς j l x j l ς j l γ j l , γ j l = ς j l x j l ς j l x j l x j l x j l ς j l 2 , ς j l = x j l x j l x j l .
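The basic Ehrlich step above can be sketched as follows; the Newton correction $N_i$ is computed from the polynomial and its derivative, and the test polynomial and starting points are illustrative assumptions:

```python
def ehrlich(coeffs, z, iters=100):
    """Ehrlich's third-order simultaneous iteration (sketch):
    z[i] <- z[i] - 1 / (1/N_i(z[i]) - sum_{j != i} 1/(z[i] - z[j]))."""
    def horner(cs, x):
        r = 0.0
        for c in cs:
            r = r * x + c
        return r
    n = len(coeffs) - 1
    dcoeffs = [c * (n - k) for k, c in enumerate(coeffs[:-1])]  # coefficients of p'
    z = list(z)
    for _ in range(iters):
        new = []
        for i, zi in enumerate(z):
            fz = horner(coeffs, zi)
            if fz == 0:                       # already exactly at a root
                new.append(zi)
                continue
            ni = fz / horner(dcoeffs, zi)     # Newton correction N_i
            s = sum(1.0 / (zi - zj) for j, zj in enumerate(z) if j != i)
            new.append(zi - 1.0 / (1.0 / ni - s))
        z = new
    return z

# p(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = ehrlich([1.0, -6.0, 11.0, -6.0], [(0.4 + 0.9j) ** k for k in range(1, 4)])
print(sorted(r.real for r in roots))  # ~[1.0, 2.0, 3.0]
```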
Using the method (38) in (16), we developed a new parallel scheme MM[∗] for simultaneously finding all solutions of Equation (1). The updated iterative scheme is:
$$s_i^{[l]} = \varsigma_i^{[l]} - \frac{f\big(\varsigma_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \ne i}}^{n}\big(\varsigma_i^{[l]} - \varsigma_j^{[l]}\big)}\left[1 + \frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)} + 2\left(\frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)}\right)^{2} + \beta^{[*]}\left(\frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)}\right)^{4}\right],$$
where
$$\varsigma_i^{[l]} = x_i^{[l]} - \frac{f\big(x_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \ne i}}^{n}\big(x_i^{[l]} - v_j^{[l]}\big)},$$
$$v_j^{[l]} = x_j^{[l]} - \left(\Gamma(\gamma+1)\,\frac{f\big(x_j^{[l]}\big)}{{}^{c}\!D_{\gamma_1}^{\gamma} f\big(x_j^{[l]}\big)}\left[1 + \frac{f\big(\varsigma_j^{[l]}\big)}{f\big(x_j^{[l]}\big)} + 2\left(\frac{f\big(\varsigma_j^{[l]}\big)}{f\big(x_j^{[l]}\big)}\right)^{2} + \beta^{[*]}\left(\frac{f\big(\varsigma_j^{[l]}\big)}{f\big(x_j^{[l]}\big)}\right)^{4}\right]\right)^{1/\gamma},$$
and
$$\varsigma_j^{[l]} = x_j^{[l]} - \left(\Gamma(\gamma+1)\,\frac{f\big(x_j^{[l]}\big)}{{}^{c}\!D_{\gamma_1}^{\gamma} f\big(x_j^{[l]}\big)}\right)^{1/\gamma}, \qquad \beta^{[*]} \in \mathbb{R}.$$
The following theorem establishes the local order of convergence of the proposed fractional parallel scheme.
Theorem 3.
Let $\zeta_1, \ldots, \zeta_n$ be simple roots of a nonlinear equation, and assume that the initial distinct estimates $x_1^{[0]}, \ldots, x_n^{[0]}$ are sufficiently close to the true roots. Then, the MM[∗] method achieves a convergence order of $6\gamma + 3$.
Proof. 
Let $e_i = x_i^{[l]} - \zeta_i$, $e_\varsigma = \varsigma_i^{[l]} - \zeta_i$, and $e_i^{[*]} = s_i^{[l]} - \zeta_i$ represent the errors in $x_i^{[l]}$, $\varsigma_i^{[l]}$, and $s_i^{[l]}$, respectively. From the first step of MM[∗], we have:
$$\varsigma_i^{[l]} - \zeta_i = x_i^{[l]} - \zeta_i - \frac{f\big(x_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \ne i}}^{n}\big(x_i^{[l]} - v_j^{[l]}\big)},$$
so that
$$e_\varsigma = e_i - e_i \prod_{\substack{j=1 \\ j \ne i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}} = e_i\left(1 - \prod_{\substack{j=1 \\ j \ne i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}}\right),$$
where
$$\prod_{\substack{j=1 \\ j \ne i}}^{n} \frac{x_i^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}} = \prod_{\substack{j=1 \\ j \ne i}}^{n}\left(1 + \frac{v_j^{[l]} - \zeta_j}{x_i^{[l]} - v_j^{[l]}}\right) = 1 + O\big(e_i^{3\gamma+1}\big),$$
since, from (31), $v_j^{[l]} - \zeta_j = O\big(e_j^{3\gamma+1}\big)$. Thus,
$$e_\varsigma = e_i\,(n-1)\,O\big(e_i^{3\gamma+1}\big).$$
Assuming the errors are of comparable size, $e_i \simeq e_j$, we have
$$e_\varsigma = O\big(e_i^{3\gamma+2}\big).$$
In the second step, we obtain:
$$s_i^{[l]} - \zeta_i = \varsigma_i^{[l]} - \zeta_i - \frac{f\big(\varsigma_i^{[l]}\big)}{\prod_{\substack{j=1 \\ j \ne i}}^{n}\big(\varsigma_i^{[l]} - \varsigma_j^{[l]}\big)}\left[1 + \frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)} + 2\left(\frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)}\right)^{2} + \beta^{[*]}\left(\frac{f\big(\varsigma_i^{[l]}\big)}{f\big(x_i^{[l]}\big)}\right)^{4}\right],$$
leading to:
$$e_i^{[*]} = e_\varsigma\left(1 - \prod_{\substack{j=1 \\ j \ne i}}^{n}\frac{\varsigma_i^{[l]} - \zeta_j}{\varsigma_i^{[l]} - \varsigma_j^{[l]}}\left[1 + \frac{e_\varsigma\,\vartheta^{[*]}}{e_i} + 2\left(\frac{e_\varsigma\,\vartheta^{[*]}}{e_i}\right)^{2} + \beta^{[*]}\left(\frac{e_\varsigma\,\vartheta^{[*]}}{e_i}\right)^{4}\right]\right),$$
where $\frac{f(\varsigma_i^{[l]})}{f(x_i^{[l]})} = \frac{e_\varsigma\,\vartheta^{[*]}}{e_i}$. Since $e_\varsigma = O\big(e_i^{3\gamma+2}\big)$, the bracketed factor is $1 + O\big(e_i^{3\gamma+1}\big)$ and, as in the first step, the product over $j \ne i$ is $1 + O\big(e_i^{3\gamma+1}\big)$. We conclude that:
$$e_i^{[*]} = e_\varsigma\,O\big(e_i^{3\gamma+1}\big) = O\big(e_i^{6\gamma+3}\big).$$
This completes the proof.    □

4. Numerical Results

In this section, we demonstrate the effectiveness and stability of the method by examining several engineering applications using the following termination criterion, implemented in Maple 18:
$$\text{(i)} \qquad e_i^{(l)} = \left\| x_i^{(l+1)} - x_i^{(l)} \right\|_2 \le 10^{-18},$$
where $e_i^{(l)}$ represents the error norm (2-norm). We compare our method with the PM[∗] method and the approach presented in [32], referred to as ZM[∗]. Using Algorithm 1, we find all solutions to the nonlinear problem (1) simultaneously.
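The termination test can be sketched as follows; since IEEE double precision cannot resolve $10^{-18}$, the illustrative tolerance is loosened (the paper itself relies on Maple's multiprecision arithmetic):

```python
import math

def stop(x_new, x_old, tol=1e-12):
    """Stop when the 2-norm of successive approximations ||x^(l+1) - x^(l)||_2 <= tol."""
    return math.sqrt(sum(abs(a - b) ** 2 for a, b in zip(x_new, x_old))) <= tol

print(stop([1.0, 2.0], [1.0, 2.0]))  # True: no change between sweeps
print(stop([1.0, 2.0], [1.1, 2.0]))  # False: change of 0.1 exceeds tol
```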
Algorithm 1: Parallel fractional numerical scheme MM[∗] for solving nonlinear problems (pseudocode shown as a figure in the published article).
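Algorithm 1 follows the usual structure of a simultaneous (parallel) scheme: initial guesses for all roots, a parallel correction of every approximation at each sweep, and the residual-norm stopping rule (i) above. As a hedged illustration of this structure — a classical Weierstrass (Durand–Kerner) iteration, not the authors' MM[∗] update — consider the following sketch:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Find all roots of a polynomial simultaneously.

    coeffs: polynomial coefficients, highest degree first.
    Every approximation is corrected in parallel at each sweep, and the
    iteration stops when the 2-norm of the correction drops below tol,
    mirroring the stopping rule (i) used in the experiments.
    """
    coeffs = np.asarray(coeffs, dtype=complex)
    coeffs = coeffs / coeffs[0]          # make the polynomial monic
    n = len(coeffs) - 1
    # standard seeds on a spiral in the complex plane (distinct, non-real)
    x = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for it in range(max_iter):
        num = np.polyval(coeffs, x)
        den = np.array([np.prod(x[i] - np.delete(x, i)) for i in range(n)])
        delta = num / den                # Weierstrass correction for each root
        x = x - delta
        if np.linalg.norm(delta) <= tol:
            break
    return x, it + 1

# all three roots of x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3) at once
roots, iterations = durand_kerner([1.0, -6.0, 11.0, -6.0])
print(sorted(roots, key=lambda z: z.real))
```

MM[∗] replaces the plain Weierstrass correction with the fractional correction terms of Section 2, but the surrounding loop and stopping rule are the same.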

4.1. Example 1

Fractional differential equations are particularly useful in modeling complex engineering systems, such as the quarter car suspension model, where memory effects and damping properties play a critical role. Unlike traditional integer-order equations, fractional equations provide more accurate representations of systems with hereditary behaviors, such as viscoelasticity and energy dissipation in suspension systems. In this context, fractional differential equations are employed to design suspension models with improved performance and stability, making them powerful tools for addressing the challenges in automotive engineering and control system design. This leads to the following fractional differential equation:
$${}^{c}\!D^{\gamma} y(x) + \alpha U^{[*]} \big(y(x)\big)^4 = 0, \qquad y(0) = 0, \quad 0 < \gamma \leq 1, \; x \geq 0.$$
Using the method from [32], we simulate Equation (57) using the following nonlinear approximation:
$$y(x) \approx \frac{\alpha \big(U^{[*]}\big)^{4}}{\Gamma(\gamma+1)}\,x^{\gamma} - \frac{4\alpha^2 \big(U^{[*]}\big)^{7}}{\Gamma(3\gamma+1)}\,x^{2\gamma} + \frac{16\alpha^3 \big(U^{[*]}\big)^{10}\,\Gamma(2\gamma+1)}{2\,\Gamma(\gamma+1)\,\Gamma(3\gamma+1)}\,x^{3\gamma} + \cdots.$$
If we set $U^{[*]} = 1$, $\alpha = 1$, and $\gamma = 1$, Equation (58) becomes:
$$y_5(x) \approx x - 2x^2 + \frac{14}{3}x^3 - \frac{35}{3}x^4 + \frac{91}{3}x^5.$$
Equation (59) has five solutions:
$$\zeta_1 = 0.0, \quad \zeta_{2,3} = 0.152018637 \pm 0.3931364228\,i, \quad \zeta_{4,5} = 0.314432633 \pm 0.25883527180\,i.$$
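These values can be cross-checked numerically. The snippet below (an illustrative check, not part of the authors' Maple implementation) recomputes the zeros of the quintic in Equation (59) with NumPy's companion-matrix root finder and verifies that each returned zero satisfies the equation:

```python
import numpy as np

# Coefficients of y5(x) = (91/3)x^5 - (35/3)x^4 + (14/3)x^3 - 2x^2 + x,
# listed highest degree first for np.roots / np.polyval.
c = [91 / 3, -35 / 3, 14 / 3, -2.0, 1.0, 0.0]

roots = np.roots(c)
residuals = np.abs(np.polyval(c, roots))

print(np.sort_complex(roots))   # one real zero at the origin, two conjugate pairs
print(residuals.max())
```

The trailing zero coefficient makes $x = 0$ an exact root, which `np.roots` returns exactly; the remaining four zeros form two complex-conjugate pairs.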
The Caputo-type derivative of Equation (59) is:
$${}^{c}\!D^{\gamma} y_5(x) = \frac{91}{3}\,\frac{\Gamma(6)}{\Gamma(6-\gamma)}\,x^{5-\gamma} - \frac{35}{3}\,\frac{\Gamma(5)}{\Gamma(5-\gamma)}\,x^{4-\gamma} + \frac{14}{3}\,\frac{\Gamma(4)}{\Gamma(4-\gamma)}\,x^{3-\gamma} - 2\,\frac{\Gamma(3)}{\Gamma(3-\gamma)}\,x^{2-\gamma} + \frac{\Gamma(2)}{\Gamma(2-\gamma)}\,x^{1-\gamma}.$$
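Equation (60) is the Caputo power rule $D^{\gamma} x^{k} = \frac{\Gamma(k+1)}{\Gamma(k+1-\gamma)}\, x^{k-\gamma}$ (with constants mapped to zero) applied term by term to Equation (59). A minimal helper illustrating this rule — written here for illustration, not taken from the paper's code:

```python
from math import gamma

def caputo_factor(k, g):
    """Coefficient Gamma(k+1)/Gamma(k+1-g) in the Caputo power rule
    D^g x^k = Gamma(k+1)/Gamma(k+1-g) * x^(k-g), for integer k >= 1
    and 0 < g <= 1."""
    return gamma(k + 1) / gamma(k + 1 - g)

def caputo_poly(coeffs, g):
    """Apply the rule term by term to a polynomial given as {k: c_k};
    the constant term (k = 0) is annihilated by the Caputo derivative."""
    return {k - g: c * caputo_factor(k, g) for k, c in coeffs.items() if k >= 1}

# Equation (59): x - 2x^2 + (14/3)x^3 - (35/3)x^4 + (91/3)x^5
y5 = {1: 1.0, 2: -2.0, 3: 14 / 3, 4: -35 / 3, 5: 91 / 3}
dy5 = caputo_poly(y5, 0.5)   # exponents k - 1/2, as in Equation (60) with gamma = 1/2
```

For $\gamma = 1$ the factors reduce to the classical ones (e.g. $\Gamma(6)/\Gamma(5) = 5$), so Equation (60) collapses to the ordinary derivative of (59).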
For the dynamical analysis, we set $\gamma = \frac{1}{2}$ in Equation (58) and obtain the following nonlinear equation:
$$y_5(x) \approx 2\sqrt{\frac{x}{\pi}} - 4x + \frac{\frac{32}{3} + 2\pi}{3\pi^{3/2}}\,x^{3/2} + \frac{68}{60\pi}\,x^2.$$
The exact root of Equation (61) is zero. The dynamical analysis of the simple root-finding method for this equation includes measurements of elapsed time, percentage of convergence, total points, and number of iterations for different $\beta^{[*]}$ values, as shown in Table 1 and Figure 5a–c.
Using the information from Table 1, the optimal parameter values from the dynamical analysis are applied in the parallel schemes to find all solutions of the nonlinear problems. With initial guesses within $10^{-2}$ of the exact solutions, the convergence rate of the simultaneous methods increases, leading to faster convergence to the exact roots in fewer iterations.
In Table 2, $\sigma_i^{[4]}$ represents the local computational order of convergence [33]. The table clearly demonstrates that when $\gamma \rightarrow 1$, the residual error for each root, along with the order and rate of convergence, is significantly improved compared to existing methods, indicating the higher efficiency and stability of the approach.
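For reference, the local computational order of convergence can be estimated from three consecutive error norms via the standard formula $\sigma \approx \ln\!\big(e^{[l+1]}/e^{[l]}\big) / \ln\!\big(e^{[l]}/e^{[l-1]}\big)$ of Jay [33]; a minimal sketch:

```python
import math

def local_convergence_order(errors):
    """Estimate the computational order of convergence from the last three
    successive error norms e_{l-1}, e_l, e_{l+1} (cf. Jay [33])."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# A sequence satisfying e_{l+1} ~ e_l^2 gives an estimate close to 2
print(local_convergence_order([1e-2, 1e-4, 1e-8]))
```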
Table 2 and Table 3 clearly show that, across a range of $\gamma$ values, MM[∗] outperforms ZM[∗] and PM[∗] in terms of efficiency, computation time (CPU time), number of iterations ($n$), and residual error $\big\Vert x_i^{[l+1]} - x_i^{[l]} \big\Vert$. The convergence order of a fractional parallel iterative scheme is influenced by the fractional parameter, which captures memory effects; it is determined by the order of the fractional derivative, with faster convergence observed as $\gamma$ increases. Our method, MM[∗], outperforms the existing methods ZM[∗] and PM[∗] in the literature, achieving a convergence order of 9.002313 when $\gamma = 0.99$.

4.2. Example 2

Fractional calculus extends classical differential equations and enables more accurate descriptions of physical systems, such as electrical circuits, fluid flow, and viscoelastic materials. It also allows a more flexible representation of anomalous diffusion and wave propagation, improving predictions in fields such as biology and materials science, and leads to the following fractional initial value problem [34]:
$${}^{c}\!D^{\gamma} y(x) + e^{y(x)} = 0, \qquad y(0) = 0, \; y'(0) = 0, \quad 1 < \gamma \leq 2, \; x \geq 0.$$
Using the method from [32], we simulate Equation (62) with the following nonlinear approximation:
$$y_n(x) \approx -\frac{x^{\gamma}}{\Gamma(\gamma+1)} + \frac{x^{2\gamma}}{\Gamma(3\gamma+1)} + \frac{2\,\Gamma(\gamma+1)^2 - \Gamma(2\gamma+1)^2}{2\,\Gamma(\gamma+1)\,\Gamma(3\gamma+1)}\,x^{3\gamma} + \cdots.$$
For $\gamma = 1$, Equation (63) simplifies to:
$$y_3(x) \approx -x + \frac{1}{2}x^2 - 2x^3.$$
Equation (64) has the following solutions:
$$\zeta_1 = 0.0, \quad \zeta_{2,3} = 0.1250000 \pm 0.695970545\,i.$$
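These three zeros can be verified directly: the cubic in Equation (64) factors as $-x\,\big(2x^2 - \tfrac{1}{2}x + 1\big)$, and the quadratic factor yields the conjugate pair $\tfrac{1}{8} \pm \tfrac{\sqrt{7.75}}{4}\,i$. A quick numerical check (illustrative, independent of the authors' Maple code):

```python
import numpy as np

# Equation (64): y3(x) = -2x^3 + (1/2)x^2 - x, highest degree first
c = [-2.0, 0.5, -1.0, 0.0]

roots = np.sort_complex(np.roots(c))
print(roots)   # exact zero at the origin plus one conjugate pair
```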
The Caputo-type derivative of Equation (64) is:
$${}^{c}\!D^{\gamma} y_3(x) = -2\,\frac{\Gamma(4)}{\Gamma(4-\gamma)}\,x^{3-\gamma} + \frac{1}{2}\,\frac{\Gamma(3)}{\Gamma(3-\gamma)}\,x^{2-\gamma} - \frac{\Gamma(2)}{\Gamma(2-\gamma)}\,x^{1-\gamma}.$$
For the dynamical analysis, we set $\gamma = \frac{1}{2}$ in Equation (63) and obtain the following nonlinear equation:
$$y_3(x) \approx x - 2\sqrt{\frac{x}{\pi}} - \left(\frac{1}{2\sqrt{\pi}} + 1\right)\frac{4\,x^{3/2}}{3\sqrt{\pi}}.$$
The exact root of Equation (66) is zero. The dynamical analysis of the simple root-finding method for this equation includes elapsed time, percentage of convergence, total points, and number of iterations for different $\beta^{[*]}$ values, as shown in Table 4.
Using initial guesses within $10^{-2}$ of the exact solutions, the convergence rate of the simultaneous methods improves, reaching the exact roots in fewer iterations. Table 4 clearly shows that when $\gamma \rightarrow 1$, the residual error for each root, as well as the convergence order and rate of convergence, are superior to those of existing methods, indicating better efficiency and stability. The numerical outcomes for random initial guesses are presented in Table 5 and Figure 6a–c.
In Table 5, $\sigma_i^{[n-1]}$ represents the local computational order of convergence. The table clearly demonstrates that when $\gamma \rightarrow 1$, the residual error for each root, as well as the convergence order and rate of convergence, are significantly improved compared to existing methods, highlighting the method's enhanced efficiency and stability.
The numerical results in Table 6 show that, across a range of $\gamma$ values, MM[∗] outperforms both ZM[∗] and PM[∗] in terms of efficiency, CPU time (in seconds), number of iterations, and residual error. As in Example 1, the convergence order is governed by the order of the fractional derivative, with faster convergence observed as $\gamma$ increases; MM[∗] again achieves a convergence order of 9.002313 when $\gamma = 0.99$.

5. Conclusions

We developed a new Caputo-type fractional scheme with convergence order $3\gamma + 1$ for finding simple solutions of Equation (1) and extended it into a fractional parallel numerical scheme that approximates all solutions of nonlinear fractional problems simultaneously. Our convergence analysis showed that the parallel scheme has convergence order $6\gamma + 3$. To improve the convergence rate of these fractional schemes, dynamical planes were used to select optimal initial guesses, ensuring convergence to the exact solutions. Several nonlinear problems were tested to assess the stability and consistency of MM[∗] in comparison with ZM[∗] and PM[∗]. The numerical results demonstrate that MM[∗] is more stable and consistent in terms of residual error, CPU time, and error graphs for different $\gamma$ values, outperforming both ZM[∗] and PM[∗]. Future research will focus on applying our fractional-order parallel technique to advanced control systems, biological processes, financial models, complex networks, and chaotic dynamics. Additionally, integrating fractional-order parallel schemes with machine learning [35,36] and enhancing their computational efficiency could further expand their applicability and impact.

Author Contributions

Conceptualization, M.S. and B.C.; methodology, M.S.; software, M.S.; validation, M.S.; formal analysis, B.C.; investigation, M.S.; resources, B.C.; writing—original draft preparation, M.S. and B.C.; writing—review and editing, B.C.; visualization, M.S. and B.C.; supervision, B.C.; project administration, B.C.; funding acquisition, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work is supported by the Free University of Bozen-Bolzano (IN200Z SmartPrint) and by Provincia Autonoma di Bolzano/Alto Adige—Ripartizione Innovazione, Ricerca, Università e Musei (CUP codex I53C22002100003 PREDICT). Bruno Carpentieri is a member of the Gruppo Nazionale per il Calcolo Scientifico (GNCS) of the Istituto Nazionale di Alta Matematica (INdAM), and this work was partially supported by INdAM-GNCS under Progetti di Ricerca 2024.

Data Availability Statement

Data are contained within the article.


Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Abbreviations

The following abbreviations are used in this manuscript:

MM[∗]   Newly developed scheme
n   Number of iterations
CPU time   Computational time in seconds
$e{-}(\cdot)$   $10^{-(\cdot)}$
$\rho_{\varsigma}^{[k-1]}$   Local computational order of convergence

References

  1. Shafiq, F.; Malik, M. Fractional Derivatives with Applications: A Review. Math. Sci. Appl. 2023, 2, 33–50. [Google Scholar]
  2. Mainardi, F. Fractional calculus: Theory and applications. Mathematics 2018, 6, 145. [Google Scholar] [CrossRef]
  3. Liu, P.; Zhang, Y.; Mohammed, K.J.; Lopes, A.M.; Saberi-Nik, H. The global dynamics of a new fractional-order chaotic system. Chaos Solitons Fractals 2023, 175, 114006. [Google Scholar] [CrossRef]
  4. Machado, J.T.; Kiryakova, V.; Mainardi, F. Recent history of fractional calculus. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 1140–1153. [Google Scholar] [CrossRef]
  5. Venkateshan, S.P.; Swaminathan, P. Computational Methods in Engineering; Academic Press: Cambridge, MA, USA, 2014; pp. 317–373. [Google Scholar]
  6. Hiptmair, R. Finite elements in computational electromagnetism. Acta Numer. 2002, 11, 237–339. [Google Scholar] [CrossRef]
  7. Lomax, H.; Pulliam, T.H.; Zingg, D.W.; Kowalewski, T.A. Fundamentals of computational fluid dynamics. Appl. Mech. Rev. 2002, 55, B61. [Google Scholar] [CrossRef]
  8. Cantwell, B.J. Organized motion in turbulent flow. Annu. Rev. Fluid Mech. 1981, 13, 457–515. [Google Scholar] [CrossRef]
  9. Sebah, P.; Gourdon, X. Introduction to the gamma function. Am. J. Sci. Res. 2002, 1, 2–18. [Google Scholar]
  10. Gao, G.H.; Sun, Z.Z.; Zhang, H.W. A new fractional numerical differentiation formula to approximate the Caputo fractional derivative and its applications. J. Comput. Phys. 2014, 259, 33–50. [Google Scholar] [CrossRef]
  11. El-Ajou, A.; Arqub, O.A.; Al-Smadi, M. A general form of the generalized Taylor’s formula with some applications. Appl. Math. Comput. 2015, 256, 851–859. [Google Scholar] [CrossRef]
  12. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef]
  13. Liu, C.S.; Chang, C.W. New memory-updating methods in two-step Newton’s variants for solving nonlinear equations with high efficiency index. Mathematics 2024, 12, 581. [Google Scholar] [CrossRef]
  14. Torres-Hernandez, A.; Brambila-Paz, F. Sets of fractional operators and numerical estimation of the order of convergence of a family of fractional fixed-point methods. Fractal Fract. 2021, 4, 240. [Google Scholar] [CrossRef]
  15. Akgül, A.; Cordero, A.; Torregrosa, J.R. A fractional Newton method with 2th-order of convergence and its stability. Appl. Math. Lett. 2019, 98, 344–351. [Google Scholar] [CrossRef]
  16. Cajori, F. Historical note on the Newton-Raphson method of approximation. Am. Math. Mon. 1911, 18, 29–32. [Google Scholar] [CrossRef]
  17. Kumar, P.; Agrawal, O.P. An approximate method for numerical solution of fractional differential equations. Signal Process. 2006, 86, 2602–2610. [Google Scholar] [CrossRef]
  18. Candelario, G.; Cordero, A.; Torregrosa, J.R. Multipoint fractional iterative methods with (2α + 1)th-order of convergence for solving nonlinear problems. Mathematics 2020, 8, 452. [Google Scholar] [CrossRef]
  19. Rafiq, N.; Akram, S.; Mir, N.A.; Shams, M. Study of dynamical behavior and stability of iterative methods for nonlinear equation with applications in engineering. Math. Probl. Eng. 2020, 2020, 3524324. [Google Scholar] [CrossRef]
  20. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra–Pták method with optimal fourth and eighth orders of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef]
  21. Sivalingam, S.M.; Govindaraj, V. Physics informed neural network based scheme and its error analysis for ψ-Caputo type fractional differential equations. Phys. Scr. 2024, 99, 096002. [Google Scholar] [CrossRef]
  22. Chun, C. Some variants of King’s fourth-order family of methods for nonlinear equations. Appl. Math. Comput. 2007, 190, 57–62. [Google Scholar] [CrossRef]
  23. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  24. Kyncheva, V.K.; Yotov, V.V.; Ivanov, S.I. Convergence of Newton, Halley and Chebyshev iterative methods as methods for simultaneous determination of multiple polynomial zeros. Appl. Numer. Math. 2017, 112, 146–154. [Google Scholar] [CrossRef]
  25. Marcheva, P.I.; Ivanov, S.I. Convergence analysis of a modified Weierstrass method for the simultaneous determination of polynomial zeros. Symmetry 2020, 12, 1408. [Google Scholar] [CrossRef]
  26. Sivalingam, S.M.; Kumar, P.; Govindaraj, V. A Chebyshev neural network-based numerical scheme to solve distributed-order fractional differential equations. Comp. Math. Appl. 2024, 164, 150–165. [Google Scholar] [CrossRef]
  27. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Stability and applicability of iterative methods with memory. J. Math. Chem. 2019, 57, 1282–1300. [Google Scholar] [CrossRef]
  28. Herceg, D.; Tričković, S.; Petković, M. On the fourth order methods of Weierstrass’ type. Nonlinear Anal. Theory, Methods Appl. 1997, 30, 83–88. [Google Scholar] [CrossRef]
  29. Ehrlich, L.W. A modified Newton method for polynomials. Commun. ACM 1967, 10, 107–108. [Google Scholar] [CrossRef]
  30. Petkovic, M.S.; Petkovic, L.D.; Dzunic, J. On an efficient method for the simultaneous approximation of polynomial multiple roots. Appl. Anal. Discrete Math. 2014, 1, 73–94. [Google Scholar] [CrossRef]
  31. Petkovic, M.S.; Petkovic, L.D.; Dzunic, J. On an efficient simultaneous method for finding polynomial zeros. Appl. Math. Lett. 2014, 28, 60–65. [Google Scholar] [CrossRef]
  32. Shams, M.; Carpentieri, B. On highly efficient fractional numerical method for solving nonlinear engineering models. Mathematics 2023, 11, 4914. [Google Scholar] [CrossRef]
  33. Jay, L.O. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429. [Google Scholar] [CrossRef]
  34. Duan, J.S.; Chaolu, T.; Rach, R.; Lu, L. The Adomian decomposition method with convergence acceleration techniques for nonlinear fractional differential equations. Comput. Math. Appl. 2013, 66, 728–736. [Google Scholar] [CrossRef]
  35. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  36. Monsi, M.; Hassan, N.; Rusli, S.F. The Point Zero Symmetric Single-Step Procedure for Simultaneous Estimation of Polynomial Zeros. J. Appl. Math. 2012, 2012, 709832. [Google Scholar] [CrossRef]
Figure 1. (a–c): Stability regions of the scheme MM[γ] for different values of γ.
Figure 2. (a–c): Dynamical planes for various parameter values of the rational map $O_{\varepsilon}$.
Figure 3. (a–h): Stable behavior for various dynamical planes of the rational map $O_{\varepsilon}$.
Figure 4. (a–c): Unstable dynamical behavior for various parameter values γ of the rational map $O_{\varepsilon}$.
Figure 5. (a–c): Dynamical planes of the iterative technique for different values of $\beta^{[*]}$, with fractional parameter $\gamma = 0.9$, for solving (61).
Figure 6. (a–c): Dynamical planes of the iterative technique for different values of $\beta^{[*]}$, with fractional parameter $\gamma = 0.9$, for solving (66).
Table 1. Dynamical outcomes of MM[γ] for solving (61).
| $\beta^{[*]}$ | Iterations | Total points | Converging points | Elapsed time (s) |
|---|---|---|---|---|
| 0.9 | 21 | 640,000 | 54% | 2.14584165 |
| 0.7 | 22 | 640,000 | 59% | 3.65483535 |
| 0.5 | 27 | 640,000 | 58% | 5.36528456 |
Table 2. Numerical results for Example 1.
| Err | PM[∗] | ZM[∗] | MM[∗] |
|---|---|---|---|
| $e_1^{[5]}$ | $9.31\times10^{-45}$ | $1.11\times10^{-99}$ | $1.1\times10^{-104}$ |
| $e_2^{[5]}$ | $2.09\times10^{-55}$ | $3.02\times10^{-76}$ | 0.0 |
| $e_3^{[5]}$ | $0.17\times10^{-76}$ | $0.12\times10^{-85}$ | 0.0 |
| $e_4^{[5]}$ | $0.07\times10^{-39}$ | $6.27\times10^{-101}$ | $4.9\times10^{-165}$ |
| $e_5^{[5]}$ | $1.07\times10^{-39}$ | $6.27\times10^{-101}$ | $4.9\times10^{-165}$ |
| $\sigma_i^{[4]}$ | 8.127654 | 9.001923 | 9.125432 |
Table 3. Outcomes of the fractional scheme MM[∗] for different γ values for solving engineering application 1.
| $\gamma$ | Iterations | $\Vert x_i^{[l+1]} - x_i^{[l]} \Vert$ | $\sigma_i^{[n-1]}$ | CPU time (s) |
|---|---|---|---|---|
| 0.1 | 36 | $0.2\times10^{-5}$ | 3.2132 | 0.097 |
| 0.2 | 24 | $1.4\times10^{-6}$ | 4.2654 | 0.096 |
| 0.3 | 22 | $0.7\times10^{-6}$ | 4.8236 | 0.076 |
| 0.4 | 12 | $0.5\times10^{-4}$ | 5.4455 | 0.066 |
| 0.5 | 10 | $5.4\times10^{-6}$ | 6.0432 | 0.035 |
| 0.6 | 08 | $1.3\times10^{-7}$ | 6.6653 | 0.038 |
| 0.7 | 08 | $0.6\times10^{-9}$ | 7.2124 | 0.015 |
| 0.8 | 07 | $0.3\times10^{-11}$ | 7.5434 | 0.004 |
| 0.9 | 06 | $2.6\times10^{-16}$ | 8.1325 | 0.001 |
Table 4. Dynamical outcomes of MM[γ] for solving (66).
| $\beta^{[*]}$ | Iterations | Total points | Converging points | Elapsed time (s) |
|---|---|---|---|---|
| 0.9 | 21 | 640,000 | 54% | 1.33464564 |
| 0.7 | 18 | 640,000 | 59% | 2.67454366 |
| 0.5 | 20 | 640,000 | 58% | 5.233554675 |
Table 5. Numerical results for Example 2.
| Err | PM[∗] | ZM[∗] | MM[∗] |
|---|---|---|---|
| $e_1^{[3]}$ | $0.2\times10^{-53}$ | $1.1\times10^{-51}$ | 0.0 |
| $e_2^{[3]}$ | $1.4\times10^{-44}$ | 0.0 | 0.0 |
| $e_3^{[3]}$ | $0.6\times10^{-45}$ | $5.0\times10^{-57}$ | $5.0\times10^{-65}$ |
| $e_4^{[3]}$ | $0.6\times10^{-45}$ | $5.0\times10^{-57}$ | $5.0\times10^{-65}$ |
| $e_5^{[3]}$ | $0.6\times10^{-45}$ | $5.0\times10^{-57}$ | $5.0\times10^{-65}$ |
| $\sigma_i^{[2]}$ | 7.993434 | 8.991923 | 9.001232 |
Table 6. Outcomes of the fractional scheme MM[∗] for different γ values for solving engineering application 2.
| $\gamma$ | n | $\Vert x_i^{[l+1]} - x_i^{[l]} \Vert$ | $\sigma_i^{[n-1]}$ | CPU time (s) |
|---|---|---|---|---|
| 0.1 | 05 | $0.3\times10^{-4}$ | 3.0976 | 0.097 |
| 0.2 | 05 | $1.5\times10^{-8}$ | 4.2004 | 0.086 |
| 0.3 | 06 | $0.7\times10^{-8}$ | 4.8546 | 0.088 |
| 0.4 | 06 | $7.4\times10^{-9}$ | 5.4132 | 0.076 |
| 0.5 | 05 | $0.5\times10^{-11}$ | 5.9962 | 0.063 |
| 0.6 | 06 | $0.4\times10^{-15}$ | 6.2353 | 0.058 |
| 0.7 | 06 | $0.1\times10^{-19}$ | 6.9864 | 0.045 |
| 0.8 | 05 | $0.4\times10^{-20}$ | 7.1243 | 0.038 |
| 0.9 | 05 | $3.7\times10^{-27}$ | 7.9876 | 0.001 |